playlist | file_name | content |
---|---|---|
Robotics_1_Prof_De_Luca | Robotics_1_Prof_De_Luca_Lecture_21_14_Nov_2014.txt | okay, so we can now start looking from a larger distance at how a robotic system works. we have detailed at least kinematics and inverse kinematics; we will leave the differential analysis to the next lecture, but now we want to see how robots are programmed and, on top of that, what the general structure of a control architecture (or supervision architecture, as you prefer to call it) is: one that considers tasks, components, sensors, actuators, their interaction, and reasoning about sensing information in order to perform the tasks in the best way when the task is not as trivial as "go from a to b and that's it". so the next slides will be organized in two blocks, one on robot programming, where i will stay at a very generic level (we will not program robots here; you have courses devoted to that, at least within the master in artificial intelligence and robotics), and a second part, which we will most likely see on monday, devoted to supervision and control architectures. having said this, what should a robot program do, and what is specific about robot programming? first of all, it needs to work together with a real-time operating system, because, let's say, 95 percent of the action should be online. you can also do offline programming, but that is a different story: for simulation purposes you do a lot of offline programming, but here the execution and the real-time constraints are very important, not least because if you are doing sensor-based motion control then you have to read sensors in real time and process that information in real time. we have seen the multiplicity of sensors that can be used in a robotic application, both proprioceptive and exteroceptive, so this should be considered at the level of programming. then there is motion control execution: you command some motion, and we will see how to do this at the kinematic level, and also at the dynamic level with torque commands, but that is a story we will analyze in the second course, robotics 2; in any case we should verify that these commands are executed. in parallel, the robot performs some task in its environment, so it changes the state of the world: if we also want to model what happens to the world, we have to keep track of the changes due to the commands we give to the robot, both in nominal cases and in perturbed situations. for example, a robot should pick up an object and bring it to a different place, but it loses it on the way; a very unfortunate situation, but if i have sensors i can discover this, so i should recover from the situation, possibly go and pick up the object again, and then resume the original program. this should be modeled in terms of the state of the world, and the check of how the world evolves while the robot is performing its actions is done with sensors, of course. another very important aspect, more and more important even in industry i should say, is the interaction, both physical and cognitive, of a human with robots: they share the same workspace and should coexist in the same environment without giving up safety, which is mandatory. how this is integrated into a programming and control architecture is also very important, as well as the fact that you should monitor your sensors and your actuators and detect if there are faults; especially in the presence of a human, this is a requisite for guaranteeing safety in all conditions.
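As a minimal sketch of the real-time, sensor-based loop described above (generic Python pseudostructure, assuming a 12 ms cycle like the RSI interface mentioned later in the lecture; read_sensor, compute_command and send_joint_command are hypothetical placeholders, not the API of any real controller):

```python
import time

CYCLE = 0.012  # assumed 12 ms interpolation cycle, as for the RSI interface discussed later


def read_sensor():
    # hypothetical placeholder: e.g., joint encoders or an external force sensor
    return 0.0


def compute_command(measurement):
    # hypothetical placeholder: a kinematic control law producing a joint velocity
    return -0.5 * measurement


def send_joint_command(command):
    # hypothetical placeholder: write the command to the robot controller
    pass


def control_loop():
    next_tick = time.monotonic()
    while True:
        measurement = read_sensor()
        send_joint_command(compute_command(measurement))
        next_tick += CYCLE
        delay = next_tick - time.monotonic()
        if delay < 0:
            # deadline missed: on a real controller this would trigger a safety stop
            raise RuntimeError("real-time deadline violated")
        time.sleep(delay)
```

The only point of the sketch is that sensing, processing and commanding must all fit inside a fixed cycle time, otherwise the controller goes into a safety stop, exactly as described for the KUKA interfaces later in the lecture.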
and then you should be able to recover from error if possible for instance you may have a robot with many joints and having a failure of one actuator so you can block that joint still you have some functionality left if you have eight degrees of freedom one is gone but with the seven degrees of freedom you may still do the task provided that you reconfigure all your modules algorithms and so on for instance you have a jacobian which is no longer six by eight but it's six by seven but then you can do the same task i think things like that and then of course we're talking about programming so there are languages so what are the languages that characterize uh robotics they are specific mainly in terms of data structure and of the set of instruction that are present otherwise they are languages just like c plus plus or whatever language you're used to okay there's a warning here which will be clear later on namely that the environment in which you program a robot depends also on the level in which you can interact with the various functionality of the robot itself and when i talk about functional architecture i mean what kind of task or device or robot level you want to enter and command this will be clearer when we in the second part when we will talk about uh supervision architecture and how they can be decomposed in a modular way and how they should interact in a rather generic architecture okay so let's talk about historically there was a first generation of programming languages for robots and this goes under the head heading programming by teaching what do i mean by that well essentially uh you you could have a teach box so a joystick or something similar to that or a keyboard and move uh in a manual way by pressing buttons and or moving the joystick the robots to bring it into a specific configuration then you could learn this configuration save this configuration better set and then move it to another configuration so in fact you're teaching the robot what to do by moving it in a very primitive way and then once you have concluded this you may insert this piece of elementary motion into a cycle and then you can add to this cycle some option do this until some switch turns on or turns off and things like that this was the first generation programming style and those functionality are still there in more recent programs languages so it's something that you may use in some cases but of course it's not the only possibility so uh sorry okay so this was uh expressed in this way so programming by directly executing the task at least in some say clean situation but maybe before the object were coming into the industrial process so the operator guides manually or via teachbox the robot along some desired path or a sequence of nodes when we are satisfied with the configuration that the robot has reached we sample and store this configuration and then may specify later how we would like to go from one configuration to the next one that we have taught to the robot so this is a kind of a code skeleton expressed then in some instruction and this instruction have some parameters and you can modify the parameter of this program so there were no really special need of programming because i the operator which was not an engineer maybe could do according to his experience the movement of the robot and save the execution in this way and then later on one may decide if this cycle should be done at slow speed or a high speed or intermediate speeds there were also this option in a very generic way not 
putting numbers but just putting percentages percentages with respect to the maximum nominal speed of the robot so that an operator should not really have very sophisticated programming skills this way in this way you could program various industrial tasks like spray painting or arc or spot welding or even palletizing you remember those tasks and when we illustrated the various applications especially in the rob in the industrial domain of robotics and these are historical i would say robot programming language the t3 by milagron or the funky by ibm who were built upon this skeleton so they provide some data structure and instruction that accepts this skeleton learn by teaching and then could modify this elementary instructions second generation this becomes more closer to what we are doing now so this is a language struck a structured language so with the option of flow of the program typically interpreted not compiled and interactive in the sense that the user can modify it while the program is running okay eventually there are the typical branching or while instruction and if you buy a robot all robot comes with their own programming language and this is kind of disappointing because if you had developed some program on a six degrees of freedom robots you may think that if i changed the navitar timber parameter and i changed the library of the kinematic function i could repeat this exactly the same which is not true because you have to change the coding although all these robot programs one for each manufacturer i would say not one for each robot so kuka has its own language komao has its own language unimation used to have its own language when it was still alive and so on so if you don't change the brands you can keep the same type of programming so this was the most common situation there was some effort many years ago i should say in order to try to standard find a standard for this and the most that the most advanced result that was found was not standardizing the language but standardizing some modules which interface to the specific robot so you can develop a program in your own language and then you apply like a driver for a for a printer the driver for the printer takes some generic instruction and specialized to the specific device these things were done uh in the 90s in robotics well the preferred way instead is to use some any common language python java c plus plus whatever to develop libraries possibly open source libraries so that people can then combine not do the effort of reprogramming everything from scratch but combine things and then add their own specific needs and then converts the final results into let's say the specific uh commands of a of a robot okay or even go without the programming language of the manufacturer especially in research application you can directly do your own programming up to the end if you're using an industrial robot you should always leave with some low level instruction that are run in parallel to what you're asking to the robot like checking that the motor are correctly operating and in general safety checking and coordination of uh reading and riding on input and output ports so this is let's say the uh i i should say the low-level interface that you cannot get rid of if you want to get rid of it you should reproduce the same in your own code which takes a lot of time and you may not be qualified for using your software in some real application just for research so this is why when you're developing project which have some or may have 
some industrial impact you tend to develop your new results your new algorithms but then be able to interface with an existing controller with its own programming environment because this is certified to work in practice okay so there are a lot of uh critical aspect to be considered in this case otherwise of is clear that using us the most the most common languages would help so here uh the access here is at the action level so with this type of programming you really can drive the single motors okay uh of the of the robot uh moreover remember that the robot is not a singleton in in an application especially in an industrial application you have seen cells where multiple robots should cooperate and should exchange information through common buses and so this type of programming languages for robots have also instruction and way to communicate with the external world which includes other robots or other devices machine tools uh conveyor belts and and so on and here there are a huge number of possibility here i just mentioned a few for instance aml when ibm was still producing robots or val 2 which was the latest version including capability of handling vision or force sensor by unimation or pdl2 by comau this is the italian leader in robots or the qrl language by kuka and i put this in red because we have cucas downstairs and i've used just for illustration the set of data structure and command for the of the krl language okay so how do you interface with kuka you will see this in the in the lab visit this is the teach pendant the teach pendant is kind of a notepad so with a screen a lot of buttons a nice space mouse so it's a mouse which has six degree of freedom possibility because this drives six joints okay so if you move properly this space mouse you can move for instance one joint at a time or one coordinate at the time of in the cartesian space okay so at this level it's the teach pendant so the user is teaching the robot what to do and then you can switch in speed or move joint by joint incrementally by minimum increments by this type of buttons okay six and so on actually this is a different story uh when we receive the seven degrees of freedom lightweight robot by kuka which is still a research product now has become an industrial product they had the same interface although there were seven joints we have an interface with six pairs of button so to access to one of this joint we should alt switch or whatever to have this access so not very friendly from this point of view so this is the teach pendant then there is uh the programming through the krl language and the two things are integrated so you can instruct some motion through the teach pendant by moving the robot or write some code or merge some skeleton code generated in this teaching mode with some other part which you programmed offline and do things like that the other things that is important is they have a robot sensor interface rsi xml this is the coding which works in internet and this is a way for interfacing the robot with external devices so other robots or sensors typically vision or force sensing or connect in different way and there's another interface that kuka developed which is called fri fast research interface which is the same thing but applied only to the kuka lightweight and now also to the industrial version which is called iva uh as a robot assistant in industry this fast research interface allows much better capabilities of entering into the low level controller control layer of the robot and it does 
at a much higher frequency we will see that while there are cycle of say 12 millisecond from one reading to the next one using the rsi interface with the fast research interface we can speed up up to one millisecond okay so one kilohertz and which change a lot of the story what you can do in between and how accurate you can be with your external codes and sensing so this is uh the basic set of instructions so if you start i don't know if you can read it but there are of course declaration of variable or some structure and between the structure definition you will have things like frames defining frames is basics for describing tasks we have seen why and also for describing some kinematics which is coded inside the programming environment for this robot but of course you may wish to code some other task kinematics for instance you may be interested in some particular point in the chain not just the end effector so you may need to have access to the frame that describes the relative position of the link and stop at certain point just like we have done with the chain of homogeneous transformation in the dynamic ertenberg convention okay well this is very important all programming languages for robot have some move instruction because motion is principle of robotics we have seen it's not just a computer there is some motion in the real world here you see uh six type of motion commands each comp with this is describing a in a manual reference manual so they refer to some page of the manual i've just taken out from there so these are three absolute and three relative instructions you see that circ or cirque relative lin or relative and ptp or ptp relative ptp stands for point to point and it's a motion which is defined in the joint space while lean and circ are motion which are defined in the cartesian space along a linear path or along a circular path see we can combine this and make much more sophisticated motion or program ourselves some strange motion which is not a line and not a circle but if we want to use their controller our program should then decompose no matter how complex is the part that we have defined into this elementary path so that we can call their own laws okay unless we define some low-level interface where we define directly the results of this planning at the joint level and then the controller takes just new position which are just sampled out of a very complex path it's like solving inverse kinematics points after point okay we will see how this can be done and what are the options when we will talk about uh trajectory planning in the cartesian space or in the joint space there we will see all option of or a few option the most important option for programming motion okay and then here there are some uh instructions for controlling the execution of a problem for hull if then loop repeat until cases which goes like this i forgot to say this comes with some parameters they are called with some parameters now linear motion it's a linear motion in the cartesian space of course from one point to another point so that has some argument when you call this instruction similarly uh when you check and you switch your case you may go and read some input ports or do something while riding on some output ports so interfacing the robot with something outside in fact here you have input and outputs you have typically analog input and algorithm output or digital input or the possibility of generating pulses or receiving pulses for instance pulses from encoders okay and then you can 
break the system this is an instruction which is set on the brakes brakes on break off or work under interrupt or so stopping some part of the program and executing something else until something happens or trigger some of this instruction based on some information which is not pre specified in advance for instance trigger when distance is lower than say 2 centimeter and this trigger is based on a sensor input some sensor that provides the distance and when this value goes below then you switch to a different part of the program okay so you see that with respect to a standard programming language there's not nothing really new except the fact that you are conditioned by events which are acquired from the real world through sensors okay this is very important even in this second generation okay and then basic data set as i said there's a declaration there but you have frame vectors you have a kind of mathematical library inside so that you can do vector product you can do inverse of matrices at least of four by four matrices or whatever okay so typical motion primitives this is very important so this is why i'm i'm focusing here so taking from the manual so this is the point-to-point motion pay attention it's represented into in the cartesian space because if i show you some joint motion i can just show a plot of how the joint will vary but you don't see exactly what happened now you should see a movie or she the robot downstairs but if i take a picture like that so this point-to-point motion is a linear motion in the joint space so if i want to represent this here i would say okay i'm not doing it for that kuka robot what i'm doing for my uh good friend the two are i have some q1 and q2 and okay they should start from the same value and then i'm saying that i want to go from this configuration characterized by these two points to our another configuration which is this one so and this at the same time so this will go like this and this will go like this okay so this is a linear motion in the joint space and this is programmed with the ptp instruction now what happens to the end effector when i'm moving linearly the joints this does not mean by no means that the end effector moves along a linear path just because there is all the trigonometry involved so and this is what you see here so point p1 is the direct kinematic mapping of the first configuration point p2 is the direct kinematic mapping of the second configuration i'm moving linearly in the configuration space but here is what the tool center point is doing just representation just to show that the two spaces are completely different okay indeed it will start from p1 and go to p2 but there will be no linear motion in this case to perform linear motion you have to specify a cartesian linear motion okay so you specify for going from maybe the same point p1 maybe this point p1 is a point where you brought the robot with the teach box so he knows that i mean he's in p1 but he's also also store the configuration okay now when i store the configuration in q1 and q2 then it's very simple to command the linear path from q1 to q2 or from sorry from the initial to the final configuration here there's a two joints so there's some confusion uh while it's very simple and visually very clear what i'm doing if i'm specifying a linear path so with the linear motion in the cartesian space but remember i cannot act quiet directly the cartesian motion because the actuators are in the joint space so in somehow i should convert this path and you will see 
also the linear the timing law that i use on this path because i can move at constant speed or start with zero velocity accelerate then travel at some medium speed and then decelerate and stop and so on and so on now all this motion should be converted into some commands at the joint level because there is where the actuation takes place this is why it's so important direct and inverse kinematic and differential and inverse differential kinematic at the velocity and acceleration level finally if you give three points in the cartesian space and if these three points are not aligned then you can define a unique circle going through the three points so since this is very common in the application there's a special circular motion command which is cartesian takes a start an auxiliary point and an end point and generates a motion in the cartesian space okay what about the orientation of the end effect now if i give you a path in a 3d space this is a part of a point it's not nothing is said about the orientation i could go linearly like this going in a linear or i can go linearly by changing my orientation so some something else should be specified in terms of orientation and there are various options the most common are keep the constant orientation so you start with the given orientation keep this orientation all the way through okay now okay let's let's comment the other option then go back to this situation or you can use a set of angles to define the change of orientation so you define a roll pitch yaw angle for the initial orientation roll pitch yo angle for the final orientation and then you interpolate like here in a linear fashion between the initial value and the final value of these angles okay so you change orientation it's not really evident what you're doing in between in fact we will see that for the orientation there's a much better way of doing this by using the results of the axis angle representation okay we will talk about that when programming so this should be combined with this type of motion because we are working on the end effector level when we are commanding uh motion in the joint space we take for granted what happened in the cartesian space so we have no control no special control on the path nor on the orientation that the and the factor is taking but we are very fast because we are directly commanding uh motion in the joint space we don't have to do this kinematic inversion okay which is instead the case of uh when commanding when programming motion at the end effector level now you can do whatever you wish with this program in particular you can write a code now after you've brought the robot here in p1 and in p2 you can write a code that says go in linear on along a linear cartesian path from p1 to p2 but what if the robot while moving while executing this command crosses the singularities what happens then okay so even if you can program this you have always to check the actual result this is why offline programming and simulation is very important because you can discover some problem in advance otherwise the robot will start executing the task and all of a sudden have increasing speeds at the joints until it saturates and some protection goes on and the robot stops okay and this is something that you don't want to have when the robot is online executing some tasks okay so remember that all the analysis is not thrown out of the window even if we can program that way in fact the kuka controller receiving such type of commands will do exactly what we learn in 
this room about inversion and inversion at the kinema at the differential level checking things and trying to uh stop in advance the motion or modify the motion so that the robot can go through singularity in a robust way okay and we should know how this works because if we want to do it for our robots we have to reproduce this type of behavior so this is why we learn all the mathematics that is behind otherwise we would program and stop as i said data structure again no no which is better a bit better as i said there are frames frames are everywhere and in in the krl language they are preceded by a dollar so this is the word frame base frame a tool frame a robot frame which may coincide with frame zero of the dynavit artimer convention and so on and so on and then you can do computation once you have defined this frame you can do computation which are either constant homogeneous transformation or homogeneous transformation that depend on some variable for instance the position of the tool with respect to the position of the base depends on the direct kinematics like we have learned okay so this is already embedded and then of course you can move independently of how you define things in the cartesian space and through frames you can move independently each joint and you should know what is the positive sense of rotation and this is why you have pictures like that in all manuals of programming languages so let's say a few more words about interfaces because we want to do something that goes with this robot that goes beyond what the manufacturer also intended to do for industrial application so this is why kuka developed i mean actually actually developed this uh robot sensor interface because in some industrial application use of vision use of force sensing is uh may change radically performance or speed up cycle times so this is why they provided to the generic user this sensor interface and we use this channel to implement our codes and talk to the robot unfortunately with a cycle of only 12 milliseconds not not faster than that and so through this interface you can acquire information from the system for instance position data of the joints from the encoders or currents that circulate in the motors and other signals like this or sense the robot controller data which are output commands coming from your own program that commands for instance the joint velocity of the robot or just sensor data which are used within the programming language that we have seen before so this second part so not the in just the interface to sensors is very relevant for us why because we are using sensor but we interface sensor to our own let's say pc do some elaboration and then finally send commands of motion or reference position to the robot controller to the embedded robot controller so we are using mostly this interface for influencing the robots from one interpolation cycle to the next with our own code which process external sense while kuka itself intended this interface for directly interfacing sensors to their own programming code okay so this is the main difference that we have here so we can intervene to the path planning of the robots so changing their own path planning we can use the signals that we receive for doing some diagnosis for instance we have used this measurement of the current in the motor to discover if there is an extra collision with the environment or with some human okay so this is something that we have added to it uh and then uh there are more technical aspects so that we can 
use this protocol for communicating uh with uh other objects that are seen inside the programming environment as a programming object that exchange data in input and output we won't go into the detail of this so this is just for your reference if in the future you will happen to work directly on the kuka kr5 and of course there's a monitoring of time out because if you allow to enter some command data from your own program or to acquire some sensor information from your external sensor then you have to set this limit of 12 milliseconds if you take longer than that some strange effect may happen because the controller of the kuka may read whatever it is on that port at that instant and this may be a signal without any significance or may be stuck waiting forever for this signal that it's not coming so the data exchange should be monitored and there's an extra process doing this so that synchronization is kept and you're not allowed to go beyond the 12 millisecond otherwise the robot goes into safety and stop itself okay so if you're doing too much computation because you want to do many nice things with many sensors and you do not respect this timing in real time then the robot stops anyway this is the monitoring of data exchange so here i show you a couple of examples of use of this sensor interface this is classical in a sense so here you have a robot you have the cabinets you have some tasks where the end effector should enter in contact and apply some forces to this structure okay so there's a tool and there's a force sensor the force sensor may be a natty device so some external thing it's not produced by kuka is something that needs to be interfaced from the outside it's not something that you can embed so this is mounted on the wrist at the wrist or close to the tool and typically you may wish to have some motion some motion in this direction b which is activated only when you have reached some pressure in some direction so only when the force x reaches some value so how do you do this you have to use the sensor interface to acquire force measurement from your force sensor and we have seen that the force sensor provides measurement in its own frame okay so if you don't convert this externally but just throw in this the controller should know the where this sensor is in fact it's in some there is some frame which moves together with the direct kinematics then there may be some final offset added so it converts this into its base coordinate so into the x direction reads then the force in the x direction and if this is larger than some value he knows that he's in contact and applies the right force and then he could start moving so in terms of commands it may be some linear motion commanded in order that the system approaches and then a command which is relative so you provide linear motion along the direction of the velocity v relative to the fact that the force sensing in the direction x is providing some input of let's say more than 10 newton okay and then you start moving this is a classical way that you can do and you render more autonomous your robot because you don't have to place things exactly in that way there may be some uncertainty and you compensate for the uncertainty by adding an external sensor but then this external sensor needs to be read in real time and intervenes in the planning of motion so in the programming aspect of the robots here well actually uh here we have used uh without external sensing the robot so uh some colleague of yours in uh in a project of 
robotics too they have commanded the robot's velocity by mimicking the presence of a virtual obstacle in the environment so the robot is programmed to do a circular path in the cartesian space but then it hits this virtual surface and needs to modify its motion so that it doesn't penetrate this virtual surface until is able to recover with the original motion this is why they choose some cyclic part circular or or or square path here they have a view in matlab here they have a view in web bots so there are programming environment where you can show also some graphical here a skinny version here a more realistic version and here is the actual motion of the robot as seen from a camera that overlooks things so see without joint range limits or including virtual limits this is uh by the way here we are commanding only a motion for the position of the end effector so a three-dimensional task we don't care about orientation since the robot has six joints we will work here with the jacobian which is three by six so we are in a case if we want to solve an inverse kinematic problem we are exactly in a case where we cannot use for instance the neutral method as such so inverting a non-square jacobian we should use the pseudo inverse or the jacobian transpose if we use the gradient method so this is how it works by the way they think they built up things superposing the robot without the limits and then with the limits so you see here on top you will see two robots splitting one is the robot without the virtual limit so performing the full circle and the other is the actual robot modifying its motion until so from here you can understand that the circle was placed like this and it's cut on top for a while okay so just i mean what what's the sense of this just to understand how to this is was just starting uh understanding how to let the robot sensor interface talk with a program developed in matlab or then converted in c plus plus whatever it was again here you see the two robots differing so but i think that you get the point here to this interface this is uh uh maybe more interesting this is a master thesis of emmanuel magrini some years ago now is doing the phd with us dealing with human robot interaction and some additional stuff so he was giving commands to the both vocal commands and gesture commands to the robot these were recognized by a kinect which was tracing his hands and the vocabulary was very simple like listen to me so that kuka stops doing what they're doing and waits for some instruction or follow or give me give me means come to the place where i put my right hand or left hand or follow my right hand or left hand or follow my closest end my nearest end to you so this may switch from one situation to the other and then stop the collaboration and whatever so here again we have a uh kinect overlooking the monitoring the the environment so recognizing the human computing distances and then moving the robot through the sensor interface so every commands every 12 milliseconds in this 12 milliseconds the kinect is processed and the commands are computed uh for the robot itself so you don't you maybe don't don't hear the voice but you see here kuka listen to me you see the commands written to that so give me the end effector goes to the place where he recognized the until he doesn't give a new command the robot so this is follow nearest hand so it just first goes to the left hand and you see now all mo all joints moving the six joint moving you see also joint 4 rotating now this is 
interesting to see now he put the other hand it becomes closer and then robots change situation by the way again it's only position control for the undefector so it has a three degrees of redundancy and this motion in the joint space is performed in such a way that this tool this tool is always best visible from the kinect okay so we made this type of modification this is very dangerous by the way this tool is very dangerous but this is another story okay and then and then the interaction ends okay now uh what about the other interface the fast research interface that now we are using more intensively so this comes with the lightweight robot the lightweight robot was built was bought uh in september 2012 with research found from the safari project that we are coordinating and this has a different way of working so here outside you develop your custom code and we developed codes for doing kinematic control trajectory planning resolution of redundancy dynamic modeling torque dynamic computation uh physical human robot interaction with contact with the human a lot of things that we do for research okay so we developed our program there we have libraries we have ros nodes developed in that in that area and it interfaced with their uh motion control kernel of the robot so this this part at the rate which can go down to one millisecond two milliseconds ten milliseconds depends i mean this can be changed let's say that two milliseconds so uh half a kilohertz is very reliable one millisecond sometimes it goes some protection on okay of course you can still use kl commands there or just go to the motion canal and execute velocity command at the joint level or torque command at the joint level torque commands are there if you have a dynamic model of the raw but we have done a dynamic identification of the model all subjects that will be dealt with in robotics too okay so uh you see that for instance you have uh the way of acquiring current measurement or commands and receiving fri commands from the outside so this is the way in which they communicate and this video which is new by the way with respect to the previous this is september 2014 just show i mean don't care about what it does just illustrate the fact that we are able to command the lightweight robot for instance with velocity commanded the joints in particular in this video we were interested in generating commands at the velocity level so q dots as if they were coming from an acceleration or a torque command which is one differential level higher and implementing this in discrete time so that if we produce exactly the same behavior of the second order control law but it's much simpler to implement provided that you do things properly and still has the same accuracy okay so this video shows uh a task first a task where we're commanding velocity should go through four points so start from here go here go here and then stop here on top and this is a velocity command and you see that at some point the robot stops because with this velocity command some limit torque of to the motor were exceeded okay so we decided to do this scheme at the acceleration level where commands are more smooth and then we can compensate for this internal motion so that the whole task is executed but the acceleration control is very complex so the idea behind this video is let's have this discrete time implementation at the velocity level that reproduces the same behavior of the acceleration level command okay which is much simpler actually okay and now at the end we 
compared the two the execution so now the orange robot becomes blue if we have the acceleration level control o or green you don't see this when you're doing things at the velocity level in this way and you see here there are some uh differences which are uh very negligible the two motions are exactly the same so apart from what is this result you may be interested or not but this shows that you really have low level access you can command acceleration you can command velocity you can compare the things and they are very reliable other uses of the fri well these are taken from the literature there are people that do coordination of dual arms so they have one robot and another robot either remotely or being the left and right arm of the same mounted on the same torso of a humanoid-like manipulator and the communication between the two controllers these are two separate uh manipulator are done through the fast research interface and they both communicate to a unit that coordinates the motion for instance let one be the master and the other be the slave or do simultaneous control so that they can apply forces to a common held object or things like that very similar here this is very nice i think this is developed at the university of siena the fact that you can instead of using for instance the the space mouse of the teach box of the original kuka controller uh they use an haptic device which is a structure a very light structure which acts like a joystick and has six degrees of freedom you see it's a parallel structure if you can recognize you put your fingers here on this side and you can move this very light things in x y z direction and then also change orientation so you're moving with your hand like this and you really feel what you're doing and this becomes command for the end effector of the robot through the fast research interface and if you had a force sensor mounted there you can feel back some touch on your device and if you don't have a sensor there but you have a virtual reality environment that you set up and when the robot in the virtual reality the real robot is moving it's moving also in a virtual reality in the virtual reality is hitting something and you feel this on your command so you have this haptic feedback all realized and for haptic feedback every time you need for sensing you should be very fast while for controlling motion you can also give up some speed in terms of communication information on force should be very fast otherwise you go immediately unstable which means that you try to for instance holding a force in contact and the robot starts bouncing and vibrating in contact because of the delays in the communication okay so it's very critical to have fast research fast interface and command when you have contact and exchange of forces let me conclude with this this is just the list of open source software that you may use and you probably have already used some in other courses that are relevant for robotics and they are relevant both for real-time control but also for simulation for interfacing with robots but also in general with other devices and sensors most of these are research oriented it's an open community and typically this is an academic community but the quality i would say that of this object is really excellent and they are relatively easy to use one project was the player stage so this is a robot server for network robotics it works under linux or mac os system and it can be interfaced to a large variety of hardware and the stage version 
provides also a nice two-dimensional simulation environment this has been expanded into a three-dimensional simulator but this become then an independent project which is goes under the name of gazebo and in gazebo you can do really not only nice rendering but also realistic physical simulation with a internal engine which is called ode which means ordinary differential equation which means an integration of equation so you define an object you define the mass of this object and some inertial properties and then the system automatically integrates the associated physical equation that describe the motion of this object when subject to forces and tors okay so it's a very nice things or you can use vrep as i said the rep is not free in its full version it's produced by copelia robotics i think it's a swiss-based company but there's an educational version that is free and it's enough for our needs or your needs and in this case every body or object that you can define in cad has embedded its own controller so the description of how it behaves how it moves and the nice thing is that this is a user code that you can embed in any language so in c plus plus plus in java or even in matlab and so on so this is a very flexible and the graphic is good enough then some of you may have used this in the courses with professor nardi so there's open rdk did you encounter this already open rdk ok in any case this is oriented for for instance in the robocup we have a team headed by professor nardi and professor yoki in our department and there you have the single robot acting as agents and they have to exchange and communicate in a highly dynamic environment because everything changes including the opposite uh team and so they use a kind of blackboard type communication where each agent puts its own data and can read the data that other agents have put there with a kind of flexible and modular structure the same thing happens with ross how many of you have used ross or have heard about ross in which course in robot programming against same thing okay so this again is uh what is called middleware so this it's uh an architecture where you can put uh elementary parts which are called nodes so with the standard way of communicating with message passing and uh again they work it's similar to this environment in fact i don't know if probably nardi moved to ross because now it's becoming the real standard not only at the research level but also some industrial users are starting using this there's a wiki site where you can find whatever you need and it's open source again as all this list again the blackboard type is a mechanism of communication very similar with this publish and subscribe option and ros itself can generate you the structure of communication once you code uh the nodes in the proper way so this is very very of course remember it's a middleware because inside you have to put the meat this is a way of organizing things it's not the core the core is inside the single nodes so the real program python or c plus plus that you do is inside one node or multiple nodes ros enables to let a larger architecture to work in a synchronized and coherent way well this is another less information on that just to have a list so let me conclude today with this last slide so ending the programming part of course there is a third generation of languages this is not real existing on the market but it's much used in research what is this well it's from one side if you compare with the development of conventional 
programming from structural programming you go to object oriented so you there's a level of abstraction it's a called task oriented because it's not really linked to a specific robot it's more linked to the reasoning about the how the word changes because of the action of the robot okay so it's more let's say ai oriented because you reason about things and you have to keep track of things uh and the nice thing is that it's even it's getting more and more cognitive so it's reasoning not on a limited domain but the domain which is open because it's dynamically changing and it's driven by the sensor information uh remember that the robocap it's nice it's it's fun to play to program to see it sometimes but it's representative of a situation where things change in an unpredictable way although within some rules so it's very close to real life independent of the fact that robots are playing soccer or whatever okay the scientific value of robocap is exactly the possibility or the need to develop things that understand and elaborate and are efficient in dynamically changing environment which are censored by the robots okay so this is important in general so and the excess in this case when we were next topic will be the hierarchical organization of the supervision controller and there we will have the robot level the action level and on top the task level which is rather independent of the robot i would say because it's only related with what you should do and how what are the action that you should perform in order to reach some goal uh independent of what robot you're using for doing this okay so i think i'll stop here and we will see on monday have a nice weekend so wait for my email with information on the new material |
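As a small sketch of the PTP-versus-LIN distinction discussed in this lecture, assuming a planar 2R arm with unit link lengths (the blackboard example, not the KUKA) and two illustrative configurations: a linear interpolation in the joint space (PTP-like) is mapped through the direct kinematics, and the resulting end-effector path is not the straight Cartesian segment that a LIN-like motion would follow; executing the LIN path on a real robot would still require inverting the kinematics point after point, as noted above.

```python
import numpy as np

L1 = L2 = 1.0  # assumed unit link lengths for a planar 2R arm


def fk(q):
    """Direct kinematics of the planar 2R arm: joint angles -> end-effector position."""
    q1, q2 = q
    x = L1 * np.cos(q1) + L2 * np.cos(q1 + q2)
    y = L1 * np.sin(q1) + L2 * np.sin(q1 + q2)
    return np.array([x, y])


# illustrative initial and final configurations (not taken from the lecture slides)
q_start = np.array([0.0, np.pi / 2])   # end-effector at (1, 1)
q_goal = np.array([np.pi / 2, 0.0])    # end-effector at (0, 2)

s = np.linspace(0.0, 1.0, 50)

# PTP-like motion: linear interpolation in the joint space, then mapped to Cartesian space
ptp_path = np.array([fk(q_start + si * (q_goal - q_start)) for si in s])

# LIN-like motion: linear interpolation directly between the Cartesian endpoints
p_start, p_goal = fk(q_start), fk(q_goal)
lin_path = np.array([p_start + si * (p_goal - p_start) for si in s])

# the two paths share the endpoints but differ everywhere in between
print(np.allclose(ptp_path[0], lin_path[0]), np.allclose(ptp_path[-1], lin_path[-1]))
print(np.max(np.linalg.norm(ptp_path - lin_path, axis=1)))  # nonzero deviation
```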
Robotics_1_Prof_De_Luca | Robotics_1_Prof_De_Luca_Lecture_09_17_Oct_2014.txt | now we will start seeing some mathematics connected with the description of the geometry and motion of robot manipulators. since we have already mentioned that a robot manipulator will be a chain of rigid bodies connected by joints, we will start today by overviewing the ways in which we can describe the position and orientation of a rigid body in space; this rigid body will be a generic link of our robot manipulator, or more specifically the end-effector, a body carrying some tool or payload of our manipulator, but for the benefit of abstraction we will consider just a generic rigid body in space. so how do we characterize position and orientation? we have to make sure that we start from solid ground, so we define some reference frame in which we will describe whatever we need to describe of geometry and of motion, for instance a reference frame A with three orthogonal axes x_A, y_A, z_A. pay attention that we will always make use of right-handed frames: right-handed means that the vector product of the unit vector along x with the one along y (x cross y) produces the direction of the z axis. we will only work with right-handed frames; of course you could do the whole course with left-handed frames, but what you are not allowed to do is change frame type from one to the other while doing things, otherwise everything gets messed up. in terms of this reference frame we place somewhere this rigid body, and you may remember that it is not important which point of the body we are using for characterizing the position, so from now on we parametrize this body with another reference frame, of course right-handed, which we call frame B, placed somewhere attached to the body, with its unit axes x_B, y_B, z_B starting from its origin B. it is not said that the origin B of this frame is placed physically on the link itself: for instance, if i want to describe the position and orientation of my forearm i may use a frame attached here, which moves together with this body, so it changes position and orientation in space, but i can also use this other frame as long as it moves in exactly the same way, so it is attached to the body; attached not on the body, but moving with the body. then, if you remove the rigid body, it is just a matter of describing the relative position and orientation of frame B with respect to frame A. so let's talk separately about position and orientation. to characterize the position of frame B, namely of the body we are interested in, we just characterize the position of its origin B. how do we do this? with the vector p^A_B, which says where point B is placed with respect to frame A. the vector p_B is a geometric object; sometimes you may be used to writing it with an arrow on top, but we do not use this notation. it is a vector, and as a vector it can be represented in several ways, with several sets of coordinates. the most natural one: you have x_A, y_A, z_A and your vector p, in particular p^A_B; you project it on the axes and you obtain a coordinate along x, a coordinate along y and a coordinate along z. these are its cartesian coordinates. of course you may use other sets of coordinates, for instance the cylindrical coordinates
of that point, or of that vector, say the vector going from the origin to that point. cylindrical coordinates means that these are defined as (r, theta, h): h is exactly the height here, r is the distance from the z axis, and theta is the angle, so you say: i am rotated by theta, i am at that distance r, and i am at this height h. in fact you can move from the cartesian coordinates (x, y, z) to the cylindrical coordinates, or do the opposite: you can write x, y and z, where the third one is easy, z is just the height h, while the other two are x = r cos(theta) and y = r sin(theta). so given a representation of this vector in cylindrical coordinates, the triple (r, theta, h), you can compute its cartesian coordinates. can you do the reverse? well yes, you can; in fact you should write r, theta and h as functions of x, y and z. let's start from the bottom, which is the easiest one: just reverse that relation, h = z. now r is the distance from the z axis, so in fact it is just the square root of x^2 + y^2, which gives r^2 cos^2(theta) + r^2 sin^2(theta), that is r^2; you extract the root and keep only the plus sign, because this is a distance, and you have r = sqrt(x^2 + y^2). what about theta, how can you find it? somebody says arctangent: of what? arctangent means that you first build the tangent, so you divide y by x, the r disappears and you are left with sine over cosine, so theta = arctan(y/x). okay, but remember that the arctangent takes values only in a range of 180 degrees, not over the full 360 degrees, so if my point were behind, this function would project it to the front, which is wrong. we will see later on that there is a better function, called atan2, which takes both arguments and works over the four quadrants: before taking the ratio of y and x it looks at the sign of y and the sign of x and then decides in which quadrant the angle should be placed, and this gives the right answer. we will come back later to the properties of this function. of course neither function is defined when x and y are both zero, so if we take a point which has only a z component and we would like to compute its cylindrical coordinates, then we would certainly say that r = 0 and h = z, but the angle theta cannot be defined; actually there are infinitely many solutions. so this problem of the inverse mapping may be well defined or not: the direct mapping is always well defined, there are no singularities in that direction, while in the inverse mapping you may have some situations where you have to take care, so you simply do not evaluate the function in that case. it is okay because it makes no sense there; the function could even be defined by convention for that one point, but in general it is better not to evaluate a function out of its domain of definition. so this is it for position, there is nothing more. the only thing that i would like you to take into account is that when you are expressing a vector (let's forget about cylindrical coordinates and use cartesian coordinates), even if you do not think about it, when you express the three coordinates, those three numbers are implicitly making reference to a reference frame.
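A small sketch of the conversions just described, using Python's standard math.atan2 to get the four-quadrant behavior recommended in the lecture; the singular case x = y = 0 is simply rejected here:

```python
import math


def cyl_to_cart(r, theta, h):
    """Cylindrical (r, theta, h) -> Cartesian (x, y, z); always well defined."""
    return r * math.cos(theta), r * math.sin(theta), h


def cart_to_cyl(x, y, z):
    """Cartesian -> cylindrical; theta is undefined on the z axis (x = y = 0)."""
    r = math.sqrt(x * x + y * y)
    if r == 0.0:
        raise ValueError("theta is not defined for points on the z axis")
    theta = math.atan2(y, x)  # four-quadrant inverse tangent, unlike atan(y / x)
    return r, theta, z


# a point "behind" the origin: atan(y/x) would wrongly give 0, atan2 gives pi
print(cart_to_cyl(-1.0, 0.0, 2.0))                 # (1.0, 3.14159..., 2.0)
print(cyl_to_cart(*cart_to_cyl(-1.0, 0.0, 2.0)))   # recovers (-1.0, ~0.0, 2.0)
```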
the same vector, which is a geometric object that lives in free space, in the clouds if you want, can be represented with respect to different reference frames, and we need a way of changing these three numbers depending on the fact that we are using this frame or, for instance, frame B as a reference. okay, we will come back to this. now, what about orientation? there are many ways of representing it, and we will devote quite a lot of time to the representation of orientation, but the most classical and global one (we will see what global means) is using a rotation matrix. it is strange, because i would call it an orientation matrix, since it is used for representing orientation, but we will see together pretty soon why it is called a rotation matrix. a rotation matrix is a 3 by 3 matrix; it is 3 by 3 because we are living in the three-dimensional space: if we defined rotation matrices on the plane they would be 2 by 2, and if we defined rotation matrices in an n-dimensional space, which we cannot picture, they would be n by n matrices (this is the powerfulness of mathematics). this matrix is orthogonal and normal, which means orthonormal. do you know what orthonormal means? it is ortho plus normal. a matrix can be represented by its columns, which are vectors: a matrix is orthogonal if its columns are orthogonal to each other (for instance, the three unit vectors along x, y and z are orthogonal), and it is normal if its columns have unit norm (for instance, the unit vectors along x, y and z are unit vectors, and if we arranged them in a matrix we would obtain a normal matrix). so orthogonal plus normal is orthonormal. now, an orthonormal matrix has the property that its inverse is equal to its transpose; this is fundamental. however, rotation matrices are not only orthonormal: in addition they have determinant equal to plus one. you can have an orthonormal matrix with determinant minus one: for instance, this matrix on the slide is orthonormal, yes, because when you do the scalar product of distinct columns you get zero and the three column vectors have unit norm, but its determinant is minus one, so it is not a rotation matrix and does not represent the orientation of a frame with respect to another frame of the same right-handed kind. okay, this is important. we will use this algebraic structure to represent the orientation of frame B with respect to frame A, and we will use the notation R^A_B: the superscript A and the subscript B mean that this matrix represents the orientation of B with respect to frame A. here we are explicitly indicating the dependence on the reference frame, and if we want to do the same for a position vector (i forgot to say it) when we are expressing a generic vector p with three numbers, its cartesian coordinates, we add an A to say that these are the coordinates when the vector is expressed with respect to frame A. you see that a superscript in front of this object means the frame with respect to which we are doing the computation. so this is the orientation, and we will see now what it means. we may also have the reverse operation, R^B_A: it represents the orientation of frame A as seen from frame B, so exactly the inverse operation, and since this is an orthonormal matrix the inverse operation is just the transpose; in fact we can write that R^B_A is equal to the transpose of R^A_B.
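As a quick numerical check of the distinction just made (the reflection matrix below is only an illustrative example of an orthonormal matrix with determinant minus one, not necessarily the one written on the slide):

```python
import numpy as np

# an orthonormal matrix that is NOT a rotation: a reflection about the xy plane
M = np.diag([1.0, 1.0, -1.0])

print(np.allclose(M.T @ M, np.eye(3)))      # True: columns are orthonormal
print(np.allclose(np.linalg.inv(M), M.T))   # True: inverse equals transpose
print(np.linalg.det(M))                     # -1.0: not in SO(3), so not a rotation

# a genuine rotation matrix, e.g. a rotation of 90 degrees about the z axis
c, s = 0.0, 1.0
R = np.array([[c, -s, 0.0],
              [s,  c, 0.0],
              [0.0, 0.0, 1.0]])
print(np.allclose(R.T @ R, np.eye(3)), np.linalg.det(R))  # True, 1.0
```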
product we obtain the identity saw when we are representing your invitation offering a with respect to a frame be and then we're presenting their are impatient offering be with respect to frame eighty when we Concord NH this operation so we make the product of rotation matrix we are in fact representing for a Mary with respect differing a and doesn't change a vintage so it's the 18th okay now you me wonder but why identity means no change of orientation we will see right now but keep in mind that there are these simple algebra i relationship saw how is this genetic meet the ex made of it's made of the a representation in frame a of the three unit vectors which characterize the three-axis of frame be we have XP it's a unit vector in the direction of be when we represented in frame a we have two project this vector on the three axis and take the three numbers think and saying for this and same for that and now you can't starts realizing that this type of I relation implies that you have in effect us why this is a unit vector anything remains the same unit vectors if we express it in free be or if the expressive in frame day its normal does not change we're just looking at this specter from different that action okay same for this and thankfully so these are all unit vector and even orthogonality if two vectors of 2010 xB and white be the remain orthogonal if the act if I express them in some frame or in another frame doesn't matter its invariant that so this is or four mile matrix in addition scenes the reference frame he's a right-handed frame then it's the tournament will be plus one and help -1 okay so remember these are the things now if I look at the genetic column here for instance the second one the number that I find here which characterize projection of a unit vector why be along X eight which is a unit vector yeaa and CA so they are scalar product of unique perspective remember the rule of what is the at number coming out of a product of the vector its normal phone vector stand norm of the other effect this time the cosine ang of the angle between two vectors sees these are two units vectors the norm r1 so what remains is just the cosine so these are so-called the numbers here are directional cosine of the axis y be with respect to make say yeaa and CA and same for the other two columns cell if we want to ride it more explicitly a here you see that I'm taken the three unit vectors speaks eighty yeaa and CA and I'm projecting them along XP a sorry forget restart I'm taking the unit vector XP and I'm project in a along it's a yeaa NZ a song doing the scalar product of XP and X-eighty so weak say transposes are all time XP column both are unit vectors so what is there the cosine of the angle in between and same for the second component same for the comp on so this is what I compact the rewrites as xB expressing frame a saw the first the first up column of the previous slide so it's xB expressing frame a are these three numbers same for the second column this is the unit vector why you be expressed in frame eighty which means that I project and do the scalar product of white be with the unit spectra X of the frame a and then unit vector wireframe II a and then of you expect to see offering a and same for the last one okay the result is an orthonormal matrix with the third young plus one a and this is one but for instance just to remind that now there's of another important think which is implicit in this relationship so those are chain rule property meaning that if I take a genetic 
rotation matrix again which represent your invitation of the frame with respect to another frame and I multiply for another rotation matrix the problem peace again a rotation matrix so again it's orthonormal with the Thurman plus one okay so we have in fact the structure of the group a group and this group is called special orthogonal group of them mentioned three they mentioned three is because these out meat is three by three so you're injured general special orthogonal group of the mention and its orthogonal because this is made by orthonormal maybe sis any special because it hasn't yet determined plus one okay now a its a group so if you com com company element with the product operation you obtain another element of the group so you then again a rotation matrix and as every group has a neutral element so if you multiply a rotation matrix by the neutral element the rotation matrix should not change what is this element is the I did okay and finally the Chamber's elements also should exist in a group and this is Steven by the transpose after or regional element okay because these are or phone or on May 26 now there's a nice interpretation if you do thinks probably like we did here so if you have this in no to index all the subscript of the first matrix and the superscript up the second meetings of the same kind then you have a clear interpretation of what you're doing namely this represent your intention of a frame I with respect to foreign key this represents your intuition offer another frame J with respect to frame I when I do the product I can eliminate this and the result is another matrix switches again a rotation matrix which represent your invitation of the last spring J with respect to the origin offering K okay what you may wonder what if I exchange the order of this so what if I'm big in this rotation make this is and I'm a riding instead are EJ times are K II what is this so first of all easy it arcane Jade know why not because in general brother got me tix is not commodities okay these are rotation matrix special meetings not no one a I sit in general infect because of their interpretation if this rotation matrix represents G change of orientation both with respect to the same axes then they commute because if I'm rotating around some axis by some angle and then I'm rotating again by another angle and if I interchange this angle I'm hoping the same finally fish okay so the product in that case would become movies but in general it is not so this is not so what it is the in birth the identity matrix not read I would say nothing it's a rotation matrix it's a rotation it's a combination of two additional meetings represent an orientation because the result because of the Stockton of the group is again a rotation matrix so again it represents someone station but it's just I've made some change orientation and I mean the first one then another change in this is the final result but what it is I don't know pink so a let's do okay maybe I have up this is the notes that its import let's do some example a a very simple except now I used to do this with a coke can but this would be to commercial so I'm using an Italian problem okay suppose that you have this bottle %ah wall okay with his brand on its and suppose that we have this bottle with the name in from and re-entered like this okay so its main axes along CA and the center of the name he is on X a K and I suppose that would be fine I have here a an example we define some orientation with this matrix are be with respect to a and 
im a Texas simple this one 0 0 -1 -1 00 and then course 0 one see okay now a if I'm bringing ball we cheesiness neutral orientation and I'm assuming the orientation of rain bi how with the box will be placed drinking well City remember what what are the skull what is this column first column of this rotation me 3 percent what represent the directional cosines of X axis XP when I'm projecting its along X a yeaa and CAD which means that this 0 hugh means that XVI's of 41 out two weeks a 0 humans that XP is orthogonal to yaad and the -1 year means that XP is align with x8 but has the opposite okay saw these is the origin of a which coincide with this audition but now for clarity I'm drawing this so the first axis xB this is xB with respect to a is oriented like minus z80 blazer and the like this okay now what is this this is why I B expressed with respect to a so I have to draw all the axis y be where it is oriented is the opposite side of which axis affix a so exeis here %uh and its coming so this is I'm perspective it's going so why be is here and what is this mean is CB express with respect to a so when I projected is orthogonal to it's a it's ok to go into the eighties giving just the direction of why I so it's both both disoriented like this so this is the be first question is this a right-handed frame I don't know if you see it yes because it's I it's difficult to make it I okay but takes then why goes inside so if i'm looking from here I see X&Y the problem in them or if we so this is a right-handed free and this result is just because this has interment plus one if I'm changing a sign here if I put -1 year I would have obtain the same X and why axis offering me but this would be reversed and this would be a left-handed friend so I'm going from a right-handed or left-handed and then I make if I makes products like this I i'm I'm losing my mind okay so this is correct so result how is the bottom-placed already entered I would say well it has also its always in this main direction mmm as a K and where is the name I'm not really a its at the bottom so it's it has morphed from here to you and then okay so you see that if I give your a generic rotation matrix with numbers been always understand what is the frame behind this rotation matrix with if I give you the reference frame so if I give you the frame with respect to which I am expressing things otherwise its it's not really okay so the rotation matrix again is an object the site an abstraction does not need to Carey subscript and superscript okay because it's our orthonormal matrix with them and I'm plus one it's one genetic element of this special orthogonal group 3 so I don't care I can make computation with that but if we r using it as a representation of orientation a then I can interpret the columns as or intrusion of the axes of one frame with respect to the other and assaults our problem of representing things we will see later on that this a representation is somewhat redundant because we have nine numbers in this three by three matrix overall this numbers cannot be out to be threatening because they should be such that the columns are orthogonal such that the columns are unitary and such that the third minute his gloves plus one okay so there is no more compact representation like the one that we suggest that at the beginning in order to I represent your intuition of this body in space I'm a need only three angles saw how is this related to this three by three main thesis we will see how to make this transformation and affected 
weekend make more compact representation of orientation sought not using rotation matrix but just three angles but we will run always always no matter how we do this into singularities so there will be some representation some orientation that cannot be represented in a compact way even this many miles representation but rotation matrix represent or possible orientations for the in this sense the app global okay a now let's see another aspect out let's go back to position vectors and now we have this vector from our region to pee or just be we forget about the audit vector p so when we see a point to me a implicitly think of a vector going from the origins of the frame that point expressed in frenzied all thought this with that axes x0 y0 seat as you know it is given by three numbers home which we call peaks 0 by: 0 PZ Zi okay for instance if this fact there was a line with why at its number would be 0 some value which is the length of this vector and then 0 again now it's a genetic position is like this how when you're writing it in this way should think that you're representing the same genetic structure of a vector with numbers because you're expressing its with reference to some reference frame sold in fact I can dream right this 3 number in a more complicated way I i can say that the first scholar is the wait of what of an axis X one expressed in a Sorry Sorry no I'm I'm I'm going ahead a okay before tweet this that's what one step in between okay I can write this p0 so be X by: BZ 000 up and then I can rewrite this s p0 X with respect to a frame annexes which is made like this and then p why 0 with respect 100 and then Pete Z 0 with respect to 00 axe okay this is missing so what what I mean by that I mean that this three numbers our house organized in my multiplication of 3 vectors this is that XT relaxes expressed in frame 0 for this is simply Juan 00 okay this is same thing is the YZ relaxes expressed in prim 0 so it's 010 and same here okay so you're actually organizing things acts like rewriting this by he'd entered the matrix times the same original picked to see when you do the product of the identity thanks yes you have first column time the first scholar second column packed the second scholar there's problems than the first got so what your actually saying is that you're not changing your reference frame evening there reference frame you have three numbers in the same frame is like using the identity as the transformation you have not changed the orientation now suppose that you want to a 3 percent this in a different frame in a different frame and the different frame is frame 1 so you take the same vector the same vector and your now instead of reading these three number in black you like to see read the three numbers which are the projection of the specter along the units axis of frame one book rotated 3 a sense Sol what you needs to do is to obtain some representation of the orientation of frame one with respect to frame 0 so you're writing are 0-1 we cheese characterize by the unit specter x1 projected on fring 0 unit spectra why project zero and so on and then 23 number that we results from this identity will be the coordinates of the same vector so peaks by: PC knowledge expressed in frame 1 okay or that expression on top so you know what you're doing is you're having a way of changing coordinate of affect in particular if you know the vector expressing frame 1 so you know these three numbers in order to compute its expression in frenzy of so these three numbers you 
off to build-up the rotation matrix of frame 1 with respect to frenzy okay and vice versa if you know this three numbers and you would like to know the expression of the same vector in frame 1 you have to isolate p1 how do you do this by for multiplying by the inverse of are single one so you have be one isolated here and you will have arsenal won transpose p0 no because the inverse is the transpose of this cell let's do one example of application of this and pay attention this is very important robotics because sometimes you know some information about an object appoints expressed in the reference frame of a sensor then since you have a a Koch sensor or a four star sensor on the wrist and you're measuring the force in that direction but in fact you would like to know this force how is this press into the reference frame of your lab or your table or your cell whatever it is I can so you need to do this change of representation for instance here suppose that you know affecting in frame 1 and this vector is just Pete one is one 11 okay so if I want to represent this x1 white one C-one this would be a vector which comes out like this and is along the okay and its length would be square 3 working so it's not the end up principal I orthogonal sector made by positive value of XY and z. and point think they're now spas that I would like to no p0 for this numbers when I know that this frame with respect to the 0 offering which may be my reference frame as all is characterized by this matrix okay strange enough the first to vote if this is our problem somebody says OK this frame which is the sense of rain which you measure the spectrum which may be a farce which may be a velocity whatever it is at I know that's good reading these three numbers but in fact I want to know this freshen up the same vector in the frenzied all which is characterized by this first thing you don't believe that you check that these is up rotation matrix because otherwise your competition makes no sense think is this a rotation matrix well let's check at the the normality this is squared off this is one over three-plus square off this one or second six close way of this one or two so it sa a 10 to over six plus 146 plus three or six choice six or six is one okay same thing like this because when you square them minus thank you doesn't matter and here mmm think one-third it's it's correct now yes no perfect it's not correct this to miss okay so don't believe it check it first I Ave orthogonal well I should make the product these thanks they saw this gives one-third these give 16 so one-third plus 16 -1 Hall and this is the wrong and then of these times this an of these thank these and if I haven't yet okay they should all be 0 the last thing you should check is that to determine that this plus one top let's take this for granted but don't don't forget to to verify this now I make the simple computation p0 is are 01 p1 argue no one is here the one is here so when I do this product should some the columns not for you first Royce this plus this plus this so it's 3 over square root of three which is also square root of three and this is the first comport then I should some this one saw second row thanks this column this plus this plus this so it's Juan one -2 or all worth square root of 60 this is the wrong and finally I some the three columns because use one $1/1 and this gives again 0 Sol this specter is just this one and has only one component in that that action saw I should computer how r the why and see that 0 NC 0 axes of 
frenzy top but I can certainly say where is the x0 axis do you agree if I out needs to draw here where a spring 0 well I can see at this axis his that XE lacks why's that because the same vector this the fact that the lynching but I I know that its expression in this frame has only they feel component so it relies on the axial axis and then there would be some I say y0 30 I don't care now okay to remember the vector has not changed just its representation so the numbers that we use for carrying over computation director is always the same okay cell I outcome of this is that the same object the station meetings what that we have been thru use for representing the orientation of the frame with respect to another one is also use for changing coordinates of a vector when expressed from are they the frame with respect to affix the freemen by summer's okay is accepted the same pot now let's do a last step which is the following one suppose that we do I don't teach mom we r doing a rotation of a fix it for free and here to make things simple and computable easy we do what is called an elementary rotation so we start from a reference frame suppose that in this case I was take X why NZ so like the walls of this room and now I'm making a rotation and I and with a frame which is rotated and I call it RFC so the same z-axis did not change thanks I'm I've made an elementary rotation around one of the main axis of the regional 3 in particular in this case around the c-axis now if I look at a a genetic vector and since I'm doing competition in the plane I will I put this vector on the plane or imagine that this plane is not this he doesn't see that it was see was your plane is just some seaplane whatever King I and suppose that of this specter that goes from the common are teaching you see that as a common origin here to the that point P I know its coordinates in that a that three so I know what is called here you the and W company would be the c-axis now what are the so I know these numbers you v and then whenever you see and I want to compute book a black numbers XY and z. 
well let's take see out-of-plane a in the sense that it's the same we just me the rotation so whatever was this high it up this point it's remained the same so the is equal to W but what about the other while the other need some trigonometry many seem so this X is complete the light all be minus xB but all he is just you and there's a think that angle by which I have rotated frame and this is cosine of the that I have to subtract this but due to geometry I know that that angle over his ex equity the to and so this part it's just the sign of bebop by the are which is nothing else than the okay similarly in order to evaluate why I some alcee which is just you times sign up feat that and then see why and see why I look at this angle again and I have the great use which is be and then now multiply by cosine of this the so I have this expression and this holds nomad 35 but they did by the time positive or negative its genetic expression but sector let organize them you know smart that way and that I see that if I think the components of this specter in frame see so what I would have called 0p or just be expressed in C so that the number you the W looking at this I can multiply it by this matrix and obtain what is on the left XYZ which is nothing else than a coordinates of be when expressed in frenzy so these is a rotation matrix and since it's of an elementary revision made it's because I've rotated thinks around the z-axis I call it in this particular way with the sea okay by an angle theta you can't immediately tell that this is an orthonormal matrix and now it's easy because these cosine sine square science and square one orthogonal co-signed miners sign cosine plus these are on the zero so and then determine cosine squared plus n square time swung so it's what plus one and again what is the axis z axis of frame of the C frame well it hit coincide with the z axis of the 03 001 and again these are directional cosine of the X axis of the C frame that I think that one and the y axis of the sea fring repeated one when expressed to them region okay saw now we have given thus tractors to this rotation matrix to represent in the change of coordinates how we are using the idea of change have cordoned but we know that to change court in Tweedy the rotation matrix and so this rotation may degree percent also you're impatient of rain C with respect frenzy of and this is an elementary by an elementary change of orientation around the z-axis and see me that me okay you can also notes two things very important we shot hold in general that if you suppose that you're not changing by be there but by Minas keep so the frame RFC would be the law problem then a you would octane expected the same matrix transpose which is the innkeepers which can be interpreted as follows: so if I'm rotating frenzy of my teeth out to frame she then I need to rotate francie by minus the doc to get back to fring 0 is just the reverse overage buddy versace's transpose which in this case in the time in 30 cases just changing the sign of of the angle mmm have I leave it as an exercise to obtain the elementary rotation matrix when your rotating the app I the frame by an eleven by an angle C-dot Corp see your whatever you want to call around the x-axis and you have the one here and the same stock to hear as this one or around her y axis and then you have the one on the diagonal the second position but pay attention here the sign of the the sign of the sino si is on the other side on the lower Wrangler particles track okay so you can do 
this by 11:30 trigonometry like did it before cell last think that I want to show is that the same matrix and here's why is called rotation issues to rotate vector now pay attention we are not changing expiration of a vector mom expressing one frame or the other we are not representing orientation overflowing with respect that we have just 13 but what we do is we apply rotation poor electors in this frame for instance victory rotates and goes to vector the prime this operation is something that in graphic card can be done on hardware on ten thousands of actor at the same time when you're changing representation of the view you're changing for the invitation then although Vectis pointing to the have pics of I no whenever solids are in your view should change accordingly at the same time my and the powerful of the power of a a graphic card is that it can be implemented on hardware this operation of a huge amount of data okay so this is one thing you see that I have only one frame here know all the frame to represents no change of corn is just I have a a vector and I'm loving the fact that going there how can I represents the coordinates of the specter after this operation of a rotation not water using a rotation matrix and this is why it's called rotation I think let's do things simple and make this rotation as a an elementary one so around the z-axis so I have a vector there on the plane XY or on any other plane a at the level c and I'm rotating the specter by some amount say Peter Saudi generally the vector was had an angle Allah fun with respect to x-axis and head coordinate X&Y which can be expressed in this week not so mother use of the vector be bar and then cosine and sign of all for now I'm giving this rotation by an angle theta and I Octane be prime which has coordinated ex-prime why Prime and see primes the Prime has not changed forget about this part and I can do some competition because the new angola is alpha plus the to not the original one plus the one of the rotation so I can represent according to the ex-prime way prime as before with the right angle saw motherless of me call sign off of a plus the band sign of us at: now expand this trigonometric some and I have go sank assignments and sign in sine cosine plus sin sin and then I recognized for instance that the sign of place just why multiply by causing fever and be causing our fight is just X multiply by simply don't have his expression and similarly as a ball if I reorganize thing in a matrix way no surprise I found as before that the coordinates of the rotated vector with respect to the up have a regional for no conscious obtain by the elementary rotation matrix around the z-axis in fact I was giving adaptation around ZX but this holes through if I'm giving a rotation around an axis I'm to any vector and the results of the rotator there are they good vector is obtain through the rotation matrix supplied to the Regional Court and again rotation this is why score the rotation matrix so if we summarize this thing's I we have 3 equivalent interpretation of the same mathematical object this or for normal matrix with the tournament plus one which is a rotation matrix can be used as Sasha for doing three thinks first representing the orientation of a body or offer frame with respect to another friend the first things that we have introduced second to change the coldness of some vector from water at a that frame to on a regional thing finally as an operator we should rotates affected the third be interpretation that the F just 
see so all of them are represented by rotation mantises the name rotation remains even if we are using its representing orientation or if you're using it for changing coordinates other vector okay now short break reaches of the computational mange suppose that's now you have several reference frame and it's not it's not the up by chance that I'm using this example because this is exactly what will happen in the direct him attic of a manipulator I'm starting with the frame attached to the base then I patch of rain to ensuring and then I have to describe position orientation of one link with respect to the previous one or of yes a seated frame with respect to the previous one and then of the following one with respect to the pews one and chaining things and doing product I will obtain the final orientation okay there is also the position and the displacement to take into account we will see this leader how to both things together now just on intuition Sol in this sly I have a reference frame here then I have placed another reference frame you're just for the purpose of clarity infected the origin are coincident okay just not to these drawl several frame in the same place so this displacement which is locating the origins of rainwater with respect to preview in fact it's zero so it's not there then I have a a second another free in our through again with the same margin but I this place is just for sake of clarity and then another one again place like this so I have four frames our 01 2&3 pay attention four frames although I'm and with 3 because we start always with is he offering okay so there are four frames all with the same origin but all each of them has a different orientation so I'm using up 3 probation meet missus to represent their relative orientation so I'm using my matress are 0-1-2 characterize the orientation offering one with respect to bring see them matrix are 122 corrected ice you're impatient offering to with respect to frame one and finally art 23 characterize irritation offering three with respect to frame to and now I have this problem at the end of this chain of friends frames I have a vector and I know the coordinator of this vector in that frame and I would like to compute its coordinate into the basic train for instance to understand if if that is a velocity and I'm reading this but also the on-board above the last frame I want to know if this velocity with respect to my reference frame in this room it's in that direction retract that action or whatever this is a reason why I'm I'm on board I have a sense that like a mechatronic devise on the space shots on my about the camera looking between the gripper and I don't know how this camera display so I I don't know where is this object going with respect to the reference frame so I need to do this transfer midge saw how do I do this well I apply the chain rule I express this into a several times by multiplying rotation medicine and I'll thing the result but this can be done in two different way one is the following saw I'm building OPP by multiplying the first rotation may text and the second thanks to third now if the chain rule repeated my Sol this brother express what you're impatient offering to with respect to frenzied and then this product as a whole can be written in this way because the simplification of complete was substance so busted are consistent so this cancelled is this castle the so what is left is representation of accordion patient offering free with respect to frame 0 by now I've this specter I 
multiply this vector by this matrix and I'm done okay there's another way of doing things which is the following one now well I think this specter I know it's with three numbers and first are you express it in friend to which means that I do the multiplication of are 23 times P three what do I obtain the representation of the same vector in frame two saw what I would call p2 and now I apply the second rotation matrix and I express the same vector in frame 1 and finally I apply the for salvation me the ex and express things in BC very thing which is better this includes sixty product and 42 summation this only Sep 27 problems an 18 some ish very simple problem where the issue of computation and how fast you can do this now products and summation can be done very fast so it's it's just a matter of theoretical point if you which shows that having an expression of things it's not enough robotics but the way we should organize composition is important because you have to do this competition several times in real time with changing data so you cannot and neglect the issue of computational complexity now of course I would say this is this better than that unless unless the same operation should be applied not just through this vector but will morty to the fact that I know at the same time in FriendFeed if I have 100 vectors known at some instant of time fraying 3 then it's much better to compute first the rotation matrix once for all arsenal 3- and then applied to the whole bunch affect it's clear the safest I mean that the optimal things okay so there's a trade-off just for one it's okay I think that for two are on the borderline for three or more it would be definitely better to organize it first this revision weeks and then apply to the multitude of effective that you need to transfer okay so depending on the task want method may be convenient with respect to another one a okay I think that we won't have time to go through all of this now but let me introduce the point and introduce some mathematics will do now in the last 15 minutes mathematics on the blackboard but why do we need this mathematics press the whole now we want to represent now that we know that rotation matrix are very nice we can do several things now want to solve this problem suppose that I have a vector and I want to see what is its representation of that a genetic rotation not another man 31 around xx is a but legit around the genetic axis which I called are and buy up generate value of Anglesey so i'm saying that I have this vector and I have this vector art and I want to do a rotation of some angle the doc around this X saw how do I represent this rotation knowing already few numbers the directional cosine of this unit axis so three numbers but actually only two because the third number should make you need to inspect okay and that the scholar be back which is the angle around which I'm rotating if I do proper computation I will obtain a rotation matrix which is a function only of small air song the vector hair and the angle think that and this is the so-called iraq problem so I find a rotation matrix problem with the rise by the axis around which I'm rotating and the angle think that such that this made it will rotate whatever vector I'm conceding because of course I'm rotating the word around its axis by an angle think that so whatever vector then win leave will be rotated by this the inverse problem will be okay I have this a rotation matrix numbers orthonormal with their mean plus want but numbers like that like this 
one then I say okay but this if I forget that this is 0-1 it's a genetic correlation matrix can I 3 percent this rotation matrix Azzaro elementary division of an angle he thought around some axis are and this is the so-called inverse problem even a rotation matrix numbers I want to find the so-called inviting taxes of The Associated rotation and the angle rotating around this and we will see that the direct problem is relatively simple so bill top a rotation matrix only with those data the inverse problem is already one of a complexity senior to those complexity that we find in solving the inverse kinematic of robot manipulators to for instance it may not have a solution or it may have in general has more than one solution before doing this so and we leave the rest of the slide to the next lecture I would like to a recall a bit of mathematics any particular algebra of vectors I have year lease to think okay the first couple I let's let's do this this simple example suppose that this is vector are and in here I'm standing with my head on the arrow of the vector are I think so now the world is rotating around me by an angle theta so I will have something like that and in particular if this is I call this axis X or whatever I will go to ex-prime by an angle theta remember that we compute positive angles when we rotate counterclockwise around an axis so when I'm standing like the axis of rotation I see the axis rotating like opposite of our clock okay so this would be a positive venue of the and same if I'm going from here to there and this is rotation the top for instance this maybe 30 degrees okay now suppose that the rotation is like this and so goes like this: what should I write here what is that angle minus the to okay good suggestion because this is wrong okay basis against either box if I put the number this would be approximately -30 so rid I mention this because of a long story if teaching robotics the student they immediately get lost because he's okay this is positive is this native so this is minus think that no it's always Theda because he's the fine from one access to the other from one access to the other is always think that anything be positive is the rotation is counterclockwise or negative if the rotation is compass clockwise okay so don't make this mistake although the results have part is the same but these is defined as the guy and its value is negative this is the finest eat and its values both okay so now with this premark premises I have few a computation to be done so let's start with this suppose that you have two vectors and and s and they are unit vectors it's not really important or maybe it's a well it's important it is general so they will have be defined by annex NY antsy I'm writing it in roll and s X s why s see now how do I compute and vector s well this will give me some vector are and if these are unit vectors and the out or spoken out and I think this and as X then s as why I will find after vector as the sea vector according to the right hand through but in general these may not be orthogonal and not me the elementary X you know how to do this those of an easy trick which I think it goes on the other side goes through and you do the following: you organize the two vectors as rose of a matrix and then you put DGK which characterize the you unit direction of the axis XYZ not the comp on and any computer determines mmm so if you complete the determinant of this you will find can you expand the longer the computer the determined by expanding the element of 
the last three or so you will have las Minas plus so I times this block so it's and why as Z minus and Z s why and then plus minus plus minus so I put stop-loss J but I remember that I have to reverse things in this the 3rd min 12 make this brother first s X and see minus and X s the and then again plus without changes K and I have to this minor here and X a s why minus s X and why and these art simply are X are why and RC of the vector are so this is the rule for making vector from not scholar product between two backed no Scott our product are so-called in a product of vectors so one vector one back tonight project them 10 on the other so I have a normal times norms thank the cosine of the angle so it's a scalar here it's another vector which has this component now this can also I with sometimes not use this notation I will use this notation I am writing that s vector and Victor s it's also return as a skew symmetric matrix which contains the element of and times the vector s where the skew symmetric matrix dismayed as fol- 0 on the diagonal and then the elements minus and Z and why and axe with minus and then being skewed thematic up other element and a sec 8 opposite of this so and Z and X and minus and what and this is a askew si metric me thanks times as X that's why as C what is this operation I've replaced an operation between vectors the vector product of two vectors by an operational which is a product of matrix time %uh vector so I'm using a matrix which is convenient because matrix can be concatenated in an easy way for instance we'll see later on that this has now if you do this meh magically York 10 exactly this result so you obtain the same are I every time we have a product of %uh vectors time a another vector we can replace it by the product of a matrix which with the element of the first vector times the other victim so this is a skew symmetric matrix do you know what is a symmetric matrix is a matrix where the element a I GE's equal to the element AJ yeah I a skew symmetric matrix is a matrix where the element a I G for orange I column J is equal to minus the element exchanging rose and Carl and here is the fact okay as a consequence the element on the diagonal of a skew symmetric matrix are always the wrong because they have to be equal to their opposite and the only situation is that if ever see this is a skew symmetric matrix now that our other things that can be said but in some out about this so if I have s vector n I know that this gives minus are not so I flipping the order so I'm obtaining the opposite direction so which means that this is s s times and but that should give also s of minus and vector s gives us exactly the same result so you have operation moving around things that follows from the same properties of a vector product another thing that you want use now but its important for me that it uses is the following: suppose that you have a genetic matrix eighty is this may take SIMATIC or not we don't know it's a generic it's not cemented in general and it's not even skew symmetric its up matrix whatever well we can rewrite this matrix as follow: weekend some and subtract the transpose and divide by two okay the result is a because he's a house plus a house plus a transpose -7 suppose I'll eighty saw any matrix can be written as this two components I look at this comp points this one is symmetric okay it's amazing because if I take the transpose of this matrix I obtain a transpose plus a divided by two witches think this thing okay this is Q si metric 
because if I think the transpose of this I hope in exactly the opposite makes like here if I think the transpose and I'm changing so element helpful hoping the same 8x with the minus sign in front okay now if you take so I genetic matrix a and you put it into a quadratic form like this have you ever seen this platform so this is a a quadratic function of x not rate than we thought me too eighty in between that if I call this a seam and I call these a skew I can rights this in two parts K you please a week a scene plus a skewed and ideas to do the prophets lapis terms by the skew symmetric matrix when Putin plot quadratic form give Seattle try with this multiply this by thats and in fact it's a it's a result of the vector probables if you take the vector product of eighty vector brother well II leavitt for you mom I which means that every time we have a quadratic form it is nonsense to consider announcing medic meetings just consider symmetric matrix because if a is nonsense is genetic we can be always replaced by a thematic part okay last concept is that if you have a symmetric matrix you know how are the eigen value of this matrix you know what it what I can believe a matrix help you do saw I can values of a matrix of real numbers are in general complex number and if you have a a a complex I can value you have also its conjugate spare okay the coming contribute pair now if the matrix is symmetric all the eigen values are real is a strong property if the matrix is positive definite which means that and at this point I can all only talk about symmetric matrix which means that the CS agree that an equal to 0 and is equal to 0 if and only if X is zero so on if you're taking a zero vector this is seen all the lies this is always possible this is a positive definite meetings aunt being symmetric has or real I can value and this re I can tell you are all positive okay senior only if you have a negative definite matrix SIMATIC or having a value sorry land or are negative day so this properties will be used this last one Lee much neither into in the in the course but this one with skew symmetric matrix will be used here to solve for problem okay we stop here and have a nice weekend |
Computer_Vision_Andreas_Geiger | Computer_Vision_Lecture_101_Recognition_Image_Classification.txt | hello and welcome to computer vision lecture number 10. in the past lectures we have focused our attention a lot on the geometric aspects of computer vision now in this lecture we're going to touch on another important subfield of computer vision which is recognition recognizing objects and it's a very data-driven field which has benefited a lot from the advances that the field has experienced with the rise of deep learning over the last 10 years this lecture is divided into three units in the first unit we're gonna cover image classification one of the fundamental grand goals of computer vision in the second unit we're gonna have a look at a problem called semantic segmentation and then finally we're gonna discuss object recognition object detection and segmentation before we start let's have a brief look at what these problems are and what they distinguish from each other let's start with image classification in image classification we're given an image like this one here and the goal is to assign a single class label an image category to the image let's suppose we have a set of maybe 50 possible class labels for images that we might observe and for this particular image we want to assign the label street scene because it's a street scene that's one discrete number that we assign to the entire image this is image classification in semantic segmentation the goal is different here the goal is given again the same image as an input which i haven't shown here i have just shown the output but given the same image the goal is to classify each individual pixel with respect to its semantic category in other words we want to assign a semantic label to every pixel in the image and these regions that we want to classify are both object regions such as cars that are that could be potentially distinguished from each other but we don't distinguish them we just assign the label car or stuff labels like sky or trees or grass so for every pixel in that image we want to assign one of these discrete labels as illustrated here in object detection the goal is to localize objects via their bounding box their 2d bounding box and classify them in this example here we have localized a couple of trees poles and cars we cannot localize stuff categories such as sky or grass we can only localize object categories that's why it's called object detection and for each of these categories we want to find all the objects and describe them by a 2d bounding box and retrieve the correct category label so we want to mark this object here with the correct bounding box that exactly describes that object and we want to say that this is a car and another tree for example and finally semantic instance segmentation is to assign a semantic and an instance label to every pixel of an object again this can only be done for object classes there is another problem it's called panoptix segmentation that combines both instant segmentation and semantic segmentation but in in traditional instance segmentation the goal is really for all of the objects that we have detected we want to assign a mask and a class label so we want to know which pixels in the image correspond to this particular object and we want to know for this particular object like what is the category and which pixels correspond to that object and in contrast to semantic segmentation where for example cars or trees that are occluding each other cannot be separated from each 
other because they just obtained the same semantic class label here they obtain the same semantic class label but a different instance label for example for these pixels here we would assign the instance label number three and for those pixels here we would assign instance label number four so we can actually distinguish them despite the fact that they are nearby so every pixel gets a semantic and an instance label every pixel that is on an object these are the four problems that we're going to have a look at let's start with image classification here's again the problem input is an image and we want to assign a single class label or image category to that image such as street scene why is this an interesting problem well first of all it can be a useful task on its own for example think of recognizing handwritten digits or searching images based on keywords these are tasks where image classification has a direct purpose another advantage of this very canonical and simple problem is that they this problem has a very simple specification the input is easily specified as an image often in data sets the images are resized to a common format so it's the same height and width and the output is just a discrete category label so it's a very simple one-dimensional discrete output and we can use standard loss functions like cross-entropy to learn such a model and and for this reason for a long time image classification has been one of the difficult gold standard tasks in computer vision because it requires a lot of reasoning about what's happening in the image it requires an analysis an understanding of the image in order to describe the entire image furthermore image classification as it turns out is also a useful high level proxy task to learn good representations which transfer well to new tasks and domains so they these representations are useful for transfer learning given a big data set that is labeled with image level labels we can train a representation and then we can take that model and fine-tune it on a smaller data set on a different task that can be a very different task can be semantic segmentation can be object detection it can be even reconstruction and we get a wave much less with much fewer data on this target task because we have pre-trained on a very big data set using this image classification proxy task and this is something that we'll talk about also okay so how can we classify or categorize an image early attempts have tried to solve this problem by hand engineering try to come up with an algorithm that solves this task come up with hand engineered manually engineered rules for example given this image we want to find some edges using some edge detector and then based on the relationship between the edges and corners we wanna using simple rules describe um what's in that image and eventually infer the image class but as you can already see from from this example this early attempts have horribly failed and led to a dead end and the reason for this is that object shapes and appearances are just too hard to describe manually it's impossible and that's really why machine learning and labeled data sets play a fundamental role in in solving this problem so let's look at some data sets here is one of the most famous data sets called mnist has been popularized in the machine learning community and is mostly used in the machine learning community there is many variants and it's actually still used today it's based on data from the national institute of standards and technology and 
these are handwritten digits by employees of the census bureau and high school children and here are two examples of digits written by two different people they are relatively low resolution but there's a lot of data there's sixty thousand training samples with labels with ground truth and ten thousand test samples a slightly more realistic data set is caltech 101 it really was the first major object recognition data set it was collected in 2003 by life fae and rob ferguson pitoporona and it comprised 101 object categories compared to mnist it has much larger image complexity as you can see here by a few examples and it had met many more classes 101 classes that's why it was called caltech101 and these objects were obtained or these images were obtained via google image search and then manually created a little bit however the data set is still rel by today's standard this is very simple all the objects have a canonical size and location they're centered in the image and they are in a relatively canonical pose and often with very simple backgrounds as you can see here and what you see here on the right is an image that shows the average of all the images of a certain category and you can see that for certain classes like faces or stop signs flowers or motorcycles it's actually very easy to recognize just from the average it doesn't contain a lot of variability so it's a relatively easy data set but this is how how it all started and this is what people have worked with when they started looking into machine learning based techniques for tackling this problem and then in 2009 came imagenet which has really led to the rise of deep learning if you will in combination with one of the first models called alexnet that demonstrated that on this particular data set there was a challenge associated with this data set it was carried out every year at uh one of the major computer vision conferences and in one year um in 2012 alex net demonstrated for the first time that deep learning actually works and can outperform classical techniques that have been used so far for image classification and outperformed these techniques not only by as a little margin but by a significant margin and this is where deep learning started however finally imagenet despite now more than 10 years after the creation of imagenet is still one of the primary data sets for pre-training generic representations it's still used for obtaining pre-trained models using this proxy classification task and the reason for this is that it's a very well it's a very large data set first of all and it has objects in complicated poses and difficult categories and it has 22 000 categories and for this challenge here there is typically 1 000 categories considered so it's a it's a very difficult classification problem so a a deep neural network that has to classify the image correct and there is one thousand possibilities has to really do a good job in understanding all the aspects of the image and because there is millions of these images that the model can learn from these representations become very powerful so it's all about complexity and size and less about the complexity of the output that we want to predict that's very simple just a discrete number and so here's a little slide that illustrates the pre-training aspect or sometimes it's called transfer learning deep representations generalize really well despite the large number of parameters and what do i mean with generalization this has been illustrated for example here in this very 
early paper if we pre-train a let's say convolutional neural network on large amounts of data on a generic task for example imagenet classification and then fine-tune or retrain only the last layers or even fine-tune all the layers on a little data of a new task we can obtain state-of-the-art performance on a variety of different tasks and this is something that has shocked the community and is something that still works very robustly today here in this early paper day they basically just trained a svm readout model on top of these representations but today this is replaced with also neural network layers but you can think of this as having a very deep backbone with maybe 20 layers and then have a few layers on on top a few readout layers that make use of this very powerful representations that have been obtained using imagenet training or maybe just by downloading a model from the internet there's a lot of models now available on the internet you can just download and then plug your little network on top and do something very different but do it well because you have these powerful representations okay so let's look at some challenges now what why is image classification non-trivial first of all there is a large number of image categories it is estimated that there is between ten thousand and thirty thousand image categories or may main image categories so it's a really difficult task to get the right category for example there's many different dog and bird species that might look very similar to each other but still there are different species another challenge is intra-class variation here are some examples of chairs all of these chairs are chairs by their function they have a suitable area but they look very different from each other right the image pixels are very different in each of these images and also their 3d geometry and their appearance their color their shape is just very different but all of these should be categorized by our classifier as chairs and this makes it difficult there's a huge intra class variation and we have to capture that correctly and still be able to distinguish chairs from some other objects like you know sofas or tables another challenge is a viewpoint variation if we look at this object from different viewpoints pixels also change dramatically yet we have to recognize the object always as a lego bulldozer in this case illumination changes are another challenge depending on how i choose the illumination and because the of the fact how light interacts with materials and reflects and refracts at material surfaces the image looks might look completely different right the image pixels even for the viewpoint hasn't changed here the image pixels the intensities have completely changed there are shadows here the objects are much brighter and there is some very dark objects here because the light source is in the middle and this is a more uniformly lit scene it's also challenging of course to categorize or detect objects when there's a lot of clutter as in these examples and some object categories might even deform here in the case of cats now all of these cats for a computer they look very different the images look very different despite the fact that these are all cats and of course if not the entire object is visible if there is occlusions in the image it also becomes harder to detect these objects or these image categories let's get started with some simple models a very naive model for image classification that you can program in a few lines of code is a 
nearest neighbor classifier for nearest neighbor classifier we need to choose first of all an image distance let's call that distance d that's a distance between two images and in a simple in the simplest case we could just say well we loop over all the pixels in these images and measure the the intensity or rgb difference at the respective pixel location between image 1 and image 2 in this case we use the l1 norm so this is a very simple l1 distance between two images and then if i want to do nearest neighbor search really don't have a training phase i i just i do everything at test time basically i just have a big data set a label data set and i call that d here calligraphic d so given an image i we can find the nearest neighbor so this is the image that we want to find the category for we can find the nearest neighbor by the argument over the entire data set so we have i prime here and we measure the distance from i to i prime to each of these i prime from the data set we check all of the images in the data set and and return the image that is closest to the query image and then simply we look at that image that has been retrieved from the dataset and returned the class of that image remember this is a labeled dataset so we know the label this is the simplest image classifier that we could build however this classifier doesn't work so well first of all it's very slow because we need to do nearest neighbor search at inference time and even when using sophisticated approximate nearest neighbor algorithms in these high dimensional image spaces this can be quite slow and second it's typically also quite bad because pixel distances distances in pixel space are not very informative so it's a bad classifier and here are some examples on cipher i believe where you can see for some query images what are the top ranked images from the data set and you can see that in some cases the classifier was correct here in green but in many cases it was actually incorrect so for example here the frog was incorrectly retrieved by an another animal and you can see what the classifier here focuses on is not the animal but it's the shape and the white background or here also this shape of the boat has been a bird has been retrieved for this boat because the bird is in a similar pose as the boat but of course semantically there's nothing that these two objects have in common so looking at pixel distances is not a good idea and here's a little thought experiment that tries to highlight why this is so uh or is one of the reasons why this is a bad idea if we look at these two checker boards if you look at the left checkerboard and the right checkerboard they have been horizontally displaced by exactly one field and what would be the distance between these two if you take the absolute difference between these pixels of these two images well it would be the maximal possible distance that you could get because every white field falls onto a black field and every black field falls into white field so by translating this image just a little bit we're getting the maximal distance but both are checkerboards they are the same category okay what can we do the next idea that has been followed in the community and that has been successful for some years by the same offers also of the imagenet data set and of course many follow-ups is so called back of words model and here the idea is to obtain spatial invariants by comparing histograms of local features which removes the dependency on the location of these features in this 
case translating the checkerboard doesn't change the classification result so it's illustrated here by this this brown bag of features of this person here the idea of back of words models actually originates from natural language processing in natural language processing it's common to use back of words which are basically an orderless document document representation that counts the frequencies of words that appear in a document to describe the document here is such a a word histogram visualized by the size of the words that is according to the frequency that they appear in the document how does this work for images this is where the idea comes from has been transferred to images well in the first step we extract features we can use the sift descriptor that we have learned about in previous lectures in order to obtain a number of features for an image at salient locations in the image and then we learn a visual vocabulary so we do this we extract features for the entire training data set and then we have a big set of features for all of the images we throw them together and then we do k means in the simplest case k means clustering in that of these shift feature vectors in that feature vector space and this gives us our visual vocabulary of k categories so we have k canonical clusters and then we quantize all the features into the visual words using that vocabulary by just finding the nearest neighbor so we take each feature and assign it to the cluster from 1 to k that is closest to that feature in that sift feature space and then now we can represent images by histograms of these visual word frequencies they're called visual words in analogy to word frequencies in nlp so for example here is an image of a person that has a high frequency of this person features and here's an image of a instrument presumably a violin that has high frequency of these violin features and now given these histograms so these are three different images now histograms for three different images of course there's more bins in practice we can now train a classifier for example k nearest neighbor on the histograms or a support vector machine or a neural network for image classification and this is already much better than this naive approach i showed in the beginning but there's still a problem there was still like the performance was saturating at some point and the reason is that there's still too many hand design components and too little learning these sift features are hand engineered they're actually designed for feature matching and not necessarily for object recognition and also the way we do the clustering the number of clusters and the classifier that we put on top matters a lot now the solution to this as we now know is to learn instead the entire model from image decisions end to end from data which is illustrated here so on the top we have this very simple classifier on features or gradients here we have the back of word model that extracts gradients and histograms these are the sift features and then does k-means clustering and and a classifier on top and here we have the deep learning model that has all these blocks now in green are all trainable blocks and we can end to end train this is the important thing on big data sets we can end to end at just all the parameters and this gives us much better performance it is also illustrated here this shows the imagenet classification error for the different image net challenges from right to left this is 2010 11 12 and and we really saw acceleration of 
these more shallow models in 10 and 11 while we had a big jump here now in 2012 introducing alexnet a deep learning model for image classification and since then performance has increased further and further surpassing on this particular task human level performance which is estimated to be around five and the current state of the art on this problem is one point three percent so it's even much lower than uh this model here from 2015. so tremendous progress progress that wasn't was unthinkable in 2010 where for several years the performance didn't didn't decrease okay now good uh we have we have learned that deep learning is is a key element for image classification because it's very important to do recognition and one of the main models of course for solving image classification tasks are convolutional neural networks that we have already learned about in the deep learning lecture but here in the context of this computer vision class i want to just do a little recap of the convolutional neural network lecture from the deep learning course so what are convolutional neural networks here is an example of a so-called vgg network typically these networks these cnns for image classification have three types of layers they have convolutional layers down sampling layers and fully connected layers and they take an input image here this is the image resolution 224 times 224 pixels times rgb and then they decrease the size of the the spatial dimensions and at the same time they increase the number of feature channels in this network until in the end they arrive at this 1 by 1 000 dimensional vector which is then passed through the soft max and this is then the probability distribution over classes the output of this image classification network so let's look at these components in more detail the most important component is the convolution layer which is used at every resolution and every block of this convolutional network compared to a fully connected layer that is illustrated here on the left convolutional layers illustrated here on the right share weights so instead of this relationship here here we have a kernel and the weights are associated with the kernel so we don't have weights for each of these connections here to all of the input pixels but we have only weights to this maybe three by three dimensional kernel times the number of feature channels of the previous layer this is a large reduction in the number of parameters through weight sharing and makes these models easier to learn but at the same time it introduces one of the most important concepts of convolutional neural networks which is translation equivariance at least at the layer level if i take the input to one of these layers and i translate it a little bit the output of the convolution operation gets translated in the same way and this is a useful property that we want to use for images because no matter if the cat that i want to classify is in the center of the image or if it's slightly displaced to the right or to the top or to the bottom it's still it's always an image of a cat and so translation equivariance or ultimately translation invariance in the case of classification because of these pooling layers that we will talk about we obtain some sort of invariance is a desirable property okay so here's an illustration of this convolution happening so we slide this kernel over the input feature map in the first layer that would be the image and in later layers these are just feature maps and then we multiply the weights with 
this the weights of this kernel with these respective locations in the input feature map and obtain after a non-linearity obtain the value at the output yeah so this is this illustration here is only for one feature channel but of course uh it is meant to be uh for all the feature channels so we have really a c times k times k dimensional kernel here and depending on the number of output channels we have these weights this weights multiplied right so we have if you have two output channels then we have two k by k by c kernels one for each of the output channels the next operation is the down sampling operation the downsampling operation reduces spatial resolution of the input so this is here the max pooling operation for example that goes from this spatial resolution to this spatial resolution successive successively reducing the spatial resolution until we reach just a one by one one pixel by one pixel image if you will which is then the classification result for the entire image and where information from the entire image is stored in that sense we reduce the spatial resolution but increased receptive fields every pixel here in the input contributes to the final output there's no pixel that potentially does not matter that is ignored a priori through these pooling operations typically these pulling operations are applied with a stride of two so we jump over two and we have a kernel size of two by two for here in this case the illustration is with three by three um and so therefore we get a space a reduction in spatial dimension of two because of this dry tube pooling doesn't have any parameters and typical pooling operations are for example taking the maximum of these values and putting it here or the minimum or the mean pooling is applied to each of the channels separately so it keeps the number of channel dimensions if we have c input channels in layer i minus 1 we have c output channels in layer i and the final part here are the fully connected layers once we have pools finally all the um all the spatial dimensions to one by one pixels here then we apply simply fully connected layers in this vgg architecture and these are really the most memory intensive part also of this vgg architecture because we have 40 the 4096 channels and so we have 4096 times 4096 which is the next layer weights or 16 million weights okay and uh here here's an example of uh like this this last well this last stage here could be a could be a pooling stage but in in vgg it's actually a reshaping stage so from here to here we go with a reshaping where we basically um take the 7 by 7 times channel channels dimensional input feature map and simply reshape it into a one by one times channel dimensional feature map where now these channels are much more because they have to capture also the spatial dimensions they're just basically flattening the vector it's called reshaping so there's different ways of doing it and this is just one particular choice that has been made for the vgd architecture and it's done differently for different architectures what we also need to talk about is what the output layer and the loss functions are for image classification so we've talked about now the input and the hidden layers convolution pooling reshaping but what are the output layer what is the output layer and what is the loss function in image classification we use a softmax output layer and a cross-entropy loss and we'll see why that is a natural choice and let's in order to see that let's start with the basics let's start with 
defining a categorical distribution remember image classification is about returning an image category and so uh what we effectively want is we want to um compute a probability over uh in a probabilistic sense we want to compute a probability over the over the outcomes of the model so if we have eight classes we have a discrete random variable so we have a discrete distribution a categorical distribution over these eight possible values of that random variable and this is called a categorical distribution so the simplest definition of a categorical distribution is this one here where i say well p the probability of of the random variable y taking value c or class c is equal to mu c index c which is the probability for class c this is the parameter of that distribution we have one such parameter for each class and they have to sum to 1 and be positive here's an example on the right this is a discrete distribution a categorical distribution over a random variable that has 4 states one two three four and you can see that together they sum to one they're all positive and the probability for example for class four is higher than the probability for class two in this particular example and so these are the muse mu 1 would be 0.2 mu 2 would be 0.1 etc now an alternative notation to this which is useful for deep learning is the so called one hot vector which is a vector y like here that has a one in one location and zero is everywhere else and it has a 1 in the location of the class that we want to describe so for example for class 2 this would be 0 1 0 0. and with that we can describe the probability of y now this is a vector in this case a one hot vector as the product over all the classes mu c to the power of y c right because we know that y c is only one for one c um so all for all of the other cases will be zero so the pr it will be a product of a lot of ones and one element that is not one which is the yc element right so basically in other words with this yc we're picking out the parameter mu c for a particular class which is namely the class where we have inserted the one in the vector it's exactly the same as this just a different representation and it should of course be familiar from the deep learning class so here are some examples we have four classes and we see the non-bold y representation and the one hot encoding here on the right so the one hot vector has binary elements and the index c where y c equals one for example here c is would be two determines the correct class the class will be two here the third element is one so the class is free and here the fourth element is one so the class is four the elephant class now given that we know what a categorical distribution is we can now define or derive the loss function the cross entropy loss function by maximum the log likelihood of such a categorical conditional model so let the model distribution the conditional model distribution of y given x and the parameters w x is the image and y is the label that we want to predict for that image and w are the parameters of the neural network so let this conditional distribution be equal to the product over all the classes of the probability predicted by the neural network now so f w is the neural network for class c of the input x to the power of y c so you can recognize the categorical distribution here now the only difference is that we have replaced this mu c with actually this this mapping this function this neural network that produces the parameters of that categorical distribution 
given the particular input x and then the index we take the c if element of this vector f is actually a vector if we assume such a categorical distribution where the categorical distribution is parametrized through a neural network f w and we maximize the log likelihood of this model and we will obtain the cross entropy loss and this is illustrated here so what we want is the maximum likelihood estimate of the parameters which is arc max of the parameters over the parameters of the sum of all the data points so now we have i is the index of the data set the data elements log p model of y i given x i and w this is the log likelihood now we plug in the model distribution so we get this term inside here but in addition we have the i now the index both for the input and also for the label this is a supervised learning problem so we know both the input and the output the target y i um which can be rewritten as such if we consider not this as a maximization problem but as a minimization problem and this term here inside is called the cross entropy loss the sum over all the classes minus y i times the logarithm of the probability predicted by neural network this is the cross entropy loss the famous cross entropy losses used everywhere when training deep neural networks for classification what you have assumed is that the neural network f predicts probabilities but how can we actually ensure that that is true that f w predicts uh correctly these parameters of the categorical distribution well we must guarantee that the outputs of our neural network are between zero and one and that if i well this f is again a vector indexed here by c that if i sum over this vector i get one that the distribution sums to 1. and this is guaranteed by the softmax function right this is the illustration here this is a equation for the softmax function softmax of x is well for each of the elements in that vector x x of x 1 over the sum of all of the x x k of course this is positive because the exponential function transforms from the real line to the positive domain and we divide by the sum which means that we normalize so the sum of these elements will be one this is the generic definition of the softmax function now instead of x we are going to of course use the last elements or the last features in our neural network uh the output of our neural network and we call that the score vector so you have a neural network made of convolutions and non-linearities and and max pooling and at the very end if we have 1000 categories we get a 1 000 dimensional vector and that we call the score vector this is the output of the neural network and that vector we pass through that softmax this is the vector s we pass through that soft max and this is then what we call f w of x this is the output of the of the neural network the last part the output layer is a softmax function if we take the logarithm of this we obtain sc minus logarithm of the sum of exponential of sk now what is the intuition here assume c is the correct class our goal is to maximize the log softmax as such the first term here this one encourages the score sc for the correct class to increase right we want to maximize the log softmax remember that we want to minimize the negative log of the softmax so we want to maximize the log of the softmax so sc for the correct class c should increase the second term encourages that all scores in s decrease because of the negative sign and furthermore the second term here can be approximated by the maximum over k of sk as x of s 
k is insignificant for all s k that are quite a bit smaller than the maximum so we can approximate it like this and by making this approximation we see that this loss always strongly penalizes the most active incorrect prediction let's assume c would be the correct class and there would be a k not equal to c that is much larger that has a score much larger than sc because we have this approximation here this would be heavily penalized by this loss this would be pushed down the score if the correct class already has the largest score then both terms roughly cancel out right and the loss or this particular data example this data point in our stochastic radiant descent procedure will contribute little to the overall training cost because it's already balanced so we see that the softmax responds to differences between the inputs and it's invariant to adding the same scalar to all its inputs that's something that's very easy to show and we can actually exploit this property but in order to derive a numerically more stable variant if we implement this algorithm this neural network on a computer with floating point precision we always have to worry about numerical precision and we can we can make this numerically more stable by just subtracting the maximum of all the elements which doesn't change the output because it's invariant to addition and so this allows for accurate computation of the softmax even when the axes are actually quite large so let's put it all together here's the cross entropy loss for a single training sample from the training set this is the cross entropy loss now suppose a concrete example with four classes four categories and four training samples these are the four training samples and we have four categories dog cat mouse and elephant and these are four training samples let's assume we have a mini batch of size four and these are the the ground two truth labels correct labels here right a different class for each of these categories and let's assume that our neural network predicts the following score vectors here where on the diagonal i have colored elements in red because those are the elements that correspond to the correct label so in the first case for example the dog has been correctly classified in the second case the cat has been correctly classified but the dog has received similar probability if you will here in the third example all of the classes are equally likely we produce a uniform distribution and in the final example actually the elephant which is the correct class has received the lowest score so this is a bad result from our network now we take these score vectors and transform them through the softmax into probability vectors and we can see from these probability vectors as expected that in the first case the probability of the dog is high then here we have same probability for cat and dog because these two are the same and here we have very low probability actually for the elephant class because it has a very low score and now if we take the cross entropy loss which effectively grabs these numbers here in red and pushes them through a logarithm and uh computes the negative of this logarithm we have a very large number here for very small values because the leg then the logarithm becomes very large negative and so we take the negative of that so it's a very large loss because this is very wrong but here the dog is already correctly classified so our loss doesn't care so much about it it's already good so we observe that sample 4 contributes most 
strongly to the loss function and this is a nice illustration from the stanford people where you can play around with a convolutional network in your browser and you see how images get propagated through the different layers of a convolutional network and the corresponding class is predicted as the output finally in the first unit i want to show you some network architectures that have been successful very briefly in 1998 this is one of the first successful convolutional neural network called linnet5 has just two convolutional layers here and two pooling layers and then two fully connected layers it's a very small network but it was only applied to small problems like amnesty and achieved state-of-the-art accuracy on mnist prior to imagenet but it wasn't so such a big bus it was a big bus in the machine learning community it wasn't was not a big bus in in computer vision because in computer vision people were more interested in real problems like real images and not these small mnist images and so it required alexnet to demonstrate to the computer vision community that actually these models can work well also for more interesting problems and the reason why this worked in 2012 was a combination of clever engineering the availability of gp gpu computing and also the availability you know of these large data sets which allowed for training these parameters of these very deep high capacity models and this really triggered the deep learning revolution and showed that cnns work well in practice and from there it really went very quickly i just show some lighthouses here some cornerstones vgg is one vgg demonstrated that actually it's much better if you just use three by three convolutions everywhere because you can get the same expressiveness with much fewer parameters for example if you have three layers of three by three convolutions you get the same receptive field as one seven by seven layer but with less parameters but of course you have to train these models correctly so deeper models are harder to train and so training training becomes more challenging then in 2015 also inception or google net has been proposed which is a 22-layer deeper network with these inception models you can see that there's a lot of components that are similar that have parallel streams of convolutions of different filter sizes and pooling operations and also multiple intermediate classification hats to inject gradients also at earlier stages into the network and to improve the gradient flow to improve optimization of this network despite the fact that only this last softmax here this last classification result at the end is used at inference time but this just for improving this has been added just for improving training and then in 2016 of course resnet which showed that well it's actually beneficial in order to to build very deep neural networks it's beneficial to use residual connections where you loop the input directly to the output and just try to predict the residual which is much easier because now it's very easy to represent the identity transform which otherwise is harder and it turns out that these models can be trained much more effectively and allow for much deeper models up to 152 layers for example in this case here and it's a very simple and regular network structure as you can see here so it's very easy to implement it uses 3x3 convolutions like vgg uses striated convolutions for down sampling and these are architect these resin architectures are really dominating today in many areas of deep 
learning here's a little plot that compares accuracy versus complexity you can see the top one accuracy this is one of the metrics how image net classifiers are evaluated top one means your prediction out of these 1 000 possibilities must be the correct one in order to get a point and top five accuracy means that the target label the correct label must be in your top five predictions it's a little bit easier to achieve high top five accuracy but because models are are getting better now we mostly focus on top one accuracy you can see here from this plot on the x-axis is the computational complexity on the y-axis the accuracy of the models and the size of the circles indicates the number of parameters of the model you can see that vggt really has a high computation complexity because of the fully connected layers also high number of parameters and where you want to be is actually here on the top left so the inception and the resnet architectures have a better trade-off they have a even higher accuracy and at the same time fewer parameters and better computational complexity |
Computer_Vision_Andreas_Geiger | Computer_Vision_Lecture_83_ShapefromX_ShapefromX.txt | this is a short unit in which we briefly review the shape extraction techniques that we have learned about so far and also highlight some of the ones that we didn't have time to look into in-depth before we then move on to the fusion unit number four we have seen how to do binocular stereo matching by utilizing the ap polar geometry we can find correspondences of a point along a scan line that are similar in appearance potentially utilizing smoothness constraints to overcome ambiguities we can also use more than two input images and this is called multiview stereo if we have more images this helps also to overcome ambiguities and this is one of the approaches that we have seen this is this unrolled raynette approach that also uses multiple um input images computes features on those computes a cost volume and in the end returns depth maps we have seen in this lecture how we can obtain geometry from even a single image using shading cues and we've seen how we can remove some of these strong smoothness assumptions that we have to make by using more observations by considering multiple images captured from the same camera viewpoint which is called photometric stereo another cue that can be used but is less frequently used because it's a very specialized application area in order to estimate geometry from even a single image is texture if we have a scene where we know that the texture is local similar then we can exploit this regularity also to recover geometry from such a textured scene but of course in general scenes are not regularly textured everywhere and so it's difficult to apply these algorithms outside very this very narrow domain another more broadly used and popular technique is a so-called structured light and there is many different variants of structured light estimation algorithms structured light estimation algorithms in their very simple form use a pattern projector that projects that illuminates a scene using a stripe or a dot pattern and if that pattern is calibrated if it is known and the projector is known which acts as a virtual image plane if you will so these are both perspective projections but once one device is sending out rays and the other is receiving rays the camera then a single receiver single camera a monocular camera is sufficient if you have such a projector to recover 3d information because we can do triangulation here as well if we know which points correspond to each other and if we observe a particular disparity here in the camera then we know similar to the stereo case we know the depth now the advantage of an active illumination setup is that of course can also overcome surface areas that are typically hard to estimate in a classical stereo setting for example textureless regions a white wall in an office where it's really hard to match features from the rgb image but if i project for example in this infrared space here i project a dot pattern and i can recognize these dots in the camera then out of a sudden i have a lot of texture that i can use for matching there is a whole range of devices that have been developed but really the first consumer and cheap structured light device that has revolutionized research in this area is the kinect version one that appeared in 2010 by microsoft looked like this was developed for the gaming industry but wasn't the big success in gaming um however was a big success in in research in robotics research and computer vision revision 
research and there have been several follow-ups some of them using also time of flight and also setups that have been produced by other companies this year at the bottom right is a so-called active stereo setup where also a dot pattern is projected but it's combined not with a monocular but with a stereo camera that sees both at day and at night because these are indoors and outdoors because these these structured light approaches have the problem that the light that is emitted is very weak compared to the sunlight and so if you go outside and there's sun then you can't really see your dots anymore because the projector is too weak so here's an example of what such a projective pattern looks like this is an example from our sensor and and an example that we have used in [Music] for training stereo networks for this particular active structured light setup and this is a crop of this region here of the head of the gnome and you can see the individual dots that are projected by this laser that gets diffracted at this diffractive optical element into into into these different individual rays into this dot pattern and then we perceive this dot pattern and because it's a random dot pattern but we know the dot pattern it has a very unique structure so we know which patch this patch here corresponds to in the reference pattern and so we can measure disparity and we can obtain depth from this now another technique for estimating geometry is called monocular depth estimation and in contrast to [Music] shape from shading while it's also using a single image as input it doesn't make any assumptions about the materials in the scene in fact it tries to directly solve the problem by using a big deep neural network that from the single rgb image directly produces the depth map as shown here while early approaches to this problem didn't use deep learning and didn't produce very good results this was the first was kind of a breakthrough technique that for the first time showed reasonable results for this problem using deep neural networks but as you can see the quality of the result is still not great there's still a lot of artifacts like this halo that we can see here but this was from 2014 and a lot has happened since then so here's a state-of-the-art result from 2019 called meet us and you can see this is a technique that has a has a much better network architecture has been has been trained on much larger compute and on much much larger data sets you can see that this is a very versatile algorithm that works on many different scenes reasonably well and what i also want to mention is a line of works that tries to go beyond estimating just what's visible in in the input image but instead tries to estimate the complete shape of an object like this bench here and this line of research has really been popularized also by the advent of deep learning it has actually been possible only through the innovations that happened in deep learning such that was possible from a single image these are all single image reconstruction results and we're going to see more of those in the next lecture that it was possible from a single image to decode this into a shape like this or into a point set representation like this or into a mesh representation like this you |
Computer_Vision_Andreas_Geiger | Computer_Vision_Lecture_52_Probabilistic_Graphical_Models_Markov_Random_Fields.txt | let's start with markov random fields or in short mrfs but before we do that let's briefly recap probability theory in probability theory we're talking about random variables which are variables over which we define distributions so there's two types of random variables discrete random variables that we are considered for most parts of this course and in particular this lecture where a variable can take any of c values think of the c as if this is a classification problem and the probability that x takes a value c is denoted as p of x equals c and note that we are using the lowercase notation of p here also for discrete distributions while often it's it's a capital letter p then if x is a continuous random variable we can write the probability that x takes a value in a set so if this is a one-dimensional random variable and this is a subset of this one-dimensional space r the space of real numbers and the probability that x takes a value in calligraphic x is p of x element of calligraphic x and a distribution over x is simply written as a shorthand notation as p of x where we again don't distinguish between discrete and continuous we're just always using this small letter p for simplicity now there's some important properties to know about probability distributions that we're going to extensively use in this lecture and that you're probably all familiar with but just to recap the joint distribution of two random variables is denoted as p of x comma y or equivalently p of y comma x which is a shorthand notation for p of x taking value c and y taking value c prime let's say this is the joint distribution if x and y are one-dimensional real numbers then this is a two-dimensional distribution like a 2d gaussian for example then we have two important rules the two basic rules of probability one is called the sum rule and the other is called the product rule and from those rule everything else can be derived the sum rule is basically defining the marginalization of one variable where we have a joint distribution p of x and y and transform this into a so-called marginal distribution marginal because one of the variables has been removed in this case y and we marginalize by summing that's why it's called sum rule over the variable that we want to remove from the distribution so the marginal distribution of x is equal to the sum over all possible y's all possible states that y can take off the joint distribution of p of x and y or if x and y are continuous variables then we take the integral operator here so this is called the sum rule or marginalization the other important rule is the so-called product rule where the joint distribution p of x and y equals the conditional distribution p of y given x this is the distribution over y given that we have observed x times the probability of x or in other words p of y given x is equal p of x and y over p of x and as a direct consequence of the product rule we have also the famous space rule which is simply p of y given x equals p of x given y times p of y over p of x okay now let's after this quick recap dive into mark of random fields for defining markov random fields we first have to define what a potential is a potential phi or potential phi of x is a non-negative function of its argument x and in this case x is a single variable and the joint potential phi of x one to x t is a non-negative function of the set of variables in general 
all of the variables could also be multi-dimensional but in the context of this lecture here we are considering each individual acts as a one-dimensional random variable so this is a potential or a joint potential it's a non-negative function of the variable but it doesn't need to be normalized it's just non-negative now a markov random field in short mrf or also called a markov network is defined as follows for a set of variables calligraphic x equals x1 to xd a markov random field is defined as a product of potentials over the maximal clicks or the clicks we'll see what maximal clicks are the maximal clicks are covering all the clicks so it's sufficient to consider all adjust the maximal clicks but we can also consider all the clicks so the markov random field is defined as a product of potentials over the clicks or just the maximal clicks xc of the undirected graph g and there is capital c such cliques and then the product of potentials is simply the product of little c one to c phi c of calligraphic xc where these clicks here are subsets of the set of variables so for example the first click could contain just x1 the second click could contain x1 and x2 the third click could contain x2 and x5 etc that depends on the graph as we'll see now what we can already see here is that we have this 1 over c expression here which is the normalizer that makes sure that this expression here on the right is normalized such that this is a proper probability distribution that p of x integrates to one we have to have this normalizer here it's called the partition function because we haven't assumed anything about the potentials exact except them being non-negative and this of course does not ensure that this expression here is normalized when summing over all states of calligraphic x of the set of all random variables so we have to add this normalizer in order to form a proper probability distribution there's special cases if the cliques here are of size two so if there is only cliques of size one or two but no cliques of size three no cliques that cover free random variables then we call this a pairwise markov random field and that's an example for this we have already seen in the context of this stereo markov random field that i've briefly shown in the beginning where we have unary potentials and pairwise potentials but no potentials that connect more than um two variables if all potentials are strictly positive then this is called a gibbs distribution now we talked about cliques and maximal clicks and undirected graphs so what is what is all that so here's an example this is an undirected graph with seven vertices and one two three four five six seven eight nine ten eleven edges and it's an undirected graph because each of the edges is undirected there's no arrow there's no notion of directionality it's just an edge this is why it's called an undirected model or an undirected graph now each of these vertices in the graph corresponds to a random variable so in this case we have seven vertices seven random variables and the edges correspond to dependencies between these random variables relationships of these random variables so x2 and x4 are directly related but x2 and x5 are not directly related only through x4 they are related but not directly related now i click and we see several examples here the green is a click and also the red ones are clicks a click is a subset of vertices that are fully connected so for example here we have a clique because x1 is connected to x3 so all of the nodes inside 
that clique are fully connected with edges but also x1 x2 x3 x4 are clicks or is a click because we have connections between every two variables or vertices inside and similarly x4 x5 and x6 are clicks are x5 x6 and x7 is a click but x4 x5 x6 and x7 is not a click because there is a missing connection from x4 to x7 now a maximal click and only the red ones here are maximal clicks is a click that cannot be extended by any other vertex if we look at this click x1 x2 x4 and x3 and if you want to extend this click by let's say vertex x5 or variable x5 then there would be a lot of connections missing for example x2 to x5 so it's not a clique anymore so this is a maximal click but let's say x1 and x3 is not a maximal click because we can add x2 and we have a bigger click that's fully connected but also that is not a maximal click because we can still add x4 and now we have a maximal click because we cannot add any other variable to further extend it x4 x5 and x6 is a maximal click because if we would add x7 we wouldn't have a clique or if we would add x2 or x3 we wouldn't have a click because there would be connections missing so maximal clicks are clicks that cannot be extended by any other vertex so this is the notion of an undirected graph where the vertices correspond to random variables you can see why this is called a graphical model and edges correspond to relationships and the concept of clicks and maximal clicks now let's look at some properties of these markov random fields and a very simple example with just three random variables or three vertices in the graph and this graph is not fully connected there is a connection from a to b missing there's only connection from a to c and from c to b um so as we've seen before a markov random field is defined as a product of potentials over the cliques of the undirected graph and because the maximal clicks comprise or the set of maximal clicks comprises all the clicks we can similarly also just consider the maximal clicks so in this case we have two clicks which are at the same time maximal clicks we have the click ac and the click click cb so this markov random field specifies this distribution the distribution over the random variables a b and c the joint distribution factorizes as 1 over c and the potential phi 1 of ac and the potential phi 2 of bc and c is an so-called partition function which normalizes this distribution to sum to 1 in the case of discrete random variables that we consider here and so in order to do so of course we have to take this expression and sum it over all possible combinations of a b and c so if these are binary random variables then we have a sum over eight terms because we have eight possible combinations of these binary a b and c variables and it's easy to see that if we plug this in here and then we sum the entire expression over a b and c um then this results in one because we have um the same term in the numerator than we have in the denominator so this is the partition function normalizing constant now back to the properties of markov random fields what happens for this particular markov random field if we marginalize c if we marginalize c what happens is that a and b become dependent on each other a and b which didn't have a direct connection before if we marginalize c then they become directly connected and we can prove that by showing that a and b are not independent anymore after marginalization by showing that the probability of a and b the joint distribution is not equal to the probability of a times 
the probability of b which is exactly the independence property so how can we show this well we can show this by contradiction so let's assume let's assume this holds true let's assume there's an equality here instead of this inequality so as you can see here i've used an equality here let's assume p of a and b is equal to p of a times p of b so what is p of a and b p of a and b is of course the marginal of p of a b and c when we marginalize c when we sum over c and then we know what p of a b and c is this is exactly this expression so we can plug that in so we end up with this term here on the other hand um p of a is the marginal of p of a b and c when we marginalize over b and c and p of b is the marginal of this expression with respect to a and c and so again here we can plug in the expression for the joint distribution right so we have this expression here which we now assume because we want to prove by contradiction being equal to this expression here on the right hand side now if we put them equal what happens well we have 1 over c on the left and then we have one over c square on the right so we can pull this in front here because it doesn't depend on the indices so we can pull in front so one over c remains on the right hand side and one over c vanishes on the left hand side now one over c is one over the sum of a b and c of the potential so we can just pull this to the other side so that it doesn't appear in the denominator so that's what we what we did here we took the c and put it on the other side this is this expression here and then the other expressions are the ones that we that are left the ones without normal normalization constant so now we if this would be true this expression must hold now consider a concrete counter example to show that this is not true in general consider the following example where we have a potential 1 of ac defined as the so-called iverson bracket of a equals c which means that the potential and this is the definition of this iverson bracket notation that this potential here takes one if it's argument is true so if a equals c then this is one and it's zero otherwise if you think this think of this as a two-dimensional probability table this would be a diagonal a diagonal matrix which on the diagonal would have once and on the off diagonals would have zeros and similarly let's assume that for the second potential we have this iverson or indicator function here of b equals c which means that this potential here takes the value one if both are equal and takes the value zero if they are unequal this is a concrete example a concrete definition of the potentials which of course has to also follow this equality here if the quality would be true because it must be true for all settings of these potentials now if we do this however we obtain a contradiction if we look at the so this is just the expression from the previous slide that i've just copied if i look at the um this normalization constant this partition function c i see that well um there's exactly two states of a b and c because we're summing over all of them here for which this expression here is one namely a equals b equals c equals zero and a equals b equals c equals one um okay what i forgot to say here is that this is assuming that we have binary random variables so the variables are either zero or one for this counter example i will add this to the slide so this expression is equal to 2 1 plus 1. 
now this expression here what what does this expression evaluate to well we're summing over c and so this expression is going to be equal to 1 if a equals b because then there is for sure one term where the c is also equal to these two and we obtain a 1 for this term but if a is not equal to b then there is no way that this expression can be equal to 1 because either one side or the other side will be 0. so in summary we have the indicator or iverson bracket of a equal b so only if a is equal to b then this expression will be 1 otherwise it will be 0. here on the right hand side we have the marginal over or the summation over b and c of this expression here and if we do that let's assume um uh a is zero um then only for one particular state here namely where b and c are also c 0 this will become 1. so there's only one particular state for a particular chosen a where this will become 1. similarly of course if a takes state 1. so this expression will be one because there is exactly one term where b and c agree with a and similarly this expression will be one because amongst these four terms assuming binary variables there is only one expression where a and c will be equal to b and now what we have in summary is that on the right hand side we have one one times one and on the left hand side we have 2 times [Music] the inversion bracket of a equals b so 2 times [Music] we have two if a equals b and we have zero if a doesn't equal b and so clearly this does not hold so therefore in general for arbitrary choices of potentials p of a and b is not equal to p of a times p of b because we have defined a concrete counter example for binary variables with this particular choice of potentials here where this is not true of course there could be a choice of potentials that this is true for instance when all the potentials are equal to zero everywhere then of course the left hand side and the right hand side would be identical but because it has to hold for all potentials for all choices of potentials in general is not true a second property of markov random fields is that conditioning on c makes a and b actually independent so if we condition remember before we have marginalized c now we're conditioning on c now a and b become independent there is no connection between a and b anymore and this conditional independent statement can be more compactly written with this expression with this terminology which simply can be read as a is independent of b conditioned on c if we condition on c a becomes independent of b and this is the independent operator and the proof of this can be done by showing that p of a and b given c is equal to p of a given c times p of b given c which is exactly corresponding to this independent statement and we'll do this in the exercise and the proof is of course very similar to this proof that we had on the other example the other property now we've seen for this very simple markov random field with three nodes that marginalizing one variable makes the adjacent variables dependent and conditioning on a variable makes the adjacent variables independent the letter statement can be generalized into the so-called global markov property in order to define the global markov property we have to first define separation let's consider this graph here on the right a subset of nodes or random variables let's call that subset s separates a subset a from a subset b in that graph if every path from a member of a to any member of b passes through s so let's make it more concrete let's say s is x4 
so x4 separates x1 x2 and x3 from x5 and x6 and x7 because any path from a the set a here to set b must pass through the set s or similarly set s being chosen as x 2 3 4 5 6 is separating x 1 from x7 where a the set a is now the singleton x1 and b is the singleton x7 this one variable x x7 so this is what separation means we are shielding variables from each other now the global markov property says the following for disjoint sets of variables a b and s where s separates a from b we have that a is conditionally independent of b given s that means if we are conditioning on x4 then x2 becomes conditionally independent of x5 or x1 becomes conditional independent of x5 or x3 becomes conditionally independent of x5 or x1 becomes conditionally independent of x7 or the joint of x1 x2 and x3 becomes conditionally independent of x5 x6 and xm so we have by defining the separation property defined conditional independence properties that we can deduce from any graph any graph any concrete graph that's specified induces these independence properties from this global markov property we can very simply derive a local markov property that is a direct consequence the local markov property says when conditioned on its neighbors the random variable x becomes independent of the remaining variables of the graph or mathematically the probability of x given all the other variables this means all the random variables except x so we're removing that single ton from the set calligraphic x from the set of all random variables is equal to the probability of x given just its neighbors so in this simple example here let's say x1 um con x1 conditioned on all the other variables x2 x3 x4567 is equal to x1 conditioned just on x2 and x3 because x2 and x3 shield x1 from the remaining variables through the separation property discussed and the set of neighboring nodes in this case x2 and x3 which are neighbors of x1 is called the markov blanket it's called the markov blanket because it's shielding x 1 from the other variables given that i know x2 and x3 x1 is determined we don't it doesn't depend anymore on all the other variables it's shielded by the blanket and this holds also for sets of variables not just for single variables here yeah so here's an example again so in this case we have x4 which becomes conditional independent of x1 and x7 if we are conditioning on x2 x3 x5 and x6 and similarly there's many other independent relationships that can be read of the graph defined in this way through separation and now why are we doing all of this we've talked about factorizations into maximal clicks and we've talked about graphs through the separation giving rise to conditional independence properties and really the key of this is captured in the so-called hammersley clifford theorem which says a probability distribution that has a strictly positive mass or density satisfies the markov properties with respect to an undirected graph g if and only if it is a gibbs random field that is its density can be factorized over the maximal cliques of the graph so what this theorem says is that both things that we have defined are equivalent we can define we can read off the conditional independence properties from the graph in the way we've just described and the distribution that satisfies all of these conditional random conditional independence properties is the same is identical to the density that is defined by the factorization over the maximal cliques of the graph or over all of the cliques of the graph but we can just consider 
the maximal cliques because they subsume all cliques so any clique that's part of some specific maximal clique can be subsumed by that maximal clique we can basically add its terms to the maximal clique because the maximal clique by definition also depends on the variables in the sub-clique that it contains so we can just define the potential of the maximal clique and integrate these constraints into that another way of looking at graphical models is the so-called filter view the filter view of graphical models is thinking about graphs as specifying a filter on probability distributions and only distributions that factorize based on these cliques may pass through the filter defined by that particular graph or alternatively only distributions that respect the markov properties defined as we've described in the previous slides may pass and from the fundamental theorem of markov random fields above the so-called Hammersley-Clifford theorem we know that both sets of distributions are actually identical |
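the two properties discussed in this lecture, that marginalizing c couples a and b while conditioning on c makes them independent, can be verified numerically in a few lines; the following is a minimal sketch (not part of the lecture) for a binary chain markov random field a - c - b with randomly chosen positive pairwise potentials, where psi_ac and psi_cb are illustrative names

```python
# minimal numerical sketch (not from the lecture) of the two MRF properties above,
# for a chain a - c - b with binary variables and pairwise potentials psi_ac, psi_cb
import numpy as np

rng = np.random.default_rng(0)
psi_ac = rng.uniform(0.1, 1.0, size=(2, 2))   # potential on clique {a, c}
psi_cb = rng.uniform(0.1, 1.0, size=(2, 2))   # potential on clique {c, b}

# unnormalized joint p(a, b, c) proportional to psi_ac[a, c] * psi_cb[c, b]
joint = np.einsum('ac,cb->abc', psi_ac, psi_cb)
joint /= joint.sum()                           # normalize by the partition function

# 1) marginalizing c couples a and b: p(a, b) != p(a) p(b) in general
p_ab = joint.sum(axis=2)
p_a, p_b = p_ab.sum(axis=1), p_ab.sum(axis=0)
print('marginal independence?', np.allclose(p_ab, np.outer(p_a, p_b)))   # -> False

# 2) conditioning on c decouples them: p(a, b | c) = p(a | c) p(b | c)
p_c = joint.sum(axis=(0, 1))
p_ab_given_c = joint / p_c                     # axes (a, b, c)
p_a_given_c = p_ab_given_c.sum(axis=1)         # p(a | c)
p_b_given_c = p_ab_given_c.sum(axis=0)         # p(b | c)
check = np.einsum('ac,bc->abc', p_a_given_c, p_b_given_c)
print('conditional independence?', np.allclose(p_ab_given_c, check))     # -> True
```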
Computer_Vision_Andreas_Geiger | Computer_Vision_Lecture_84_ShapefromX_Volumetric_Fusion.txt | we have seen many techniques to reconstruct geometry from image inputs but most of these reconstructions only considered a particular object or even just produced a single depth map and often when we want to do reconstruction we want to reconstruct entire scenes we want to reconstruct larger scenes scenes that are composed of multiple objects and then the question becomes relevant how to combine these individual reconstructions into one big consistent 3d reconstruction of the entire scene and one technique that is very popular in this area and is used a lot also due to its simplicity is called volumetric fusion and that's what we're going to look at now here's an overview of a traditional 3d reconstruction pipeline the input is a set of images we first estimate the camera poses and we have seen techniques for doing so like structure for motion bundle adjustment from these estimated camera poses we can then and the images we can then compute dense correspondences using binocular or multiview stereo techniques or monocular or active or shape from shading techniques or techniques that combine all of these aspects and one intermediate representation that might be the outcome of such a calculation might be a depth map for each of the reference images that we have and now depthman fusion is about taking all of these depth maps and combining them into one consistent 3d reconstructions a reconstruction as shown here on the left so let's look at how this works and in order to understand how it works we first have to understand the representation that it uses which is not an explicit representation but rather an implicit representation of the shape here we see a mesh based representation this is a shape in this case it's a circle that is discretized into a set of vertices and edges you can see the different vertices here enumerated and and the set of edges which indicates that there is an edge from one to two two to three three to four and if we would do this in 3d this would of course be a set of vertices and a set of faces but here for simplicity we illustrate everything in 2d so this is an explicit representation that stores the geometry in terms of vertices and faces but it's not an ideal representation while it has been used for fusion it's not an ideal representation for fusion and it's it's very hard to change topology with that representation what we're going to do instead is we're going to look at a implicit representation the simplest implicit representation is a simple parametric form for example we can represent a a circle as a level set of this parabola here we have a function f of x y which is x squared plus y squared minus r squared and of course if we set that equal to zero we arrive at an equation for a circle so we have implicitly defined the surface the shape as the level set of such a function now in the next lecture we're going to actually talk about such implicit models more in the context of representing these functions using neural networks but this is a rather recent line of work and so we're first going to focus on what has been done in the past in the last 20 years which is a discrete a discretization of this implicit space and in particular we're going to look at so-called sign distance functions that measure distances from [Music] the center of each of these voxels that we're looking at here or pixels in this 2d example to the actual surface the closest distance so we are 
discretizing the space as such using a resolution as it sweets us depending on the memory and compute that we have available to us and then we insert at each of these voxels or pixels we insert a value we set the value of that pixel or voxel to as the distance to the closest surface so in this case the value of this voxel here might be 2.5 because it's 2 2.5 units the center is 2.5 units away from the closest surface point or here we might have minus 3 because it's 3 units away and we have a minus 3 here because it's a sine distance so we indicate with a positive sign that this voxel is inside the object and with a negative sign that the voxel is outside the object for example here we have a voxel that's outside the object but it's closer to the object than this one here so we set the value as minus 0.6 which is roughly this distance here in some formulations you will see the signs flipped that is arbitrary that's up to definition we can of course also define the inside to be the negative distances and the outside to be the positive distances but it's important that we define a sine distance so we know what is inside and what is outside so this is the discrete sign distance function representation that stores a sine distance to the closest surface at each and every voxel in this representation and the surface is hence also represented similar to the continuous case as the level set of this discretized representation where we have to interpolate now in order to actually find the surface as we will see a crucial advantage of implicit models is that they allow for representing arbitrary topology they allow for representing topology changes for example in this case here if you look at this scene and you take this slice this red slice through that scene and that slice in x y coordinates is represented here you can see that there is some pixels in that slice where the function value the sdf value is smaller than some threshold or smaller than zero and there is some areas where it's larger than zero and uh this by by just changing the corresponding voxels you can change topology you can switch this object off you can remove this object by just turning all these values to be positive so it will merge into the background you can create a new object here by taking these positive values and turning them negative and an object will appear so the topology the number of connections between surfaces so to say is changing depending on the implicit representation is it can be changed in an easy manner it's much harder to change a mesh because you have to connect vertices differently if you change topology but here you just have to turn change the value of the sdf value of the voxels here's an example with different topologies so here we have a topology with holes here we have a simple topology and i want to show you a video this is a newer work that we did but it nicely illustrates how such implicit models allow for changing topologies you can see here now we have genus zero changing to genus one two three four adding more and more holes over the time of the optimization of this implicit representation and it's no problem at all for this representation to do this to change the topology to add holes to add object to remove objects etc okay so given this representation what do we do volumetric fusion typically has three steps at least if we assume depth maps as input the first step is to convert the depth map to this discrete sine distance field sine distance function representation that we've just seen the 
second step is to actually perform the volumetric fusion and the third step is to extract a mesh from the fused sine distance field from this implicit model so we convert an explicit explicit representation to an implicit one we do the fusion in the implicit space and then we convert it back to an explicit representation let's start with the first part depth to sdf conversion unfortunately the problem is not as easy as i've illustrated before because the true distance to the surface is often unknown what we often only know is the depth of the surface at uh for a particular array along a particular array so for example for this surface here we know that the surface intersects that ray here but we don't know what is the closest point on the surface and therefore what sdf fusion methods often do is they assume and depending on the complexity of geometry this is a better or worse assumption they assume that this distance to the surface can be approximated with the distance along the ray so here we have the distance along the ray this is not the distance to the surface right the distance to the surface here might be 0.5 but the distance is slightly larger because we're measuring distance to the intersection along the ray from the voxel center so for each voxel center we measure now distances along the ray and we consider the voxels that intersect a particular array so for this ray we consider all these voxels that intersect we take the voxel center and measure distances to the intersection along the ray because that's something that we have easy access to and it's a good approximation for example in the vicinity of the surface if there would be if you would consider for instance this voxel here and there would be another object here then of course the distance to this object would be much smaller than the distance to this object but because because we often only fuse we assume that the reconstruction is is pretty good already and we fuse in the vincinity of the surface we don't consider distance values that are far away but just do the actual computation close to the surface yeah so these are the two distances for these two voxels and we just enter them here as a signed distance so we have for example a negative sign to indicate that we have a voxel that's in front of the surface and a positive sign to indicate that the voxel is behind the surface and then this voxel here is further away from the intersection than this voxel but it's a little bit closer than this voxel etc so we can we can now for a particular viewpoint considering all the rays of that particular view fill this discrete sign distance field representation we have many arrays three of them are only illustrated here but of course we have one ray per pixel so it's very densely capturing the space and in between we can interpolate if there is no ray intersecting with a voxel in fact in an implementation we would implement it reversely we would basically take all the voxels and project them into the image and look where at which pixel they would fall and measure and take the depth value of that depth map at that pixel but just for illustration here we we walk along rays so we have this for the red camera we have this for the blue camera and we get these two discretized sdf representations one for the red camera one for the blue camera this was about depth to sdf conversion now let's see how we do the actual volumetric fusion and to illustrate this in a more simple example we look at the autographic case where the cameras are 
autographic look in the same direction so all the rays are parallel to each other but really this is just for simplicity of illustration here this algorithm works also for perspective cameras that are not oriented in the same way so what we do here effectively after this conversion into this discretized sdf representation is to calculate the average of the discrete sdf fields because this gives us an average of the implicit surface so by doing the average on this sdf values we will also average the implicit surface that we afterwards extract so let's look at an example so for example we consider like this right here and this ray and this ray you can already see that here we have both a blue and a red ray here we just have a red ray because the camera the blue camera doesn't see that that part of the scene and these are actually the blue and the red are actually the surfaces that have been observed by a red and blue camera so for example here if we fill in now i've just partially filled it in to not make it too complex but if we fill in the red sdf map then we have 0.4 here because the surface is slightly behind this boundary so this distance from the center to the voxel might be 0.4 and conversely might be 0.6 here along this ray and these values are negative and these are positive and they are red because they belong to the red camera similarly here we have blue values behind the surface and in front of the surface and for example this value here 0.1 because this intersection of that ray with that surface is just slightly in front of the the voxel center and now here we have both a blue and red ray so we have information for both and you can see that for instance here the red well both the red and the blue values are positive here we have negative red values and positive blue values because we are between these two surfaces and here we have negative values for both and as i already mentioned what volumetric fusion does is simply to average these values so we're going to replace these well these values here are are just occurring once because we just have two cameras here of course in practice we have more and there will be more overlap but here we don't have more observations so the average will be the same value but here we take the average for example of 1.6 and 3.4 which is 2.5 so we have instead of these values here we have minus 2.5 minus 1.5 minus 0.5 0.5 1.5 2.5 and we do the same averaging in the entire region of overlap actually we do the averaging in the entire space in the entire domain but it just doesn't change anything if there's just a single observation and what you can see now is that well the surface intersection the zero level set the surface is actually intersecting this ray here here no longer here or here but in the middle so we have effectively also averaged the implicit surface if we would now extract the implicit surface from the if we would extract the surface explicitly from this implicit representation yeah so this is what the surface might look like you can see there is this well this discontinuity here actually it's not a continuous surface it because we have a discrete representation but you can imagine this to be much finer and so it might look much closer to this but we have this discontinuity here because there is no averaging happening here because it's just seen by one camera and then this part is seen by both cameras and then this is the part that's seen only by the blue camera but the surface that we have in the end is a smooth surface that is or this 
is a more smooth surface that is an average between these blue and the red surface how do we do this mathematically well it's simply a weighted average so the fused distance is the sum over i of um the weight i at voxel x so this is a voxel times the distance the sign distance at that voxel x divided over the sum of all the weights at voxel x where i ranges over all the cameras all the images and this expression here at the bottom is simply the fused weight is the the sum of all the the sum of all the weights yeah this is actually the sum of the weights and this is the fused distance so we normalize by the sum of the weights and we distinguish the fused distance and the individual distance with the fewest distances calling them or using an uppercase symbol and the individual distances of individual cameras individual measurements as low case simple a nice property of the simple weighted averaging here is that this can be conveniently expressed with an incremental update rule which we'll do as an exercise and the advantage of this incremental update rule is now given a particular representation we can just add an image at a time using constant computational overhead so so we are not increasing computation a lot we can just add range images at a time and fuse them into an existing reconstruction you can see here in this expression that d i plus 1 just depends on expressions from the previous image or time step plus the current measurements so it's a very simple incremental update rule and it supports reconstruction with thousands and tens of thousands of depth maps that are fused into one reconstruction now what about these weights we said it's a weighted average why do we need weights well there's two reasons why we want to have weights here on the left we can see well the sign distance and also two different weight profiles and here on the right we can see the fused results the reason why we have these weights that are non-constant is that on one hand behind the surface let's say we have the surface here behind the surface we want the weights to decay and the reason for this is that all the values that are behind the surface are just speculation everything that's in front of the surface we know it must be empty space because we have measured the depth at the surface point so we know that everything in front is empty space the weight can be high we are assured that these distance values are correct but behind the surface we are not sure still we need to have some for some distance behind the surface we have to have a positive weight otherwise um we wouldn't keep a zero level set in our fused result so we need to propagate information we need to propagate that we have a a positive values behind the surface for a certain distance but then if we have very thin it's kind of a trade-off if we have very thin structures like a thin wall we don't want to propagate this too much because if then the camera comes and and looks at the object from the other side might see that actually the the voxels that we have indicated as being inside the object are actually not inside it's just a very thin structure so this is a trade-off that we have to play here the other reason why we want to have a weight profile is that we also want to have the opportunity to down weight less certain rays for example if an observation has been made from a very large distance and maybe the measurement noise increases with the distance as in the case of stereo it even increases quadratically or from a very slanted angle or at 
the boundary of an object where we are not expecting the measurement to be very precise if we have that information we can integrate it by just changing the weight the entire weight globally for that particular observation so here for instance w2 would be a more confident measurement than w1 which receives an overall smaller weight so it's good to have these weights in the formula but why is this averaging or this weighted averaging in general a good idea this is the formula from before well it is a good idea because it's the solution to a simple weighted least squares problem what we do here effectively is we're trying to find a surface d and this is just along a ray this also holds true if we consider a more general setup but this is just along a ray if we have multiple measurements of depth or distance along a ray d i and these are weighted by w i and we want to find the capital d that's closest to these d i's on average weighted by this w i which is what we want to do we want to weigh each of these measurements relative to the weight that we give it and we want to find a d that averages this out then the solution to this weighted least squares problem is exactly this equation here in other words d is the optimal distance in the least squares sense this holds at every location x i didn't include the dependency on x here for simplicity but of course this holds in general now we've seen volumetric fusion is nothing other than weighted averaging in this discretized signed distance volume how can we now from the result which is also a discretized signed distance volume extract a mesh and the answer to this is the so-called famous marching cubes algorithm that's used everywhere in computer vision and computer graphics it has been developed by Curless and Levoy in 1996 and the idea is to simply take the discretized representation and try to come up with a triangular mesh that best approximates the zero level set and the way this is done is by marching this is where the name comes from marching through all the grid cells which can actually be done independently for all the grid cells and inserting triangle faces whenever a sign change is detected and there's two steps the first step is to estimate the topology and the second step is to predict the vertex location so this is a 2d case here and here we have a 3d case in this 2d case what does topology mean topology means where the triangle face is inserted it could for instance be that all of these nodes here are red all of them would be in front of the surface or behind the surface then there would be no triangular face in this case two are behind and two are in front and so this triangle face just intersects both but it could be that it intersects here or it intersects here or intersects here and these are different what we call topologies and in 2d well there's four of these voxel centers here these dots here correspond this is actually a dual grid this corresponds to the voxel centers from the volumetric fusion grid before at which we have estimated the signed distance values and if we have four then we would have two to the power of four possible topologies in 2d but in 3d space we have eight because we have a cube and so we have 256 possible topologies this is the first step figuring out the topology just by looking at which of these vertices are actually larger and which are
smaller than zero and this directly maps one to one to a particular topology now we know where the triangle face should be located but we don't know exactly where the vertices are so where should we locate the vertices well if we have signed distance values then a reasonable thing to assume is that these vertices are at the zero crossing of a linear interpolation between these two so if we take a line and pass it through these two values minus 0.5 and plus 0.5 where that line intersects zero this is the best possible vertex for the triangle face because this is the zero level set right so if we set this to zero then we end up with this very simple equation here and if we have 0.5 and minus 0.5 then we have 0.5 over 1 which is 0.5 so it's intuitive that the intersection should be exactly in the middle in contrast here on the right we have 0.8 and minus 0.2 so this intersection is closer to here than to here and so on and so we get also in the 3d case first the topology and then from the actual values the vertex location and we do this for all cells in our grid and this gives us our mesh here's an example of the different topologies that we can observe in the 3d case in fact there's 256 different topologies but they can be grouped into 15 equivalence classes due to rotational symmetry and these are shown here let's now look at some applications of sdf fusion of volumetric fusion maybe the most popular one is called KinectFusion it has been published by Newcombe et al at ISMAR a conference for vision and augmented reality in 2011 and here is the overall system workflow it's a real time pose estimation and volumetric fusion method which uses the kinect sensor that we've seen before in particular the depth maps that are obtained from that sensor as input and it's doing real-time pose estimation using a version of the iterative closest point algorithm and then updates the reconstruction using the sdf fusion techniques that we have seen in the previous slides and so what you can see is that if you track now frame by frame you get a more and more complete reconstruction so here after having tracked a partial loop and reconstructed that we get this reconstruction and then if we have the full loop we get an even better reconstruction and here's a video of how this works you can see a person that's moving this kinect sensor in the scene and you can see how in real time and that's really impressive i think at the time in 2011 it reconstructs many of the surface areas in the scene at quite high fidelity and quite robustly of course in this case the person had to be static because the system assumed a static scene so they had to stand still an extension of this work has been presented by the same authors at cvpr 2015 that addressed exactly that problem that the scene is often not static that's called DynamicFusion where from the first frame a canonical representation is extracted and then with every live frame we can map between this canonical representation and the live frame so for instance if we take the canonical representation and map it to the live frame then these rays would actually bend and we can also map from the live frame back to the canonical frame and now this approach is able to reconstruct non-rigid scenes by warping the estimated geometry using a warp field that's also estimated on the fly into the live frame to update the representation after warping and to obtain a rigid and consistent representation and you can see
here the person is moving is talking is moving the head the camera is moving at the same time yet the reconstruction that is obtained is quite stable now this method works only for motions of a certain extent there have been a few follow-up works that extend this method to larger motions than what is shown here but this paper has created a lot of buzz in the community and also won an award you can see the reconstruction and the reference canonical representation and this is another scene here's one work that we did in our group one problem with volumetric fusion is that it assumes quite noise free input which is often not guaranteed depending on the type of sensor and so it's beneficial to also integrate deep learning into the estimation process and that's what OctNetFusion tries to do it's basically a 3d convolutional network that uses efficient data adaptive data structures in order to take as input features from the fusion process and outputs a denoised and completed reconstruction so you can see here an example of volumetric fusion when noise is applied to the input there's observation noise and this is a simple variational fusion technique that doesn't do any learning and here at the bottom left you can see what happens if you train such a learning based model on a large data set of 3d shapes which nowadays is often available and as you can see it can eliminate a lot of the noise while still keeping details intact in contrast in the fusion technique you can see that due to the strong smoothness constraints that are required a lot of the details are gone and another line of work is works that try to tackle the learning problem end to end what we have seen so far is that the learning based approaches that let's say take a set of images and learn an implicit representation in a discretized voxel space or also a point set representation had to have a separate meshing step for example Poisson surface reconstruction or running marching cubes in order to arrive at an explicit surface so the supervision happens at the implicit representation so we need to have a loss function defined on the implicit representation to back propagate gradients to the parameters of for example the OctNetFusion network and so in this work a variant of the marching cubes algorithm that we have just seen has been developed that is differentiable it's actually easy to see that the original version of the marching cubes algorithm is non-differentiable because of the topology changes but this is a version that is differentiable and that combines the topology with the vertex location in a probabilistic manner in order to be able to then back propagate gradients end to end through the entire system to the parameters of the network and it allows for training not with this auxiliary loss function on an implicit representation that has first to be obtained but directly using for example point clouds as supervision that's all for today thanks a lot for watching |
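the fusion rule described in this lecture, signed distances along the ray, truncation to a band around the surface, and a per-voxel weighted running average, can be sketched in a few lines of numpy; the following is an illustrative sketch under simplifying assumptions (pinhole intrinsics K, world-to-camera pose T_wc, a constant per-view weight, and the convention positive in front of the surface), not the lecturer's or KinectFusion's actual implementation

```python
# minimal TSDF fusion sketch (assumptions, not the lecturer's code): project voxel
# centers into the new depth map, compute truncated signed distances along the ray,
# and apply the incremental weighted-average update from the lecture
import numpy as np

def fuse_depth_map(D, W, voxel_centers, depth, K, T_wc, trunc=0.05, view_weight=1.0):
    """D, W: flat arrays of fused signed distances and weights per voxel.
    voxel_centers: (N, 3) world coordinates, depth: (H, W) depth map,
    K: 3x3 intrinsics, T_wc: 4x4 world-to-camera pose."""
    H, W_img = depth.shape
    # transform voxel centers into the camera frame and project them to pixels
    p_cam = (T_wc[:3, :3] @ voxel_centers.T + T_wc[:3, 3:4]).T
    z = p_cam[:, 2]
    uv = (K @ p_cam.T).T
    u = np.round(uv[:, 0] / uv[:, 2]).astype(int)
    v = np.round(uv[:, 1] / uv[:, 2]).astype(int)
    valid = (z > 0) & (u >= 0) & (u < W_img) & (v >= 0) & (v < H)

    # signed distance along the ray (here: positive in front of the surface,
    # negative behind; the opposite convention works just as well)
    d = depth[v[valid], u[valid]] - z[valid]
    keep = d > -trunc                 # drop voxels far behind the observed surface
    d = np.clip(d[keep], -trunc, trunc)

    idx = np.flatnonzero(valid)[keep]
    w = np.full_like(d, view_weight)
    # running weighted average:  D <- (W*D + w*d) / (W + w),   W <- W + w
    D[idx] = (W[idx] * D[idx] + w * d) / (W[idx] + w)
    W[idx] = W[idx] + w
    return D, W
```

calling this function once per depth map reproduces the constant-overhead incremental update mentioned in the lecture, since the new fused value only depends on the running D, W and the current measurement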
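the two marching cubes steps just described, topology from the corner signs and vertex placement at the linearly interpolated zero crossing t = d1 / (d1 - d2), can be illustrated with a toy 2d "marching squares" sketch; the function names, the naive pairing of crossings into segments, and the handling of only the unambiguous cases are illustrative assumptions, not the original lookup-table implementation

```python
# toy 2d sketch of the two marching cubes steps: sign pattern -> topology,
# then linear interpolation of the zero crossing along each sign-changing edge
import numpy as np

def edge_vertex(p1, p2, d1, d2):
    # solve d1 + t * (d2 - d1) = 0, e.g. d1=0.5, d2=-0.5 gives t=0.5 (midpoint)
    t = d1 / (d1 - d2)
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    return p1 + t * (p2 - p1)

def cell_segments(corners, values):
    """corners: 4 (x, y) points of one cell, values: their signed distances."""
    edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
    # step 1: topology = which corners are inside (2^4 sign patterns in 2d)
    inside = [v > 0 for v in values]
    crossings = [edge_vertex(corners[i], corners[j], values[i], values[j])
                 for i, j in edges if inside[i] != inside[j]]
    # step 2: connect crossings into line segments (faces in 3d); the ambiguous
    # two-against-two diagonal case is ignored in this toy version
    return [tuple(crossings[k:k + 2]) for k in range(0, len(crossings) - 1, 2)]

# lecture example: values 0.8 and -0.2 place the vertex at t = 0.8, i.e. closer
# to the negative corner
print(edge_vertex([0.0, 0.0], [1.0, 0.0], 0.8, -0.2))   # -> approximately [0.8, 0.0]
```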
Computer_Vision_Andreas_Geiger | Computer_Vision_Lecture_24_Image_Formation_Image_Sensing_Pipeline.txt | in this last very short unit we're gonna discuss the rest of the image sensing pipeline which is what happens with the light after it has passed through the lens and arrived at the image plane until it is stored in a digital format such that we can use it for computer vision applications for instance here is a simple schematic of the image sensing pipeline that at a very high level can be divided into three stages the first is the physical light transport stage in the camera lens in the body comprising the optics the aperture and the shutter then there is a photo measurement stage and conversion on the sensor chip that measures the photons adjusts their intensity and converts them into a digital format that's also sometimes called the raw image and then there is an in in most digital cameras there's an another digital image signal processing an image compressing stage where the images are denoised and sharpened the colors are converted a white balance is adjusted and they are compressed into a representation for instance using jpeg that is more memory efficient to store so once the light has passed through the lens and just before it hits the image sensor there's typically a shutter and what we see here is a focal plane shutter of a camera and most digital cameras today use a combination of a mechanical shutter and electronic shutter for various reasons now the shutter speed this is a this is a little curtain here that open and closes very quickly it's open only for typically can be open from one side and then closes from with another shutter with another curtain from from the other side and it is open only for a for a fraction of time for maybe 100 or 1000s of a second one millisecond and so the the time that you open the shutter controls how much light reaches the sensor this is called also the exposure time and it also determines if an image appears over or underexposed if your shutter is open too long then your image will be overexposed there will be completely bright spots completely white spots where you cannot recognize anything anymore if it's underexposed it will come it will appear too dark um if you are taking a picture in a dark scene then you have to open the shutter for a long time maybe a second or so and then you have to hold the camera very still because if you move the camera during the exposure time you will have blur in the image so the shutter time also determines the blur and the noise behavior if you have a very dark scene and you move the camera so you want to keep the shutter time short then you don't suffer from blur but you increase the noise because there's now less photons per pixel that are measured and the measure the noise of the measurement process is amplified how is the measurement process actually working there's two main principles two main principles yeah ccd and the cmos chips and in the past ccd chips were the primary chips that delivered high quality but today most of the sensors actually use cmos sensors the way they work the way they measure light both of them of course transform photons to electrons but the way they work is slightly different in ccds ccds move charge from pixel to pixel here and then convert this into voltage at the output node while cmos images convert charge to voltage inside each pixel and don't have to carry charge over multiple sites and this has some advantages and some disadvantages but today the advantages over weigh 
and then this little orange block that you see here is the gain this is basically how the charge is amplified before converting it to an electronic signal now larger chips so-called full-frame sensors for example that are the size of the classical film format 35 millimeters are more photosensitive because each pixel is larger so they produce less noise now in order to take pictures of color we have to measure color and color is measured by putting a little color filter on top of each pixel but because each pixel can only measure one single color you have to arrange these pixels in a certain pattern called a color filter array where each pixel is responsive only to a particular color and the most commonly known pattern is the so-called Bayer RGB pattern that you see here where there is a green red blue green pattern that is replicated throughout the entire sensor array and because you're measuring only one single color red green or blue at a particular pixel you need to interpolate the missing colors and this is illustrated here on the right so if you for instance measure green then you have to interpolate red and blue from the neighbors and this process is called demosaicing and there's various algorithms for demosaicing and the choice of algorithm depends on how much time you have and what the quality of the results should be you want to avoid some of the artifacts that the simplest demosaicing algorithms typically make here's another illustration where you can see the little photosensitive elements of a cmos chip and you can see the filters that are put on top of these chips and that let only certain light pass now why are there more green pixels in this array than there are red or blue pixels the reason for this is that we are more sensitive to green colors and that's why we put more of the green pixels onto this array now these individual pixels of course don't just record a single wavelength but they record an entire range of for instance the green spectrum and how that looks depends really on the particular sensor that you're using the so-called spectral response curve of each pixel or each color is typically provided by the camera manufacturer here what you can see is one of the cameras that we are using in our research and on the right you can see a screenshot from the manual of that camera and you can see how different colors correspond to different wavelengths and you can see that this is not even monotonic the green curve for example responds to green but also a little bit outside of the green spectrum here on the bottom you can see the wavelengths but each of these pixels then basically integrates the light spectrum over this spectral response curve so what we're getting is an integral over the light spectrum according to the spectral sensitivity of the camera integrated over all wavelengths that are coming in now in order to store and to represent color images different color spaces are typically used based on the purpose of what we want to do with these images so the classical rgb representation is shown here for printing sometimes different color spaces are needed or also for image processing there is for instance the lab color space or the hsv color space the hsv color space is separating not into red green and blue channels but into hue saturation and value and you can see from this that
uh the uh the hue component is factorized out so here we have all the here we have basically the the um the reflectivity or the color and uh here we have the saturation and here we have basically the overall intensity so depending on what computer vision application you have in mind some of these color spaces might be more suitable than others now in order to store these rgb or other color spaced images onto disk we have to discretize the values we have to discretize them in a clever way and because humans are more sensitive to intensity differences darker regions what we typically do is use so-called gamma compression it's beneficial to non-linearly transform the intensities or colors such that prior to discretization and to undo this transformation during loading such that the intensity differences in the darker regions are kept intact despite quantizing the colors to maybe eight bits um in which we want to store this on on the disk and then another thing that typically happens is we compress the images because storing images as raw images is still quite expensive so we typically compress the images in something like a jpeg format that you're all familiar with typically luminance is compressed with higher fidelity than chrominance just because humans are more sensitive to luminance and in the classical compression techniques like jpeg often eight by eight patch-based discrete cosine or wavelet transform is used and these transforms are basically an approximation to pca of these little patches of natural images and then the coefficients of this um discrete cosine transform so here on the left you can see examples of this discrete cosine transforms once you have converted them uh these these patches these eight by eight patches into the coefficients so that they can be optimally reconstructed with this cosine transform then these coefficients can be stored efficiently using huffman codes and give you a huge gain in terms of memory efficiency when you store this to disk here on the right you can see how different levels of compression different sizes of these codes lead to different discretization or compression artifacts so you see the typical jpeg artifacts for very strong compression here on the right hand side and more recently deep neural networks have also been explored for improving compression algorithms and it turns out that you can actually be quite a bit better maybe unsurprisingly than this very simple eight by eight patch discrete cosine transform compression that's all for today |
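as a concrete illustration of the demosaicing step described in this lecture, here is a minimal bilinear interpolation sketch for an RGGB Bayer mosaic; the kernels and the assumed RGGB layout are illustrative choices, and real camera pipelines use more sophisticated, artifact-aware algorithms

```python
# illustrative bilinear demosaicing sketch (not from the lecture) for an RGGB mosaic:
# mask out each color's samples and fill the gaps by convolving with bilinear kernels
import numpy as np
from scipy.ndimage import convolve

def bilinear_demosaic(raw):
    """raw: (H, W) Bayer mosaic with R at (0,0), G at (0,1)/(1,0), B at (1,1)."""
    H, W = raw.shape
    r_mask = np.zeros((H, W)); r_mask[0::2, 0::2] = 1
    b_mask = np.zeros((H, W)); b_mask[1::2, 1::2] = 1
    g_mask = 1 - r_mask - b_mask

    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0   # kernel for R and B
    k_g  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0   # kernel for G

    def interp(samples, kernel):
        # convolving the sparse samples fills in the missing pixels by averaging
        return convolve(samples, kernel, mode='mirror')

    return np.stack([interp(raw * r_mask, k_rb),
                     interp(raw * g_mask, k_g),
                     interp(raw * b_mask, k_rb)], axis=-1)
```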
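and here is a minimal sketch of the gamma compression idea from this lecture, encoding linear intensities non-linearly before 8-bit quantization and inverting the curve on loading so that intensity differences in the dark regions survive quantization; the exponent 1/2.2 is a typical illustrative choice, not the curve of any specific camera

```python
# gamma compression sketch: non-linear encode before 8-bit quantization, decode on load
import numpy as np

def encode(linear, gamma=1.0 / 2.2):
    # linear intensities in [0, 1] -> gamma-compressed 8-bit codes
    return np.round(255.0 * np.clip(linear, 0.0, 1.0) ** gamma).astype(np.uint8)

def decode(stored, gamma=1.0 / 2.2):
    # invert the curve to recover approximately linear intensities
    return (stored.astype(np.float64) / 255.0) ** (1.0 / gamma)

x = np.array([0.001, 0.002, 0.5, 0.51])   # two dark and two bright intensities
print(encode(x))   # the dark pair still maps to distinct codes, e.g. [11 15 186 188]
```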
Computer_Vision_Andreas_Geiger | Computer_Vision_Lecture_91_Coordinatebased_Networks_Implicit_Neural_Representations.txt | hi and welcome to computer vision lecture number nine at the university of tubingen this is a lecture that i have been looking forward to for a while as it is about a topic coordinate based neural networks or implicit shape representations that my group has worked a lot on and in fact we have proposed one of the first models in this area called occupancy networks that has established an entire field and sparked by now within maybe one or two years already more than one thousand follow-up works this lecture is subdivided into four units in the first unit we are going to cover basic implicit neural representations for shape appearance and motion in the second unit we'll talk about differentiable volumetric rendering which allows for training these representations from images alone without requiring 3d supervisions in unit number 3 we are going to cover neural radiance field or nerf in short a very popular method that also uses these implicit network ideas but with the primary goal of novel views and tests and in unit number four we're gonna cover generative versions of these models that allow for building representations of the worlds that we can sample from in 3d now let's get started with implicit neural representations this is a slide that i have shown you before this is a traditional 3d reconstruction pipeline where the input is a set of images we infer the camera poses for instance using bundle adjustment then we establish dense correspondences stereo binocular stereo or multiview stereo resulting in depth maps we are fusing these depth maps into a more coherent global 3d reconstruction of the scene now this is all fine but despite some parts of this pipeline might utilize learning we haven't used learning extensively and in particular we haven't answered the question if it's possible to directly go from just one or multiple images it's possible even from one image for a human to a complete 3d reconstruction of an object and that's the question that i want to start with can we learn 3d reconstruction from data this is a heavily machine learning or data driven paradigm learning requires data but luckily over the last 10 years or so a lot of 3d data sets have become publicly available for example through scanning techniques such as active lighting systems active light scanning systems that we have discussed in the previous lecture or this is another data set that has been scanned um and that that has happened for both on the object level as well as at the room level or data sets that are comprised of cad models that have been designed by artists and there's a lot of these cad models now available online free for download and have been collected in the so-called shape net data set so there's a data set called scanet and there's a data set called shapenet scan it is a data set that has a lot of 3d scans obtained with a scanner and shapnet is a data set that's a collection a taxonomy of 3d card models that can be used for um learning and then more recently there's an example from the metaport data set there has been also have been also attempt to not just reconstruct single rooms but entire buildings and this of course has a lot of immediate applications so can we make use of this data now if you want to build a model that learns to predict shapes directly from one or multiple input images we have to of course answer what is a good output representation the input 
representation is pretty clear it could be in the simplest case just a single 2d image that gets processed with a standard 2d convolutional networks as we have introduced it in the deep learning lecture so the input representation is a pixel matrix and we have a 2d confident encoder here but what should be the decoder and in particular what is a good output representation there's a couple of output representations that people have considered starting with this seminal work from matra nidal and iris 2015 where they have looked at the most simple extension of pixel-based representation which is a voxel-based representation a voxel is a 3d equivalent of a pixel it's basically so here's a 2d and this is a this is the pixel version here on top this is just always a 2d illustration to better see what this representation is like with free space so it's inside this is gray and outside free space is white and this is the actual 3d representation where we have these gray voxels that are filled solid this is indicating what's inside and then the free space that's outside you can see these these voxels are 3d equivalents little cubes of 2d pixels so it's a simple discretization it's a discretization of the continuous 3d world that we live in or the 3d space that we want to reconstruct into a regular 3d grid as illustrated here and it's easy to process with neural networks because we can apply the same convolutional neural networks that we have just extend to 3d instead of 2d kernels we have 3d kernels now and of course also the channel dimensions so effectively we would have 4d kernels and the problem with this representation or the major difficulty with this representation that we have also played with is even if you use very special data adaptive data structures such as octrees you still suffer from very large memory growth which in the naive cases cubic but even with these special data structures is a problem so this effectively limits the resolution of what we can output to something very blocky maybe a little bit larger than this this is very small resolution but typically not exceeding 256 to the power of three voxels which is still pretty coarse and you can still see the artifacts and then another downside of this representation is that it introduces an a manhattan world bias right so you have these these voxels they have to be oriented in some way because we are using these voxels as this most simple primitives in our representation and so we have to define the coordinate system that orients the voxels and that coordinate system introduces a bias so we always have we can model flat surface only in the canonical directions but not 45 degree oriented with respect to the canonical directions another representation that has been used in the literature and has been popularized by fenidal at cvpr 2017 is a point sets points are a discretization of surf of the surface into 3d points so this is not a volumetric representation but that's directly a representation of the surface you can see a 2d example here and a 3d example here the difficulty with this representation is that it doesn't model connectivity or topology we don't get a surface out of this representation immediately but we have to extract it using a post-processing algorithm such as personal surface reconstruction that is not directly taken into account when we train these models furthermore these models are typically limited in the number of points that they can represent and they typically require a global shape description as input that 
also limits the flexibility of what these models can output and typically this is possible only at the object level and not at the scene level and then finally in 2018 mesh based output representations became popular meshes are very natural they are used ubiquitously for example in computer graphics in games in movies and they are also discretizations like point sets and voxels but here the discrete meshes discretize the 3d space into vertices and faces right so here we have a 2d visualization it's basically just a line segment and here we have a 3d version which is a mesh and that's actually an output of this method here called AtlasNet now the problem that arises through this discretization in the case of meshes is that meshes are not very nice output representations to predict with a neural network due to their complicated structure it's very easy to create self-intersections and it's very hard for the network to learn to not produce self-intersecting surfaces as shown here there's two primary paradigms one is to predict little surface patches and stitch them together as AtlasNet does the other is if we know something about the shape that we want to represent then we can build a class specific template such as a mesh of a human face or a human body or an animal body or whatever so if we know what class we want to reconstruct then we can utilize these templates and instead of predicting local surface patches that have this problem with non-watertightness and self-intersections we then deform the vertices of this template but of course this only works for well-behaved cases if we don't want to change the topology we don't want to create holes and if we already know the class that we want to reconstruct so it's limiting the applicability and it's not as generic as the other representations now what we try to do in our cvpr 2019 work is to go away from these discretized representations and instead represent the surface implicitly and i will tell you in a second what that means the advantage of this is that it allows for representing arbitrary topology and in theory at least arbitrary resolution using a very low memory footprint and it's not restricted to any specific class template like in the mesh case so the key idea is to not represent the 3d shape explicitly but instead to consider the surface implicitly as the decision boundary of a non-linear classifier here's a 2d example in this case with a linear classifier you can think of for example a support vector machine or a linear neural network in practice of course we're going to use non-linear neural networks that try to separate two classes the red points are the points inside the object which take occupancy value one and the blue points are outside the object which take occupancy value zero and once we have trained a classification model we can extract the decision boundary that most optimally separates inside from outside points and this decision boundary is what we consider the surface so the surface is represented implicitly by the parameters of this model and it's represented implicitly because it needs to be extracted in a post-processing step because the only thing that we have is access to this classifier that separates inside from outside here at the bottom you can see a 3d example in this case with a non-linear decision boundary you can
see this object is a bench and there is these red points that are sampled inside the bench and the blue points which are sampled outside the bench and we want to train a classifier that most optimally separates the blue from the red points and we do this by repeatedly sampling points from the volume for which we know if they are inside or outside and we can sample points closer to the surface to become more precise for example and then we're refining the weights of our neural network of our classifier such that the decision boundary becomes closer and closer to the decision boundary we like to have so of course this requires full supervision 3d we require the ability to for every surface for every 3d point to query if that point is inside or outside the shape so we require a watertight mesh in order to generate the ground tooth to repeat instead of representing the 3d shape explicitly we consider the surface the surface of the object implicitly as a decision boundary of a non-linear classifier and we call that classifier an occupancy network f theta because it's a neural network with parameters theta that maps from a 3d location this is just three coordinates this is the point location plus some condition that could be an encoding of an input image or a point cloud this is representing the structure of the 3d shape that we want to model so we need to of course condition on something in order to be able to represent multiple different shapes so this is where the condition comes in and the output of this neural network is just an occupancy probability that's between 0 and 1. right neural networks can predict probabilities by a simple sigmoid or a binary softmax activation if the network predicts one then it's very likely that this point is a point that's inside the object if it breaks zero it's very likely this point is outside the object and we want this classifier to be as confident for most of the points and as correct for most of the points as possible some remarks the function f theta models and occupancy fields here we have really a continuous space that we consider so this function here models a a fields in a mathematical sense and it's also possible to model not just the occupancy probability but also design distance as we've seen the previous lecture design distance is useful it allows for example to directly find the surface from a point close to the surface by just following the gradient for as long as as the sine distance value as the magnitude of the sine distance value and this has also been proposed by related work but the model is very similar except that we change the occupancy and probability output with a sdf output in this in the context of list lecture however we're going to stick with the simpler model which is just predicting occupancies now in order to implement this model we need to define this neural network architecture f f theta so what we do here is a very simple multi-layer we use a very simple multi-layer perceptron which is a residual network composed of five blocks this is a repetition of five of these yellow blocks where we have an encoder here this is another neural network for example a 2d convolutional network that takes an image or a point encoding network that takes a point set or a voxel 3d convolutional network and that produces a fixed length vector which is the condition let's say this is a 128-dimensional vector and then this condition is injected at multiple locations into this residual network using so-called conditional batch normalization but 
you can also do other conditioning techniques you can apply other things like concatenating these vectors or adding these vectors and all of them work to some degree and then the other thing that's the input to this residual network is a set of 3d points we not just only pass a single free point but we pass directly a batch of t in the order of 2000 points because it's of course more efficient to process more of these points in parallel to process the entire batch in parallel on the gpu so we pass an entire batch of these points to this network that's why they have a t times 3 for xyz dimensional input and the output is for each of these t points the occupancy probability it's a very simple model how do we train this model well it's a simple classification task so we can train it with standard binary cross entropy and that's what we do so for all of the k points we compute the binary cross entropy between the model prediction for that point and the true occupancy which is 0 or 1 for that particular point and that's always given this input condition z this is the condition on the image we can also build a variational occupancy encoder and that's simply the variational autoencoder model that we have talked about in the deep learning lecture applied to this idea of implicit shape representations where we have now in addition to the reconstruction term a kl term a kulbuck label divergence term between some prior and the encoder distribution q psi so we need another encoder that goes from the point set to the latent code right okay now because this is a implicit model we don't get a mesh out directly so we have to extract the mesh from this implicit representation and how do we do this we do this using a technique that we call multi-resolution isosurface extraction in short misa which it incrementally builds an octree by querying the occupancy network so we start with a grid let's say a four by four grid in practice of course we use a more fine-grained grid and we query the occupancy network so this is you should imagine this in 3d but for purpose of easy visualization i've plotted in 2d here so we query these 3d locations on that grid and we observe that this point is inside the object and this point is outside the object given the object representation given the parametrization of our neural network for that particular setting of parameters and input condition this is the shape the boundary the surface of the object therefore this point will result in a will be classified positive and the other points will be classified negative then we look at all the adjacent cells that are adjacent to any point that's inside and points that are outside that transition from points inside to points outside because the surface must lie inside these cells and we're subdividing these cells simply and then we are querying the subdivided unknown locations and we find that these points here are inside and the other points are outside and we can repeat this process n times and get finer and finer until we reach the level of granularity that we want and in the end we can run the margin cubes algorithm as discussed in the last lecture to extract the surface to extract a triangular mesh from this indicator grid and the whole process requires about one to three seconds depending on the size of the scene so it's not super fast but it's not super slow either and the main time requirement of this process is because we have to query this neural network many times but we do it in a clever way we don't do it naively and 
densely at the same resolution but we do it close to the surface we do it in this hierarchical course to find manner here are some results this is the input to the image and on the right you see some bass lines a voxel based approach a point set generating network a mesh generation network and the output of our method meshed to a 3d mesh and you can see that it is very simple for this implicit representations because we don't model the surface explicitly to handle arbitrary topologies and produce very nice and smooth surfaces here you can see a different input condition this is a point cloud shape reconstruction task where the input is a sparse and noisy point cloud and the output is a 3d shape and on the left the ground truth and we can similarly apply this to a voxel super resolution task where the input is a course marks grid and we're trying to do super resolution of this voxel grid you can see the output of our method on the right what we show here is a visualization of latent space interpolations of our generative model of this variational auto encoder type model which is able given a set of cut models as input to generate new shape new shapes from the underlying shape distribution and to interpolate between shapes as you can see here okay so let's move on this was all about shapes representing shapes but can we also represent appearance with this type of implicit model idea and it's actually quite straightforward to extend these ideas to color what we did in this work is called learning implicit surface light fields that we published at 3dv in 2020 is to try to model view dependent appearance by conditioning a neural network on an input image just as before but also on some 3d geometry that could be a cut model that represents this object in the image or that could be a shape that's predicted using an occupancy network from that image and we show results for both cases and what we want to do then is we want to train this model using um different view conditions such that afterwards we cannot only change the viewpoint but we can also manipulate the illumination as you see here you can see that the shading and the shadows that are on this object change depending on where the light source location is here's the rendering equation that we are familiar with we're not using the rendering equation directly but we are modeling a conditional surface light field so we're not modeling the material the brdf and we are assuming a point light source which means that this integral here goes away so what we do instead is we model a function that maps from a 3d point location and a viewing direction v and a light location l these are these three quantities to a 3d color value that's called a surface light field for every point on the surface given any possible view direction and light direction we want to know what the color is that this would produce when i render this into the image so here's an overview of the model the image encoder is the same as before we take an input image and encode this into a latent vector c and then we have an input shape that's here in this case encoded using a geometry encoder into a global shape code s and then we can take any point on the surface of this object and we query first an appearance field that gives us a feature vector for the appearance and then this lighting model that conditions also on the light setting and the view direction to produce the color of the pixel that this 3d point renders to and we're minimizing the reconstruction loss between this 
predicted image and the true image this is trained on a data set of chairs with materials that we render using physically based light transport here are some results this is with ground truth geometry as input here's the input image on the right you can see a 2d baseline that tries to transfer learns to transfer the appearance of that 2d image to any render to the image of the object and that doesn't work very well of course and it's not very consistent but because we model directly in 3d our results are much more consistent and in particular what you can see here is that it is capable of modeling specular highlights as well as also shadows and this also works with inferred geometry so here's an input image this is the geometry that has been inferred with an occupancy network and then this is the the color the view dependent color and the elimination content conditioned color that we produce this is all this all produced this output is entirely produced from a single 2d rgb image that's that's quite remarkable that you can learn to produce such an output from this type of image however this model has only been trained on chairs so you cannot produce anything you cannot predict anything else then shares and of course if you go to an object that's a little bit further from the data distribution from the distribution of chairs that have been seen during training it will not work very well so this was about appearance can we also represent motion the naive way of representing 3d motion is to simply extend this idea of occupancy networks to 4d but unfortunately this is hard due to the curse of dimensionality it is hard to represent a complicated 40 function and the shape evolving over time is a complicated 4d function because it's discontinuous at the surface boundary so it's really hard to represent this in 4d and the additional difficulty is that there's much less data sets with ground truth inside outside points with watertight meshes available for this type of 4d task so we have less data and we have a higher capacity model and a more difficult optimization problem which makes it very hard therefore what we do instead is to represent the shape only the shape at a particular time instance let's say t equals zero without losing generality using a 3d occupancy network so this is what we have done before already and then we represent the motion by a temporary and spatially continuous vector field as illustrated here now this is still a 4d function that we have to predict but in contrast to the shape evolving over time this is a much more continuous object because this vector field can be continuous over the entire duration of the sequence as illustrated here it's very continuous you can see these little errors are changing very continuously both in space and time and therefore it's much easier for the model to learn and we can use lower capacity models to accommodate the fact that we have less data available for training the relationship now between the 3d trajectory of 3d location s and the velocity is of course given by a simple ordinary differential equation which tells us well the gradient of the location is the velocity that's what we know from high school physics now because we have such a simple relationship as an ode and because odes are differentiable this has been popularized or repopularized in a 2018 europe's best paper award we can use this ode inside our model and back propagate gradients through it and this is the model so what we try to do is we try to given ground to shapes at 
discrete time instances for which we know inside and outside we can take any point here and warp it through that ode that's predicted by the velocity field or that's conditioned on this velocity field or implied by this velocity field that itself is conditioned on the input so we can work that point to the location at time equals zero you can see how the space deforms in the background and then we can try to make the occupancy network predict the class class label of that particular point correctly in this case it's an inside object point so it should make the occupancy network predict one for this particular point now this is all end to end learnable because the ode is it doesn't have any parameters but it's differentiable so we can back propagate the gradients through this entire process to both the parameters theta of the occupancy network and the parameters of the velocity network okay and the really nice thing about this is that which we don't need any correspondences which are typically required for such kind of tasks but the correspondences are implicitly established by our model and they are correct so here you can see some inputs some frames of the inputs this is a point cloud completion task and at the bottom you can see the output and you can see in color which points correspond to which points and you can see that the color is correctly predicted the same for the same object parts here's a little video again this is point cloud completion results the input is a very sparse and noisy point cloud and on the right you can see the result of this naive occupancy network in 4d and this occupancy flow model and you can see that it's much more expressive and it does suffer much less from this course of dimensionality than the occupancy network 4d approach we've talked about representing shape we've talked about representing appearance and motion but all of these results that we've seen were on relatively small scale scenes in particular objects single objects reconstructing single objects and so we're wondering well can we also use this output representation as a representation to predict larger scale scenes entire rooms let's say or entire buildings even like in the case of the metapod data set there is an important limitation of this implicit neural representations as we have seen them so far there is actually two important limitations the first limitation is that what we have done so far is to predict a global latent code as a input x that's the image or the point cloud or the voxel grid and we have a 1d encoder that predicts a fixed dimensional latent code for that image and that's the condition of the occupancy network of the mlp these are the features now this is a let's say 128 dimensional vector and it's very difficult to represent the entire distribution of possible 3d scenes in such a vector it's it's kind of possible to represent the distribution over objects because the shape variability is not as large but for scenes with multiple objects inside the type of outputs that we can expect grows exponentially because there is any possible object at any possible location in any possible combination and that makes it really hard so because there is no local information in this global code this 128-dimensional vector has to represent the entire scene it's very hard and typically it leads to very smooth geometry as we'll see on the next slide the other limitation is that this architecture is very limited by its nature it's a fully connected architecture fully connected network that 
for example doesn't exploit translation aquarians such as we exploit it on a regular basis when we deal with images in our convolutional neural networks and we know that this translation aquarians is crucial for modeling images or also 3d scenes with voxel grids but this is not at all captured here in this type of model and this is what it looks like if we apply this implicit occupancy network on um well still relatively simple but scenes that are more complex than single objects on the right you see the ground truth on the left you see the prediction this is a model that has been trained for very long time much longer than for the single object prediction tasks and it's just not able to get the objects right it actually gets the ground plane right because the ground plane is something that looks similar in all the scene scenes but it doesn't get any of the objects close to even close to what they actually should look like so what we are trying to do in in this work that we published at eccv 2020 called convolutional occupancy networks is to combine the merits of both approach implicit modeling and convolutional networks into one approach to remedy the two limitations that we've seen previously the task that we consider here is the point cloud-based shape completion task and what we do is we first encode the 3d points locally using a pointnet encoder so we take all the points that are inside such a volume and project and encode those using a neural network that produces a feature for just those points at that location on the ground plane let's say this is the ground plane here and we do this for all the points on the ground plane so we we use a discrete structure on the ground that we establish a discrete pixel grid and then we compute features for each of these pixels on the ground plane based on the points that fall inside this tube and then we have a 2d convolutional network that runs on this image this is now an image on the ground plane it's a simple unit that aggregates features on that image and that exploits now equivariance and then we can query each of these features by doing bi-linear interpolation of the features on the ground plane so we if we want to query the occupancy state of that 3d point we are querying these features by bilinearly interpolating them and inputting the bi-linearly interpolated features and the location to the occupancy network that predicts the class inside outside now because we have already a lot of weights in this 2d unit this occupancy network can be more shallow than before so this readout network is much more shallow and we exploit the convolutional property equivariance so we have exploited locality by just looking at local a local neighborhood and the convolutional equivariance property there's multiple ways of doing that we can just project to a single plane we can project to three canonical planes here these are the planes aligned with the x y and z axis of the coordinate system and then we query all of these features as input condition to the occupancy network or we can do it directly in a 3d volume where now we have to use a little bit of coarser representation of the volume compared to these planes here which are 2d now here we have a 3d grid so instead of maybe 128 by 128 we can just use 32 by 32 by 32 but we use the same principle we aggregate information just within local cells so this is the cell center and we aggregate information from all the points that are surrounding that are in falling inside that cell locally and then to query to 
obtain the features we use tri-linear interpolation of these features that have been computed using a 3d unit architecture on that voxel grid so we query these features now using tri-linear interpolation because we are in 3d but then we do the same thing we have a shallow occupancy network as a readout that predicts the class conditioned on these interpolated features and we can do that of course for any continuous location in that volume for for the na for a point that's in the vicinity but still in the same cell these features of course will look different and that's what the network both this network and this network together because they are jointly trained can exploit to learn robust scene shape representations so here's a comparison again this is the naive occupancy network and at the bottom is the convolutional occupancy network that in contrast to this produces a localized feature representation that can be processed using a convolutional network and then we query the features using interpolation as input to this more shallow readout mlp in comparison to the occupancy networks this leads to much higher accuracy and also faster convergence so we have here the input for a point cloud reconstruction and a voxel reconstruction task or voxel super resolution task this is the result for occupancy networks on this more complex shape categories and this is the output of the convolutional occupancy network that exploits locality and equivariance more importantly this not only leads to better accuracy and faster conversions but it also allows for representing scenes here we have a comparison on synthetic rooms you can see compared to the naive occupancy network our reconstructions are much more precise and even when training only on synthetic rooms we can apply this model on real scans so this is a model has been trained on synthetic rooms but we have applied it then on real scans from a kinect structured light sensor this is on the scanner data set and the results while not perfect they look plausible at least much more plausible than the results of the naive occupancy network and then finally we can also take this model and extend it to a fully convolutional model and apply it to really big scenes like the metaport data set which contains scenes of entire buildings with multiple rooms and this doesn't fit in gpu memory but because this model is fully convolutional we can with overlapping receptive fields applied in sliding window fashion in chunks and we can train it also on these chunks we can then subsequently process multiple or iteratively process multiple of these chunks in a sliding window fashion just like a sliding window object detector and produce very large scale reconstructions as shown here this is also illustrated here in this video this is one example of the metaport 3d reconstruction and this is another example from metapod |
Computer_Vision_Andreas_Geiger | Computer_Vision_Lecture_121_Diverse_Topics_in_Computer_Vision_Input_Optimization.txt | hi and welcome to lecture number 12 which is the last lecture of this computer vision class and today i'm particularly excited because we just got to know that we received the best lecture award for this computer vision class from the computer science department and so i'm even more excited to tell you about the things that i wanted to tell you about today in this computer vision course we've learned a lot right this is a overview over over the topics that we have covered it's not really complete but it tries to give you a rough idea of what we have covered and it's really amazing what we have covered in this class so far starting from the history over basic geometric primitives and image formation camera calibration 3d reconstruction binocular stereo structure prediction optical flow shape from shading multiview reconstruction volumetric fusion margin cubes implicit neural representations differentiable rendering novel view synthesis uh image classification and all the other recognition tasks like 2d detection and semantic and instant segmentation as well as self-supervised learning however there is much much more computer vision is such a diverse field it has grown a lot there is so many people doing research in this area and here i try to list some of the topics that are also computer vision but which we didn't have time to cover cover in this lecture and this is not because they are not that they are less relevant but just because we had to focus and i wanted to focus this class on the fundamentals and dive deeply into the fundamentals rather than just giving a shallow overview of all these topics what i did in order to create this slide was i went to a website that was visualizing the topics of paper submitted to recent computer vision conference i was looking for topics that we haven't covered and i'm not going to read them all to you here but this is just to give an impression of what else is there in our field so i was thinking what can we do with this last lecture of course we can't cover all these things that's impossible and we would have to be very shallow if we wanted to do that and in fact some of these topics we are covering in other classes in some of my other courses at the university of tubingen for example slam and localization we're going to cover in the self-driving class which is taking place next semester image synthesis scans and vaes we have already covered in the deep learning class object tracking and robot vision are also part of the self-driving class what i decided is i will cover some of these additional topics not as in-depth as of course i could cover all the previous topics but i want to just give you an overview a high level overview over these topics diving a little bit more deeply into some specific methodologies of some particular break few breakthrough papers but otherwise staying fairly high level for this course so in particular i want to cover shape abstraction techniques adversarial attacks which had a tremendous impact created their own subfield human body models is also a big subfield in the area of computer vision these days neural style transfer you probably heard of deep fakes as well and also disentangles compositional representations and i want to credit some of the resources that i have used to create these slides which you can find here there's links to blog posts as well as videos and these are great resources which i 
encourage you to have a look at if you're interested in these topics this is the structure for today the agenda for today we will talk in unit number one about input optimization techniques that's what i call them there's a couple of techniques that fall into this category it will become clear in a minute why i call this input optimization in unit number two we're going to talk about compositional models and holistic models in unit 3 we are going to cover human body models and finally we're going to briefly talk about deep fakes which are so much in the news these days so let's start with unit number one input optimization what is input optimization well this is a computation graph as you're already familiar from with from our deep learning lecture and this is a computation graph of a neural network so we have some weights here in orange the input in green and then we have also the label in green here and the model makes a prediction in this case it's a very simple model it's a logistic requester let's look logistic regression model let's say it predicts this y hat and we have a loss function that compares y hat to y and what we are going to do in order to train the model parameters is we're going to back propagate the gradients along these blue errors here backwards to the parameters w0 and w1 given pairs of training samples x and y this is the setting that we are familiar with but we can do more with such a computation graph it's much more flexible instead of back propagating the gradients to the parameters we can also take a pre-trained network and from some loss node back propagate the gradients to the input instead of the parameters from the perspective of a computation graph really nothing changes it's the same computation graph it's just that we're computing gradients for another node and in this case this node is not a parameter node but it's an input node and we will see two examples of why or where this could be useful in this unit the first example are so called adversarial attacks and the second example is a style transfer and in both of these examples we're going to take a pre-trained network so the parameters and the architecture are going to remain fixed and we're going to back propagate the parameters to the input in the first case in order to create an artificial input that looks like a particular image that we would be familiar with but that is classified wrongly by the classifier and in the second example to create an image that is an artistic rendering of a particular input image that we provide so in both cases we want to generate image we want to invert basically the process here by back propagating gradients to the image to generate an image through this backpropagation process this is the idea of input optimization compared to the traditional weight optimization that we have been dealing with so far okay the last lecture so today i brought wine okay let's start with the first topic adversarial texts and this really spawned an entire field in computer vision and there was this one paper i can still clearly remember the talk of this paper was called intriguing properties of neural networks was published at iclr and i learned about it at the subsequent cvpr and the results of that paper were shocking it was shocked shocking to the entire community this was like this paper was published at a time where people got really excited about deep neural networks because they have demonstrated state of the art substantial improvements in terms of image net classification and on 
the side a lot of additional use cases were developed such as depth estimation and and also other recognition problems like object detection have been successfully addressed and so people got super excited about these deep neural networks but then there came this paper that showed well these networks have a fundamental flaw and what like this first paper proposed was a so-called lbfgs attack and the idea is very simple so you take an input image that is classified correctly by the classifier as a school bus and then you add a little bit of noise to each pixel but you do it in a specific way and this noise here is highly magnified highly amplified just to recognize it the noise is much weaker than this image here illustrates and you see this also by this image which is actually this image plus the noise and from a human perspective they look the same so it's still a school bus obviously but in this case the classifier now becomes super confident 99 confident that this is actually an ostrich and the same here this is the input image is correctly classified as whatever that class actually is here and when we add this little bit of noise it becomes an ostrich with 99 probability if you think about the soft max probabilities or this image as well so if we add the right noise the right noise image and they are all different here as you can see we can take any input image that is classified correctly and turn the classification result upside down and of course this has consequences right that's why it's called an attack we can attack a classifier we can we can create an image that from a human perspective looks looks good but the machine becomes very confident that this is something else so here's the mathematical formulation for such an adversarial example under the lbfgs attack so we want to find so given a classifier f that takes an image rm as a image out of rm m is the number of pixels and produces uh maps that into a set of classes in case of image net that would be l equal one thousand we want to find an adversarial example and this entire expression here is the adversarial example for a particular image x this is for example the school bus image here and we do this by adding to that image x this noise here which is the argument over all possible delta x this all the space of all possible noise images where we want the noise to be small so we take the l2 norm here the arcmin over the l2 norm but at the same time we inject the constraint that the image is misclassified the most likely class that this classifier produces is actually a particular target class for example ostrich so yt the target class is ostrich no matter what the input image is we want to find the smallest perturbation of the input image which is this column here such that when we add it to the input image and classify it we get a very high probability the class that we want and so once we found this delta x we add it to x and that's the adversarial example that we see here in the right column this is pretty amazing right that this works it demonstrates how fragile neural networks are and how by very small but very targeted changes we can actually fool them and this works not only for classification it works also for semantic segmentation here is a street scene we can add a little bit enough noise to the street scene and what has been correctly classified is completely misclassified as a completely different street scene this is an intersection this is a straight road there's pedestrians there's no pedestrians here the car 
would stop here the car would drive right just by this little change of the input now people were wondering is this actually really a problem right so this assumes that we need to really be able to manipulate every input pixel of the image and that might not be practical if we want to really attack a classifier or a detector we can't do that typically because we don't have access to the system or if we would have access to the system we could just directly change the output of the system and there was this very controversial paper here from davy forsyth's group called no need to worry about adversarial examples in object detection in autonomous vehicles and this is a sentence that i cite from this paper i'm going to read it adversarial perturbation methods apply to stop sign detection as shown here only work in carefully choosing situations and our preliminary experiment shows that we might need not to worry about it in many real circumstances specifically with autonomous vehicles because there's all these intermediate processes there's the imaging process the sensor adds noise the optics adds or manipulates the image but what we can really do is we can't manipulate the image we can't just manipulate the environment that's going through this imaging process we don't have access directly to the pixels let alone the gradients and so they try to convince the community that's actually not a problem at least with these early methods but as it turns out it is a problem even in that setting and that is what is called a robust adversarial attack it has been demonstrated by for example this paper but also follow-up papers um that there are robust adversarial examples in the physical world and what does that mean that means that there are these examples that no matter from which perspective i would view them they still fool the classifier and what they do in this paper in order to demonstrate this is that instead of just taking one image and trying to manipulate all the pixels such that that image is misclassified they maximize the expectation over the space of all possible transformations that they are interested in it's called eot expectation over transformation so they define a transformation space let's say the space of all affine transformations or the space of all perspective transformations and then they maximize the log likelihood of the wrong or the target class that we want to fool the network with and then at the same time minimize the distance of the two transformed images uh the images image that we search and the original image which is kind of the same same formula as here just differently expressed but now the difference is we search over we compute the expectation this is the different the important part here we compute the arc marks over the expectation over the set of all possible transformations that we're interested in and initially people thought this is not possible this is just too large and we can't detect these networks but it turns out we can still attack them not as well as if we could manipulate per input each individual pixel but we can and of course larger distributions the larger we choose our distribution space the larger the perturbations have to be so it might become uh at some point obvious to the human eye as well that the image has been manipulated in order for actually being able to fool the model and there's different papers that describe also attacks that are more easy to implement in practice so for instance you can also optimize over patches of a regular 
form that you can more easily print and stick to actual uh to the actual 3d environment like the stop sign here it's called graffiti in this case and it still works you can optimize over this type of images and and you can still get an adversarial attack and now now it becomes really easy because you you can just spray the image you can spray the objects in the world or you can put stickers on them and you can fool the classifier if the classifier is not trained to cope with that another type of attack is called an adversarial patch attack all the attacks that we've seen so far were specific to particular objects or even to specific input images but if we want to really attack a model more generally we of course don't know what the environment will look like and so we might need a more general um attack and this is a patch attack so in the case of a patch attack we are interested in in placing a patch into an image and the patch might look arbitrary so it's it's easy to see there is something added to the scene it's not a change that is imperceptible to the human eye as before but it covers only a small portion of the image um and it's still able to attack no matter what else is in the image so for example here the patch is added to this table with the banana and it can basically change the classification of the banana into a classification of the toaster here's a little video you can see the banana if we add an image of a toaster it's still classified as the banana because it's probably dominant object but if we add this little patch that has been specifically tuned for that classifier it becomes the model becomes very certain that this is a toaster now and this is easy to apply in the real world because you can just print the patch and put it somewhere this is some work that we did in our lab we wondered well now most of these adversarial attacks have been developed for and tested with recognition systems like classification and detection does it also work for deep learning models that try to estimate geometric properties or in this case optical flow so here given an image pair from a video sequence we have an optical flow algorithm like flow in it two that predicts a smooth optical flow field color coded here the direction and the magnitude of the optical flow and now the question is can we actually obtain can we optimize for a small patch that covers a small piece of the image let's say smaller than one percent of the entire pixels the image area this attack patch here and that attacks the image like that successfully attacks the optical flow network to formalize this let f of i and i prime which are the input images the two frames so let f of i and i prime denote an optical flow network and calligraphic i is the data set of all frame pairs and let a of iptl denote the image i on which a patch p has been inserted that is transformed by some affine transformation t that is inserted at location l so t and l describe the transformation and let t denote the distribution over all possible alpha and 2d transformations of the patch and that l denote a uniform distribution over the image domain that is where we insert the patch and then what we're trying to do is we're trying to find the patch p hat such that p hat is the minimizer as the argument over the expectation of all the image pairs in our data set over all the 2d affine transformations and the 2d locations so we are putting the patch basically everywhere and under any transformation in the image and what we want to do is we want to 
minimize the inner product the normalized inner product between at every pixel between the the unperturbed flow field this is the first term here this is the flow u and v and the perturbed flow field u tilde and v tilde where u tilde and v tilde are the flow fields when we take i and i prime and pass it through that function a that takes that patch and puts it at a particular location with a particular affine transformation so this is the flow computed on the perturbed images and this is the flow computed on the unperturbed images and the intuition of this is of course that because we're minimizing the correlation we want to find a patch p head that reverses the direction of the optical flow ideally at every pixel in the image and if we can do that then we say well we are successful we have we can't misclassify here right it's not classification but we can try to reverse the optical flow we can make it the opposite of what it would have been and it works so here we have a very small patch it's hard to see actually and each row is a different flow network and you can see that depending on the type of flow network we get a stronger or weaker effect of this attack so in particular this unit type end-to-end models like flown at c and flow.2 are heavily affected this is the unattached flow field and this is the attack flow field this is the unattached flow field and this is the attack flow field you can see that almost half of the image is corrupted now by this little by this insertion of this little patch nothing else has changed we just changed a few pixels in the image and of course if we make that patch larger then we can perturb a larger region but what's important here is that in both cases the the attack region the region where the optical flow has has changed due to the receptive field size of the network is extends far beyond the region of the patch itself we are attacking a much larger region that's really problematic and then we can also look at black box attacks where the setting is we optimize a patch for some networks and when then we take that general patch which is looking like this and we use it to attack an unseen network and that's what we call a black box attack because we don't know the network that we applied on and also it works in this setting so here's an example on the real world where we have a patch printed and you can see a person moving in front of the patch and if the person is in front of the patch then the flow field looks fine but if the person is next to the patch such that the patch becomes visible then the flow field becomes disturbed like here of course this raised concerns in the community and also spawned again another research area which is uh concerned with defenses against adversarial attacks and both of these fields somehow uh you know hunt each other so there's a new adversarial attack proposed and then a few months later there's a defense against that attack but then there's a new attack and a defense against that attack now people trying of course to put this on a more theoretical basis now and trying to develop general defenses general defenses that is also related to general robustness of these networks and out of distribution generalization etc so an entire field evolved around defending it with several attacks and there's three domina dominating paradigms the first and you can can read about this in in this review here for example the first is gradient masking obfuscation since most attack algorithms are based on gradient information of the 
classifier if you mask or hide the gradients to the attacker then this will confound the adversaries the second is robust optimization and there the idea is to relearn a deep neural network classifier so that it increases robustness and you do that by for example training classifier to correctly classify artificially generated adversarial examples it's called adversarial training so you generate these attacks but then you provide these attacks to the algorithm and the algorithm can take them into account during training and become more robust and then there's also adversarial example detection which is concerned with studying the distribution of natural or benign examples in order to detect adversarial examples and then disallow them as an input to the classifier and there's many more papers this is just a short list the second topic i wanted to talk about in this unit on input optimization is neural style transfer which is a completely different technique where the goal is completely different is not about attacking a network but about creating beautiful artistic images but it uses the same idea of optimizing not the parameters but the input of a neural network and probably many of you have seen this in particular if you are in tubing and this is a technique that has been developed in tubingen and matthias paige's group and it has been showcased on this famous tubing and riverside image where this image which is called the content image is combined with a style image this is a single image of a painting of van gogh and then the output of this algorithm is a stylized version of the content image so in neural style transfer the goal is to synthesize an image this one here on the right with the content of one image and the style of another image and the way this is done is to start with a random image this is the input x to the neural network and this is a pre-trained vgg network pre-trained on imagenet and then we're optimizing that image using a content and a style loss to match in terms of the content to match the content image and in terms of the style to match the style image and here's an overview of how the content loss works there's these two losses the content and the style loss the content loss makes sure that the content is matched so how does this work the first step is to so we have a pre-trained vgg network it's pre-trained on imagenet this is this network here in the first step we pass the input image through that network which gives us some level features at some deeper layers of that network which we extract and we keep that fixed and then in step three we initialize a image of the same size randomly with random numbers this is this and then we pass that image through that same vgg network and that will produce also some features and then we have this content loss that tries to make these two the same and so it generates a gradient in blue that flows through these layers it doesn't update the parameters those are detached in pi torch slang it doesn't update the parameters it just updates the pixels only the x the input and then step by step we arrive at the content right or it's something that's similar to the content based on what is actually captured in in this extracted features and also of course based on what this neural network here has learned during the pre-training stage on imagenet so here's the formula for this loss it's very simple so we just so f and p are just the feature maps of the generated and the content image at vgg layer l and so the loss the content 
loss at layer l is the features at layer l of the generated image minus the features of the content image the feature map we just subtract that and take the l2 loss of that vectorized quantity how does this work if we just do that this is not yet style transfer it's just one part well if we take features from the early layers it works super well so if we look at this we almost get the input back and if we go to later layers we also get the input back but we also get some random noise in there so we see the content is preserved but somehow in terms of the style and the appearance it has changed quite a bit so and this is what we would expect because the the features like this low dimensional code that that we have at the very deep layer of the network doesn't capture the low level statistics of all pixels in the original image but it captures the overall properties and so we still still see the overall structure while details have been gone and what we are using as content losses i think at layer 4 style transfer uses layer 4 because it wants to keep the content as much as possible from the content image and so it uses l equal 4. now about style how can we make sure that the image that we optimize matches in terms of style to the style image like this one here the idea here is borrowed from the literature on texture optimization texture generation it's called the gram matrix so we compute at each layer we can compute what is called the gram matrix and instead of subtracting the features we are subtracting the gram matrixes the cram matrices from each other and computing the l2 loss of that vectorized quantity now what is the gram matrix at a feature it's very simple it's just the correlation of features at each layer l we have a feature map produced by our vgg network so we have width times height of the feature map times number of channels on both feature channels and this green gram matrix has the size of the number of feature channels times the number of feature channels and it computes simply the correlation between any two feature channels by summing over all the pixel pixels in the image domain so here the sum runs over all the pixels in the image domain on the feature maps and it correlates the feature map i with the feature map j at layer l that's all it's just a dot product and that's what's called the gram matrix and then the gram matrices are made as similar as possible what happens if you do that if you do that for the first layer you get something like this so you can see that you have a very local properties matched but it doesn't look at a more global level like the style of this image but as you move to later layers and in this case we're actually computing the gram matrix at all subsequent layers as well and adding them up so this loss is a combined loss of all the style losses across all the layers but that's a technical detail so as you do that and move to later layers you can see that the style starts to match but the content is no longer there right so because we are computing here something that is where we have some overall pixel locations so we're becoming spatially invariant we don't have we can't expect that the content is the same but the statistics of these feature correlations are the same and through this pre-trained vgg network we're getting an image then this is the optimized image that matches in terms of style but not in terms of content and what neural style transfer does is simply combine these two objectives right so here we have the image that is 
optimized for on the right we have the content image and here we have simply the difference in the feature maps which is the content loss here we have the style image and here at each layer in this case we have the difference in the gram matrices and then we're using these as the loss functions and back propagating the gradients here along the blue arrows in order to update this initial random white noise image and this yields these beautiful stylistic images of the provided content image and you can do that for for any style image so here are different examples for different painters you just need a single image of that painter and you can stylize this image of the beautiful tubing and riverside according to the style of that particular painter so that created a large bus in the community that this is actually possible and this is a feature that you can now find in in modern software uh for image processing as well and there is a website from matthias baitcas lab where you can take your own picture and you upload your picture and you upload a style picture and you get a stylized version of that picture i encourage you to try that out it's called deepart.io |
Computer_Vision_Andreas_Geiger | Computer_Vision_Lecture_13_Introduction_History_of_Computer_Vision.txt | this unit is about a brief overview over the history of computer vision now of course it's a very shortened summary of what has happened over the last 50 years and it's a personalized viewpoint but with this being said i hope it still can provide you some intuition about on where the field has come from and where the field is going to i want to credit in particular the recent lecture from lana la cedric at uiuc on computer vision looking back to look forward where some of the slides have been taken from as well as the 3d computer vision past present and future lecture from steve sites i'll link both of them here if you're interested in diving more into these topics and in particular also watching this a very good talk from steve um have a look at that now before we start with the last 50 years of computer vision i just want to give you a very rough pre-history of course um the perception of humans has been investigated before computers have been invented for instance the prospective projection process has already been described in the 15th century by leonardo da vinci johann heinrich lambert after which the lumberjan reflection is named has made important contributions to the area of photometry and karl friedrich gauss has contributed to a lot of different aspects of science but in particular relevant to name here is the method of least squares which is a very useful tool across all of sciences but in particular also in computer vision is used everywhere and then in the 19th century charles wheatstone has looked at the problem of stereopsis and producing stereo images so this has happened all like several hundred of years before the history that we are looking at what you can see here is the so-called perspective graph this is a tool that leonardo da vinci has developed for drawing better pictures of the real world that are that adhere better to the actual perspective projection that a human observes and that tool is basically a focal point where the eye of the observer or the artist is fixated and then the picture is drawn on a glass surface such that the points that are drawn correspond to actual 3d points in the scene and the perspective effects are taken into account by changing the distance between this glass surface and the focal point and what leonardo da vinci uh writes about perspective is the following perspective is nothing else than the seeing of an object behind a sheet of glass smooth and quite transparent on the surface of which all of the things may be marked that are behind this glass this is this process that i just described all things transmit their images to the eye by pyramidal lines and these pyramids are cut by the set glass the nearer to the eye these are intersected the smaller the image of the cause will appear so he had had a pretty good understanding of how perspective projection worked then in the 19th century in particular 1839 a technique called daguerreotype has been introduced which was the first publicly available photographic process invented by louis da guerre and it has been widely used in the 1840s and 1850s before other photometric techniques have been invented and the way this works here on the right you can see such a camera and the resulting photograph is that a silver plated copper sheet was polished and treated with some films fumes to make it light sensitive and the resulting latent image that was then projected onto that sheet was made 
visible by fuming it with mercury vapor and removing its sensitivity to light by some chemical treatment it was then rinsed dried and sealed behind the glass it was very fragile but it led to pretty astounding photographs at that time already another important prehistoric landmark is the great arc of india survey where this was a multi-decade project to measure the entire indian subcontinent with scientific precision and was using trigonometric tools in some parts of this process it was under the leadership of george everest after which the mountain was named and who was responsible for the survey of india and what they did is they did some form of manual bundle adjustment the techniques that you're gonna get to know during this class and that we're gonna use um in the context of computers they have done this manually in order to prove amongst many other things that the mount everest is the highest mountain on earth so now let's look a bit into the more recent history of computer vision if we want to categorize which is quite difficult i have to admit the different eras of the computer vision research this is what i would propose so there's multiple waves of development in the 1960s the early approaches which we're going to see in a few slides are approaches that were detecting lines and trying to fit simple geometric blocks to these lines in order to understand the world and for instance to give a robot a understanding of the world such that the robot can act in that world but the limitations became clear very early on one limitation was that low level vision was more difficult than was fought initially and so this led in the 70s to a surge of interest in low level vision research accumulating in the development of stereo and optical flow and shape from shading and other techniques in the 80s neural networks became popular again in particular through the invention of the backpropagation algorithm and the first models for self-driving have been proposed then in the 1990s mark of random fields and graphical models which are also part of this lecture have been quite popular in the community allowing for removing some of the ambiguities in dense stereo multiverse stereo for example or in optical flow by using a global optimization perspective on the problem then in the um uh 2000s um this was the decade a decade of the feature descriptors and feature detectors like sift which enabled for the first time really large scale structure from motion 3d reconstruction and recognition approaches and then from 2010 onwards neural networks have made a real impact have surpassed the classical techniques that have been there um in the computer vision community on many important problems and they are the dominating techniques today whenever large data sets are available leading to a very quick growth of the field and also commercialization so let's start with uh at 1957 with stereo we can see here on the right is this is still an analog implementation but it's the first implementation of stereo image correlation given two images and in this case these were photogrammetry images image taken from an aerial perspective in order to recover an elevation map from the earth's surface given these two images correlate features in these images such that such a digital elevation map appears and this was the first machine that was this book this process was done manually until then but this was the first machine that was automatizing this process through such an analog implementation of a stereo correlator 
implementing many of the techniques that are still useful and part of the state-of-the-art models today like image pyramids and course to find estimation and warping etc then in [Music] 1958 rosenblatt invented the famous perceptron and demonstrated with this very simple linear classifier that was trained using the perceptron algorithm that it's possible to classify simple patterns for instance into male and female and there were some nice properties about this algorithm for instance novikov proved the convergence of this algorithm but it was overhyped at the time rosenblatt claimed that percept the perceptron will lead to computers that will walk talk see write reproduce and are conscious of their existence and this was expected to happen over the course of the next decade but we still don't have machines that can do all of this today then in 1963 with which is maybe the event that many in the computer vision community consider as as the real cornerstone of computer vision larry roberts the one of the inventors of the internet actually but in his phd he was concerned with image understanding particular machine perception of three-dimensional solids and he wanted to understand images such that they are useful for robotic manipulation so in contrast to simple pattern recognition he wanted to interpret images as projections of 3d scenes and recover the underlying 3d structure of an object from the topological structure of the lines the approach preceded by taking images of simple objects on simple backgrounds extracting edges and then coming up with a 3d model that can be rotated and manipulated and that was supposed to be the input for a robotic system in order to interact with the environment so one of the motivating factors of computer vision was actually robotics at some point the community deviated a little bit from robotics and robotics developed its own field and computer vision developed its own field but then nowadays these two fields are converging again to this to to a unified research direction because people have realized that these independent developments are insufficient to actually develop robotic systems that work in the real world now this was a very simple system it had many problems and the challenges that were present or were heavily underestimated so it was completely uh not robust um to the to noise in the input image and to it was also only working for such simple scenes um with uniformly colored objects and perfect lighting and simple backgrounds and one example that's also often given for this underestimation of the challenge of computer vision um is this paper or this this ai memo from seymour papert where in 1966 they thought that they can take a couple of interns over the summer and solve computer vision and as we know i mean computer vision in many aspects has made tremendous progress but it's it's far from being solved even in terms of the goals that they had at the time in 1969 minsky and poppard published a famous book perceptrons that showed several discouraging results about neural networks and that led to a decline in research on neural networks that only later in the 80s and again in the in 2010 reappeared resurfaced but it significantly domina uh uh it significantly uh contributed to this decline into the dominance of some symbolic ai research in the 70s which nowadays plays a relatively small role and hasn't led to a lot of success stories compared to what machine learning has led to in 1970 um some researchers at mit developed the copy demo which was 
the culmination of this research on block world's perception where we were using such block worlds perception algorithm to recover the structure of a simple scene and then have a robot plan and build a copy of a set of blocks from another set of blocks you can see this illustrated here there's the robot arm there's a set of blocks that should be then formed into an arrangement to replicate another set of blocks and this task already involved vision planning and manipulation but it was realized very very quickly that this approach that was taken the low level edge finding is not robust enough for this task it's very brittle and so there was a lot of attention created on improving low level vision because it was thought that once low level vision has been solved this task can also be solved but today we know that there is more challenges to it than just solving low-level vision and that solving low-level vision in isolation is also not easy in 1970 horn published the famous technical report in his phd thesis basically on shape from shading on how to recover 3d from a single 2d image and based not on geometric cues like lines and edge and point features but looking at the shading of the surface so you can hear for instance see that at the side of the nose we have much darker colors than on the front of the nose and from just the intensity of the colors there can be formulated constraints that if if they are well integrated leads to a elevation map of the surface however it's a very ill-posed problem there's many solutions to it and so the method required a smoothness regularization regulation to constrain the problem i will see such type of approaches also during the lecture in 1987 uh 1978 um intrinsic images have been proposed as a intermediate representation the idea was to decompose an image into in its different intrinsic components into different 2d layers this is not a 3d representation it's a 2d pixel representation for instance reflectance separate the reflectance from the shading or separating the image into motion and geometry let's say now this was believed to be useful for many downstream tasks such as object detection as object detection of this object here becomes independent of shadows and lighting which makes it more invariant and should help object detection and this has this intrinsic image the composition has not has developed into its own field with many follow-up papers on this approach in 1980 photometric stereo was developed by woodham and a paper called photometric method for determining surface orientation from multiple images in contrast to shape from shading here multiple images were used with known or unknown light source position but a point light source position and so in in order instead of using heavy smoothness regularization here in this case multiple observations per pixel are available and it can be shown that if you have at least three different observations per pixel you can actually reconstruct the surface normal from which you can get you can integrate uh depth or geometry and this has led to many approaches that are also in industrial applications today with unprecedented detail and accuracy now the assumption of the shape from shading approach was that the object surface was homogeneous and following a lambertian reflectance in this case also the lambertian and homogeneity assumption were used but they have been later on subsequently relaxed and so this approach became more powerful it could handle objects that were non-laboration and that also had 
texture 1981 um despite the key ideas having been around for more than 100 years um formerly the essential matrix has been reintroduced by logged longet higgins in a paper called a computer algorithm for reconstructing a scene from two projections the essential matrix defines two view geometry we have two cameras here this is a left image plane that is a right image plane and this is a camera center left and right so the essential matrix defines this geometry as a matrix that maps points for instance this point here on the left to a so-called apipola line here on the right such that if i have taken these two images at the same point in time or the scene has been kept static during acquisition it is sufficient to search along this 1d line for the correspondence of this point in order to being able to triangulate it to reconstruct the 3d geometry of that point or many points to form a surface and so this was really a foundation of the development of stereo techniques has also been shown how this matrix can be estimated from a set of 2d correspondences which is the extrinsic camera calibration problem in 1981 the early stereo approaches stereo is the is a method for extracting depth images from our depth maps from two images just as we humans observe the world with two eyes if you have two images taken from slightly different vantage points you can also reconstruct 3d using the epipolar geometry as shown on the previous slides but early approaches that correlated these features didn't work very well because of all the ambiguities because points in these two images looked similar despite they were not the same 3d points so in 1981 additional constraints have been added in particular they use dynamic programming basically the viterbi algorithm to introduce constraints along individual scan lines smoothness constraints occlusion constraints etc now this allows for overcoming ambiguities in the stereo matching process but because each scan line was processed independently it led to streaking artifacts between the scan lines in 1981 also the most popular paper on optical flow has been published called determining optical flow from horn and chunk it's a famous horn and shank algorithm optical flow is an even more challenging problem than stereo because it's a 2d surge problem optical flow doesn't assume that the scene remains static between the two images have been taken but things can move arbitrarily and optical flow refers to the pattern of apparent motion of objects surfaces and edges in a visual scene here on the right is an example where you can imagine these little errors indicating the motion of a sphere that's rotating and this is actually this is an actual measurement that has been made with this algorithm now the goal for optical flow is for every pixel to determine how far the pixel has moved between two frames that happening have been captured for instance of a scene where a sphere is rotating or a car is moving and or densely tracked the pixels in other words between these two frames and it has been investigated already by gibson in the 1950s who was a psychologist to describe the visual stimulus of animals for instance when animals are approaching a surface for landing this low-level cues are very important and so hornchunk provided the first algorithm it was a variational algorithm that integrated this prior constraints in a mathematically concise fashion in order to recover dense optical flow fields now this algorithm had a lot of assumptions and it was subsequently revised 
relaxing these assumptions and yielding better optical flow algorithms but this is really one of the seminal papers also semillon also in a similar way um in 1984 discrete markov random fields have been um considered as a way of encoding prior knowledge about a problem such as stereo or flow or image denoising and the idea is similar to this discretized variational flow formulation where constraints between neighboring pixel sites have been introduced but now in this case it's a discrete formulation and there's a whole variety of inference algorithms that are used in our days for optimizing such problems such as variational inference sampling belief propagation graph cuts and so on in the 1980s the 1980s also have been a time where part-based models have been or have become popular for instance in 1973 the famous pictorial structures model in 1976 generalized cylinders or in 1986 super quadrics as we can see here on the right this is our a set of primitives that are parameterized with just a few parameters these are very basic 3d shape components that despite being very simple can be composed into quite complex scenes as we see here with very few bits so very few parameters very small storage we can create 3d scenes that are semantically meaningful and of course being able to infer such part-based representations is also useful for downstream tasks such as in robotics where we want to have a good understanding about the geometric relationships of different objects or parts now in the 80s the first ideas haven't worked so well but they have been revised over the last couple of years using deep learning techniques and now start to work much much better than they have been working in the past in 1986 also the back propagation algorithm for training neural networks has been um popularized by rimmel hart hinton and williams and it's really the main workhorse of deep learning still today um that drives a lot of computer vision applications and uh the key of this algorithm as we all know is that it is it's using dynamic programming or dynamic computation in order to very efficiently compute millions of gradients of a loss function with respect to the parameters of such a deep neural network this algorithm has been known already since the 60s but this was the time where it was demonstrated to be also empirically successful and it's really the main algorithm that's used for training deep models today in various forms but this is the algorithm for computing the gradients in 1986 also self-driving cars have been demonstrated for the first time for instance this is a example that has been a car that has been developed by one of the pioneers of self-driving in europe and stigmas in the context of a european project called prometers it was demonstrated in collaboration with daimler-benz and this first vehicle was driving up to 40 kilometers per hour on a highway and then follow-up vehicles were driving also up to 150 kilometers per hour there was a famous ride from southern germany to denmark and back now it seemed already at the time self-driving wouldn't be very far away still today we don't have self-driving vehicles everywhere on the street so it's it's also an indicator for underestimating the difficulties of some of these problems in particular in self-driving where the accuracy is important and where safety is important and where you have to have models that are just very robust and reliable 88 at cmu the self-driving car alvin has been developed which was the first self-driving car that was 
using neural networks for self-driving so instead of these models that were using more like today self-driving vehicles different modules for perception and planning and control this was a vehicle that was just taking the input image passing it through a neural network and outputting the steering control and it was demonstrated that on somewhat simple scenarios was able to drive up to 70 miles per hour already at the time 1988 where also the computer of course was much smaller the available compute so now back to computer vision in 1992 structure from motion has made progress thomas and canada published their famous factorization paper that allowed to estimating the 3d structure from 2d image sequences of static scenes with only a single camera and enclosed form solution using a singular velod singular value decomposition at least for the autographic case and it has later been extended to the projective case and today these type of optimizations are then done using non-linear least squares but this is a very simple method that under certain assumptions can be solved with simple linear algebra um there was also pro progress on processing point clouds from stereo or lighter or laser scans for instance the iterative closest point algorithm that was able to register to plot point clouds by iteratively optimizing a rigid or non-rigid transformation in this example here that we have this green template and we have this red model and if we run that algorithm on this um this green template iteratively converges to this red model such that the point clouds are registered as closely as possible here in blue is this is the fitted template after convergence of this algorithm and this is of course a useful algorithm if you have 3d scans to build bigger 3d models to estimate relative camera poses for odometry or to localize a robot with respect to a map let's say in order to integrate 3d information from multiple viewpoints or sensors another important technique that is still used today that has been introduced in 1996 in computer graphics is called volumetric fusion by curling levoi and it's a technique that considers surfaces implicitly as a serial level set of a field that's defined discretely in space and that aggregates multiple of these observations so here these blue points are in front of the surface and the red points are behind of the surface if you have multiple of these observations now this algorithm basically averages these observations and has certain properties in order to fuse multiple partial views of an object into a coherent 3d object a larger 3d surface of the scene in 1998 there has also been first real progress on multiview stereo this is a seminal work by fogueras and caravan called complete dense stereo vision using level set methods they formulated stereo not as a matching problem but as a reconstruction of the images modeling the 3d surface as directly as the level set um of an optimization formulation and projecting that into all of the input views and trying to minimize the reconstruction error which allows for flexible topology so many of the ideas that we are still exploiting today were already present in this paper and they had a proper model of visibility and some convergence guarantees there were also some other approaches in this era um but that were rather dead ends like voxel coloring or space carving that made assumptions that in reality just don't hold and so we're restricted to very artificial scenes but this general level set and variational optimization or 
multi-view stereo is something that has survived until today in another example of global optimization in this case for stereo in the context of discrete markov random fields was the gruff cuts algorithm that has become popular in 1998 um in the paper from boycott fidel mark of random fields with efficient approximations it was an algorithm that comparably fast comparably efficiently was able to provide a global solution to a optimization problem that included unary and pairwise terms so unary terms are basically per pixel observations or correlation measures and pairwise terms encode smoothness constraints between pixels for instance that it's more likely that two neighboring pixels have the same disparity or depth depth disparity is the inverse depth as we'll learn about in the next lecture later versions also included specific forms of high order potentials and in comparison to these scanline stereo methods that i've shown previously this was a real global reasoning method that was reasoning about all the pixels in the image simultaneously not just about individual rows of the image and therefore led to much more consistent results in 1998 convolutional neural networks have been proposed by le coonetal and they are still one of the primary workhorses of computer vision deep computer vision models today after more than 20 years convolutional neural networks implement spatial invariancy via convolutions and max pooling and through this weight sharing of the kernels they reduce the perimeter space and at the time they only got good results on mnist which is this very simple numbered recognition data set and therefore it didn't get too much attention in the computer vision community it was more in the machine learning community and then only in 2010 when alexnet demonstrated that these type of models also produce competitive or state-of-the-art results on more challenging data sets that the computer vision community was interested in data sets of natural scenes basically then these models were adopted in 1999 bluntz and feder demonstrated their morphable model for single view 3d phase reconstruction they took a repository of 200 laser scans of faces and built a subspace model from that that was then subsequently fitted in terms of both geometry and appearance to real images with impressive uh performance at the time so these results are really stunning today and and also form the basis for some of the most advanced pose human body and pose and shape estimation techniques today a major landmark in 1999 was also the development of the so-called scale invariant feature transform which is an algorithm for detecting and describing salient local features in an image you can see an example here on the left these little squares are little images a little features that have been detected and described in these images and there's another picture of the same object it's a bit hard to see here where the perspective has been changed but the same roughly the same features have been detected at the same locations in these two views and since you have now solved the correspondence problem you can use this for tracking and for 3d reconstruction etc so this really enables many applications for instance image stitching we will see in the next lecture reconstruction motion estimation and many more one of these demonstrations was in 2006 this photo tourism paper by snavely seitz and zieliski presented at siggraph you can already see how computer vision and computer graphics are both have both contributed to 
this progress and are intertwined the idea in phototourism was to take a large scale or to produce a large-scale 3d reconstruction from internet photos and the key ingredients here really were the availability of the sift feature matcher and large compute resources and large-scale bundle adjustment with a very clever pipeline it even led to a product in the end there was the phototourism product by microsoft which has been just discontinued now but um it was quite quite attracting a lot of interest at a time where you could take your photo collections sort them in 3d and then browse them in 3d looking at the scene from novel viewpoints and so on in 2007 pmvs patch-based multi-view stereo has been proposed by furukawa and ponz this was also a seminal paper on 3d reconstruction because for first time demonstrated reconstructions of various different objects at a quality that hasn't been seen before and also at a very high degree of robustness so the performance of these 3d reconstruction techniques continued to increase here's one more example of large-scale 3d reconstruction it was a paper building rome in a day at iccb 2009 there was about building a 3d model of landmarks of cities but not from images from a single camera where you were where you would know like exactly the parameters of this camera but these were images from unstructured internet photo collections so images were automatically extracted from the internet by searching for say colosseum and then the algorithm was run for a few hours or sometimes a few days in order to reconstruct these landmarks or entire cities where people have taken photographs of and there was a follow-up that's focused on making this more efficient running it not on a large gpu cpu cloud but running it on a single computer here's an example of what this looks for rome with 75 000 input images it is not a dense reconstruction it's a sparse reconstruction but it's still quite precise it contains multiple landmarks as there is photos from many different places in rome these landmarks are well connected so it's possible to optimize all of this jointly now this was one of the earliest methods the quality of these approaches has tremendously improved since then and one of the state-of-the-art methods is called call map that produces much more dense and much more detailed results than what you can see here but this was the precursor of this newer line of methods in 2011 the elect the kinect camera has been put on the market it was an active light 3d sensing device from microsoft and there's actually three generations of this camera this is the newest version the kinect usher that has just been released two years ago the early versions this is the first version actually failed to commercialize the idea was to use this 3d camera and there was an algorithm that was extracting using machine learning uh the uh like it was segmenting the image into different body parts and was estimating the human body pose that could be used as a control that was allowing to have the user of let's say games or certain programs interact much more naturally with the computer by being the controller um the human being the controller not having a separate controller but it kind of failed um it was discontinued this first version but it was heavily used for robotics and vision research because it was the first sensor that was cheap it was just 100 or 200 dollars and it allowed compared to the price of other active depth sensing technology that was available at time allowed to navigate 
robots indoor robots for instance much more easily compared to other sensors that were available at the same price then in 2009 2012 the development of imagenet and um the capability or the demonstration of neural networks to solve imagenet this classification task much more bad much much better than classical techniques has really ignited both the area of computer vision and the area of deep learning and led to this um this increased interest in uh both areas today so imagenet was this as already mentioned this large data set with 10 million annotated images each image has a one single class label out of 1000 categories and the goal is to classify the correct category and alexnet was the first neural network that significantly advanced the state of the art and what they did is they used a gpu training so this was the first effective use of gpu training for neural networks they used a specific architecture deep neural network architecture and they used a lot of data demonstrated that deep learning is actually effective also on challenging real world data sets um it was realized in many sub-areas of computer vision that data sets are actually one of the important pieces both to drive progress in computer vision because it allowed people to compare their approaches on a fair and equal basis and evaluate them rank them on leaderboards but also having these training sets as a possibility for training algorithms machine learning algorithms such as deep neural networks that had a lot of parameters effectively so here's a list of a couple of data sets that have been used in various image related tasks in this example here self-driving now creating these data sets in particular annotating these data sets is expensive and therefore there has been also a lot of interest into how can we exploit synthetic data sets where maybe asset creation is also expensive but once you have created the assets you can actually render a lot of scenes and a lot of constellations of objects and this is still a very active research area today so here's an example with a very simple data set but it impressively has been demonstrated that even with the simple data sets this might be very useful for learning some tasks so here's a data set with flying chairs on random background images and that has been true that has been used for training deep optical flow methods for optical flow it's really hard to get ground truth labels because you need to know for every pixel how far it has moved and that's something that's almost impossible to annotate so with these type of data sets there was a lot of progress despite these images that you see here look very different from the images that the algorithm later was evaluated on because these neural networks are rather black boxes there was also a search in interest of a visualization tag about visualization techniques that allow for visualizing what's going on inside this neural networks here's one of the early examples by sailor and cnn features of the shelf an astounding baseline for recognition and so they visualized image regions that most strongly activate various neurons at different layers of the network and found that higher levels capture more abstract semantic information and lower layers capture less abstract semantic information and so on in 2014 it has also been demonstrated that it's very easy actually to fool deep neural networks but they are actually not as robust as one might might have thought until then by just adding a small tiny change this is magnified here to 
the input images such that the classifier in this case an imagenet type classifier confuses this image or this image or this image with a very high probability as ostrich and this is this also sparked a whole research area on how to build more robust deep models and on the other hand how these more robust models can still be attacked in 2014 another important landmark paper was the generative adversarial network paper and the variational auto encoder paper that allowed for training in an unsupervised fashion models they are able to generate new samples they haven't been seen but they are still photorealistic and even like again for example in this case on faces here you can see the development over the years on faces these images that are generated by these methods are really hard to distinguish from real images even for humans so a lot of active research and image translation domain adaptation content and scene generation and 3d gans that has followed up on this seminal paper and i want to show you some results on a method called style gun that is still considered state of the art for image generation [Music] gans or generative adversarial networks have captured the world's imagination with their ability to create ai generated images of landscapes animals cars and people without any human supervision nvidia's stylegand 2 features redesigned normalization and improved conditioning that delivers a new level of quality and creative control [Music] for this demo we used transfer learning and trained the model on an nvidia dgx system using a large data set of unique paintings it can produce endless variations of ai generated characters in a seemingly infinite variety of painting styles [Music] so let's move on as you can see we're spending a lot of our time on the last 10 years or so which is where a lot of the progress a lot of the recent progress has happened where a lot of the algorithms started to work one example is also deep phase this was one of the first models that demonstrate performance in face recognition on par with human face recognition performance or even outperforming humans and it was made possible by a combination of classical techniques with deep learning and that's also maybe one of the messages i want to convey in this intro here that despite we're going to discuss a couple of very old techniques in this lecture a lot of the old techniques and ideas are still valid and still useful not all of them but many of them are still valid and useful and rediscovered or re-implemented today in combination with data-driven models leading to better results then another another line of research in around 2010 was concerned with more holistic 3d scene understanding so parsing rgb or rgbd stands for depth images into holistic 3d scene representations and there were several papers for indoors and outdoors i've chosen here one paper from our research group where we were interested in in 2013 in understanding how a traffic situation looks like just from a monocular grayscale camera you can see on the top the input image with some object detections this was actually an not a deep neural network object detector but a classic object detector at the time because there were no deep object detectors available and on the bottom you can see the layout of the intersection and the situation that has been inferred from that monochrome video sequence alone in 2014 there was also an impressive demonstration of 3d scanning techniques that allow now for creating really accurate replicas for instance of 
humans in this case paul dippovec's team scanned obama and just created a 3d presidential portrait that is now exhibited in the smithsonian museum and there's a link here if you're more interested into this there's a very nice video explanation also of this process how we set up the cameras and so on then in 2015 deepmind has published a paper how to learn human level control of video games and later also other games through deep reinforcement learning that was learning a policy directly from the state the image to the action and there was also intro like again combining computer vision techniques with uh robotics techniques like reinforcement learning and was one of the first applications of these deep models demonstrating that deep models deep reinforcement learning is also possible despite the sparsity of the reward signal of reinforcement learning this is an example here also from tubingen a very famous paper from matthias betgis group called image style transfer using convolutional neural networks where they demonstrated how it is possible with a pre-trained neural network to take an image a real image and a painting and then produce an image that is demonstrating the content of the real image but in the style of the painting and there's a really nice website where you can try this yourself with your own photographs which is fun and i encourage you to try out then in 2015 or from 2015 on semantics beyond classification of images but more fine-grained semantic was extra was was inferred using deep neural networks one of the most famous problems in computer vision is called semantic segmentation and these were the first deep models for semantic segmentation in 2015 here i listed a couple of these models that have been developed in this area in semantic segmentation the task is to take an input image like here so these are two different examples so we take this image and we want to produce a semantic class label now not for the image but for each individual pixel in that image and as you can see it's it's quite the results are quite precise quite aligned with the object boundaries and today this is an example here this video shows an example from 2016 the results have improved since but as you can see even in 2016 on this very challenging data the results that were obtained are quite consistent and quite precise so it was really demonstrated as semantic segmentation that was very brittle and didn't work very well with classical techniques classical features and classical classical machine learning techniques such as support vector machines or structured support vector machines did start to work when using representation learning here's another example this is one of the still state-of-the-art object detectors and semantic instance segmenters where the goal is now to not assign only the semantic label but also the instance label to each individual pixel it's called mask rcnn and at the time produced a state-of-the-art result on a challenging data set called microsoft common objects in context and because things started to work well people have looked at other tasks so there was a growth in in new tasks that have emerged um from this so for instance in 2017 image captioning was introduced as a new task and also visual question answering an image an image captioning the task is to take an image as an input and produce a sentence for that image and as you can see here um this uh in some cases works quite well in some cases it gets uh some it produces some weird errors so people were quite 
excited by these results but then also quickly realized that these models that have been trained were somehow also adopting the data set by us and uh we're still lacking in a real understanding of the images and in common sense now with the growth of data sets this has changed a little bit but it's still a problem another important area of research that has developed significantly since the 2000s is human shape and post estimation where the goal is given a 3d point cloud or an input a single input image to produce a 3d model of the person or multiple people in an image and of course this has many important applications in human computer interaction or understanding humans better in health etc so there were rich parametric models developed like simple and star here in tubingen that even allowed for regressing these these model parameters from single rgb images alone and more recent trends go towards modeling very detailed post-dependent deformations and also clothing which is which can be quite challenging so here's an example of a model already a few years four years ago we can see how let me play that again we can see how independently in this case frame by frame that's why it's jittering a little bit this uh parkour um person here was tracked and the post was estimated for this person and then in 2016 from 2016 on deep learning has also entered the area of 3d vision before that deep learning was successful on classical recognition problems and 2d estimation or two and a half the estimation problems but then from 2016 when people realized we can also apply these models for 3d vision um while in the beginning people used voxels and point clouds and meshes as representations now there's a lot of hype around these implicit neural network based representations and one class of this lecture is also devoted to these type of models these allow for predicting entire 3d models even from a single image as input and they also allow for depending on the model for predicting geometry materials light and motion good i want to conclude with a little overview it's a very incomplete overview of course of applications and commercial products where we're using computer vision technology in our everyday is live there's for instance google's portrait mode for the smartphone which allows to blur the background very realistically of a portrait there is the skydio um or skydio2 drone that just came out that allows that navigates itself allows to follow a person through complicated terrain without crashing into obstacles there's of course many companies um working on self-driving cars that haven't fulfilled the promise yet but um there's still a lot of a lot of interest and a lot of progress in this area there is a growing [Music] interest in also virtual reality and augmented reality with new devices like virtual reality glasses but also the hololens which is an augmented reality device and it uses a lot of computer vision for tracking the head for detecting objects in the environment and for estimating geometry and localizing itself and here on the right is an example where iris recognition has been used to identify a person 18 years later that otherwise wouldn't have been identified as the same person now this concludes the short historical review what i want to mention in the end is that there is a lot of challenges that are still unsolved this is an incomplete list but it gives you an idea of what are the things that people are working on it's of course still difficult to get big data sets in particular 
annotated data sets so people look into an and self-supervised learning algorithms they look also in interactive learning algorithms that more naturally interact with the environment accuracy accuracy is an important topic and holistic modeling for instance for getting self-driving cars as robust as humans or more robust than humans robustness and generalization is an important aspect inductive biases are important an understanding of the black box neural networks and the mathematics behind is an active research area many of these models in particular in 3d have high memory and compute requirements and so efficiency of these models and new representations are constantly developed and there's many also ethics and legal questions that are currently investigated that are also result of the demand of commercializing some of these ideas into tools that are available to us in our daily lives that's it for today thanks |
Computer_Vision_Andreas_Geiger | Computer_Vision_Lecture_62_Applications_of_Graphical_Models_MultiView_Reconstruction.txt | in this unit we're going to see a second application of graphical models in the context of dense multiview stereo reconstruction which is the problem of reconstructing a 3d scene not from just two images but from multiple images of that scene and the advantage of course similar to in the case of sparse reconstruction that we have looked at in the beginning of this lecture is that we have more constraints and we can better resolve ambiguities and we can also reconstruct not just a two and a half d depth representation but really a complete 3d reconstruction of the scene so here's an example of what we want to achieve on the left is a set of input images i show only three here but in practice there's of course many more we use around 50 to 100 images for a 3d reconstruction and we want to develop a model now that allows for inferring such a 3d volumetric representation as we can see here on the right hand side or for inferring depth maps from that [Music] set of input images that we have observed now even for using many more cameras image-based 3d reconstruction can still be a highly ill post problem for example in the context of textures areas so here's this green area that has almost the same color everywhere and if we look at the scene from multiple different viewpoints in this case from two different viewpoints then so this is a side view imagine this is a side view here where this is the ground then the true surface is the green one here but because all of these pixels are roughly the same color or intensity any of these other surfaces the blue one the red one or the black one are also plausible surfaces so there's still a lot of reconstruction ambiguity and of course we can remove some of these ambiguity and ambiguities by introducing appropriate priors such as maybe assuming that this region here is flat but the stronger these priors are the stronger our assumptions are the stronger will also be or the larger will be the mistakes that we're doing in case these assumptions are violated and therefore coping with and exposing uncertainty is really essential so the question is now can we utilize the power of probabilistic graphical models to formulate dense 3d reconstruction from multiple views in a probabilistic fashion this is the representation that we're going to use we're going to discretize the 3d volume that we consider into a discrete set of so-called voxels each of these little cubes here is a voxel and depending on the resolution that we apply in each dimension you have more or less voxel so here is just a five by five by five voxel grid but in practice of course we use much higher resolution voxel grids in the order of 1000 by 1000 by 1000. 
so we have discretized the 3d volume now how do we represent the 3d geometry and appearance inside that volume well for each of these voxels each of these little cubes we're going to assign with two random variables the first random variable is the voxel occupancy it's a binary random variable associated with each of these voxels that tells or that is equal to one if that particular voxel is occupied and that is equal to zero if the voxel is empty and the second variable that we're going to associate with each voxel a is the voxel appearance and that's a real number in our case here are going to consider grayscale images or inference just in terms of intensity values but we could also use r to the power 3 in order to represent color or even more more complex properties of the of the appearance but we are going to focus on the most simple case here which is just a one-dimensional real number a scalar that represents at each um voxel how bright that voxel appears in the images and of course it makes the lamborghini assumption and it's a very inaccurate representation but it's the most simple representation that we can start with and we're also going to introduce a little shorthand notation that helps us formulating the problem later on so here's a camera this symbol indicates a camera and on the image plane of the camera there's many pixels now we know already that the combination of the camera center and the particular pixel defines a ray that passes through 3d space by projective geometry as discussed in lecture number two so we can talk about rays or pixels interchangeably and when i talk about pixels arrays that effectively means the same for a particular camera and a particular pixel that corresponds to a particular array and one particular array is illustrated here and what we're going to do with this shorthand notation bold face o and bold phase a with index r later on we're going gonna omit that index for simplicity but for each of these rays each of the cameras has a set of pixels and each of these pixel corresponds to rays so from all of the cameras uh together we have many more rays than we have from a single camera and we consider just a generic ray here that could be any of these rays from any of the cameras corresponding to any of the pixels so for each of these rays we have this boldface o and boldface a variable which simply collect the set of random variables from the entire volume that are intersected by that ray and they collect it in the order that they are intersected with when starting from the camera center so in this case here we have two random variables o one and a one associated with the first voxel that's intersected by the array and then we have two variables associated with the second voxel and so on and that's how we define o r and are in the order the ray intersects the voxels in this case nr which is the number of voxels that are intersected by ray r is five now in order to solve a nil post computer vision problem such as multiview of 3d reconstruction we first have to understand the image formation process and that's illustrated here i'm often using color to better illustrate but really what we do in in the end consider only is the intensity but think of color and intensity as being the same so how does image formation work well intuitively that's not very hard to answer for solid objects the color of a particular pixel corresponding to a particular array a particular camera is simply well the first occu the color of the first occupied voxel that is hit by 
that ray so if we go from the camera center along that ray through that 3d volume the first two voxels here are empty space free space but this third voxel here is occupied and has the color or intensity red then this color should appear on that pixel because that's the first occupied voxel color now how can we formulate this mathematically well let ir denote intensity or color at the pixel r and o i and ai be defined as before these occupancy and appearance variables sorted along that ray where they are intersected by the array then the intensity at the pixel r is simply this expression here now this looks a little bit complicated at first glance but it's really easy to understand what we have here this term here is simply telling us well or this term here is simply equal to one if i is the first occupied oi is the first occupied voxel and why is that true well only if oi is the first occupied voxel then this term is one and this term is zero so the entire product here is also one and the product of one and one is one so only in the case of oi being the first occupied voxel this expression is one and in all other cases this expression will be zero because there are well either oi might not be occupied or any of the previous voxels might be occupied so this expression is exactly one if oi is the first occupied voxel and now of course if oi is the first occupied voxel then we want to take the color of that voxel so we multiply a 1 with that color or intensity value and copy it to the pixel and because we don't know which of these voxels along the ray which of this n five in this case voxels along the array is the first occupied voxel we need to sum over all possible hypotheses but only one of this term in only in one of these terms this expression here will be equal to one in all other ones um oh i will not be the first occupied voxel so the color will not um be or the color of that voxel will not be taken into account by this expression because this term here is one only if oi is the first occupied voxel so now we have a simple formula that represents exactly the fact that the pixel should the pixel intensity or color should correspond to the intensity or color of the first occupied voxel okay now as mentioned before we want to formulate a probabilistic model and want to do inference in a probabilistic model in order to expose uncertainty so given the knowledge of the image formation process we now define a probability distribution over all the occupancy variables and all the appearance variables in this volume so it's a very large number of occupancy and appearance variables and we denote this by capital o and capital a and we are factorizing this distribution over all occupancy and appearance variables into a product of unaries and so-called ray factors and now these ray factors are higher order potentials they are not like in the stereo example pairwise factors but they are really higher order potentials because they connect all the variables along the ray all the o's and r's along the a's along the ray which could be hundreds of variables and so we need to find an efficient way uh tractable way of actually doing inference in this model because we have such high order factors in that model now what are these factors well the unity potentials are just some simple prior knowledge that we can specify about general occupancy of the occupancy variables or the general occupancy of the fear of the scene so this is a simple bernoulli independent per side bernoulli distribution where we have a 
hyperparameter gamma that controls how much we believe voxels in a scene are empty or not empty and as most voxels in a scene are empty typically gamma is small chosen smaller than 0.5 and it helps a little bit in cleaning up some of the outliers that are inferred by this model so it's a useful term to have but the really important term here is of course this ray potential that models the consistency of the 3d reconstruction that we want to infer and all the images all the observations and as already mentioned this factor or potential here is a higher order one because it's connected to all the occupancy and all the appearance variables along the ray all the voxels that are intersected by the ray and how do we choose that well we're going to take advantage of our image formation model this is exactly the same expression as before except that we have replaced a i here with the gaussian centered at i r and the gaussian of a i because we can't assume that the intensity [Music] of a particular voxel is observed exactly the same way in all the images because of noise and because our simplistic assumptions here don't hold true even in the lamborghini case it would actually not hold true and so we are allowing for a little bit of slack so we're allowing for a and i to deviate a little bit and this is a hyper parameter that we can tune with this sigma parameter of this gaussian distribution so there's a little bit of noise allowed by adding this noise term here but still if you think about this term um a configuration of o's and a's that maximizes the joint distribution the probability is one where for all of the rays in all of the cameras so this is the ray set that goes over all of the rays of all of the cameras is um increasing the value of this potential and that is exactly increased if um the first occupied voxel is similar to the pixel that is corresponding to that ray if a is similar to i then this term will be large and if it's the first occupied voxel then this will be one so we are getting a large value here in this potential so we not only have a constraint from a single image of course which would be completely ambiguous but now we have these constraints from all the images and this illustrates again that only if we have the constraints from all the images then we can on one hand infer a complete 3d reconstruction but also reduce the uncertainty because we have neighboring many neighboring views that see the same surface and this leads to better surface reconstructions so we want to infer a 3d reconstruction here that is consistent with all of the images that have been taken from that scene this is the model now how can we do probabilistic inference in that model we're going to use the sum product algorithm and the simplest thing we can do in terms of an inference question is to ask well for each of the input views or any novel view any ray actually in the scene what is the depth along that ray if we can answer that question then we can well we can reconstruct depth maps but we can if we can get the depth along each potential ray in the scene then we can we have the reconstruction we can also extract the mesh etc so this is a question one of the simplest questions we can ask what is that along any ray in the scene can be arrayed from the cameras that have observed the scene but can be also any arbitrary ray what is the distance to the first occupied voxel along that ray that's what we're going to consider here so consider a single ray r in space as illustrated here let now dk be the 
occluding distance from the camera center or the origin of that ray to the voxel k along the ray okay that's the case if we want to infer depth maps right we want to get the metric distance so for instance voxel 1 is at 3 meters distance voxel 2 is at 3.5 meters distance etc that's just the definition and then the depth d along that ray be the distance to the closest occupied voxel so d takes of course a value from the set of these decays so d is either d1 d2 in this case d is equal to d3 because this is the first occupied voxel now what we want to obtain is of course the best possible depth estimate the optimal depth estimate and if you're familiar with bayesian inference and risk minimization um and if you're familiar with the base risk then you know that well the optimal depth is the one that minimizes some form of risk but what is risk in our in our setting so we want to find the d prime that's minimizing the risk over all potential depth values well the risk is simply the expected loss over the depth distribution so you want to find the d prime that's minimizing some loss refreeze in expectation with respect to the distribution of depth values along the ray and depending on how we choose that loss if we choose it as a squared loss for instance then this corresponds to the mean over the depth values along the ray if we choose l1 loss this corresponds to the median this is easy to see if you plug in these expressions in here you can see that this corresponds to the mean or the median now this is good um but of course this hasn't solved our problem yet because uh it only demonstrates what we need right this is an expression that's easy to evaluate but we need first to compute the marginal depth distribution p of d along each ray and that's the inference quantity that we are looking after so we're looking after a marginal and we know that marginals can be computed using the sum product algorithm so we're going to use the sum product algorithm or some form of the sum product algorithm here okay so but this is because of the higher order nature of these potentials a difficult problem and so we first have to simplify it and it turns out that due to the special setting here the equations simplify quite a bit and i want to show you some of these simplifications not all of them some of them you will find in in the paper so let's assume now let's consider again the depth distribution for a single array and assume that we have this particular configuration but we assume this without loss of generality so it could be any of the voxels could be the first occupied voxel but here we assume that the cave voxel is the first occupied voxel but the consideration that we're doing here holds for all case being the first occupying voxels if this holds true then the probability of d being equal to dk is of course the probability of this constellation because we have assumed ok is the first occupied voxel where well d takes dk corresponds to okay now this expression here is a marginal distribution with respect to all the distribution over all the voxels and appearance variables along the ray right because this is a distribution over only the first k occupancy variables and it doesn't even consider the appearance variables so we can write this in terms of a marginal distribution where we sum over all the o's that are larger than with an index larger than k and all of the appearance variables and because the appearance variables are real numbers we here we have to integrate now here we have again a marginal 
distribution because this is the distribution of all the occupancy and appearance variables along the ray and that's a marginal with respect to the joint of all the occupancy and appearance variables in the volume which are many more here we only consider the ones that intersect the ray so how can we compute a marginal distribution well we can use the sum product algorithm and then the marginal after running the algorithm is given as the product of the factor and the incoming messages that's what we have derived in the previous lecture so in this case of a factor of this ray factor this high order factor we have the product of the factory itself with all the messages that are coming in from the a the appearance variables along the ray and all the messages that are coming in from the occupancy variable variables along the array okay now this is a very nasty expression to calculate of course because we have a summation of a very large state space and also an integration of a very large state space this bolt a is maybe 300 dimensional because it comprises all the variables along the ray but luckily things simplify the first thing that we can observe is that this expression here because okay is the first occupied voxel simplifies to just the gaussian of a k given i right because we know that this this term here is one exactly for i equals k only the cave term here plays a role and we can plug that in but again this consideration that we're doing here holds for all the case okay so we have simplified this now what we can do also is we can pull out all the products all the messages where the of all the occupancy variables with i smaller or equal to k so that just the ones with i bigger than k remain here corresponding to the values that we sum over and similarly we can pull out the ak all the terms that depend on a k there is one here one product and this term so we have pulled out this integral over a k such that only the integral over a not equal k remains now why did we do that well if we look closely at this expression and if we assume that these messages are normalized which we can do these are very simple distributions then we see that well because we sum and integrate over exactly the entire state space here that is still left inside here this is over okay bigger than k and here we have all the elements that are bigger than k and here we have all the a's with i not equal to k which is exactly what we integrate over so if we sum and integrate over the entire state space of a normalized distribution then of course um this is equal to one right if i integrate over gaussian distribution then the integral is one so we're left with this expression and we see now that this expression is much simpler than before it doesn't require summation over very large state spaces but it just requires an integral over a one dimensional variable now and a product over several of these terms but it it doesn't have this exponential complexity from before and the intuition of this term that we have derived is also very simple the depth d is equal to dk exactly if voxel case occupied invisible this is the first term remember again that this is what we have considered in this case this is corresponding to voxel k is occupied and visible and the second the blue term simply says that it should explain the observed pixel value right so for that first occupied voxel a k should be close to i because then this probability here is large and the question of course is well how can we obtain mu and this mu of o and this mu 
of ak which are the messages the incoming messages um if we go back here these are the incoming messages right and we obtain them using some some product belief propagation message passing so here's an illustration of how we pass the messages we have these cameras that are intersecting voxels in this volume and then we pass messages from the rays to the factors and from the factors to the rays and we iterate until convergence now some of these messages like these messages from the unary potentials to the occupancies are very simple and the variable to factor messages are also simple because they are just a product but some of these messages are hard to compute in particular the ones that go from this higher order potential back to the variables however using similar simplifications as we have seen on a previous slide this can also be simplified the exponential complexity of this higher order factor also reduces to a linear time complexity but the derivation of this is a little bit technical and so i omitted it for the sake of this lecture if you're interested in this there is a very detailed and rigorous mathematical formulation that can be found in the supplementary of the paper listed below so to summarize what are the challenges and what are the solutions if we want to apply belief propagation to the problem of multiview stereo well the challenge the first challenge is that the mrf comprises discrete and continuous variables so it's not just a discrete problem but we also need to reason in this joint discrete and continuous space the second challenge is that the ray potentials are higher order and so lead to this exponential complexity at first glance and there are many factors each pixel defines a factor so it's a very it requires a lot of computation to be solved now the solution um to this uh is that the well the continue the this continuous problem we can tackle by approximating the messages using continuous belief propagation where we keep distributions in terms of mixture of gaussians and update them via important sampling so this is something that i haven't shown but it's something that you also find in the paper and then the messages can actually be calculated in linear time as i already mentioned and there is an exact derivation of these messages but is quite technical and then to address the third point what has been done in this paper is an octree course to find representation and heavy gp gpu parallelization in order to execute that algorithm so it's running on a gpu with efficient data structure in order to result in meaningful return meaningful results in reasonable time okay so let's look at some results here is a data set that has been considered in this work it's a data set from restrainpoidal that has been captured by flying around a city providence and capturing images of that city and uh at the same time there was a lighter that was measuring depth so that that accurate ground truth depth or relatively accurate round of depth was available for evaluation so this is what you can see here at the bottom is the ground truth tab and so this algorithm compared to previous algorithms local algorithms but also global algorithms that were not exposing uncertainty improves performance in particular in textureless areas as you can see by this gap in the curves but what is maybe more interesting is to look at some qualitative results so here's a particular patch of a particular input image and for the patch of course with this algorithm that i've just shown we can infer a depth 
map and so here's the depth map that has been inferred by a previous algorithm that was using maximum a posteriori inference and that wasn't exposing the uncertainty and this is the error map so the error map is a colored visualization of where errors are large and where errors are small blue means small errors red means large errors and these red regions here typically correspond to textureless areas and so in these areas this bayes-optimal prediction does better it produces less error and here is a video of the results you can see on the left a rendering of the appearance and on the right a rendering of the occupancy and what you can see here is that depending on the building a good reconstruction is possible or not so here's a building with a lot of texture it's easy to reconstruct so we have a high certainty also for these voxels to be occupied but here there is a building with a mirror-like facade where the appearance model completely breaks down and while this algorithm is not able to reconstruct that at least it knows that it's not able to reconstruct it because it exposes the uncertainty which is illustrated here by this white area now similar to the case of two-view stereo matching where we have also integrated higher order constraints in terms of objects here we can also utilize such shape prior knowledge and so in this extension of this paper this has been done because for many scenes like this downtown scene there is extensive prior knowledge available for example if you know the gps tag or the rough location of where the images have been taken you can simply query that in 3d warehouse and you obtain 3d models for that scene and that's what has been obtained for that particular scene here so you can see there's rough building outlines available for this scene and for indoor scenes for example many rooms contain particular types of furniture like ikea furniture and so for this type of furniture there's also a lot of cad models available online that can be used as prior knowledge however there's a lot of challenges also involved first of all these 3d models are often only coarse and inaccurate you don't really know the orientation and also the location so this has to be inferred as well and there might be occlusions or the retrieved models might actually not be present in the scene and the object size is not correct so these things have to be taken into account and in this model this has been done by using as you can see here a particle-based representation a sample-based representation of the 3d objects so each object is represented by a particle set and the spread of that particle set that has been inferred determines the uncertainty of the pose of these objects and i want to show you these results here so these are the shape models that have been used for joint inference of geometry appearance and the shape models and the algorithm also identifies which of these objects are very unlikely to be present and removes them it's of course a very hard problem there's some pre-processing steps involved but then once you have a reasonable initialization you can get results like this where you have now inferred not only the appearance and the geometry but also the type and pose and geometry of these objects in the scene and what you can see here for example for this particular building that we have seen before where there is a lot of ambiguities due to this mirror
facade that this building can be reconstructed a little bit better at least with this method so here you see the rendered depth maps to summarize probabilistic multiview reconstruction at least this particular model here on the positive side the probabilistic formulation using graphical models is tractable as these ray factors decompose and we don't have this exponential complexity and non-local constraints can be integrated via joint inference in 2d and 3d this is something i haven't shown here but there has been another related work that has shown that you can also integrate things like local planarity assumptions we have seen that also cad priors can be integrated and help to disambiguate textureless regions and using octrees reconstructions up to 1024 to the power of three voxels which is a really large number of voxels are possible so that's quite exciting however using this highly loopy higher order mrf only approximate inference is possible and it's also relatively slow it takes several minutes per scene on a gpu and the appearance term is very simplistic and not robust it doesn't take into account non-lambertian appearance and noise and outliers and also the resolution is quite limited by the discrete voxel representation that we're using here and later in this course we're going to see some other neural network based implicit representations that tackle this problem and are not restricted by the voxel discretization |
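The transcript above describes sum-product belief propagation on the ray-potential factor graph without writing the updates down. For reference, here is the standard discrete sum-product form of the variable-to-factor and factor-to-variable messages and the resulting belief; this is the generic textbook version, not the paper's continuous / higher-order specialization, and the notation (mu, N(.), b) is mine.

```latex
% Standard discrete sum-product updates on a factor graph
% (the lecture's model specializes this to ray factors over
%  occupancy and appearance variables).
\begin{align}
\mu_{x \to f}(x) &= \prod_{g \in \mathcal{N}(x) \setminus \{f\}} \mu_{g \to x}(x) \\
\mu_{f \to x}(x) &= \sum_{\mathbf{x}_{\mathcal{N}(f)} \setminus x}
    f\big(\mathbf{x}_{\mathcal{N}(f)}\big)
    \prod_{y \in \mathcal{N}(f) \setminus \{x\}} \mu_{y \to f}(y) \\
b(x) &\propto \prod_{g \in \mathcal{N}(x)} \mu_{g \to x}(x)
\end{align}
```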
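The lecture also states that the naive factor-to-variable message of a ray potential is exponential in the number of voxels along the ray but reduces to linear time. A minimal sketch of the underlying linear-time decomposition, under the assumption of independent per-voxel occupancy probabilities, is shown below: the weight of voxel i is the probability that it is the first occupied voxel along the ray. The names occ_prob and app_likelihood and the toy numbers are made up for illustration; this is not the paper's exact message derivation.

```python
import numpy as np

def ray_visibility_weights(occ_prob):
    """Probability that voxel i (ordered front-to-back along a ray)
    is the first occupied voxel: o_i * prod_{j<i} (1 - o_j).
    Computed in linear time with a cumulative product."""
    occ_prob = np.asarray(occ_prob, dtype=float)
    # probability that all voxels before i are free
    free_before = np.concatenate(([1.0], np.cumprod(1.0 - occ_prob)[:-1]))
    return occ_prob * free_before

def expected_ray_likelihood(occ_prob, app_likelihood):
    """Expected appearance likelihood of one ray under independent
    occupancies (linear instead of exponential in the ray length)."""
    w = ray_visibility_weights(occ_prob)
    return float(np.sum(w * np.asarray(app_likelihood, dtype=float)))

# toy example: 5 voxels along one camera ray
occ = [0.1, 0.2, 0.7, 0.9, 0.5]    # assumed occupancy probabilities
app = [0.2, 0.3, 0.9, 0.4, 0.1]    # assumed per-voxel appearance likelihoods
print(ray_visibility_weights(occ))  # sums to <= 1 (remainder: ray escapes)
print(expected_ray_likelihood(occ, app))
```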
Computer_Vision_Andreas_Geiger | Computer_Vision_Lecture_22_Image_Formation_Geometric_Image_Formation.txt | in the second unit we are going to discuss the geometric image formation process and basic camera models and we're going to see how points in 3d space are related to points on the 2d image plane the principle of projection um is an old one and as an example we can take for instance the human eye where light is passing through a lens and an aperture and is then collected by photoreceptors on the retina and the same principle holds true for the first camera obscura which is collecting light through a little tiny hole that's made into the wall of an otherwise entirely dark room and that gives rise to an image on the upper wall and then all the way until photographic the photographic camera has come to life so here's an example of the camera obscura where an entirely dark room is illuminated by the light that is passing through this tiny little hole and creating an image of the objects outside of the scene outside upside down in this very dark room of course because the aperture is so small and there's so little light coming into the room um you have to um you know you have to adapt to the darkness of that room if you're standing inside that room to actually perceive something it's actually just very little light it's not very bright if you create such a camera obscura at home but this has been used um you can find this in in various museums has been used by artists here some examples and on this website you find more examples where ordinary rooms have been illuminated using this principle of a very small pinhole and then an image has been captured inside that room with a very long shutter time and giving rise to installations like you can see here by this artist so the most basic physical pinhole camera can be described as a box that has a little hole inside one of the walls and that hole is collecting light or is letting a light pass through but it's letting light pass through only into the direction continuing where the light is coming from so we have a point here then that light passes through that hole and because light travels linearly along rays that light is hitting this image plane the sensor here on the back exactly at one location because of this the image that is projected is actually projected upside down onto that image plane behind this little hole in the wall which is called the focal point because all the light rays that pass through this hole must pass exactly through this hole this focal point however when we consider camera models in this lecture we typically consider the image plane to be in front of the focal point and not behind the focal point so here's an example of despite this being the physical model the mathematical model looks like this here on the right the distance from the focal point of this from from the focal point of this plane is the same as the distance of this image plane to this focal point but in this case the image plane is in front of the focal point now in this case the image of the 3d object here is of the same size and the same shape as the image if it's projected behind the focal point the only difference is that it's mirrored and so both models are actually equivalent right they are just related by this mirroring so they can be brought into each other using appropriate change of image coordinates now in this lecture we're considering this mathematical camera model which has some advantages in the sense that the image is not projected 
upside down so we can we can directly see the image as we would see it as humans and that's also how digital cameras store images they don't store the images upside down but they store them in the right orientation so this is what we can use as a replacement for this model because both are equivalent up to very simple reflection now if we talk about projection models there's two basic projection models that are important there's many more but these two are the ones that we're going to discuss here and if you want to know about other projection models i recommend having a look into the silesky book in chapter 214 the first model that we're going to discuss here is the so-called autographic projection model where light is assumed to travel parallel to each other's ray um on onto and then projected in that parallel fashion onto that image plane and in the perspective projection model which is the pinhole camera model where i have a focal point and all the light rays that pass from a any 3d point to that focal point all the light rays must pass through that focal point are hitting the image plane at the intersection of that line with the image plane you can see that here in this case the size of the object changes while here the size remains the same the object has the same 3d size as the dimensions on the image plane and both of these projection models are relevant the traditional projection model is really the perspective projection model that is implemented or that can approximate almost all the cameras that we know like our smartphones our digital cameras or dslrs but if we use a very large focal length if you have a telephoto lens then this perspective projection model can be well approximated by an autographic projection model the perspective effects vanish if we use a telephoto lens and in some cases that's actually wanted for instance in applications where we want to use optics for measurement here on the left you can see a so-called telecentric lens that models exactly the autographic projection process because if you model that process exactly and you can measure distances then afterwards after projection on the image plane you know that these distances correspond one to one to the distances in the real world so this is used for inspection purposes for instance here is a comparison between perspective and autographic projection and what they do to the image that is formed and that there is a continuous transition in between these two and that's also what you're gonna play with and experiment with in the exercise so here on the left we have a classic this is all the same cube but here it's projected using a perspective projection where the vanishing point is in the vincinity and not at infinity and we can see that these these edges of that 3d cube that are all parallel to each other in 3d are intersecting at this vanishing point that we see here on this image now if we increase the focal length we come to a weak perspective setting or closer to a weak perspective setting where this vanishing point moves outside this image domain that we're considering here and if we increase the focal length even further if we move the camera infinitely far away from the object and at the same time increase the focal length to infinity which of course is physically impossible without such as a telecentric lens then we obtain an autographic projection here where then all the lines that are parallel in 3d are actually also parallel in the image that is projected in in the image that we have captured 
with the camera and here on the bottom you can see two examples of a perspective projection and a orthographic projection or close to autographic projection of a real scene so how can we describe these projections mathematically here we see the model for the autographic projection and in all illustrations that i'm showing here the illustrations are only showing the x and the c coordinate the y coordinate you can think of as going into the image plane here of this camera coordinate system in red but it's just difficult to draw in 3d and it's confusing so i draw everything just as this cut in x and c dimensions okay so what we have here is is the camera center the origin where the camera coordinate system is defined this is the xyz coordinate system where the 3d coordinates of a 3d point in 3d space are defined and then in the autographic projection model the image coordinate system falls together with the x and the y axis here only x illustrated again so we have the x and the x-axis of the image coordinate system in blue and the x-axis of the camera coordinate system in red that are the same and the same holds true for the y-axis and now because the light rays travel parallel to the so-called principle line here um uh and parallel to each other we know that if we have this point xc c c stands for camera coordinates and s stands for screen coordinates or image coordinates we know that the x and the y coordinates are actually remaining the same so everything that we have done in this projection is we have remove the c coordinate but x s so this is the 3d point and the non-bold version is the x coordinate of that 3d point and that x coordinate of the 3d point is projected to the same value in the image plane so the x-coordinate in screen coordinates is the same value as the x-coordinate in the camera coordinates and the same hole true holds true for the y-coordinate so this is how another graphic projection of a 3d point bold xc in 3d space to pixel coordinates bold xs in the 2d image domain here works the y and the x-axis of the camera and the image coordinate systems are shared the light rays are parallel to the c-coordinate of the camera coordinate system and during project it is also called the principal axis here in this black one and during projection the c coordinate is dropped and x and y remain the same and uh just to highlight it again the y coordinate is not shown here for clarity but you can think of the y coordinate as going into direction of the image plane the direction we can't see here but it behaves exactly the same so how can we write this down in terms of linear algebra well we know that an autographic projection simply drops the c component of the 3d point but leaves the other coordinates the same and so we can simply express this in inhomogeneous coordinates as such when we have a 3d point multiplied with this 2x3 matrix where the last coordinate of this 3d point is multiplied with zero and the first and the second coordinate are copied and we have an equivalent representation in homogeneous coordinates here with these augmented vectors in camera coordinates in 3d space so this is a 4d vector and here in screen coordinates now this is a three-dimensional vector where we take this four-dimensional vector and the last element is a one because it's an augmented vector so the last element of x s will also be a one in this case but of course again this can be replaced by any homogeneous coordinates it doesn't need to be augmented vectors autography as already mentioned is 
exact for telecentric lenses and an approximation for telephoto lenses and after projection the distance of the 3d point from the image can't be recovered anymore which is a property that holds true for all projections also for perspective projection of course so after the projection onto the image plane the distance is unknown now in practice we're not going to use exactly that model that i've just shown because the image sensor typically measures in pixels we define the image coordinate system in in pixels not in millimeters or meters and so we must scale the coordinates in order to be able to measure in pixels and that's called scaled autography and so what we have here is simply replace these ones with an s scaling factor so we can scale um the x and the y coordinate that we're projecting and still dropping the c coordinate of course and the unit of this scaling factor is pixel over meter or pixel over millimeter depending on the unit of the 3d point in order to be able to convert the metric 3d points into pixels so let's assume we have 10 pixels per millimeter and we have a point that is at x 10 millimeters then we would convert this into um [Music] 100 pixels under autography structure and motion can be estimated simultaneously using closed form factorization methods and we're going to see some of these methods in the next lecture now um having seen the simple autographic projection model we we come to the perspective projection model which is a model that we need for almost all traditional cameras where light rays are focused at a single focal point which is here the camera center so here's an example of how a 3d point xc in camera coordinates projects to pixel coordinates xs in so here's the image plane now in the screen coordinates as you can see now the image plane and image coordinate system are not collocated with the camera coordinate system anymore as before but the image plane is displaced from out of the camera's center and all the rays that are all the rays all the light rays here must pass through this focal point through this camera center so there's always the light ray that passes through the camera center the pixel s xs and the 3d point xc this is the constraint and the convention here is that the principal axis is which is this one here is orthogonal to the image plane and the lines with the c axis um so it's called the principal axis and again why the y-coordinate is not shown here for clarity but it behaves similarly so how does a 3d point now project mathematically to this point xs well for this 3d point we know the x coordinate and we know the c coordinate in 3d camera coordinates and for this image coordinate we know the x-coordinate on screen in pixels let's say and we know the focal length f right this is the focal length that is the distance of the image plane from the focal point that's a parameter of cameras we can't change it if we have a zoom lens we can change the focal length that will change how big the object will appear on the image plane you can already imagine changing this will change the size of the projection onto the image because the image plane moves it translates left to right now what is the relationship that we can infer from this geometric constellation well we see that there's a set of equal triangles so we have this triangle here x s and f so if we divide x s divided by f we have must we must have the same number we must have the same ratio as if you divide xc by cc just by the principle of equal triangles so we have this triangle this 
big triangle here and this little small triangle here and this is what i've written here on the right hand side so in other words i can solve this for xs by saying well the pixel coordinate x is equal to xc divided by cz multiplied with the focal length f and this is exactly the pinhole projection formula and it also holds for y of course y s is equal to y c divided by c c times f now we can also formulate this mathematically in terms of a matrix multiplication if we use homogeneous coordinates so as we've seen in perspective projection 3d points and camera coordinates are mapped to the image plane by dividing them by their c component and multiplying with the focal length that's what we've just derived and this is this expression here in for both the x and the y coordinate in regular inhomogeneous coordinates now while this is a non-linear expression because we have to divide by c in homogeneous coordinates we can write this as a linear equation so assume a matrix called a camera matrix or projection matrix there's a three by four matrix here with the focal length on the diagonal and zeros in the last column and we if we multiply this with the homogeneous four-dimensional augmented vector xc of the 3d point then we obtain well xcf ycf and then the third coordinate cc um and if we now not this is a homogeneous three-dimensional homogeneous vector and if we normalize that homogeneous vector into an augmented vector where the last element is one we obtain exactly f x c over c c and f y c over c c in the first two chord elements of this vector right so by using this homogeneous representation because we have these points that are all equivalent to each other and by the definition of conversion from homogeneous coordinates to inhomogeneous coordinates we are representing the projection process so that we can express this entire process here with a simple linear multiplication if we assume homogeneous coordinates and then afterwards do the conversion to inhomogeneous coordinates by dividing by the third element of the homogeneous vector so the important point here is this projection is linear when using homogeneous coordinates and after the projection again as in the autographic case it's impossible to recover the distance of the 3d point from the image as little remark the unit for f is typically chosen as pixels to convert metric 3d points into pixels if you look at this expression here if we have a 3d point that's expressed in meters so we have x and c being defined as meters but the string screen coordinates are supposed to be in pixels and of course the unit of f has to be also defined in pixels because the meters cancel here and so the only thing that remains is the screen coordinate in pixel and that's why the focal length in our models that we're using is often going to be defined in pixels this is the most basic form of the pinhole projection principle but we can add more parameters and one important parameter is the so-called principle point why do we need a principle point well in the previous model that i've explained the image coordinate system and this is now a 3d illustration of this one here the image coordinate system is defined in on the principal axis it's centered on the principal axis but it's inconvenient to store pixels with negative coordinates consider a pixel here that would have negative coordinates with respect to that coordinate system so what it's done is in practice is and also i have already as you can see here oriented the coordinate system in a way that is 
convenient to store images what we can do here is we can move that coordinate system to the top left of the image so this is the the viewpoint here so looking from behind this is the top left uh corner of the image and so the the first pixel here well if we move this coordinate system will be located at zero zero or one one if you use a one based coordinate system which is much more convenient to store image arrays and that's what we typically do in practice we don't consider negative pixels but we consider just positive pixels and define the image coordinate system here which means that this principal point here must be must be added to the image coordinates such that we obtain only positive coordinates and that's called the principal point c x and c y that's added to the pixel coordinates after this projection that was discussed in the previous slide so the complete perspective projection model without distortion is given by this equation here where you can see that now in addition to this multiplying the focal length with x divided by c we have also now a translation by the principal point and we have done two more modifications here so one modification is that we have allowed the focal length in x direction to differ from the focal length in y direction and we have added another scaling factor here that allows for this is called the skew that allows for modeling sensors that are not mounted perpendicular to the optical axis due to manufacturing inaccuracies f x and f y however would only be independent from each other if we have different pixel aspect ratios and sensors are manufactured typically pretty precise so in practice what we often do is we set fx equal to fy and also this q is typically very small so that in practice often the skew is such a zero um but we have to model in almost all cases that we consider the principal point right so while f can be set the same and s can be set to zero the principal point is still an important parameter that we want to estimate in the world on a model now this is the full camera matrix that is defined through all of these parameters and this is called the intrinsic metrics because it's and actually this is well this is the projection matrix the three by four matrix but the three by three sub matrix here in front is called the calibration matrix k here in red and this is called the intrinsic matrix because it stores all the intrinsic parameters of the camera everything that's intrinsic to the camera like the focal length um the sensor ratio the or the pixel ratio the principal point and so on and this is opposed to the so-called extrinsic parameters which is the camera pose with respect to some world coordinate system that's why it's called intrinsic parameters and the projection then can be done in homogeneous coordinates by just taking this four-dimensional homogeneous vector so representing a 3d point in camera coordinate system and projecting it by multiplying it with this matrix now one thing that's nice about this homogeneous representation is that we can change transformations so if a 3d point is not represented in camera coordinate systems but is represented in the world coordinate system we can chain the transformation the rigid body transformation that maps a point by multiplying it with this rotation and translation matrix this is an euclidean transformation under we've seen before so it multiplies that point and afterwards multiplies that point that is now in camera coordinates with the intrinsic matrix so we have the extrinsic and 
the intrinsic matrix here onto the image plane we can also do this by simply multiplying these two transformations together into one matrix and now we have just to do one matrix multiplication instead of two matrix multiplications and and in many cases that's very convenient and can also save time and be more efficient so here's the mathematical formulation of this so we have the screen coordinate which is equal to the camera coordinate or camera the 3d point in camera coordinates multiplied with the intrinsics which is equal to the point in world coordinate multiplied with the x26 and the intrinsics and then we can multiply these two matrices together this is a three by three matrix and this is a three by four matrix to obtain a three by four projection matrix that directly maps a point from world coordinates to the screen and this three by four projection matrix p can be precomputed can be pre-multiplied and then applied for all the points where we want to apply this projection on sometimes it's preferable to use a full rank 4x4 projection matrix where instead of you know three by four matrix with the intrinsics and a four by four matrix for the extrinsics we have also a four by four matrix for the uh intrinsics where we simply have added the last row with zeros and a one in the last element now this is of course if we multiply these together we get a four by four matrix and now we can take this homogeneous vector this 4d vector representing a 3d point and project it through this matrix and obtain now a four dimensional homogeneous vector x tilde but because we know this is a point that's on the image screen we must still normalize this with respect normalize this vector with respect to the third coordinate not with respect to the fourth coordinate and so what we obtain is this expression here which is familiar because the first three coordinates are the same the third coordinate is one by definition and we have this projection here in the first two coordinates and then the last coordinate here is the inverse depth as you can see here all right and so um because this fourth component of this inhomogeneous 40 vector is the inverse depth if the inverse depth is known we can use this full rank matrix to also do the inverse so assume we have this vector where we know the depth we know the inverse step as well then we can also now because it's a full rank matrix we can invert it and go from an image point to the 3d point and afterwards in order to extract the 3d point we need to normalize with respect to the fourth entry because now we have a point in 3d space but of course we need to know the depth otherwise this wouldn't be possible good now the last aspect about the geometric image formation process i want to mention is the lens distortion we haven't talked about lenses yet but we'll talk about lenses in the next unit and lenses are important because as already mentioned if we just use a very small pinhole we get very little light on the sensor plane so it's important to have a lens that collects light but if we have a lens system then often also there is a geometric distortion introduced and this linear projection model that we've seen before it doesn't hold exactly anymore there's linear or there's lines in 3d that are projected onto curves in the image plane and this is called distortion so the assumption of the linear projection that straight lines remain straight on the image plane is violated in practice due to this properties of the camera lenses and there's two types of distortions 
there's radial distortion and tangential distortion and both of them luckily can be modeled for most camera models and most lens models relatively easily so here's the the formula that is most often used what is done here is we take the point in camera coordinates and first do the normalization by the c coordinate but we don't multiply the focal length yet and then we compute the radius and then with these components x y and r we can multiply those with these uh or we can apply these polynomials to them that model the radial distortion and the tangential distortion so we see here how the x y point is transformed through this equation non-linearly to yields a point x prime that is the distorted point so from the understory point we get a distorted point and then this distorted point is multiplied with the focal length so this is now the focal length from the pinhole model and the principal point is added and so this is this is everything that we need to do in order to also model very basic radial and tangential distortion and the nice thing about this is that well while this looks ugly at first glance the nice thing is that this can actually be undone this effect is invertible because these transformations are typically monotonic so we can pre-compute undistorted images from distorted images and then still apply the pinhole model directly on this undistorted images that have been simply precomputed before using the image or the projection models that we want to use the simple pinhole projection model and that's what's typically used in practice we are undistorting the images before using them now this is a very relatively simple model more complex distortion models must be used for more complex lenses like in particular wide angle lenses that do not follow this simple polynomial model and here's some examples of how this looks like you are probably familiar with this there's an example of a checkerboard here and if we have so-called barrel distortion we obtain an effect like here if we have the inverse of that the negative effect is called pincushion distortion and we get a projection that looks like this and you can see in both cases the lines are not projected to lines but two curves and also depending on the type of distortion the projected image becomes smaller or larger so you need to use a different crop on the image sensor to fully exploit the projected image |
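The projection equations that the transcript of this lecture describes verbally can be summarized as follows; the symbols follow the lecture's naming (camera point (x_c, y_c, z_c), screen point (x_s, y_s), focal lengths f_x, f_y, principal point (c_x, c_y), skew s, extrinsics R, t), but the typesetting is mine.

```latex
% Orthographic projection (drops the depth coordinate), and its scaled variant:
\[ x_s = x_c, \quad y_s = y_c
   \qquad\text{(scaled: } x_s = s\,x_c,\; y_s = s\,y_c\text{)} \]

% Perspective (pinhole) projection with focal length f:
\[ x_s = f\,\frac{x_c}{z_c}, \qquad y_s = f\,\frac{y_c}{z_c} \]

% Calibration (intrinsic) matrix with principal point and skew:
\[ K = \begin{pmatrix} f_x & s & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{pmatrix} \]

% Chained projection from world coordinates (extrinsics, then intrinsics),
% equality up to scale in homogeneous coordinates:
\[ \tilde{\mathbf{x}}_s \simeq K \,[\,R \mid \mathbf{t}\,]\; \tilde{\mathbf{x}}_w \]
```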
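A minimal NumPy sketch of the chained projection x_s ~ K [R|t] x_w and of the back-projection when the depth z_c is known is given below. The function names, the camera parameters and the toy points are assumptions for illustration, not values from the lecture.

```python
import numpy as np

def intrinsic_matrix(fx, fy, cx, cy, skew=0.0):
    """Calibration matrix K (focal lengths in pixels, principal point, skew)."""
    return np.array([[fx, skew, cx],
                     [0.0, fy,  cy],
                     [0.0, 0.0, 1.0]])

def project(points_w, K, R, t):
    """Project Nx3 world points to Nx2 pixel coordinates via x_s ~ K [R|t] x_w."""
    points_c = points_w @ R.T + t          # world -> camera coordinates
    x = points_c @ K.T                     # camera -> homogeneous pixel coords
    return x[:, :2] / x[:, 2:3]            # perspective division by the depth

def backproject(pixels, depth, K):
    """Recover camera-frame 3D points from pixels when the depth z_c is known."""
    ones = np.ones((pixels.shape[0], 1))
    rays = np.hstack([pixels, ones]) @ np.linalg.inv(K).T   # normalized rays, z=1
    return rays * depth[:, None]                            # scale rays by depth

# toy example (assumed values): a 640x480 camera at the world origin
K = intrinsic_matrix(fx=500, fy=500, cx=320, cy=240)
R, t = np.eye(3), np.zeros(3)
X = np.array([[0.1, -0.2, 2.0], [0.0, 0.0, 4.0]])
uv = project(X, K, R, t)
print(uv)                                  # pixel coordinates
print(backproject(uv, X[:, 2], K))         # recovers X for the identity pose
```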
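The lens distortion step can be sketched as follows: normalize by depth, apply the radial and tangential polynomials, then multiply by the focal length and add the principal point. The sketch uses the common Brown-Conrady-style parameterization with coefficients k1, k2, p1, p2, which may differ in detail from the exact polynomial on the lecture slide; the coefficient values are illustrative only.

```python
import numpy as np

def distort_normalized(xn, yn, k1=0.0, k2=0.0, p1=0.0, p2=0.0):
    """Apply radial (k1, k2) and tangential (p1, p2) distortion to
    normalized image coordinates x_n = x_c / z_c, y_n = y_c / z_c."""
    r2 = xn**2 + yn**2
    radial = 1.0 + k1 * r2 + k2 * r2**2
    x_d = xn * radial + 2.0 * p1 * xn * yn + p2 * (r2 + 2.0 * xn**2)
    y_d = yn * radial + p1 * (r2 + 2.0 * yn**2) + 2.0 * p2 * xn * yn
    return x_d, y_d

def project_with_distortion(point_c, fx, fy, cx, cy, **dist):
    """Normalize by depth, distort, then apply focal length and principal point."""
    xn, yn = point_c[0] / point_c[2], point_c[1] / point_c[2]
    x_d, y_d = distort_normalized(xn, yn, **dist)
    return fx * x_d + cx, fy * y_d + cy

# illustrative coefficients (not from a calibrated camera)
print(project_with_distortion(np.array([0.3, -0.1, 2.0]),
                              fx=500, fy=500, cx=320, cy=240,
                              k1=-0.2, k2=0.05, p1=1e-3, p2=-1e-3))
```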
Computer_Vision_Andreas_Geiger | Computer_Vision_Lecture_44_Stereo_Reconstruction_Spatial_Regularization.txt | in the previous units we always applied the winner takes all strategy which is for every pixel in the reference image we compute the similarity across all disparity hypotheses and then compute return the patch with the highest similarity or with the lowest matching cost while using deep learning to compute the similarity scores improves the situation a little bit regarding reducing ambiguities it doesn't remove all the ambiguities and so it's beneficial as we've briefly seen already to incorporate global optimization rather than just locally taking the winner and this is what we are going to focus on in this unit on spatial regularization let's recap quickly what are the challenges when will local matching fail well what is the underlying assumption well corresponding regions in both images should look somehow similar and non-corresponding regions should look different so when will this fail here we have an example with an image from the kitty data set i'm toggling back and forth so you can get a feeling for what the disparities are and how the local image regions change if we look closely at the similarity constraint the local similarity constraint we can identify some failure cases for example there is some patches where there is simply very little or no texture at all just because the object that we're looking at is not textured or because there is maybe sensor saturation etc so matching such a patch becomes very hard because there's many possibilities in the other image where this white patch would also match further if we look at this patch here we have the tree in front of the building and if we look from the left image and from the right image then the patch will look very different because the the background actually changes so the background changes relative to the foreground this is something we've discussed before in the context of disparity discontinuities and the violation of the frontoparal assumption so some of this can be some of these artifacts can be reduced by learning but not all of them another challenge are repetitions if we have structures in the image here we have three different patches extracted that look very similar to each other then matching becomes suddenly very ambiguous all of these patches are so similar to each other that it's really almost impossible to distinguish the what is the correct one from the wrong one and then finally here we have violations of the lambertian surface model where we have strong specular reflections from the building of the on the opposite side in the window of this car and you can see that if i'm toggling now between this is the patch of the left image and this is the corresponding patch in the right image that's corresponding based on the actual surface location you can see that the content of these patches completely changes gives us actually the wrong solution at that location now in order to overcome these local ambiguities that cannot be entirely overcome by a siamese network because it learns only local features is to use spatial regularization and the idea here is very simple well what we can ask is if we look at depth images of the real world here in this case here from the brown range image database where real scenes have been scanned using a lidar scanner if we look at these depth images then we can find very fundamental basic principles about the statistics of these images so if we look at these images what's 
already evident from the images is also evident from this plot which shows us the derivative of the range or of the log range and the probability for each of these values here what we see from the images is that there's a lot of smooth surfaces so in most cases for most adjacent pixels the depth actually varies very slowly except at object discontinuities where the depth varies very quickly from a small value to a large value here for instance but depth discontinuities are very sparse in other words the number of pixels where we have a depth discontinuity is very small with respect to the total number of pixels in the image for most pixels the neighbors are looking the same except for these discontinuities and that's also reflected in this plot here where we can see that there is a peak at zero um and that means that most of for most of the pixels there is actually almost i mean the the change is is very smooth change is very slow in terms of depth but then there is a heavy tail here which means there is some some pixels much less it's a log scale here but there's some pixels for which we do have a significant gradient in the depth map and so now we want to incorporate the constraint into the disparity estimation process and the way we do that is by specifying a loopy markov random field or short mrf that will be discussed in more depth in lecture five and we specify that markov random field on a grid where the nodes in that grid here illustrated on the right these circles are the pixels and these little squares are constraints on that pixels and then we solve the whole disparity map d which is these pixels at once where the map solution is the solution with the minimum energy so we can think of this here for the in the context of this lecture as an energy minimization problem because the probability of the disparity map is proportional to such as called a gibbs distribution and we'll cover this in much more depth in the next lecture of an energy or a minus energy term but here inside these brackets we have an energy term in other words if we maxim if we're minimizing this energy we're maximizing the probability because we have probability proportional to x exponential minus the energy now this this term here what does it mean well this energy term is composed of two types of terms there are so called unary terms which are the matching costs this is what we have computed in the previous units this is what we have basically um from which we have derived the winner takes all solution for every pixel so i is a pixel here and for every disparity hypothesis so di is the disparity hypothesis maybe ranging from 0 to 128 we obtain a matching cost and ideally that cost is very low for the correct correspondence and it's high for the wrong correspondence that's basically the the negative of the similarity or the inverse of the similarity now this is what we had already and if we would ignore that term and we would try to maximize this probability or minimize that energy because there would be no connections no pairwise con potentials no pairwise terms so this would go away would just have these little squares attached to each variable individually then we would obtain the winner takes all solutions because there's no constraints between pixels but what we're doing now is we are introducing constraints and in this case on a four connected grid remember again each of these variables that we want to estimate this variables from the mrf these are these d's we have one per pixel and then we have 
connections between adjacent pixels here on this four connected grid so for each pixel we have the top the left the right and the bottom neighbor and for each of those we have such a smoothness term here added that now depends not on a single variable but on let's say this variable and the adjacent variable and then we have one for this and this and the sum of all of these for all pixels is this term here in summary we have this neighborhood relationship which is denoted here by this tilde where i and j are two neighboring pixels on this four connected grid then we have unary potentials which are the matching costs and then we have these additional pairwise potentials now which model our prior belief about the smoothness of adjacent pixels so a very simple model for this movements term would be one that says well if i have if i look at two neighboring sites let's say at this pixel and this pixel then i wanna have a small energy value if um the elements are the same which is indicated by this expression here so this is the iverson bracket so if d and d prime so the neighboring disparity are not the same then this will evaluate to one and if they are the same this will evaluate to zero this is what the iris and bracket denotes it's basically an indicator operator that just evaluates if this expression is true or false if it's true if they are dissimilar then we obtain a one and if they're the same then they obtain a zero now this is a very simple and a very stupid model a better model is to take the relative displacement into account so we can take for instance what's typically taking is a truncated l1 smoothness term where we have the difference between the disparity at the current site and the neighboring disparity d prime and we have an absolute difference here and then we have a truncation tau a truncation parameter such that even if this distance is very large we don't induce too much penalty so it's a robust potential and we'll talk about robust potentials also more in the next lectures so this is a much more reasonable assumption that takes also the discrepancy of adjacent disparities into account up to a certain truncation threshold tau and then we can solve this mrf approximately using belief propagation now we have specified all the potentials we have computed these matching costs these are just the matching costs as we did them for block matching and then we have introduced these additional energy terms here so we have a big big term now many many sums and now we can solve for the optimal d values if we would remove this part then we would just obtain the winner takes all solution because there is no constraints between or no relationships modeled between adjacent sides but if we introduce these smoothness constraints telling the model well if two sides are nearby then we expect the disparity in most cases is actually quite similar so we want it to be smooth the output then if we minimize that energy then we obtain a different disparity map that's hopefully better than the one we would obtain by just looking at the local information and so for solving this mrf approximately this is a difficult problem we can only solve it approximately there's a variety of techniques that can be used such as graph cuts or belief propagation and in lecture five to seven will in particular talk about the belief propagation algorithm and so here's a result of this algorithm applied on the same image that we have seen before the currents data set from the middlebury dataset and you can see that now um 
despite using a very simple not even learned similarity metric we obtain much better results at least for this very simple scene here with a lot of texture and lambertian surfaces and this idea of using markov random fields to model relationships can be extended from these local pairwise relationships to more global relationships so in this work here it's actually a work from our group what we did is we tried to model disparities and objects in the scene jointly to get constraints that span larger distances not just adjacent pixels which is too weak in most cases and then we obtain a solution for the objects simultaneously with the solution for the depth for the disparity map on the bottom here and here's an example of what this looks like so on the left you can see the result from the learned potentials from the siamese network combined with global optimization but using only pairwise potentials that is very local interactions and here on the right is an example from this other model which uses more global constraints by modeling objects |
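The energy described in the transcript above (unary matching costs plus a truncated L1 smoothness term over the 4-connected pixel grid) can be written compactly as below. The notation is mine, and in practice a relative weighting between the two terms is usually added as well.

```latex
% Disparity MRF: Gibbs distribution and energy on a 4-connected pixel grid
\[ p(\mathbf{d}) \propto \exp\!\big(-E(\mathbf{d})\big), \qquad
   E(\mathbf{d}) = \sum_{i} \underbrace{\varphi_i(d_i)}_{\text{matching cost}}
                 + \sum_{i \sim j} \underbrace{\psi(d_i, d_j)}_{\text{smoothness}} \]

% Truncated L1 (robust) pairwise potential with truncation threshold tau:
\[ \psi(d_i, d_j) = \min\big(|d_i - d_j|,\; \tau\big) \]
```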
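The lecture minimizes this energy approximately on the full 2D grid with loopy belief propagation or graph cuts. As a much simpler illustration of how the pairwise term propagates information, here is a 1D dynamic-programming (min-sum) sketch along a single scanline with the truncated L1 smoothness; the cost_row array, the weighting lam and the toy numbers are assumptions, and this is not the inference method used in the lecture.

```python
import numpy as np

def scanline_dp(cost_row, lam=1.0, tau=3.0):
    """Exact min-sum optimization of unary costs + truncated L1 smoothness
    along a single scanline (1D chain), via dynamic programming.
    cost_row: (W, D) matching costs for one image row; lam is an assumed
    weighting of the smoothness term, tau the truncation threshold."""
    W, D = cost_row.shape
    disp = np.arange(D)
    pairwise = lam * np.minimum(np.abs(disp[:, None] - disp[None, :]), tau)  # (D, D)

    msg = np.zeros((W, D))             # accumulated costs
    back = np.zeros((W, D), dtype=int) # best predecessor labels
    msg[0] = cost_row[0]
    for x in range(1, W):
        # cost of disparity d at x given the best predecessor at x-1
        total = msg[x - 1][:, None] + pairwise       # (D_prev, D_curr)
        back[x] = np.argmin(total, axis=0)
        msg[x] = cost_row[x] + np.min(total, axis=0)

    # backtrack the optimal labeling
    d = np.empty(W, dtype=int)
    d[-1] = np.argmin(msg[-1])
    for x in range(W - 2, -1, -1):
        d[x] = back[x + 1, d[x + 1]]
    return d

# toy cost row: 8 pixels, 5 disparity hypotheses
rng = np.random.default_rng(0)
costs = rng.random((8, 5))
print(scanline_dp(costs, lam=0.2, tau=2.0))
```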
Computer_Vision_Andreas_Geiger | Computer_Vision_Lecture_12_Introduction_Introduction.txt | in this second unit we're gonna introduce what computer vision is and also have a look at some of the challenges involved in this problem to put computer vision into context um let's consider it as part of what it is typically considered an aspect of the general field of artificial intelligence artificial intelligence was characterized by john mccarthy one of the pioneers as an attempt to find how to make machines use language form abstractions and concepts solve kind of problems now reserved for humans and improve themselves and that of course also includes visual perception of the environment so sub areas of artificial intelligence include machine learning computer vision also computer graphics natural language processing robotics art education and more and all of these areas are interlinked as we will see in this class so here is an example on the right if you want to build a robotic system that robotic system must perceive the world or environment and it must then accordingly update its internal representation of the world in order to make appropriate decisions to minimize some objective now computer vision can be very generally described as illustrated here by this picture from antonio to alba as converting light into meaning and that meaning could be geometry reconstructing the scene in terms of geometry or in terms of semantics identifying certain aspects that we see but really what computer vision does what it receives as an input is just a sampling a very specific sampling of the light pattern here's a light emitter that emits light rays into all directions and these light directions hit a surface point and then they bounce away into multiple directions from that surface and then they hit another surface point and at some point they hit the observer the eye or the digital camera and so all of these light rays the entire scene is um that the pass through this scene um are then sampled at a particular point in time in a particular location a particular direction and a particular wavelength and this forms the image that we then have to interpret this is one of the most famous books about vision and also computer vision by david and what what is he right well what does it mean to see the plain man's answer and aristotle's two would be to know what is where by looking this is how computer vision is often defined we want a computer to know what is where by looking or in other words to discover from images what is present in the world where things are what actions are taking place to predict and anticipate events in the world so this is more the robotics aspect where we want to perform actions maybe in the world and in order to do actions we need to anticipate what's happening around us we need to make also predictions about the future there's numerous computer vision applications and many of these have already commercialized today and there's applications in robotics in medicine there's applications in structuring photo collections or building 3d models in self-driving in mobile devices and accessibility just to name a few and as already mentioned the field of computer vision has connections to many neighboring fields so for instance it has connections to neuroscience and biological vision in particular many computer vision algorithms take inspiration from biological vision and biological vision serves as a motivator for developing computer vision algorithms in fact over 50 percent of the 
processing in the human brain is dedicated to visual information signifying the importance um and the difficulty of that task that we have to solve if we want to replicate biological vision with a computer computer vision is also related to computer graphics in computer graphics given the 3d scene composed of objects materials certain semantic pose and motion we're concerned with rendering a photorealistic 2d image of that scene now in computer vision we are trying to solve the inverse problem that's why sometimes we're we're also talking about inverse graphics we're just seeing this 2d image where a lot of this information from the original 3d scene has been lost through this projection in process and the only thing that's accessible to us now is a raw pixel matrix and we want to recover these latent vectors these underlying factors that are present in the scene and therefore vision is also an ill-posed process an imposed inverse problem where a lot of this information that was originally present in the scene has been lost and has to be recovered for example many 3d scenes yield exactly the same 2d image think about occlusion if an object is occluded no matter where it's located the projection into the image looks the same or if an image is overexposed no matter what the scene geometry is it all looks the same so these are just two extreme examples but because this is pervasive in computer vision for almost all computer computer vision problems we need to integrate additional constraints additional knowledge about the world in order to solve this ill post inverse problem and this is where a lot of research in computer vision goes into these constraints can come from engineer constraints about it can come from first principles about the world can come from also large data sets where this knowledge has been acquired from computer vision is also related to image processing image processing is concerned with low level manipulation of images like color adjustment edge detection denoising image warping image morphing this all happens in 2d as we'll see in the history part computer vision has deviated from image processing primarily in the early stages because it was concerned with getting a more holistic scene representation scene understanding in particular recovering the underlying 3d structure of the world that is not present in this 2d representations that image processing is dealing with image processing is not a focus of this class there's other classes on image processing that are very relevant and i highly recommend you to take but we are focusing on the computer vision on the more holistic aspects of this problem there's of course also a relationship to machine learning in that a lot of machine learning tools that have been developed are used everywhere in computer vision today and they have made computer vision successful also commercially successful they have provided the basis for computer vision to work in real world applications in particular in the last 10 or 15 years on the other side in the other direction computer vision often provides a very good motivation good examples good use cases for machine learning and has helped tremendously several breakthroughs in machine learning and in other fields of machine learning such as language processing [Music] by accelerating the pace of the research as you know in the last 10 years deep learning has revolutionized computer vision but also computer vision has formed the basis for revolutionizing deep learning here you see as an example 
the progress on this imagenet classification problem where there is a million images with 1000 different categories and the goal is to classify each of these images into one category and the progress in this problem has somewhat stalled until 2010 until it has been shown for the first time that deep learning can significantly improve on the performance on this problem um and you know by 2015 we have surpassed human performance on this particular problem while humans are not designed for it's not a very easy task for humans to solve um but there was a steep uh decrease in this is the error rate so-called top five error rate how often um like you get an error if your prediction if the correct example correct class label is not within the top five with your predictions so this number has decreased tremendously since 2011 2012 now surpassing human level performance and and going down further every year this fact that computer vision started to work really on relevant problems has also increased attention and accelerated the field here you can see the number of submitted in blue and accepted papers to cvpr one of three top tier conferences in our field you can also see that it's a relatively competitive conference where about um only one fourth of the submitted papers is accepted but what you can see is that both the submitted and accepted papers have grown in particular over the last 10 years or so to uh numbers much much larger than um you know the size of the field in the 80s and 90s so it's very short innovation cycles that we have in this field and a lot of people working on these problems today and this is also reflected in the attendance of for instance the cvpr conference which is hitting the ten thousands now while it was around one thousand when i started doing research in computer vision around 2008 2009 because some of the tasks can now be solved with computer vision there has been a huge industry developing around computer vision where computer vision algorithms are used for all kinds of problems we'll see some of them in the third unit of this lecture and here's just an example of some of the companies that are sponsoring they're giving money for this you know for organizing um the cvpr conference and that also presents their latest innovations that also collaborate with researchers in the fields and this is just growing very quickly due to the success that computer vision had recently now why is visual perception hard as in many areas in the beginning it was underestimated there's this famous ai memo from seymour parker who proposed in a summer vision research project with interns to basically solve computer vision over on the period of the summer what this says here in the abstract is the summervision project is an attempt to use our summer workers effectively in the construction of a significant part of a visual system the particular task that was chosen partly because it can be segmented into sub-problems which will allow individuals to work independently and yet participate in the construction of a system complex enough to be a real landmark in the development of pattern recognition now it turns out that we're still working on this problem today because it's not so easy it's not like the problem can be completely separated into this independent subproblems and solving these independent sub-problems and composing them together will solve the situation and also each individual sub-problem turned out to be harder than initially expected now why is visual perception hard it feels 
very easy to humans if we look at how babies grow and learn about interacting with the environment they learn very fast even at an age of of one year they realize how to manipulate devices that are very hard to manipulate for a robot now one key aspect why visual perception is hard is what i've already mentioned is that while if we as humans look at this picture and immediately in in the order of a few milliseconds obtain an understanding of what's happening in that picture what the computer sees is just a pixel matrix and we need to interpret this pixel matrix in order to come up with a hypothesis about what's happening in that image and this interpretation is hard so one challenge has already been mentioned if we take a 3d scene and project it into an image and then we want to reconstruct that 3d scene from that image a lot of information has been lost through this projection process so we are dealing with an ill post inverse process where many solutions many 3d solutions are possible for a single 2d image so here's this famous workshop metaphor from edelson in pentland where you can look it up here in this paper i by the way always try to put references to relevant papers as footnotes so whenever you're interested in some topic you can google the name of the paper that i put in the footnote and most of these papers are freely available on the internet so we'll find them relatively easily and i i can just encourage to have a look at some of these papers if you're interested in this area it's good to to read up on these papers and get a feeling of how the field works so this is an older paper and in this workshop a bit tougher they they make the point that in order to explain an image here this is the image this is a 2d image in terms of reflectance lighting and shape a painter a designer and a sculptor will design three different but plausible solutions a painter might just paint that image as it is in 2d a light designer might create a set of light sources that project light onto a wall such that the image appears with exactly that contours and shading but it's actually a white wall and a sculptor might create a sculpture that is oriented such that the shading of this picture appears through different inclination relative to the light direction that's coming in so it's multiple solutions just to explain the single image in fact there's infinitely many solutions and this is what makes this computer vision problem so hard and ill-posed here's an example this is the famous ames room illusion which you might have seen this is a room that from a particular point from particular perspective looks like a real room but it's actually not is actually deformed and through this for this particular point of view it looks like it would be a real room but it because it's not objects that you that are in that room look as they were of different sizes and here's another example that from a particular point of view looks like it produces a physical phenomenon that's impossible but if you look from a different point of view you see that this is actually just a perspective illusion so a single perspective doesn't reveal everything that's there in terms of geometry even for humans are actually very good at interpreting the geometry from a single image now let's talk about some more challenges one challenge in for instance recognizing objects is that depending on the viewpoint the pixel matrix might dramatically change despite this being the same object depending on the viewpoint the pixels change 
dramatically and you you need to recover this invariance recover that this is the same object the same holds true for deformation there's many objects that are non-rigidly deforming giving rays to new pixel constellations and the goal is to be able to interpret that this is still the same object occlusions are another challenge there is parts occluded which makes it hard for some of the objects to be recognized here's an example of illumination this is a scene i've taken the slide from also from antonio to alba this is a scene that has been illuminated by a particular light source can you recognize what that scene shows well this has been illuminated by a laser light and depending on where we project that laser light the pixel matrix changes completely and also our understanding of that scene changes completely but it's always exactly the same scene pictured under always exactly the same viewpoint so there's a intricate interplay between materials and light that give rise to a lot of different images despite showing exactly the same scene here's another example from jan kanderink showing the same phenomena that it depends on where you put the light source how the scene appears also motion is challenging motion can create blur can make objects harder to recognize but also we are interested it's a useful cure we're interested in recovering the motion itself further perception and measurement often uh differ and here are some examples this is a famous example from edward adelson i put the link to this checker shadow gallery at the bottom and i ask you to focus on these checkerboard patches that are indicated by a and b if you look at this image most people would say well it's clear that the a field is darker than the b field but this is a this is what you perceive if you put stripes of the same color next to the a and b fields you will realize that they actually have exactly the same color so perception and measurement can be quite different the measurement is saying you if i take a picture of this with a digital camera saying this has exactly the same grey value despite you perceive them very differently um the same holds true for occluding contours there's this famous kinshasa triangle where if we look at this picture despite there is no triangle most humans perceive a triangle occluding three circles here and the same holds true for these other objects that are only present because they appear through the contours through looking at the contours holistically not just a single piece of the segment but holistically at the image recovering um what's going on in the scene here's another example what do you see on this picture can you recognize something well typically it's hard in the beginning but if you've seen it it's very easy so there's a dog here with a head and two legs and here two legs in the back and this is the body so integrating this image based on local features alone will not lead you to anywhere you need to and this is very hard still of course for computer vision algorithms but humans can do that and humans can interpret based on the larger holistic structure so perception and measurement can be quite different here's another example for a slight variation of this in terms of local ambiguities so what you see here in the circles is always exactly the same pixel arrangement as here in this template on the left but depending on where you have it in which image it can mean something very different so here from the context locally it's ambiguous but from the context we identify 
this as a as a telephone maybe and here this might be shoes and this might be a car etc so sometimes just from local context is impossible to distinguish what an object is but it becomes very clear if you look at the scene as a whole so it's important to integrate scenes as a whole here's another example it's actually a counterexample and serves as an illustration for how strong your prior knowledge about scenes is what you see here um is a person right but you also see some objects do you recognize what these objects are so um if you look at this picture despite it being very blurry you can probably recognize there's a phone here there's a mouse a monitor maybe it's a printer here in the back but now if i show you that same image unblurred with the original resolution you can see that none of these objects is actually present so despite you were very sure here that these are the objects that are present they are actually not present in that image and this is a it's a as a kind of a counter example is that very unlikely for this scene probably they have seen such a scene before and that such a scene appears naturally in in the distribution of images that you observe during your lifetime but because you have such a strong prior knowledge you have developed such a good model of the world you have such a strong hypothesis about what these objects are that you're kind of surprised that the objects are not what you think they are another challenge is the variation in the object itself one example that's often given is chairs because chairs despite having all the same functionality as an object that you can sit on they look very different so now if you want to distinguish chairs from some other object category you need to solve the problem that despite all the chairs look different they are part of the same category you want to categorize all of them as being some suitable objects with a you know backrest etc so the intraclass variation is is a problem if you wanna separate classes from each other but you wanna classify each element of a class correctly that's very challenging and then also the sheer number of object categories in the world that exist and can be named is is very large and that makes this problem very challenging it's estimated that there is between ten thousand to thirty thousand um high level object categories and there's of course much much more if you go more fine grained |
Computer_Vision_Andreas_Geiger | Computer_Vision_Lecture_94_Coordinatebased_Networks_Generative_Radiance_Fields.txt | in this last unit we're going to take the representation from the previous unit the neural radiance field and try to build a generative model for it try to build a model that is capable of given just a collection of 2d image of a particular object category builds a general 3d model that can be rendered photorealistically into new images the difference to before is that nerf has always been optimized for a particular scenes given many viewpoints where the camera poses have been known now we don't assume any such thing we just assume that there's a collection of images of objects of a particular category and we don't even know the camera pose and we want to build a generative model for this and this is what we call generative radiance fields or graph in short before we start let's quickly recap generative adversarial networks these are slides from the deep learning lecture that you should already be familiar with the term generative model refers to any model that takes a data set drawn from a data distribution p data and learns a probability distribution p model to represent that data distribution in some cases the model estimates p model explicitly and therefore allows for evaluating the approximate likelihood p model of x of a sample x but in other cases the model in many cases this is not intractable so the model is only able to generate samples from p model and gans are prominent examples of this family this family is called family of implicit models so you can see this is another occurrence of the term implicit but again in another context but it fits the spirit of this lecture so here implicit means that the likelihood is not represented explicitly but only implicitly so gans provide a framework for training models without explicit likelihood and this is what we have seen in the last deep learning lecture this is the prototypical diagram of again there's a latent code sampled from some noise distribution and so we get that latent code and we input that to some generator network that produces in the case that we've seen in the deep learning lecture just a 2d image and then we have a discriminate network that takes this image and tries to classify if this what it sees as an input x hat or x which comes from the data distribution so these are the same networks here if that is a real image or if this is a if it's a real image or if it's coming from the generator distribution and we're training this model using this two player mini max game paradigm with a value function specified as such and try the discriminator try to optimize both the generator and the discriminator so the generator tries to produce images that are more and more realistic and the discriminator tries to distinguish them and if the generator becomes better and better then the discriminator has a hard time and ideally in the end will be completely uncertain we'll assign 0.5 probability to all the samples that's gans in a nutshell we've seen in the deep learning lecture that the theoretical analysis shows that this minimax game as specified here at the bottom recovers in some using some very very strong assumptions in particular assuming that d star can be reached at each iteration recovers the data distribution as the model distribution and also assuming that the model has of course enough capacity to represent the data distribution but in practice reaching the star the optimum at each iteration is too slow 
and so we must use iterative numerical optimization and optimize d in the inner loop to completion is just not possible so we have typically an alternating optimization where we do one two three steps of optimizing d in the inner loop and then one step of optimizing g using a small enough learning rate and this way we maintain d near its optimal solution as long as g changes slowly enough which in gradient descent typically does because we can't do very large step sizes anyways and we have seen this little 1d example here we have the data distribution in blue the model distribution at the initial iteration in orange and the discriminator here in green and this is the noise distribution in red and if we start optimizing this is iteration 500 iteration 1000 2000 2500 we see that the model distribution converges to the data distribution and the discriminator becomes more and more uncertain and close to so here's the axis for the discriminator probability close to 0.5 for all the samples now let's take this idea and apply to radiance fields so we want to build a general model for radiance fields we want to sample a latent code that captures a 3d point location a viewing direction and information about the shape and appearance of the object as well which is not shown here as well as the camera pose and then for that point using the radiance field we can predict the color and the density and then we can use volumetric rendering to render the scene and we want to render the scene such that a discriminator applied on that scene on that images a 2d discriminator can distinguish between real images from the data set this is all trained just with 2d images and the renderings from the model this is the idea behind graph generative models for radiant fields which has been which has appeared in eurips 2020 and we train this from unstructured and unposed and post means we don't know the pose of the cameras to the image collections a particular challenge here is that we can't directly apply a discriminator in 3d because we just have images as observation so we really need to apply this volumetric rendering which is slow and this is one of the contributions that this paper does is how to make this tractable so let's now step by step build up this model a radian we have a radiance field here that we know already that maps a 3d point x and a viewing direction d and this gamma here just is a positional encoding as for nerf so we push these both to a positional encoding and this maps it to a color and a density and then for that ray we do this well we we do this radiance field computation for multiple samples along that ray so x changes we have an index i here because we have multiple samples along the ray the direction is always the same um and so we evaluate this conditional this this radiance field is not yet conditional this radiance field is nerve for all of these points along the array to get for all of these points color and density and then we apply volumetric rendering which is abbreviated with pi here but it's what we have discussed before the standard alpha composition in order to render a pixel's color from these endpoints along the ray this is how nerf works now what we add here is two additional latent codes one is a shape latent code and one is an appearance latent code that are also sampled from some standard distributions normal distributions and that are responsible for modeling the condition of well the rendered image on the shape and the appearance of the object now because we can't render 
the entire image which would be much too time consum way too time consuming what we do instead is we render just a few a few pixels in particular we take a patch this is shown here as a five by five patch in practice we use a 32 by 32 patch so we take a 2d image grid and the location and stride the size of this image grid is sampled randomly so for only for those five by five or 32 by 32 pixel locations we do this process and that is what we what we have here with with r so this plate here denotes that we do this r times where r is let's say 5 times 5 25 times we do this k is the intrinsics psi size the extrinsics has come from the um from from that assembled also randomly here in this case we assume the intrinsics are known but you can also sample the intrinsics and this new parameter here is responsible for placing this sampling grid this sparse grid on top of the image plane which determines then where the rays are located that we sample from so it's very similar to nerf just that instead of sampling for all pixels in the image that we want to render we sample just a few of them so we get this predicted patch here and the sampling pattern you here changes the location and stride arbitrarily because at any scale and any location we want to produce a realistic patch but we just produced this patch but because we have a 2d patch we can apply a standard confidence now and this is very fast so we apply a small confidence that is basically um discriminating that predicted patch from a real patch that is sampled from real images using the same pattern generator here we also have the new that generates the sparse pattern at a certain location of a certain stride from this random distribution and we sample a random image from a data set and we get therefore a random patch so we compare these two with a 2d discriminator and the 2d discriminator is a simple four layer confident so this is very simple now we've reduced the problem because we don't need to render an entire image but we just do sparse sampling of the image and compare these very small patches and this is fast enough and takes little enough memory to uh be able to being trained uh in in in the context of a of a generative adversarial network objective you can see blue is the generator of the generative adversarial network and red is the discriminator the discriminator is a 2d confidence similar to standard guns but the generator is generating a 3d structure that's rendered into an image or more specifically into this little patch that the discriminator tries to discriminate from real patches here is an illustration of the conditional radiance field network architecture this is this block here and as i mentioned already in the beginning we have a shape code cs and appearance code ca but how do we make sure that they are not entangled why do they represent different things well similar to nerf the volume density sigma depends solely on the 3d point x here and not on the view direction d which is injected later into the network and it also depends on only the shape code cs because the appearance code is later injected we first want to model the shape and then the appearance is typically something simpler that we can model with a model that has less capacity so it's injected later the predicted color c then additional depends on the view direction and this appearance code but there's less capacity in this last part of the model obviously because we've already done some computation here so this color head needs to then adapt the color 
using these conditions and this allows to model view dependent appearance for example specular materials that's all so let's look at some results and katya the phd student who did this work did a beautiful video so i want to show you simply this video hi i'm katya from tubing university and i will talk about 3d aware image synthesis with generative radiance fields this is joint work with e liao and andreas geiger while generic guns can synthesize high resolution images they cannot model 3d properties like viewpoint changes explicitly but we want this we want to make generative models 3d aware but 3d supervision is difficult to obtain for real world data so the question is can we learn a 3d aware generative model using unposed 2d images only existing approaches rely on voxel-based representations they either generate the full 3d object or an abstract low-dimensional 3d feature that is decoded with a learned while the letter reduces discretization artifacts and memory requirements the learned projection can impair multi-view consistency this can be seen particularly well in the 3d reconstruction from the generated images inspired by neural radiance fields by milton hall at all we propose generative radiance fields to replace voxel based with continuous representations this scales better to high resolution avoids a learnable projection and allows generating 3d consistent images from 2d images only so how do we learn to generate radiance fields we start by sampling a camera pose randomly on the upper hemisphere following a patch pattern we select a fixed number of camera arrays on each ray we sample 3d locations to which we concatenate the viewing direction of the ray to create 5d coordinates we embed the coordinates with a positional encoding and concatenate a shape and an appearance code our generator predicts a volume density and a color value from the inputs note that changing the shape code will correspond to a change in volume density while changing the appearance code will modify the predicted color the radiance field is predicted for every sample location on the ray next we compute the pixel value using volume rendering the pixel values from all rays form a patch lastly our discriminator compares the generated patch to a patch sampled from a real image let's look at some results while all methods work reasonably well for small images at higher resolution voxel-based approaches either get too memory intensive or fail to disentangle viewpoint and identity our method scales well to high resolution and further produces a depth prediction because we learn the full 3d object the two lighten codes allow to manipulate shape and appearance independently our method also works reasonably well using unposted natural images if you like our work visit our webpage and check out graph on github thanks great one final work i like to mention this giraffe giraffe has just yesterday received the cvpr 2021 best paper award and it's an extension of graph to compositional scenes instead of modeling just a single scene using a radiance field we have multiple radiance fields here that are composited together into a representation and which allow for disentangling from just 2d image collections alone scenes into various objects we don't have time to go into details into this method today but i encourage you to have a look at the paper and also there is a nice blog post on our website and a good video from michael about his work let me summarize um we've seen that we've seen various neural network as continuous 
shape representations as continuous material and appearance representations or motion representations for example we have seen occupancy networks which was one of the first and uh came out at the same conference as scene representation networks and deep sdf which follow a very similar principle in occupancy networks given a 3d coordinate these are all coordinate based networks given the 3d coordinate we predict the occupancy in contrast deep sdf predicts assigned distance the neural radiance field takes us input the 3d coordinate and the view direction and predicts color and density in differentiable volumetric rendering we predict the color and the occupancy from a 3d location and in graph we take the 3d location of the point the view direction and a latent shape and appearance code to predict the color and density but always we have this little pretty naive mlp as a simple predictor for these implicit representations so let's sum up coordinate-based networks coordinate-based networks form an effective output representation for shape appearance material and motion and they don't require any discretization and are able to model arbitrary topology they can be learned from images via differentiable rendering as we've seen and there's many applications in 3d reconstruction motion estimation few synthesis but also we have recently done some work in the area of robotics and self-driving where this representation proved to be useful however they are implicit that means certain properties such as geometry must be extracted in a post-processing step which takes some time and extension to higher dimensions is not necessarily straightforward due to the course of dimensionality we've also seen that the fully connected architecture the global conditioning lead to over smooth results and also the low dimensionality of the input but there is promising directions such as using local features like in confident or pyfu which is also related work as well as better input encodings as proposed in nerf and also there is a lot of work on new architectures for example there is a work called siren that uses different architecture architecture different from a relu mlp to achieve better results that's all from my side today thank you very much for watching |
Computer_Vision_Andreas_Geiger | Computer_Vision_Lecture_51_Probabilistic_Graphical_Models_Structured_Prediction.txt | hello and welcome to lecture number five of this computer vision course over the last few weeks or so i tried to improve a little bit of my audio setup i've purchased some new components and try to improve the acoustics and this is the result so i hope you enjoy the now new and hopefully improved audio quality of this course so back to lecture five lecture five is on probabilistic graphical models lecture five six and seven are a little excursion to techniques that today are not the dominating techniques in computer vision given the um given the fact that deep learning methods have overshadowed a lot of the things that have happened in the past but still they are they have been dominating the computer vision field for almost 20 years and many of the ideas that are part of these techniques are still relevant today and worth discussing and certainly when giving a computer vision class that should treat also techniques that have been very popular in the past so this lecture is structured into five units in the first unit we're gonna cover the basic setup on the setup of so-called structure prediction problems and in particular graphical models and how they can help in modeling such structure prediction problems in unit 2 we're going to cover one particular graphical model the so-called markov random field that's going to be our little friend for the rest of this lecture and also for the next two lectures then in lecture in unit 3 we are discussing factor graphs which is a specific representation that's particularly useful for doing inference and then in unit 4 we're going to discuss one particular inference algorithm which is called belief propagation and we'll look at some very small toy examples in unit 5 before then in the next lecture we're going to cover more challenging more real world examples that can be solved using these techniques so let's start with structure prediction [Music] in the last lecture we've seen stereo matching algorithms in particular block matching algorithms where due to the inverse nature of the stereo matching problem there is a lot of ambiguities there's a lot of matching ambiguities such that the correct match between the left and the right image cannot be determined unambiguously so the fundamental assumption of block matching algorithms is that corresponding regions in the two images should look similar but there is a whole range of situations where these similarity constraints the similarity assumptions that we do on little patches fail and here are some examples for example in textureless areas where it's simply impossible to distinguish adjacent patches or when we have repetitions where a lot of the patches look very similar to each other or in the case of occlusions when the background changes differently from the foreground and so the appearance within the patch that is matched changes dramatically and also in the case of non-laboration surfaces for example when there is a reflection where the appearance doesn't move with the geometry that we'd like to measure now in order to overcome such ambiguities what we can do is we can try to integrate prior knowledge and prior knowledge in the case of stereo matching means knowing what the world looks like in terms of depth maps or disparity maps in terms of what the quantity that we like to infer and one example that we've seen already in last lecture is the brown range image database which 
provides a number of depth maps or depth images from which statistics have been created that show that depth or disparity also very slowly except at object boundaries at object discontinuities which are relatively sparse so we have such distributions where we have a lot of the probability mass centered at areas where the derivative of the range is small but then we have some also some significant mass some non-zero mass that is in areas where we have this this object discontinuities these disparity discontinuities and we wanna take such statistics this is our prior knowledge and now integrate this into the block matching into the stereo matching algorithm and this is called spatial regularization or the simplest form of this is called spatial regularization because spatially what we want to do is we want to integrate these statistics and what that means is we want to encourage smoothness we want to encourage or we want to model the fact that adjacent pixels are more likely to be of the same value of the same disparity value rather than not being the same disparity value but there should be also significant probability for them to differ so how can we do this how can we integrate this knowledge about the statistics of depth maps we have already very briefly touched upon this in the last lecture we can formulate the problem as inference in a graphical model and this is called the structure prediction problem because we are predicting not at every site at every pixel individually the disparity now but we are predicting the whole disparity map at once while integrating our prior knowledge our assumption about interaction of adjacent pixels which is indicated here in this vector graph which we're going to cover later in this lecture by these little squares that are connecting adjacent pixels in this so-called four connected grid where for every pixel on this regular grid we have a connection to the top the bottom the left and the right pixel adjacent to it and this can be written as we'll see in such a form here in such a energy function that gives rise to a so-called gibbs distribution that is the probability distribution over the disparity map and now we want to do maximum upper story inference over the space of all these disparity maps in order to figure out what is the most likely disparity map given these constraints given the data constraints these are the matching cost for example and given this prior knowledge that tells us well we've looked at the statistics of real world devmaps and now we want to integrate this by saying that adjacent sides adjacent pixels have similar values so we're combining the data term and such a smoothness regularizer so at a very high level what are probabilistic graphical models probabilistic graphical models take a probabilistic viewpoint and model the dependency structure of the problem so we really try to model probability distribution and we will try to model the distribution of the entire object the entire disparity map that we want to infer and we want to then do inference over that model that we have so it's a structure prediction problem and we're going to see in a few slides what structure prediction exactly means but essentially it's predicting not just a single pixel disparity but predicting all the disparities at once by taking into account these local interactions these local constraints that we have defined between adjacent random variables in the case of this four connected grid that i've shown before graphical models have really ruled many 
areas at least of computer vision before the deep learning revolution and they are particularly useful in the presence of little training data and they allow for integrating prior knowledge there is two ways to overcome ambiguities in this ill post problems that we often have in computer vision one is to integrate a lot of data possibly annotated data where we have a set of input images and the corresponding disparity maps and then we want to train a stereo matching network end to end and this is what deep learning does but in many cases these large amounts of data are not available if you think for instance in medical applications and therefore it's useful if we have prior knowledge to integrate that prior knowledge and this is where graphical models are powerful however graphical models and also the ideas behind them can be combined can be advantageously combined with deep learning and can also inform deep learning and we'll see that in lecture seven with a few examples that we can combine deep local features with inference in a graphical model and that also we can derive deep learning architectures novel deep learning architectures from inference in graphical models so here i try to put next to each other the pros and cons of probabilistic graphical models the pros are we can integrate prior knowledge if we know something about the problem such as the statistics of depth maps we can integrate that they have relative review parameters which means we can estimate those parameters from limited data and they are interpretable by design because we are as you will see modeling a graph where certain components of the graph have a certain meaning and so if we afterwards inspect what the model has inferred it's much easier to understand what happens compared to looking at a vanilla deep neural network and trying to figure out what the individual neurons are doing on the other side unfortunately um as people also have realized many phenomena are really hard to model and this makes it difficult to apply these graphical models we're using the wrong assumptions we are using um a two simple dependency structure in order to be able to actually do inference that's still tractable for example assuming a pairwise dependency structure where we just have pairwise relationships between variables while the true dependency structure is much more complex and so this introduces errors of course then exploiting large data sets is often difficult also because these techniques are not particularly fast and inference is often approximate even for simple cases because um inference for example in graphs with loops is np-hard and so we need to resort to approximate inference algorithms as we are going to discuss them also in this lecture now i already mentioned that inferencing graphical models is a structure prediction model so what is a structure prediction model formally well structure prediction is in opposition to traditional classification and regression problems in classification or regression we are trying to model a function that goes from some complex input space that can be any kind of objects the input could be something really high dimensional like images or text audio sequences of amino acids etc and the output is a single number and that's either a discrete number then it's called a classification problem or it's a continuous number then it's called a regression problem now in structure prediction we have the same type of inputs we have any kind of objects complex objects high dimensional objects images 
text etc but now the outputs are complex structured objects as well so instead of just predicting a single real or discrete number now the output could also be images texts or parse trees or folds of a protein computer programs etc so the output is now something high dimensional something complex that's the difference between classification regression and structure prediction as we define it here in this lecture so here's an example this is a generic form of a model with parameters theta where the input gets mapped through this function f with parameters theta to some output and then we have two problems associated to it in machine learning the first is learning estimating the parameters theta from the training set and the second is inference making novel predictions for unseen inputs x and in the case of classification or regression the input might be an image and we might want to infer the category that that image belongs to and another example is a siamese network where the input is now a pair of patches and the output is a classification result for that particular pair of patches that gives us a score for a particular disparity value at a particular pixel location and then we can for each pixel independently look at the maximum score and figure out the disparity that way but we do this really independently across all the pixels in the image and even in this case across all the disparity levels so we're not taking the structure into account now in structure prediction we want to take the structure into account so we wanted to inference over the anti-disparity map in the case of stereo or here in this more simple case where we have an observation of a sequence an image sequence and we want to track this object in front then the output here is not a one-dimensional quantity but it's a three-dimensional quantity it's three random variables x1 x2 and x3 and each of these variables in this simple example and we'll see this again at the end of the lecture is a discrete random variable that can take any of three states the vehicle that we're tracking is in the left lane and the center lane or in the right lane and now we want to do inference from these inputs um to infer what at each time step t one two and three the correct state of these variables is and now assume there's a lot of noise in these inputs so the predictions are inaccurate but we can assume some regularities on the output space then this becomes a structure prediction problem because we can say well it's unlikely for instance that the vehicle in the first time step is here and then directly in the next time step moves to the left lane which is physically implausible instead it's more likely that the vehicle stays on a lane or moves just by one lane and so by integrating such constraints as prior knowledge into that model and by doing structure prediction by doing inference in a graphical model we can then take those prior constraints these prior assumptions into account so one example for structure prediction problem which is a problem that has a structured high dimensional output is probabilistic graphical models or inference in probabilistic graphical models where local dependencies are encoded or also deep neural networks with image-based outputs as we have seen in the previous lecture for example with the end-to-end models that we're taking two images as input and predicting an entire disparity map so they also fall into the category of structure prediction models but typically when we speak about structure prediction here 
we refer to these probabilistic graphical models now there's a whole variety of graphical models in the literature and there's entire courses on this topic and indeed i have taught an entire class just on graphical models in previous years but in the context of this lecture we are just keeping it to three lectures to get the gist of graphical models and we're just looking at a very specific type of graphical model so here you can see a taxonomy maybe you have heard of bayesian networks before which are directed graphical models in the context of this lecture we'll look at one particular graphical model that's that's very relevant for computer vision applications and these are undirected graphs graphical models with undirected graphs so called markov networks or vector graphs so here's a brief overview of this little excursion in lecture five this lecture here we are going to introduce probabilistic graphical models and we are going to be talking about markov networks factor graphs and belief propagation an inference algorithm to derive marginals and so-called maximum apostolic solutions in lecture six we're going to see some applications of graphical models in the context of stereo optical flow and multiview reconstruction and then in lecture seven we're gonna talk about learning or parameter estimation in graphical models and also how to combine them with deep learning and if you wanna know more about graphical models i highly recommend having a look at these two links here the first is a tutorial that comes also with a book that's available for free online from sebastian novozin and christoph lampert it's a great introduction to graphical models and then a little bit more extensive the book by barber that's also available online |
Computer_Vision_Andreas_Geiger | Computer_Vision_Lecture_73_Learning_in_Graphical_Models_Deep_Structured_Models.txt | so far we discussed log linear models log linear models are models where the parameters w appear in a log linear fashion in the model equation in the probability distribution this assumption severely limits these type of models because the features must already be very powerful they must already do the heavy lifting the parameter vector can only adjust in a linear fashion the influence of these different features but it cannot do anything sophisticated other than linearly rewriting these features what we ideally like to have is the feature functions themselves being parametrized with parameters where these parameters do not have to appear in a linear fashion but they could also be these feature functions could for example be little neural networks where the parameters don't appear linearly and we want to update not only the relative weighting of these features as in these log linear models but we also want to update these parameters of the feature functions themselves jointly by training by maximizing the likelihood and this is what we're going to discuss today this is called deep structured models where the parameters don't need to appear just linearly in the expression let's look at the log linear models again in the log linear model the parameters appear like linear in this expression if they take the logarithm of this term here we see that we have an inner product of w and the features so this expression here depends linearly on w and therefore the features they must do the heavy lifting they must be quite complicated and advanced but in practice of course we don't know the perfect features so it would be better if we could actually also train the features train the parameters learn and estimate the parameters of the features themselves and this leads us leads us to deep structured models in deep structured models and i've highlighted here the difference in red we do not need this linear weighing of the features because the features themselves depend now on the parameters w and they can depend on these parameters in a non-linear fashion for example this could be a neural network or for each potential that could be a little neural network that depends on parameters w and that outputs a feature value for a given input x and y in other words the potential functions now are directly parametrized via the parameter vector w this results in a much more flexible model as psy can be any complex function it could be a neural network with many layers which depends in a very non-linear fashion on the parameters which can represent very complex relationships of the inputs x y and the output the features of this function of course this representation here is a superset of this representation it includes this representation because if we specify these potential functions in this more general form then this form here is included i mean this representation here can just be a linear one it's just a special case but in general we're going to be interested now in in nonlinear dependencies on w now similarly to the derivation of the log likelihood the negative log likelihood and the gradient of the negative log likelihood in the previous unit for log linear models here for this more general class of deep structured models we can also derive the negative log likelihood and its gradient and the duration is is really similar so i'm i'm not exercising it here again because it's so similar but 
what i did is i highlighted the differences of the negative log likelihood and its gradient with respect to those in the log linear case in red so what you can see here for example is that before we had an inner product of features and w and now here we at this place we just have the feature that depends on w and similar inside this partition function expression similarly for the gradient we have a feature that depends on w and also inside this expectation that we have to calculate so the form of these equations is really similar to the form that we've seen already except that now w doesn't appear in this inner product with these square brackets but it appears directly inside the feature function again the sum can be efficiently computed as the features decompose so we do assume that there is a graphical model that makes inference which is required for training tractable and that these potentials here decompose into sum of potentials of for example unreal pairwise potentials but each of those can now depend on the parameters as a little remark here we have included the dependency on the parameters as an argument of this function while often when we talk about neural networks or functions parameterized functions in general we include the dependency as a subscript but we mean the same so it's it's just more convenient here because we have another subscript of this potentials to include the dependency as an argument as well now in order to compute the gradient and the negative likelihood in order to perform a gradient update step in the context of a great and decent or stochastic gradient descent algorithm we can exploit again the efficiency of inferencing graphical models if the graphical model is tractable to compute these expressions efficiently however differently from before now in order to compute these two quantities we also have to do inference in the uh neural networks if we assume that these feature functions are specified in neural networks in order to compute the value of an of these neural networks for a particular input and parameter configuration as well as the gradient of these neural networks with respect to the parameters w from deep learning we know that in order to compute the output for a particular input we have to apply the forward algorithm the forward pass and in order to compute the gradients we can compute the back propagation algorithm and these two algorithms are tractable because they heavily utilize the principle of dynamic programming of reusing computation and storing intermediate information for example in the forward pass to compute the quantities in the backward pass efficiently but it is more complex now before we just had to compute uh the feature the inner product between the features and the parameters now we have to do inference in potentially very deep neural networks not to compute the features and the gradients of these features so what does the algorithm look like what we do now is we compute a forward pass in order to get the feature values then we compute a backward pass to obtain the gradients and then we can compute the marginals and the two quantities from before using message passing and with those the gradient of the negative log likelihood and the negative log likelihood update the parameters w now what is the problem of this approach the problem is that this becomes even slower than before already in learning in a conditional random field is not super fast but now because we have instead of just multiplying a weight vector with a feature 
vector we have to actually compute a forward and a backward pass through these deep neural networks at every iteration of the great and decent algorithm and for every potential unless they have shared weights this becomes really slow it's really slow as the forward and the backward pass are required to calculate the features and the gradients for this graphical model inference for this belief propagation algorithm in every gradient update step however there are some alternatives one alternative is to interleave learning and inference and this has been discussed in this paper learning deep structured models from icml 2015. now this makes learning faster but it's still comparably slow the applications that have been tackled in this line of works are still relatively simple compared to the applications we are interested typically and so it's it's a line of work that hasn't been as as fruitful another alternative that is more heavily used today is so-called unrolled inference which is much simpler leads also to a much simpler algorithm but we lose the probabilistic interpretation and this is what we're going to discuss in the remainder of this unit the idea in unrolled inference is to consider the inference process for example some product belief propagation in a graphical model as a sequence of small computations for which we can comp uh calculate a computation graph as we write down a computation graph for a particular deep learning architecture and then we unroll a fixed number of inference iterations similar to unrolling of an rnn during learning a recurrent neural network but we do this only for a fixed number of iterations when we do inference in a graphical model typically we don't know how many iterations we have to do until convergence depending on the algorithm and the approximations that we have assumed but here we say well for a fixed number of iterations we want to now train a model that for this number of iterations produces the best output and so when unrolling for a fixed number of iterations this inference algorithm we can compute the gradients simply by using automatic differentiation so the gradients can be computed we don't we don't need to compute them ourselves but there's toolboxes available that do that for us some remarks we are now with this idea of inference unrolling for fixed number of iterations in the regime of empirical risk minimization so it's a purely deterministic approach which where we give up the probabilistic viewpoint we can't really trust that our quantities that we compute are really really do have a probabilistic interpretation like if we apply the sum product belief propagation algorithm on trees where we know we compute the correct marginals this is not the case here anymore but instead what we do here is we do empirical risk minimization so for for this fixed like for this for this um inference algorithm that we unroll for fixed number of iterations we want to update the parameters in a way that for for this particular instantiation of this algorithm on expectation we get the best prediction say the best maximum operatory prediction under that model given the trainings data set that we have right and the advantage of this is that now this is often fast enough for efficient training because in compared to belief propagation where in inside belief propagation we have to um do the forward backward pass in order to compute the gradients for stochastic gradient descent here we just have to basically do a training of a deep neural network where the 
architecture of the deep neural network is defined by the inference algorithm that we have unrolled so we have to find you can think about this as having defined a novel deep neural network architecture which integrates some of the knowledge that we have about the problem that that is encoded in these potentials and constraints of the graphical model but now unrolled into a chain of computations which has some parameters that yields a certain specific architecture of a deep neural network where we can just do learning or empirical risk minimization by applying stochastic gradient descent using the back propagation algorithm for computing the gradients so what we do is we effectively integrate the structure of the problem that we have encoded in this graphical model into the architecture of a deep neural network this is now not a standard vanilla deep neural network like a convolutional deep neural network anymore it's not something that we can instantiate with the standard components of pytorch but we have derived a special new architecture through this unroll pro unrolling process and this can be thought of as a form of regularization a form of a heart constraint changing or specifying the architecture of a neural network is a hard constraint we're not softly regularizing using using an l2 loss or something but we have a hard constraint on the architecture on the space of possible functions that can be expressed that hopefully improves generalization performance and learning from the data that we have here's a little illustration of inference unrolling um so this is also how r and ns are unrolled but you can think of this also as inference in a graphical model where we have a hidden state here and an input so the input is the image and then we have this hidden state which is the output variables at the current iteration and then we iterate we run several iterations of the sum product belief propagation algorithm or the max product belief propagation algorithm until after a fixed number of iterations we arrive at an output and so we can equivalently illustrate this in this unrolled representation where we don't have a a backward error a backward loop here but we have unrolled this backward arrow into the individual time steps this t0 t1 t2 are the iterations of the inference algorithm right so this is then corresponding to at the higher level to the computation graph that we want to do um uh the forward pass and the backward pass on but of course this algorithm that runs here at each iteration is complex and we have to run the forward and backward pass on this computation graph that is defined here so this is just a very high level picture and in order to do this because we don't want to compute all the derivatives of this unrolled algorithm ourself what we can use is we can use this beautiful toolboxes that implement automatic differentiation for us and that in modern deep learning frameworks like pytorch is standard that's very easy to use and the idea here really is that we can rewrite complicated functions as a composition of simple functions so here we have a com a function f that's written as a composition of simpler functions each simple function f k has a simple derivative and we can then use the chain rule to compute the gradient of this more complex composed function so here's an example this is basically exactly what happens in um when you when we do the forward and the backward pass in deep learning but we can do this not only for generic deep learning architecture but also for 
arbitrary compositions of functions as they occur when we unroll for example the sum product belief propagation algorithm this is a very simple example a function of two variables cosine of x times sine of y plus x over y and we can write this as this computation graph cosine of x sine of y multiplied together and adding these two divided by each other okay so it's relatively easy now with the modern frameworks to actually implement a model using this unrolled inference ideas and i want to show you just uh two little examples where this has been used but there's many more examples in the literature and this is still frequently used in particular to improve generalization or to develop novel architectures for specific problems one of the first instances where this has occurred is in this paper called conditional random fields as recurrent neural networks where the goal is semantic segmentation and we have a graphical model you can recognize the form of the graphical model this is using a different notation this is copied from the paper but it's a it's an energy so we have x of minus e as the gibbs distribution and this energy is composed of unary and pairwise terms where these pairwise terms here now connect all possible combination of pixels all sides in the image and they are specified through this gaussian kernels here and there's a special inference algorithm for this so-called densely connected crf that despite the fact that we have a lot of these potentials gives us uh results in comparably little time so it's very efficient and it's uh based on the so-called mean field algorithm so it's not the same as the belief propagation algorithm that we've discussed before but it's a different inference algorithm however the general gist of this is the same there is a graphical model and there is some inference algorithm that gives us a solution and we are unrolling that inference algorithm for a couple of iterations and then we are trying we were doing empirical risk minimization given the data set we are trying to make as little error in the predictions for the inputs in the data set compared to the annotated examples in that data set and here are some results so what we see here is three input images and the corresponding semantic segmentations where each pixels is classified into a particular class label such as rider bicycle sofa horse and so on and what we can see is that so here on the left these are two deep learning architectures we can see that they make certain errors so this architecture for instance is a very simple one that leads to very smooth results and this architecture is a more complicated one but as you can see it produces quite a bit of noise now using these smoothness constraints these priors that are encoded in the crf by combining the craft with like deep potentials and back propagating training this end to end in this case for these examples the segmentation boundaries in particular have improved as you can see here at the bicycle or at the legs of the horse for example and here on the right this is the illustration of the corresponding ground roof that has been labeled so these results are of course on the validation or the test set and these examples have these particular data samples have not been participated in a training but you can imagine similar examples being part of the training set here's the second example this is raynette i work from our group where we have tried to apply this idea of unrolling to the multiview reconstruction model that we have introduced 
in lecture six where we have used the sum product algorithm to come up with base optimal reconstructions and depth maps that still exhibit the uncertainty and the reconstruction and what's different here now from before this is an equation that you will recognize from before and this one here as well but what's different from before is that now we have a little neural network at every image that produces a probability score for a particular depth value and that relates then to the occupancy of a particular voxel in that volume right so we have neural networks that can now estimate roughly the probability for a a particular depth that is observed for every pixel in every image that participates in the reconstruction and we are optimizing the parameters of these neural networks as well as the parameters of the graphical models for inference that encodes this image formation process here by applying a forward and backward pass applying back propagation through his unrolled computation graph and this is a high-level picture of this method we see the like some 2d convolutional neural networks here that extract features from the images and then what happens here is we take a reference view and adjacent fuse and from those using multi-view reconstruction we can actually obtain surface probabilities this is the probability of over the depth at a particular pixel so we get a volume here because we have this for every pixel in the image in the input images and we do this for all the reference views and then here we see three iterations three unrolled iterations of this sum product belief propagation algorithm where we have ray potentials that encode the image formation process and these unit potentials that encode a prior about the occupancy of the space and we are updating iteratively these occupancy variables this is a variable associated with every voxel in order to obtain a depth map at every view and then we assume there is ground roof depth maps available and minimize the expected loss of the infrared depth distribution with respect to the ground truth depth map here's a result of this you can see the corresponding input image you can see the result of just local predictions using a cnn and then the result of the combination of the local features with the constraints that are defined by the graphical model by doing unrolling of this graphical model and training the cnn features and the weights in this graphical model jointly |
Computer_Vision_Andreas_Geiger | Computer_Vision_Lecture_63_Applications_of_Graphical_Models_Optical_Flow.txt | in the last unit of this lecture we're going to discuss the optical flow problem and how it can be formulated or thought of as inference in a markov random field optical flow is defined as the apparent motion of objects surfaces and edges in a visual scene caused by the relative motion between an observer and a scene so these little errors that you can see here uh indicate how much a particular pixel moves and in this scene where we have this pilot landing an airplane on this airfield here you can see that there is a point of expansion where the arrows are very small and points that very nearby the observer are larger than points that are further away right and this motion is a current motion of how pixels move in an image is caused by either objects moving objects in the scene or the observer moving in this case the scene is static but the camera is moving with respect to the scene causing this motion field and optical flow has been first investigated by jj gibson in the 1950s who was a psychologist and analyzed how animals conduct low level perception and also use these ideas in order to help pilots navigate optical flow of course has a relationship to stereo both are 2d estimation problems but the crucial difference is that in stereo we take two images at the same time or equivalently we take two images of a static scene and in optical flow we don't make such assumptions these two images are taking at two time steps without any restrictions on the scene so in stereo we only have camera motion while optical flow we have camera and object motion and therefore stereo is a 1d estimation problem we just have to search along the apipolar line once we have found the ap polar geometry we can search along the apipolar line in 1d and optical flow is a full 2d estimation problem a pixel in the first image can be located at any pixel in the second image there is no constraint a priori and also in animals these two principles are used to a different different extent so for instance monkeys they have a like humans a forward-oriented vision system and so they have a large overlapping field of view indicated here by a dark gray shaded area and so they can do depth perception very well while for example squirrels they have an outside oriented vision system so they have a very small a very narrow overlapping field of view and they are not so good at doing stereo but they they can of course perceive very well in 360 degree if some some other animal is approaching for example there's two terms that i'd like to define now before we start optical flow is one term an optical flow field a motion field is another term a motion field is the thing that we are actually after but the optical flow is the thing that we can actually only measure so what's the difference a motion field is the 2d motion um or represents the the the projected motion of the 3d of the actual 3d motion of an object of points in the scene onto the image plane so let's assume this is an image plane here and the camera with camera center here and there's a 3d point that moves from here to here and that's projected onto that image plane here and here and so this is the motion field that we like to measure but of course we don't have access to that we don't know that this physical point is actually like this this projection here and this projection here correspond to the same physical point because we just have we have just estimated 
this correspondence or we can't just estimate that correspondence we don't have any any 3d structure any physical information about the world except this 2d projection in the image so the optical flow is the 2d velocity field describing the apparent motion in the image and that's what we can actually measure it's the displacement of pixels looking similar if there would be a corner here that's unambiguous and that corner would move to here then we could match these two using optical flow and in this case the optical flow and the motion field would be the same and in most cases they are so it's useful to estimate optical flow but it's not always the case so it's not always the case that optical flow and the motion field are the same why let's make a little forward experiment let's consider this ball so think about this image that i show here on the right as a lamborghini ball so a ball that doesn't have any specularities no specular material it's a lumberjan ball and it's rotating in 3d around the up axis so it's routing like rotating like this it's rotating 3d around its up axis what does the 2d motion field look like can you imagine that well because it's rotating around the up axis you will have like small little errors here and then you have large arrows here and smaller arrows here so you really see the projected 3d motion as a motion field in the image but what does the 2d optical flow field look like well it's a lumberjan ball without texture rotating in 3d so despite the fact that it's rotating it's actually not changing the appearance so the optical flow field will be zero there will be no motion that we can observe because both images look exactly identical conversely if we look at a stationary specular ball so this is now a ball with a specular material so we can see a highlight and now we don't move the ball but we move the light source then this highlight will start moving right what does the 2d motion field look like well the ball is static the 3d geometry is the same it doesn't move so the motion field will be zero everywhere in the image but the optical flow field will not be zero because this highlight here is moving on that is moving because of the specular reflection moving on that object and so we'll see we observe a motion here and that's also what an optical flow algorithm would determine it would determine the motion of that highlight moving despite the object the geometry actually being static so that's an example or these are two examples for how motion field a motion field is different from an optical flow field so the optical flow tells us something may be ambiguous about the 3d structure of the world the motion of objects in the viewing area and the motion of the observer if any and it's a kind of a substitute for the motion fields that we like to estimate and it's often a pretty accurate substitute except for these extreme cases that i've just shown in contrast to stereo we can't make any assumptions on the apollo geometry it's a real 2d estimation problem so it's a much harder problem of course has a much larger search space and it's used by animals so here's an example of a so-called northern gannet that's a bird that hunts for fish in the water and that bird hovers over the water's surface and then once it identifies fish it it uh it goes into a descent um and and dives into the water and in order to dive into the water it needs to flap its wing its wings to the body at the right point in time such that it doesn't damage its wings while while entering the 
water surface which is done at or performed at around 60 or 70 kilometers per hour so it would have dramatic consequences for the bird if the wings wouldn't be flapped and that bird uses the optical flow the speed with which this pattern moves in order to determine determine very precisely in milliseconds the point in time when the water surface will be hit in order to adjust the wings so here's an example of this oh so this was an example from the animal kingdom now of course optical flow has many important applications also in in computer vision and so one application is a video interpolation of frame rate adaption if we know the image motion between two frames this is michael black here who was a pioneer in optical flow or is a pioneer in optical flow if we know the relative motion here by this flow field then of course we can take the first frame and warp it only halfway from time t not to time t plus one but two time t plus 0.5 so if from these two images we have estimated the flow we can reduce the flow vectors or scale the flow vectors by one half and then we can synthesize an artificial frame in between and so we can create artificial slow motion effect this has been done for example in this in this movie here where this so-called bullet time effect has been created by putting a lot of cameras around the actor and then creating a virtual trajectory that flies around the actor virtually but because there's only a fixed number of cameras that can be actually put into place you need to interpolate the frames in between in order to create a smooth motion this is illustrated here so here's the raw bullet time footage you can see the cameras these little holes in the green room are the cameras and here's now the the cleaned up footage we can see it very smoothly transitions and the final effect that we're all familiar with yeah so this is how professional studios use it i have also used it as a little exercise in one of the previous lectures and here's a student that put a assembled a yanga tower and then taped it with a regular video camera and interpolated the frames in between so i want to briefly show you this this is of course a very simple optical flow algorithm that has been used here so you can see a lot of artifacts and there's kind of a stop and go motion but still you can see how how like the original hive like like low frame rate could be increased with this technique and it works it works better for small motions and works less well it's much more challenging to solve this problem for large motions when the objects move very fast good optical flow can also be used for video compression so for example to compress an image sequence you can predict new frames using the optical flow field and only stored and once we store the optical flow and then how to fix the prediction so we take this image at time t warp it using the motion field or the optical flow field here and then we fix the prediction by by changing some of the pixels only and because the flow fields are smooth they are much easier to compress much easier to store than uh storing the second image separately from the first image something like this is actually done in the mpeg video compression algorithm and then in autonomous driving if you combine disparity with optical flow you get 3d motion that's called scene flow and that's something that it's not so easy to visualize but here it's visualized separately the disparity in the top and the optical flow on the bottom so you know now exactly for which we have inferred 
for which 3d point for for many of these 3d points in which 3d direction they actually move if you know the depth and the optical flow and you can do this using a stereo camera and tracking frames over time okay now optical flow comes with a problem maybe the most famous problem in the context of optical flow is called the aperture problem and the aperture problem is just following if i just have a single observation like a single pixel but in this case it's illustrated by looking through a hole looking for an aperture at a scene if i just have this single observation can i determine how things move how does this line move if i look just at this very small aperture on this edge let's say this is an edge of an object i'm looking for this very small hole how does the line move if i'm toggling now between these frames here it's the first impression of how this moves is that it moves from the bottom left to the top right i think most people would agree with this right but if i show you exactly the same scene now without the aperture removing the hole you can see that there was a stick that was actually moving from the left to the right and not from the bottom left to the top right this means that a single observation is not enough to determine optical flow and that also makes sense mathematically if i just observe a single pixel changing intensity right then i cannot determine term and optical flow because optical flow has two unknowns i need to determine the flow in the x-coordinate of the image plane and then the y-coordinate of the image plane and a similar effect you can also observe or has been used has been exploited by the barber pole illusion which you can find in some of the barber shops for example in the s um where you have such a such a pole where this pole here is rotating around the up axis but the pattern that's generated the optical flow field is going upwards so you have a optical flow field that that goes perpendicular the apparent motion is perpendicular to the motion that the pole is actually rotating that's quite interesting good now we know what optical flow is now we just need to determine it and there is a seminal paper by horne and chunk that's called determining optical flow and it has created thousands of follow-up papers on this topic there's also another famous paper on optical flow that's called the lucas canada optical flow but that is a local flow mechanism so that it suffers from the same window artifacts and because here we're considering flow in the context of markov random fields um this horn chunk algorithm is the one that relates to this markov random fields because it also makes this smoothness assumptions let's formulate this problem mathematically consider the image i and it was considered like this in the original algorithm as a function of continuous variables x y and t so we have a continuous space time volume x y is the image domain and t is the time domain now of course we're going to discretize this later on if we want to implement it at a computer and in a computer but for theoretical considerations and that is what has been done in this paper it has been considered as a variational problem which we're not going to spend so much time on here um because we're interested in its interpretation as a mark of random field but they have considered it as a variational problem where there is an image that's a function of these continue continuous variables and the quantity that we infer is also a function it's a flow field that's continuous over the 
spatial domain so this is the con the flow field we consider u of x y this is the flow in the u direction in the x direction in the image plane and then we have another one-dimensional field v so effectively the flow field is a two-dimensional field but here has been split into two functions two one-dimensional functions u and v and at each continuous location in the image um this is what we want to infer the value of u this is a continuous flow field so we want to effectively infer um functions and that's why we are minimizing an energy functional right it's a it's a expression an energy that depends not on vectors that we want to optimize over but on functions so we have a function of a function it's called an energy functional and the energy functional is as follows in the original formulation of hornet junk we have this first term here which is a quadratic penalty for brightness change we have the intensity of a pixel that's displaced by the optical flow u and v at that pixel x y at time t plus one and that should be similar so we are subtracting it from or to the image at location x and y at time t and so we're minimizing the squared error here we want the image or the intensity at x y time t to be the same the brightness should be the same that's why it's called brightness constancy assumption the brightness should be the same if the optical flow is correct if we move that pixel by u and v to the next time step t plus one then we want the intensity to be the same the appearance should be the same right optical flow is about the apparent motion so if the optical flow is correct then we want the appearance to stay the same we have a quadratic penalty for the brightness change and then because we have the aperture problem we also need a penalty for the flow change this is a regular riser without which the problem could not be optimized so we have a regularization parameter lumped on and then nabla here is the gradient operator so if the gradient of u and v the gradient of these flow fields we want to minimize the squared gradient which means we want to have to change the variation in the optical flow in local neighborhoods to be small we want it to be smooth this is a continuous formulation a continuous equivalent to what we have done for stereo matching so here we have again the equation from the previous slide and what we can see is that minimizing this directly is a really hard problem because the energy is highly non-convex and has many local minima why is that well the last term that's unproblematic it's just a quadratic regularizer but here in the first term we have the optical flow field the quantity that we want to infer u and v as an argument of the image it's added to the pixel location and because images are highly non-convex highly non-linear if you have a textured image you have a lot of valleys and hills right and because this enters as an argument this expression i of x plus u is highly non-linear highly non-convex right if we move a little bit in the spatial image domain we might have to jump over a valley or a mountain in this in this if you think of this image as a as an elevation map the solution to this that is often done in engineering and that has been used by horn and shank as well is to linearize the brightness constancy assumption so we have a non-linear problem we're simply linearizing it what does that mean well we are linearizing it by taking a first order taylor approximation now we have a we take a first order taylor approximation of this term here now we 
have a function i image of three variables so we need a multivariate taylor series so here what i what i've drawn here is a a multi-variable taylor series i have i've drawn the taylor series for two variables which you might be familiar with what this means here is that the function f at x and y um is a linearized around a and b this is what this means here is approximately if i don't consider the higher order the quadratic and higher order terms here the function at a and b plus the gradient in x direction times x minus a plus the gradient in y direction times y minus b so it's really just a linear approximation to the function f at the location a and b so this is something that we're already familiar with now if we apply this to the this term here of the brightness constancy assumption then we up this is this term here then we obtain the following well we in this case now we want to linearize around x y and t the current estimate so we have i of x y and t and then we have this expression with respect to x times well the argument x plus u minus x right so we need to subtract the variable that we are linearizing around and the same for y and the same for t and you can already see that x and minus x cancels and y and minus y cancel and t and minus t cancel and so this expression simplifies to this expression here and if we now plug this into here we can also see that we have a minus i of x y t here so this term cancels as well so what remains from the brightness constancy assumption quietness constancy constraint and what is often referred to as the brightness constancy constraint is actually this expression here so this term from before now is approximated by this linearized expression here at the bottom where we have replaced this term with the linearized expression from the previous slide and then cancel the i and so this is what remains now we can see that this is linear now this term here is linear without the square it's linear in u and v before it was not linear because it appeared as an argument now it's linear so this first term here is quadratic in unb and that is something that's convex and we can easily work with we can find good solutions for even closed form solutions it's effectively it's a very high dimensional parabola now of course in a computer we can't implement this integrals over the spatial domain so what we do effectively in order to apply this algorithm is we do spatial discretization spatial in the image domain so these integrals if we do this i'm not going to give all the details here but if we do this then the integrals here are going to turn into sums so we're not going to integrate over the image domain anymore but we're going to sum over the pixels and this variational formulation becomes a standard energy function where now these functions u and v are replaced by flow fields by the flow field u and v so u and v are matrices of the size of the image where u stores the value of the flow vectors at each pixel in horizontal direction and v stores those in vertical direction right and yes and so we have the derivative of i with respect to x and y and t but now here we don't have functions but we have just these are elements of this matrices u x y index by x and y these are discrete spatially discrete elements and also this this term here then becomes this term so we instead of the gradient here we have a numerical approximation to the gradient you can see we have u x and minus u x plus one or v y minus v and we are x minus v x plus one or here v y minus v y plus 
one now we can see that this entire expression here is quadratic in all the u's and v's in all the u and v variables so therefore has a unique optimum and that optimum can even be found in closed form solution but it's still a very large optimization problem right it still has a lot of variables this there's as many new variables and as many v variables as there are pixels in the image and therefore typically iterative solvers are used to obtain a solution so how do we obtain a solution well we have a quadratic problem so we differentiate uv and set the gradient to zero but it's a very large linear system so we can't do that directly but we can exploit the sparsity and why is it sparse well because this spatial well this is a this is a local constraint and these here are also local because they only connect neighboring variables right so you already see maybe a little bit like how this is related to markov random fields we have the same kind of data and then smoothness term here so we have a sparse linear system but now in contrast to the first unit where we had a discrete variables here we have all all the variables u and v are continuous so we use gradient descent and we can use iterative techniques like gauss-seidel etc now one problem with this approach is that this linearization that we have assumed before only works for really small motion because otherwise we are making mistakes and the solution to that that's typically performed is to first of all iteratively estimate so we we make a step and if we haven't converged then we make another step and we re uh linearize around the current estimate and because even that is not enough for recovering large motions we also typically do a course to find estimation strategy where we start with a small image resolution and then we do the optical flow estimation there because we know the optical flow is not larger than a few pixels and then we we go to the next higher resolution so we upscale or yeah we use the next scale we basically downscale we build an image pyramid but then we start at the smallest resolution and they take the next image in that pyramid the next resolution warped the target image based on the optical flow estimated at the lower resolution and use this as an initialization because now the um well now the starting point is already better so the optical flow has only the delta has to be estimated which is smaller and that can be done done using these iterate iterative techniques so we do iterative course to find estimation and warping to go from very coarse to very high resolution results and here's the result of the horn chunk algorithm we have two examples for two different scenes from the middlebury dataset and on the right a color-coded flow field where the color denotes the orientation and the intensity denotes the strength of the optical flow so for example here at the boundaries the optical flow strength is very small the vectors are not very long but then here for this ball the colors are very colorful and so the flow is stronger and i can show you i can toggle between these input images for you to get a feeling how the objects actually move you can see that this is a very simple data set this is one of the first data sets that people have really quantitatively worked with and now these algorithms work on much more challenging scenes as well you can see that this for instance here is rotating that's why we have this rotating flow field here but other things like this here are more translating so the horn and chunk 
results that we have just seen are quite plausible already however the flow is very smooth if we look closely we can see that in the transition from these two transition between these two objects it's not as sharp as it actually is so we are over smoothing somehow and the reason for this is that we have to overcome these ambiguities by setting this lambda to a relatively high value to overcome the aperture problem and this is very difficult ill post solve this ill pose task um but this then over smooths the flow discontinuity continuity so we have this this trade-off here and one particular problem is that we're making a very bad assumption we're assuming that these terms here are all quadratic and we have assumed that for convenience really because it's convenient we can then use standard techniques to solve these quadratic problems quadratic problems are always convenient but convenience is not a good thing here because these assumptions are are really problematic if we have a quadratic penalty for instance for the change in optical flow then if it that means that if we have a discontinuity in terms of optical flow if there's like two objects that move very that have a very different optical flow then the model gets penalized a lot for making a sharp transition there and that's why we get this over smoothing artifacts so we should use something different than the l2 regularizer and that's also related to the problems we have seen in the context of markov random fields in other words this quadratic penalty is penalized large changes too much and they cause over smoothing and therefore there has been a whole line of research that tried to formulate optical flow more robustly by using better penalties penalties that are more aligned with the statistics of the real world like in stereo where we try to come close to the statistics of the real world so here's the connection of the optical flow problem to map inference in an mrf we have the gibbs formulation now that we have already that we are already familiar with this is a distribution over u and v is equal to one over the partition function x of minus the energy of u and v where the energy the so-called gibbs energy is defined by the energy that we have just derived right so we can take that energy that we have just derived for optical flow and plug it as an energy into the gibbs energy term and this gives us a distribution so there's this duality between the two representations and now performing map inference on this distribution is the same as minimizing this energy but as u and v are continuous here now not discrete as in the case of stereo we solve inference now with gradient descent not belief propagation so we have a different inference algorithm but we can think about the model in the same way as map inference in a in a markov random field or in a factor graph or in a graphical model in general now um this gibbs energy this also leads to the relationship of the these this energy function is quadratic penalties to the distribution that is implied by that because the skips energy has a corresponding gibbs distribution if we take the skips energy and plug it in here because we have x of minus this the summation here turns into a product so we have instead of these summations here we have just a product also here and we have x of minus this data term and the smoothness terms and we can identify these terms as go as gaussian distributions now right we have a distribution that's a factor of many small gaussian distributions because we 
have x of the minus something to the power of two which is the definition of a gaussian distribution and we have this everywhere also for the regular rises in other words quadratic penalties translate to gaussian distributions and these assumptions are invalid both for the brightness consistency constraint as well as for the smoothness constraint why well as we've seen gaussian distributions correspond to squared loss functions which are not robust to outliers or to this flow discontinuities let's say so these outliers occur at object boundaries because we have a violation of this squared smoothness regularizer and they also occur for the data term think about non-lab version scenes where you have highlights which cause outliers etc or if you have occlusions for example the solution here that has been introduced in a seminal work by black and anadahn is to use a robust data term and robust smoothness penalties instead which we call row here so instead of this quadratic penalty we use a function rho and that function is not quadratic but it's row more robust now the question is of course how to choose row d and rho s for the data in the smoothness term we want a prior that allows for discontinuities in the optical flow and the likelihood that allows for outliers and occlusions so what we what we do is um we replace the in the probabilistic formulation we replace the gaussian with a heavy tail student t distribution that corresponds in the energy domain to the so-called laurentian penalty so here we have the student t definition of the student t distribution that we want to plug in here that corresponds to um this uh we can we can derive that this corresponds to the following choice this choice here of the penalties row d and rho s so if we chose rho d and rho s as this expression then the distributions that we have assumed here for the gibbs distribution are student t distributions instead of the gaussian distributions that we have assumed if we would would use a quadratic penalty instead and here's a comparison this is now in the energy domain so we have a gaussian and this is this corresponding to a parabola because it's if we take the negative log density then this is a squared function and we can see how this is basically um leading to the formulas on the bottom actually both correspond to this plot here this is a bit confusing i just see now um so this is just a squared uh function which is not robust to outliers like at float or object discontinuities because it penalizes them heavily there's a strong penalty for large values and in contrast the student t distribution here in red is uh more gracious so it has uh heavy tails it's called a heavy tail the probability far away from the center is much larger than for the gaussian where the probability very quickly declines to zero and that's why we have this very large values here and so in the negative log density in the energy domain we have uh curves that look like this that don't grow quadratically but that flatten flatten out here and if you compare this to the actual data distribution the empirical data distribution of how these for instance the gradient of the flow fields look like similar to what we have done with the brown range image database in the case of stereo we can see that this red distribution is much better to align to the blue empirical data distribution compared to the gaussian here on the left and now we see an example of what this changes here is the result of horn and chunk and this is the result of black and anadan 
it's actually the result of sun and black that has used a couple of more tricks but essentially uses the same ideas as in the original paper plus some additional insights you can see how the how the boundaries are much sharper compared to the original honshank algorithm primarily because of the more precise assumptions now finally um i wanna of course also mention that um optical flow has not only be tackled by uh classical algorithms like inferencing graphical models or variational techniques but it has also been approached with end-to-end deep learning and since a few years now we see that these techniques one of the the latest fields where deep learning has succeeded to over overcome uh classical methods and one of the reasons for this is that there's it's very challenging to generate ground truth data that could be used for deep models to learn from and that's one of the reasons why deep learning has has taken a while in order to to outperform the classical techniques and actually the first technique the first deep technique called flow net in the seminal paper by dosavitsky al has not been able to outperform the classical techniques it was more a demonstration that optical flow estimation using supervised learning with large synthetic data sets is possible so this network is very similar to the display network it's almost the same it has an encoder with convolutions with strides and a decoder of up convolutions and skip connections so it's basically a u-net or hourglass architecture and it uses a multi-scale loss curriculum learning and a lot of synthetic training data because it has so many parameters now that we we need a lot of data and there is not enough real world data so synthetic data has been used and there is a extension of this flow net model is called flow n2 that's basically stacking multiple units together in order to handle different problems separately like large displacements and small displacements small flow vectors and fuses them and this leads to results that are this was was a state of the art in uh 2017 here you can see some examples on the right these are really challenging images you can see a lot of textureless regions and compared to classical methods this learning based approach was able to resolve some of those and then more recently it has been demonstrated that actually optical flow can also be optical deep optical flow algorithms can also be trained in an unsupervised fashion by leveraging the principles and models that we have used in classical in classical optical flow estimation techniques by predicting with a deep network the forward and the backward flow and doing consistency checks and then warping the images into each other using these estimated flow fields and putting a data loss that's similar to the data term that we have just discussed and also putting a smoothness loss on the flow fields and then we can of course train this model using just images without any optical flow crown truth which is great and so there it's possible to learn optical flow without supervision and more about this self-supervised learning techniques will be covered in lecture 12. 
to summarize classical optical flow approaches have been state of the art until 2016 2017 and deep learning based methods became on par or even better since 2017 but they require very big models with a lot of parameters enormous amount of synthetic training data a lot of gpu compute and sophisticated curriculum learning schedules where you train with simple data first and then go harder and harder but today the state of the art is clearly deep learning here because also the data sets became better and there were more there's more data also real data now with optical flow ground roof available and what's also interesting is that the top performing deep learning methods borrow many elements from classical methods so for instance they use this idea of warping and iterative estimation they use ideas of cost volume course to find estimation and also similar loss functions okay that's it for today thanks |
Computer_Vision_Andreas_Geiger | Computer_Vision_Lecture_81_ShapefromX_ShapefromShading.txt | hello and welcome to computer vision lecture number eight today's topic is shape from x which means reconstructing 3d geometry from images in the previous lectures we've already learned about two techniques one was binocular stereo reconstructing from two images using stereo matching techniques finding correspondences across two images and we've also seen a multi-view reconstruction approach and today we're gonna learn how to use other cues as well in particular how to use shading cues this lecture is divided into four units in the first unit we're going to discuss shape from shading a technique that allows to reconstruct 3d information even from a single image in the second unit we're going to cover photometric stereo which is a generalization of shape from shading if you will to multiple observations where the camera is kept fixed but we are illuminating the scenes with different light source with different illumination conditions in the third unit we're covering an overview of shape from x techniques including shape from shading photometric stereo binocular stereo but also other techniques that exist and then finally in the last unit we're going to see how we can use or how can how we can fuse multiple incomplete reconstructions that have been obtained using those techniques into a more global larger scale model of the world so let's start with unit number one shape from shading the main question that we're going to cover in this unit is can we actually recover shape from just a single image by looking at the shading cues what does shading mean well shading means the relative intensity that i observe when i look at an image in other words we want to recover or we want to uncover the question what is the relation between intensity and shape is there any relation between the intensity that i observe in this image say that the image is brighter here and darker here or in this case this pixel is brighter than this pixel let's say does this tell us anything about shape can we recover shape from this information by making appropriate assumptions you will see that we need actually very strong assumptions intuitively it should be possible right if we humans look at an image like this well we recognize the object but even if we don't know the object this is a pretty generic object here on the left by just observing the shading cues we have an idea about where the light is probably located and what the shape of that object is however in general this is a very hard problem in particular if we look at this problem trying to solve it from a single image and this is well covered by this famous adelson and pendleton pentlands workshop metaffer [Music] which illustrates that solving this inverse problem uncovering the 3d geometry from just the 2d rgb image is very challenging because the space of shapes paint and light that exactly reproduce a particular particular images is vast so for example in this case here this is actually this workshop metaphor from adelson and pendland's paper that has often been used to illustrate the problem in this example here this image is observed this is a 2d pixel matrix that we observe here on the left and on the right are multiple explanations that could have given rise to exactly this configuration of pixels so this image very likely corresponds to something like this a wall that sticks out from the floor that has different paint in different areas but also because this 
piece of the wall is oriented differently from this and this piece of the wall the color or the intensity of of this piece is darker than the other two pieces this is one possible explanation but there is other explanations it could be that what we look at here as an image is simply a painting we have taken a photograph of a painting that shows exactly that set of pixel configurations it could also be that there was a sculptor that has sculpted a a set of objects like this that if illuminated from the right angle also give rise to exactly this image and it could have also been created by just a special configuration of light sources that illuminate these nine patches exactly with the right intensity but the actual object is flat so what we're looking for is a simple explanation and priors that we can use in order to recover the most likely 3d geometry given this image here's another example um how and that illustrates how humans perceive 3d from a single image many people that look at this image probably tell well i see a a little hill here a little mountain with a little crater on top now what one would expect is if i take that image and turn it upside down we would see that little mountain hanging from the top with also a little crater inside but if i do that actually something interesting happens so what most humans perceive if i look at this image is that now we we don't have a mountain but we have a big crater with a little mountain inside a big crater with a little hill inside it's not what we would have expected a hill hanging from the top with a little crater inside but it's actually a big crater with a little hill inside why is that well it is because also we as humans by our experiences that we have um obtained through our lifetime make certain assumptions about the configuration of the world and in this case we assume for instance that the light source is always coming from above and this makes this particular explanation that we see here that we perceive as humans more likely okay so far for the motivation now let's try to solve this problem and in order to solve this problem we have to remember again the image formation process from lecture number two so let's recap the rendering equation this is a slide that we have already seen before in lecture number two and it is the rendering equation let p denote a 3d surface point and v the viewing direction and let s be the incoming light direction so here we have a camera this is the view direction we have a incoming light ray that gets reflected at this surface point with this normal here so we have this little patch here it gets reflected at this little patch into the viewing direction you can imagine that this is a little patch that's located on a bigger surface now the rendering equation describes how much of this light l in that's the intensity of the light with a certain wavelength lambda arriving at this point p is reflected into the viewing direction v and this is decoration for it if this light if this point is also emitting light we have also an emitting um a term that's responsible for the emitting light but typically we don't consider this here so we just left with this right hand side of the equation which is the integral over the upper hemisphere over the brdf which is the bi-directional reflectance distribution function that defines how light is reflected at an opaque surface at this particular point here so it's defined at that point for a particular incoming and outgoing light direction and a particular wavelength 
times well the strength of the incoming light times the normal transpose times the incoming light direction which is modeling the foreshortening effect if the light comes directly at a very flat angle then the intensity is very low but if the light comes directly from above the intensity is high so more light is is reflected at a particular fixed surface area and so we're integrating this over the entire hemisphere here so this over all possible incoming light directions because there could not just be one point light source but there could be many light sources in fact in the real world we typically model an environment map because the light comes a little bit from everywhere because it's not just coming directly from the sun but it's it's reflected by the atmosphere and by other objects until it reaches a particular surface and then eventually the camera this is the most general form of the rendering equation now what we're going to do now is we're going to try to simplify it successively by making certain assumptions about the world in order to arrive at a much simpler equation that we can then apply to this shape from shading problem in order to infer geometry from the shading cues of a single image typical brdfs have diffuse and specular components we've already learned about this where diffuse components scatter light uniformly in all directions and this leads to the shading variations that we see this leads to the shading variations of the sphere and the phase that we've seen in the motivational part in the beginning the specular component in contrast strongly depends on the outgoing light direction right so there's a strong dependency here it's not uniformly scattering but it depends it's going like the strongest effect is in the perfect mirror direction and so here are three examples typically we observe a combined effect because materials are both diffuse and specular but there are materials that are purely diffuse and there are materials that are purely specular so here are two examples on the left is a clay pot that's mostly diffuse because there's a very rough surface so whenever light hits the surface it gets scattered into all directions and this is the type of objects that we are going to consider this is the simple type of materials that we're going to consider in the shape from shading formulation on the other extreme are completely mirroring surfaces that are like a mirror and here's an example this is the so-called cloud gate if you have been in chicago you have probably seen this it's in a millennial park and it's a art installation from kapoor from 2006 and quite impressive so back to our rendering equation what we're going to do first is we're going to drop the dependency on lambda that's typically done because we we don't observe the entire spectrum of light but we have a sensor that already integrates over a certain spectrum so we don't consider the entire spectrum anymore so we're dropping the dependency on the wavelength lambda here and also p just for notational simplicity of course we are going to consider surface points but we're just going to make the notation simpler and then what we're also going to do is we're going to going to consider a single point light source that's located in direction s and if we do that then the rendering equation simplifies drastically so first of all we don't have an emitting surface so this term vanishes and as you can see the lambda and the p have been omitted here from this equation um and also well actually this this l in 
term should also depend on s still but what has happened here is that the integral has vanished because we're not we don't have to integrate over the entire sphere anymore because we know there's just one single point light source there's no environment map light is just coming from one particular direction so you can think of this as in terms of a distribution as a direct delta and therefore the integral goes away and we can simply write this equation here where s is now the direction of this one point light source that we assume so we're gonna assume now also that the images that we consider have been captured with a single point light source everything else is dark the object is placed in a dark room and there's just one point light source that illuminates the scene so this is also illustrated here at the bottom on the left we have the hemisphere on the right i have removed it to illustrate that we don't integrate over s anymore but we just have a we consider a single point light source now the next thing that we're going to do as i already indicated is to assume a diffuse a purely diffuse material where the albedo is now called rho so the brdf is a constant this row this constant rho is called the albedo but the brdf does not depend on s and v anymore right so instead of this expression here we simply have replaced now the brdf with a constant row which is the albedo because it doesn't depend on the incoming and the outgoing light direction anymore because we assume a diffuse material and in fact we assume this material or we will assume for the algorithm to work this material to be the same everywhere in the image or everywhere on the target object like in the case of the part which just consists of one material so we end up with this equation here which is again simpler than this equation and the last thing that we're going to do is we're going to eliminate the minus sign here by simply changing the definition of the light the light source direction as you can see here the arrow points from the light source towards p and i've just reversed that arrow and if i do that then the minus here goes away and of course you can see that this is true because this term here must always be positive right if n and s coincide these are both are the same unit vector then the effect or the light that's transferred via that surface point is maximal and if s is orthogonal to n so in that case that that term would be one but if s is orthogonal to n so it goes in that direction here then this term is zero and so the outgoing light will be zero no matter what the incoming light and the albedo are now we end up with this expression here which is under all these assumptions that we made the lamborghini assumption the diffuse material and the single point light source gives us a much simpler expression than what we had to deal with before um so what we've seen is that for a fixed material and light source the reflected light l out is a function of the normal so we can consider this expression here as a function of the normal because we assume a fixed light source that is known we have calibrated it we know where it is located and also here we assume a known albedo or the albedo can be actually observed observed absorbed in the image intensity so it doesn't matter so really what this term depends on is just the surface orientation the outgoing light strength is a function of n the surface normal and this function which we now have named r of n is called the reflectance map that's why it's called r of n this 
is called the reflectance map and we will see examples of this note that r of n depends on or is a function on the normal space not on the image space it's a function on the space of all possible unit vectors n now if we would know n at each surface point we can integrate the geometry afterwards so our goal is to determine n and from n we can actually recover the depth or the geometry we can't directly relate the geometry or the depth to the intensity but instead the intensity is related to the normal which is the gradient of the depth now how can we determine and from the observation l out that's the question can we do this for every pixel in the image and this problem is called the shape from shading problem and it has been introduced to the community by vertolton in the 1970s so let's move on to the shape from shading technique before discussing the technique we have to recapitulate the assumptions that we do there the assumptions that we make about the world the first assumption we have seen already we assume a diffuse material with spatially constant albedo so not only must the material be diffuse but it must also be the same material everywhere in the scene like in the case of the clay pot this is a pretty strong assumption but if we make this assumption then the number of material parameters reduces to one we have only a single parameter that describes the material properties of the entire object if it would be spatially varying then we would have one parameter per pixel and if we would be spatially varying and not diffuse but a brdf we would have many parameters depending on the parametrization of the brdf per pixel but now here we assume there's just one single parameter for the entire image or entire scene the second assumption that we make is that the point light source that we consider is at infinity or it's at least sufficiently far away that we can consider it to be equivalent to infinity the reason why we do this is that if we assume this then the light direction s is constant across all the pixels if it would be very close to the object then depending on the pixel that observe the light orientation would change but we assume the light source is far away so that the orientation no matter where i am in the image or on the surface of the object for the same normal is the same and this makes s independent of the geometry or depth so this is a simplifying assumption that we make but it's it's not super strong because at least in a laboratory setting we can often put the light source sufficiently far away from the object and if you consider real scenes we'll see examples later on where we consider the sunlight then the sun is of course very far away compared to the objects that we are looking at so the light rays can be considered parallel the third assumption that we do is similar for the camera we assume the camera also to be at infinity which corresponds to another graphic projection and this in turn keeps the view direction v here the left one here on the left side the blue one here constant across all pixels so similar to the light direction s now the view direction v is also constant across all pixels and therefore v becomes independent of the geometry and depth as well if the camera would be very close to the scene depending on the surface point i would look at the the definition of the normal the oriented relative orientation of the normal with respect to that observation ray would change but in order for our algorithm to work at least the simple algorithm that has been 
introduced in the early 70s we just abstract away from this problem by assuming an autographic camera where the rays that enter the sensor plane are assumed to be parallel there is extensions of this problem to the perspective case and to close point light sources and to non-diffuse materials but these are significantly harder problems so we are focusing on this simple problem now to start with the first thing that we have to address is how should we actually parametrize the normal n a unit normal n lives in a three-dimensional space it lives actually on a sphere a three-dimensional sphere so it has only two degrees of freedom can be represented by two degrees of freedom so it's not ideal to represent the normal using three coordinates and therefore what we're going to do is we're going to use the so-called gradient space representation the gradient space representation parameterizes the normal n by the negative gradients of the depth map so instead of specifying the normal directly we specify the gradients of the depth map so let's call the depth c this is the you can imagine here a camera on top that looks at the scene from the top and c is the depth direction or the inverse depth direction depends on how you define it but c goes into or goes along the principal axis of the camera and so we define p and q as the negative gradients of the depth map and if we do that then the surface normal n at pixel xy is given by pq1 transpose and in order to obtain a unit normal that normalizes to one we divide by the magnitude of this vector which is p squared plus q squared plus one and the square root of that expression let's look at a simple example so here we just have a 2d visualization so we have the x and the c plane and we have a surface here that has a gradient of minus 2. and if we so we have just the x and the c coordinate so if we look at p in this case so p would be minus 2 minus minus 2. 
so 2 we have 2 and then we have the 1 here and we normalize that vector and as you can see this is the correct um this is actually the unnormalized vector here in black this is two and one but this is the the correct normal that corresponds to this gradient so there's the relationship between the between the gradient or the negative gradients of the depth map and the normal if i give you the gradients of the depth map you can tell me the normal by applying this equation here and similarly if you give me a normal i can convert it into the p and q values now assuming this representation and assuming albedo times the incoming light strength to be one which we can do because we can just factor that into the image intensity into the observations this is an unknown that we can globally adjust afterwards so we don't need to take care of this here then the reflectance becomes simply so this r of n is simply n transpose s let's look at the equation from before we have r of n equals this expression times n transpose s so now we're just left with n transpose s if we assume this to be 1 or to be absorbed into r so you can think of this as r divided by this constant here and if we write this in our gradient space representation then of course multiplying this vector with the incoming light vector s where the components are denoted as x as y and sc we have p psx plus qsy plus sc over the normalization of this term we don't need to normalize over the s vector because we have assumed it to be normalized the vector that goes into the direction of the light source is assumed to be one the magnitude of that vector is assumed to be one and so as you can see instead of writing a reflectance map in terms of the normal space the three-dimensional normal space we can write the reflectance map instead in terms of the two-dimensional pq gradient space so this expression here is in terms of the normal and this we have converted it into expression that depends on p and q and now we have exactly the two degrees of freedom that we want here's another visualization of this gradient space in gray so we're given this coordinate system x y and z if we cut the x y or if we cut the c axis at c equals one so we put a plane that is coplanar with the x y plane um and that's located at c equals one this is this gray plane here this is exactly the gradient space p and q and if we take now a 3d point p q and 1 which is the intersection of the normal or the extension of the normal with that plane so we extend this normal until it hits that plane until it intersects that plane at c equal one and that's the point that we call p q one where p and q is the gradient space representation then we can of course again obtain the unit normal by simply normalizing pq and 1 by its magnitude so this is the relationship between the pq space and the normal so for any any normal that is not lying exactly in the x y plane we'll find and that's not um that doesn't have a negative c component we can find an intersection with the pq plane with this gradient space plane and we can determine that point pq1 the normals that have a negative c direction are not important for us for this purpose because there would not be any light reflected from them however the normals that lie inside the x y plane where the reflected value is zero pose a problem or normals that are very close to that because they are close to infinitely far away from the origin of the pq plane and we're going to see how we can deal with this problem similarly to the normal p and q is the 
normal representation we can also represent of course the light source in this pq gradient space so the gradient space can also be used to represent the light source in a similar manner by considering the normalized or unit vector that goes into the direction of the light source which we call s and we're extending that and intersecting it with the pq plane and this gives us what we call ps qs 1 to distinguish it from the normal we have added an index s here to indicate that this is the vector that corresponds to the light source now the question that we want to answer here is for a given light ray s and observed reflectance r what is the normal n this is what we want to solve for using this representation and there is a very nice and elegant visualization of this when we look at this particular graph here on the right what we have here is the expression for the reflectance r which is the normal transposed times s which is equivalent because these two are unit vectors to the cosine of the enclosing angle theta theta is the angle between n and s now if you consider a particular vector s that points into the direction of a particular light source and we fix r to a particular value then what that means is that the space of all possible normals must lie at the same angle theta from that vector s so all possible surface normals such that n transpose s equals r or cosine of theta is equal to r or in other words theta is the arc cosine of r are located on or pointing in the direction of a circle a circle around the vector s such that all the black vectors which are normal vectors here are at an angle theta that lies between s and the respective normal so what we're doing here is we're effectively spanning a cone and now we're extending this cone like we did with the normal before but we're extending now all possible normals so we're extending the entire cone to intersect with the pq gradient space and this is what i've drawn here in purple and so we can see that the space of solutions where the reflectance takes a particular value it's a so-called iso reflectance contour because it's the same value everywhere so the space of possible normals in the pq representation lies on a conic section or in this case here an ellipse so any point on that ellipse or equivalently if i normalize that to a unit normal vector any normal that lies on that circle here is a solution to the shape from shading problem for that particular pixel the set of solutions n is located on a circle around s so it's not unique and projecting this set to the c equals one plane yields this conic section it's also intuitive why we don't have a unique solution here because we just have a single observation a single intensity that we observe but there are two unknowns so it's relatively clear that we have to end up with a set of solutions there's no way we can solve the shape from shading problem by just looking at a single intensity and making no other assumptions even having assumed a lambertian scene etc so there's one exception to this at r equals 1.0 where basically the normal has to face the direction of the light source where all the ellipses shrink to a single point this is indicated here by this orange point here except for this point the normals consistent with the
reflectance that we observe at this pixel are not unique you can see that there's only one solution for this singular case here but in all other cases for all other reflectance values and in practice of course we also have sensor noise etc to deal with we have a set of solutions that lies on a curve an iso brightness contour that's a conic section there's another special case which is the case where the normal multiplied with the vector s equals zero in this case the conic section degenerates to a line so we have this spectrum from a line to a single point via these conic sections here because the normals consistent with r are not unique we require further constraints to solve this problem and there are two solutions to it that we're gonna both talk about today the first solution is shape from shading where we add additional regularization for example smoothness constraints like in the case of optical flow and the other solution is to simply add more observations and this problem is called photometric stereo before we now continue with the shape from shading problem let's look at this problem that we have hinted at before already what happens if the normal that we want to estimate becomes orthogonal to the viewing direction in this case we can't find a point in this pq space the point will be infinitely far away so all the normals that are very close to being orthogonal to the viewing direction are problematic but there is a simple trick to solve this problem we are going to change the representation and this is called the stereographic mapping that has been used for example in this paper here by ikeuchi and horn numerical shape from shading and occluding boundaries and the idea which is actually quite old is rather simple we change the representation such that we bound the area in this gradient space representation to a particular area in this case to an area that is smaller than one how do they do this well instead of taking the normal and intersecting that normal with the pq plane we are gonna take a line that starts at zero zero minus one at this point c equals minus one and goes through the head of the normal the point where the normal points to and we are going to intersect that red line now with this c equal one plane and because we have changed the representation we're going to call this the f g plane instead of the pq plane so you can see that the location of this point has changed from here to here it has come closer to the center and there is a mathematical relationship that holds for this which is a simple expression that's shown here on the left that can be derived by considering similar triangles here and solving for f and g now compared to before r depends on f and g so we have a reflectance map r that depends on f and g which in turn are functions of x and y remember we are working in the normal space here but of course we have an f and a g for every pixel location x y often we ignore these dependencies just for notational simplicity but let's keep in mind that these dependencies are there f and g live in this stereographically mapped gradient space this normal space and x y is the image domain here we can see why we bound the set of possible solutions now well this is the case because the most extreme normals are orthogonal to the viewing direction
so they lie on this equator of this sphere here and if we take this point at minus one and we draw a line through the equator you see that we end up at a circle here on the fg plane um that has a radius of two right so this circle here would have a radius of one if we would autographically project it onto that plane but we do a perspective projection and so this circle has a radius of two but now in contrast to b4 is bounded so it's much better behaved and that's the representation that we're gonna work with so now we finally come to the shape from shading formulation the assumption that we're going to make is that the image irradiance or the intensity we're going to consider it to be equal here abstracting away from all the measurement process just saying that the irradiance is the same as the intensity that we measure the voltage that comes out at every pixel of the sensor plane should be equal to the reflectance map we want this intensity that we measure to be same to be the same as the reflected light therefore shape from shading minimizes a error term let's call that the image error term or the reconstruction error term that depends on f and g these are the quantities these are the the gradient fields that we want to optimize that corresponds to the normal if we have found f and g then we have found the normals and this is the integral if we consider the continuous domain here this is the integral over the spatial domain and what we're going to subtract is the observed intensity and the reflectance that is a result of a particular f and g at x and y and so we're minimizing this in the square distance in the squared sense so this penalizes errors between the image irradiance and the reflectance map we want both of them to be the same however as we have seen before already this problem is highly ill posed there is many solutions f and g that give rise to an r that is equal to i because there is more unknowns than we have observations to constrain this ill pose problem further shape from shading therefore exploits two additional constraints the first constraint is a simple smoothness constraint as we have already seen it in the case of optical flow we call this additional energy term e-smooth that also depends on f and g and simply penalizes the magnitude the squared gradients of both f and g in x and y direction in other words the goal is to penalize rapid changes in the surface gradients f and g we assume that the surface is rather smooth if you think about the clay pot for example you have a rather smooth surface almost everywhere and the second constraint that we introduce to make this um solvable is so called our so-called occluding boundary constraints we assume that the object that we observe in the image has been segmented and we know the segmentation boundary it could have been segmented because there is an instant segmentation algorithm that figures out the segmentation of that object or because the human has drawn the segmentation manually so let's assume we have segmented the object and we have this camera looking at the object so we know the boundary of the object what we know is that at the boundary the surface normal must lie in in a plane that is orthogonal to the viewing direction otherwise it wouldn't be an occluding boundary we assume that the surface is smooth and at some point it reaches a 90 degree orientation so it's directly becoming parallel to the viewing direction such that the object ends at this point so this normal here must lie in the plane that is that is 
orthogonal to the viewing direction and the other constraint that we have is that this normal must also be orthogonal to the direction of the edge which we can compute by by looking at the variation of the edge in the image domain so in other words what we have is we have at these particular pixels at the boundary of the object we know the normal we know that the normal must be the cross product uh between the e vector the edge and the viewing direction it must be orthogonal to both of them so it must be the cross product of them so the goal here is to constrain the normals at occluding boundaries where they are known now with these two constraints the smoothness constraints that are applied everywhere in the image everywhere in the image on the object and the occluding boundaries that are applied at the boundaries of the object we can now solve the problem so in summary we optimize a variational energy it's a variational energy here because we still use this continuous formulation e of f and g that comprises this image and the smoothness term weighted by some weighting factor lambda with respect to the gradient fields f and g subject to the occluding boundary constraints here's some remarks as image our images are spatially discrete similar to the horn chunk optical flow formulation this problem is of course in practice when we implement it on a computer also transferred to a spatially discrete problem the this the way this is transferred is very similar to what we have discussed in lecture six so every everything becomes just a discrete counterpart of this continuous formulation furthermore similar to optical flow iterative optimization is required as r depends nonlinearly on f and g and r is required in this image term here furthermore we use the known normals at the occluding boundaries to fix f and g on the occluding boundaries and we initialize all the other variables to zero so we initialize them to zero and start iterative optimization there until the object or the normal map emerges until it converges and then we take this converged normal map and transfer it into a geometry as we'll discuss next if you want to know more about this technique i can highly recommend a recent video lecture on this topic from the university of columbia okay so we have discussed how we can using this additional constraints and using all the assumptions that we have made convert a intensity image into a normal map but how can we now go from the normal map to the actual geometry remember that the normal map or the gradients how we represent them here example in in the p and q space which we can of course obtain again from the f and g space is simply the surface gradients so we have the gradients of the surface that we're actually interested in we don't have directly the surface we have we have only access to the gradients now how can we recover the 3d surface or the depth map from these gradients well the best way to do that is also assuming a smooth surface that doesn't have any jumps to solve a variational problem where we integrate over the image domain and we are computing um these two terms here which is the so this is the gradient of the surface of the depth map with respect to x and we add p here because p is the negative surface gradient so effectively we are subtracting it and these two must be the same and we're doing the same here on the right hand side for the gradient of the depth map in y direction again plus q because it's the negative surface gradient and we want to make we want to obtain 
we're optimizing for a depth map c such that its gradients are consistent with the gradients that we have inferred with the p and q values that we have inferred using the shape from shading algorithm and this variational problem can be solved very efficiently using the discrete fast fourier transform and more details about this are described in the paper by frankot and chellappa from 1988 now let's look at some results here on the top we can see the input image and at the bottom the respective reconstruction after converting the gradient field into a depth map or a mesh you can see that we can actually recognize something from that mesh we can recognize the shape but you can also see that depending on where the light source is located you introduce certain biases into the reconstruction so it's a very difficult problem more modern versions of shape from shading consider generalizations of the problem trying to relax some of the assumptions that we have made for example in sirfs shape illumination and reflectance from shading which is work from barron and malik from 2015 the input was assumed to be also a single masked rgb image as in the case of shape from shading but instead of just estimating the normals and assuming a known point light source and a constant albedo what they do in this paper is they estimate the depth the normals the reflectance the shading and the lighting trying to disentangle all of these components which of course is a much harder problem compared to assuming a single known point light source because now there are many more ambiguities to be resolved for example between the environment map the light map that is unknown and the appearance of the object so it's a much harder problem than shape from shading but as you can see here to some extent reasonable results can be obtained despite the fact that the reflectance doesn't capture pure color and there is some color left in the shading and also that of course it's a very difficult problem even for objects of very similar materials such that there is noise in the depth maps and the normal maps but still quite impressive results considering that this has been done based on a single image only as input |
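as a small illustration of this fourier-domain integration of a gradient field into a depth map, here is a minimal numpy sketch in the spirit of frankot and chellappa; the function name, the sign convention for p and q (negative surface gradients, as in the lecture), and the periodic-boundary assumption of the fft are illustrative choices and may differ from the exact formulation in the lecture and the paper.

```python
import numpy as np

def integrate_frankot_chellappa(p, q):
    """Least-squares integration of a gradient field in the Fourier domain.

    p, q: per-pixel negative surface gradients (p ~ -dz/dx, q ~ -dz/dy),
    as in the lecture's convention. Returns a depth map z up to a constant
    offset, assuming periodic boundary conditions (a property of the FFT).
    """
    h, w = p.shape
    u, v = np.meshgrid(np.fft.fftfreq(w) * 2 * np.pi,   # angular frequency in x
                       np.fft.fftfreq(h) * 2 * np.pi)   # angular frequency in y
    P = np.fft.fft2(-p)          # target x-gradient of z is -p
    Q = np.fft.fft2(-q)          # target y-gradient of z is -q
    denom = u**2 + v**2
    denom[0, 0] = 1.0            # avoid division by zero at the DC term
    Z = (-1j * (u * P + v * Q)) / denom
    Z[0, 0] = 0.0                # absolute depth offset is unobservable
    return np.real(np.fft.ifft2(Z))
```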
Computer_Vision_Andreas_Geiger | Computer_Vision_Lecture_72_Learning_in_Graphical_Models_Parameter_Estimation.txt | how can we now estimate the parameters of a conditional random field our goal is to maximize the likelihood of the outputs y conditioned on the inputs x with respect to the parameter vector w and as often the case in machine learning we're going to assume that the data is independent and identically distributed in order for the likelihood to factorize mathematically what we want to find is the parameter vector w that maximizes the probability of y given x and w and we're going to call this parameter vector w hat ml for maximum likelihood estimate as we will see for these log linear models that we consider in this unit this optimization problem is convex or semiconvex and so we'll find a global optimum now because we assumed that the data is iid the probability over y and x which here in this case denotes the entire data set it factorizes into the product over the entire data set so here n is the index that runs over the data set capital n is the number of annotated images that we have and so we have this factorization into the conditional distributions where y to the power of n and x to the power of n are the n elements in the data set in other words what we want to find is the parameter vector w such that the model distribution is as similar as possible to the data distribution if all the data points have like high likelihood under that conditional probability distribution of the model that we have learned then the model distribution must be similar to the data distribution of course the model distribution is not going to be exactly the same as the data distribution due to all the assumptions the simplifying assumptions that we have made when we model the problem but we give it some flexibility with the parameters and at least in the class of all possible models that are defined through the parameters for this particular class of models that we defined we want to get the best one by finding the best parameter vector now this is equivalent to minimizing the negative log likelihood um so instead of writing the arc marks we can also write the argument of some loss function l of the parameters w which is the quantity that we want to optimize over where the loss function is simply the negative log probability and because the negative log of the product is the same as the negative sum of the log we have rewritten this here in terms of summations instead of products so we want to minimize the negative conditional log likelihood which is copied from the previous slide here what is the negative log likelihood if we write it out well p of y and an x and this is this data point uh and conditioned on w can be simply replaced by this expression which is the form of our conditional random field remember if i go back this is how we have defined the conditional random field now we are plugging in one particular data point yn and xn from this data set here and so this term here can be written as this term so we have we have simply substituted the conditional probability through its crf form that we have defined in the previous unit now the logarithm of one over c is mine is equal to minus the logarithm of c and the logarithm of the exponential or the logarithm cancels with the exponential so we have minus the logarithm c plus because we have the logarithm of a product the argument of the exponential function we have basically just applied the logarithm rules here so we end up with this 
expression now we take this last term and put it in front and the first term and put it at the second as the second term here and then what we also do is we are taking this partition function and writing it out what is the partition function well the partition function is the sum over the entire state space y so we're summing over all possible combinations of y of exactly the unnormalized conditional random field which which is the exponential of the inner product of the parameter vector and the feature or more precisely the feature vector so this is the expression that we obtain for the negative conditional log likelihood which we simply write as a loss function in terms of the parameters that we want to optimize this is what we have to optimize with respect to w now how can we optimize this well from this form it's clear there is no closed form solution it's clearly not quadratic so what we're going to do is we're going to use gradient descent how does gradient descent work well in its simplest form we pick a step size eta and a tolerance epsilon and then we initialize the parameters somehow and then we repeat until convergence we compute the gradient of the loss function and then we do an update on the parameters and there's various um variants here so there is uh for example line search where instead of just picking a step size we're trying to query the optimal step size by going into the direction of the gradient until we find the minimum there's also conjugate gradients which provides a better direction for the next gradient step and you can see an illustration here on the right but what is common to all these algorithms is that they all require gradients and some of them even require uh the function evaluation like the line search so we have to be able to evaluate the function the loss function that we've seen on the previous slide and we also have to be able to compute the gradient of that loss function in order to execute the great and decent algorithm so these are the two things that we have to compute the loss function the negative conditional log likelihood this is just copied from the previous slide and the gradient of this what is the gradient of this well if we take the gradient does the gradient with respect to the parameters w of course if we take the gradient of this then we have the gradient of this expression here which is simply the feature for xn and y in itself because it's a linear term in w and then we have the logarithm of the sum of the exponential so the logarithm of the partition function and so the derivative of the logarithm is 1 over the logarithm so this expression here goes into the denominator and we are through the chain rule left with this expression here times the inner derivative which is again this expression with respect derived with respect to w which is simply the feature itself all right so through the logarithm we have one over this expression and then we have through the chain rule this expression times the inner derivative which is simply the features here now we can rewrite this slightly we can well we can write this here which is simply a scalar with a different index so instead of summing over all y's calligraphic wise we are summing over all calligraphic y primes here in order to be able to distinguish this and this y nothing has changed so far and then we can pull this term inside this summation that's what has happened here so we have taken this term here because this is constant with respect to the index variables of this sum we can 
just pull it inside so we get this is basically this is this sum here is this summation here so we have the summation over this term and this term which is this and this divided by this term where we just have used another index variable to distinguish these two because this is now part of this but it's not the same this looks a bit strange why do we do that well if we inspect this closely we see that this term here is exactly the probability of x n given y and w because here we have the exponential term this is our unnormalized conditional random field divided by c divided by the partition function so this simplifies as follows we have the first term minus the sum over this state space the output state space on the probability of y given x n and w times the features at x n and y so what this means is that we simply have for the second term here the expectation this is just the definition of the expectation of the features with respect to the y variables drawn from the probability distribution of y given x n and w this is just the definition of the expectation function it's an expectation of psi of the features and it's with respect to the variables y they come from this distribution so this is the probability that we multiply the features with so this is the expression that we obtain so we can already see that this gradient is subtracting the features evaluated for the labeled output yn for that particular input xn with the expectation of these features under the model this is the model expectation and that makes sense right we want to make both of them the same so it makes sense that in the gradient this difference between the features computed from the data set and the features that are computed as expectation of the model appears because we want the model for a model that is that is good we expect that on average it predicts the same features for the same input xn as the features that correspond to the labeled example for that particular xn to make this a bit more concrete when is the loss function minimal well if the expectation of the features under the model is equal to the features for the labeled example in this case this difference here will vanish and the gradient will be zero so in this case we have found a critical point right this is what we just said and the interpretation is that we aim at expectation matching so we aim at matching the expectation of the features under the model to the features of the observation here informally abbreviated with y ops but we try to do this now um discriminately discriminatively only for the x that are available in the training set we can of course not match the features for any unknown example because we haven't labeled it we might not even know the x but we have the data set with the x y pairs and for this particular axis we want the features to be the same the features predicted by the model and the features that are computed by taking the feature function of the respective input and the labels example if both are the same then we are happy as a little note the loss function l of w is convex because as can be shown the hashing of it if we take one more derivative is positive semi-definite and that means that if we for this particular model this particular crf definition if the gradient is zero for example by starting at some random initial guess and performing gradient descent and arriving at a critical point where the gradient is zero then we have reached the global optimum however this is of course only true if we use this very special 
definition of the conditional random field in particular if this probability of y given x and w is log linear in w this is what we have assumed right it is if i go back this is log linear if we take the logarithm of this expression then this expression here is linear becomes linear in w only in this case we have a convex optimization problem but later in the third unit we will also see non-linear models but this is not the case and of course in those cases if we apply gradient descent we don't necessarily arrive at the global optimum but just at a local minimum now back to the task that we have to perform we know that we have to do gradient descent and for gradient descent for example of line search we must evaluate l and of course we must also compute the gradients of the loss function l and this is what the loss function and the gradient of the loss function looks like as we have derived on the previous slides now the problem with this is that the state space y is typically very large exponentially large right so this hints again at the same problem that we have discussed in lecture number five on graphical models this is also why graphical models play a role here now in order to make this feasible if we want to just naively compute these sums here over the entire state space of y this is completely intractable as an example if we consider binary image segmentation of a vga resolution image 640 pixel wide and 480 pixel high then there is 2 to the power of 640 x 480 possible solutions so this sum is a sum over 10 to the power of 92 475 terms this is completely intractable it's way more solutions than there is atoms in the universe it's it will require way more time than we would have to compute so this is not possible and we need to now exploit the beauty of graphical models in order to make this tractable similar to how similar to what we did for the intractable inference problem in graphical models in lecture five we must use the structure in y or we are lost so again these are the two equations from the previous slide the loss function and the gradient of the loss function in order to see and understand this a little bit better what is the competition complexity of computing these two quantities let's write down the computational complexity precisely and then later let's see how this computational complexity gets reduced so what is the computational complexity well the the first term in this in this computational complexity expression here is n because we have to for all n is the number of data samples in the data sets so in order to evaluate the loss or the gradient we have to of course compute the sum over n many terms so we have n appearing linearly here then we have the expression that we have already seen c to the power of m which is the maximum number of labels per output node in the case of binary segmentation as in the previous slide this would be 2 but it could be much more for instance for semantic segmentation if we have 100 different semantic classes this would be 100. 
then we have m c to the power of m where m is the number of output nodes in the case in the previous case we had 640 by 480 pixels so 640 by 480 variables or output nodes and this is this computational complexity that we have already discussed this is the main problem in computing these expressions and then we have d which is the dimensionality of the feature space because we have to compute the sum here always in terms of all the features so for inside for example this gradient here is the gradient the length of this gradient vector is of course the of dimensionality d of the size of the feature space the larger the feature space the more we have to compute and so this d also appears linearly in this expression just because of the size of the feature space here in the case of the computation of the loss function it also appears because with a larger dimensional ltd also this inner product here becomes more and more expensive to compute because it's a product of larger vectors so we have these three terms that contribute to the computational complexity in order to compute these two quantities that are required at every single step of gradient descent and now we're going to look at how we can reduce this computational complexity and the first point that we're going to address is the most important one this is the one that makes the entire problem irretractable or intractable and this is this complexity here introduced by summing over the entire state space where in the case of binary segmentation we have 2 to the power of maybe 640 times 480 possible combinations in these two sums in order to solve this problem we're gonna again exploit the structure of graphical models as we did for inference in fact we're gonna apply inference algorithms as we're gonna see remember in a graphical model the feature vector decomposes so instead of just considering this as a very long vector that we don't know anything about this is actually a vector that is a concatenation of smaller sub vectors and we have in this case k different sub vectors because in this graphical model here we assume there's k potentials each potential each of these unity potentials or pairwise potentials or higher order potentials produces a feature so this global feature this long feature vector is just a concatenation of smaller feature vectors and therefore the partition function simplifies the partition function is relevant this is this function here because we have to compute it here right so this is the intractable expression here this is just the partition function that we have to compute so the prediction function simplifies why does it simplify well instead of just writing this very long inner product what we can now write is a sum of smaller inner products where we have the sum over the k potentials and each of these inner products correspond corresponds to one potential so we have a subset of the parameters which we call wk which is just maybe two or three values in this long parameter vector w and we have local potentials that depends not on the entire output space y anymore but just on a subset of that maybe in the case of pairwise potentials that's just two variables two adjacent pixels instead of all the pixels in the image and that's indicated here as in previous lectures by this subscript k we have y k this potential could still depend on all the input variables that doesn't matter what matters here is the complexity with respect to the output variables so we don't make any restriction or assumption here in terms of 
the input the input is still xn this is for this particular training sample n that appears in this expression here for a particular term in the sum we have a little n for this training sample so we can consider the entire training sample we don't need to restrict to certain nodes what matters here is the complexity in terms of the output space now what we can see here is that this is just the sum over the entire output space of a product of these factors here the exponential of the sum is the same as the product of the exponentials and each of these exponentials is just a factor the k-th factor according to our graphical model definition so there's no magic happening here we've just rewritten this sum over the exponential of this very long feature vector into a sum of products of very small factors and now what we know is that if these factors are of tractable order such as in the case of the stereo problem we've discussed with pairwise and unary factors then we can efficiently calculate this entire expression here using message passing with message passing we can compute marginals and by summing over any of these marginals we get the partition function this sum is a marginalization over all the variables that are involved we don't want to just compute the marginal of a particular variable x1 by marginalizing over x2 to x100 as we have done in the inference lecture but here we are really computing the sum over x1 to x100 and we can do this efficiently using the belief propagation algorithm by computing the marginals and then taking any of these marginals and summing its values up so that we have a sum not over all variables except one variable but over all variables and so this now becomes tractable for tractable graphical models because we apply the trick that we have introduced for inference namely message passing and you can already see here that in order to do gradient updates in order to learn the parameters of the model we need to do inference in particular sum-product belief propagation we need to compute marginals for learning similarly the feature expectation simplifies as well the feature expectation is this expression here that appears in the gradient of the loss function and it is also intractable if treated naively in this case what we have is simply the expectation over y of the decomposed feature vector the sum of these potentials now because the expectation is a linear operator we can pull this sum outside so we have the sum of the expectations of the features here where the features again as in the previous slide only depend on the nodes that are involved in that particular potential indicated by the subscript k here of y now because these potentials only depend on some of the output variables maybe one two or three this expectation also only depends on those variables because the others don't matter so we can equivalently write the expectation over just yk this subset of the variables that this feature or potential depends upon and now what we see is that this expectation is simply the sum over the entire state space yk here of the probability of yk times the feature k and what we see now is that the computation has dramatically simplified in terms of complexity because what we have to do here is we have to sum just over this state space yk which is much smaller than the entire state
space of all the variables because this is only some of the variables one two or three not all the variables all the output variables and what we have to compute before summing this up is to compute the marginals at a particular for particular configuration of nodes and we know that these marginals can be computed for tractable graphical models with not too high order clicks efficiently using belief propagation so we have an efficient way of computing these marginals and then we need to sum up over these like the product of these marginals with the features and we have to do this for all the potentials in the graphical model what does that mean in terms of the computational complexity well compared to the computational complexity that we had before c to the power of m where m was very large now we have k times c to the power of f where now and this is important f which is the order of the largest factor typically two or three is much much smaller than the number of all the output nodes so this now becomes tractable because k is just linear and f is small in other words we have exploited efficient inference algorithms in graphical models in order to calculate the loss function and the gradient of the loss function much much more efficiently now let's look also quickly at the other terms that contribute to the computational complexity the first is well the size of the data set the number of samples in the data set that could be just a hundred samples that are annotated but in some cases it could also be a one million samples that are annotated and of course because this is the sum over all these samples this also adds to the computational complexity it adds linearly to the computational complexity of computing the gradient and the loss function which again have to has to be computed at every step of the great and decent algorithm so learning on large data sets becomes problematic because processing all and training samples for one gradient update is slow and often it doesn't even fit into memory for example if you use gpus as in deep learning so how can we estimate the parameters in this setting well one way would be to simplify the model to make the gradient updates faster but if we simplify the model then the model is less accurate and so the results get worse that's not what we want to sacrifice another strategy would be to train the model on a sub-sample data set but this clearly ignores some of the information in the data set and is not ideal either we could also parallelize across cpus and gpus but that doesn't really save computation it it just makes things uh run in parallel so what we can do is what we did also in deep learning is to use stochastic gradient descent in stochastic gradient descent in each gradient step we create a random subset d prime of the data set where this random subset is typically of the size 64 or 128 so relatively small compared to the size of the entire data set and then we just follow the approximate gradient where the approximation comes by comes in by summing now not over all the data samples but summing over only a subset of the data samples a small subset of the data samples now in this case line search is no longer possible so we have to introduce this extra step size hyperparameter eta that allows us to go forward along the gradient for a fixed step size in stochastic gradient descent just as in deep learning it can be shown as we have seen the deep learning lecture that sgd converges to the minimum of a local minimum at least of the loss function if 
eta is chosen right and in the in our particular case here of these log linear models it even converges to a global minimum sgd needs more iterations but requires less memory and each of these iterations is faster and there's also a nice article that speaks about the trade-offs of large-scale learning and that i recommend as a read if you're interested in this topic and the final term now in this complexity expression is this d that appears at the end which is the dimensionality of the feature space if the dimensionality of the feature space is extremely large if that would be one million then of course that contributes also significantly to the runtime of computing these two terms now in order to get a feeling of what these feature spaces typically look like let's look at some concrete applications and where possible i put also the corresponding paper at the bottom the first application is semantic segmentation where the input is an image and the output is a label map that could be just binary segmentation foreground versus background or it could be multi-class segmentation in this case if we specify this problem as a conditional random field we associate with every spatial variable let's say every pixel or sometimes also super pixels are used which are a little bit bigger pixels to make the problem smaller to make the number of output variables smaller so with each of these sites um y i we associate a a local image feature that describes how this pixel looks or in particular how it relates to a certain semantic class label for example if there is a green region that is unlikely to be a horse but if there's a brown region around that pixel is more likely to be a horse so these are local features that can be extracted using hand engineered features such as back of words or deep features and are typically in the order of maybe a few thousand so we have to compute here this inner product of say a thousand-dimensional parameter vector with a thousand-dimensional feature that is returned at this particular pixel and you can think of this as a local classifier like in logistic regression or we also take this inner product and then we have another term here which is the pairwise term that is a test in this particular case in this particular example here for the same label so it could for example take value one if the labels are the same if you want to maximize these remember that we use this positive definition of the potentials and zero if they are not the same and you can see what happens if we do just local classification versus introducing the smoothness term a lot of this noise that is present by just the local features the local classifiers gets removed but at the same time also the foreground region gets oversmoothed a little so this combination of local and smoothness terms leads to a smooth version of the local cues of what we would classify with the local features alone this was an example for semantic segmentation another example is handwritten recognition handwriting recognition where we have a sequence of characters and we want to classify these characters this could be little images and we want to classify these characters and maybe we have a certain we have certain prior knowledge about which characters occur after each other and that could be encoded by such pairwise potentials here where we have a an indicator that indicates which letter is more likely to follow another letter and of course if we have such knowledge from maybe a big text database extracted then we can more 
robustly solve this problem we can overcome some of the errors that some of the errors that might or some of the uncertainties that might be present in these local representations if the character cannot be uniquely identified by just its local surrounding in terms of its appearance so the combination of the local and the pairwise term here delivers a corrected version of the local cues but now in this case not for images but for sequences of characters and here's another example in this case pose estimation where again the input is an image but now what we want to infer is the human body pose of a person and so at the output space here is again at each pixel we we associate with each pixel random variable y i and that random variable y i indicates the body part that is associated with that pixel so again we use either hog or deep features in order to extract local features at each pixel but if we do this alone and we use this for classification we get a lot of ambiguities so this is a result that comes out of just local classification but if we now have a post prior if we know how certain body parts are aligned with respect to each other for example it's more likely that the head of a person the blue one here is on top of the body and similar constraints also hold for relationships between the arm and the body or the upper and lower arm for example if we have such a fit function that operates on neighboring sites then we can use this and include this into the conditional random field in order to obtain a sanitized version of the local cues and this is shown here on the right where a lot of this uncertainty has been removed and plausible classification into body parts has been established so in summary typical feature functions for crfs in computer vision include unary terms which are local representation and often high dimensional so you can think of this as local classifiers and pairwise terms which encode some prior knowledge but are typically rather low dimensional and sometimes there's also higher order terms so these purpose terms sometimes also depend on or this higher order terms sometimes also depend on x not only the unity terms can depend on x but also the pairwise terms can depend on x it depends on how the model is defined but it does does this doesn't change the complexity and now during learning we want to adjust the parameters we want to learn these local linear classifiers and the unaries and we want to learn the pairwise weights the importance of for example smoothing in terms of obtaining a smooth semantic segmentation as an output and the arg mux as we've seen in all three cases is a cleaned up version of the local prediction by utilizing our prior knowledge that's encoded in these pairwise or higher order terms however sometimes training this entire model is not easy because of the feature dimensionality um so for example um often it's not so easy to specify features um or terms where the parameters are linear in the features and therefore we often need very high dimensional parameter feature vectors and if we have high dimensional feature vectors learning can be very slow an alternative now in order to overcome this problem of high dimensionality of this d is to use piecewise training where we first pre-train classifiers for example for these unit potentials here we have a unary potential we pre-train a classifier and then set the potential as a one-dimensional function which is just logarithm of that local classification so we use a non-linear classifier in order to 
overcome this problem of specifying very high dimensional features such that we can use a linear inner product between the parameters and the features so this classifier now can be non-linear and it outputs something that's more low dimensional the advantage of this is that lower dimensional feature vectors during training lead of course to faster training and also faster inference and that these classifiers can be stronger now than just the linear classifier as we have to find it in the conditional random field it can be a non-linear svm or a convolutional neural network etc however disadvantage of this strategy is that if these local classifiers are actually bad the crf training can also not fix this so the features in other words that we extract from this must still be meaningful in order to solve our problem good so let's summarize what we've seen in this unit given a training set d with iid drawn data and a feature function from input and output space to some feature space the task is to find the parameter vector w maximum likelihood such that the model distribution is similar or becomes similar to the data distribution under that training set and in order to do so we minimize the negative conditional log likelihood which which in the case of log linear models is a convex optimization problem so we know that gradient descent leads to a global optimum we've also seen that training needs repeated runs of probabilistic inference therefore this inference must be fast because we have to do it at every iteration of the great and decent algorithm and we have also seen how we can make it fast for example how we can solve the problem of this exponentially large state space that we have to sum over by exploiting the structure of the problem using graphical models and for example using the belief propagation for computing marginal distributions we have seen that we can overcome the problem of very large data sets by exploiting mini batches and stochastic gradient descent and that very large features can be overcome by piecewise training |
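as a small illustration of the expectation-matching gradient and the stochastic gradient descent update discussed in this unit, here is a minimal numpy sketch for a log-linear model whose output space is small enough to enumerate exactly; in a real crf the enumeration would be replaced by sum-product belief propagation to obtain the required marginals, and the function names, the feature layout and the mini-batch handling are illustrative choices of mine rather than a specific implementation from the lecture.

```python
import numpy as np

def nll_and_grad(w, feats, y_obs):
    """Negative conditional log-likelihood and its gradient for one sample.

    feats: array of shape (|Y|, d); row y holds the feature vector psi(x, y)
    for labeling y (here the state space is small enough to enumerate).
    Returns -log p(y_obs | x, w) and E_{p(y|x,w)}[psi(x,y)] - psi(x, y_obs).
    """
    scores = feats @ w                               # <w, psi(x, y)> for every y
    m = scores.max()
    log_Z = m + np.log(np.exp(scores - m).sum())     # log partition function
    probs = np.exp(scores - log_Z)                   # p(y | x, w)
    nll = log_Z - scores[y_obs]
    grad = probs @ feats - feats[y_obs]              # expectation matching
    return nll, grad

def sgd_step(w, batch, eta=0.1):
    """One stochastic gradient step on a mini-batch of (feats, y_obs) pairs."""
    g = np.zeros_like(w)
    for feats, y_obs in batch:
        g += nll_and_grad(w, feats, y_obs)[1]
    return w - eta * g / len(batch)
```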
Computer_Vision_Andreas_Geiger | Computer_Vision_Lecture_23_Image_Formation_Photometric_Image_Formation.txt | this unit is about the photometric image formation process so far we have discussed how individual light rays travel through space but it's also important how actually the light changes once it reaches the sensor plane so now we discuss how an image is formed in terms of pixel intensities and colors if we take a picture of a scene we observe a light field from a very specific vantage point which is the viewpoint of the camera and the light field is a consequence of light traveling through the scene light that has been emitted by some light source a light bulb or the sun traveling through the scene hitting surfaces reflecting or refracting from these surfaces and at some point reaching the sensor plane in order to model the process of how light reflects from a surface the rendering equation is used as a mathematical tool to define this process precisely and this is one of the most basic equations that you will learn about when you take a computer graphics course to understand the very basics of this equation let p be a 3d surface point here let's assume this is a surface and let v be the viewing direction and are the incoming light directions we have an incoming light here in yellow and the viewing direction in blue and furthermore we have the normal here in with n at that surface point p the rendering equation then describes how much of the light that's coming in um from uh at this point here that's arriving at this point with a particular wavelength lambda is reflected into the viewing direction and in order to measure because the light that is coming in here is reflecting into into multiple directions um um we have to um consider the light that's arriving at this point from all possible or directions on the hemisphere where the light could potentially enter the scene and arrive at this particular surface point so what you have here is this upper hemi unit hemisphere sigma and what we are doing is we are gonna this is the l out is the light that's that's going out in this direction at a surface point p into direction v with a wavelength lambda all of these functions are here are conditioned on this wavelength lambda because um they are this equation is true for all wavelengths basically so for all of these wavelengths we are integrating over this hemisphere sigma and we're integrating the so-called bi-directional reflectance distribution function which determines how much light is reflected at point p for a given incoming light direction r and a giving out going light direction v and a given wavelength lambda so we're integrating how much light is arriving here over all the incoming light directions multiplying this bi-directional reflectance distribution function with the corresponding incoming the intensity of the incoming light at point p from direction r with wavelength lambda and in addition there is this factor here on the right which is the product between the inner product between the normal and this incoming ray direction which models the foreshortening effects you can imagine that if you if you shine light at a surface at a very shallow angle at a particular point there is less light reflected because the light arrives at a very shallow angle if it arrives exactly perpendicular there is no light reflected and this is what this this term here this uh shading term here models um or this attenuation for term here actually models and then there's an additional term here 
which is only present if this point also emits light so it could be a light source that's reflecting light arriving from some other light source but also emitting light into a particular direction and so we have to add this emission term here as well for emitting light sources now what do these bi-directional reflectance distribution functions look like well typically they have a diffuse and a specular component which is familiar to you if you look at surfaces some surfaces are rather diffuse and some of them are more shiny mirror-like and this is illustrated here if you have a purely diffuse surface that means that an incoming light ray is reflected into all directions equally strongly but if you have a specular surface think about a metallic or a plastic object then there is more light reflected into the direction of the ideal mirror reflection so the strongest effect here is in the direction of the ideal mirror reflection and then it attenuates it falls off if you go further away and here on the right we have the perfect mirror situation where incoming light is exactly mirrored away into one particular direction so this is the whole spectrum of possible reflectance that we have and typically real surfaces are a combination of all of these effects the diffuse constant component scatters light uniformly in all directions and this leads to shading that is smooth variations of intensity with respect to the surface normal and the specular component depends strongly on the outgoing light direction so here's a simple example with a sphere we have a sphere of a diffuse material that's reflecting light diffusely but you can see that depending on the surface orientation we have a darker color or a brighter color here we have a specular component that is reflecting in the perfect mirror direction more than in other directions and then here we have a combination of these two which is what most real materials look like
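to make this diffuse plus specular combination concrete, here is a tiny python sketch that shades a single surface point with a lambertian diffuse term and a blinn-phong style specular lobe; this particular analytic model and the parameter names kd, ks and shininess are common illustrative choices, not the specific brdf model used in the lecture.

```python
import numpy as np

def shade(n, l, v, kd=0.7, ks=0.3, shininess=32.0):
    """Shade one surface point: Lambertian diffuse plus a Blinn-Phong specular lobe.

    n: surface normal, l: direction towards the light, v: direction towards the viewer.
    All three are normalized internally; kd and ks weight the two components.
    """
    n, l, v = (np.asarray(x, dtype=float) for x in (n, l, v))
    n, l, v = (x / np.linalg.norm(x) for x in (n, l, v))
    diffuse = max(np.dot(n, l), 0.0)                  # foreshortening term <n, l>
    h = (l + v) / np.linalg.norm(l + v)               # half vector between light and view
    specular = max(np.dot(n, h), 0.0) ** shininess    # strongest near the mirror direction
    return kd * diffuse + ks * specular

# example: a normal tilted away from a light coming roughly from above
print(shade(n=[0.0, 0.3, 1.0], l=[0.0, 1.0, 1.0], v=[0.0, 0.0, 1.0]))
```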
scene here is reflected from this wall and reflected from reflected from the ground and maybe reflected another time from this wall and illuminating parts of the scene that are not directly illuminated by the emitting light source so this was a very very short um x excursion on light and reflectance now let's turn back to cameras we have discussed the pinhole camera model which is actually a good approximation also for real cameras with lenses but why do cameras actually not have just a single pinhole but do have a lens system well here we see an example from paul debevec who used a dslr camera and put a perfect pinhole created a little pinhole and replaced the lens with that pinhole and took a picture that you can see here as you can see from the picture it's very very difficult to get a picture sharp and of high quality and the reason for this is that there's a trade-off here if you make the pinhole very small then you need extremely long shutter times which lead to a motion blur or camera shake blur but if you make the pinhole slightly larger then you're averaging multiple light rays leading to blur if you make the pinhole extremely small you also get a blur effect but that time through diffraction at the surface of this or close to this pinhole and here's an example of this um here's an example for a small pinhole that leads to a sharp projection if i increase the pinhole size then i get a blurry projection because now there is multiple rays that are passing through this but there's multiple rays for one single point that passed through this pinhole at different locations and that captured a scene that sampled the scene at different locations and so i get an average here and the same holds true for this point here which is just an average of these points and so we're getting this blurring effect so this happens when the pinhole is very large and if i start decreasing the pinhole i have to increase the shutter time because the light arriving at the sensor becomes less and less and if i'm decreasing it even further then it doesn't become sharper anymore because of the refraction effects at the pinhole and i get again a blurry image that's the reason why we want to have optics right so on the left is a ideal pinhole model on the right is a camera with a lens cameras use one or multiple lenses to accumulate light on the sensor plane to make the picture brighter the to to create more photons on the sensor plane but there's a trade-off now because only points that are in focus of the lens arrive at the same 2d pixel locations and if the lens is out of focus then you also get blur on the image plane the good thing about this is that for many applications it's sufficient to model the lens lens cameras with a pinhole model with the simple model that we've discussed in the previous unit and that's great but to address focus and vignetting and aberration and all these kinds of photometric effects we need to actually model the lens more precisely so the most basic lens model that we can consider is the so-called fin lens model with a spherical lens meaning that we have this lens here with a shape of a sphere on either side that is often used as an approximation for more complex lens systems the properties of such a lens is that this lens has a focal point on either side and that excess parallel rays pass through the focal point on the other side and that rays via the center keep their direction so here we have an array from a 3d point passing through the center keeping its direction here we have a 
ray that's parallel to the principal axis and that is then reflected in that lens in the direction of the focal point and passes through the focal point so all the rays that pass through the focal points and through the lens are then turned parallel to the principal axis and the same holds true for the upper direction as well you have a ray that passes through the focal point and let's turn parallel to this axis here now or what can we do with these properties well we can again use the principle of equal triangles to derive a relationship that's one of the most important lens relationships that we know and this is the following here we have the the red triangle relationship and the green triangles relationship let's consider the green triangle first so here we have um now let's consider the red triangles first so here we have xs over xc so we have this distance and xc is this distance which is the same as this distance here so this divided by this must obviously be this distance divided by this distance so x s minus f divided by f and similarly x s divided by x c by this distance here must be the same as c s divided by c c so this is the relationship that's coming from the green triangle so if you have these two relationships then we can substitute one into the other and we arrive finally at this equation one over c s plus one over c c equals one over f so for a particular lens with a particular uh focal point lens focal point f we we can determine uh for a particular 3d point um for for a particular distance to a 3d point how far the uh distance of the image plane must be in order for this point to project sharply onto that image plane or in other words for a particular 3d point and a particular distance to the image plane we can determine how we have to change the focal length or the lens system such that the point appears sharply so here's another illustration of this the image here on the left is in focus here in this case here if this relationship holds where f is the focal length of the lens um and for this distance of the 3d point towards infinity we obtain in this expression here we obtain cs equal f so length with focal length f is approximately a pinhole at distance f if the image plane is out of focus this is the example here a 3d point projects not to a single point for example this 3d point here doesn't project to a single point but it projects to a so-called circle of confusion we get a little disk on the image plane and if this disk is larger than the pixel size we see blur in the image and the size of this circle of confusion is of course determined by the distance of the image plane to the lens and the focal length and the distance of the 3d point that's why we have to always make sure that our lens is in focus another way to control the size of the circle of confusion is of course to change the lens approach and that's why almost all lenses have an aperture that you can set so here you see a 50 millimeter lens of a dslr with the aperture which is a which is just basically a plane with a hole fully open this is an aperture of 1.4 and here we have the aperture of 8.0 which is pretty much closed so if you close the aperture then of course you limit the circle of confusion here by limiting the size or the the circle where the ray can actually pass through but you also let less photons pass and you need longer shutter times so the aperture limits the amount of light that can reach the image plane and small apertures lead to sharper but more noisy images and if you have ever 
worked with manual photography you have seen lenses like this where there is a depth of field indicator so the range where the image appears sharply is called the depth of field and the lens tells you where the image appears sharply so if you have an aperture of 8 then you know that if you focus at 10 meters in this case here your range where the image will appear sharply is from 5 meters to infinity roughly and this is an example for a telephoto lens here on the right now the aperture in typical dslr cameras is indicated by a so-called f-number on the lens and the f-number is simply defined as the lens focal length divided by the aperture diameter so if you have an aperture diameter d that's very small then you get an f-number that is very large so if the f-number is very large let's say 16 then you have a very large depth of field but if you make the aperture large let's say 2.8 then the depth of field is very shallow you get this depth of field effect and this is indicated here so here we have the same image pictured with three different apertures 1.4 4 and 22 and you can see how you get different depth of field effects here everything is sharp here the background is not sharp and here only one of these blocks is sharp so decreasing the aperture diameter or increasing the f-number increases the depth of field another effect that plays a role is chromatic aberration the index of refraction for glass which determines how lenses bend rays slightly varies as a function of the wavelength and therefore different colors are refracted into different directions and therefore simple lenses suffer from chromatic aberration which is the tendency for light of different colors to focus at slightly different distances and so we obtain blur and color shift effects so for instance for this 3d point here we obtain a blur effect because as you can see here the red colors are focused behind the image plane and the blue colors are focused in front of the image plane and so we have a little circle of confusion for both colors here for this point here we have an effect of blur and color shift because neither of these red and blue wavelengths is focused on the image plane and both of them arrive at the image plane with a circle of confusion but at different locations on the image plane and this color shift is something that you can actually observe when you look closely at images that you have taken in particular in the border regions of wide-angle lens images to reduce chromatic aberration and other kinds of effects most photographic lenses are compound lenses made of different glass elements with different coatings that try to eliminate this effect so here on the left you can see the effect of refraction that light that comes in is refracted into different directions for different colors and you can see on the right a high quality lens compared to a low quality lens where you can clearly see the color shift and the blurring that occurs due to the simple lens which was used another effect that's visible in many images that you're looking at is the so-called vignetting vignetting is the tendency for the brightness to fall off towards the image edge and i'm going to show an image on the next slide vignetting is the composition of two effects there's natural and mechanical vignetting natural vignetting is due to the foreshortening of object surfaces and the lens aperture and mechanical vignetting is due to the shaded part of the beam sometimes never reaching the image and with shaded i mean here the shaded part in this image so if you look at this beam here and these rays that pass through this part of the first lens surface this is a multi-lens system they actually never reach the second lens or the image plane and this is the reason why in particular the regions at the boundary of the image appear darker than the center of the image and that's called vignetting now the good thing about vignetting is that vignetting can be calibrated it can be undone at least mechanical vignetting and that's also often done so here you can see an example on the right you can see the original image which shows vignetting and on the left you can see the image where the vignetting was undone |
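To make the diffuse plus specular reflectance discussion in the transcript above a bit more concrete, here is a minimal Python sketch of a local Phong-style shading model for a single surface point and a single point light: a Lambertian term that only depends on the foreshortening factor between normal and light direction, and a specular lobe that peaks in the ideal mirror direction and falls off with an exponent. This is only a hedged illustration of the kind of BRDF behaviour described in the lecture, not the full rendering equation (no integration over the hemisphere, no emission term, no global illumination), and all material and light values (kd, ks, the shininess exponent) are made up for the example.

    import numpy as np

    def shade_phong(n, l, v, light_rgb, kd, ks, shininess):
        # n: surface normal, l: direction towards the light, v: viewing direction
        # all unit vectors pointing away from the surface point p
        n, l, v = (x / np.linalg.norm(x) for x in (n, l, v))
        cos_theta = max(np.dot(n, l), 0.0)             # foreshortening term from the rendering equation
        diffuse = kd * cos_theta                       # scattered equally into all viewing directions
        r = 2.0 * np.dot(n, l) * n - l                 # ideal mirror reflection of the incoming light
        specular = ks * max(np.dot(r, v), 0.0) ** shininess  # strongest along r, falls off away from it
        return light_rgb * (diffuse + specular)

    # hypothetical values, purely for illustration
    out = shade_phong(n=np.array([0.0, 0.0, 1.0]),
                      l=np.array([0.3, 0.2, 1.0]),
                      v=np.array([0.1, 0.0, 1.0]),
                      light_rgb=np.ones(3),
                      kd=np.array([0.6, 0.1, 0.1]),    # diffuse albedo per color channel
                      ks=0.4, shininess=32)
    print(out)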
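The thin lens relationship 1/cs + 1/cc = 1/f and the role of the aperture and f-number discussed above can also be checked numerically. The following sketch uses invented but plausible numbers (a 50 mm lens, focus at 3 m, an object at 5 m) and estimates the diameter of the resulting circle of confusion on the sensor with the usual similar-triangles argument; the variable names and values are assumptions for illustration, not taken from the lecture slides.

    def sensor_distance(z, f):
        # thin lens equation: 1/z_sensor + 1/z_object = 1/f
        return 1.0 / (1.0 / f - 1.0 / z)

    f_mm = 50.0                          # hypothetical focal length
    N = 2.8                              # f-number, aperture diameter A = f / N
    cs = sensor_distance(3000.0, f_mm)   # sensor placed so that a point at 3 m is in focus
    cp = sensor_distance(5000.0, f_mm)   # where a point at 5 m would focus sharply
    A = f_mm / N

    # similar triangles: blur disk diameter at the sensor plane for the 5 m point
    blur_mm = A * abs(cs - cp) / cp
    print(f"sensor distance {cs:.2f} mm, circle of confusion {blur_mm * 1000:.0f} microns")

With these numbers the blur disk comes out at roughly 0.12 mm, clearly larger than a typical pixel, so the 5 m point appears blurred; closing the aperture to f/16 shrinks it to roughly 0.02 mm, which matches the statement above that a smaller aperture (larger f-number) increases the depth of field at the cost of letting through less light.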
Computer_Vision_Andreas_Geiger | Computer_Vision_Lecture_53_Probabilistic_Graphical_Models_Factor_Graphs.txt | while inference can be done directly on markov random fields we're now gonna discuss a graphical model that's a little bit more precise than simple markov random fields or markov networks and this is called a factor graph why do we need factor graphs let's consider mrfs again consider the following factorization into potential functions where we have a joint distribution written as one over the normalization constant times three different potentials one over a and b one defined over the variables b and c and one over c and a what is the corresponding markov network or markov random field in terms of the graph representation well clearly there must be a connection between a and b and between b and c and between c and a so we have a graph like this is a fully connected graph that connects random variable a and c and b each with respect to each other however the maximal clique in this markov network is the click a b and c so there is another factorization that is represented by this network that represents actually as we now know the same set of conditional independence properties so the same class of distributions in terms of conditional independence properties and this is the following so we have p of a b and c equals the normalization and then we have just one potential which is a potential over all three variables now the second factorization is more general as it admits a larger class of distributions by having this triplet potential here we can model all possible functions of a b and c which is not possible by looking just at this power at this product of pairwise relationships or pairwise functions or functions of two variables so this is richer than this but in terms of markov networks they don't differ because they respect both of them respect the same conditional independence assumptions despite this one being more powerful therefore the factorization into potentials is not uniquely specified by the graph in the case of a markov random field and to disambiguate this we introduce an extra node in our factor graphs this is the new representation and this extra type of node we're going to utilize a square to distinguish these nodes from the random variable nodes which are circles and each of these squares is called a factor and now we can distinguish these two situations from before so here we have the markov network that is the same for both factorizations but now with this explicit notation of factors we can write in the middle the factor representation of this factorization with just one click and here we can have the factorization into this representation with three clicks but just two variables in each click so the two factor graphs here correspond to the same markov network in terms of conditional independence assumptions but they distinguish the different types of distributions that can be expressed and so they allow to distinguish the different factorizations by making them explicit now very similar to before we define a factor graph just replacing the potentials phi that we had before now with the factors f and we're using now this graphical representation over this graphical representation here which is more precise given a set of random variables x1 to xd and sets of subsets of these random variables which are these cliques or potentials so each of those is a subset of x and a function f of all random variables that is a product of factors where we have capital k 
factors so there is a product of capital k factors one for each of these subsets so for each of these subsets we define one function that's now called a factor and we use an f to make explicit that it's a factor the factor graph or short fg is a bipartite graph with a square node for each factor f k and a circle node for each variable x i and it's bipartite in the sense that there is no connection between factors and there is no connection between variables all connections are between factors and variables so between the two different types of nodes so we can transform this into a bipartite graph here's an example of such a graphical transformation into a bipartite graph where on one side we have the variables and on the other side we have the factors and there's only connections going from the top to the bottom and not between variables and not between factors so that's why it's called a bipartite graph so the factor graph is a bipartite graph with a square node for each factor fk and a circle node for each variable so we are trying to graphically distinguish factors which are functions from the variables and similar to before by normalizing f we obtain a distribution so we simply have that the probability distribution over x the set of random variables is equal to f of x over this partition function c where the partition function is just summing up f over the entire state space of all the random variables jointly or in the case of continuous variables we're having an integral here now it's easy to read off distributions from factor graphs and also vice versa to write factor graphs for distributions so what is the distribution of this factor graph where we can directly read off which factors are connected to which variables and we can directly write down the distribution up to the normalization constant so we just write p of x where x is the calligraphic set of all variables as one over c times fa of x1 and x2 because fa is connected to x1 and x2 times fb of x1 and x2 because fb is also connected to x1 and x2 and so on and fd is only connected to x3 and similarly we can look at a particular factorization a particular distribution and ask what is the corresponding factor graph for this if i've specified a distribution like this and the answer is well each of these individual distributions here are functions in their arguments so we can simply write them as factors instead of p we can write an f and so we have fa that's this term here that is connected to x1 as a function of x1 then we have fc that's a function of x1 x2 and x3 and then we have fb that's a function of just x2 |
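As a small, hedged illustration of the factor graph definition above, the snippet below builds the unnormalized product f(x) = f1(a,b) f2(b,c) f3(c,a) over three binary variables and normalizes it by brute-force summation over the state space to obtain p(x); the particular potential values are invented, the point is only the structure: factors are plain functions over subsets of the variables, and the partition function is the sum of f over all joint states.

    import itertools

    variables = ['a', 'b', 'c']
    factors = [                                   # one (scope, function) pair per square node
        (('a', 'b'), lambda a, b: 2.0 if a == b else 1.0),
        (('b', 'c'), lambda b, c: 3.0 if b == c else 1.0),
        (('c', 'a'), lambda c, a: 1.0 if c == a else 2.0),
    ]

    def f(assignment):
        # unnormalized product of all factors for one joint assignment
        value = 1.0
        for scope, fk in factors:
            value *= fk(*[assignment[v] for v in scope])
        return value

    states = [dict(zip(variables, s)) for s in itertools.product([0, 1], repeat=3)]
    Z = sum(f(s) for s in states)                 # partition function c
    p = {tuple(s.values()): f(s) / Z for s in states}
    print(Z, p[(0, 0, 0)])                        # here Z = 34.0 and p(0,0,0) is about 0.176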
Computer_Vision_Andreas_Geiger | Computer_Vision_Lecture_45_Stereo_Reconstruction_EndtoEnd_Learning.txt | in this last unit we're going to talk about end-to-end learning algorithms that not only learn the matching cost but directly take entire images as input and using a deep neural network directly output disparity maps and this was something that was really only possible with increasing compute and in particular annotated data sets that could be used for training these models because these models now require much more labeled training data compared to the models that just try to learn good models for matching local patches the first model in this context that basically spurred an entire field is called DispNet it was the first end-to-end trained deep neural network for stereo the input very similar to the FlowNet model is a pair of images left and right images and we have some convolutions downsampling and we have some skip connections so it's a u-net architecture we use skip connections to try to retain fine details and then we have these up-convolutions in order to increase the resolution again so we have a contracting and an expanding part and in the end we directly predict or regress a disparity map using this and this is trained end-to-end there is no global optimization introduced in the end it's all trained from very large amounts of data one thing that's specific about this architecture and that has also been used by follow-up works is that after a few convolution and pooling layers when having reached already a relatively small resolution there is already some idea about stereo matching incorporated here by having a correlation layer (a rough sketch of this idea follows this transcript) that does something similar to the correlation that's happening also in block matching but now on the feature level so we're having if you will a siamese network here and then we're doing this correlation layer here that tries to combine these two branches and from there on we have just a single branch and not two branches anymore so this is incorporating some of the basic ideas of traditional stereo matching it turns out that actually even without this correlation layer just concatenating these features here we can already get good results but this improves results slightly in order to train this model the authors used a multi-scale loss so the loss that measures the discrepancy between the predicted disparity and the ground truth disparity was not only applied at the last layer but it was also applied for downscaled versions of the ground truth at intermediate layers another thing that they did in order to make this actually work and that turned out to be very important in order to not overfit is a curriculum learning strategy where the model was first trained on easy examples like very simple scenes that are easy to match and smaller resolutions and then later the difficulty of the data sets was increased until the target data set difficulty had been reached and that's a strategy that is still frequently used in particular in the context of training deep neural networks for stereo estimation and optical flow estimation another important thing is data sets creating data sets of stereo imagery with ground truth is really hard in KITTI we used a lidar scanner but then first of all the data set had to be manually curated which created a lot of work in order to remove outliers and second it's very specific to the self-driving scenario
now if you want to go broader you want to have also stereo images from more general scenes and for those it's often very hard to get the ground truth displacements because there's not really a good sensor that can measure that and what was done in this paper and what's particularly surprising to everybody at the time and that this works is that very large synthetic data sets have been created for which the annotations are cheap to obtain because if we render data 3d assets then we get the depth for free we can just render depth layer using a rendering engine but what was surprising is that very data sets that the data sets that were created were very dissimilar from the target data sets that were used for testing and for fine tuning and testing the model so on the target data sets like kitty the model was done the pre-trained model on these data sets was fine-tuned for a few more epochs but only a few more epochs because these annotated real data sets are small and the synthetic data sets these artificial data sets are large but despite this discrepancy between the appearance of these data sets in the for example kitty data set the performance on the target data sets was reasonably good and the reason for this is that these models are able to generalize well these matching problems are able to generalize reasonably well so here in this case this is a data set called flying things where just random assets have been downloaded from the internet you can see there is flying cars and flying chairs and they were just put on top of random backgrounds from which then the images and also the ground roof was generated a very chaotic data set but it provides a lot of difficulties to the stereo matching problem as you can see there's a lot of depth discontinuities and so the model becomes a generic depth estimator that generalizes comparably well if it's fine-tuned on the target data set later on and another data set that was used was the monka data set which is a cgi movie with free assets and here is a result of this net applied on the kitty data set you can see the resulting disparity maps are are smooth and they are of high quality a follow-up work is called gcnet that led to even better performance than dyspnet by utilizing a very simple idea what has been done differently in this model while also using a shared 2d convolutional encoder was that now this the cost volume that was created by a correlation and in this case both for the left and for the right image as reference image was then filtered with a 3d filter that could then adapt to this correlation volume and pick up features in that volume so the difference to this model here while also having a correlation layer is that here this is still 2d convolutions and here from this layer on we have 3d convolutions and also 3d deconvolutions in order to increase the spatial resolution so the key idea is to calculate the disparity cost volume similar to traditional methods and then apply 3d convolutions on that volume and this leads to slightly better performance but has larger implications for the memory it's very memory intensive so very small mini batch sizes could be used here because these 3d volumes are very memory intensive and these 3d convolutions are more memory intensive than 2d convolutions one thing that was also done here was that the disparity estimation problem wasn't tasked to be as in the distinct case here as a regression problem but instead as a problem where now because we have these 3d convolutions on that volume we can 
predict a matching cost also for each disparity hypothesis right so we can have now for each at each location at each pixel uv or xy in the image and at each disparity level now we can predict for each disparity level at each pixel a matching cost so we don't have to rely on regression and then this matching costs they have to be combined of course somehow and what they did in this paper is simply to take the expectation so at each pixel this is per pixel we compute the expectation of a disparity which is basically computed by taking the sum over all disparities hypotheses at that pixel computing the probability for that disparity which is the negative cost turned into a probability by computing a soft max along the entire vector of disparities remember the soft max outputs a probability like a discrete probability distribution so these scores here these negative cost or scores are turned into a discrete probability distribution and this is then multiplied with the disparity so we are trying to minimize we're computing the the estimated disparity is the expected disparity and we're trying to minimize the discrepancy between the expected disparate disparity from that cost volume and the ground with disparity now as a final example that i want to show you while there's many more examples that many more works that have been done in that area is so-called stereo mixture density networks that's a rather recent work that we did in our group at cvpr 2021 and the idea of stereo mixture density networks is now to scale these models all of these models are quite memory hungry and they can't be applied to very large image resolution and one particular problem of these models is also that if we use this model stand because of the intrinsic smoothness properties of deep neural networks we get these smearing artifacts you can see that the borders are not very sharp and if we would project such a disparity map into 3d space we would see that there's a bleeding at the edges of objects and this is illustrated here so this is a standard um a state-of-the-art standard model called hsm that produces at the disper disparity discontinuities such bleeding artifacts flying pixels because of the smoothness uh properties of neural network based regression basically and so these are the results of the s d nets the stereo mixture density networks that predict sharper boundaries and also at much higher spatial resolution and how is this done well the first innovation here is um that instead of predicting just a single or regressing a single disparity value we're predicting uh multimodal distribution over disparity values and the advantage of this is that with this we can model much sharper disparity discontinuities on the left we see as an example a classical deep network for stereo regression that suffers from the smoothness bias and hence continuously interpolate object boundaries so we see the green in green the true disparity but the best this model can do is to predict these black dots here which you can see are smoothly transitioning from the background to the foreground on the y-axis we have the disparity and here on the right we have this mixture density network that predicts a mixture distribution so at each x location here this is the image or column for a particular image rho we predict a bimodal distribution in this case over the disparity values and so that distribution can can basically model both of these modes and if we then take as the final disparity value the value that maximizes this distribution 
then we can model a sharp transition here because at some even for these two these distributions are continuously transitioning you can see that there this transition of this peak here becoming weaker and this peak here becoming stronger is continuous we obtain a sharp discontinuity and this models is a much better model for given a cnn as a predictor is a much better model for disparity maps and the other innovation we did here is that we in on top of a standard stereo backbone and we experimented with various backbones what we did is we didn't directly predict at a fixed resolution the output disparity or the parameters of this mixture model but we have an additional head so called smd hat that queries using an mlp at an arbitrary continuous location in the image domain the feature values using interpolation using bilinear interpolation and then passes these bilinearly interpolated features predicted by the backbone through this mlp in order to predict the parameters of this mixture distribution in this case a bimodal laplacian mixture and this really enables training and inference at arbitrary spatial resolution so despite the input images might uh be a fixed resolution let's say hd images we can predict at much higher resolution outputs so here's an example of this on the bottom we see an example of a classical state-of-the-art stereo deep stereo matching network this is an output of course for test image that wasn't part of the training set we can see that the resolution the output resolution the maximal output resolution that can be achieved here is 0.5 megapixels so we get a pixelated result and we also see this smearing this bleeding at the boundaries where we get disparity values that are actually incorrect while using the combination of this bimodal mixture model and this smd head that allows for querying and also training a model at arbitrary spatial precision we don't get these bleeding artifacts and also at the same time we can query the output disparity map at much higher resolution like 128 megapixels as in this example here that's all for today |
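The correlation layer mentioned in the transcript above, which DispNet-style networks insert between their two siamese feature branches, essentially computes for every pixel and every candidate disparity a similarity between the left feature vector and the horizontally shifted right feature vector. The sketch below is only a rough NumPy illustration of that idea on dense feature maps; it ignores all the details of the actual network implementations (padding, normalization, learned features).

    import numpy as np

    def correlation_cost_volume(feat_left, feat_right, max_disp):
        # feat_left, feat_right: arrays of shape (C, H, W), e.g. CNN feature maps
        # returns a (max_disp, H, W) volume where entry [d, y, x] is the dot product
        # between the left feature at (y, x) and the right feature at (y, x - d)
        C, H, W = feat_left.shape
        volume = np.zeros((max_disp, H, W), dtype=feat_left.dtype)
        volume[0] = (feat_left * feat_right).sum(axis=0)
        for d in range(1, max_disp):
            # shift the right features by d pixels; columns x < d have no valid match
            volume[d, :, d:] = (feat_left[:, :, d:] * feat_right[:, :, :-d]).sum(axis=0)
        return volume

    fl = np.random.randn(8, 10, 20)   # toy "features", 8 channels on a 10 x 20 grid
    fr = np.random.randn(8, 10, 20)
    print(correlation_cost_volume(fl, fr, max_disp=16).shape)   # (16, 10, 20)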
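The GC-Net style readout described above, where per-pixel matching costs over all disparity hypotheses are turned into a single expected disparity via a softmax, can be written in a few lines. This is a minimal NumPy sketch operating on a hypothetical cost volume of shape (D, H, W); the real networks do the same thing on learned 3D-filtered volumes and train it end to end against ground truth disparities, which works because the expectation is differentiable.

    import numpy as np

    def soft_argmax_disparity(cost_volume):
        # cost_volume: (D, H, W), lower cost = better match at that disparity
        scores = -cost_volume                                  # negate so that high = good
        scores = scores - scores.max(axis=0, keepdims=True)    # numerical stability
        probs = np.exp(scores)
        probs = probs / probs.sum(axis=0, keepdims=True)       # softmax over the disparity axis
        d = np.arange(cost_volume.shape[0]).reshape(-1, 1, 1)  # disparity values 0 .. D-1
        return (probs * d).sum(axis=0)                         # per-pixel expected disparity

    costs = np.random.rand(64, 4, 5)           # invented 64-level cost volume for a 4 x 5 crop
    print(soft_argmax_disparity(costs).shape)  # (4, 5), one sub-pixel disparity per pixel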
Computer_Vision_Andreas_Geiger | Computer_Vision_Lecture_32_StructurefromMotion_Twoframe_StructurefromMotion.txt | in this unit we're going to discuss two-frame structure from motion which is the most basic setup where we're given just two images taken from different viewpoints and we want to find the relative motion between the two cameras as well as the location of the 3d points that correspond to the projections that we observe in the images and i have to say that being able to reconstruct 3d from just two images two flat images of a scene is something that has always fascinated me and triggered me to enter that field and i still find it fascinating that it's possible today now in order to do this we first need to understand so-called epipolar geometry again the goal is to recover from just two images and detected and matched features in these two images the relative camera pose between these two images or cameras and the 3d structure from just these image correspondences that we have detected for instance using the sift descriptor from the last unit and the required relationships are described by the two-view epipolar geometry that's illustrated here on the right okay so what do we have here we have two cameras this is the camera center of the first camera and the image plane of the first camera and this is the camera center of the second camera and the image plane of the second camera and the rotation matrix r and the translation vector t denote the relative pose between these two cameras and we assume that we have two perspective cameras so this is a realistic assumption almost all standard cameras follow that model once we have removed distortion we can model them like this now a 3d point is projected into the left image onto pixel x bar one we're using this augmented vector here which is just to recap the homogeneous vector where we normalize by the third component such that the x and y pixel coordinates can be directly read off from the first two elements in that vector and similarly that 3d point gets projected into the second image onto pixel x bar 2.
now the 3d point and the two camera centers form or span the so-called epipolar plane because we have three 3d points all of them must lie on a single plane that's clear but because the points get projected linearly into the images we also know that the pixels x-bar 1 and x-bar 2 lie exactly on the same plane all of these five points 3d point the camera centers and the pixels lie on the so-called epipolar plane and now one nice property of this a polar plane is that it we want to now if we have already recovered that epipolar geometry and we know the epipolar plane then this restricts the search space when we want to find the correspondence of one particular feature point in the first image in the second image and why is that well if we have such a point x bar 1 for which we want to find on the correspondence in the second image we do not have to search through the entire 2d image domain in order to find that point but we know from the apipola geometry that that point here must lie on that epipolar plane and in particular it must lie on the so-called epipolar line that is the intersection of that epipolar plane with that image plane here second camera and that's indicated here by homogeneous l2 this is the so-called apipola line and that apricollar line exists also on the other image if we want to find the feature that corresponds to x2 bar in the second image we want to find that in the first image we need to search only along the apripolar line l one in the first image and not everywhere in the first image just on this along this 1d search space and this is of course a significant simplification of the problem of the search problem once we have found the apipola geometry once we know the camera the calibration and also the relative pose between the two cameras and this is used you brigittely see across all um 3d reconstruction techniques that produce state-of-the-art bars or dense 3d reconstructions we're going to cover some of them at the end of this lecture so called multiview stereo techniques very briefly just showing some results but we're going to discuss in more depth in terms of two view dense stereo reconstruction techniques in the next lecture and then also some applications of graphical models for instance in later lectures so this is the the point of the apollo geometry once we know the camera calibration matrices and the rotation and the translation then finding correspondences suddenly becomes much easier but in order to recover the epipolar geometry which we're going to discuss next we need to find correspondences without knowing the april geometry of course so we need to search feature correspondences across the entire image domain we need to solve a harder problem in order to find this few parameters of the ipopular geometry there is actually the camera calibration is known there's five three for rotation and two for translation because there's a scale ambiguity left so we only need to find these five parameters and then the search problem becomes suddenly much easier and then it becomes also clear that all these epipolar lines for all possible 3d points pass through a single point in the 2d image planes which is called the epipole we have one ap pole in the left image plane and one av pole in the right image plane and that is exactly where the so-called baseline connection between two camera centers passes through the image plane and that doesn't need to be on the image plane that can also be at infinity if the cameras are oriented in the same direction but here in this 
example they are in the image plane holes so to see that actually all apipolar lines pass through these two epipoles i have drawn another point here another 3d point and also the a bipolar plane and the apopolar lines and every polls for this uh plane here and you can see if i'm moving that 3d point then the projections of course of this 3d point change and the apripolar lines change but the epipolar lines always pass through these every poles that stay fixed good so now the question is how can we mathematically express that relationship and how can we recover that relationship so let's start by assuming that we know the camera matrix of the left camera or the first camera and the second camera ai which is a 3x3 matrix we know that for instance through calibration techniques we've discussed in the first unit now if we know that calibration matrix from a 2d pixel coordinate here in augmented vector notation we can find the local ray direction by undoing what the last step of the image formation process does which is the camera calibration matrix application we're just doing the inverse of that so we have the local ray direction in 3d space of that pixel x bar and we call that x tilde here because it's a homogeneous coordinate but we're not using any any further subscripts here just for simplicity so this is the local ray direction by just multiplying the pixel with the inverse of the calibration matrix just undoing this last step of the projection now what we have is that this local ray direction must be proportional to the 3d point because the 3d point is on that ray just at a different distance we're using that symbol here to denote proportionality now what we also know is that this 3d point is related at the 3d point with respect to the second camera system represented with respect to the second camera system is related to the 3d the same 3d point represented in the first camera coordinate system which we denote by x1 here through a rigid body transform because we're going from this camera coordinate system to this camera coordinate system with this rigid body transform so this point which is x1 in this coordinate system becomes x2 in this coordinate system by multiplying it with r and translating it with t this is this here expression now this one here is proportional to um this expression here because there's only a scaling relationship that we have introduced here so we have changed x1 to x1 tilde which is the local ray direction in the first camera and because we have just scaled this we of course need to apply the same scale here on the right hand side which i just denoted by s i multiply to t that's basically the proportion of how much we have scaled x tilde with respect to x bar one now what we can do with this is we can take the cross product of both sides with t and obtain the following expression if we take the cross product of this side here with t we have this expression here on the left side and then here on the right if we take the cross product we have this expression because the last expression is the cross product of t with itself which is zero so this term falls away no matter what s actually is what we can do for ron is we can take the dot product of both sides again on the left side here with x tilde to transpose which yields the following expression here we have x tilde t x tilde cross product t so this is this q symmetric matrix we've introduced in the previous lecture r x tilde 1. 
um [Music] and this must be proportional so we have swapped these two expressions here left and right to x2 t and the cross product is q symmetric matrix of t x tilde 2 this is this term here now on the right hand side here what we have here is the so-called triple product it's an inner product and a cross product combined and what we know from the triple product because it describes the volume of the trapezoid that's represented by the three vectors is that this volume is zero if two of these vectors are the same this is just a geometric property that you can also look up on wikipedia so if two of these vectors are the same then this must be zero so the entire expression here on the right hand side is zero and because that's proportional to the left-hand side no matter what that proportionality is um the left-hand side must also be zero to fulfill this in general so we have retrieved this expression here we have retrieved an expression for the local ray directions relating the local ray directions in the first image in the local ray direction in the second image through this matrix here and this is the so called a bipolar constraint x tilde 2 transpose e this is called the essential matrix x tilde 1 equals 0. and the essential matrix is simply this q symmetric matrix from the translation vector multiplied with the rotation matrix of the relative pose between the two cameras now what we see from this expression here is that this is actually also leading to the a bipolar lines directly because e maps a point x one in image one to the corresponding ap polar line in image two and so if we take um we take this e and we multiply it with x one tilde we obtain a line expression because um x two trans x two tilde transpose uh l tilde two is equal to zero because of the epipolar constraint that we have derived this is just if i plug this in here i get the apripolar constraint and here i have the line equation and that means if i multiply that line to that e on the right hand side x x tilde 1 then i have just the co vector we called it before the co-vector here that together with another point x2 equal to 0 represents the line constraint all points x2 are on the line if multiplied with l are equal to zero and similarly by transposition we obtain the opposite we pertain the apipolar line in image one as transposing this expression here e transpose x two this line at the apribola line in the first image plane right and here's the relationship again just visually now for any point x one in the first image the corresponding apipolar line in the second image passes through the so-called ap pole that is what we know from just the fact that all of them are on the same apopolar plane and therefore um the epipole satisfy satisfies the following expression the ap poll multiplied with the april polar line because the april must be on the epipolar line is equal to zero and here is the expression for the apollo line so e2 transpose e build up x tilde one must be equal to zero and this must hold true for all possible x ones right so for all possible x ones here the ap pole e2 must be on the api polar line l2 for this one here and for this one here and this gives us a constraint for where that ap pool must be located it follows that if this has to hold true for all x one then actually this left hand side here must be equal to the zero vector otherwise this couldn't hold true for all x1 so e2 transpose times the essential matrix is equal to the zero vector and from this expression here we can directly read off that 
e builder 2 transpose so the ap pool in the second image is the left null space or in other words the left singular vector corresponding to singular value 0 of the essential matrix and similarly we have that the april pool in the first image is the right null space or in other words the right singular vector with singular value zero of essential matrix it's just given here by expression good so we know how we can extract the epipol for instance from the essential matrix but how can we recover actually the essential matrix from the image correspondences well let's assume we have n image correspondences and sift features in the left image that have been matched to shift features in the right image or first image match to features in the red in the second image now with these correspondences we can form n homogeneous equations in the nine elements of this essential matrix because we have constrained here this constraint here which gives us exactly one equation for each correspondence if we write this in if we write this out we get this equation here for each of the correspondences and see that here we have the elements of the left and the right pixel coordinates x1 and x2 x x1 and y1 and x2 and y2 and we have the elements e11 is the first element of that essential matrix we have one equation now for each image correspondence in terms and you can see it's linear in terms of the elements of the essential matrix there's a homogeneous equation now s e the essential matrix is a homogeneous matrix we have to constrain again as in the case of estimating homographies the last lecture the scale of that matrix and we can constrain that scale of the matrix to 1 by again using singular value decomposition and obtain the solution to this system to the constraint system exactly the same way as we did it for the recovery of the homogeneous of the homography before and you will see that exactly the same trick we're going to apply multiple times also throughout this lecture here so it's a very important trick now one thing to note here is that in this equation some of the terms are products of two image measurements and some of them are just products of one image measurement and this introduces some asymmetry and if you have measurement noise this amplifies measurement noise and to counteract this the normalized eight-point algorithm from hartley a seminal paper called in defense of the eight point algorithm i recommend reading normalizes the image features in the 2d image plane it particularly widens them to have zero mean and unit variance and then runs the recovery algorithm for the essential matrix and then after the recovery this normalization can be undone in a in analytic form and it turns out if you do that then you get much more stable results so it's a good idea to always do that you first normalize the image coordinates such that they have zero mean and unit variance and then you do the svd and afterwards you're undoing this normalization again and you get much more stable results because in practice your civ correspondences are always going to be contaminated by noise and so this effect will be much less strong if you do that now from this essential matrix now we have a recovered essential matrix elements but how can we recover the rotation and translation from that well to recover the direction t hat of the translation vector t and we can only recover the direction because the scale is not uniquely determined you can also show that from the two images alone because you can you can scale the 
reconstruction and you can scale the translation vector and you still get valid reprojections into the image into the image planes so there's an ambiguity you can't measure this um but we can we can recover the direction which you call t hat by multiplying t hat transpose to the essential matrix because we have t hat transpose times an excuse metric matrix of t times r and this is zero and the entire expression becomes zero and therefore uh e first of all therefore the essential matrix is singular so there's one singular value that's zero and we obtain t hat as the left singular value associated with the singular value zero but in practice of course because we have measurement noise single singular values will not be exactly zero due to this measurement noise and therefore we choose simply the smallest one if you look at the singular value decomposition one singular value should be very close to zero and the other two are actually quite equal to each other and if you see that then you just pick the one that's closest to zero now the rotation matrix can also be calculated from this expression but it's a bit more complicated and technical and there's multiple solutions so we're not going to cover this in detail in this lecture but if you want to know more about it you can read up on this in this liske book chapter 11.3 final remark essential matrix has 5 degree of freedom because there is 3 for the rotation and 2 for the translation as we don't know this absolute scale but just the direction so in theory you don't even need eight correspondences you can get away with less and there's algorithms for computing this with less correspondences but they are not as as simple as just a singular value decomposition composition as we have done here now there's another important matrix that we're going to quickly cover and that's also covered in this song here that i recommend to watch um is the so-called fundamental matrix and that is relevant if the camera calibration is unknown the camera calibration is unknown we cannot use the local ray directions but we have to use the pixel coordinates directly and therefore the essential matrix or before becomes this right we know that the local ray directions are defined as such x 1 is equal to a 1 inverse x bar 1 this expression here and the same for x2 here just transposed and so we replace this here and the fundamental matrix then directly relates the pixels not the ray directions because we don't know k and the matrix that is relating these two is now called f and not e anymore but it contains e and these two inverse calibration matrices in one single three by three matrix and this is called the fundamental matrix like the essential matrix the fundamental matrix is in absence of noise rank 2 and the api pulse can be recovered in the same way as before however intrinsic parameters cannot be directly determined in other words we obtain only a perspective reconstruction and not a metric one and that's that's not very desirable but it's still useful in order for instance to do certain verification tasks for verifying if feature correspondences are actually correct because they lie on the epipolar lines you can do that but you don't get a metric construction but if you have some additional information like vanishing points you know that k is constant across time for all the frames you have there's some information about the intrinsics and then you can actually upgrade the perspective reconstruction to a proper metric one and there's a couple of papers on this 
topic as well so here's an illustration of how the a bipolar geometry looks like independent of we know just the essential matrix or we know the fundamental matrix um here we have the apipola lines that correspond to each other and you know you can note here that for instance for this point here that is corresponding to this point here we um have all the correspondences aligned here all the hypothetical correspondences all the hypotheses are aligned on the epipolar line so we just need to search along the epipolar line to find the corresponding point for this point here in this case this would be the correct correspondence okay so now we have recovered the relative or the relationship between the two cameras in terms of their rigid body transform up to scale but we haven't recovered the 3d points location yet and this is done through triangulation now given these two cameras with intrinsics and extrinsics how can we recover the 3d geometry and well that's pretty easy now given these two and given correspondences we can just shoot the rays into 3d space and we can see where they intersect however because these measurements these sift correspondences are typically noisy these rays might not exactly intersect precisely and we need to take that into account so we want to recover a point 3d point that in some sense is closest to these two rays okay so let's um uh let x tilde w denote a 3d point in homogeneous coordinates we could also write x bar because but because we can't recover x bar we're writing x x tilde here directly and that is related to the point on the image plane and now we're introducing to d to distinguish these two we're introducing the s as an index so this is the to the pixel coordinate in homogeneous coordinates and they are related to this through this 3d projection as we've introduced in the last lecture this is a projection matrix really to 2d projection so this is the equation for projecting this 3d world point x w onto the image of the if camera and as both sides are homogeneous they have the same direction but may differ in magnitude and we are applying the same trick as before for estimating homographies if we want to recover now from this relationship um xw so to account for this we consider the cross product again and we know that the cross product is zero if these two are the same up to some unknown scale factor so this is the constraint that we're concerned with and now if you write this out and use e i k to denote the k throw of the i f camera's projection matrix you obtain a linear equation like this that is linear in the unknown that we want to recover which is the 3d point location in homogeneous coordinates so you can try this out yourself very easy to see that this corresponds so this this can be derived from this so what we have here now is one constraint per uh per image i so is the image index and because we want to find and because we have two constraints here it's actually two constraints so this is a zero vector and we have one constraint for the x-coordinate and one for the y-coordinate but because we have a we have a 3-d vector to recover we need two correspondences to identify the 3d point and that's also clear by looking at this image if i just have one camera i can't recover the 3d point i need at least one more but it's of course better to have more cameras to recover that point more precisely so you can add as many equations you can stack as many equations to this linear system as you like this just giving you two constraints because four six 
eight four so stacking n with n equal or bigger to two observations of a point we obtain a big linear system a x tilde w equals zero and as x tilde is homogeneous this leads again to a constrained least squares problem where again the solution to this problem is the right singular vector corresponding to the smallest singular value of a and this is exactly the direct linear transform we're already familiar with from lecture two for homographies but now for estimating the 3d point location some remarks while dlt often works well in practice it is not invariant to perspective transformations that means that noise is noise in the that happens in the image is not taken appropriately into account and therefore the gold standard algorithm is to do something else is to minimize the reprojection error to find the 3d point that when projected into the two or more images is giving the smallest euclidean distance in the image plane and we can formulate this as such an optimization problem that can be solved using numerical gradient based techniques such as liebenbach markwood so here we have this notation denotes xs is the projection of the 3d point xw here with the bar for denoted augmented vector and xo is the observation we're trying to minimize the projection minus the observation discrepancy and we're doing this over all the observations of all the cameras for that particular point and we're optimizing over that 3d point you can see that the point is not having an i index because it exists only once but we have n different observations so the projection and the observation depend on i because we have different projection matrices and different observations in the different images and now this allows us to take measurement noise appropriately appropriately into account for instance if we have gaussian measurement noise in the images this is the right model to use if we have different noise models we can also incorporate them here but this is what is typically done the minimum can also be obtained in closed form as the solution of a six degree polynomial um but this becomes more technical than more details on this uh can be found in the textbook from hartley and scissorman section 12.5 now triangulation works differently well depending on the relative camera pose there's some trade-off so here you can see the shaded region denoting the uncertainty basically the area where the 3d point could be located and that increases as the rays become more parallel to each other here so if you have this situation then you have a very small gray region but then in this situation here you have a very elongated gray region and so the uncertainty is very high there is a trade-off here because if you have if you're looking at the scene from two different perspectives then feature matching is very hard because the images look very different locally but the triangulation will be better and for nearby views it's very easy to match features because the features look similar but triangulation is harder or more uncertain we present |
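The eight-point recovery of the essential matrix described in the transcript above amounts to stacking one linear constraint per correspondence and taking an SVD. Below is a compact NumPy sketch, assuming x1 and x2 already contain calibrated ray coordinates (pixels multiplied by the inverse calibration matrix); it uses a Hartley-style conditioning (zero mean, average distance sqrt(2)), which differs slightly in wording from the zero-mean unit-variance normalization mentioned in the lecture, and it projects the result onto the nearest valid essential matrix by forcing the singular values to (1, 1, 0). Treat it as an illustrative sketch rather than a reference implementation.

    import numpy as np

    def eight_point_essential(x1, x2):
        # x1, x2: (n, 2) arrays of matched calibrated coordinates, n >= 8
        # returns E with x2_h^T E x1_h approximately 0 for all correspondences
        def condition(x):
            mean = x.mean(axis=0)
            scale = np.sqrt(2) / np.mean(np.linalg.norm(x - mean, axis=1))
            T = np.array([[scale, 0.0, -scale * mean[0]],
                          [0.0, scale, -scale * mean[1]],
                          [0.0, 0.0, 1.0]])
            xh = np.column_stack([x, np.ones(len(x))]) @ T.T
            return xh, T

        x1h, T1 = condition(x1)
        x2h, T2 = condition(x2)

        # one homogeneous equation per correspondence in the 9 elements of E
        A = np.stack([np.kron(p2, p1) for p1, p2 in zip(x1h, x2h)])
        _, _, Vt = np.linalg.svd(A)
        E = Vt[-1].reshape(3, 3)          # right singular vector of the smallest singular value
        E = T2.T @ E @ T1                 # undo the conditioning

        U, _, Vt = np.linalg.svd(E)       # enforce two equal singular values and one zero one
        return U @ np.diag([1.0, 1.0, 0.0]) @ Vt

From such an E one can then read off the translation direction as the left singular vector belonging to the (near-)zero singular value and the epipolar line of a point x1 in the second image as E applied to the homogeneous x1, exactly as discussed above.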
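The linear (DLT) triangulation and the reprojection error used for the gold-standard refinement, both described above, can be sketched as follows. P1 and P2 are assumed to be the 3x4 projection matrices of the two cameras and x1, x2 the matched pixel coordinates; only the linear part is shown, a Levenberg-Marquardt style minimization of the reprojection error would normally be run on top of this initialization. The synthetic check at the end uses made-up cameras purely to show that the sketch runs.

    import numpy as np

    def triangulate_dlt(P1, P2, x1, x2):
        # each view contributes two rows of the cross-product constraint x_tilde x (P X) = 0
        A = np.stack([x1[0] * P1[2] - P1[0],
                      x1[1] * P1[2] - P1[1],
                      x2[0] * P2[2] - P2[0],
                      x2[1] * P2[2] - P2[1]])
        _, _, Vt = np.linalg.svd(A)
        Xh = Vt[-1]                       # right singular vector of the smallest singular value
        return Xh[:3] / Xh[3]             # de-homogenize to a 3D point

    def reprojection_error(P, X, x):
        # euclidean image distance between the projection of X and the observation x
        xh = P @ np.append(X, 1.0)
        return np.linalg.norm(xh[:2] / xh[2] - x)

    # synthetic check with invented cameras: identity pose and a 0.5 unit baseline along x
    K = np.diag([500.0, 500.0, 1.0])
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])
    project = lambda P, X: (P @ np.append(X, 1.0))[:2] / (P @ np.append(X, 1.0))[2]
    X_true = np.array([0.2, -0.1, 4.0])
    X_est = triangulate_dlt(P1, P2, project(P1, X_true), project(P2, X_true))
    print(np.allclose(X_est, X_true), reprojection_error(P1, X_est, project(P1, X_true)))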
Computer_Vision_Andreas_Geiger | Computer_Vision_Lecture_42_Stereo_Reconstruction_Block_Matching.txt | now we know how we obtain depth values from disparity values but how given such a rectified stereo setup can we actually determine the disparity values in the first place well of course given the left and the right image we somehow have to find how much each pixel in the reference image in the image that we want to compute the disparity map for let's say this is the left image sometimes also for the right image a disparity map is desired but typically people start by computing a disparity map for the left image so for each pixel and left image we want to determine how how much the pixel has moved and the right image with respect to the original location in the left image and the simplest way to do that is to take a little patch around that pixel in the left image and try to find that patch along the a bipolar scan line the apripolar line in the right image trying to find a patch that looks most similar to that first patch that's called block matching but how can we determine if two image points actually correspond to each other well informally it's kind of clear that they should look similar somewhat to each other but what does similar actually mean if we just look at a single pixel rgb color value that doesn't reveal the local structure there's too many ambiguities if there's a red pixel in the left image along the same scan line in the right image there's too many red pixels or pixels that have color close to red and that could be confused with that original pixel in particular also because all images are noisy so the red color even for the correct location in the right image would not be it would not be exactly the same as observed in the left image and so you would get a lot of noise if you would just consider a single pixel and therefore what we do in practice is we compare a little patch a little image region that disambiguates this ambiguities better but even then the task is quite difficult so if i show you this example here and you look at the tail of this animal you can see that in these two images the tail looks quite similar but then in in these two images the tail looks quite different so even looking at a local region doesn't completely solve the problem as we'll also see but it makes it easier now how does this actually work i want to show this to you in the context of one very famous scene that has been captured by daniel scharstein and rick siliski in their famous middlebury benchmark that has brought a lot of progress in the field of stereo matching has been published in 2003 and this is one of the image images from this benchmark and the um what makes this benchmark particularly interesting is that for every pixel there has been um a procedure or there has been a procedure that was able to determine for every pixel the precise disparity ground truth so for every pixel we know what the true disparity is and we can then take our algorithms and evaluate the performance of our algorithms on these image pairs so here's the right image this is the left image you can see that some objects in the front they move more than objects in the back you can also see that some objects occlude each other and you can also see that these objects are pretty lambertian and also pretty texture rich which makes correspondence estimation quite easy in this particular lab seen here now how does block matching work well what we do is basically we take a scan line as we have rectified the images 
we know that the correspondence for this particular pixel we are interested in in the left image at x location x1 this is indicated here by this vertical line must also lie on that same scan line in the second image and so what we do is we query all possible patches here only a few of them are shown in the second image and we try to measure the similarity between these patches and ideally we recover this as the correct patch for this query patch here what you can also see here is that there is a gray shaded region on the right and i have shaded that region because we don't need to search in it remember that the disparity is always positive but the search has to go in the negative direction so if we go from the left to the right image then the displacement goes towards the left disparity zero would be at this white line which is located exactly at the white line here in the first image so this is the x1 coordinate now overlaid over the second image it of course appears at a different position but it's the same image column and if the objects were at infinity then this part here should actually be located here but because they are not they are closer we go by some disparity we move for some pixels in this direction until we have found that patch and that's done at x2 which is the correct location this is what we want to recover so we only have to search in the non-shaded region towards the left of x1 and here at the bottom you can see the matching score for a particular similarity metric that compares these two patches and you can see that in this particular example the matching score attains its highest value for the correct location so if in this case we would just pick the maximum we would get the right match some of them have a really low score some of them are higher closer to the maximum but the maximum is clearly distinct from the others this is not true for all patches in particular if there are ambiguities if the patch is textureless etc then this curve doesn't look as good as it looks in this illustrative example so how can we actually compute the similarity metric itself well we consider k by k patches k by k windows in each of these images that are flattened into vectors and we call these vectors wl and wr for a patch in the left and in the right image and flattened means that we just take all the values of these patches and concatenate them into a big vector that now lives in r to the power of k squared and what we want to ask ourselves now is whether that patch in the left image is the same as or very similar to that patch in the right image and there's a whole variety of evaluation metrics used in the community but two very prominent ones for basic block matching are the so-called zero-mean normalized cross-correlation zncc and the sum of squared differences ssd zncc is basically a correlation metric that takes a patch in the left image and normalizes it by subtracting the mean and dividing by its standard deviation and then computes the dot product of this vector with the patch in the right image also with the mean subtracted and divided by its standard deviation you can see here that the arguments denote where we center the patch where the patch has been extracted from the image the left patch is taken at x y and the right patch is taken at x minus d and y because we move in the negative direction by d pixels for disparity d but in both cases we extract the patch at y in the y coordinate so at the same image row this is one metric another metric is the so-called sum of squared differences which is even easier to compute the sum of squared differences basically just takes the left patch this big vector and then takes the right patch at x minus d and y which is the corresponding big vector from the other image subtracts both from each other and computes the sum of the squares of the elements of the resulting vector now different metrics have different advantages and disadvantages we don't have time to go into detail and there are numerous other similarity metrics in the literature and if you're interested in other or better similarity metrics i recommend having a look at the szeliski book chapter 12.3 or in particular also the pami paper from hirschmüller and scharstein called evaluation of stereo matching costs on images with radiometric differences that compares a whole variety of different matching costs or similarity metrics in terms of their performance when the objects in the scene are not completely lambertian so what does the block matching algorithm then look like in total well we first choose a disparity range we need to decide what is the maximum disparity that we want to search for which determines how close an object can be to the camera and so we set the disparity range to zero and this maximum d and then for all pixels in the reference image let's say this is the left image we compute the best disparity by using the so-called winner-takes-all strategy which means we just compute the similarity score for all possible disparity levels and independently for each pixel we pick the one that's best (a minimal code sketch of these similarity metrics and of the winner-takes-all search follows at the end of this transcript segment) and then we can also do this for the right image as reference image and search for the correspondences in the left image and the advantage of having disparities estimated in both the left and the right image is that we can then check for consistency as you can see if we compute disparities in only one of the images then there are a lot of outliers left and we can remove these outliers by doing the disparity computation in the left and in the right image and requiring that they are consistent so we can start from a pixel in the left image go along its disparity to the right image take the disparity there and that must warp us back to the first pixel location and if these don't coincide then we can say this is probably a wrong estimate and remove it and here on the right you can see the ground truth which is very precise and has been estimated by a structured light imaging technique for this particular scene now if we inspect these estimated disparity maps a little more closely we can see some artifacts apart from the noise one artifact is that there are some regions that appear to be entirely black and these regions are so-called half occlusions these are regions that are visible in one image but not visible in the other image in particular they are visible in the reference image but not in the target image and that's why they are called half occlusions visible only in the reference image we can't find the correspondence there because they are not visible in the target image for example here on the very left of the reference image if we look at the scene from the right camera's perspective then these pixels here will move outside of the image domain you
can see that this cone has disappeared the first thing we see is this cone here and that's why here we can't estimate the spiriting similarly if we have an object inside the scene like this part of the mask here let's say and there is another object in front like this cone that's occluding this mask in the right image then for these pixels on that mask of course we can't find the correspondence and so we can't estimate disparity for x for those here's an illustration of this half occlusion let's say we have these two objects the background object and the foreground object and then there is this red area that's visible in the left camera image but it's not visible in the right image because this object is occluding this portion of the scene now let's talk a little bit in more detail about the assumptions of block matching the assumptions that block matching makes that in practice are often violated and that's why more sophisticated algorithms are often used one assumption one of the most important assumptions that block matching makes is that all the pixels inside that region that we consider for block matching are actually displaced by the same disparity so let's assume we use this patch size for block matching for comparing patches and we compare two patches pixel by pixel using um serial normalized cross correlation or sum of square differences then we assume that all the pixels inside that patch are displaced by disparity d into the other image however this assumption is often violated it's this is called the fronto parallel assumption this assumption actually holds only true if you think about it holds only true for planes for 3d planes in the scene that are coplanar with the image plane that's why it's called the fronto parallel assumption it only holds true for all the planes that are parallel with the image plane like a wall that you're facing straight it doesn't hold true in particular for slanted surfaces so here we have a surface on this vehicle that is violating this assumption because it's it's entirely non-fronto parallel it's slanted inwards um the wall is to the right in other words and this surface or this the content of the surface deforms perspectively that's what we know if this would be a plane it based on perspective projection it deforms perspective perspectively and not by a simple translation when the viewpoint changes and it's illustrated here so i have zoomed into that patch this is what we have here and then if i look at the same patch in the other image and i toggle back and forth you can see that the size of the structures has changed that the content of that patch has been squeezed and that means that the assumption of every pixel inside a patch moving in the same way just by translating based on the disparity is violated similarly we have a similar effect for occlusions or at disparity discontinuities so here we have a patch and we want to predict the disparity for the center location of that patch which is on the stick that's in front of this background and you can see the zoom in here and obviously if i now move the camera the foreground stick moves differently from the background cloth and therefore the content of the patch that we are comparing this is the first patch this is the second patch changes completely and again this is a violation of this assumption that all the pixels are moving the same way so you can already see there's a trade-off between choosing large patches and small patches and that's illustrated here on the left is a disparity estimate 
of a stereo algorithm of a block matching algorithm using windows size 5x5 pixels and on the right you have a result with a window size of 11 by 11 pixels and what we can see here clearly is that small windows on the left here lead to matching ambiguities and therefore add noise to disparity maps but larger windows which lead to smoother results also lead to a loss of detail so we have less accurate less precisely estimated boundaries of the individual objects because we are violating the fronto parallel assumption too much so on the left we have too many ambiguities on the right we have less ambiguities but we are violating this fronto parallel assumption and so this also leads to border bleeding effects that you can see here where the if if i toggle backward and forward you can see that the disparity map the disparity the boundary of this foreground object with respect to the background object is not precisely aligned with the boundary of the object with the actual boundary object which we can clearly see because we can understand the scene so this boundary is here where the cone ends and here at this uh in this example here as well so we get this bleeding artifact where foreground objects are bleeding into the background the disparity of foreground objects bleeds into the background here at this pixel for instance where the disparity should be the disparity of the background object we actually observe or estimate the disparity of the foreground object that's called border bleeding and that happens if the windows are too large and because of the violation of this fronto parallel assumption and just to give a little bit more intuition um why this happens if we look at this this is for for this particular region here if we look at a little patch in this region we want to determine the disparity for a point on the edge that's easy because the edge is the dominant structure and so if we are on the edge we can estimate the disparity precisely because that's dominating the matching cost but unfortunately it's also dominating for this patch here where we want to estimate the disparity for this point but the background changes only slightly while this foreground edge is really dominant and so despite this pixel location for which we want to compute disparity being located on the background there's a lot of information here from this edge and so eventually this is out ruling the background the background is not providing enough grading information not enough texture and so this pixel gets assigned the disparity of that edge which is the disparity of the foreground object and that's incorrect now we can alleviate some of these effects and also remove some of the outliers by performing a so-called left-right consistency test now this is a result for a different a better stereo matching algorithm but it illustrates the principle outliers and half occlusions can be detected by computing a disparity map for both the left and the right image as a reference image and then verify if they map to each other if there is a cycle so we for every pixel in the left image we look at the disparity and move along that disparity uh in the right image and then we query the disparity at that at that location and then we we move along that disparity backwards in the first image and then that point that we end with should be the same as the pixel location that we have started with and if that's not true then there's likely that there's an inconsistency and it's likely that in one of the disparity estimates there's an 
error and so we can reject that point and that happens for instance for occlusions because in occluded regions the disparity is very likely to be inconsistent in the reference image where the half occlusion happens we are essentially assigning a random value and so that value will with probability almost 1 not correspond to the value obtained with the other image as reference image and so you can see here that it is relatively easy to detect these occluded regions using such a simple and also fast to compute cycle consistency or left-right consistency test |
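A minimal NumPy sketch of the block matching pipeline described in this transcript segment: the ZNCC and SSD patch metrics and a winner-takes-all search over a disparity range on a rectified grayscale pair. The function names, border handling, and default parameters are my own assumptions, and the nested Python loops are written for clarity rather than speed.

```python
import numpy as np

def zncc(wl, wr, eps=1e-8):
    """Zero-mean normalized cross-correlation between two flattened patches (higher = better)."""
    wl = (wl - wl.mean()) / (wl.std() + eps)
    wr = (wr - wr.mean()) / (wr.std() + eps)
    return float(wl @ wr) / wl.size

def ssd(wl, wr):
    """Sum of squared differences between two flattened patches (lower = better)."""
    d = wl - wr
    return float(d @ d)

def block_matching(left, right, max_disp=64, k=5, metric="ssd"):
    """Winner-takes-all block matching for a rectified grayscale stereo pair.

    Returns an integer disparity map for the left (reference) image; border
    pixels whose window would leave the image keep disparity 0.
    """
    h, w = left.shape
    r = k // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(r, h - r):
        for x in range(r, w - r):
            wl = left[y - r:y + r + 1, x - r:x + r + 1].ravel().astype(np.float64)
            best_d, best_cost = 0, None
            # search only towards the left of x, since disparity is non-negative
            for d in range(0, min(max_disp, x - r) + 1):
                wr = right[y - r:y + r + 1, x - d - r:x - d + r + 1].ravel().astype(np.float64)
                # turn ZNCC (a similarity) into a cost so "lower is better" in both cases
                cost = -zncc(wl, wr) if metric == "zncc" else ssd(wl, wr)
                if best_cost is None or cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp
```

Changing the window size k in this sketch reproduces the trade-off discussed above: small windows give noisy, ambiguous matches, large windows give smoother maps but border bleeding at disparity discontinuities.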
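A small sketch of the left-right (cycle) consistency test from the end of this segment, assuming two integer disparity maps have been computed with the left and the right image as reference; the tolerance of one pixel and the choice of -1 as the "invalid" marker are my own conventions.

```python
import numpy as np

def left_right_consistency(disp_left, disp_right, max_diff=1):
    """Invalidate pixels whose left/right disparities are inconsistent.

    For a pixel (x, y) in the left image with disparity d, the matching pixel in
    the right image is (x - d, y); its disparity should map us back to x. Pixels
    failing this test (e.g. half occlusions) are set to -1.
    """
    h, w = disp_left.shape
    out = disp_left.copy()
    for y in range(h):
        for x in range(w):
            d = disp_left[y, x]
            xr = x - d
            if xr < 0 or xr >= w or abs(disp_right[y, xr] - d) > max_diff:
                out[y, x] = -1          # mark as invalid / occluded
    return out
```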
Computer_Vision_Andreas_Geiger | Computer_Vision_Lecture_71_Learning_in_Graphical_Models_Conditional_Random_Fields.txt | welcome to computer vision lecture number seven this is the last lecture in this little excursion on graphical models in lecture number five we've introduced graphical models and a very basic inference algorithm the belief propagation algorithm for inferring marginal or maximum upper story solutions in lecture number six we have then seen some applications of graphical models and how we can incorporate prior knowledge about the problem into the model using graphical models for example in the case of stereo we've utilized prior knowledge about the properties the low level statistics of depth maps in particular that they are smooth in the case of multiview reconstruction we have shown that it's also possible to incorporate more complex knowledge such as knowledge about the image formation process into the graphical model and we have seen how inferencing graphical models leads to results that respect these prior constraints that we have encoded in the graphical models now all of the models that we've seen so far had some parameters for example in the case of stereo we had this lambda parameter which was adjusting the level of smoothness the strength of the regularization in other words and in this lecture we're going to talk about how these parameters can be estimated given a data set so this is about learning in graphical models this lecture is structured into three units in the first unit we're gonna introduce the terminology of conditional random fields which is crucial for learning in the second unit we're going to discuss the actual learning problem first in a simple setting where the parameters up here uh linearly or log linearly in the equations and where learning is a convex problem and then in the final unit we're going to talk about deep structured models where the parameters depend non-linearly or appear non-linearly in the model and we'll see also how we can combine or exploit graphical models in other ways to benefit deep learning let's start with conditional random fields so far we've talked about markov random fields or factor graphs and to remind ourselves what is a markov random field or a vector graph well we have in a markov random field a distribution over random variables here very concretely for one example this is the example of an mrf for stereos moving where we have in this example 100 or image denoising where we have 100 random variables and then we have this exponential of a sum of these log factors or potentials actually often we call the uh the factories the potentials but we also call the log factors the potential so this name of potentials is is used um both in the logarithm and in the original domain so it's a little bit confusing maybe but we always call these potentials so we have in this case a sum of unary potentials and a sum of pairwise potentials over maybe adjacent pixel sides and this defines our probability distribution and because this is not normalized we have to divide this by the partition function which is this expression here summed over the entire state space of all the axes or in the case of continuous variables integrated over the entire state space of these variables that participate and what we have discussed so far is inference in mark of random fields so we were interested in estimating marginal distributions that is for example the distribution p of x one or maximum a posteriori solution that is given a particular 
model what configuration of x one to x one hundred is actually maximizing this probability so we're interested in this arc max over all the variables and we have seen that both of these inference problems are relevant marginal distributions are often required when computing expectations we have seen that in the case of multiview reconstruction and we are going to see that also today when we want to learn these models and the maximum posterior solution is often also interesting in computer vision when we are just interested in a point estimate what is the best solution given the model now in this lecture we are going to talk about the learning problem which is how can we estimate the parameters in this example here there's just one parameter lambda but we could have more parameters here and we'll see examples where we have more parameters but here in this simple example it's just this one single scalar lambda so how can we estimate that lambda from a data set a little remark in the literature these potentials are sometimes defined as the negative log factors here we have to find them as the log factors which means that a high value of a potential will lead to a high value in the probability function but often people think about these potentials as low values leading to high probabilities thinking about them as energy but in the context of this lecture it doesn't matter because we'll just consider these as generic features so um we can just swap design and define them arbitrarily it's just basically an interpretation if we consider them as being positive and negative and the advantage of not having the negative sign here is that the formulas become a little bit more uh slightly easier to to write down so here we simply omit the sign and use this definition where high values of the potentials imply high probability now what are conditional random fields and why do we need them well if we just define markov random fields we can define the learning problem and why can't we define the learning problem well because we're only looking at one particular model instantiation so in the example of image noising we're looking at one particular image that we want to denoise and in that image we associate each pixel with a random variable and that's the set of random variables x but of course if we want to do learning then we want to learn from a larger data set not just for a particular instance and that's why we want to we need to formulate the problem conditionally and this is what leads us to so-called structured output learning again we have seen this already where we want to learn a function of mapping f with parameters w that goes from some space x some input space x to another output space y and in structured output learning as we already know the input inputs can be any kind of objects as also in regression but the outputs importantly are complex structured objects so in this case the outputs are the axis so we're gonna swap the name of the variables now to define the conditional random fields so the outputs could for instance be images or semantic segmentation maps or text or parse trees computer programs etc so here's the definition of the condition random field in a conditional random field we make the conditioning of the output on the input and the parameters explicit so instead of writing just the distribution on x we now write a distribution of y where the y is now the output which is what x has been before so please be careful now because we need to swap x and y when we talk about 
conditional random fields so we have a model that given a particular x that could be a noisy image produces an output y which is the denoised image so we make this input image x and the parameters explicit now in order to be able to apply this model on an entire data set where we want to learn these mappings from x to y from noisy images to denoised images so it's very crucial here that we remember that in the mrf notation we have just x so we called the output variables x but now in the crf notation we have in the structure prediction problem we have a mapping from x to y so the output variables are now y this is what has been x before and the inputs the noisy image is called x we haven't explicitly encoded a noisy image here this was implicitly encoded in the definition of for example the unary potentials before but now we have this explicit so that the unary potentials are functions of the input as well as the random output variable they act on and also the pairwise potentials are functions of the input and the output variables they act on and similarly of course for higher order factors as well so we denote the set of all input variables in a conditional random field as calligraphic x which are from a space x and the outputs as the set calligraphic y which are from the space y and learning that amounts to estimating the parameters of this model here in this case w is not a vector it's just a scalar it's equal to lambda so when i estimate the parameters of this model in this case just lambda given a data set of input output pairs in the case of denoising noisy inputs and denoised outputs y in the case of semantic segmentation input images x and semantic segmentation maps y so it's a supervised learning problem where we always consider input output pairs we always have annotations for each input x and from this we want to infer what is the optimal parameter such that given a novel input that we haven't seen during training we do the best possible prediction under that model now this was a very specific example of a conditional random field to make the transition easy from markov random fields this was the example where we looked at the simple image denoising problem with the unreal and the pairwise potentials but of course we can write this more generally if we write this in a more general form then we can just write as you see here we have just the sum of unity pairwise and maybe higher order factors we can just concatenate all these features in a long vector and then multiply this with the parameters so there could be a parameter here and there could be even a different parameter for every unary so for every i there could be a different parameter lambda i and for every combination i j there could be a different parameter lambda i j therefore the general form can simply be written as the inner product these brackets here denote the inner product i could also have written w transpose psi but it's more common to use these brackets in this context for the inner product so we have a parameter vector w and compute the inner product of that parameter vector w with these features which is the concatenation of all the potentials all the feature functions these individual feature functions that we have specified in the conditional random field and we take the exponential of this inner product and again this is of course not normalized so in order to arrive at a proper conditional probability distribution we have to divide by the partition function which depends on the input x and the parameters 
w but not on y because we are summing over y and this also explains why this is called the conditional random field because we are modeling conditional distributions we are modeling the distribution of the output conditional or conditioned on the input x again in this model we have a feature function this is this psi of x and y this is a function that operates on x and y and that can possibly decompose into something very simple according to the structure of the graphical model and that in the general form that function is just a mapping from the input space x and um the um m is the number of output variables so we have r to the power of m because we have m output variables and it's mapping to the feature space r to the power of d where d is the dimensionality of the feature space for example w here has this vector has the size d the size of the feature space and the size of the feature space of course depends on the problem in this case here it would just be one but in practice of course we wanna if we have larger data sets we wanna make this model more flexible give you more parameters and estimate more than this one single parameter now this is a very general form but of course we want to exploit as we'll see in the following the structure of graphical models in order to make this actually tractable and therefore typically this potentials here these log factors decompose so we have a concatenation of all the potentials all the log factors that are involved in the graphical model this is just a big vector that concatenates all the features produced by all the small features all these little potentials so for example here we're concatenating all the unit features over all the pixels and then we're concatenating all the pairwise features of over all the pairwise pixels into one big vector and then we have the parameter vector w which has dimensional ltd so m is the number of output nodes and d is the dimensionality of the feature space and this general form here of course is much more flexible than just having a single parameter because now we can have one parameter for each feature that is defined by the graphical model we have the partition function here that is simply the sum over the entire output state space y of this expression here and the goal is to learn or estimate the parameters w this vector w from a given annotated data set with pairs input output pairs x1 y1 x2 y2 until x and y and and so on okay |
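To make the log-linear form p(y | x, w) proportional to exp of the inner product of w and psi(x, y) concrete, here is a minimal NumPy sketch for a toy binary denoising problem on a short 1-D chain, small enough that the partition function can be summed by brute force. The particular feature functions (one summed data term, one summed smoothness term) and all variable names are illustrative assumptions, not the exact model from the lecture.

```python
import itertools
import numpy as np

def features(x, y):
    """psi(x, y): concatenation of the summed unary and pairwise potentials.

    x : observed noisy chain, values in [0, 1]
    y : candidate binary labelling of the same length
    One weight per feature type, so w has two entries here.
    """
    unary = -np.sum((y - x) ** 2)            # data term: labels should match observations
    pairwise = np.sum(y[:-1] == y[1:])       # smoothness term: neighbouring labels should agree
    return np.array([unary, pairwise], dtype=np.float64)

def log_partition(x, w):
    """Brute-force log Z(x, w) by enumerating the full output state space."""
    scores = [w @ features(x, np.array(y))
              for y in itertools.product([0, 1], repeat=len(x))]
    return np.log(np.sum(np.exp(scores)))

def log_prob(x, y, w):
    """log p(y | x, w) = <w, psi(x, y)> - log Z(x, w)."""
    return w @ features(x, y) - log_partition(x, w)

x = np.array([0.1, 0.9, 0.8, 0.2, 0.1])      # noisy observation
y = np.array([0, 1, 1, 0, 0])                # candidate clean labelling
w = np.array([1.0, 0.5])                     # data weight and smoothness weight (the lambda above)
print(log_prob(x, y, w))
```

In this toy setup the single smoothness weight plays the role of the lambda parameter discussed above; the brute-force partition function is only feasible for tiny chains, and for real grid-structured problems one would rely on the inference machinery from the earlier lectures.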
Computer_Vision_Andreas_Geiger | Computer_Vision_Lecture_21_Image_Formation_Primitives_and_Transformations.txt | hello and welcome to the second lecture of this computer vision course if we want to build computer vision systems we first have to gain a good understanding of the image formation process of how a 3d scene is projected onto a 2d image and once we have an understanding of this process we can start developing models that mathematically formalize this process and that is what this lecture is about this lecture is structured into four subunits in the first one we will introduce basic primitives and transformations in conventional but also in particular in so-called homogeneous coordinates which play a fundamental role for describing objects and projections in 3d computer vision in the second unit we're going to discuss the image formation process from a geometric perspective how geometric features are projected geometrically onto the 2d image plane in the third unit will then move on to the photometric image formation process how light actually changes while traveling from the light source to the camera and then finally we're gonna also have a look at how the image is then processed inside the camera and stored in the camera let's start with primitives and transformations geometric primitives are the basic building block used to describe 3d shapes in this unit we will particularly focus on points lines and planes which are the most fundamental geometric building blocks furthermore we will discuss the most basic transformations both in 2d and in 3d space this unit particularly covers the topics discussed in the book by riksilisky chapter 2.1 and a more exhaustive introduction into multiple view geometry and their fundamentals can be found in the hartley and scissorman book let's start with 2d points 2d points can be written in conventional or so called inhomogeneous coordinates as a two vector where we have two real scalar numbers real valued scalar numbers that describe the two coordinates for instance the two coordinates in the image domain now what we're going to introduce now is an equivalent representation in so-called homogeneous coordinates that's why we call these here inhomogeneous to contrast to homogeneous coordinates in homogeneous coordinates we're extending the dimensionality of the vector space by one for 2d points that means we have a three-dimensional space but actually this space has a certain meaning this three-dimensional space is not just a three-dimensional space because we're removing the element at zero and call this space p2 the so-called projective space and this is a very important space that we're going to work a lot with during this unit and lecture as you can see we have augmented this two-dimensional vector with a third coordinate that is called w and we have also introduced the tilde symbol the tilde above the symbol to explicitly denote that this particular coordinate is to be integrated as a homogeneous coordinate and to distinguish this from a conventional inhomogeneous coordinate so whenever you gonna see a symbol that carries a tilde on top you know that this is a homogeneous coordinate that's integrated as a homogeneous coordinate from this projective space that is simply r free uh in the two-dimensional case it's r3 except for the element at zero homogeneous vectors differ only by scale homogeneous vectors that differ only by scale are considered equivalent that's why it's an effectively a two-dimensional space this projective space because we 
define an equivalence class by all vectors that are related through a scalar operation so assume there's a vector 1 1 1 then the vector 2 2 2 is considered equivalent it falls into the same equivalence class that means homogeneous vectors are defined only up to scale why do we introduce such a strange construction well we're going to see very soon that this is very beneficial it allows us to express points at infinity it allows us to express intersections of parallel lines it allows us to express transformations very easily as concatenations of multiple transformations and in particular it also allows us to denote or to mathematically formalize projective projections that are fundamental for 3d computer vision as linear operations but for now the most important thing that you have to remember is that homogeneous coordinates are denoted by a tilde and they are representing equivalence class with a vectors in a vector space or using a vector space that's one dimensionality larger than the original inhomogeneous coordinates so in 2d that is a three-dimensional space for 3d in homogeneous coordinates that would be a four-dimensional space and then all of the vectors that are related just by a different scaling are considered to be equivalent now how can we convert between an inhomogeneous vector and a homogeneous vector an inhomogeneous vector is converted to a homogeneous vectors vector by simply adding a one this is a convention if there is a one as the last element then the first two coordinates are considered the inhomogeneous vector of a homogeneous vector so here we have an inhomogeneous vector x and we add we concatenate of one to this two-dimensional vector to obtain a three-dimensional vector and then this is gonna this is going to be a a homogeneous vector where the first two coordinates are the inhomogeneous part and for all vectors that have a one at the end we call these vectors also the augmented vector x bar so we introduce another symbol that we can put on top of these vectors which is a bar and this bar denotes that we have a homogeneous vector it's also three-dimensional vector but it's one particular element of this equivalence class which is exactly that element that has a one as the last entry and there's only one such element so it's one particular element of this of all possible homogeneous vectors that are equivalent to each other and this is how we define the relationship between the inhomogeneous vector and the homogeneous vector now this also allows us then to of course convert in the opposite direction as you might have already realized to convert in the opposite direction we simply have to divide by the last element of the homoge homogeneous vector w tilde because if we do so the last element of the vector turns into a one and so we can read off the inhomogeneous vector from the first elements of the vector so here's an example we have um an inhomo and homogeneous vector x tilde with the last element a w tilde and we divide this by one over tilde so we get x tilde over w tilde and y tilde over w tilde and w tilde over w tilde which is 1 as the last element and so this is then as we see an augmented vector because by definition the last element must be one so the inhomogeneous vector is x tilde divided by w tilde and y tilde divided by w tilde so we gained we or we see the relationship between the homogeneous the inhomogeneous and the augmented vector here all in one equation there's one special homogeneous vector that has a special meaning and that's the 
homogeneous vector where the last element w tilde is equal to zero and such points are called ideal points or points at infinity and these points can of course not be represented with inhomogeneous coordinates because if we have w tilde equal to zero we would have to divide by zero here these two elements which would lead to infinity for the x and the y component of the inhomogeneous parts however this allows us to very conveniently express points that are located at infinity even without having to use the infinity symbol we can simply have a vector a homogeneous vector where the last element is zero and the point that corresponds to that homogeneous vector is a point that lies at infinity here's a visual illustration of this of the relationship between homogeneous and inhomogeneous and augmented vectors we have the homogeneous coordinate system here x y and this should actually be a w and then we have the homogeneous vector that's represented in that coordinate system and we have this plane here that is a plane in the x y plane of the homogeneous coordinate system but translated to location where where said or w actually is equal to one and so um [Music] if we divide this vector here by w tilde we obtain this point here that is on that plane where the line um that connects this 3d point and the homogeneous coordinate system intersects that plane that defines the inhomogeneous coordinates and at that point we have the augmented vector with the last element being one and you can already see that this model or this this projection that we are doing here by division uh resembles a lot the perspective projection process that we're going to discuss later so it's it's kind of intuitive that this is a a useful thing to do a useful representation to have but we'll see precisely what we can do with this homogeneous representation in the next couple of slides now um given this homogeneous representation of points we can also express lines using 2d lines using homogeneous coordinates if we want to express a 2d line using homogeneous coordinates we also write l tilde we're always using bold symbols by the way to denote vectors bold lowercase symbols we use bold uppercase symbols to denote matrices and we use non-bold symbols to denote scalars so we have a line that is a free vector abc and if we multiply that free vector abc as an inner product with a augmented vector x bar we obtain this expression here a x plus b y plus c 1 equals 0 which we recognize as the line equation for all points that are located on the line all points x y if this equation here is zero these points are located on the line this is the definition of a line all points that satisfy this constraint are on the line and so we can write this in this um no explicit as this explicit expression here but we can also simply write this in terms of a inner product between the homogeneous line vector and the homogeneous point which in this case is a in a augmented vector but of course note that it doesn't need to be an augmented vector augmented vector is just one element out of this equivalence class of homogeneous coordinates so we can scale we can use this line equation holds for any other homogeneous vector because we know that all of these homogeneous vectors that are related by just scaling are the same so we can divide or multiply this equation here on the left hand side and on the right hand side arbitrarily and obtain still a line equation we have just chosen here for convenience the augmented vector x bar because this gives rise to 
exactly this line equation that we're familiar with but of course multiplying both sides doesn't change anything now um for for instance what we can do is we can normalize l bar which is also a multiplication with a scalar such that l bar the line becomes n x n y and d or n d which then has a particular geometric meaning where this n we normalize it in a way such that this n vector is normalized to 1 and in this case n is just a normal vector perpendicular to the line and d is the distance to the origin of the coordinate system okay and i have an illustration for this on the next slide an exception or a special line is the line at infinity which we denote as l tilde infinity which is the line defined by the vector 0 0 1 and this is the line that passes through all the ideal points for all the points at infinity now of course we can't normalize this line such that it is in that representation because we would have to divide by zero and that makes sense because that line is at infinity we can't represent it in in that normalized way now um to describe some of the properties of what we can do with lines what we introduce is first of what we remind ourselves is first the traditional cross product that we know from school between two vectors the cross product is written as if we have one vector a and one vector b a x b and we can also express this cross product as a skew symmetric matrix and a vector here we can see how this is looking or how this is written mathematically with these two brackets and the x symbol and uh this skew symmetric matrix is defined like this such that if we multiply this with the b vector we get the familiar form of a cross product between the vectors a and b where the elements are a1 a2 a3 and b1 b2 b3 another remark here in this course we are using square brackets to distinguish matrices from vectors so whenever we have a vector i'm trying to write a a non-squared matrix a non-squared bracket and whenever we have a matrix i'm trying to write a squared bracket to distinguish the two you can see we have one matrix here and a vector here and another vector here good now in homogeneous coordinates the intersection of two lines is given by a very simple form or as a very simple expression using this cross product definition here what we can what we have is that this point x tilde everything in homogeneous coordinates is equal to the first line cross product the second line vector and similarly the line joining two points can be compactly written in homogeneous coordinates as l tilde being equal to the first homogeneous point cross product the second homogeneous point and here i've used the augmented version of a homogeneous vector but this could be replaced by any homogeneous vector of course again the symbol here denotes the cross product and the proof of this is very easy and is left for the exercise so we see already that this homogeneous representation is a very useful representation to express very simple relationships that are otherwise harder to express here's an example so here we have two lines one line that is characterized by the equation y equals one and another line that's characterized by the equation x equal two the line vector for the first line is simply zero one minus one because if we multiply this with the augmented vector x y and one we obtain y minus 1 equals 0 or y equals 1. and similarly the line vector for the second line is 1 0 and minus 2 because we want to express x equals to 2. 
now if we take these two line vectors and compute the cross product which can be also written as this product of this q symmetric matrix with the second line vector where this skew symmetric matrix is simply specified as this cross product matrix which i have again written here on the slide on the right and we multiply these two together then we obtain two one and one where we see this is all directly it's it's not not a arbitrary homogeneous vector but in this particular case it's already a augmented vector with the last element being 1 so we can directly read off the inhomogeneous coordinates 2 and 1 and we can verify that 2 and 1 is indeed the correct intersection of these two lines we can also do the same for two parallel lines here we have two parallel lines one zero and minus one and one zero minus two and if we do the same thing now for these two parallel lines we obtain as the intersection point zero minus one and zero and uh this intersection point as we can see from this homogeneous representation is at infinity as expected right so these two lines are indeed intersecting at infinity because the last element is zero it is an ideal point the outcome of this operation is an ideal point and we can also take the line at infinity and multiply this with the point at infinity and verify that indeed the inner product between these two is zero so the point in other words the point at infinity does actually lie on the line at infinity okay so we've seen that homogeneous coordinates allow us to express points at infinity and very easily the relationship between lines and points we can also represent more complex algebraic objects using homogeneous equations for instance if we go from linear equations that we had discussed so far to polynomial equations like quadratic equations here in this case we can express conic sections so here we have a quadratic expression and the solution to this quadratic expression is a conic section which is a section of a cone with a plane and this gives results to such such such a such a curve here and depending on how we orient that plane which is defined by the q matrix we get either a simple circle an ellipse in this 2d space a parabola in this blue 2d space or a hyperbola in this 2d space and this is very useful for multiview geometry and camera calibration but we're not going to discuss this in in very great detail here in this lecture and there's an entire book written around this which is the hartley and scissorman book which i have which i recommend to have a look in case you're interested in more details about this type of representations and how they relate to camera calibration now let's move on from 2d points to 3d points similarly to 2d points 3d points can also be written in homogeneous coordinates here's an inhomogeneous representation of a 3d point and here is the corresponding homogeneous representation of a 3d point where we now have a projective space p3 which is the four-dimensional vector space except the element at zero right so this element we remove to because we have to define the direction of the vector um so w tilde can be zero but not all of the elements of that vector can be zero and that's what's called the projective space the origin is removed now similar to before well first of all similar to before all homogeneous vectors that are related by scale are in the same equivalence class and we can convert between homogeneous and homogeneous vectors in the same way as before for instance we can go from a homogeneous vector to an 
inhomogeneous one by dividing by w tilde we can also represent 3d planes as homogeneous coordinates where now the plane is represented by a homogeneous vector m tilde a b c d where we have this expression ax plus b y plus c c plus d equals zero that has to be fulfilled for all x y c where these points are on the plane and we can equivalently write this compactly as this homogeneous constraint again we can normalize m tilde so that's m tilde or so that n this three-dimensional vector the euclidean norm of n is equal to one and in this case again n is the normal perpendicular to the plane now not the line but the plane and d is the distance to the origin an exception is the plane at infinity which passes through all ideal points or points at infinity for which w tilde is zero here's an illustration i mentioned before that i show an illustration for lines but i omitted that i just directly have the illustration here for planes but it's um the same just one dimension higher so what we have here is this um coordinate system here and we have this plane here which is at distance d from the coordinate system if this n vector here is normalized to one and then n corresponds to the normal of that plane okay now 3d lines are a little bit less elegant than either 2d lines or 3d planes to express one possible representation is to express points on a line as a linear combination of two points p and q on that line as done here this convex combination however this representation uses six parameters for four degrees of freedom a 3d line has only four degrees of freedom and we're using six parameters here which is not ideal there's alternative representations um there's multiple alternative representations one is and one minimal representation is the two plane parametrization and another one are the plicker coordinates and there's more details on this in silicy and hartley and simmerman scissorman in chapter two the 3d analog of 2d conics are quadric surfaces so here we have a variety of quadric surfaces that can be expressed as this quadratic equations here with homogeneous coordinates in three-dimensional space so we have a quadratic expression here and all the points that lie on the surface of this quadrice depending on q this quadric shape changes but all the points for a particular q all the points let's say in this case in this parabola case all the points that lie on the parabola surface satisfy this expression now these quadrics are useful in the study of multi-view geometry again and they also serve as useful modeling primitives for understanding scenes in terms of compact representations here's a simple example from my research group where we have used super quadrics which are a slight generalization of these quadric surfaces in order to represent geometric objects in terms of simpler parts so here you see some meshes you see representations in terms of simple cuboids and then in the bottom in terms of super quadrics and this is very useful for understanding the world in terms of of simple primitives and can be extracted in a pretty unsupervised fashion and allows for shape abstraction and also compression we can then compress scenes very compactly and store them very compactly while still preserving the dominant semantic meaning of the scene okay so let's move on to transformations now in this unit we're going to discuss the most important 2d and 3d transformations i'm going to start with 2d transformations and the simplest 2d transformation is clearly a translation where if we have a square here 
we're simply translating that square to another location in that two-dimensional space and such a translation is simply given by adding a two-dimensional vector in inhomogeneous coordinates to all the points that we'd like to translate and we can alternatively write this as this homogeneous expression here where here on the right we have a three by three matrix and we're converting a homogeneous in this case an augmented vector to the new location by multiplying it with the identity matrix and adding the translation now see that the translation here is added because the last element of this augmented vector is one so we're multiplying one with t and we have the identity here so we're taking the original point and adding t so it's a simple 2d translation with 2 degrees of freedom t x and t y now using such a 3 by 3 matrix here instead of this allows us to very easily chain or invert transformations so we can compute the inverse translation by inverting this matrix and we can chain multiple translations by multiplying multiple of these matrices together into one matrix that then represents the chained overall transformation and this is not only true for the simple translation example but for all the other transformations that we are going to see euclidean similarity are fine and projective as always augmented vectors can always be replaced by general homogeneous vectors x tilde the next transformation is the so-called euclidean transformation which is a translation and a rotation so it has three degrees of freedom in 2d space and this can be expressed as rx plus t in homogeneous coordinates or in homogeneous coordinates by a matrix where in the top left 2 by 2 sub matrix we have the rotation component r and then in the top right we have the translation vector the two-dimensional translation vector and if we multiply this with an augmented vector we see that we obtain this expression rx plus t r is a rotation matrix from the special orthogonal orthonormal group so2 um with the properties that r r transpose is equal to the identity so it's r to normal and the determinant determinant of r is one euclidean transformations preserve our euclidean distances so if we look at two points that are at a certain distance after the euclidean transformation they are also at the same distance and this of course holds also for the translation because translation is a special case of the euclidean transformation but it doesn't hold for instance for the affine transformation so we see already that this is a giving us a hierarchy of transformations that we're looking at here the next level is the similarity the next level in this hierarchy is the similarity transformation which is 2d translation plus scale 2d rotation plus a plus plus 2d rotation plus scale so it has four degrees of freedom and it's exactly the same expression as before except that now we have a scale that's multiplied um to uh the rotation before translating the vector again r is a rotation matrix now s is an arbitrary scale factor it's a scalar and the similarity transformation preserves not anymore distances but at least angles between lines so we have for instance an orthogonal relationship between these two lines here which is preserved from the original cube but the scaling has changed and the rotation and the translation now going one level up the hierarchy we have affine transformations which have six degree of freedom where now the rotation matrix here has been replaced this two by two matrix with a general arbitrary two by two matrix 
so we have four degrees of freedom here and two degrees of freedom here but here at the bottom we have a zero vector and a one so overall we have six degrees of freedom and we can see an example of an affine transformation or a linear transformation here which is um not preserving angles anymore as you can see but which is preserving parallel lines so if you have two parallel lines then these remain a parallel with respect to each other after the transformation and then finally the most general transformation that we can define with such a linear homogeneous equation is the so-called perspective transformation or homography and an example is given here now every point can transform every point on this square can transform to a different location that is why we have eight degrees of freedom we can also represent this by the coordinates where every every of these three corners of the square transforms to but it's only eight degrees of freedom so all the points that are in between or outside would transform similarly to uh this this transformed corners of the square and so this can be represented by a three by three matrix which is a homogeneous matrix that is defined only up to scale that is why this matrix doesn't have nine degrees of freedom but only eight degrees of freedom it's a homogeneous matrix so all the matrices that are scaled relative to each other but are otherwise the same are in the same equivalence class so this property at this principle of homogeneous homogeneity doesn't only apply to vectors but of course also applies to matrices now this is an equation here that is in terms of a general homogeneous vector and of course we can again obtain the augmented vector by dividing by the last element as we've defined in the beginning perspective transformations do not maintain parallel lines anymore as you can see but they preserve straight lines so if a line has been straight in the original um before the transformation then after the transformation is guaranteed to also be straight it's not bending it's a linear transformation in homogeneous coordinates now we've seen how points can be converted into new points using such homogeneous transformations but what happens with co vectors lines are co vectors because they are represented by a vector that is used always in combination with a homogeneous coordinate to define a useful principle such as a line equation and so what happens with for instance lines how can we transform lines with these homogeneous representations well considering any perspective 2d transformation what we can do is we can look at the transform 2d line and the transform to the line equation is simply l tilde prime this is prime indicates that this is the transformed line transpose times x tilde prime now we know what x tilde prime is because that's what we have just defined the perspective transformation so we can plug that in here and then we can multiply these two together or factorize them out and obtain h transpose l prime transpose if we transpose this expression here and this as we recognize is simply the line equation if we set it to zero this is what we started with this is our assumption that this transpose x tilde is equal to zero so this must be the expression of the line before the transformation in homogeneous coordinates and now we see that well l-tilde transpose is this expression transpose which means that l tilde prime this is the trans transformed line is equal to h tilde transpose to the power of minus 1 times l tilde therefore we see that the 
action of a projective transformation on a co-vector such as a 2d line or 3d normal can be represented by the transposed inverse of the original matrix that is transforming points and can be easily obtained and applied so here's an overview of 2d transformations we see the hierarchy translation euclidean transformations similarity transformation affine and projective transformation we see their representation we see that degrees of freedom what they preserve and here an illustration of what they do so we see that these transformations form a nested set of groups which is closed under composition and inversion so these are defining mathematical groups and they can be integrated as restricted 3x3 matrices operating on 2d homogeneous coordinates so this is the least restricted transformation or least restricted group of matrices and the more we go up in this hierarchy the more restrictions we have here the restriction is that the top left of the matrix is an identity matrix and while these matrices here are written as two by three we can always add a last line zero and one to make this into three by three full rank matrices the transformations um preserve the properties below which means that for instance the similarity transformation does not only preserve angles but also pluralism and straight lines and the translation operation does not only preserve orientation but also all of the other nested transformation properties so the translation preserves almost everything except translation and the projective transformation only preserves straight lines and the same holds true for 3d transformation so here's an overview over the equivalent in 3d space where the degrees of freedom changes but otherwise is very similar again these three by four matrices that you see here can always be extended with a fourth row um to apply them in homogeneous transforms and to change transformations etc now um let's go back to the 2d case and let's think about how we can actually estimate the parameters of these transformations given some observations and while this is possible for all the transformations we are going to look here just at the most general transformation which is the perspective or the homography transformation and we already know that this homography has eight degrees of freedom so if we want to estimate this transformation from correspondences between uh you know a representation before and after the transformation between two images let's say then we know because each pixel defines two constraints a constraint for the x and the y coordinate we need at least four of these correspondences in order to properly define and estimate the homography so let x denote a set of n to d to 2d correspondences related by this homography equation in homogeneous coordinates where x tilde is the point the ive point before the transformation and x i tilde prime is the point after the transformation and we need at least n equal four but we can have more we can have an over determined system to obtain a better homography as the correspondence vectors are homogeneous they have the same direction but they differ in magnitude right so we we have represented this equation here in homogeneous coordinates so now in order to solve it we cannot just set the left side equal to the right side and solve for this because there is more flexibility in it we have to account for this flexibility and the simplest that we can do for this is we can rewrite this expression we can rewrite this equation as the cross product between the 
transform point and the point before the transformation multiplied with the transformation matrix and that must be equal to zero and the reason for this is that in the cross product the cross product of two vectors is zero if these two vectors are pointing in the same direction but it doesn't depend on the length of these two vectors so this is a more convenient expression for us to work with because we wanna to make sure that these two vectors x tilde and x tilde prime are homogeneous vectors so they can scale arbitrarily so this is the term that we are concerned with now using h little h tilde k transpose to denote the cave rho of h tilde this can be rewritten as a linear equation in h tilde and this is what this equation looks like this is just rewriting this cross product if you multiply this you will see very easily that this corresponds now this is nice because now we have a linear system where this h vector is the unknowns that we want to solve for and here on the left we have a matrix of uh um like we have this this system that we that we use for solving for h so in other words we have a linear system a h tilde that shall be equal to zero now the last two rows are linearly dependent up to scale on the first two and can therefore be dropped last row is linearly dependent up to the scale on the first two rows and can be dropped sorry and so typically what people do is just consider these first two rows so we have a three by one vector h1 and a three by one vector h2 tilde and a three by one vector h3 tilde that is multiplied with this row and this row and this is a two by nine matrix so each of these elements here has three columns and we have two rows that means as expected we obtain two constraints per point correspondence per eye where we can now stack all of these correspondences into a bigger matrix to obtain a a determined or over determined system that we can use to determine the vector h and that is the l that contains the elements of the matrix h so each point correspondence yields two equations and stacking all these equations into a 2 by 2n times 9 matrix for instance 8 times 9 matrix a leads to the following constrained least square system where we have h star this is the solution to that system is the minimizer of this expression with respect to h tilde and this expression simply says well because we we want this to be zero we want the squared error of a h tilde to become as small as possible um so this is the l2 norm here and we want at the same time we've introduced a constraint that fixes h tilde the norm of h tilde to one as the h matrix that we tried to determine is inhomogeneous and this is how we it's homogeneous this is how we introduce this constraint and constrain this matrix um because otherwise it will be unconstrained and only defined up to scale and so we have to constrain this matrix to one particular solution out of this equivalence class and we do this using this lagrangian formulation where we have this lagrange multiplier here this is a standard constraint optimization formulation with lambda [Music] times h minus or h squared minus 1 where this term here vanishes if the norm of h is exactly equal to 1 and if we take the gradient of this we arrive at this expression here that is not true um um where uh so where um what has happened here is only um uh i i've rewritten the um the expression on the top uh without the square right so this is this is just rewriting it in terms of standard linear algebra so we have h squared is simply h transpose h and a h 
squared is h transpose a t times a h okay now the solution to this optimization problem is the singular vector corresponding to the smallest singular value of a in other words the last column of v when decomposing the matrix a as a singular vector decomposition u d v transpose and this is very similar to what we have discussed in the deep learning lecture 11.2 in the context of pca except that here we're using a singular value decomposition instead of an eigenvalue decomposition but we could also use an eigenvalue decomposition of the matrix a transpose a which is equivalent to a singular value decomposition of a yeah and why why do we have to use the singular validity composition here well um we see that this is an eigenvalue problem if we want to minimize this expression and we compute the derivative of this expression with respect to h we see that we end up with an eigenvalue problem as in the case of pca we have a h equals minus lambda h and this corresponds to a eigenvalue problem good the resulting algorithm is called a direct linear transformation so this is a very general algorithm that not only applies to homography estimation but in general applies to this estimating and closed form solutions to systems expressed in terms of homogeneous coordinates what can we do with this now well and this is also something you're going to do in the exercise we can now define correspondences between two images so we have two images here that are taken from the same viewpoint that's important you may not translate because otherwise the projective geometry doesn't work they must be taken from the same viewpoint but they can be taken in different directions so you can rotate yourself around your own axes and what we want to do now is we want to stitch them together into a panorama as your camera stitches images that you take into panorama and for that we can determine correspondences for instance we can mark this point on the hill and this point here and maybe this point here well this is not visible actually so this point here and this point here so we can determine four such correspondences between these two images and then using svd solve for the matrix a solve for the vector h which defines the homography h and then we can warp one of these images into the other image space and combine these two images into a larger panorama of the scene and this is very standard today every almost every smartphone has the possibility to create such panoramas from multiple images |
Computer_Vision_Andreas_Geiger | Computer_Vision_Lecture_111_SelfSupervised_Learning_Preliminaries.txt | hey and welcome to computer vision lecture number 11 on self-supervised learning this lecture is divided into four units in the first unit we are going to discuss some preliminaries and basic motivation for self-supervised learning in unit number two we're gonna look at the first type of self-civilized learning models that we get to know which i call task specific models that are self-supervised formulation formulations for specific tasks such as single image depth prediction for example in unit number three we are going to cover classical pretext tasks for learning generic visual representations that can later on be fine-tuned to some specific task and finally in unit number four the latest self-supervised learning trend which is called contrastive learning let's start with the motivation and the preliminaries most of the approaches that we have learned about in this computer vision class were so called supervised learning techniques which means that in order to estimate the parameters of these models and there's many of them there is maybe millions or even billions of parameters in order to obtain these parameters we have to train on a very large label data set there's many parameters so we need a lot of data to set these parameters and these labeled data sets have to be collected right there has to be typically a human or many humans that verify each other in a sophisticated process there is entire company specialized to these processes that take images look at images and annotate these images that assign a class label to an image that draw 2d bounding boxes in the images and assign an object label that draw 3d bounding boxes that annotate them in terms of their semantic per pixel categories and so on so supervised supervised learning really requires very large amounts of annotated data and it's very tedious and time consuming to obtain this data here's a little picture that i've drawn for one of our papers that shows how much effort effort he is measured in terms of annotation time is required in order to obtain annotations for a particular data set and so there are some data sets where annotation is relatively relatively easy like imagenet where you have to just decide for one out of 1000 categories and this is a single label for the entire image so you can do that in the order of maybe a minute or so but then if you look at the other extreme of this curve here we have semantic segmentation which is about assigning a semantic class label to each and every pixel of the image and that takes between 60 and 90 minutes roughly if you want to do it well and so there's there's really this curve here that says well even if you have a lot of resources then there's a trade-off so you can either annotate few images very precisely or you can annotate a lot of images very coarsely but you have this trade-off here so let's look at some concrete annotation problems and let's start with the de facto gold standard image classification the imagenet annotation problem imagenet has been annotated by humans so there has been millions of images have been presented to human annotators typically more than one annotator to make the labels consistent and noise free and these annotators had just had to decide in a quite sophisticated process in the end had to decide which of the 1000 categories each image belongs to however there's several types of apart from the the cost that is induced by this there are 
several types of errors that humans do and this has been analyzed in more depth by andrei carpathi in a self experiment and so the first type of error that he found was fine grain recognition for example there are more than 120 species of dogs in the imagenet data set and it's really hard for a non-dog expert to classify which precise species a particular dog belongs to some dogs look very similar to each other despite they belong to different species and some dogs might look very different from each other despite they belong to the same species and that's something that not every human can do easily and so he estimates that about 37 percent of the human errors fall into this category the second error he mentions is class unawareness the annotator may in some cases be even unaware of the ground truth class present as a label option because there is so many classes there's 1000 classes and so sometimes the annotator doesn't even know that this class exists for this particular one out of thousand classes image classification annotation problem and then finally insufficient training data in some cases because there's only few examples there's only just a few examples also for that we can present to the annotator to to train the annotator the annotator has also to go through a training process but if the annotator is not an expert and is presented only with let's say 13 examples in some cases here of a particular category of a particular dog for example then this is insufficient for that human annotator to generalize well and so there's also errors induced by this and then of course image classification is very expensive for example imagenet if you would um calculate like the time it would take one human to annotate image net that would be 22 human years full-time work of a single person just sitting in front of a screen and an annotating or categorizing images and i put the link here of andrei kaphi's blog which is quite a nice read if you wanna wanna understand this this better so he did experiments where he tried to figure out what a human what performance a human can reach if a human would try to classify the imagenet uh data set as a baseline for state-of-the-art methods and and these are some of the findings that were a by-product of this experiment so here are just a few of these 120 dog categories that are present in the imagenet dataset i believe in the imagenet benchmark in the end they only used 90 categories but even 90 different dot categories are very hard to categorize if we go from image classification to semantic segmentation the problem becomes even harder because some of these objects are very small there's just a few pixels and it's really hard to recognize even if you zoom in and then because you have to densely annotate these images it takes much much more time for example for the cityscapes set it was estimated that one image required about 60 to 90 minutes annotation time and there were multiple annotators required in order to get the label error down you can see an example of one single such annotation then there's other tasks like stereo or monocular depth estimation or optical flow for example as present in the kitty data set where for humans it's nearly impossible to actually obtain labels manually it's really difficult for humans to solve the correspondence problem something that computers are actually quite good at we've seen examples in the structure for motion lecture for example but for humans it's really hard to tell which points correspond in particular if 
you look at this image here if you look at a point in in the vegetation area on this tree and you look at an at this tree from a slightly different viewpoint it's very hard to say which point corresponds to which point now luckily in the case of kitty there was a way of at least getting depth ground roof because there was a additional sensor mounted on the vehicle um there was a lidar laser scanner and with that scanner we were able to obtain ground truth for depth but for optical flow ground roof is really hard to get what you can do and what we did is we were estimating the motion of the ego vehicle and compensating for the rolling shutter effect of the lighter scanner in order to convert a depth map into an optical flow map for the static part of the scene but this does only work for the static part of the scene and there's some estimates involved because you have to estimate the ego vehicle speed etc and for dynamic objects it doesn't work so what we did for dynamic objects is we were trying to retrieve cad models that were the right cut model for that particular type of car and then we were trying to fit the cut model to multiple frames with a lot of manual intervention and with this process we were able to obtain 400 images but really not much more it's a it's a very tedious and time-consuming process and also there's a lot of errors still in this in this data so it's really hard yeah so that's how we train computers how are humans trained when we grow up there's there's not a lot of labels that we actively perceive right so if if we see a lion like in this case here the daddy explains to the kids what what line is by pointing to that line that happens maybe once or twice or three times but then just from these three examples the kid is able to extrapolate to generalize to new viewpoints to new line individuals to new uh also to different species and this generalization ability is quite remarkable and something that we haven't achieved with computers really yet but parts of this lecture here are going into that direction trying to learn um without a lot of or without any label data so how do humans learn well humans learn mainly through interaction by not just perceiving but by also interacting with their world but then also by passive observation by just looking and making sense of the world moving a little bit around and looking how things behave and this is something we haven't really exploited yet much a lot of the things that we have looked at so far were really supervised learning where we always had this x y data label pairs so here's an example of a baby interacting with an object and observing and figuring out that this is something that can be grasped and that has a certain taste and that can be played with and it makes certain sound etc so what is the idea of self-supervision the idea of self-supervision is to try to avoid having large annotated data sets but still train a lot of the parameters that we need in end for the target task that we want to actually solve such that we only have to fine-tune the network in the end a little bit from this initial estimate that we obtained without labels and how do we do this well instead of taking labels from a human annotator we take labels from the data itself sounds a little bit like magic but it's actually not quite hard so um the you can think of this as predicting parts of the data set or parts of a particular example a particular sample in the data set from other parts of that sample let's assume you have an image and to 
remove some part of that image you hide it and you're trying to predict that that region of the image from the observed from the from the observed part of the image that poses a problem to a neural network that requires abstract read or more abstract thinking or semantic reasoning in order to be solved that's the idea of self supervision instead of supervising with labels supervised with data because data sets are relatively cheap to obtain and labels are the things that are hard to obtain and there's many ways you can do this this is slide from yan li kun you can predict any part of the input from any other part for example you can predict the future from the past here with the time axis you can predict the future from the recent past you can predict the past from the present you can predict the top from the bottom you can predict the occluded from the visible but in all these cases you pretend there is a part of the input you don't know you're removed from the input and you predict that and that's called pretext learning it's a pretext task because it's a a substitute for the real task that you want to solve here's an example it's one of the um oldest examples of self-supervised learning maybe it's called a denoising auto encoder where in this case on mnist we have an mnist digit and we have an auto encoder here so an encoder and a decoder these are neural networks parametrized by some parameters but then we don't show the original input image to the auto encoder but we partially destroy it so we remove certain pixels or we add a little bit of noise to each pixel and that's what we show to the auto encoder and the task of the auto encoder is then to reconstruct not the noise the image but the original image that's why it's called a denoising auto encoder it tries to denoise the noisy image into a sharp image therefore the denoising autoencoder predicts the input from a corrupted version of the input and then after training we can throw away the decoder that was just used for training the model but what we are interested in is actually the encoder which is kept and this is the model that we're gonna use then for making a prediction so we can then use this model and add a little classification head on top a little neural network that then we train for the target class for example for categorizing these digits which was not part of the pre-training so called pre-training stage which is self-supervised pre-training stage because in this pre-training stage we didn't require an enabled any labels we're just trying to get a sensible encoder that has learned something useful through this auxiliary task about the world to put self-supervised learning a little bit into context i list here four different types of learning problems the ones that are most commonly referred to the first is reinforcement learning if you go to our self-driving cars lecture or other lectures at this university you will hear about reinforcement learning which is basically learning models model parameters for example parameters of a neural network by using active exploration from sparse rewards of the environment there is an agent that interacts with the environment and once in a while there's a reward telling the agent well you did you did well or you you didn't do so well and from these rewards you you learn how to adapt the parameters then there's unsupervised learning where the goal is to learn the model parameters using a data set without any labels and classical examples of this are clustering dimensionality 
reduction pca or gans and then there's supervised learning where you learn the model parameters using this data set of data label pairs xy for example traditional image classification or imagenet regression structure prediction semantic segmentation and so on and finally there's self-supervised learning where we learn the model parameters using a data set of data data pairs which i denoted here by x i and x i prime to indicate that this is not a label y but this is also part of the data set and this is what we're going to look at today and then finally this is a slide i i think everyone has to show is maybe the most famous slide from yan likun the famous black forest cherry cake that illustrates now from a different viewpoint that illustrates how much information is given to the machine during learning for each of these learning problems and the idea here is to illustrate that well in reinforcement learning which is symbolized here by this cherry that the machine predicts uh where the machine predicts just a scalar reward um uh that is given once in a while there's only very few bits like you play an atari game and then once in a while you get you get a best score and then you have to update your parameters of this entire sequence based on this reward then there's supervised learning which is the icing on this cake and there the machine predicts a category or a few numbers for each input so for example it predicts a image label or it predicts a few 2d bounding boxes and the ground rule for this is supplied by humans so we have a few bits per sample but then there's self-supervised learning which is all the inner stuff of the cake where the machine predicts any part of its input for any observed part for example predicting the future of video frames and there's much more bits per sample now because there's many as we'll see many tasks that we can formulate many loss functions that we can formulate many auxiliary labels that we can generate for a single image and so there's much more information in this self-supervised part than there is in in supervised learning and reinforcement learning now before we move on i want to give also some credits this slide deck has been uh heavily inspired and uses reuses some of the slides from yan li kun and also the stanford cs 231 class and these are excellent resources which i can highly recommend you to have a look at so i put the links here as well as links to a blog post from yan li kun and isha mishra on self supervised learning the dark matter of intelligence as it is coined by jan liken and also another blog here on self-supervised representation learning that i found quite useful you |
Computer_Vision_Andreas_Geiger | Computer_Vision_Lecture_11_Introduction_Organization.txt | welcome to computer vision a lecture at the University of tubingen my name is Andreas Geiger and I am excited to be your lecturer for this course what is computer vision well simply speaking computer vision is the attempt to replicate the phenomenal perceptual capabilities of humans in a machine in other words we're concerned with converting light into meaning for example given a 2d picture we want to detect objects or segment objects in that picture or given a collection of images we want to reconstruct the 3D structure of the Worlds by just processing and analyzing 2D projections of the 3D World that we live in to give you a little teaser about this I want to show you this video here that is produced by the work Nerf in the wild neural Radiance fields for unconstrained photo collections which is exactly about reconstructing the world from an unstructured collection of 2D images in this work we present Nerf in the wild an extension of neural Radiance fields or Nerf that can be used on the sort of unstructured and uncontrolled photo collections you might find on the internet our system takes as input and unconstrained photo collection of some scene in this case the Brandenburg gate in Berlin and produces as output novel images of that scene where the camera can be moved and also the appearance of the scene can be changed let's start off by just looking at some view synthesis results from our system for the six scenes from the phototourism data set that we used in all of our experiments you can see that we're able to produce high quality renderings of Novel views of these scenes using only unstructured image collections as input isn't that amazing let's understand together how this works in the context of this course this class is taught as a so-called flipped classroom what does that mean well instead of me presenting the material to you during the physical presence time will provide to you lecture videos that you watch beforehand before coming into this so-called life sessions where we will then have a lot of time to discuss interesting or difficult materials of that particular lecture so we provide lecture videos on YouTube to you before the actual live session and your task is to watch these videos and take down to node questions that you have and bring those into the live session we also encourage you to form study groups with your peers to discuss the content the materials beforehand and to um motivates you a little bit we also provide a lecture quiz that serves two purposes first of all it helps you to self-assess yourself and also you can gain a little bonus for the final exam but also it helps us to understand what the difficult topics are so that we can focus on those mostly in the time that we have during the live sessions at the same time we continuously provide to you assignments in the form of exercises that you work on and if you have questions for the exercises or questions for the videos you can post them in the in the chat that we offer but for the exercises in particular we also offer a online weekly online Zoom exercise help desk that you can join and we highly highly encourage you to do so and also we provide a quiz for the exercise again for self-assessment self-motivation and for us to understand which topics are difficult so during a typical week you watch the videos you work on the exercises you interact with us and the other students of this class through the chat or your 
study group as well as the exercise help desk you complete the quizzes and then we all come together in a lecture hall and discuss the topic and the difficult topics again and that could be in the form of questions that I might ask to you or questions that you bring into this forum and ask to me or maybe we do some little tasks together that's all I wanted to say for now I'm very much looking forward to meeting you all in our first live session see you there |
Computer_Vision_Andreas_Geiger | Computer_Vision_Lecture_43_Stereo_Reconstruction_Siamese_Networks.txt | while in the previous unit we talked about very simple handcrafted similarity metrics this unit is about learning similarity metrics from data using the power of deep neural networks the motivation for this is that hand crafted features and similarity matrix do not take into account all relevant geometric and radiometric invariances or occlusion patterns and of course it would be beneficial if you could learn all of this from data rather than trying to handcraft the feature descriptors as also deep learning has revolutionized recognition for example where handcrafted sift descriptors were used before but now all of the features are actually learned so we want to exploit the same principle here but still staying within the block matching regime so far so we're using a similar algorithm for matching we just want to have a better similarity metric unfortunately world is too complex to specify this by hand that's why we need learning and the observation here done in this seminal paper by spontaneous is that the matching computation can actually be treated as an image classification problem the idea behind this very simple if i take a patch from the left image in a patch from the right image and i have the ground with disparity so this needs of course an annotated data set where we know the ground with disparity and it needs a sufficiently large annotated data set then we can we can label these patches based on the ground of disparity so for instance in this case this is the correct displacement so the patch appears similar this would be a good match and then here in this case this would be a bad match because the disparity is slightly off so we can assign this good match versus bad match labels and then turn this into a classification problem where we're trying for a particular left patch to classify all potential right patches in the right image and then the classifier should tell us which one is the correct one amongst all these hypotheses here's an overview of the method the method assumes a large annotated data set and one of the first large annotated disparity data sets was actually one of our data sets the kitty data set in 2012 and then in 2015 which provided each over 200 annotated images and much more raw images still that also have disparity ground true from which these models can be trained one has to say that the siamese theory networks that we talk about in this unit are still much more label efficient and data efficient because they are just trying to learn features and similarity matrix compared to this end-to-end stereo estimation techniques that we're going to discuss in the last unit but still they require quite a bit of data for training now given that we have trained such a patch classifier if you will we can now calculate features for each of the two images using this learned model and we can correlate these features between both images using a simple dot product or a more sophisticated multi-layer perceptron to compare these patches and then we can find the maximum for every pixel the simple winner takes all strategy that we have discussed in the previous unit or we can what's also been done in this paper to get slightly better performance run a global optimization algorithm that incorporates some smoothness assumptions about the problem and that's something that we're going to discuss in the next unit so here in this unit we're still focusing on the block matching 
setup in the original paper two architectures for this matching problem have been proposed one which i call learned similarity and the other i'll cosine similarity and the way they differ is that for this first architecture what is happening is that we take a patch in the left image in a patch in the right image and run convolutions to compute features but then we concatenate these features and run another fully connected multi-layer perceptron that outputs the similarity score for these two patches it's still a siamese network it's called siamese because we have these two branches here which are the same they share the parameters so the same convolution convolutional neural network is applied to the left input patch and to the right input patch but then there is this um expensive part here while being more expressive potentially than the simpler solution here on the right um it's very slow to compute because now for every pixel we have w times h which is window image bit times image height pixels in the reference image times all possible disparity hypotheses so there's a very large number here it's maybe 300 000 times 100 for vga image resolution which is rather small so a very large number of times that we actually need to concatenate these two features and then run an mlp on top so this is very slow because we need to do a lot of mlp evaluations while these two parts are rather fast so a simpler method is to remove this mlp here altogether and shift all the burden of this similarity matrix computation to the features so what's happening here is we compute using a confidence the features and we can do this at inference time then of course using you know reusing computation as in normal cnns we can do this across the entire images so it's very fast in a few milliseconds you get the feature for both the left and the right images for all the pixels and then we normalize across the channel dimension for all the pixels and this is the feature and then we compute a simple dot product between these two and um this dot product is done directly the similarity score that's called the cosine similarity so these two operations with this in the siamese branch here with the shared weights at inference time are really fast because we can run them on the entire images and then this dot product which is the thing that we have to do w times h times d times is of course much faster than running an mlp and so the entire evaluation here runs like two orders of magnitude faster than for the learn similarity model and interestingly it doesn't lead to a significant drop in performance so performance is roughly on par between these two and this is also an architecture that has been used by many follow-up works because it's so fast now how does training work in these models well the training set is composed of patch triplets and it uses a special loss function that we're going to talk about a patch triplet is composed of a patch from the left image that is the reference for which we want to compute the correspondence and then we have a patch in the right image that is a negative that's an incorrect patch then we have a patch that's the positive example that's the correct patch that's exactly displaced by the disparity by the ground roof disparity and this x this bold x here are just two dimensional coordinates these are the image coordinates in the left and right image now the question is of course how to choose the negative examples and how to choose the positive examples and what this paper does is it does hard 
negative mining and the reason for this is that if we would just choose negatives everywhere else than the positive patches then this would lead to a very simple classification problem because most patches actually look very dissimilar from the correct patch but what's happening here is what we're trying to do here is we're trying to find negative examples that are actually quite similar but still not correct to the correct example and the way this is done is by choosing a negative patch or the coordinate for the center of the negative patch by taking well the correct patch location which is x left minus the disparity plus an offset o of course at the sk at the same scan line we're in the rectified setup where this offset o neck is drawn from a uniform distribution with some thresholds n low negative low and negative high in the negative range and in the positive range and then we take positive examples by also like moving to the correct location xl minus d and then taking offsets that are drawn from a smaller range closer to the correct location from minus positive high to positive high so here d denotes the true disparity for a pixel that's provided as the ground truth and typically i think that's also what's done in this paper the hyper parameters for this hard negative mining are that the positive patch is from a range minus one to plus one and then the negative are assembled from ranges three to six so they are quite close to the positive example but they're still far enough away such that the classifier has a chance to classify them correctly so here's an example here let's assume this is the correct location here the center one where we have the reference location minus the disparity and then we to the center location we add this offset o and that depends that offset is chosen from these ranges that i mentioned before so for instance the positive examples are chosen very very close to this correct location and the negative examples are chosen further away but not too far away so from three to six let's say minus three to minus six and plus three to plus six pixels but not from here let's say and this leads to um the strategy leads to patches in this particular case here that look like this so here we have the correct location here we have it shifted by one pixel and here we have shifted by three or four pixels into the left and right direction and this is then is are then the hard examples that the classifier has to distinguish from these examples and if the classifier is able to distinguish these from those then it's also able to distinguish easier ones from the correct ones easier negative ones from the correct ones now this is about the triplet creation now how do we use these triplets to train this classifier and i'm saying classify it's actually not exactly correct because we're using a regressor so this this model here is not outputting a classification score but it's outputting a similarity score how similar are two patches and and the same for this more complex architecture here so what we do here in order to learn this model to predict the correct similarity scores we use a hinge loss and that hinge loss is defined as such it's the maximum of 0 and a margin m that's a tunable hyper parameter typically in the paper i think 0.2 is used plus s minus minus s plus where s minus is the score of the network for the negative example so it's a score when inputting the reference patch in the left image and then the right image the negative example and s plus is the score of the 
network for the positive example so inputting the reference patch and the positive example and of course we want to make sure that um uh the loss um is uh well that the the score of the positive example is higher than the score of the negative example now this loss is zero when the similarity of the positive example is greater than the similarity of the negative example by at least this margin m and having this margin m prevents separation or having this hinge loss prevents separation of positive and negative features that are already well separated that's why we have this hinge loss and the margin and it gives the model the capacity to focus on the hard cases because things that are already well separated where as minus is already much smaller than s plus we don't need to separate more so we should disable the gradients for them during training so here's an example of this what i've illustrated here is the loss of in the let's look at the left case here of uh the loss over or off as or with respect to the value as minus you can see this hinge loss curves here for different fixed values with different colors of s plus so let's consider for instance s plus equals zero in this case the hinge loss is linear if s minus is bigger than minus 0.2 and it's zero if it's already smaller than minus 0.2 which means that if well if if the if s minus is bigger than 0 0 then of course we want to decrease it so we're in this linear part here but we also want to decrease it if it's if it's already smaller than s plus but it's only smaller by let's say 0.1 here so if we are in this triangle area here but if it's already smaller by 0.2 than 0 then we want to ignore it and we want to save capacity of the model for other harder cases that are harder to distinguish and the same holds true for these other curves so here we have s plus equals 0.4 in this case we introduce gradients for training until s minus is 0.2 or smaller and then we ignore this example during training and the same this is vice versa if so here we have the curves for s plus let's say if s minus is equal to zero then we want to decrease or we want to increase as plus as long as we are still smaller than 0.2 but if we are already at 0.2 or higher then we are 0.2 higher than 0 and so we can ignore this because they are already separated well enough and we can focus on other examples what does that look like here the winner takes all results we have the left input image from a kitty example from the original kt 2012 data set here is the result of the siamese network compared to a standard block matching result and you can see that this is a much less noisy and it's a better result than this block matching result so learning actually improves the quality of the similarity metric now if we run a global optimization and that's something we're going to talk about in the next unit on in addition to this cost computation we get even better disparity maps that you can see here and we do this for the left image and for the right image and then we can do a consistency check where outliers are removed and here colored based on their occlusion relationships you can see that things that are half occluded are in green here and we obtain a very nice disparity maps the original version of this algorithm was implemented on cuda with lua torch 7 and run on a titan x training required about 5 hours they trained it for 45 million with 45 million training examples these little patches remember that training only requires patches not entire images that's why it's 
rather data efficient and they trained it for 16 epochs using sgd and then inference for a single pair of images takes about six seconds for the simple architecture and 100 seconds for this complex architecture where an mlp has to be evaluated for each disparity hypothesis |
Computer_Vision_Andreas_Geiger | Computer_Vision_Lecture_112_SelfSupervised_Learning_Taskspecific_Models.txt | task specific models these are self-supervised models that are specific to a particular target task downstream task as opposed to the models that we're going to consider in the later units that are trying to learn generic representations independent of any downstream task the first case that we're going to consider is the case of unsupervised learning of depth and ego motion the task is as follow the input at training time is just monocular videos that we can just download from the internet or record using a camera that's attached to a vehicle in this case here for example and what we want to do is we want to train two neural networks the green one and the red one the green one is a depth neural network a convolutional neural network that takes a single image as input and produces from that single image a then step map that's a so called single image depth prediction task and we have seen a model for this task already but this model was supervised it was trained in a supervised fashion here we assume the ground truth depth map is not known but we want to train these parameters of this green model without access to such information which is of course much more challenging and it might seem a little bit magical at first glance even how this is ever possible but we're gonna see that it's actually not not quite as hard as it might look like in the first place and the second network that we're going to train and that's important actually to learn this task you can only do these two things jointly you can't do them without with only either of them the second network that we're going to train is this pose cnn in red that takes two adjacent frames and predicts the relative camera motion the rotation 3d rotation matrix and the 3d translation vector 6 degrees of freedom and we want to train the cnns to jointly predict the depth and the relative pose from 2 or 3 adjacent video frames now how are we gonna learn these networks without any supervision what we're gonna do is we're gonna get inspired by the way classical optical flow for example techniques were optimized and we're going to use the photo consistency assumption that these models have been using for optimizing the flow field for example to now not optimize a per-pixel optical flow but to instead optimize the parameters of the green and the red network this is illustrated here on the right so we're gonna try to create a photo consistency loss by warping the source fuse so here we consider two free adjacent views this is what has been considered in this this paper here cvpr 2017 because it makes training a little bit more stable there's more information if you do it for the previous and the next frame so we have the target frame here this is the frame at time t and then we have the previous frame at time t minus one and the next frame at time t plus one and the frame at time t is called the target view and the frames at time t minus 1 and t plus 1 are called the source views and what we're going to try to do is we're going to try to learn this green and the red network parameters such that when they make a prediction the green one predicts a depth map and the red one predicts the relative motion between frame t and t minus one and between frame t and t plus one this rd six degree of freedom rigid motion transformation here called t this is r and t when the network predicts these two networks predict these two quantities we can take 
any point in the target view and because we know the depth map we can of course project it to 3d and because we know the relative pose between the two frames we can project that point onto the previous frame and onto the next frame so we know the correspondence from frame t to frame t minus one and t plus one which is implicit through the implicitly given through the specification of a depth map and a relative pose assuming the scene to be static the scene is static then we can take any point on that static scene and we know the 3d location because we have the depth map and then we can just translate and rotate it according to how the camera has moved and then this point is a 3d point in the camera coordinate system of the other frame and then we can project it back using the calibration matrix into that image plane and this is where the point the pixel is then found the corresponding pixel because we can do this we can take any pixel in a target view and project it into the previous and the next frame we can take we can warp effectively the source view 1 and the source view 2 into the target image and then in the target image we can compare the pixels we can compare the pixels of the target view to the warped source view 1 and to the warped source view 2. and only if assuming lambertian scene and photo consistency and aesthetic scene just ego motion just camera motion so making all these assumptions what should happen is that if the predicted depth and the predicted motion is correct then the same scene point is observed at the same pixel in other words if i warp this frame here to this frame and i take the difference the difference should be zero or if i take this source view too this is the next frame and warp it into the reference frame and take the difference to the reference image the difference between the rgb pixels should be zero because i have warped the source fuse according to the predicted depth and the predicted pose however if the predicted pose and the predicted depth are incorrect then there will be a discrepancy and i can use that discrepancy that photo inconsistency in order to back propagate gradients to the parameters of the green and the red network and update these parameters so effectively we want to try to minimize the photo consistency for a lot of frames of monocular sequences and this doesn't require any ground truth pose or any ground with depth we just minimize photo consistency here is the process illustrated again this is a process for the warping assume we have a pixel in frame t here the red one then based on the depth that is predicted at that pixel we can project that pixel into 3d and then based on the rotation and translation between this and the source frame we can project that pixel into the source frame and then we can buy linear interpolator means we can find a linearly weighted combination of because this is this pixel is not going to fall into a discrete pixel grid location is falling necessarily in between so because it's producing some some real value not a discrete pixel value so we are bilinear interpolating the color values and then we can take these color values and insert them at that location and this is what we call the warped image so we take a bi-linearly interpolate interpolation step here and that bi-linear interpolation step is also differentiable so we can back propagate gradients through this through the entire photo consistency loss all the way to the parameters of the depth and the pose network of the green and the red network 
in the previous slide here's the formula for the projection process we have seen such a formula before in the case of homographies here in this case we use the depth of course the predicted depth so the depth and r and t this is the pose depend on the parameters of the network and on the input images it's not shown here as a dependency just for clarity but this is the pose r and t and d is the depth for that pixel here and this is a homogeneous coordinate so we have the augmented vector we multiplied with the inverse of the intrinsic the calibration matrix of the camera multiply with the depth then we multiply both r and add t and multiply with the calibration matrix in the end and this gives us the projected point and there's actually a bracket missing there should be a bracket here and a bracket here as well yeah as mentioned we use bilinear interpolation to warp the source image to the target view and we call that then i had s this is the warped source image into the target view and then we measure we compute the consistency loss between the targets and the warped source images just by comparing rgb value differences and we do this twice we do this for the first source view and for the second source view we have a photo consistency loss that we can just average this is an illustration of the architecture that's used in this particular model this is a simple unit dyspnet architecture here with a multi-scale prediction loss the depth is predicted at multiple scales and then also the image is warped at multiple scales to improve the gradient flow and improve optimization it's a very difficult problem of course um it suffers uh from all the problems that classical um stereo methods uh and optical flow methods suffer and and in this case we even need to train the parameters of a neural network which are many so we need to make sure there is a proper gradient flow happening and here's the pose network here the input is two images and then after the encoder the pose is produced like in postnet and then there's another decoder here that produces an explainability network that can re-weight the observations um or re-weight the measurements in the photo consistency term but we're not going to go into detail about this here the final objective for optimization includes this already mentioned photo consistency loss but it also includes a smoothness and an explainability loss for almost all of these methods it's crucial to have a smoothness loss like we had to introduce smoothness constraints for example for stereo estimation or optical flow estimation otherwise the problem is too ill post and the parameter that controls the strength of the photo consistency loss with respect to the smoothness loss is a critical one and has to be tuned this hyper parameter but the smoothness loss is pretty standard it's just a smoothness loss on the disparity map we want the disparity to be smooth otherwise we we fall into a bad local minima during optimization what do the results look like here are the results surprisingly and these are results from 2007 today these type of methods produce much better results you will see some some improved results in a second but in 2017 it was surprising um to the community that the performance of this self-supervised method was nearly on par with depth or post-supervised methods here are five examples you can see the input image and the depth maps that have been produced the ground truth this is a method that has been fully supervised in terms of depth this is a method that has 
been supervised in terms of pose the pose was given but the depth was self-supervised and this is the completely unsupervised method that estimates both depth and pose and he has to estimate both otherwise the mapping is ill-defined and you can see that this model produces a sensible results we can see the tree we can see the car you can see this car here it does make sense and it's quite smooth but it also doesn't contain a lot of noise it's a really good result in contrast for instance these depth supervised methods they also produce good results but because for example here the ground roof was produced using a lidar scanner it wasn't available everywhere they can't predict everywhere if you extrapolate into this region you would get very bad results while the self-supervised method handles these regions easily because it has observed supervision it just wasn't labeled supervision it was self-supervised so this was quite amazing the performance was nearly on par with these supervised methods but this method assumes the static scene so it can fail in the presence of dynamic objects what can we do about that well another way of training a monocular depth estimation network is in a self-supervised fashion is to do that from stereo and the advantage of doing it from stereo is that first of all depending on the baseline you you can get sharper results but also because the theory images have been taken at the same time there's no problem with scene dynamics scene dynamics doesn't matter here can completely cope with seeing that with dynamic scenes doesn't doesn't get affected this is a paper from um gabe brostov's group um cbpr2017 the idea is basically the same as the one that i've just described so i'm going to go a bit faster just applied to not too consecutive images but to the stereo pair so here are three different models that they tried and the right one is the final one the left one is the naive model they call it a neve model the nev model takes the left input image runs it through a cnn that produces a disparity map but because it compares that to the target is the right image the disparity map has to be represented also in the right image coordinate frame why is that well because if i run this sampler this bilinear interpolation mechanism then i can only interpolate points in one direction i can only warp another image into the reference image for which i have the disparity map and so if i want to compare the left image if i want to warp the left image here i need to do that using a disparity map that's defined in the right image frame that's not what we want we don't want to predict the right image disparity map so what we can do is we can we can say well i take the left image and predict the left disperation map and then i warp the right image and i this is the warped right image and i compare that to the left image using a photo consistency loss that works but he suffers from artifacts as they found and what works even better is if you take the left image run it through a stereo network run it through a monocular depth estimation network that outputs not only the disparity map for the left frame or left image but also the disparity map for the right image it produces both left and right disparity maps and then we can take the respective other image we have the left and right rgb images and warp these and compare them to their counterparts and the advantage of doing this is that we can now apply a consistency constraint between the two disparity maps a left right consistency 
constraint we have talked about this in the context of stereo which helps eliminating outliers and also helps during training of this self-supervised technique here so here's a comparison here's uh the result with out left right consistency check and here's the result with left right consistency check and you can see that some of these artifacts at trees or here at the car have been removed in a follow-up work also from the same offers they looked into more general into self-supervised monocular dev estimation methods both supervised using monocular videos or also stereo videos and they introduced some improvements for example if we have three consecutive frames and we so let's assume we have these three frames and we look at this point here on the this is the surface point that we want to compare at the top left of this door then we see that in the first frame it's occluded so the appearance is very different in the second and the third frame in the reference frame and the next frame it is unoccluded which means we got a low photo consistency arrow while here we have a high photo consistency arrow between these two the baseline approach that i explained previously took the average of these two so basically says well despite the fact what this implies is that despite the fact that the correspondence has been correctly estimated it assigned a relatively high photo consistency loss meaning that the model was told that this is not a good match but it actually was a good match it was just occluded in one of these frames but however if we have three consecutive frames then if a feature is visible in the central frame it has to be visible in at least the pre in either the previous frame or the next frame or in both frames so if we take instead of the average we take the minimum then we ignore this occlusion effectively and we're just taking the best of like this correspondence match and this correspondence comparison and this leads of course to a small for the consistent or small appearance loss overall and this is one of the contributions that they have the per-pixel minimum for the consistency loss to have better handle occlusions then they have another improvement or they show that if we compute all the losses not at the multiscale resolutions but warping them backward back up scaling them up using also bilinear interpolation to the input resolution this can avoid some texture copy artifacts so they they use a multi-scale architecture but then for computing they lost the warp also low resolution depth maps back to the high resolution and then they compare the high resolution images and then they have another improvement which is about auto masking so if they detect that the camera stands still which for instance in the kitty data set happens from time to time and then the problem is becoming completely ill-posed if the camera is still then any depth map is valid right so estimation becomes completely ill person that harms if such cases occur frequently in the data set that harms training these networks so it's good to mask them out and what they do is they basically just compare like how pixels overall change in the images between adjacent frames and and this way filter individual pixels or entire images from the computation and they get quite amazing results i believe so here is the previous method which was still quite blurry despite producing something reasonable but then this so-called improved monodev2 version leads to significantly sharper depth boundaries and more details and i'll link 
i will also link the video here and briefly play it. they compare different training types, but at inference this is just a monocular video and a per-frame computation of depth from a single image. you can see how accurate that is, given that the task is already very challenging, and the supervision was done using pure self-supervision, no depth labels involved. you can see some errors here and there, the boundaries are a little bit over-smooth, and you can see errors in particular where you expect them, at reflections etc., but you can also see how accurately the ground and the objects in general have been recovered. okay, if you want to see more you can click this link. finally i want to show you that this type of technique of course also works for unsupervised learning of optical flow — we can use exactly the same ideas. this is a method from Stefan Roth's group from 2018, and what they do is use so-called bi-directional training: they use a FlowNetC here, the gray one, and they use the same FlowNetC twice, which is why weight sharing is indicated here. so we use the same network, in the upper branch with i1 and i2 given as input and in the lower branch with i2 and i1 — we flip the two images — and effectively we produce a forward flow and a backward flow, and we want to train this single network, just applied twice, such that both the forward flow and the backward flow are accurate. this is mostly it; it is all illustrated in this very nice plot here. it of course also has a smoothness loss, otherwise this is really ill-posed and does not work. here we have this network applied with the inputs swapped to yield the forward and the backward flow, and then using that flow they simply warp the first or the second image, respectively, to yield the warped first and the warped second image. there is also a forward-backward consistency check here in order to retrieve occluded areas, which can then be masked out from the data loss, and there is a consistency loss on which these occlusions also have an influence. so in total they have three losses: a smoothness loss, this data or photo consistency loss, and a consistency loss between the flow fields. like the left-right consistency in stereo, here they encourage a forward-backward consistency of the flow: if i take a pixel in the first frame, project it using the forward flow, then take the flow at that pixel in the second frame and project it backwards, i should end up at the same pixel. these are some results: from left to right the columns are a supervised method, an unsupervised baseline and their approach, and on top is the color-coded flow field, while in the second row we see the error map, where white indicates large error and black indicates small error — it is very difficult to see the errors in the flow fields themselves, it is much easier here. you can see how this method produces quite reliable flow estimates. you can also play the same game for monocular scene flow estimation, which is a difficult task: from just a monocular video sequence you want to predict the depth at each frame but also the 3d motion of each of these 3d points, so it is a combination of monocular depth and optical flow, and you can use the same ideas of self-supervision here, using now stereo videos — videos where we have a stereo camera — for supervision,
but then at inference time only a monocular camera is used for making the prediction; the stereo videos are just used for self-supervision |
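to make the forward-backward consistency check described above concrete, here is a small PyTorch-style sketch of how an occlusion mask can be derived from a forward and a backward flow field; the warping helper, the threshold constants and the function names are illustrative assumptions, not the exact formulation used in the paper.

```python
import torch
import torch.nn.functional as F

def warp(tensor, flow):
    """backward-warp `tensor` (B, C, H, W) with `flow` (B, 2, H, W) given in pixels."""
    b, _, h, w = tensor.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack([xs, ys], dim=0).float().to(tensor.device)   # (2, H, W)
    coords = grid.unsqueeze(0) + flow                               # sampling positions
    # normalize coordinates to [-1, 1] as expected by grid_sample
    coords_x = 2.0 * coords[:, 0] / (w - 1) - 1.0
    coords_y = 2.0 * coords[:, 1] / (h - 1) - 1.0
    grid_norm = torch.stack([coords_x, coords_y], dim=-1)           # (B, H, W, 2)
    return F.grid_sample(tensor, grid_norm, align_corners=True)

def occlusion_mask(flow_fw, flow_bw, alpha=0.01, beta=0.5):
    """a pixel is treated as non-occluded if the forward flow and the backward flow
    sampled at the forward-displaced position roughly cancel each other."""
    flow_bw_warped = warp(flow_bw, flow_fw)        # backward flow at x + flow_fw(x)
    diff = flow_fw + flow_bw_warped                # should be close to zero if consistent
    mag = (flow_fw ** 2).sum(1) + (flow_bw_warped ** 2).sum(1)
    thresh = alpha * mag + beta
    return ((diff ** 2).sum(1) < thresh).float()   # 1 = keep pixel in the data loss
```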
Computer_Vision_Andreas_Geiger | Computer_Vision_Lecture_61_Applications_of_Graphical_Models_Stereo_Reconstruction.txt | welcome to lecture number six of this computer vision class. in the last lecture, motivated by the ambiguities in the stereo matching problem, we introduced graphical models and a particular inference algorithm called belief propagation that allows us to compute maximum a posteriori solutions, or MAP solutions, and also marginals. in this lecture we are going to discuss some applications of graphical models to computer vision problems. in the first unit we discuss the stereo reconstruction problem that we already know, using Markov random fields, in particular performing maximum a posteriori inference in Markov random fields with discrete variables, so using the max-product algorithm that we learned about in the last lecture. in unit number 2 we discuss applications in terms of dense multi-view stereo reconstruction, which is stereo reconstruction not from two views but from more than two views, in the context of the sum-product algorithm for inferring marginals and probabilistic results. and in the final unit we discuss the optical flow estimation problem, which like the binocular stereo estimation problem is an estimation problem in the 2d image plane, but because the optical flow problem is typically formulated using continuous variables, we are going to use the gradient descent algorithm to do maximum a posteriori inference in the corresponding graphical model. let's start with stereo reconstruction. this is a slide that i have already shown twice, but let's remember again what the difficulties in local stereo matching or block matching are: there are a lot of ambiguities arising due to, for example, textured surfaces that match many textured surfaces in the other image, repetitions in the image, occlusions, as well as non-Lambertian surfaces. in order to overcome these ambiguities we would like to incorporate some prior knowledge about real-world statistics, and we have seen that such statistics can be derived for example from datasets such as the Brown range image dataset, which provides a large number of depth maps from which we can compute statistics like this. we have concluded from these statistics that depth varies slowly except at object discontinuities, which however are sparse: in most areas of the depth map we have a very smooth transition, very small changes in depth, but at the object boundaries we have a large transition, we jump over a large range of disparities in order to go to the next pixel. this is reflected in these statistics here: we have a lot of probability mass centered around zero — this is the gradient of the range — but we also have some probability in the so-called tails of this distribution, away from zero. now we want to incorporate such statistics into the model, at least approximately, and we can do this using a Markov random field on a grid graph. here we see a four-connected grid-structured graph corresponding to a three-by-three pixel image; of course, if we apply this algorithm to a real image this will be a much larger graph, but the structure will look the same. in this graph every circle corresponds to a variable, so every circle corresponds to a pixel, and the variable is the disparity that we want to infer for that pixel. and then we have unary factors, these are the squares that are connected to only a single variable,
and these are the matching costs that we can compute using a block matching technique, for example, or a Siamese network. but then we also incorporate our prior knowledge about the smoothness of disparity maps into this Markov random field, and we do this by adding pairwise connections in this factor graph, pairwise potentials that connect neighboring sites. so this is a very simple graphical model: it can be formulated as a standard undirected Markov network or Markov random field, but it can also easily be formulated as a factor graph, as done here, with unary factors and pairwise factors, and this is reflected in this formula here. what we want to do is solve for the entire disparity map now, not just for the disparity at a single pixel. so we model a distribution over the space of all disparity maps — capital bold D is a disparity map, which in the case of a VGA image is a 640 by 480 dimensional matrix — and we want to model the probability distribution over the space of all these matrices. this probability is proportional to this factorization into unary factors and pairwise factors, where this tilde here denotes adjacent sites: for example, this pixel and this pixel are adjacent in the graph structure that we have defined. the unary factors are the data terms that are determined by our local matching cost and that depend only on a single variable, and the pairwise factors are the smoothness factors, the prior constraints about the smoothness of the disparity maps that we want to integrate into this problem, and they depend on two adjacent disparity variables or random variables. now we can convert this formulation into another representation, the so-called Gibbs energy. we have not changed anything here except that we have replaced the unary and pairwise factors by the corresponding negative log factors — in the previous lecture we used just the log factors, but here we are using the negative log factors, so that we can interpret this as an energy — and we want to minimize an energy, which is equivalent to maximizing the probability over this disparity map. so through the Gibbs distribution we have an equivalence between maximizing a distribution and minimizing an energy; this energy is an energy over the disparity map as well. psi data is the negative log of f data and psi smooth is the negative log of f smooth, and in order for these to be the same we of course have to add the minus here, which means that minimizing this energy corresponds to maximizing the probability of D. this is the equation from the previous slide, just to recap: we have i tilde j denoting neighboring pixels on a four-connected grid, and we have unary terms, which are the matching costs — by formulating this as an energy we can directly take, for instance, the sum of squared differences here as the matching cost, where lower is better — and we have pairwise terms that encourage smoothness between adjacent pixels. for example, we have a very simple term here that simply says: if the disparities of adjacent sites or pixels are not the same, then i add a penalty, and if they are the same, i add zero, no penalty. but we can also model something like the truncated L1 penalty, which is the L1 distance truncated at such a truncation threshold — that comes already much closer to the distributions that we have seen on the previous slide — and that basically says: up to a certain threshold, i want to penalize two adjacent sites if their disparities differ,
and i want to penalize proportionally to the difference in the disparity, and of course lower is better. the lowest energy value in terms of the smoothness or pairwise terms would be obtained by just having a constant disparity map where all the d's are the same, but of course in that case the matching cost would be high, because unless this were really a scene where we are looking at a plane, where this actually holds, we would violate the matching cost. so by solving this MRF we are trading off the unary term and the pairwise term, and this tradeoff is controlled by this parameter lambda that we have introduced, which is the regularization strength or weight: it controls the strength of the smoothness prior. if we set lambda to zero then we are only optimizing this term here and we fall back to the standard block matching algorithm, the winner-takes-all solution; if we increase this lambda then we obtain smoother and smoother solutions — but of course if we increase it too much then we obtain solutions that are too smooth, because we are weighing this prior knowledge about smoothness too highly. we can now solve this MRF using, for example, the max-product belief propagation algorithm that we discussed in the last lecture in order to obtain a maximum a posteriori solution, which corresponds to the most likely disparity map under this model — at least approximately, since it is an approximate algorithm. there are other algorithms that have been used, like graph cuts, which we do not have time to go into detail here, but which also provide a solution to this MAP problem, which also minimize this energy. now here is a result for the scene that we are already familiar with, the Middlebury cones scene, and you can see how the inference results using such an MRF are much improved with respect to just a local block matching algorithm. that is also the reason why, for example, in the paper of Zbontar and LeCun on Siamese stereo matching, the local Siamese matching costs that are computed by a deep network are complemented with inference in a Markov random field, in order to overcome ambiguities and to further improve on the estimation result of this local matching cost winner-takes-all solution. now this was a very simple Markov random field or factor graph; we can also think about more complicated graphical models that incorporate even more constraints about the world, that integrate non-local prior knowledge, and here is an example of this. this is the energy formulation from before, just using slightly different notation, where we have an energy over the disparity maps — this is the Gibbs energy — that is composed of an appearance or data term and a smoothness term. if we do this then we get a result like this for that particular scene, and the reason is that despite having introduced the smoothness regularizer, due to the very strong violation of the matching assumption this very local pairwise term cannot deal with such strong violations of our assumptions, like in the case of these reflections here, and we can see this scattered result. so we need stronger assumptions, we need to integrate stronger assumptions into this problem, and what we did in this paper here at CVPR 2015 is to not only think about the scene in terms of per-pixel depth or disparities but also in terms of the objects that are present in the scene, because for street scenes like this there are a lot of
cars, for example, and we know roughly what the shape of cars looks like, so we can try to infer not only the disparity map but also the objects, here o, jointly. there are two additional terms added here: a semantics term that tries to make the semantics of these inferred objects similar to the semantics that have been inferred from the image, and a consistency term that tries to make the inferred 3d objects consistent with the disparity map. now, by integrating this object-level prior knowledge we can further regularize the problem and get a solution like this. here you can see it is not perfect, particularly at the boundaries, but these gross outliers at the reflections have now been accounted for by this object-level knowledge. here are some results: on the left you can see the result of the baseline method, the Siamese stereo matching network from Zbontar et al., and on the right you can see the result when regularizing at the object level — and all of these results that you can see here, these colored point clouds, are inferred from just two images. okay, in summary: block matching easily suffers from the matching ambiguities that we have talked about before, and choosing the window size is really problematic due to the trade-off that we discussed in lecture number four. what we can do is integrate smoothness constraints that resolve some of these ambiguities and allow for choosing smaller windows — by integrating the smoothness constraints using these graphical models we can choose smaller windows and instead increase the smoothness parameter, which reduces the bleeding artifacts — and this can be formulated as MAP inference in a discrete MRF. all the results that i have shown on the previous slides have been inferred using the max-product algorithm, so this MAP solution can be obtained using the max-product belief propagation algorithm, or graph cuts, or any other algorithm out of our toolbox of inference algorithms for discrete graphical models. and we have seen that integrating recognition cues, like detecting objects and trying to infer objects jointly with the disparity map, can further regularize the problem and help overcome very strong ambiguities |
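to make the energy discussed above concrete, here is a small NumPy sketch of the Gibbs energy of such a grid MRF with SSD unary terms and a truncated-L1 smoothness term; the function names, the SSD data term and the 4-connected neighborhood are illustrative choices consistent with the description, not the exact implementation behind the results shown.

```python
import numpy as np

def unary_cost(left, right, d, y, x, win=2):
    """SSD matching cost for disparity d at pixel (y, x) over a (2*win+1)^2 window."""
    h, w = left.shape
    y0, y1 = max(y - win, 0), min(y + win + 1, h)
    x0, x1 = max(x - win, d), min(x + win + 1, w)     # keep x - d inside the image
    patch_l = left[y0:y1, x0:x1]
    patch_r = right[y0:y1, x0 - d:x1 - d]
    return np.sum((patch_l - patch_r) ** 2)

def gibbs_energy(disp, left, right, lam=10.0, tau=2.0):
    """E(D) = sum_i psi_data(d_i) + lam * sum_{i~j} min(|d_i - d_j|, tau)."""
    h, w = disp.shape
    data = sum(unary_cost(left, right, int(disp[y, x]), y, x)
               for y in range(h) for x in range(w))
    # truncated L1 smoothness over the 4-connected grid (right and down neighbors)
    dx = np.minimum(np.abs(disp[:, 1:] - disp[:, :-1]), tau).sum()
    dy = np.minimum(np.abs(disp[1:, :] - disp[:-1, :]), tau).sum()
    return data + lam * (dx + dy)
```

a MAP solver such as max-product belief propagation or graph cuts would then search for the disparity map minimizing this energy; setting lam to zero recovers the winner-takes-all block matching solution.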
Computer_Vision_Andreas_Geiger | Computer_Vision_Lecture_114_SelfSupervised_Learning_Contrastive_Learning.txt | the last unit is about contrastive learning what is the problem with the pre-text tasks that we have discussed so far well in this pre-text task for pre-training the large chunk of parameters in the neural network we're considering a task that at first glance is completely decoupled from from the downstream task for example here in this case we're trying to solve a jigsaw puzzle for pre-training the parameters of the neural network and then we're attaching small linear or mlp readout hats to this network and training the parameters of these readout layers using fewer supervised examples on the actual downstream task which in this case is image classification but we don't know that this jigsaw puzzle solution task is actually related to the downstream image classification task how should we know we don't know and as i mentioned there is no good theory that we can use here it's all empirical um so what we do in other words is we we hope really hard that the pre-training task and the transfer task are aligned and try to find better pre-train uh pre-training tasks that are better aligned with the downstream task and that's what contrastive learning is about so here is another illustration of the problem of this pre-train pretext feature features learned from this pre-text task for example in the case of a jigsaw puzzle what happens if we attach the readout head at different layers of that network is illustrated here in terms of in this case map of the downstream task performance higher is better the task doesn't really matter here for this purpose of this illustration what we observe however is that the pretext feature performance saturates so if we go to later layers more semantic layers layers that should actually be better for our downstream classification task and we attach our linear readout there the performance separates or even drops at some point what is the reason for this well the reason is that the pretext task the jigsaw puzzle task is too different from the image classification task and that the last layers then therefore have specialized for their pretext task the last layers are very specific to solving jigsaw puzzles are very specific to solve context prediction or to rotation estimation but they don't contain the semantic knowledge required for solving the image classification task and so performance degrades the question is now can we find a more general pre-text task what is desirable well the first thing that's desirable is of course the pre-trained feature should somehow represent how images relate to each other so for example images even if slightly altered of the same object of a cat should be in the same feature in the same area of the feature space and images of different animals should live in different areas of the feature space but at the same time also the feature should be invariant to nuisance factors such as the specific location of the object or the lighting conditions or the color etc so all of these versions and you can recognize some of the pretext tasks that we talked about before all of these versions here should be similar to this one here but what we are now going to do in the context of contrastive learning is not to try to predict some information that has been lost but rather to try to make those similar directly and so we can use any augmentation that we want these augmentations that we generate from a reference image this is the reference 
image here are called views so this is one view this is another view there's another view in this community they are called views just to introduce the terminology already in short we want to build a model where for a particular reference image different views of that reference image are close in feature space but any view of any other object is further away and that's at an intuitive level exactly what's happening how do we implement this this is the picture from before you can see the reference here called x and you can see the positive examples which are different views alter alterations through data augmentation we've removed some part we've rotated it we've changed the color we've cropped it etc but they should all be similar and then we have x minus which is a negative that's from a different image or different region in an image that's very far away from the region of interest so we can call this a negative these are positives and these are negatives with respect to the reference and what we want to do now mathematically is given a particular chosen scoring function s we want to learn an encoder network f that yields high score for positive pairs x as x plus and low score for negative pairs x x minus or in other words we want the score of f of x and f of x plus to be larger than the score of f x and f x minus and we're going to formulate this as an optimization problem this is the de facto standard contrastive learning objective that we consider assume we have one reference x one positive and n minus one negative examples the positive example is called x plus and the n minus negative examples are called x minus j now consider the following multi-class cross-entropy loss function this is a standard multi-class cross-entropy loss function as it is used in imagenet classification except that now we do have the scoring function inside so loss is minus the expectation over the entire data set that is composed of each of these samples is one reference one positive and n minus one negative examples and we have many of those that we can generate we can draw from a data distribution so this expectation over the entire data set of the logarithm of this exponential of the scoring function of the reference to the features of the reference to the features of the positive example divided over the sum of the positive and all the negatives right you can see this this softmax here and this is commonly known as the info nce loss as has been coined by art from an art at all in 2018 and the interesting thing about this objective is that it's it's negative minus l is a lower bound on the mutual information between f of x and f x plus between the features of all the reference reference images or patches and all the corresponding positives and the detailed derivation of this is given in ord 2018 the paper is linked here at the bottom but we don't have time to go into detail here in this lecture the only thing that's important here is that the mutual information between these two features is bigger or equal than the logarithm of n minus l and the larger the negative sample size and the tighter this bond that's another crucial takeaway we need really a large and a large negative set for this to work but what we can see here from this equation is already the key idea that we're trying to follow the key idea namely is to maximize the mutual information between the features extracted from multiple views which forces the features to capture information about higher level factors this is the goal here we want to 
maximize we want to minimize the loss so we want to maximize the mutual information between multiple views of the reference image this is what this objective here tries to do and if you do that here's one particular method we don't suffer from the problem that i just mentioned we can also take features at later layers that are still well aligned with the downstream task in this case imagenet classification accuracy there's a couple of design choices for these contrastive methods that we can consider and there's also a couple of problems related to it for example the large amount of negatives that lead to very large memory requirements and so we're going to discuss these different design choices and problems in the following the first design choice is the scoring function we haven't talked about the scoring functions so far we've seen there's a feature function that produces a feature vector from an image or patch so it takes this image produces a feature it takes a view an augmentation of this image and produces another feature and then there's a scoring function and the most common choice for the scoring function is simply the cosine similarity which is the inner product between the features divided by their norm if you remember the lecture on stereo where we're also discussing siamese networks similar to here this is also siamese network because the features are computed with the same network from the inputs we're using the same score function the second design choice is how to choose the examples the positive and the negative examples in contrastive predictive coding these examples are taken from the same image in a way that related examples are chosen from nearby regions for example here the crown of the tree and unrelated negatives are chosen far away the more common scenario nowadays is so-called instance discrimination where we say well all the patches from the same image and the image has to depict not not an entire scene it doesn't work with scenes with multiple objects it has to be something like imagenet where there's a single object it's a very simple scene like this tree here but then if it's a single object i say that all of the patches that are related to each other or all the patches from the same image are related to each other because all of them are somehow showing a tree and the unrelated ones are take any random different image and take any random patch from that different image that's called instance discrimination because we're we're discriminating instances and the first design choice the third axis is augmentations there we have a big playground but now in contrast to these pretext tasks that we have to find before we can combine all the augmentations that we want and that's what's happening in practice and what's really important for making these methods work for example we can crop the image we can resize the image we can flip the image we can rotate the image or cut out a region from the image we can also drop some colors or jitter the colors we can add gaussian noise or gaussian blur or compute edges one of the top performing methods in this space currently is called sim clear for simple framework for contrastive learning it uses a cosine similarity score function where now we have c here as the arguments um which is the features from before but i'm taking the figure here from the paper and what you can see here is that sim clear takes the input reference image and produces two different views through randomly sampled augmentations and it runs the network f 
that we're interested in this is the network that we optimize for to produce a representation and on top of that it produ it runs a projection network um which we are not interested in and which is thrown away after training which produces a z um so this representation we're interested in but the c is only this this projection is only required for for better training and i do believe this is also required similar to the pretext task that we discussed earlier because the task itself is not directly related to the contrastive learning objective the downstream task so it helps to take earlier features in other words similar to before and another reason for having a projection network is that we can also project into higher dimensional spaces so we can play with with the dimensionality here and that also makes a difference so f is what we want g is an auxiliary network that we use during training but throw away afterwards and improves learning because more relevant information is preserved in age which is discarded later in c as it's closer to the contrastive learning objective here are some of the augmentations that are used you can see crop and resize color distortions rotations gaussian noise gaussian blur all the things that we discussed before and it turns out that actually these augmentations having a diverse set of powerful augmentations are really crucial for getting good performance with these methods this is the pseudo code of the method it's actually quite simple to understand so first we draw two augmented augmentation functions and then we generate a positive pair by sampling data this data augmentation function so we have the first one t of x k and then t prime of x k that produces these two views the first and the second view are the first and the second augmentation and then we run the network we run the representation and the projection network to get agent c for each of those and then we define here the info nce laws that we discussed before and we iterate through and use each of the two n samples as references and compute the average loss you can see we have the sum goes from k equal 1 to n but we have inside this loss we have 2k minus 1 and 2k so we have it twice because each of these pairs is serves as one serves as a reference and then the the other view is the positive while all the others are negatives so this is how the losses implemented and it works it works amazingly well so here we see results you can also compare it to some of the older methods like rotation and instance discrimination and it produces much better performance this is top one accuracy on imagenet compared to the supervised method you can see that if we increase the number of parameters this is a model that has is four times wider than this model we are on par with the supervised pre-trained model and in this setup here we train the feature encoder on imagenet on the entire training set using sim clear but without labels and then we freeze the feature encoder and train the linear classifier on top with label data what's happening here is that we train the feature encoder on imagenet again on the entire training set using self-supervision and then we fine-tune the encoder with only one percent or ten percent of the label data on imagenet so this is a much smaller data set now for fine-tuning and what can be observed is that in these cases we significantly outperform now the supervised baseline that doesn't that has only seen this one percent or 10 percent fraction of the supervised data samples and of 
course if we have just one percent then the performance gain is this is even larger than we have 10 percent of data for the supervised baseline however a large training batch size is crucial for simclear what we have here is for different number of training epochs we have the batch size from 256 which is already pretty large to 8129 and you can see that the top performance respectively is achieved at around 2000 or 4000 negatives and this is problematic remember we have a siamese network and we need to forward pass and backward pass through all the negatives for training in the forward pass we have to store the activations and the backward pass we compute the gradient so it's a huge memory requirement and that's why this type of model can't be can only be trained at google scale so it can be trained on your gpu not even the gpu node with eight gpus at home but it requires this distributed training on on tpus in this case for this imagenet experiments so this is not nice this makes it rather impractical as it requires these huge compute centers and to alleviate a problem a method called momentum contrast has been proposed which in spirit is similar to simclear but has has a couple of major innovations that reduce the memory requirements here's the model the idea is that we have now um so-called queries and keys so like they phrase it in terms of a database with queries and keys but the keys i mean these are really just positive um and this is a reference and these are negatives here um for which we want to comp com compute the contrastive loss um using a cosine similarity as before and here we have after the encoder we have the features after the encoder of the keys we have the key features now the the crucial thing about momentum contrast or moco is to keep a running queue or dictionary of keys for the negative samples you can think of this as a ring buffer of mini badges we push in here we have a very long ring buffer where we push in a new mini badge maybe that's 64 or 128 negatives and we remove in that ring buffer we remove the oldest of the 128 negatives and that's how we update the ring buffer with these keys and then we don't back propagate gradients to these uh negatives to these keys here but only to the query encoder not to the queue and therefore we don't have to store all the intermediate activations we just have to store the final results the features the keys and therefore the dictionary can be much larger than the mini batch size the problem with this is now that the dictionary becomes inconsistent as the encoder so if we would take this encoder as this encoder here this is what is updated so we can we could just clone it and use this encoder here if we would take this encoder as the encoder for the keys as well we would have a completely differently encoded mini badge inside the dictionary so each mini batch would have been encoded with a completely different encoder because that encoder varies relatively quickly over time because it gets updated through the parameters to alleviate this problem and to improve consistency of the keys in the queue the what was proposed in this paper was to use momentum momentum similar to momentum in stochastic gradient descent where now we don't just copy this encoder but we use a momentum encoder that's a linear combination of well the query encoder and the previous momentum encoder such that there is less fluctuation there's more consistency across the keys across the features inside that big queue that have been encoded using different 
encoders so we're smoothing it out and this is key to actually making it work and this is already results from um an improved version called moco version 2 where they have now used also stronger augmentation like in sim clear which hasn't been used in moco version 1 before and the non-linear projection head which both turned out to be crucial here as well but the key difference to sim clear is that now the same or even better performance can be obtained at a much smaller batch size so this can be now trained using well not your home gpu but at least using a regular eight gpu node um without like a google scale tpu cluster because the batch size is only 256 compared to 8 000 or 4000 for a sim clear at the same performance so this is now the last part of this unit all of the methods that i've shown so far in this unit were classical contrastive learning methods and there's many more just highlighted some of the most prominent ones but a very recently proposed method deviates a little bit from this paradigm and makes things even simpler it's called barlow twins and it's inspired by information theory as it tries to reduce redundancy between neurons it reduces redundancy between neurons not between samples it doesn't it doesn't look at distances of samples it just looks at distances of neurons that's why it's so simple and the key idea is that neurons should be invariant to data augmentations but independent of others in this example an image is projected uh or is augmented in two two different ways so these are two different views of that image and then pass through the network that we want to train to yield a representation imagine this to be a 64-dimensional feature vector ca and 64-dimensional feature vector cb maybe let's say it's a three-dimensional feature vector ca and a three-dimensional feature vector cb blue green and red are the free features and now what we want is that for these two different views this feature the red feature is similar because these are two different augmentations of the same object and the green feature is also similar to the green feature from the overview and the blue feature is also similar to the blue feature from the overview but at the same time we want to minimize redundancy we want the features themselves to be different from each other we want the blue feature to be different from the green one and from the red one how can we formulate this mathematically well the idea here is to simply compute the cross correlation matrix over the mini-batch we take the mini batch compute the empirical cross correlation and then encourage this to be the identity or to become the identity matrix so we have a loss that tries to make the off diagonal elements zero and the diagonal elements one and the intuition is of course that the diagonal elements are the neurons across the different augmentations which should be correlated the same neuron but on the off diagonal they should be not correlated to minimize redundancy very simple so here's just the correlation and this is the loss that's used the crucial difference to simclear and moco is that no negative samples are needed like in classical contrastive learning but the contrastive learning here happens at the neuron in the in neuron space and it's super simple to implement we we compute the represent we take two randomly augmented versions of x we compute the representations normalize them compute cross correlation the loss and back prop it's probably one of the simplest self-supervised learning methods that you could 
implement and it works it's a simple method that performs on par with state of the art if you compare it to for example sim clear here on imagenet or with a fine tuning experiment or just looking at linear readout classifiers it is also more mildly affected by batch size compared to sim clear at least so you can see here the red curve which is is a little bit affected by the batch size but at 256 or 512 you can still get reasonable result for this method reasonable results for this method they also found similar to the previous approaches that projection into a higher dimensional space well now in this case projection into a higher dimensional space but it's also projection only for the pre-training leads to better results so again here is a yellow auxiliary network projector network added for the correlation that is thrown away afterwards and only the cyan feature encoder is the end product the result of this method but having this projection helps and in particular what's interesting here is that you can increase the output dimensionality of this mlp this little mlp and performance keeps on increasing here's a comparison of um this method in red and boil and sim clear in black that's it for today we talked about many different methods and i want to briefly summarize creating labeled data is time consuming and expensive as we've seen in the first unit self-supervised methods have the potential to overcome this by learning from data alone without any labeled examples task specific models typically minimize some photo consistency measures for predicting optical flow or depth pre-text tasks have been introduced to in contrast learn more generic representations and these generic representations can then be fine-tuned to the target task like image classification or normal prediction however classical pretext tasks such as rotation estimation often do not well align with the target task and contrastive learning and redundancy reduction in contrast are better aligned and produce state-of-the-art results so they close the gap as we see right now to fully supervised imagenet pre-training that's it for today thanks for watching |
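as a concrete illustration of the contrastive objective discussed in this lecture, here is a minimal PyTorch sketch of an InfoNCE-style loss for one batch of references, their positives and a set of negatives, using cosine similarity as the score function; the temperature value and the batching scheme are illustrative assumptions and differ from the full SimCLR or MoCo implementations.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(query, positive, negatives, temperature=0.1):
    """query:     (B, D) embeddings f(x) of the references
       positive:  (B, D) embeddings f(x+) of the corresponding positive views
       negatives: (N, D) embeddings f(x-) of negative samples."""
    q = F.normalize(query, dim=1)          # cosine similarity = dot product
    k_pos = F.normalize(positive, dim=1)   # of L2-normalized features
    k_neg = F.normalize(negatives, dim=1)

    pos_logits = (q * k_pos).sum(dim=1, keepdim=True)     # (B, 1)
    neg_logits = q @ k_neg.t()                            # (B, N)
    logits = torch.cat([pos_logits, neg_logits], dim=1) / temperature

    # the "correct class" is always index 0 (the positive), so InfoNCE reduces
    # to a standard cross-entropy loss over 1 + N candidates
    labels = torch.zeros(logits.size(0), dtype=torch.long, device=logits.device)
    return F.cross_entropy(logits, labels)
```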
Computer_Vision_Andreas_Geiger | Computer_Vision_Lecture_82_ShapefromX_Photometric_Stereo.txt | we saw how these ambiguities arising in the shape from shading problem can be partially addressed at least using strong smoothness assumptions now in this unit we're gonna see that these ambiguities can also be addressed using simply more observations and that's called the photometric stereo problem instead of using more smoothness constraints what we do here is to assume that we have more observations per pixel instead of taking just one image we take k images of the object at least three images and we make sure that these images are all captured from the same viewpoint for example using a tripod and a static object but with a different known point light source each here's an example here we have taken four images from the same camera viewpoint but each time with a different location of the light source and from this with at least three images we can recover now the surface normal and in fact also the albedo um uniquely even if we just consider a single pixel so what we do is now we we can do a per-pixel estimation we don't even need smoothness constraints of course we can add smoothness constraints which will help in the case of noise or if we have just few observations but we can at least in principle do a per pixel estimate of the normal and the albedo or the material parameters if you just have a la version material a diffuse material as we're going to consider in this very basic setting here then the number of parameters that we need to determine is free we have two degrees of freedom for the surface normal we have one degree of freedom for the albedo and in order to determine three parameters we of course need three observations so we need three observations per pixels per pixel that means three images here what you can see here is an example with non-lambertian reflectance if you have more complex material properties then you might need more observations to estimate the parameters of these more complex bi-directional distribution function models what we're also going to assume is that the camera and the light source as in the shape from shading setting is infinitely far away we're going to do this for convenience to consider the simplest form of this algorithm using the same rational as before but there is extensions of this technique to the case where the camera or the light source or even both are um close with respect to the scene or relative to the scene here's an example of what such a setting could look like this is a professional light stage as you can see there's a subject inside that's illuminated at very high frequency from different light sources in order for the object to keep static for the entire time span where this recording takes place so this is a typical setting a typical light stage where such recordings happen now by taking k images what do we get if we take just a single image and now we're going to consider a single pixel if we take a single image then we obtain for a particular light source direction this is for image one the location of the light source in pq gradient space so it's ps and qs here which gives us these iso brightness contours as conic sections now let's assume that at that particular pixel that we consider so this is a reflectance map here that we see the iso contents of the reflectance map let's assume that for that particular pixel xy that we consider the image intensity or the reflectance that has been recorded at that pixel is 0.5 and let's 
assume that this is the iso-brightness contour that corresponds to 0.5; then we know that the normal in pq representation must lie on that curve, right? now we can do the same thing for a second image. this is the second image from the same viewpoint but with a different light source location — ps2, qs2 are different from ps1, qs1 — and therefore the iso-contours are also different. let's assume that for this particular image, at the same pixel x, y that we considered before, the intensity 0.3 has been recorded, which corresponds to this particular iso-contour shown in thick red here. because we know that the normal must lie on both of these curves, we know that the normal can be located either here or here. and if we now take a third measurement — let's assume the light source is here and at the pixel x, y we observe intensity 0.5, which corresponds to this iso-contour — then we have uniquely determined the location of the normal. this is a graphical illustration of how the process works and how we narrow down the two-dimensional search space to a single location, where we have now identified, at that particular pixel x, y, the normal represented as the pq gradient of the depth map. so this was a visual illustration; let's now make it more formal and use some math. again, as we mentioned in the beginning, we are going to assume Lambertian reflectance and we are going to assume the incoming light intensity to be equal to one to simplify matters. then the image intensity, or the reflected light, is simply given by i equal to rho times n transpose s, or s transpose n, which is the same because these are scalars. so we have this expression: i, which is the image intensity, equals rho s transpose n, and what we are going to do now is estimate both rho and n. we are not going to use the gradient space representation here, because in this general normal form we can actually utilize the fact that the normal is unnormalized to capture the albedo rho as the magnitude of the normal. so given three observations — the same view direction but three different light sources, which give rise to three different intensities that we observe in these three images at that particular pixel corresponding to a particular surface point p, so the same v but different s — we can express this in matrix form as follows: we have i1 equals s1 transpose times rho n, this is this expression here for the first image; then we have the second intensity, a scalar, i2 equals s2 transpose — a normalized light source direction which we assume to be known — times rho n; and then we have the same for i3. now if we denote this vector here as the vector i and this matrix here as the matrix s, and if we denote rho n as n tilde, the solution is simply given by the following: we can take the matrix s to the other side by inverting it, so n tilde equals s to the power of minus one times i, i.e. we multiply the vector i with s inverse. and because we know that the normal must be a unit vector, we can then take rho as the length of that unnormalized normal n tilde, and the unit normal is then given simply by n tilde over rho. so it is a very simple algorithm that we can apply and implement in parallel at every pixel, and it gives us, with just three observations — three images — at every pixel the normal direction as well as the albedo (a minimal per-pixel sketch of this solve is shown below).
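here is a minimal NumPy sketch of this per-pixel solve, written for k >= 3 known light directions using a (pseudo-)inverse via least squares; the array shapes, function name and the toy light configuration are illustrative assumptions consistent with the derivation above.

```python
import numpy as np

def photometric_stereo_pixel(intensities, light_dirs):
    """intensities: (k,)   measured intensities i_1 ... i_k at one pixel
       light_dirs:  (k, 3) known, normalized light directions s_1 ... s_k (k >= 3)
       returns (albedo, unit_normal)."""
    S = np.asarray(light_dirs, dtype=float)           # stack s_j^T row-wise
    i = np.asarray(intensities, dtype=float)

    # solve S @ n_tilde = i (exact inverse for k = 3, least squares for k > 3)
    n_tilde, *_ = np.linalg.lstsq(S, i, rcond=None)   # n_tilde = rho * n

    rho = np.linalg.norm(n_tilde)                     # albedo = magnitude
    n = n_tilde / rho                                 # unit surface normal
    return rho, n

# example with three non-coplanar lights (otherwise S is rank-deficient)
S = np.array([[0.0, 0.0, 1.0],
              [0.5, 0.0, 0.866],
              [0.0, 0.5, 0.866]])
rho_true, n_true = 0.8, np.array([0.0, 0.0, 1.0])
i = rho_true * S @ n_true
print(photometric_stereo_pixel(i, S))                 # recovers rho and n
```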
this is all nice — but when does photometric stereo not work? well, we have seen that we have to invert s, right? and of course this does not work if this matrix s is rank-deficient: if it does not have full rank, we cannot invert it. in other words, if there is a linear dependency between s3, s2 and s1, for example if s3 equals alpha s1 plus beta s2 — or, put differently, if all three light sources s1, s2 and s3 and the origin p lie on the same 3d plane, as indicated here — then the linear system becomes rank-deficient and thus there exists no unique solution. so we have to ensure that, when we illuminate the scene, the lights are not all in the same plane. and of course there are additional challenges: the intensities that we observe might be noisy, the material might not truly be Lambertian, or the camera might not be truly orthographic, and we might want to add more constraints such as smoothness constraints, or relax some of these assumptions, to tackle these challenges. of course better results can always be obtained using more lights, even for this simple setting with a Lambertian, i.e. diffuse, material, because we can simply average out, for example, measurement noise. so if we have n observations and n light sources — this should actually be n here — we can stack that all together, and the least-squares solution is then simply given by multiplying the pseudo-inverse of s with i instead of the inverse of s; this gives us n tilde, and the albedo and the unit normal can then be obtained as before. what does this look like? here is an example for a Lambertian Buddha statue. what we do here is first compute the surface normals and the albedo, as illustrated here: here we have the surface normals, here we have the albedo. you can see that this statue is not uniformly colored everywhere — there are brighter parts that nicely show up in the albedo — but there are also issues with shadows that show up in the albedo map. then we take this normal map and we can integrate it, using for example the variational formulation that i have shown in the previous unit, in order to obtain a depth map. and now we can take this and re-light the scene: here you can see an example where we have chosen a uniform albedo and a new light position to re-light the scene and to render a new image of that same object. here are two other examples. what you can see here are the three input images, and we see the normal map that has been estimated — you can see, for example, how accurately the normal orientation of the nose has been estimated in this case — and we can also see the non-uniform albedo map that has been estimated. for color images we can simply apply the photometric stereo formulas to each channel separately to obtain the color albedo, and we can take the average for the normals, for example. but we can also see that if we deviate from the Lambertian assumptions, as in the case of this object at the bottom that is a little bit specular, for example here and here, the method does not work so well anymore. so we have to tackle these problems using a model that actually models non-diffuse objects, otherwise we bake some of these artifacts into the albedo map, as illustrated here on the right, and of course that will also negatively impact the estimate of the normal map or the geometry. here is the integrated surface that has been obtained in the case of the mask from the previous slide, so this is the reconstruction that corresponds to this normal map: as you can see, from
just these three images a very detailed reconstruction has been obtained we can take this photometric stereo algorithm and also apply it outdoors the sun is a perfect point light source that is almost infinitely far away so we can utilize it to capture the geometry of a scene by observing a scene at many different days and times of day over the year over the course of a year for example using a webcam as has been done in this paper here by akama nadal photometric stereo for outdoor webcams at cvpr 2012. in this case this object has been reconstructed has been observed by this web camera here and has been reconstructed in terms of its normal maps as shown here at the bottom one problem with photometric stereo is also that the light source is assumed to be known which even if it's a point light source is often not the case so often you have to also determine the actual position of the light source and this is also a topic under investigation under current research here's an example of an algorithm this is a a deep network now there's of course also deep models that try to do photometric stereo by inputting a stack of images and masks and in a supervised fashion using a lot of synthetic rendered data trainer model that is able to predict the normal map and in this case the model explicitly integrates a light estimation model and from the light sources and the images then the normal map is inferred you can also go one step further and try to disentangle post geometry and more complex spatially varying brdfs and this has been attempted here with a little bit more complicated sensor that's a project from our group where we have a sensor that has these light sources mounted it can be alternatingly illuminated and the challenge about this is that we didn't assume the sensor to be static on a tripod but um that the sensor also moves so we have to also infer not only the position of the light sources but also the pose of the camera to support this setting there is a little depth sensor here integrated that gives a rough initialization for the depth that we can then start working with and here's an example of the reconstruction here are some input observations you can see there is areas that are more diffused but there's areas that are also very specular on this object and you can see the resulting geometry appearance the normal map and also the decomposition into the diffuse and the specular materials here on the right and then we can take this and render the object from novel viewpoints and also under novel illumination conditions here's a little example of this and here's another example where photometric stereo which is classically a a 2d problem we're trying to confirm a normal map and from the normal map depth map is integrated with multi-view approaches so it's called volumetric multiview photometric stereo and you can see uh for these model scenes here how accurate in some parts the reconstruction is that has been obtained using these shading cues using shading cues we can obtain much more fine-grained detail than using for example correlation because we would need very high image resolution to correlate at that level of accuracy but the shading directly gives us all uh the little uh geometrical details at every pixel so you can see all the details here at the roof for example but you can also see that in areas where there have been less observations interpolation happens and we don't know much about the surface current research topics include perspective photometric stereo which is the case 
where the camera is not assumed to be orthographic, uncalibrated photometric stereo, where we assume the light sources not to be known, as well as deep models, which we have just seen, volumetric models and mobile setups, and in particular also tackling problems such as shadow estimation and the estimation of non-Lambertian materials, where more parameters have to be used and therefore also a larger number of images or stronger smoothness constraints need to be incorporated to make this tractable |
Computer_Vision_Andreas_Geiger | Computer_Vision_Lecture_102_Recognition_Semantic_Segmentation.txt | now let's move on to semantic segmentation there have been a few attempts to semantic segmentation also before the deep learning era but the performance of these approaches is nowhere near comparable to the performance of deep methods so we're going to focus on deep methods in this unit let's remind ourselves again what is the problem the problem is given an input image that is not shown here but you can imagine what it looks like to classify or assign a semantic label to every pixel in that image so for example this pixel here we want to associate with the class sky which might be label number five and this pixel here we want to associate with the label grass which might be pixel number 13 and so on this is the goal and we want to do this for both objects like cars and stuff categories like sky in 2015 um a group at berkeley uh realized that actually we we can obtain semantic segmentation using a classification network they observed that given that the the very strong performance of image classification networks why can't we just apply those classification networks in a convolutional manner on larger images and instead of producing a one by one output which is turned into a probability distribution as we have seen in the previous slides return a feature map that is converted into several heat maps one heat map per category here is the heat map for tabicat and as you can see now in addition to the classification results we have some red pixels that indicate strong probability for the presence of tabicat we also know where the presence of this class is because we have applied this network here in a convolutional manner in a sliding window manner over this image but of course we have implemented it as a big convolutional network and so we have this heat map here that tells us for the category tabby cat where there is high probability for that category so we apply a classification network in a convolutional fashion they call it convolutionalization in the paper this is a figure from the paper in order to obtain class heat maps and we have such heat maps for each category of course and then we can use simply a cross-entropy loss at every pixel of the output here of these heat maps and sum over all these classification losses right so training is the same as a classification network except that now we have a sum over losses where we have one classification loss for each of these pixels in this stack of heat maps however the problem as we can already see from here is that the output heat maps are really low resolution due to all the down sampling that takes place in the neural network in order to achieve a large receptive field that is necessary in order to actually recognize the object however using only convolutional filters without any down sampling does not work in practice due to the small receptive field in other words i would need a huge amount of convolutional filters hundreds of layers in order to obtain the same large receptive field that i can obtain with just a few down sampling operations such as max pooling and this is just impossible to train the number of parameters is too large this doesn't work the idea therefore in this paper and it's something that has maybe gone a little bit unnoticed but a lot of the ideas that are present in modern semantic and instant segmentation networks are already present in this original paper from 2015. 
this core idea the second core idea apart from the convolutionalization is to learn also an upsampling operation and combine low and high level information so here you can see indicated by this grid structure the resolution that we're operating on here's the image the convolutional layers are not shown just for clarity but there's some convolutional layers here then pooling conf pooling conf pooling and so on and then in the most basic version here at this from this very low resolution we directly up sample with one layer upsampling it's called a deconvolution it's a transposed convolution the inverse of a convolution is an upsampling operation it's one form of doing up sampling so here we up sampling directly from this very coarse resolution to this very fine resolution of course the model cannot recover defined details but what we can do is we can up sample from here to just this resolution like the next one and then take features from that layer and combine them add them or concatenate them and then do the same again up sampling the transpose convolution they learned up sampling and then combine with a sum or concatenation with the features at that level and therefore because these features here already contain some information about the edges some high frequency local feature information we get sharper boundaries and this is illustrated here fusing information from layers with different strides improves detail here's this very simple variant and here is the variant that fuses these features from the layers with the different strides however as you can still see from these results even for relatively simple examples the output is still relatively coarse this is 2015 this is the first approach to the problem still beating all of the state of the art or in terms of classical approaches and then from there on of course the quality improved here's another model that has been developed around the same time concurrently they all have a similar architecture it's all an encoder we have this this part of the network that is contracting where the spatial dimension is reduced and the feature channel number is increased and then there is a an expansion part here on the right where it's called a decoder where information is decoded back up to the highest resolution and there's the skip connections in this case they they skip just the pooling indices in order to retain uh high details so there's a couple of different ways to do these skip connections um so for example like for the up sampling operation we can do nearest neighbor up sampling or bed of nails up sampling or bilinear sampling but what they did in this particular segment work is they used these pooling indices that they basically in every pooling operation they remember which element was activating maximally there's a max pooling operation so in this case this element and then in the unpooling operation we injected this information by unpooling into that location so for example here the six was most strongly activated so the four in the respective unpooling operation gets inserted here here the three is most active almost most strongly activated and so the one in the respective at the same resolution unpooling operation gets inserted at that same location so for unpooling remember which element was the maximum during pooling and this is where the location where the element is inserted and here's some results of this segment model and it compares also to the fully convolutional model and you can see that this particular type of skipping 
improves results an alternative to down sampling and up sampling or a model that can be used in combination actually is the so-called dilated convolutions they are an alternative to this combined down and up sampling because they allow to reach a large receptive field size relatively quickly just with convolutions dilated convolutions increase the receptive field of standard convolutions without actually increasing the number of parameters this is key without increasing the number of parameters i can increase the kernel the receptive field and this is the core idea of dilated convolutions and therefore a network with dilated convolutions is able to perform image level predictions for example in semantic segmentation without upsampling and down sampling in theory in practice now even in this work here where this idea has been presented has been presented concurrently in multiple works so even in this work here for example they still required a standard unit backbone and could just do this on top to refine the segmentation but then they could get much much more details and much better quality how does a dilated convolution look like well we have a dilation factor here this in this case dilation factor is two and that distributes the sampling of the input signal it's the previous layer this is next layer so we increase the receptive field size because we're effectively skipping some locations so at the expense of skipping some locations and potentially also introducing some some artifacts some mosaicing some aliasing artifacts um we're increasing the receptive field but we are not increasing the number of parameters this kernel still has three by 3 times the number of channel parameters no matter what the dilation factor is and that's the crucial thing we can now use even higher dilation factors and get larger receptor fields and combine this and here are some results so you can see this is the fcn the basic version uh no this is the fcn defined version this is the deep lab model and here is the this is one of these state-of-the-art backbones combined with the dilated convolution as the front end you can see that this model achieves much higher accuracy how much more details boundaries compared to these these models yeah here's another model also from 2015 it's called a u-network very simple very similar ideas also one of the standard models this is also has an encoder it has a decoder has skipped connections but in this case here the skip connections are concatenating the features from the respective from the same resolution um with the upsampling predictions from the previous resolution in the decoder here the white and the blue the white comes from the previous encoder layers at the same resolution and then this is concatenated with the upsampled result of the decoder stage here and this is one of the defactor standard architectures for many tasks with image level outputs such as depth estimation or segmentation here's a work where semantic segmentation has been tried to be made temporarily consistent if i apply a semantic segmentation algorithm it's not necessarily semantically temporarily consistent because i applied to each frame independently and so there's a lot of flickering artifacts a lot of noise going on but here they optimized this model to be temporarily consistent as well and the results even for this is 2016 are are really really quite impressive i think so i let this play for 10 seconds this is on the cityscapes dataset here is the cityscapes leaderboard this is one of the 
standard benchmarks for evaluating semantic segmentation methods in the context of self-driving and you can see like what i what is the state of the art there's a lot of methods ranked on the leaderboard you have to submit your results um on a data set as a test set for which the ground roof is not available and the server evaluates your method similar to the imagenet challenge or the kitty data set and um then it it creates a number and you can put your entry your method into that table and then the measures that are used for evaluation of semantic segmentation methods are the simplest is per pixel accuracy just measure how many pixels are correct but that's not used very often more frequent these days are per cluster car index or intersection of reunion this is basically for each semantic class we measure true positives and false positives and true negatives and false negatives in terms of for that semantic class comparing ground truth to the prediction and this is also a metric that will talk about a little bit more in the detection unit later on and then because some of the semantic classes due to the perspective projection and to the size of these classes are very small occupy only few pixels there's also metrics that weight them relative to the instant size and here are results of a state-of-the-art multi-scale approach as of 2021 which is currently ranked fourth on cityscapes and you can appreciate all the details that are predicted by this model at really high resolution you can see for example this pedestrian these traffic lights here these signs um the boundaries of the trees of the buildings of the cars etc it's really amazing what these deep neural networks for semantic segmentation are able to achieve these days and what i mentioned already in the beginning one task that is basically uh combining different uh you know the combining the semantic segmentation problem with the instant segmentation problem that we are going to discuss later in in the lecture is called panoptix segmentation so here the goal is to predict for each pixel the semantic category and both for both the stuff and the objects as well as the instance label if there is an object below this pixel so this is called panoptix segmentation |
Computer_Vision_Andreas_Geiger | Computer_Vision_Lecture_123_Diverse_Topics_in_Computer_Vision_Human_Body_Models.txt | this unit is on human body models humans interact through their bodies as illustrated here on the slide which is a slide i took from michael black who is a max planck director here in tubingen and who has dedicated a lot of his research to understanding and modeling human bodies and modeling and understanding human bodies is important if we want to understand what humans do but also in order to interact with humans so here i see some of the interactions that humans do hugging collaborating dancing interacting with their environment etc and what we're going to discuss today is the so-called simple body model smpl and the simple model is a maybe one of the most popular models these days in the research community and the foundation for many follow-up works and it's a generative human body model and the goal of this is to create a model of well minimally cloth human bodies and cloth human bodies is harder so some model focus on on the human body itself so focus on people in minimal clothing as you can see here and that the goals for this model are as follows the reconstructions from this model should look like real people of course we want to build a model that's precise it should also allow for deformations that look like deformations of real people it should have a small number of parameters so that it's easy to represent and to store and to manipulate and it should be easy to fit data because we often want to fit this model to scans or to images in order to understand humans and it should also be easy to animate because often we are also interested in taking a character an avatar that we have generated and animating that character for example in the context of generating movies or video games etc um yes so here this is uh the model that we'd like to obtain m the output is represented in terms of 3d mesh and that mesh depends on the pose and the shape and the dynamics and the texture and some parameters right some general parameters of that model here's an illustration on the left you can see a real image and on the right you can see the reconstruction of that simple body model in terms of both the geometry and the appearance and you can see that this reconstruction is matching the input image quite well and in this unit we are going to discuss how this is done modeling of human bodies human shape and pose has a long history in computer vision starting from well the 70s and 80s and the question is of course what is a good representation so the representation that the simple body model focuses on is a representation of not the interior of the human body but the human body as it can be observed basically the minimally sk minimal clothed human body but to represent that minimally clothed human body as accurately as possible in terms of geometry and appearance and a big inspiration for the simple body model was the seminal work by blunts and feather in sikra of 1999 on human face models that is a precursor if you will of the simple body model but for a specific part of the human body for the face and there's a little old video from this technique which is still amazing to watch these days in terms of the accuracy that this model has achieved so i want to play this video for a little while so you can get an idea of what this method is is doing and what it's capable of the morphable face model is derived from a data set of 200 colored 3d scans of faces individual faces are 
combined into a single morphable model by computing dense point-to-point correspondences to a reference face a modified optic flow algorithm establishes 3d correspondence automatically the morphable model combines 3d shape and texture information of all example faces into one vector space of faces we can form arbitrary linear combinations of the examples and generate continuous transitions starting from the average face individual original faces are caricatured by increasing their distance from the average forming the average for male and female faces separately the difference can be added to or subtracted from an individual face to change the perceived gender other facial attributes such as the fullness of a face can be manipulated in a similar way from a labeled set of faces the characteristic changes are extracted automatically in our model they're controlled by a single parameter differences in facial expressions captured from another face can be mapped to any individual we now reconstruct 3d shape and texture in order to animate a face given only a single photograph of a person first we manually align the average face to the target image roughly estimating position size orientation and illumination then a fully automated algorithm finds the best reconstruction of the face within the morphable model 3d shape and texture are optimized along with parameters such as size orientation and color contrast the output is a high resolution 3d mesh of the face it is an estimate of 3d shape and surface colors based on a single image additional texture extraction improves details and texture the reconstruction can now be rendered into the image and a whole range of facial variations can be applied here we simulate weight gain and weight loss facial expression can be post-processed in images forcing a face to frown or to smile from this image we also estimated 3d shape and texture and combined the photograph with 3d computer graphics cast shadows of novel objects are rendered correctly into the scene illumination conditions can be changed and pose can be varied to some extent from a single black and white image we obtain a full estimate of 3d shape the result of the matching procedure includes an estimate of surface color since the morphable model contains color information finally we show the application of our model to a painting this was 1999 more than 20 years ago quite impressive okay so let's dive into simple simple is an acronym for a skinned multi-person linear model and the idea is as in the previous video actually seen to take scans from a from a big human body scanner take many of these scans um and to create a generative 3d model out of those and to create this model in a way such that it is factored that we have different controls for pose and shape dynamics and breathing and all the other factors that we might be interested in so here's an example to illustrate this you can see a character you can see the geometry so i want you to focus on the geometry here on the top you can see on the left that the pose is fixed but only the shape is buried in the center the pose is fixed at a pose is varied and the shape is fixed and on the right both the shape and the pose vary so you can see how these factors are disentangled in the model and can be in independently controlled and give rise to such generations here so let's now look at the technical details of this model simple uses a 3d mesh as an output representation and as we know a mesh is represented through vertices and faces the base mesh 
is designed by an artist to match the majority of human bodies well and the mesh has 7000 vertices which yields a representation with 21 000 numbers each vertex is a three-dimensional vector and so we have 20 21 000 numbers to represent the human body this is the space that we want to operate in but it's of course a very high dimensional space it's not as high dimensional as typical images but still very high dimensional and via that it's clear that most settings of these numbers do not really correspond to people or any shapes but just random noise random garbage so we want to find now a sensible setting of these numbers that corresponds to shapes in the data set that we've used to train our model that's the goal in order to do so as a first step we need to do something is called mesh registration the raw scans are just unordered and incomplete 3d points they can be meshed but you will see they have many artifacts and holes as illustrated here and to get correspondences a template mesh this template mesh designed by the artist must be aligned with each scan and this is a hard problem it's actually a chicken and neck problem because you require the body model the trained body model with the parameters in the first place to deform it towards the input towards the observations but you also require the observations to train the human body model right the registered observations and the solution here that is taken by the simple body model is to solve model and registration to solve model estimation and registration jointly now let's for a moment assume that we have already solved the registration problem and that this allows us to convert all the scans that we have obtained from the scanner into a canonical pose and that pose is this a pose called a pose here because the arms are in this a position then we can we can basically apply we can transform each of the scans into this a pose as illustrated here so these are the raw scans transformed into the apos and then we can fit this this body model to these two these scans in a pose and what we have then is basically we have this this this body model we have the the we have this this this uh 3d mesh which is uh a collection of these 21 000 numbers of these 7000 vertices so if we vectorize the vertices uh the mesh vertices we call them t we subtract the mean mesh take the average of these vertices and this is the colored model that's illustrated here and we're going to subtract that and then we can apply a principal component analysis to yields a linear model of this right so we can stack these into the columns of a matrix t then we can decompose that matrix such that these observations are linear combinations of some basis uh shapes and these are called the blend shapes that are combined linearly with this weights beta here and these weights beta now are very low dimensional compared to original 21 000 dimensional representation these weights may just be 10 dimensional or up to 300 dimensional to express most of the variance of the observations so we can now using pca because we have assumed that the body is a canonical pose we can reduce this problem and uh obtain a 10 dimensional latent space that expresses most of the variance in the data what does that look like this is an example of variation of of individual principal component principle vectors you can see that in in the beginning when we vary the first principal component then the size of the character changes if the vary the second principal component we see that the size of the the 
weight of the vector character changes and then if we go on to additional principal components we see that smaller details start to change and this is what we expect because the first principle component expects explains most of the variance in the data as we know however of course we we do not have all of these as initially explained all of these observations registered into the same coordinate and so we have to solve this chicken and egg problem where we are solving for both the registration and the model parameters jointly and that's what simple does so simple solves also kind of a pca problem but at a larger scale by also deforming the template mesh and how it deforms the template mesh is it uses a extension of so-called linear blend scanning so first need to understand what linear blend scanning is in linear blend scanning the goal is for each vertex on the template mesh to find the transformation to the mesh in target pose as illustrated here on the right so here we have a template this is in t pose we want to find for each of these vertices the transformation that transforms of each point on that surface we want to find the transformation that transforms that to the corresponding point on the surface in target pose and we're going to do that through this linear equation here this is just a sum over some weights these are called the blend skinning weights times a rigid bone transformation matrix that's just a rigid transformation matrix with 6 degrees of freedom that depends on the pose parameters and the joint locations and we're we're gonna multiply this with the input point t so we take this t i and by this linear combination with um this weighted combination of these with these g's we're transforming that point here so what is this g is this rigid bone transformation this is illustrated here so in different colors here you see different bones let's look at for example this green bone here which is the right upper leg and this green bone here which is encircled by this ellipse is transformed using a rigid body transformation to that green bone here which is the same bone right and we call that this is the cave bone we call the transformation that rigid transformation g k so any point that lies on the surface of that bone should be transformed according to that bone transformation decay so we simply transform that point with by multiplying ti with gk and the weights which are discolored this colorful representation on the surface of the human body indicate how much each of the bone influences that transformation so if you take a point that's in the center of that of that body part k um then all the other weights will be zero except for the weight of that particular bone of that particular body part but what happens if we take a point that's in between two body parts let's say this point which is in between the red and the blue body part well in this case the weight of both the red and the blue body part will be high both will be 0.5 let's say but all the other weights of all the other 25 or so bones or body parts will be zero so we have a linear interpolation of the transformations these really body transformations of two adjacent body parts or bones and this is necessary because we have a non overall we have a non-rigid transformation so we need to interpolate whenever we have a joint we need to interpolate between the rigid transformations that are induced by the two adjacent bones and these rigid bone transformations of course depend on on the pose theta and and potentially also 
on the joint locations j um because depend on the size of the character but in particular we depend on theta because uh if you think about this as a kinematic tree if i change this joint here the elbow then of course the corresponding child's part or child bone transformation is also affected by this change of the joint in the kinematic tree so this is how linear blend skinning works it's basically a linear blending of rigid bone transformations where points that are in between two adjacent bones receive information receive transformations from the two adjacent bones such that there is a smooth transitioning happening now how are these skinning weights obtained well the answer is simple in practice they are typically obtained by an artist and the artist manually in a careful process designs these skinning weights such that they are leading to realistic opposed human body shapes but that is a difficult task actually and uh a particular configuration of skinning weights doesn't lead to very nice uh deformations um okay it cannot be guaranteed to lead to nice deformations for arbitrary body poses as you can see here so if you look at the elbow for example the deformation that's happening at the elbow is appearing very unrealistic and the cause for this is really that this linear blend scanning is is too simple it's it's not enough to just linear blend the transformation of these two bones because you will get effects like this it's also illustrated in this video here we have linear blend skinning on the left you can see these effects and you have the simple body model which we're going to talk about in a second on the right which is the generalization of linear blend skinning which doesn't suffer as much from these effects you can see a much more natural shape appearing so what does simple do differently from linear blend skinning this is lbs linear blend skinning versus simple or simple versus linear blend skinning on the left we have the same illustration as before and here this is the formula from the previous slide which is the formula for linear blend skinning and on the bottom you can see the formula for simple and in red are highlighted the three key differences the first key difference is that because simple is a model for multiple bodies while lbs was typically applied to single character simple now has to handle many characters of course the joint locations must depend on the body shape there's tall people they're small people and of course the bone length and therefore also the distance between joints must and the location of the joints in the canonical say t pose template must depend on the body shape that's why j now depends on the body shape and it's regressed effectively from [Music] the body body shape and then two additional modifications are so called correctives in addition to uh in addition to simply take that point t i on the surface in the canonical rest what's called the rest pose and transform that point to yield t i prime what is added to that point ti already in the canonical space are two correctives a shape corrective s and a post corrective p the shape corrective allows the model in the rest pose to accommodate different shapes to accommodate heavy people and to accommodate fin people for example and in order to model the shape variation of course we have to change the location of the surface point and we have to model that and that depends on the shape parameters which are called beta and the additional the last difference here is this post corrective as we've 
seen previously a major limitation of lbs is that it is too simple and that there is many artifacts if we just take this linear combination of um at an at a joint of two adjacent body parts or bones if we take the linear combination of this rigid body transformation g of these two this is too simple and it leads to these artifacts that we have seen so what simple does instead is it adds these post correctives which are also changing the shape of the human body in the rest pose but it's changing it not based on the shape parameters beta as the shape correctives do but it's changing it on based on the post parameters theta so you get also a post dependent corrective to the shape in canonical or in rest pose and these two models are linear this is a little simplification here of what the model actually does it actually uses not the let's say the joint angles directly but it uses a bit more clever representation of the joint angles but still it's a linear model in this representation and then this model is is trained on a big data set and optimized for the parameters of that model which are um well the basis shapes s and the the basis post correctives p so basis shape correctives and basis post correctives in order to yield the final model and of course also in terms of the uh the upper parameters like uh what lbs is optimizing for the pose and the the well the skinning weights are given so here's an overview of the data set this is the just a little glimpse on all the scans that have been used for training that model it's enormous it's a large data set of several thousands of shapes and here you can see an illustration of what the model has learned in this case in the context of blend shapes so you see what happens if the animated this is a generated geometry from the model the animated model what the change in the pose of that model induces on the shape in canonical pose you can see that due to the pose dependency because we want to this final generation result to look good don't have these artifacts that are induced by lbs the shape in the canonical in the rest pose is modified accordingly to compensate for these effects and this is the what these post correctives these post blend shapes do this it's basically all the big pca model just factorized into these different components some depend on shapes some depend on posts and here's a result of what this model can do so you can see different identities different people performing the same action um through this simple human body model it's a very precise representation of the human body shape and can be fitted to many different people the model can be used for example to retarget characters here's an example where there is a motion the original body shape on the left and there is a re-target motion for a different shape here on the right you can see that the person is moving accordingly but with a very different shape as the input character the model has also be extended in various ways actually so this is just one extension it's called dimple dynamic simple in particular for taller people um for people with more weight you have soft tissue deformations that depend also on time and this can be modeled by taking time into account as you can see here now this model cannot just be used to generate artificial characters but it can also be used to more precisely estimate the shape of humans by just looking for example on a single rgb image as shown here there's a little video for this you can see the pose and the shape that has been extracted for 
the people that are visible in these images here on the left of course if you estimate human body pose and shape from a single image there's a lot of ambiguities to resolve the more views you have the better or if you have depth data that's also helpful but in this case it's estimated just from a from a single image and you can see that the model quite successfully is able to reconstruct pose and shape due to its prior knowledge because it has been trained on this big data set of 3d human scans now this is of course not the end of all of human body modeling it like this does a big research community in the community computer vision community that's just concerned with this problem and some of the current research directions as illustrated here include modeling of clothing which is much harder than the naked human body or minimally clear of human body modeling of garments realistic modeling there's a lot of stochasticity involved also modeling of appearance many of these models that i've shown to you didn't include appearance but of course appearance is important if you want to realistically render this then modeling of dynamics few shot learning learning just from rgb images and building avatars from very little data like maybe just from your webcam observing yourself and inserting them into maybe a video game or a virtual reality application interaction for example interaction with other humans with other avatars or with 3d environments |
Computer_Vision_Andreas_Geiger | Computer_Vision_Lecture_92_Coordinatebased_Networks_Differentiable_Volumetric_Rendering.txt | we have seen these implicit representations as useful output representations for geometry appearance and motion but what we have done so far is always assumed full 3d supervision in order to train this model we always had to have a set of 3d points for which we knew if they are inside or outside the shape however in practice often this is not available and would be much nicer if we could reconstruct 3d shape and appearance just from a set of 2d images as we have already done it in our lectures on 3d reconstruction now the question is can we also do this with these implicit models can we use these implicit models and supervise them only with rgb image information without any 3d supervision at all and that's what leads us to so-called differentiable volumetric rendering a technique that allows to back propagate gradients from image-based reconstruction losses to the parameters of implicit neural networks or coordinate-based neural networks so want to learn from images let's first have a look at the architecture that we're going to use we're going to have a encoder that encodes a 2d image into a global latent code in this case here again and then we're going to have a set of 3d points that we pass together with this condition through these residual network blocks but in contrast to before now we have two shallow heads one head that predicts the occupancy and the number head that predicts the texture or the color so in contrast to before now we have one backbone and two shallow hats so one model that predicts both the occupancy and the texture because of course if you want to do multiview reconstruction we need to represent both occupancy and appearance just as we did it in the discrete case when we talked about the for example the array-based probabilistic volumetric voxel-based reconstruction method in lecture six yeah so the input again is these coordinates that's why they are called often also coordinate based representations or coordinate based methods and the output is for each of these coordinates these 3d point locations potentially also including the view direction and the light direction for all of these predict the occupancy probability and the rgb color value that's why we have three coordinates here and one here but the question is now how can we train the parameters of this model given just rgb image image observations and of course in order to do so we have to define rendering operations for this model we have to render the representation into images while these differentiable rendering operations are commonplace for other output representations including voxels and points and meshes for implicit representations that haven't hasn't been done before and that is what the contribution of this dvr paper cvpr 2020 is now in order to do so we first need to define well the rendering which is the forward pass but also the backward pass in order to be able to back propagate gradients through this rendering operation and we start with the forward pass the forward pass is rather simple so let's assume we have a camera that's located here this is the camera center and this is the image plane and we can draw rays by connecting the camera center with any pixel in the image plane so this is pixel u and this is the ray w and that ray intersects or that ray extends through the 3d volume and intersects our surface at some point and this is illustrated here this is the 
thick line which represents the point where along the ray the value of the occupancy network changes from below the threshold to a value above the threshold for example from below 0.5 to above 0.5 indicating that points behind are more likely to be inside points in front are more likely to be outside so this is what is illustrated here by these contours this is how this implicit function looks like these are three different level sets of this implicit function and what we're going to do now is for all the pixels in the image we're simply going to find the surface point p hat along the ray w by ray marching so we go in equidistant steps along the ray until we find that the value of the occupancy prediction changes from below threshold below 0.5 to above 0.5 and then in this interval we can find the exact the precise location of the surface using numerical root finding for example the second method which we're going to talk about in the next slide this doesn't require any gradients it just requires function evaluations and is able to retrieve the surface location in a very short amount of time if we have a signed distance field representation we can even march more quickly to that point using so-called sphere tracing algorithm but that's not something we're going to talk about today and then what we're going to do once we have found this fine point b hat which is the optimal which is basically the intersection of this of this array with the implicit surface is we're going to simply evaluate the texture hat note that i'm always going to use the parameters fita for both networks despite the fact that they have a common backbone but two separate heads i'm always going to call the parameters vita for simplicity so we're gonna call this is basically this this network here this head here we're gonna call that head at that estimated location of the intersection p hat and this gives us the color this is the texture and we're going to project that color onto or we map that color we insert the color at that corresponding pixel location for which we have defined the ray and then we can do that for all the pixels in the image and we get the colors of all the pixels in the image that's very simple one quick word about the second method it's a very simple numerical method it goes back thousands of years and it's a method for without gradients finding a root of a function so here's an example of a function in blue and what we do is we consider a line between two points this is to initial points one must be after the root crossing and one must be in front of the root crossing this is the two points that have been found by our ray marching of equidistant steps before right this is basically this point and this point and then we're going to define a line between these function values so we evaluate the function at these two points these are these two points and then we're going to define a line that intersects these two function evaluations so this is basically this definition of the line y2 of x2 where we have the slope of that line here and the line defined as such and now of course because this is a linear equation it's a line we can solve this equation for the root by setting y2 equal to zero and solving for x2 this is just rearranging this equation to be solved for x2 assuming y2 is equal to zero which is happening exactly at the root of this line now you can see already that this root is closer to the actual root that we're searching and that's the principle of the second method it's not guaranteed to 
converge if you're too far from the actual root it might diverge but if you're close enough it diverge it converges and then we iteratively apply this rule and basically what this is is simply a finite difference approximation of newton's method and the advantage of this is that we don't have to compute the gradients of the occupancy function which however would also be possible we can compute the gradients and sometimes it's useful we have done that for example for refining the mesh after marching cubes using double back propagation but it's expensive so we don't necessarily want to do that very very often as we have to do it here so we use this finite approximation of newton's method called second method this is the first step now we take that point x2 and now we take that point x2 here and call it x1 and we take that point x1 and call it x x0 and we're trying to find a new x2 so for this x2 we find the function value which is here and we draw a line from the new x0 to the new x1 and intersect that with zero and that's here and then we do this again and now you can see we're already very close x2 and the third iteration is very close to the actual root of the function in blue very simple good now we have to find the forward pass the rendering operation but how do we define the backward pass how can we propagate gradients and that's where we need a little bit of math that i want to explain first or for those of you who are familiar with this to quickly recap the first concept that we need is we need to know what a total derivative is um so let f of x and y be a function where y depends also on x let's assume that and that could be x 1 x 2 x 3 and y here we just consider a simple case x y a function of x and y now the partial derivative there's a partial derivative and there is a total derivative in mathematics the partial derivative is defined as using this partial notation as the partial derivative of f x y with respect to x which is f with respect to x times using the chain rule x with respect to x which is 1. 
so as a simple example if we have a function f of x y which is x times y the partial derivative of this with respect to x is simply y note that we haven't considered that y depends on x also the partial derivative doesn't consider that this is the solution for the partial derivative in contrast the total derivative he also uses the chain rule and we write the total derivative with the d symbol and as opposed to the partial symbol here but the total derivative takes the dependency of y of x into account so instead of just this first expression which we have here we also have plus a second expression which is f with respect to y times y with respect to x now because y we know that y depends on x we have to compute the derivative of f with respect to y and using the chain rule y with respect to x this is the difference the total derivative also takes into account dependencies of functions which are arguments of the function that we want to differentiate okay so here's an example this is the same as before f of x y is equal to x times y but we now assume y depends on x in particular y is equal to x if we compute the total derivative we have f with respect to x which is well y and then we have f with respect to y which is x times y with respect to x which is one so i have y plus x and because y is equal to x we have two x you can see the two solutions are different here we have the solution of the partial derivative which is y and this is the total derivative which is 2x the second concept that we need is the concept of implicit differentiation because we're dealing with implicit functions we can use the principle of implicit differentiation now an implicit equation well actually we are dealing with implicit curves to be more precise not functions an implicit equation is a relation that's given as such it's a function f of x and y let's again consider just two variables equal to zero where y of x is defined only implicitly assume also that y depends on x here now again as before but it's um but it's defined implicitly through this expression that's why it's called an implicit equation now implicit differentiation it one one option to obtain now what we want to do is we want to obtain the gradient of y with respect to x one option would be to solve this expression for y and then do standard differentiation but that's often cumbersome and sometimes even impossible and so what implicit differentiation does is it applies it computes the total derivative of this expression at both sides left and right and then solves for the partial derivative of y with respect to x if we do that for this function then on the right side because it's a constant 0 we obtain 0. 
if we compute the total derivative of the left side assuming y depends on x then we obtain the expression from before f with respect to x x with respect to x plus f with respect to y and y with respect to x this is this expression here um uh differentiated implicitly so a lot of implicit stuff going on in this lecture you're gonna see more implicit stuff later on so this is about implicit differentiation okay let's look at an example this was pretty abstract let's look at an example here we have a implicit equation x squared plus y squared equals one do you recognize what that is you've seen that before in high school for example this is the equation of a circle right a circle with radius one now if we do implicit differentiation here what does that do well we compute the total derivative of the right hand side which is zero and the total derivative of the left hand side which is well f with respect to x is 2x plus f with respect to y and y with respect to x is 2y times y with respect to x right and now we can solve this expression for y with respect to x by putting this on the other side and dividing by 2 y we obtain minus x over y note that in this expression here we have a y in the derivative of y with respect to x which would not occur in standard differentiation right that's why we can using implicit differentiation also define uh differentiate implicit equations that lead to implicit curves not just functions here we have an implicit curve as a circle it has two possible values for each x value between zero and one but the gradient is still well defined even for that's a curve and not a function for example if x equals zero and y equals one we have the gradient minus zero over one which is zero or if um x equals one and y equals zero we have minus 1 over 0 which is minus infinity so it's an infinitely strong slope here at this location x equals 1 y equals 0. 
and so on so we can now find gradients for any implicit curve and that's what we're going to exploit in order to make this implicit rendering differentiable for back propagating gradients here's the backward pass of dvr this is the illustration from before what we do of course is we compare this prediction this predicted image with an observation this is the actual rgb image that we have captured with a camera then we're gonna come computer loss function for example an l2 loss or an l1 loss between the observation and the prediction this is simply a sum running over all the pixels computing the difference and the norm of these two images now the gradient what we want to do is we want to get the gradient of this loss function with respect to the parameters of the model the parameters of the backbone and the texture and shape head that we've seen before of course we just in order to differentiate this we just apply the chain rule so we have the gradient of or the derivative l with respect to i hat times i hat with respect to theta because of course the predicted image depends on theta now how does the predicted image i had depend on theta well here we apply the rule of the total the total differentiation because um well this is uh this color is just the text texture evaluated at p hat that's what we said before right this is the color because this this color depends this on the parameters through the color network but also depends on the parameters through the optimal point location the surface intersection because if i change the occupancy network the surface intersection p hat would also change so p hat also depends on theta which has been dropped here for notational simplicity but you can think of the p hat also being a function of theta we have to apply the total derivative here so we have t theta of p of theta with respect to theta times now t theta of p hat times uh with respect to p hat times p hat with respect to theta so this is the inner derivative of the chain rule and this is just an application of the total derivative because t depends directly on theta but it depends also on theta through its argument p hat because p hat of course depends on theta as the surface intersection will change if the object shape changes through theta and now we can what we want to do is we want to compute of course this this quantity here and for doing this we need the concept of implicit differentiation what we do is we differentiate the level set expression f theta of p hat equals tau implicitly and if we rearrange this we get this closed form analytic solution for p hat with respect to theta and i'm going to show you on the next slide how this works but this is just to illustrate that we have a very nice and elegant closed form expression for the gradient of p hat with respect to theta and all of these other terms here are easy to compute these are just standard uh these are computed using standard back propagation for example through the texture network both for with respect to the parameters and with respect to the input point this is just standard back propagation for the neural network or here also standard back propagation through the occupancy network and now we can update the parameters filter such that the prediction becomes closer to the observation and this is what this might look like you can see that both by updating the parameters both the color of the object as well as the shape of the object changes so we found an analytic solution and in contrast to for example rendering techniques 
that work on voxel-based representation there is no need for storing any intermediate results along the ray for voxels intersecting along the ray here we just by having this implicit model we can directly supervise at the surface and through the smoothness properties of the neural network this information gets propagated you can imagine as being propagated in the vicinity of of the surface okay so let's look at in a little bit more detail how we obtain this expression here we said we're using the rule of implicit differentiation consider the ray p-hat equals r0 plus d-hat w so d-hat is already the optimal depth for the depth that corresponds to where this point intersects the surface and this is what is provided to us um we have a dep um no this is what would be provided to us if we also would supervise with with a depth uh map but if we just supervise with rgb is not provided to us but we can do both we can supervise with the depth we can supervise only with rgb so we consider this ray and then we implicitly differentiate this level set expression f theta at p hat must be equal to tau this is what we want right at the point p hat we want um the function f to be exactly equal to let's say 0.5 and if you do differentiate this with respect to theta on both sides we obtain this expression on the right this is a constant so it's zero on the left we use the rule of the total derivative so we have f with respect to theta plus f with respect to p hat times p hat with respect to theta and what is p hat with respect to theta well p hat is this expression this is constant so p hat with respect to theta is w times d hat with respect to theta and now we can solve this expression for d hat with respect to theta just by rearranging the terms and we obtain this expression so the nice thing about this is that by effectively projecting this gradients along that or onto that ray that's specified as such we can actually invert this expression here because we have a this is an inner product actually so we should have written transpose here so this is a a free vector transpose this is a free vector and because this is the the orientation of this ray and this is a scalar so we can invert it which is one over the scalar times w times the gradient of the occupancy network with respect to theta which can be obtained using back propagation so all of these expressions here are easy to compute okay that was maybe a little bit hard to digest um but that was probably the hardest part of this lecture so let's look at some results oops so we can use this principle of differentiable rendering for supervising a model that conditioned on a particular 2d image like in the case of occupancy network predicts a 3d shape as we have done here this is with 2d supervision alone we can also supervise with three two and a half d with depth maps we get better results and of course it's very hard here in this case because this is a very textureless object so it's hard to reconstruct for cars it works a little bit better and we can also use this model just for plane free reconstruction by optimizing the parameters not conditioning on any input optimizing just the parameters of a neural network to optimally represent a particular 3d object and this is what is shown here and this is a concept that has been used later a lot like example in nerf for novel use in texas where the parameters of a neural network representation are optimized this illustration of the optimization process it's a little bit wobbly because of the stochastic gradient 
descent and batch nature but you can see how the geometry evolves and there's uh many follow-up works already on this differentiable volumetric rendering work one here in particular i want to highlight from the iron dipman's group israel and europe's 2020 that extended this model to also take into account view dependent appearance conditioning the model also on the input view which allows for modeling specularities light and reflectance like in the case of the skull which is not a lamborghini surface and so it's of course important to take this into account and you can also what they've shown there very simply in the same way as i've shown the previous slides back propagate gradients to the camera poses so if your camera poses are not precise if bundle adjustment didn't work properly you can densely align the camera poses together with optimizing geometry and appearance |
Computer_Vision_Andreas_Geiger | Computer_Vision_Lecture_31_StructurefromMotion_Preliminaries.txt | hello and welcome to lecture number three of this computer vision course last time we discussed the image formation process how a 3d point gets projected onto the 2d sensor plane now in this lecture in the structure from motion lecture we're going to utilize that knowledge in order to build 3d models from multiple images observing the same static scene this lecture is subdivided into four units in the first unit we're briefly gonna touch on two preliminaries that are required for building 3d models one is camera calibration how to recover the camera parameters such as the focal length and the other is how to detect and describe features and how to match features point features in particular across different images that can later then be used for reconstructing these features in 3d in unit 3.2 we're gonna cover the most basic reconstruction form which is two frame structure from motion we're gonna discuss the apripolar geometry and how we can recover the relative camera motion between two frames and also the 3d structure from these two frames and then in the last two units we're going to discuss multi-frame structure for motion techniques in particular closed form factorization techniques and their advantages and disadvantages as well as more modern bundle adjustment techniques and in particular also incremental bundle adjustment techniques that are mostly used today good so let's start with the first unit on preliminaries and while camera calibration and feature detection could easily fill entire courses on their own we're gonna treat them relatively shortly in this course just for the sake of understanding what they are and giving a few examples to get an idea of what's happening so let's start with camera calibration camera calibration is a fundamental prerequisite for most 3d reconstruction techniques in particular if you want to obtain a metric reconstruction you first need to know what the camera parameters are in order to know how the rays emanate from the camera center or how the rays are getting projected onto the camera image plane the camera calibration is the process of finding the intrinsic and extrinsic parameters of the camera and most commonly a known calibration target such as a 2d pattern or a checkerboard is used as shown here actually camera calibration was also the first project that i tackled the first research project i tackled it's a very fundamental thing so that's something that many people get started with um so how does camera calibration work there's a whole variety of techniques but the most fundamental and basic technique and the one that most people use and that's also implemented in packages such as opencv for example is working as follows there is a calibration target and most often a checkerboard is used because it's relatively easy to find corners very precisely of the checkerboard and if you know the number of checker planes and then you know the 3d model of the checkerboard you know exactly what is the distance between the corners you can find the corners very precisely in the images and you know how this plane looks in 3d so the only thing you need to determine now is the parameters of the camera the calibration matrix as well as the pose of the checkerboard in each of the images and this is the optimization problem that you're going to solve first you're going to print such a checker board and you glue it on a very flat surface and then you capture that 
checkerboard in different poses as shown here here's an example of the checkerboard photographed in different poses you need to make sure that the checkerboard is almost always fully visible so that you can always detect all the corners and that is covering the post space well in order to get enough observations enough unambiguous observations in order to obtain a good precise calibration of your camera now in the second step for each of these images you're going to detect features in the case of a checkerboard you're going to detect the corners and you're going to try to refine them subpixel accurately and there's various algorithms for finding subpixel accurate corner placement of checkerboards here's an example and then in the final step you estimate the camera intrinsics and extrinsics and here extrinsics means the pose of the checkerboard with respect to the camera coordinate system because you're waving this checkerboard in front of the camera in each image the checkerboard will have a different pose however this is a feasible optimization problem because the pose has only six parameters and the intrinsics of the camera are maybe just 5 or 10 parameters depending on if you utilize distortion coefficients or not and so overall you have much more observations because each of the corners in the checkerboard gives you two observations one for the x and one for the y coordinate and you have plenty of them per image you have much more observations still than parameters that you want to estimate and that's why it's possible to do that now direct non-linear optimization of all the parameters is done by minimizing the reprojection error by taking these 3d points of the current estimate and projecting them into the image and comparing the distance the occluden distance inside the image with respect to the detections in the image trying to minimize that but you can't just randomly initialize the intrinsic parameters of the camera and the extrinsics the poses of the checkerboards because you will get very easily trapped in local minima so you need a good initialization for all these parameters and there's a couple of papers that describe techniques for finding such a good energizations in closed form so for instance there is this technique here at the bottom which is maybe the most popular and most cited technique from shang at a flexible new technique for camera calibration published at dpalmy 2000. 
so this is for perspective cameras the most common case and it gives you um the focal length and the principle point by constructing linear systems from the observations that you make now this is only a very coarse and approximate energization but it's much better than a random guess and so from this initialization you can also derive a good initial estimate for the poses of the checkerboards and then this together forms the initialization for the non-linear optimization process that using some gradient based technique like lievenburg markwood optimizes the minimizes the reprojection errors and refines all the parameters and typically the closed form solution ignores the distortion parameters and then you initialize the distortion parameters to zero and then during nonlinear optimization you iteratively add the distortion parameters and so that is a relatively well posed problem so you don't need a good initialization of the distortion parameters for most cameras unless the cameras have a very wide angle lens let's say some remarks there exists a whole variety of calibration techniques that are used in different settings and also for different lens models etc and these methods differ algorithmically but also in the type of assumptions and calibration targets they use there's algorithms for 2d 3d targets there is algorithms that use planes algorithms that use vanishing points etc but the most common technique the one i've just shown that uses these checkerboards or checkerboard like patterns and if you like to know more there is this very famous matlab toolbox that was used for 20 years or so here i will get the doll that's still available and has a very nice description of all the parameters and how the process exactly works this is now implemented also in opencv and there is also a chapter in the silesky book on camera calibration but here for the purpose of this lecture we're going to keep this part short now let's move on to feature detection and description now we have calibrated the camera now in order to obtain a 3d reconstruction a 3d model a sparse 3d model we first need to find points in the images that correspond to each other so you need to find salient points that are distinct that can be easily re-detected in another image so this is what we call point features point features describe the appearance of local salient regions in an image in this example here there's a few cases given where you can see the difference between a salient point and a non-salient point so for instance this point here is very salient because the local appearance is very distinct look at this patch it's very easy to find the patch in the other image however there's regions where it's harder like in in this case here where you have just like this white black transition with uh almost linear transition and so you can move this patch along that line and maybe you find other places in the image where you have a similar transition so it's not as ambiguous anymore and it's not as easy to localize that patch and then here in the sky region where the texture the colors are very homogeneous it's very difficult to find correspondence so you want to have an algorithm that finds salient locations in an image and then describes this salient location as well point features describe the appearance of local salient regions in an image and they can be used to describe and match images taken from different viewpoints they have actually been used in in the pre-deep learning era of computer vision a lot also for 
recognition you're getting a lot of these alien regions describe these alien regions putting them into a bag of words model into an svm classifier and then classifying the image based on these local features now of course these techniques are not state of the art anymore for recognition but they are still very uh frequently used despite some of the detection systems have been replaced by deep learning but they're still the basic idea and even the basic algorithms we're discussing are still used for 3d reconstruction methods and they really form the basis of the sparse 3d reconstruction methods covered in this lecture so what are the prerequisites what should these features look like well the feature should be invariant to perspective effects and illumination because if we take an image of an object like here a frontal view and then we take another image of these objects with a different illumination condition where colors can change and also where you view these objects from a different viewpoint so the perspective changes you want to find the same features back right you want here are some examples of features that have been retrieved from features extracted on the original objects you want to find the same features back and you want to find them in a way that if you extract a feature vector that then that feature vector is distinct and only matches to the correct location in the other image and that's what i mean by saying the feature should be invariant to perspective effects and illumination they should abstract away from this but they should be discriminative they should still be very discriminative they should say oh this is this is the location i'm sure this is the same location and all the other locations are unlikely otherwise i'm not giving you back this feature so the same point let's say this point here and this point here they should be described by such an algorithm with a feature vector that's very similar well if i compare this point let's say this point here then the feature vector should very should look very different and it's clear that the plane rgb or intensity patches that we would simply extract here from these images will not have this property easy to see that if i just rotate the image by 180 degree the rgb patch will look very different if i scale the image the rgb patch would look very different if i translate it would look very different so we cannot just use and match these plane rgb intensity patches and this is where feature detection and description methods come into play they have had a really revival uh in in the 1990s and 2000s and have been used throughout almost all areas of computer vision and one of the most popular one is called sift you might have heard of before it's called sif because it's a scale invariant feature transform so it's invariant not only to rotation or perspective effects but also like scale and the way this is done is by constructing a scale space by iteratively iteratively filtering the image with a gaussian so here this is the original image and then you filter it with a gaussian filter multiple times so these filters get more and more blurred here and then once this is blurred beyond a certain point you can also rescale that image because it's so blurry um that you can continue at a so-called next octave at a smaller scale you could do this at the original scale as well but it would be wasteful so typically people scale this down so once you have such a scale pyramid you compute the difference between adjacent blurred images so 
this is a blurred image and this is slightly more blurred image and you complete the difference and that's called the difference of gaussian and the interesting property of such difference of gaussian images is that if you detect extrema in these difference of gaussian images is extrema these interest points that we're detecting are blobs in the image so this is a blob detector it finds blops in the image and finds this at multiple scales in that image because we're doing this difference of gaussian at all scales at all adjacent scales and we're finding then this extrema by looking at the scales at this difference of gaussian scale space at this volume and trying to find maximum and minimum points in that space where maximum minimum means that we have a point where all the other surrounding points not only at the same scale spatially but also in terms of the neighboring scales are lower for a maximum or higher for a minimum so these are extrema in scale space why are difference of gaussians block detectors well let's look at what a difference of gaussian does here we have two gaussians a green one and a red one and if we take the difference of these two we get this kind of mexican head similar to a laplacian filter here the blue one and this is a 1d example here of course so if you imagine now 1d signal and you're swiping this blue filter over that signal and there is a blob that has the radius of of this here so this depends on the scale at this scale we have this particular radius here so we have a blob a disc or something like like like a step function kind of or two step functions um where we have a one here and a zero outside um then this filter here will will respond and this is the same example in 2d space right so this is what this difference of gaussian filter looks in 2d space this is why it detects blobs because we apply it at multiple scales it detects blobs at multiple scales so here's an example here we can see such a sift scale pyramid we can see that the images becomes more and more blurry if we go down in terms of resolution and scale here and what's interesting is that now if you look for instance at the eye and this eye is detected at a particular scale so there the value of the center of the eye which is a blob feature is high at a particular scale but if you look for instance at the time the value of the tie is low so it's not an extreme at that particular scale but if i'm not down scaling the image i'm looking here at this scale for instance and we see that the the tie is also detected you can get a feeling of detecting blobs at multiple scales by looking at okay so we have now detected potential interesting points alien points in the image at multiple scales and what sift does then is it looks at these interest points at the respective scale so this feature descriptor is also scaled with the scale and then it looks at all the gradients it computes sobel filter response um and computes the gradients from there and then aligns this descriptor it rotates the descriptor area based on the dominant gradient orientation and by that it becomes rotation invariant now we have a feature that's not only scale but also rotation invariant simply by looking at this gradient histogram and then rotating it and then once we have rotated it um we compute gradient histograms in subregions and this form then the descriptor so here's an example of two by two subregions in final version of the sip descriptors they use four by four sub regions but here we have two and two horizontal and two 
vertical regions so these here and we're looking at the gradients inside these regions and building a histogram of these gradients so for instance in in this case here we have a lot of gradients a lot of gradient directions that point to the top right and so that's why this spin here responds highly but then to the bottom left we have only maybe this one here and this one here so we have a less strong response in that direction we complete these gradient histograms for the local subregions and then we concatenate these histograms into a big vector in the case of sift a 128 dimensional feature vector and the reason why we're looking at these histograms is that these histograms are illumination invariant so if you look at the gradients then uh you you because you get invariant to relative brightness changes and if you look at histograms you're not only invariant to the brightness changes but you also invariant to slide perspective transformations such as translation so you can slightly translate or perspectively deform the image but the histogram will not be affected as strongly because it aggregates information over a larger region than a single pixel in a plane rgb feature would be that's the intuition why these histograms are useful to achieve the properties that we discussed in the beginning so this was one example maybe the most prominent one it started a revolution in 3d reconstruction because with sift it was possible for the first time to match features very robustly um across very different viewpoints even different camera models but following up on sift there have been a variety of feature detector feature detectors and descriptors proposed such as surf usurp brisk or fast and many more and recently also deep learning learning-based ones however shift was really seminal due to its invariance and robustness um and kind of revolutionized recognition and in particular machine particular matching and enabled the development of this large-scale structure for motion techniques that we discussed today and despite being more than 20 years old by now a sift is still used today so for instance the state of the art structure from motion pipeline that many people use today and compared to as a baseline is called call map and internally uses a variant of sift till today now how can we get matches using these descriptors well if we have two images and we have detected features in these images then we can use efficient nearest neighbor search techniques that build a search tree in one of these images to efficiently retrieve our particular query features in the first image the closest features in some let's say oblivion space this vector space second image so when i find that 128 dimensional vector that in terms of l2 distance is closest to that particular feature in the left image for which we want to compute the match now this is not always going to be unambiguous and therefore ambiguous matches are typically filtered by computing the ratio of distance from the closest neighbor to the distance of the second closest neighbor so i compute not only for a particular query match in the first image what is the closest match in the second image based on that distance of the 128 dimensional space feature but i'm also computing the second best and if that distance between the first and the second best is um small in other words if that ratio is large second best is almost the same quality as the first then that match is removed because it's uh it's not clear if maybe the second best is actually the 
correct match and they have just been um the first is just first because of noise in the input images and that's what what i mean here in the last sentence by a large ratio 0.8 indicates that the font match might not be the correct one a ratio of 1 would mean that there is two features in the second image that are exactly the same distance from the query feature and here's an example so here we have two images captured from the same two images of the same object captured but from different viewpoints you can also see there's it's a different day because there is three here in front of the church and here it is not and you can see that a lot of matches have been retrieved that are correct here in green but there is also some feature correspondences that have been achieved that are not correct for instance here in this case there is a large ambiguity because a lot of these local structures look very similar to each other it's easy to make a mistake but overall if you have found enough good features then and use these features now to estimate the a bipolar geometry and the 3d structure as we're going to discuss next |
Computer_Vision_Andreas_Geiger | Computer_Vision_Lecture_55_Probabilistic_Graphical_Models_Examples.txt | and finally in this unit i'd like to show you some very simple examples of this algorithm in practice and also guiding towards the exercise tasks while then in the next lecture we're going to consider more realistic examples with real problems such as stereo matching so the first example i want to show you is this vehicle localization example that i've already alluded to in the first part so the goal here now at the longer time horizon is to estimate the vehicle location at ten different time instances so these are the time instances from left to right t one to ten where we state of each of these random variables we have one random variable at each time step these are ternary variables so they can take three different states lane one lane two or lane three depending on if the vehicle is at lane one two or three at this particular time step so for this particular situation that we see here we have this particular configuration of random variables at the bottom so we have 10 random variables here where each of them can take state one two or three now in addition to this we have observations observations we specify as continuous [Music] probability scores where we have an observation for each time step and we specify with which probability the vehicle is observed at a particular lane for that time step so for instance for the first time step we observe the vehicle with 70 percent probability our perception model is confident that this is on lane one but there's also some probability for lane 2 and even more for lane 3. so we have now converted simply this numerical representation into a more easily easy to understand graphical representation we can do the same for the other observations so let's assume these are the observations and what happens of course if if we just naively take the most likely state independently at each time step according to the perception model we see is such a result where we already have we have a very uh implausible transition here transition that's physically implausible and that we try to avoid now by integrating this prior knowledge using with the help of graphical models because this is not a very plausible inference result instead this is a more plausible inference result it's a little bit less likely under the perception model but it's much more likely under the prior knowledge under the prior that we have about the world the factor graph of this problem looks like this we have the unaries that specify um or that are determined by the perception model the observations these are called f and then we have and these are different here f1 to f10 because we have different potentials different observations at the time each time step but then we have a g that's the same and that's connecting nearby time steps x1 and x2 x2 and x3 so there's a pairwise potential that connects two so the distribution can be written or factored according to this mark of random or this factor graph here into this representation the unit vectors we directly take as the observation probability scores so for instance 0.7 for the first lane um 0.2 and 0.1 and so on the question is now how should we choose the pairwise factors and of course ideally we want to learn these pairwise factors this is in this case is a three by three matrix because we have three states for the first variable and three states for the second variable and so we have a three by three probability table here and the 
right thing to do here is now to have some training data and learn the optimal parameters from this data and this is what we're going to cover in lecture seven but now for the purpose of this lecture where we just discuss inference we assume that this is given to us and let's assume that this is somehow reasonable and reasonable means that well the highest probability is um maybe that we just stay on the lane because that happens most frequently and then there is some probability for a transition of lanes but not only to the adjacent lanes not to uh by two lanes which is physically completely implausible so we have a zero probability here or the potential is zero here now if we use this model that combines this unary with these pairwise terms and we now do some product inference and look at the marginal distributions we obtain this result and you can already see that compared to the observations the probability for lane free in this particular state here at this particular time has changed completely all right so we can see how this prior knowledge is incorporated into the solution now this was the marginal solution but we can also apply the max product algorithm to obtain the maximum upper steroid solution and this is the map solution that we get this is the solution that we wanted and that's the solution that we get if we use exactly this smoothness or pairwise term that we've just introduced a second example is image denoising let's assume we bought a camera but the camera is is not so good the camera is a black and white camera it has just 10 by 10 pixels 100 pixels in total and not even that it actually takes very noisy images so this is what you want to take but this is what you actually take so now the question is how can we go from this noisy image to the clean image and that's called image denoising and we can use the same properties that we have seen in the stereo matching class that namely nearby pixels are more likely to have the same value than to have different values now not in terms of disparity maps but here in terms of rgb value or binary pixel values so if we model this using a markov random field and if we just assume unity potentials this is what the model would look like we have 100 variables because we have 100 pixels here they are binary and we just have unary potentials and we specify these unit potentials such that we have this this is again the iverson bracket so we have this indicator here that says well this potential is equal to one if the variable takes the value of the observation so for instance here if we get a black pixel that's good here if we get a white pixel that's good this is our observation and we have used here the log representation so these potentials here are log factors so the probability distribution is one over c the product of these factors equal to one over c of x of the sum over these log factors so we'll directly specify this problem now in contrast to example one here we directly specified in terms of these log factors which we of course can do and and again for solving the inference problem of course we pass log mesh messages as well now if we do that what is the outcome well the outcome is unfortunately the same image the noisy image because we have not assumed any prior knowledge we have just said well we have these unary constraints at each pixel and of course every pixel if we maximize this this is maximized by maximizing individually each of these unit potentials which is simply maximized by setting the variable to the observation 
so what we do want to do instead now is to integrate prior knowledge by adding constraints to this problem so what prior knowledge do we have about this image assume we know what images typically look like assume we have some images like this well one thing that was already mentioned is that well neighboring pixels tend to have the same label but can we make this a little bit more concrete can we quantify this how many neighbors actually share the same label let's look at this pixel grid here and let's find all the edges that are separating pixels with different color and these are indicated here in red if we count this number of edges or transitions this is where now later the potentials are going to be defined they are defined at between two pixels so they are going to cross these edges we see that we have well first of all 180 neighborhood relationships in total so there's 180 edges here because we have 10 by 10 times two one for the bottom and one for the right and minus 20 because the picture ends so this is 100 edges little gray edges in total but then we have only these these red edges we have only 34. so there's only 34 edges where we have a transition from black to white while there is 146 edges where the transition is from black to black or white to white so the label is the same so just in other words a factor of 4.3 more transitions where the label stays constant compared to transitions where the label changes and now of course we can exploit this by introducing a smoothness assumption by saying well it's more likely that the json pixels are the same and in the ca in this simple discrete case we just consider a very simple um model for this so here we have again the model with the unity potentials but now also adding these pairwise potentials on the four connected grid here we look only at a mrf that's that's a 3x3 sub image here but of course you have to imagine this as continuing to a 10 by 10 grid in the same structure as we see here the unaries are the same as before but now we have the pairwise potentials that are defined as such they're defined such that if uh on two adjacent so this means adjacent neighboring sites if for two adjacent pixels the state the inferred state is the same x i equals x j then we have one otherwise we have a zero and we multiply this with alpha which is the strength of this regularizer that is a hyper parameter of the model if you make it too strong then all the pixels will turn to the same value if you make it too weak if you make it zero then we get the observations back because we're effectively disabling this term but you can already see now we have a trade-off the model or the inference algorithm has to do a trade-off has to weight the observations with respect to the smoothness constraints and it cannot fulfill both either it fulfills the observations and has ones here everywhere or it sacrifices some of these ones here it makes some of these inferred variables not the same as the observations in order to obtain more rewards to obtain more once uh on on the on the pairwise potential side here because we only obtain a one if two adjacent sides are the same and otherwise we obtain a zero here so we have this trade-off now and the parameter alpha controls the strength of the prior and you will implement this and play with this in the exercise and then next time we're gonna talk about how these graphical models um are used in more advanced problems like like stereo estimation and so on with larger label sets or multiview reconstruction where 
you have also larger potentials potentials beyond pairwise potentials let's stop here thanks |
Computer_Vision_Andreas_Geiger | Computer_Vision_Lecture_103_Recognition_Object_Detection_and_Segmentation.txt | let's now move to object detection and some other related tasks that can be solved for similar algorithms such as object instance segmentation or object mesh prediction let's remind ourselves again what object detection is about the goal is to localize and classify all objects in the image we're not interested in the stuff categories here we're just interested in objects but we want to localize them in terms of a tightly fitting 2d bounding box as well as a class label so and say that this is a car maybe this is label number five and this is a poll maybe that's label number seven and at the same time we wanna have a bounding box that's the high level idea of course there's many motivations for why we wanna do that we need it for navigation we need it for self-driving object detection we need for manipulation we need it everywhere and here's an example this is an image from mobileye that's a company that produces driving assist systems and it has a system that produces a warning based on camera input if the vehicle comes too close to the vehicle in front by detecting that object in front and also estimating how far that object is from the eagle vehicle let's look at the problem setting a little bit in more detail the input is an rgb image or a laser range scan in case we want to do 3d object detection we'll have a very brief look at that in the end of the unit and the output is a set of 2d or 3d bounding boxes with category label and confidence so we don't only want to know where the object is and what it is but we also want to know from the detector what is the confidence this is a number that is relevant for any subsequent processing stage if we predict something very low confidence we might still want to drive on but if there's a high confidence in a prediction then we want to take it seriously and similarly the evaluation metrics that we're going to discuss are also going to take the confidence the detection confidence into account here is an example for 3d detections at the bottom so this is the 3d equivalent instead of 2d bounding boxes we are now predicting 3d pounding boxes in these point clouds in both cases 2d and 3d note that the number of objects and the object size are not known a priori so it's really a complicated output space it's a structure prediction problem where even the number of output variables is not known a priori we go into algorithms let's first discuss a little bit how we can measure the performance of such an algorithm in classification and semantic segmentation we've seen it's easy we can just use accuracy but even in semantic segmentation accuracy is not enough so we've talked about intersection over union and that's also the metric the performance metric that we're going to use in object detection so let's look a bit more detail into what is intersection over union in this case for bounding boxes but you can do the same thing for masks let's suppose we're seeing this image here and our object detector returns the red box as the predicted detection but the true box is the green box here that is the box that has been annotated by the annotator now the intersection of reunion measures the area of the overlap of these two boxes divided by the area of the union of these two boxes and as you can see if both the predicted and the target box are the same this number will be one but that's actually very hard to achieve in practice we'll already be happy if 
the intersection of reunion is 0.9 and even if it's 0.7 it's already a pretty good detector but if it's below 0.5 then the alignment of the prediction and the true box is not very high and so we say the support poor detection and that's why typically in most evaluation scenarios a detection threshold an iou of 0.5 is considered now this was for a single object how can we fairly measure detection performance in case of multiple objects what we do for this is we run the detector with varying thresholds or what i actually mean with that is that normally you just run the detector once but it returns a confidence for each detection and so we can change the confidence threshold to from the set of all detections that are returned by the object detector get just a subset of the detections that are the detections that are higher than this confidence threshold that i'm looking at so this is what it means to run a detector with varying thresholds in practice we almost never have to run the detector again we just threshold the detections that are returned based on the confidence of the detector in these detections now once we have created such a subset we assign the detections to the closest object and we do this using bipartite graph matching using the hungarian method for example such that we have a one-to-one correspondence of predictions and targets or ground truth bonding boxes and you can already see that that it's it's not great if you produce a lot of wrong boxes so it's important that you you really detect the objects correctly but you detect them only once and remove all the other detections that might be detected or produced in the vicinity of of a correct detection by the detector so now once you have done the association you count the true positives the false positive and false negatives what are those the true positives are the number of objects that are correctly detected where the intersection of reunion is bigger than 0.5 the false negatives are number of objects that are not detected the number of ground truth target boxes that are labeled by the annotator but there's no prediction that's nowhere close and then there's the false positive which are simply the wrong detections these are detections that are returned by the object detector that are either not overlapping enough with any of the targets an iou of smaller than 0.5 or that are overlapping with a high iou but there is actually already another detection that has been associated that has a higher iou even so this this assignment here is based on these ious as well and so the detector effectively produced two detections for one object and so one of these two detections must be counted as a wrong detection because there are no two objects there's only one object and then we compute the average precision from these numbers and this is the metric that we're going to consider and that's average precision is kind of the integral under such a curve so what is that well the first quantity that we can compute from true positives false positive and false negatives is the so-called precision this is true positives divided by true positives plus false positives and the recall is the second number that we can compute the recall is the true positives divided by the true positives plus the false negatives intuitively what are these two things the precision is high if the objects that are returned by the detector are actually correct they have a high overlap with the targets um but there's no false positives there's basically false positives 
lowers this number here there's no detections in spurious places or no double detections for one object it doesn't matter if an a true box hasn't been detected precision can still be high but at least there shouldn't be any false positives recall is the opposite a high recall is obtained if for all of the true boxes i have at least for each toolbox i have at least one prediction that's nearby but it doesn't matter if i have a lot of false positives that doesn't count here so i can produce a lot of detections and recall will just increase now of course if i want to have a good detector i want to have both a high precision and a high recall so if i plot this on a curve like this that i want to be in this area of this plot but that's hard so what we therefore do is we vary the detection threshold and enter all like these these precision recall points into the position recall plot here for these different thresholds and that has been done here so this is the orange curve that shows these points that are connected by a line segment then and then the average precision is simply taking the area under this curve measuring the area under this curve by taking the average over this in this case 11 discretization steps sum over the precision for this respective call we're trying to produce a value for each of these recall levels by varying the threshold appropriately and then we take this sum over these numbers here but we don't do it exactly like that as you can see here there's this maximum which means that if we're at this point here we take the maximum recall to the right which is this here so that this function which we integrate where we compute the area under is actually a monotonic function and not influenced by this this very low precision points here this non-monotonic behavior of the detector so it becomes a little bit more robust that way this is just how average position is defined but in general you can see your detector must for varying thresholds return either high precision or high recall and there must be some values where both the precision and the recall is high in order for this detector to be ranked highly if the curve would be like this then the average precision number would be smaller than for this orange plot if the curve would be like this then the average precision number would be higher and we want of course a higher average precision so far for evaluation let's now talk about object detection methods and let's start with one of the most simple methods which is a sliding window object detection the idea of sliding window detection is to run a sliding window a crop in green of a fixed size over the entire image and extract features for each of the crop locations so i'm running it here here until here and i go to the next line i'm running it here go to next line running it here and here i have finally a crop that actually exactly fits a pedestrian and for which i would expect if i'm taking this crop and passing it through a image classification algorithm it would return a high probability for predicting a pedestrian here so effectively we run this crop over the image and classify each crop independently with an object versus background classifier for example pedestrian versus background car versus background we can do this using any features and any classifier like an svm for example and this is also one of the classifiers that have been used in the beginning using for example histogram of oriented gradient features that we'll discuss next and because these objects might 
appear not only at different locations but also different sizes in the image we need to run this algorithm also for different possibly different aspect ratios and scales of this box so it's really computationally expensive to run this sliding window detector and then another thing we do is because like in the vicinity of a detection often we also get like high detection uh highly confident detections we are doing non-maxima suppression so we are we're suppressing all the detections that are in the vicinity but don't have the highest possible confidence in order for not getting penalized in the evaluation later on by predicting too many false positives now one of the first methods that did this very successfully and became one of the landmark techniques for a decade or so is the method from dala landrix called histograms of oriented gradients for human detection appeared at cvpr 2005. the first question to ask is well what features can we use for this lighting window detector and we know already that rgb pixel space is not a good idea and so what they do in this paper is they use so-called histogram of oriented gradients which are very similar to sift features and also appeared at a similar time the idea similar to sift is to represent patches with histograms of these oriented gradients here's an example this is an input image we first run well differentiate that image spatially and get the gradient magnitude and the gradient orientation here discretized into eight angular bins with the different colors then we subdivide this image domain into in this case four by four cells and for each of these cells we compute a histogram of the orientations weighted by the gradient magnitudes so for example in the first cell we have a little peak here for orange because there is some orange and yellow gradients here but not that many and for the second cell here we got a histogram that's much larger peaked at orange and yellow but also has a little peak at blue because there's some blue gradients in inside here and we do this for all the cells that we have to find and then concatenate all these feature vectors into a big feature vector and that's the feature vector that we're going to use with a support vector machine for classification similar to sift hawk is invariant to small deformations if we just translate the input by one or two pixels it will not affect the hog feature much but it will affect the rgb pixel space features a lot similarly if we just scale the image slightly or rotate it or apply a small perspective transformation the features will not be affected much and that's the benefit of these hog features that they are invariant to these small deformations however they are not invariant to large deformations and there's a lot of categories where large deformations happen they can happen because there's different viewpoints but they can also happen because some objects just deform non-rigidly and what has been done in this work which was also one of the most influential works of the this particular era in object detection is to use part based ideas part based modeling ideas from the 70s here's this image here on the left is from a paper from the 70s and 80s where the idea is you model components and then you model relationships illustrated here by the springs and you're allowed to stretch and squeeze these springs this is all implemented with a graphical model but then you have this ability to actually locally reconfigure the model and allow for these non-rich deformations so the idea is to 
model objects based on their parts and then model distributions of these part configurations and this allows for more invariance than the naive hog based template which is necessary for example for non-rigid deformations however inference is much slower because it's effectively implemented using a graphical model and there is not much gain with respect to a actually has been retrospectively found with respect to a multi-view hog model using just multiple planes to capture the entire space of non-rigid deformations and viewpoints works actually quite well as as well okay so classical approaches haven't really led to major breakthroughs before the deep learning era and only worked for certain non-safety critical tasks the performance wasn't really good enough for safety critical tasks why is that well first of all as we know already by now computer vision features are very difficult to hand engineer and it's unclear how a good representation should actually look like furthermore sliding window approaches are slow in practice and inference is complicated inference in in these complicated part based models is even slower the computation is not really efficiently reused as it is in deep neural networks but as we know deep learning has changed this and has lifted recognition has really boosted recognition is maybe the the ideal application for recognition which is so hard to hand engineer where learning is really crucial so has really transformed the field of recognition here's an example this is the deformable part based model a highly sophisticated deformable part based model with hog features and a graphical model inference and it was really the best performing model for several years 2012. now with the first deep architecture in 2015 there was a three times improvement in a average position uh when moving to these deep learning methods and ever since from 2015 to 2017-17 there has been another um like a free point a three times increase in only 2.5 years and performance continues to increase since 2017. 
this is another example of the importance of end-to-end training based on on large data sets and in the remainder of this unit i want to give you a little overview of deep object detection algorithms and of course this is just a little snapshot and shows just the most influential works but luckily most of the most influential works have been done um by just a few people and and and one of the main drivers is ross gerschick and this is also where the slide credits go to so these slides are taken from one of his tutorials the first model that the team around ross gerschick developed is called rcnn based convolutional neural network and the idea is really simple we use an office shelf region object detection proposal algorithm it's a traditional algorithm that people have been using in state of the art object detection methods before deep learning and this is an algorithm that returns a set of bounding boxes maybe two thousand that are likely contain an object but the model doesn't know what category or if it's correct so there must be a second stage but it reduces the output space already significantly because we have to deal with just 2000 proposals from which we have to to determine if they are correct and in case they are correct which category they belong to so this is the per image computation then for each of these boxes we crop and warp the input image so for example for the yellow one we have a cropped and warped input image to a fixed size and now we simply run a standard convolutional network for image classification on this crop so we have transformed effectively this object detection problem to an image classification problem and we don't need to do it in a sliding window fashion because now we have this proposal mechanism that gives us 2d 2 000 boxes and we just need to run our confidence 2000 times and this predicts then the category of that box this is this linear classifier and at the same time it tries to refine the corners of this bounding box so if because this proposal mechanism is not very precise um this confident also predicts a delta for each of the corners this is called the box regressor refines the proposal location localization with a bounding box regressor that regresses the deltas but otherwise it's a very simple confident like a standard confident for image classification looking at this we can generalize this framework this is the generalized framework again we have in yellow the per image computation and in green the per region computation per image we compute some features and some bounding box proposals and then per bounding box proposal we compute a featurized representation for each of these proposals and we run some neural network in order to do some further processing in order to perform several tasks for example predict the category of this box or to predict the offsets of the bounding box to relocate to refine the bounding boxes so if we look at this r scene this basic rcn algorithm from 2014 in this general framework it looks as follows we have the input image the per image computation just copies the image into this feature there's no network here then we run a traditional region proposal method we crop and warp this image based on these proposals run a confidence and classify that box and regress the box corners this is rcnn region cnn in the generalized framework what is the problem with our cnn well per region there's a heavy computation involved because i have to run this image classification network in other words i need to do 2000 full network 
image classification network evaluations for each of these regions i need to do a full network evaluation and so it doesn't scale well it's very computationally intensive there is no computation feature sharing that's the reason why this is so computational in is inefficient and in addition this low traditional region proposal method methods add to the runtime and in also these methods are not very good they have very limited recalls so the there's an upper bound to rcnn based on this traditional region proposal techniques that are used the next version of this model is called fast rcn and appeared in 2015 in fast arses and the idea is to use lighter weight per region computation and shift some of the heavy weight burden to the per image computation and that way gain a speed up so what is done here is instead of copying the image running a fully convolutional network to produce image features a fully convolutional network maps the image to a lower resolution spatial feature map shown here in white and then we have a region of interest pooling operator that converts each of these regions from the proposals into a fixed dimensional representation that is then passed through a more lightweight mlp with much fewer parameters and that can be executed much faster in order to predict these different quantities that we want such as the category and the box regression now we have shifted a lot of these heavy computations from the per region computation to the per image computation instead of doing it 2 000 times we have to do it only once and that's why this method is much faster let's look at the individual components for the fully convolutional network part we use a standard backbone any standard convolutional network might serve but we remove the global pooling such that the output spatial dimensions are proportional to the input spatial dimensions and of course if we use a stronger backbone and that has been also observed by the offers here if we use a stronger backbone then also the detection accuracy benefits so features really matter better use a good backbone the rye pooling operation is very simple it takes these proposals a region proposal of interest and snaps it to the closest grid cells and then this royal pooling transformation computes a max pooling to a fixed dimensional representation so any arbitrary size proposal from this feature map here can be converted into a fixed dimensional representation for example a two by two feature map in practice these are of course higher dimensional but here for example as an example a two by two feature map is shown and this can be done input to an mlp because the size of this representation is fixed it's always the same this is the key of this roy pooling transformation for any proposal we extract features from the feature map such that they are in the same format with a fixed dimensional representation that's all fine but what is the problem with fast rcnn we have seen that we have removed the heavy per region computation and now there is computation and feature sharing that's good but there's still this slow region proposal method and this very generic region proposal method that has a low recall so let's get rid of this traditional rpn method and what we're going to do is we're going to learn the proposals as well and we're going to share the computations with this general feature extractor here with this feature encoder here so instead of computing them separately from this image we're going to take these feature maps that are computed from 
the image and apply the rpn on top of these feature maps and this rpn is now a neural network that can be trained end-to-end with the entire model so here's an example of this region proposal network we basically run a sliding window over the image it's a 3x3 sliding window over the feature map sorry this is a run over this feature map and it scans the feature map looking for objects and then for different anchor boxes this yellow one here is an anchor box it tries to decide if there is an object remember it's a proposal network it just needs to determine if there is an object so it tries to determine if there's an object and then also tries to um adjust the coordinates of this anchor box by regressing deltas to this bonding box such that it's a better fit a tighter fit to the object so here's the objectiveness chord as predicted and here's the anchor box regression that is predicted and this is for another example here in this case the objective score is very small because the feature is not central on the object with a single anchor box that doesn't work so well because it cannot deform arbitrarily or at least very hard for the model to learn so what is done in practice is to use k objectness classifiers and k anchor boxes with different aspect ratios to cover different aspect ratios of objects just to mention there is an alternative line of works that doesn't use this region proposal mechanism and that we don't have much time to look into here in this lecture but i just want to mention it one example is yolo another is ssd which shifts even more more of this computation to the per image processing and tries to predict 2d binding boxes in a single stage there's no region proposal and proposal based prediction but there's just a single convolutional network that directly predicts these anchor boxes and the classes but because in a single stage that is a very difficult problem while being potentially faster and more efficient often the accuracy is not on par with the two-stage detectors that are state of the art currently okay let's go back to the two-stage detector one additional innovation that has been presented here at tbpr2017 is to improve the predictions by considering a multi-scale feature representation this is again the topic of objects observed an image appearing at multiple scales potentially and so if we can take into account that this happens at multiple scales efficiently in a neural network then we can potentially also predict better detections so we want a an image feature representation um that is multi-scale right so we want to detect and classify objects no matter at which size they appear in the image and this fbn pro like introduces this ability there's different strategies how you could implement such such a mechanism the first is to simply use an image pyramid and then independently for each resized image apply the object detection algorithm but that's very slow a second strategy would be to create multiscale features using a standard neural network with pooling and just predict at the lowest feature and this is the approach that's taken in the models we've discussed so far fast and faster rcnn yolo etc but it leaves all the computation to the features and while it's fast it's sub-optimal it doesn't predict all the scales equally well because we are have kind of a preferred scale that we can predict well the third strategy is a naive in-network pyramid where we have this this feature hierarchy but then we do predictions at multiple stages now this is still fast but 
again it's still sub-optimal because at the lowest resolution we have very strong features because we have passed many layers of the neural network at the highest resolution we have just passed a few layers so the small objects with the small objects we can predict with much much fewer capacity so the predictions will be worse for the small objects the features are worse and the predictions will be worse and so what is proposed here in the feature pyramid networks is to have a unit-like architecture we have seen that before there is a feature pyramid um that scales down in terms of spatial resolution and then we have to skip connections to the right and we have a inverse pyramid that scales up again and at each of these scales we're making a prediction now this is one neural network and now we have both at the low resolution and at the high resolution we have strong features because we are we have this encoder and this decoder with the skip connections here so it's basically just an implementation of a particular unit variant and and prediction that multiple resolutions that is proposed in this paper is a very simple idea it's quite powerful and works very well another thing that we can do with this type of two-stage detection models is we can add additional network heads as outputs to this framework which has already been indicated in the previous slides so one example is mask rcnn where in addition to the class and the box regressor we also have a couple of more convolutional features per region of interest and predict a segmentation mask and similarly in dense pose a per pixel a texture map coordinate is predicted corresponding to a human body model that can be used to infer human body pose so let's look at this so-called instance segmentation algorithm mask rcnn in a little bit more detail the goal here is to assign a semantic and an instance label to every pixel of an object bounding box is not enough anymore now we really want to delineate the individual objects and here is the figure a little bit larger than before and what we do here in addition as already mentioned to the class and the box to the class prediction and the box regression we add a few convolutional layers to predict the foreground background mask per detection so here this orange region is the foreground everything else is the background and we do this for each of the detections and we supervise of course also densely with annotated instances there's a little change that has been made to the faster rcn and backbone which is that this roy pooling operation has been replaced with the so-called roy align operation which is a bilinear interpolation instead of this max pooling and that is that allows for much more precise um estimation of object boundaries and then this extra head here is a standard convolutional network that predicts binary masks and various architectures have been tested in this paper here here's an example of what the supervision looks like we have an image with a training proposal and we can see the foreground background mask the ground roof mask for this particular proposal that has been extracted at this location similar here for the bed or the kit and the results of this mask are cnn are just amazing this there's also have been some improvements also to the detection um model to the backbone to the training etc so also the 2d detections have been become better but as you can see the instance boundaries are really precise and sharp so it's really it's really impressive these results here i believe 
evaluation can be done similar to evaluation of object detection algorithms by using the intersection of reunion but here now at the mask level not at the bounding box level so it's much harder to obtain the same iou here because the correct mask shape has to be predicted as well as already mentioned there are several now with this general framework there's many other possibilities that what we could potentially predict one possibility is to predict um a texture map coordinate of a human body so in this dense pose work where the main contribution was actually a novel data set that was annotated with as you can see here with these colors that correspond to locations on a mesh so now we can predict these locations on a mesh we can learn if you have a big data set like that we can learn to predict these locations on the mesh and we can then this is a video from from this work we can then run this algorithm and do this in parallel for many many objects in the scene and fit a human body model to these predictions this is a paper that created a lot of attention in one of the recent vision conferences and here's another example where the prediction is not 2d but 3d this is a mesh prediction so for each 2d bounding box the goal is to predict a 3d mesh which is also possible with this type of models and what is shown here finally is the prediction of 3d bounding boxes this is basically an a version of fast rcnn for point clouds it's called point rcnn it's published at cvpr2019 and really all the components that you're seeing here are very similar to the components of faster rcn and except that the input is now point clouds and the output is 3d bounding boxes not 2d bounding boxes if you want to play yourself with these type of models there's a great toolbox released now by the facebook team that has developed or made most of the major contributions of this type of work it's called detectron 2. you can access it here at this link it has pre-trained models for all kinds of outputs and it's quite fun to play with that's all from me today thanks |
Computer_Vision_Andreas_Geiger | Computer_Vision_Lecture_122_Diverse_Topics_in_Computer_Vision_Compositional_Models.txt | another research area that i'd like to show you is on compositional models it's about trying to learn about the world in terms of its compositional structures in order to build more robust models and models that are more useful and more semantically interpretable and there's many aspects of this research questions and the results that i'm going to show you are quite biased towards the results that have been obtained from my research group but there's also many other works um and i'm happy to tell you more about these if you are interested in that but let's talk about some specific examples that we have been looking at one area that we have been interested in is in 3d shape abstraction in particular unsupervised 3d shape abstraction and one of my phd students this pina partially dewis has been working on several papers along this this question the motivation for this is simple here are some of the 3d representations that we have learned about voxels points meshes and implicit representations and the limitations of these existing 3d representations are that they require a large number of unstructured geometric elements or such even maybe even less interpretable neural representation and therefore do not convey any semantic information for instance about the parts the functionality or the affordances like what are suitable areas etc um and so this work here called super quadrix revisited as you can see here by the shape primitives these are so called super quadrics it's a very simple form of representing shapes based on very simple geometric primitives the idea is to represent shapes using just a few primitives and still we are able to recognize maybe even more semantically as before what the underlying shape was if i ask you to classify or identify these three objects you would probably have no issue in telling me what each of what each of these objects are despite they are represented in a very abstract way with very few bits these representations still convey a lot of semantic information also a lot of geometric representation if you think about a robot that has to interact with the environment and if you have to navigate um then in this case maybe this level of granularity is sufficient to fulfill the task but the challenge in inferring such type of representation is that there's a variable number of primitives so you have an inference problem where even the number of unknowns is unknown and in addition there is very few annotated data sets like this so you have to really learn without supervision which makes it hard so the goal here is to learn 3d shape abstraction to infer also the number of the primitives but to not require any supervision at the primitive level so the primitive level means that i don't know like this this part here is the wing of a plane etc i just have a point cloud as an input so here you can see two meshes but what we actually take from this meshes this is synthetic data what we take from this meshes is a sample point cloud 1000 points randomly sampled from the surface of these objects as a substitute for you know an object that has been scanned with a laser scanner for example so here maybe a little bit confusingly here what is shown is the mesh but what the input to the method is just a sparse set of points from that mesh so the input is a point cloud or an image we want to train a model that's given a condition an image or a point cloud 
predicts such an abstracted shape and as a supervision we want to just use the point cloud but no labels at the primitive level just a point cloud derived from this and this problem has actually a long history and this is one of the older papers it's not the first one but it's one of the first ones from um pentland my mit it's called part structured descriptions of shape it has been has appeared in 1986 so like almost 40 years ago and already then the uh the vision was that well do we really need to represent scenes in a complicated way in pixel in terms of pixels and and meshes with a lot of vertices or is it maybe not sufficient to have just very simple shape primitives like super quadrics that have been proposed in their paper and that we have been revisiting 30 years after and such a super quadric just have has 11 parameters which means that a scene like this here on the left and this is a photo from that paper so that's why the quality of that photo is bad but i hope you can still recognize it a scene like this on the left here can be stored with just 1 000 bytes so with just a very small memory footprint we can represent the scene as complex as this that has many aspects to it it has you know you can recognize from his image there's two people one male one female they're holding drinks in their hand there's a sun shining there's a palm tree probably or a beach or something so very rich semantic properties conveyed in just 1 000 bytes which is not possible with an image or mesh representation however this line of work has has created a lot of excitement initially but then has been abandoned because it didn't really work what they tried is to fit these models to single instances using optimization and what has been found later on like 30 years later is that now well this this like fitting approaches didn't work but if you use deep learning it starts working because then you can learn from a lot of data and then the network automatically takes advantage of the regularity across many different scenes and then this fitting problem becomes in a sense regularized through the data and that's what's happening so we are using these super quadrics this you can think of these as ellipsoids that have a little bit more parameters than just an ellipsoid can express slightly more complex shapes but just slightly and then of course each of these has also a post parameter so we can put it at arbitrary 3d location and in arbitrary 3d orientation and we have also probability of existence that we have to model this is a latent variable in our model because we don't know the number of primitives that we have to infer and then well the network architecture is quite simple it's a standard deep neural network architecture that takes an image or a 3d point cloud and encodes that into some features and then predicts for a fixed number of primitives this is the upper bound that we can predict the upper bound in terms of the number of primitives predicts for each of this primitive the size the shape the translation rotation probability of existence as well now the question is of course how do we train this network and the loss function is the crucial part here so this overall loss is composed of three components the first is a primitive to point cloud loss that measures the distance from the predicted primitives to the point cloud and then the second one is the point cloud to the primitive loss which measures the distance of the point cloud the supervision the unlabeled unstructured 3d point cloud to the 
primitives that have been predicted and then there is an additional regularizer on the existence we want to predict at least one primitive otherwise these other two losses become ill post and parsing money loss which induces sparsity we want to obtain a parsimonious representation which means we want to obtain a representation in terms of primitives that reconstructs the scene well but that has as few primitives as possible in order to reconstruct that scene well and that's the best possible representation we can have for that scene yeah and i don't want to go into detail into these loss functions uh just a little glance into one of them um the second one is even more technical to do efficiently um so this loss function is the loss function that compares primitives to point cloud distances and what we have here is basically well um we have these deltas here which are uh for each point on the primitive we want to find the closest distance and these are sample points that's why it's just an approximation we want to find the closest distance to the observations in black and the observations have been transformed using the 60 pose of the primitive into the primitive coordinate system in which it's much easier for us to compare these and then the loss is basically the expectation over the probability of existence this is the z here for all these primitives and then the sum of the primitive to point cloud loss over well all of the primitives that exist but as i said i don't want to go into technical details here and if you're interested in this there's all the details in the paper we have this losses this primitive to point cloud loss the point cloud to primitive loss and then this regular rises and then we train this model on a lot of data where we have input images or input point clouds and then a supervision also point clouds and then the task for the model is to reconstruct these output point clouds in terms of primitives and this is what happens so here these are some inference results you can see here this is the ground truth shape from which we have obtained a a sparse point cloud as supervision and also as input to the model and you can see the shape abstractions that have been learned and this was for dogs and rabbits and here's an example for motorcycles of course the reconstructions are not as good as you would expect from reconstructions that are very detailed in terms of obtaining high reconstruction fidelity but they have a semantic meaning and in particular this has been obtained without any supervision and you can see that for instance here these parts correspond to the lower left or lower right leg it's always in pink and the torso is always in purple and the head is always in red so it has really automatically discovered this structure without us telling it good and there's a couple as i mentioned there's a couple of follow-up works there's for example this one here it's called neural parts in collaboration with nvidia where we're trying to learn more expressive representations than these super quadrics which are quite quite restricted in terms of their expressive power and there's a little video which i'm going to play where the spinner will tell you briefly about that method hello i'm despina and today i'm happy to present our work neural parts learning expressive 3d shape abstractions with invertible neural networks a joint work with angular catheteroplus andreas geiger and sonya fiddler existing primitive based representations seek to infer semantically consistent part 
arrangements across different objects and provide a more interpretable alternative compared to more powerful implicit representations they rely on simple shapes such as 3d cuboids superquadrix 3d anisotropic oceans or more general convex shapes in this paper we identified that there exists a trade-off between the reconstruction quality and the number of parts in primitive based methods due to their simple parameterization existing primitives have limited expressivity and require a large number of parts for capturing complex geometries however using more parts results in less interpretable abstractions since they do not correspond to identifiable parts for example primitive base reconstruction with 50 convexes result in accurate reconstruction however it is not possible to identify whether this part corresponds to the human part or a plain part to address this we introduce neural parts a primitive representation that is not limited to a specific family of shapes which results in geometrically more accurate and semantically more meaningful abstraction compared to simpler primitives for example reconstructing planes and humans with neural parts results in clearly identifiable semantic parts such as legs and arms for humans and wings and tails for planes existing primitive based methods utilize a feature extractor architecture okay so just to give you a gist of the method and as it allows for reconstructing more expressive parts otherwise the idea is very similar to what i've presented before but then what is interesting is how these parts are actually represented and in this particular case they are represented using invertible neural networks that are then uh these are by by objective mappings that can be learned at the same time as the decomposition into these primitives so this was one line of work another line of work that we have been following is to learn such compositional representations from images alone and that has led to a method that i have already shown you briefly it's called compositional generative neural feature fields has received the best paper award at cppr 2021 and the idea here is to use a nerve-like representation as we have discussed in previous lectures but not a single nerve for the scene but try to in an unsupervised way decompose the scene into multiple components where each component is represented by a radiance field and then we stitch them together in 3d by combining their features and then decoding them into an image and that gives a lot of flexibility in terms of manipulating the scene and this flexibility has been gained without any supervision and that's unlike 2d gans for example where you don't have this flexibility so here's the comparison against the 2d based scan so if you train a 2d gan on this simple clever data set and afterwards you you want to modify individual latence in a latent space you see that these latents are entangled if you modify one object it also modifies other objects and you can't really precisely control how you modify the object how you translate it but we can instead because we have this decomposed scene representation but we can modify individual objects as we desire we can translate them we can rotate them that's something that's completely impossible with these global 2d gans that don't decompose the scene and in our case the decomposition happens in 3d space because also the the scene lives in the physical 3d world and so that's where we want to model the world so here's some other examples this is on cars it looks a bit 
strange here because the data set is also very biased there's only front and side views and back views of the car and so it has never seen like these intermediate views but still we can we can interpolate and we can extrapolate and this also works for outdoor scenes and we can we can change also for example the background appearance and all these latent factors not only rotate we can rotate the foreground we can rotate the background we can translate the foreground but we can also change the appearance and all of this is kind of nicely disentangled yeah but this well this is working but it's working only for for well some data sets it's not working for general in the wild data sets yet and it's working only at limited resolution and it has some artifacts so there's still many many tasks to be addressed by the by you by the future of researchers one thing i also want to show you is causal reasoning which is related to this disentanglement one problem with neural networks for example if you have a classifier is that they learn spurious correlations and often take shortcuts if i show these images and you you ask you what what is what you see here and you answer camel cow camel cow then you're correct but that's not what a neural network would answer that's trained on camel versus cow classification and why is that well it's because the network has seen very little examples of cows in deserts or camels on green pastures and so and this is what is what is happening with neural networks they are they're they're really performing the shortcut learning and and pick up spurious correlations they are picking up that well if if there is some cow-like object the camel is also a cow-like object but if there's a cow like object and it's on a green pasture here as as here then it's very likely to be a cow so it takes a lot of the the cues that it takes it takes like it takes up actually from the background it learns this correlation to the background which is a spurious correlation it's a shortcut for the network to identify this object and of course in some cases that's the right thing to do but there's many cases in which you don't want this behavior because if you change the data distribution if you do out of distribution classification then suddenly your your performance of the classifier will fall back to china's performance because the classifier hasn't really understood network hasn't really understood what's in the image has just learned to correlate green pastures with cow so the question is well can we learn to decompose the image generation process here more intelligently into the the actual physical independent causal mechanisms and if we could do that then this also would allow us to ask counterfactual questions such as how would this image look like with a different background with a green background for example with trees in the background and the other hope and what we've also found in this paper is that if we have such a generative model that decouples the causal the underlying causal principles then we can also use this to generate counterfactual data this is for example cows are in in desert landscapes and if you trained in a classifier on on the entire set of data then this classifier becomes robust to this out of distribution data so this is the model that we have we are introducing a couple of inductive biases in order for this to work otherwise it's very difficult and then we train this on imagenet i'm not going to go into detail but what i can show you is that then we can generate 
these counter factual examples that can be used to make image classifiers much more robust so for example here we have a red wine glass the shape is red wine with the texture of carbonara and the baseball background or here we have a triumphal arch with an indian elephant texture and a viaduct background or we have a mushroom shape here with a barrel texture and a gray whale background etc etc and also i post a link here if you're interested there's a blog post on most of our projects so you can get a better idea on them if you're curious and so what i also want to show you finally is on holistic 3d scene understanding we worked on holistic 3d scene understanding which is also has to do with compositionality of scenes here we're for example interested in inferring the 3d up 3d objects and the layout of indoor scenes from a single rgbd image which is a very hard task and in 2015 we were one of the first to actually do this with real you know high fidelity results we're trying to based on a big data set of cad models trying to represent the scene in terms of planes for the walls and floor and ceiling and also in terms of cad models for the objects that have been identified based on just a single rgbd image so here you can see a video of the process this is actually a combination of recognition and a graphical model running and you can see the reconstruction that we obtain here on the right in terms of a depth map and in terms of our re-rendering of the scene we can see these card models that have been retrieved and have been positioned into the scene according to a particular input image and we've also worked on this in the context of outdoors under scene in terms of outdoor scene understanding urban scene and understanding in particular so we're interested in multi-object traffic scene understanding this was actually a big part of my phd thesis oh the link is out of date i have to update that and the goal here is to reason about this multi-object traffic scenes from a movable platform with just a monocular video as input want to recover the 3d scene layout the location of objects and traffic activities so the input is just this monocular video on the top where we have a bunch of object detections and this is pretty deep learning era so there's uh just a you know traditional object detector which for cars actually works quite well and the output is shown here on the bottom we we have a rough idea of where the lanes are where the cars are the traffic participants and what is the uh uh the way of right right here um who which car is allowed to to move in which direction is illustrated here and infer here at the same time |
Computer_Vision_Andreas_Geiger | Computer_Vision_Lecture_93_Coordinatebased_Networks_Neural_Radiance_Fields.txt | maybe one of the most popular follow-up works on this idea of representing shape and appearance implicitly is called nerf neural radiance fields the idea or the application of nerf is actually a little bit different from what we have considered it is a novel view synthesis which is the task of given sparsely sampled images of a scene multiple images of a scene you're not primarily interested in the reconstruction but to render an image from a novel viewpoint as realistically as possible so assume you have captured 20 or 30 images of this particular scene from similar viewpoints then you want to obtain a reconstruction like a synthesis result like this so you can generate you can manipulate the viewpoint and you almost don't notice any artifacts in this approach which is why this approach has created such a big hype and there is so many follow-ups already on this work itself i think is cited more than 300 times by now already despite that it appeared less than 10 minutes ago now my credit also ben mildenhall and john barron offers of this work from whom i have taken some of these slides here in this presentation so the goal is now given a set of images how can we render novel views as photorealistically as possible and of course this also requires some sense of reconstruction as we'll see but the quality of the reconstruction is not the ultimate goal the ultimate goal is the quality of the images that are rendered and then one one advantage of this method of reconstructing the scene um and and and the appearance in order to render it photorealistically is that it works for general scenes the dvr method that i've shown before which uh i haven't shown you before required masks right so for for um reconstructing objects we always assumed that for each image we had a pixel aligned mask of the object otherwise it was very hard to convert to something plausible there is some follow-up works also from our group that deal with this problem but this work here is much more general at least for object level scenes it doesn't require any masks it just works directly with rgb images post rgb images so we still need to know the viewpoint there was also an assumption we did for dvr so the camera poses have been initially extrinsically calibrated but here this is this is a little bit more general and the reason why this is more general is that the representation is slightly different it's a little bit easier to optimize it's it's it's more time consuming but it's easier to optimize because it allows for a softer representation of the world so what this nerf does is it also uses mlp a relu mlp in this case with 9 layers and 256 channels per layer to model to map from a 3d input point and a view direction similar to idr the lipman extension of our dvr work but in this case the output is the rgb color value and a density and the density is not directly related to the occupancy probability but it's rather at the value of a particular free location in the scene and the rendering works differently from dvr and idr as we'll see so this density sigma describes how solid or transparent the 3d point is and therefore this model this nerf model can also model effects like fog etc despite well it requires the scene to be static so it's going to be difficult with fork but you can model transparent surfaces such as windows etc conditioning on the view direction as in idr allows for modeling view dependent 
effects which is very important in real world because many surfaces are non-laboration and in practice this is a little remark here the direction here is not represented by these two angles but by a normalized 3d vector d which is easier for the network to digest the model this nerf model is rendered into an image differently from dvr and idr also using volume volume rendering techniques but this uh it uses alpha compositing it doesn't just evaluate the color of a point on the ray at the surface but it evaluates the color of multiple points and the density of multiple points along the ray and then from you can think of it as from back to four uh to front stitching it together using alpha composition if there is a color value observed and the point is is dense then it occludes basically the color values that have been observed behind that point so in instead of just evaluating the point at the surface we let the surface to be more fuzzy which leads to better optimization and do alpha composition for rendering and that's different from dvr which renders the color of the point at the surface and always assumes solid surfaces so this here instead is more similar to traditional ray tracing computer graphics if you will and it works by shooting well if you want to render an image we shoot array through the scene and we here shooting array through the scene as sample points along the ray we query the radians to obtain the radians field to obtain the color and the density and then this is done here so we obtain the color and the density illustrated here with the different colors and transparency level of these points and then apply alpha composition going from back to front adding up these colors into and then setting the pixel to the corresponding color here's the math behind it's actually quite simple this body might look scary at first glance this is the alpha composition model we have n points along the ray so again the ray equals o is the camera center plus direction times t and the color is simply the sum over all of the colors of all of these points where we have this weight here that tells us how much light is blocked earlier along the ray and then we have this alpha weight which tells us how much light is contributed by that ray segment so let's assume that this t value here is one and this value which means that basically all of the previous alpha values are zero you can see the similarity to the volume rendering equations that we've seen in the discrete case for the probabilistic model so if all of the previous alphas are big no if all of the previous alphas are small then this product here will be one and if the current alpha here this current segment alpha i is large and that is large if the density is large if this value is large then e to the power of minus something large will be zero so one minus zero will be one so one is the largest value that a that alpha can attain so if this is large and this is like this is one and and this is one then basically here at this point we are taking the color of that little line segment here that's represented by the color ci we take the color of the first occupied point along the ray except that we don't have true occupancy here but that points can be fuzzy they can be um they can be half occupied as well that's how it works that's basically all the equations of this model and then training works simply by rendering images and minimizing reconstruction error via back propagation very simple using an l2 loss we render back propagate gradients 
through the sampling process which is straightforward and it doesn't even require this this differentiation and then compared to the observed image i right good these are the equations one remark nurse parameter are optimized on many different views of a single scene compared to more traditional correlation based methods this method requires a lot of use of the same scene so the results that you're going to see are amazing but keep in mind that many views of the scene are required for for obtaining those results another remark is that this sampling is quite slow if we sample a lot of points along the ray we always have to query the radiance fields and in order to do alpha composition and so this is very slow if we do this densely so what nerf does instead is it it first allocates these samples more sparsely and equidistantly these are the white points and then once it's more certain about where the surface is located because it has already evaluated these white points it allocates these red points more closely to the surface and through these two pass sampling procedures save some samples save some time and there's many variants of this by now that save even more time than what has been proposed in the original model okay now here's an example of how nerve models view dependent effects we've said that the input view direction is also provided to the radiance field you can see that if we look at the same point in different viewpoints you can see how the point changes in color and so we can also visualize the radiance distributions here so this is a visualization of that point for different view directions you can see that we have this specular specular highlight here but we have dark colors here for example this viewpoint conditioning happens later in the mlp to not entangle with the geometry so there's a special constraints on the architecture that help this disentanglement and i encourage you to have a look into the paper for the details another essential trick um that was done by nerf is to compute a so-called positional encoding for the input point x and the view direction d if you apply nerve naively it will obtain some blurry results as shown here but the results that i've shown the beginning look like these ones here and this is an interesting phenomenon and has been analyzed by the offers in a follow-up work called fourier features let networks learn high frequency functions in low dimensional domains why is it problematic to directly take a low dimensional input to an mlp and that applies not only to appearance but also to the geometry models that we've seen the beginning of this lecture using positional encoding of fourier features also helps those models to represent higher fidelity frequencies in order to understand that phenomenon the offers looked at as a more simple task which is memorizing a single rgb image the input now is just a 2d pixel location and the output is an rgb value so it's the same problem as before but much simpler it's just 2d and we're just predicting the rgb value something that should be very simple right so there's a simple mlp which maps from the pixel location xy to the rgb color but surprisingly this doesn't work even if there's 10 times more parameters in this model than pixels in the rgb image that we want to memorize and even if we let it converge for days and weeks we'll always obtain something like this over smooth results and we can also see these rayleigh art effects here you can see how the linear separations of the railway activations appear 
in the result and this was surprising and so the authors looked more detailed into that and found a solution to it which is to pass not directly the let's say view direction or the input coordinate to the mlp but to first lift these features into a higher dimensional space similar to what people in kernel methods do so they pass this low dimensional coordinates through a fixed positional encoding which is shown here this is just a bunch of sine and cosine functions at different frequencies controlled by this hyperparameter l that determines how many frequencies to consider or in this follow-up work random fourier features this is random projections on fourier basis and what they've shown and analyzed also theoretically in this paper is that these features let the network learn high frequency functions now in low dimensional domains so the problem is really that mlps are biased mlps with low dimensional inputs are biased to represent very smooth objects as an output but if we first project these low dimensional inputs for example the location x or the view direction v into this higher dimensional space we can now suddenly learn much higher frequency functions and this is shown here so it's exactly the same network but with this additional trick a deterministic lifting of the input into a high dimensional space there's no parameters it's just this application of the sine and cosine functions we get this result they are much sharper more detailed you can see really these rayleigh artifacts and also it converges much faster and you can play with this bandwidth if you use the random fourier features you can play with the bandwidth and you see that if you are on the one extreme you get very blurry result with a lot of artifacts if you're here in the middle you get good results if you go to the other extreme you get very noisy results which indicates overfitting so you can you can play with under fitting and overfitting here in this in this input feature representation good that's it let's look at some results these are results from the original paper you can see as a comparison to previous work you can see how detailed and accurate the synthesis results are it's very hard to distinguish the rendered images from a real image you can see a lot of artifacts in the baseline on the left on the right you can see artifacts only if you look very closely and at the boundaries of the image where there are less observations it's remarkable how sharp the details are that this method is able to represent it's typically very hard to reconstruct things like this uh railguards which are very thin you can see a few dependent appearance you can render the scene from the same viewpoint just changing the illumination you can see how the reflections change or you can render from novel viewpoints you can see how the reflections change on the car when changing the viewpoint in this canonical representation here and this is demonstrating that if you consider viewpoints at least that are close nearby you get extremely accurate geometry for free from this model by just determining where the ray intersects or where the density starts to change quickly and you can use this for virtual effects such as shown here |
Computer_Vision_Andreas_Geiger | Computer_Vision_Lecture_33_StructurefromMotion_Factorization.txt | so far we discussed how to [Music] obtain epipolar geometry from two views but of course it would be ideal to obtain both the camera pose and the 3d reconstruction directly by optimizing over many views of the scene in order to minimize the camera pose and 3d reconstruction error and this is what we're going to cover now in these last two units the first technique that we're having a look at is so-called factorization from tomasi and kanade introduced a very old technique but very elegant mathematically and in the basic form only applying to setups where we have a rough autographic projection there is some extensions of this technique that also work for prospective cameras so how can we use more than two views for structure for motion let's assume that telegraphic w contains these image coordinates contains image coordinates for features that are tracked across several frames so here we have n frames or cameras and p are the number of feature points let's say we have 100 features and they are tracked over 10 cameras we assume you're always full tracks every feature is visible in every camera given these observations this is illustrated here right here we have some features and here we have these feature tracks and assuming autographic projection our goal is now to recover both the camera motion and the structure jointly the structure is the 3d the set of 3d points corresponding to the tracks that we have observed and it turns out that we can do this in closed form solution for this very simplified setup with autographic projection and complete feature tracks and there's some modifications of this algorithm that make it also applicable to perspective projection and to alleviate this strong to reduce the strong assumption of requiring the features being tracked across the entire sequence here's an illustration on the bottom we have a sequence of images where we detect and track features for instance using the lift descriptor or by using lucas canada feature tracker and we get these in the 2d space we get these 2d image trajectories of the sphere that is rotating here and then we are re reconstructing both the camera planes of these autographic cameras and the 3d structure in terms of the 3d points how does this work well under autographic projection a 3d point maps to a pixel in frame that should be i in frame i in the following way as it's an autographic projection here what we do is we take that 3d point here and we subtract the translation of the camera coordinate system with respect to the world's origin so this point is expressed in world coordinates and by doing that we're expressing that point in camera coordinates so this is basically leading to this vector here xp minus ti this point here now expressed in these blue camera coordinates and then we simply need to project that point onto the axis of that coordinate system to get the x and y pixel location so here's an illustration where we are projecting it now onto that plane by taking the projection onto that vector and onto that vector and this gives us the x and the y coordinate for that point e in the plane the image plane of frame i now without loss of generality we assume that the 3d coordinate system is at the center because there is an ambiguity here right we can always define the world coordinate system arbitrarily so we need to define it somehow and we defined it such that the 3d point cloud here is centered around this world 
coordinate system to remove that degree of freedom this is an assumption that we do now let's um x i p and y i p denote the 2d location of feature p in frame i centering the features per frame and collecting them yields the so-called measurement matrix we are not only centering these points that we want to recover in 3d remove that ambiguity but we're also centering the points in the image coordinates before actually doing something with them so if this is the image that we observe for frame i we're centering them such that they are zero mean and this is expressed by these two equations here and then we are taking these measurements and we're stacking them into a big matrix where now we have the xs and the y's these are the two parts of the matrix here we have it for all frames n and for all points p there's a complete matrix that contains all the measurements we mark the tilde notation here denotes centered not homogeneous coordinates we're using the same notation here um but this is inconsistent with the other lectures where we typically use it for homogeneous coordinates let me repeat the tilde annotation denotes centered not homogeneous coordinates please keep that in mind this is only for this unit okay so x tilde i p r the centered x-coordinates of point p in image i now as we have this expression from before the expression for projecting a 3d point onto the if image plane the centered image x coordinate is given as follows we have x tilde this is the centered notation which is x i p minus the mean of all the 2d image features in that frame now we can plug this in here so we have expression here for x i p and we have this expression here for x i q now let's have a closer look at the second expression so in the second expression this term here does not depend on q so we can pull it outside and we can also pull u i t i outside this expression here this summation because it also doesn't depend on q so we get this expression right [Music] now we have uh on the left hand side here we have u i t t i a negative and on the right hand side we have u i transpose t i positive so these two terms cancel and so we're left with u i x p minus u i uh this expression which we can summarize as ui transpose this expression here and because we have assumed that the 3d points are centered this expression here is zero so we obtain ui xp so in summary the centered image x coordinates assuming centered 3d coordinates is x tilde ip is simply ui transpose xp a very similar relation a very simple relationship and the same is also true we obtain a similar equation for the y coordinate so we have these two expressions x i p is the u axis or the x axis of the um i've camera coordinate system and this is the v or y axis of the iv camera coordinates we have these simple linear projections to obtain the centered coordinates now we can collect all of these centered coordinates into a centered measurement matrix w tilde and we use tilde here for denoting centering not homogeneous coordinates and we obtain a very simple linear system which is w tilde equals rx but w tilde is now the centered image coordinates so this matrix is the same as this matrix here and we have r which is collecting the axis of the camera coordinate systems u1 un v1 vn and we call this r because these are simply the rotations this can be integrated as a rotation matrix right this is just the rotation of the autographic camera coordinate system translation doesn't play a role because it's an autographic projection so we just have rotations here and 
this is um these rotations are formed are spanned by these basis vectors and here we have the x matrix here on the right which is collecting simply the 3d points we multiply these two together we obtain w tilde which is exactly corresponding to this element-wise expression here on the top so we have in this normalized representation found a very simple relationship here a simple linear relationship that we have to solve now again r represents the camera motion in our case because it's autographic just rotation and x the structure and that's why it's called a structure from motion or sometimes structure and motion algorithm because we're trying to recover from the observations w alone r and x together now let's have a closer look at r and x r is a 2n times 3 matrix because these basis vectors that span the camera coordinate systems are three-dimensional vectors and they're stacked here into the rows of this r matrix and x is a free times p matrix because we have p points but we each of these points is a three-dimensional vector therefore in absence of noise where this w tilde is exactly the product of r and x we know that the matrix w tilde has at most rank free and this is an important property that we are going to use right because this matrix and this matrix have rank 3 maximally also their product can only have maximally rank free so it's a rank deficient matrix however when adding noise the matrix matrix easily becomes full rank so we have we have to retrieve this rank 3 version of the full rank matrix by canceling the noise so given real observations with full rank w at let's call it w hat here because this is observations we now have to find a rank-free approximation w tilde to w hat and we can do that by finding a matrix w tilde that's closest to w at but has only rank three and it's closest to that in in terms of the frobenius norm which is basically the l2 norm on matrices and this can be done also by a singular value decomposition so it can be can be shown that you do single or valid decomposition of w hat um and you consider the singular vectors corresponding to the top three singular values and the others should anyways be small because they are just capturing the noise then you're finding a matrix that's in the frobenius sense is closest to the measurement matrix that you have actually observed we do a singular value decomposition of the full rank matrix w hat into u sigma v transpose and then we can obtain a rank three factorization by take by considering only the singular vectors and the singular values corresponding to the largest three singular values here in in this factorization the sigma becomes basically a three by three matrix and then we can simply obtain our head as u sigma to the power of one half and x hat um which are estimates then as sigma one half v transpose because if i multiply both together i obtain u sigma v transpose but now because we have done this factorization and erased all the singular values and singular vectors that are not the top three and we obtain again a rank three factorization see that sigma is three by three v t is three by p so we removing all the singular vectors that are are not relevant unfortunately this decomposition is not unique as there exists a matrix q a three by three matrix q such that with an arbitrary matrix q we obtain the same w hat i look at w hat and i have the singular value decomposition here into r hat and x hat then i can also multiply q to r hat and q power minus 1 to x hat and i obtain the same w hat but i have 
changed our head and x hat another question is well how do we actually find the right r head and x hat so to find this q that is the right q that we actually want we observe that the rows of r are actually unit vectors and the first half are orthogonal to the corresponding second half of r remember r is are the basis vectors that span the coordinate systems of this autographic projection and they are orthogonal to each other which means that if i multiply u with v i get zero and if i and they are also unit vectors so if i multiply u with u i get one and if i multiply v with v i get one right so that means that if i take these vectors r and i multiply them together u t q u t q transpose then that should be one if i i transpose this expression i get this expression here this this is called a metric constraint you call it's called a metric constraint because where we know that this the length of this view vector is one must be one so um if we multiply this together here then um this must be one and we should choose a q such that this this becomes 1. and similarly for v and then also we have the cross term u times v which must be zero so we want this matrix such that these matrix constraints hold and this is then our r that we are interested in from all the entire family of ours that we consider choice of q so these are the matrix constraints that we want to enforce and this gives a large a large set of linear equations for the entries in the matrix q q transpose as you can see here it's linear in the entries of the matrix q q transpose this is considered as one matrix and from there the matrix q can be recovered using standard cholesky decomposition because it's a it's a q q transpose decomposition problem we can just solve using a standard algorithm so now we have obtained we have obtained the q and from the queue we have we can obtain the r and the x that we are actually interested in and so this solves our problem and here's an overview over the algorithm summarizing it we take measurements w hat that are not necessarily free because of measurement noise so we compute the svd of this and keep the top three singular values and vectors and then we define our head as um and x hat as these expressions here we compute then q and q transpose from uh from from this and from there we compute q and then our final r and our final that should be x are computed by multiplying this with q some final remarks on this algorithm well the advantage of this algorithm is is obvious is a closed form solution is super fast and it doesn't have any local minima and it's determined that's not an advantage that's in general too it's determined up to an arbitrary global rotation because we can always rotate the world coordinate system and the cameras as well the disadvantage is that it requires complete feature tracks if there is a feature that's not detected in one frame or if a feature like leaves the frame because of an occlusion this basic form of the algorithm can't handle this but there is solutions for this so the algorithm can be applied to subsets of features and frames and then propagate to some form of matrix completion iteratively to fill in the missing entries and still operate on the data and actually data that i've shown you before the clear data has already this problem that not all features are visible in all frames and this has been solved using such an iterative propagation scheme and here's another example where this algorithm has been used for a reconstruction of this object here including the 
hand and then a 3d model has been returned by meshing the results remember that these are results from 1992 right so this is one of the very first multi-frame reconstruction approach mathematically very elegant but assuming autography and requiring some additional effort if feature tracks are not complete to summarize thomasie and canada's original factorization approach assume autography and therefore there have been a couple of extensions of this algorithm for instance the one by christian hora they perform an initial autographic reconstruction and then correct the perspective effects in an iterative manner there's also a follow-up by tricks that performs projective factorization iteratively updating the depth values now even though these algorithms make some assumptions that make them inaccurate vectorization methods can still provide a good initialization for iterative techniques such as bundle adjustment gradient based techniques such as bundle adjustment that minimize the reconstruction error which is kind of the gold standard algorithm but leading to a non-linear optimization problem that requires a good initialization however in modern structure for motion algorithms like core map that we're going to discuss next um this is not how it works it's not like it's using one of these multi-frame reconstruction approaches to get a record together initialization for the entire scene because this is still very difficult to do for not like tiny object like scenes but for very large scenes that we are interested in reconstruction in reconstruction and reconstructing as even for you can do such an iterative procedure it requires that still the majority of features is always visible in all of the frames and in practice this is simply not true if you have very large scale reconstructions an image is showing only really a very partial view of the entire 3d object or scene that you want to reconstruct and therefore modern structure for motion approaches work slightly differently they often perform what's called incremental bundle adjustment they initialize with a carefully selected tool view reconstruction as we discussed in the second unit and then iteratively add new images to the reconstruction and iteratively growing that reconstruction based on overlap between features detected in new images and features that are detected in images that are already part of the reconstruct and that's what we're going to discuss also next unit |
Computer_Vision_Andreas_Geiger | Computer_Vision_Lecture_113_SelfSupervised_Learning_Pretext_Tasks.txt | pretext tasks actually both this unit and the last unit of this lecture cover pretext tasks but the last category of pretext tasks that we discovered the so-called contrastive tasks have developed into their own field that's why i separated these two units but both are effectively pre-text tasks what is a pretext task so far we have looked at self-supervised models which were tailored towards a specific downstream task such as monocular depth estimation or optical flow or scene flow estimation now in this unit our goal is to learn more general neural representations using self-supervision towards this goal we're going to define a so-called auxiliary task a pretext task such as predict the rotation of an image a random rotation of an image the image has been randomly rotated like the image of this bird and we want to uncover what is the canonical orientation of that image and we're going to use this auxiliary this pretext task as a auxiliary task for pre-training in the hope that this learns good this leads us to good features that we can then reuse for the downstream task of interest so we define an auxiliary task so such as rotation prediction for pre-training and for this of course we have lots of unlabeled data we can just download millions billions of images from the internet and then we can rotate them and ask the network what was the rotation that i did and i can rotate rotate them arbitrarily in arbitrary step sizes for example i can take any of the 90 degree possible rotations and ask the network by how much has this image been rotated so this is the pre-text task we take a lot of unlabeled data we define such a convolutional network on a pretext task where we have a few convolutional layers and some fully connected layers in this case we want to classify either of four possible orientations in this case 90 degree would be the correct answer um and then we throw away these last layers here in white and just keep the convolution the weights of the convolutional layers the so-called feature extractor and copy that for the downstream task let's assume the downstream task is is not rotation prediction but this classification we want to classify that there's a bird in this image and then we have a small amount of label data maybe 20 instances where we have images with birds and other objects and we have the labels for these 20 birds and 20 dogs and 20 cats but because there's only 20 images per category overall there's a very small data set we cannot afford to update all the parameters of the network but we hope that these parameters that we have learned in the pretext task are already good they are general they are semantic in some sense such that it's sufficient to just add one or two more layers and just train these layers for example just one layer linear classifier and obtain good performance from this model by having transferred the the large chunk of parameters from the pretext task and just adding a little small readout head to that network that has to be trained and it can be trained with with less data and then we evaluate this network on the target task which is classification now what is pretext what is a pretext task um in general so i copied this from wikipedia a pretext pretextual task is an excuse to do something or say something that is not accurate pretexts may be based on a half true for developed in the context of a misleading fabrication pretext have been used to 
conceal the true purpose or rational behind actions and words now clearly this is not what we want to do we want to find a pretext task that is somewhat related to the downstream task yet it is a pretext it is a replacement for what we want to actually do and we do this because it's easier right as in this example it's easier to tell my wife i'm going to the library every evening rather than telling the true story okay so um here's another like now in the following slides we're going to look at a couple of these like of these pre-text tasks that have been proposed in the literature and admittedly they are all quite you know hand engineered there is no theory at least not yet underlying this pretext task there is just empirical evidence for each of these pretext tasks that they empirically work well that empirically when we copy the weights and apply them to some downstream tasks where we just fine-tune some layers or maybe a few iterations of all layers that we can get away with less label data for the downstream task okay so here's another pretext task this is called uh learning by context prediction it's one of the first one from der shedal in ictv2015 in this task the goal is from two patches to predict the relative location so we're taking a center patch here in blue and then we're asking for this patch here we're just observing these two patches and we're asking the network to classify if the red patch was at this location or this location or this location etc there's eight possibilities so it's a eight-way classification task given this blue patch and the red patch there's eight possible locations how the red patch could be locate located with respect to the blue patch so the goal of context prediction is to predict relative position of patches relative position of patches it's a discrete set and the hope again here is that this task requires the model to learn to recognize objects in their parts the task is designed in a way that if the model wouldn't learn about learn something interesting about how objects are composed it wouldn't be able to solve that task to understand this a little bit more let's play a little game this is actually the game that has been in the teaser figure of that paper you'll find it if you look up the paper can you identify where the red patch is located relative to the blue one so i give you five seconds to do this forward experiment probably if you look at these two patches probably what you will say is well this is the front of a bus and this is the the left back of the bus so it's likely that this patch is at the bottom right of this patch it's unlikely that it's at the bottom because it would have to show the front of the bus it's unlikely they would be to the left because it would have to show some background similarly here this patch is more likely to be on top of this patch because it shows the top of the train and this shows the bottom of the train if a model a neural network wants to do the same task correctly you can already see that has to gain some knowledge about objects in the world and that's what we're leveraging here this is the architecture that was used there's these two patches coming in there's a siamese kind of architecture here with convolutions and pooling until we have a fully connected layer and then the output of this fully connected layer is is concatenated and then there is a small number of fully connected layers in order to make classification this sounds all great but there has some care has to be taken in particular we have 
to take care that the neural network doesn't take any trivial shortcut solutions and that's really what's happening in practice neural networks don't do what you want them to do they do what's easiest for example if we take patches that are directly adjacent to each other the neural network can instead of understanding the object in terms of its semantics just look at edges and try to align the edges if it sees that edges are aligned then it's likely that this patch this patch configuration here is more likely than this patch configuration like left and right are swapped right because of this continuation of this edge so we need to distract the model from looking at this from exploiting these trivial shortcuts and the way this has been done in this work is to include a small gap between the patches to not make them directly adjacent to each other so such that edge continuity can be exploited and also to jitter the patch location slightly spatially to also avoid this phenomenon there is a more subtle shortcut and i can clearly remember when i was at iccb 2015 it wasn't chile i was standing in the back of the big lecture hall where this paper was presented and i was i was just immediately catching my attention because i was just i wasn't believing it in the first place so they found out there is another shortcut it was quite quite tricky to actually understand where it comes from in order to understand where it comes from the shortcut that the network was exploiting in order to overcome having to actually learn more semantic meaningful representation was that the network had actually an easy time to predicting the absolute image location and if a network from just observing a patch is able to predict the absolute image location then the relative location of patches can be easily inferred obviously so why is this happening the offers ask and they did a little experiment to understand this or to first verify that this is actually happening so they took images and cropped patches at random locations and then trained a neural network to predict in a supervised fashion the absolute image location of each patch given just the patch and no other information the network had to predict that this patch here if it sees that patch that patch appears in the top right and here on the right you can see the result and you can see that it's actually working the network that's just observing this patch is able to predict the absolute image location how is that possible where is this image location encoded in this local patch how can the network learn that this part of the image is should actually be on the right of the image our dispatch here should actually be on the left here's the solution the solution is that some cameras some camera lenses shift color channels differently depending on the image location in particular bad cameras with bad lenses suffer from so called aberration problem and we have discussed aberration already in the image formation lecture aberration comes from the lens reflecting light of different wavelength differently and that means that there's a color shift of the different color channels a spatial shift of the different color channels and that spatial shift is dependent on the location in the image and the model can exploit that fact it can just look at the local can learn the local color shift without ignoring everything else ignoring texture ignoring semantics ignoring what's happening in the patch just trying to figure out the color shift in order to predict the location and 
that's what was happening the solution to avoid this trivia tutorial shortcut for the network was to randomly drop color channels or projection uh project the colors towards a gray channel or just use a gray image which of course doesn't admit to this shortcut yeah okay so now having solved these trivial shortcuts the network was actually able to learn something semantically meaningful that can be transferred to downstream tasks and so they tried this for object detection for example this is an rcnn that we have discussed already and what they did is we trained these convolutional neural network features using this pre-training technique using context prediction without any labels and then they used a small uh then they used a smaller set a training set for training the remaining parameters and and fine-tuning the remaining parameters of the neural network and what they found is that if you don't use any pre-training you obtain significantly worse results compared to the results that they obtained using context prediction for self-supervision for pre-training and while the performance is not yet on par in this case with pre-training using imagenet labels also because this is a very semantic and a very it's a recognition task so it it benefits really from this from these supervised semantic categories it's quite impressive that the performances is is already rivaling the performance of fully supervised imagenet huge amounts of data pre-training without any labels and it also works for surface normal estimation so here are results for surface normal estimation which is a completely different task but again they observe that without pre-training the results are significantly worsened they sometimes even outperform the imagenet pre-trained models and in this case it also makes more sense surface normal estimation is a task that's very different from recognition therefore it's harder for a recognition data set to pre-train the parameters of this model well and so it's easier to compete with these supervised methods here what they also did in the paper is they looked at what the network what the visual representations actually are what the network has learned and one way to do that is we've discussed this already in the deep learning lecture is to just find nearest neighbors so for these particular queries here they tried to find the nearest neighbors in the feature space and they compared random initialized network alexnet and their self-supervised method and they found that in many cases random energization doesn't give you meaningful results but their retrievals are actually semantically meaningful in a way that also the patch is retrieved using imagenet pre-training are as you can see here for instance for this water body image there's there's a lot of water images retrieved for horse legs there's a lot of horse legs retrieved for this cat there's a lot of cats retrieved and so on this is another pretext task it's called jigsaw puzzle in a jigsaw puzzle the idea is to in a jigsaw puzzle pretext task the idea is to just randomly permute a 3x3 tiled version of the image and ask a network to recover like in a jigsaw puzzle recover the original image now because there's nine factorial different permutations which is very large this is a two large classification problem so they restricted themselves to 1000 possible random permutations that they have predefined and these permutations have been chosen based on the hamming distance to increase the level of difficulty so they try to find 
particularly hard permutations to make the model learn better this is the architecture there's these nine patches that are fed in permuted first and then fed into this convolutional architecture and then the features are fed into this mlp for a 1000 way classification and that's then used for back propagating and training these models also in the case of jigsaw puzzles they had to make sure to print shortcuts and the story is similar to the story from the previous paper also the methods have been developed at similar times the shortcuts happen because they are useful but the shortcuts are important for the model because they are useful for solving the pretext task but they are not relevant to the target task that's why we want to avoid them one shortcut is low level statistics if you look at json patches they often include similar low level statistics like the mean and the variance and the solution that they applied to this problem to solve this problem is to normalize the patches based on their mean and variance then also edge continuity we talked already about this and so they also selected 64 by 64 pixel tiles randomly from larger cells effectively also applying some jittering and padding and then the same is here also true for chromatic aberration and they basically applied grayscale images or apply the spatial jitter to each color channel by a few pixels to avoid this shortcuts is in a general an interesting problem for learning deep neural networks and if you are interested more in shortcut learning i can highly recommend this paper here from matthias bitkes group in tupingan called shortcut learning in deep neural networks here are the representations that i learned by solving jigsaw puzzles you can see they are meaningful there is this low level statistics learned in the earlier layers of the network and then as you go to more to later layers you get more semantic uh the meaningful activation so these are the patches that activate a particular neuron so here also they applied this representation to pascal voc classification detection segmentation this is one of the famous challenges from a few years ago and what they found is that their performance in some cases for instance in the case of object detection was already rivaling the performance despite now this being a recognition task of imagenet fully supervised pre-trained models using same architecture same pre-training time but just without labels so here this first row is pre-trained on imagenet fine-tuned on pascal voc this was the standard paradigm up to then but they have shown that actually you don't need this imagenet data that you can also pre-train using such pretext tasks and you obtain in some cases at least comparable performance this is another pre-text task it's called imaging painting it's a little bit related to the denoising auto encoder that we have already seen but in contrast to the denoising autoencode it requires more semantic knowledge because here now not individual pixels but entire large regions are masked out and the key at the task now is to try to recover this region that has been removed and to recover that from the context alone you can see that the model actually can do a a decent job in in in doing that state of the art methods today five six years later are much better than this but already in 2016 where things like semantic segmentation and image level reasoning with neural networks was quite new results like this could be achieved and then this in order for the model to to actually perform this 
well it has to learn something about how the world is structured it's always the same idea this is the image rotation prediction task we've already seen in the teaser the goal is to predict in which orientation the displayed image has is shown or the images are artificially rotated and the output is a four-way classification we try to recover the true orientation and again in order to recover the correct orientation the premise is that semantic knowledge is required for solving that task to summarize the pretext task unit pre-text tasks focus on visual common sense for example rearrangement prediction of rotations in painting colorization etc the models are forced to learn good features about natural images for example a semantic segmentation of an object category in order to solve the pretext tasks so they have to implicitly solve this harder task in order to solve the pretext tasks that is how the pretext tasks have been chosen we don't care about pretext task performance but rather about the utility of the learned feature representations for the downstream tasks that we actually care about and these tasks that we actually care about could be image classification object detection semantic segmentation depth estimation normal prediction etc now the problem with this is that designing good pretext tasks is tedious and still some kind of black art and there is no good theory around this so it's not clear how to construct better pre-text tasks and another problem is that the learned representations may not be general it's not clear how the pretext task actually relates to the downstream task and that's what we're going to look at next |
Computer_Vision_Andreas_Geiger | Computer_Vision_Lecture_54_Probabilistic_Graphical_Models_Belief_Propagation.txt | in this unit we're going to see how we can do inference in graphical models how we can answer questions such as what is the marginal distribution of a certain subset of variables or what is the so-called maximum upper story solution what is the most likely configuration of random variables under that probabilistic model defined by a factor graph for instance and we're going to first look at inference in chain structured factor graphs and what we are in particular going to look at is the so-called belief propagation algorithm which is an algorithm that depending on the order of the factors involved can be quite efficient in any case it is much more efficient than just doing a naive calculation of let's say the marginal distributions as we'll see so here we have a simple chain structured vector graph all the nodes are arranged along a chain with factors in between and this is the corresponding probability distribution now if we want to compute the marginal distribution of a what we would have to do is we would have to take this expression and sum over all configurations of b c and d what is the computational complexity of this well if the variables are all binary then we have 8 terms here so we have two to the power of three equal to eight terms because we have three times a variable that has two possible states so there's eight possibilities in total that doesn't sound too bad yet but if you think about a even a chain that is very long let's say has 100 variables then you already have a computational complexity of 2 to the power of 100 and it becomes really intractable to do marginalization with such long chains and that's not really a large problem even if you think about images that have maybe a million of pixels it's clearly infeasible to do such a naive strategy for computing for example a marginal so what we're going to do instead is we're going to use a little trick and for doing so we're going to observe the following if we take this expression here and we want to compute the marginal of a b and c which means we want to sum over d we observe that not all of the factors involved in this expression depend on d but only the last two do therefore we can pull this summation operator inside that expression or we can factor out these two factors here in front of the summation operator and so what we're going to do here is we're doing the summation over d of just this expression here and if you look carefully this expression is a function of c only because d is marginalized over so the only free variable here is c and we're calling this expression the entire marginalized expression a message a message that's sent from variable d to variable c and that depends on c so it has an argument c you can think of this as a vector in the case of binary variables this would be just a vector of dimensionality 2 that has two elements one for c equals zero one for c equals one so we're calculating that vector only that vector and then we can do the same thing for c so the marginal over a and b what we have to do is we have to take this expression and sum over all possible states of c and we observe again that the first factor does not depend on c so we can pull that summation inside and we can only do this calculation here which again is a message in this case it's a message we call it a message sent from c to b and this is a function of b only because all the other terms have been 
marginalized and then we can simply recurse further we can now compute the marginal of a by just some summing over b now b the sum over b cannot be pulled in further as we can see here because the first factor already depends on b as well so this is directly giving us the result so we have now this expression here which we can call a message sent from b to a that is a function of only a because we have marginalized b in other words the message we call this message mu sent from m to n from variable m to variable n and that's a function of n carries the information beyond that variable m it summarizes the information and carries it beyond so what do what did we gain well if we look at the computational complexity now what we can see is that we have now just done a summation of two terms here a summation of two terms here and a summation of two terms here so we have a computational complexity of three times two which is six so six is not horribly much better than eight but again if you now think about very long chains where you may have 100 or more variables then in this case this would be 100 times 2 so we would have to do 200 operations while in the previous case in the naive case we would have to do 2 to the power of 100 operations which is more than there are atoms in the universe so it's completely intractable to do so for this very simple example it doesn't play a big role but as soon as the chain gets longer as soon as it has 10 20 variables already for 10 variables we have 1000 operations versus 20 operations so one grows exponentially and the other only grows linearly however we didn't need the factors yet so we have a vector graph but we didn't use the factors yet but we will see that making this distinction is actually quite helpful if we go to more complex structures like tree graph three structured graphs or branching graphs so here this is not a chain anymore but now we have a tree and now it becomes useful to have these vectors because now we can collect this information also at the vector level and then spread it further from the factors and so it becomes easier to describe mathematically so this is a vector graph a tree structured factor graph and the corresponding factor representation here of factorization now how can we compute the marginal of say a and b this marginal distribution over a and b of this distribution over a b c d e well the idea is the same we compute and pass messages but now not just between variables but in between variables and vectors and from factors to variables so we're sending messages from variables to vectors and from factors to variables and this is what it looks like so we have for instance a message sent from this vector to this variable and the message from this variable to this vector and this factor now collects these two vari these two messages and sends them further and again we are exploiting this idea of dynamic programming of pre-computing results or mathematically speaking of pulling the summation term inside in order to gain efficiency so let's do it concretely now for this three structured graph if we want to do the marginal compute the marginal of a and b then we do the same thing as before we need to well we need to sum over c d and e and then these terms can be pulled inside depending on the variables that these individual factors depend upon so we def like we define this entire expression as the message of f2 to b because it only the only free variable here is b and then this measure message can be split further by pulling in the 
e summation because these two these three terms here don't depend on e and so we we come up with this um message passing scheme where we send a message from d to f2 here and then from c to f2 from c to f 2 is actually very simple because this message here only receives information from a single factor so this random variable here is connected only to a single factor so it's the factor itself um and then we have the summation over c and d of this vector multiplied with these incoming messages with these two incoming messages the message from c this one here and the message from d this one here so we take a summation over the entire state space except of the variable that we're sending to just c and d we're summing over but all combinations in this case this would be four terms because if we have binary variables because we have four possibilities for c and d and we do this for the factor multiplied with the incoming messages so this is repeated from the last slide here if we now want to make this a little bit more general what what can we see here well we can see that at the general level more generally speaking what we have is that a message that's sent from a factor to a variable and as an argument has the variable is simply a summation over the state of um the variables that are involved in that factor so here in this case would be bcd except for the variable that we're sending to except for b this is excluded here in this summation b is not part this is indicated by x f without x so this is a summation over the state space of all the variables except the variable that we're sending to and the summation is over the factor itself and the product of all the incoming messages so you can see here that these are all the messages that are sent to that factor that we are considering sending a message from so we want to send a message from here to here so we're considering all the messages that are sent to that factor except the message that could be sent by the variable b so here we have excluded the variable that they're sending to from the set of variables that we are receiving messages from the messages are only from the incoming side not to where we send that's important so that's why it's excluded here so we have a product here over the neighbors of f except the variable they were sending to x in this general notation and we're distinguishing these two types of notations because here this is a summation over a state space and here this is simply a product over a set of variables here where we're misusing a little bit notation because we're using it also here as an index of the of the message so we're using it as an index of the message and as the variable itself just to not clutter the notation too much but i find this the most clear exposition of this admittedly quite complex material okay so and and the other side is this is a message that's sent from a variable to a factor in this case from this variable to this factor we have simply a product of the incoming messages so we have a product of this message and the product of this message here as you can see and so in the more general form this can be written as a message sent from a variable x to a factor f that's a function of x or a vector in the states of x if you will in a discrete case and this is simply a product of all the incoming messages where incoming means that we have chosen the factors such that they are neighbors of the variable x in this case here d but we are not considering f2 just f5 and f4 because we are sending to f2 
so exclude again the mess the factor that we're sending to in this calculation here in this product admittedly there's many subscripts here and it's easy to get confused and that's why we have an exciting exercise for you to practice this and to get a deeper understanding but um it's important to look a bit at these slides and to understand how indexing works here once computed this is the keep message messages can be reused and then important observation is that all the marginals can be written as function of messages and now we simply use an algorithm to compute all the messages we have given some very concrete examples so far for computing very specific marginals but of course we want to more generally have an algorithm to compute all the marginals all possible marginals let's say at the same time ideally and for marginal inference this is the so-called sum product algorithm we're also going to get to know another algorithm which is called the max product algorithm but this is solving a different problem the map problem the maximum upper steroid problem finding the best possible configuration of variables here we are looking at the marginalization problem as an inference problem and this is solved by the sum product algorithm so it's also called sometimes belief propagation because we're sending these messages on on this graph and it's an algorithm to compute these messages really efficiently to avoid this exponential complexity that we discussed in the beginning at least if the graph is not complicated if it has maybe just pairwise potentials or not too high order potentials and belief propagation assumes that the graph is singly connected so there's no loops it's ever a chain or a tree but we'll see that this can also be generalized to loopy graphs if we let go some of the guarantees in particular the guarantee of exactness if we are happy with an approximate solution which often is good enough then we can also apply this algorithm on loopy graphs and the algorithm works as follows we initialize in the first step then we send variable to factor messages then factor to variable messages and we repeat this until all messages have been calculated and the algorithm is converged and then we calculate the desired marginals which is the goal of the sum product algorithm from these messages so let's look at these steps in more detail in the first step this is the initialization step the messages from the extremal node factors are initialized to the factor themselves so here's an example the message from the factor to the variable um if this factor is an is is the end of the graph basically then this message here is uh simply the if there's no other variables connected to that vector then this message is simply the vector itself um and that's easy to see if you look at the equations from the previous slides and the messages from the extremely variable notes to the factors is set to 1 but they don't really matter because they get updated directly with the algorithm in the second step [Music] then the second uh step of this algorithm is the variable to factor message computation we have already seen the formula it's simply computing the product of all the incoming messages and here's an example we have this variable that wants to send a message to this vector so we want to calculate this message here and this message is the product of the incoming messages but not the message that's sent from that vector to x of course we are computing messages sent in all directions but for computing this 
message this update here we only considered incoming ones to be mathematically correct as we have derived from the simple example that we've shown now the sum the factor to variable message has this form as we've already seen and this gives also raised rise to the name of this algorithm the sum product algorithm because we have a sum of a term that involves products so to compute this message sent from the factor to this variable we go over the entire state space over all y1 y2 and y3 except for the variable that we're sending to this is excluded here and we compute the value of this factor multiplied with the value of the incoming messages that are also functions of these variables and then we're sending this to this variable and then finally once we have done this algorithm we have applied this algorithm and has converged then we can read off the marginal distributions by um simply taking the product of all the messages coming into a particular variable now in this case the difference here unlike in step two is that here we calculate the product of all the neighbor over all the neighbors so there's no factor that we're sending a message to so we are taking all the incoming messages in this case the product of and then we know that the distribution must be proportional to this it's not normalized there's no guarantee for normalization here um but because we're getting out a vector it's easy to normalize that vector to one and that we can do in post-processing now a few words about numerical precision in practice what we do for very large graphs is to use a so-called log representation because in very large graphs we as you have seen have to compute products over many messages and if these messages are smaller than one then they might become suddenly very very small or if they are bigger than one they might become very large and this leads to numerical problems when storing these messages in a computer as floating precision or double precision numbers and the solution to this is to instead work with log messages so instead of working directly with the messages mu work with the messages lambda which are the logarithm of mu and if we do this then the product for instance in the variable to factor messages turns to a sum and of course the sum is much much easier to work with it doesn't have the problem that small numbers get get to zero very quickly because we're just summing up terms here so it's it's better to work with and that's what you you will have to do if you work with reasonably sized problems and apply this algorithm to it just because a computer has fixed precision or limited precision um yeah and so similarly for the factor to variable messages they then become they look slightly more ugly here so because we have the logarithm now we have the logarithm of a sum doesn't simplify further so the logarithm remains here but inside here this message is replaced by lambda and we have well we have the exponential of this sum because the exponential of this sum is the product of the exponential of this expression and because we have chosen this as the logarithm of this this is equivalent all right so it's just uh it's just the same now this was the sum product algorithm for computing marginals we do message passing in both directions between fact from factors to variables and from variables to factors and once the messages don't change anymore once the propagation has happened along the entire chain or the entire tree we can read of the marginals as discussed now there's a second 
algorithm called the max product algorithm which is very similar but tackles a different problem the i think the goal here is that for a given distribution p of abcd we want to find the most likely state we don't want to find a marginal what we want to find here illustrated with this arc max notation we want to find the a star b stars c star and d star which are maximizing this probability distribution i want to find the arc marks and this is called the maximum a posteriori problem or map problem or if we find it it's a solution and it's called maximum upper story because we're doing this inference on a graphical model where through the specification of this graph and the relationships in that graph we have specified our prior knowledge so we are finding the most likely solution after integrating our prior knowledge into the problem that's why it's called maximum upper story we are maximizing the posterior again we use the factorization structure to distribute maximization to local computations as before with the summation so again an example for the chain here and we can go a little bit more quickly so it's exactly the same with the maximization where we observe that some of these factors here don't depend on some of these variables and so for instance the d can be pulled inside and then we call this a message sent from d to c similarly a message from c to b and from b to a and so on now what we do with this algorithm actually is we obtain the maximal probability value but that's not what we were after we wanted to know the most probable state so we're not at we haven't arrived at the goal yet and the solution to obtain so this is only the first step the solution to obtain the max the the map state is that once the messages are computed we can find the optimal values by backtracking so we can't find now backwards here the optimal a by maximizing this message that we have computed and then once we have found the optimal a we can plug that in here and we can go one step backward and find the optimal b by maximizing this expression and recurse further backwards then and this is called backtracking or dynamic programming or in the case of a markov chains also called the viterbi algorithm maybe i've heard of this before if however the maximum is unique then the map solution is the maximum of the max marginals and that's much easier to compute because we need to just find the maximum of the computed vector per variable so we run the max product algorithm that's similarly defined to the sum product algorithm just with a maximization instead of the summation and then after running it we obtain a vector at each variable and that vector is called a it's not a marginal because we didn't run the sum product algorithm so some people call it a max marginal because it's kind of the marginal of the max product algorithm despite the fact that actually marginal is not correct but let's call it a max marginal so for these vectors that we have obtained after running the max product algorithm we can then just go through each vector individually and find what is the maximum value and then pick that state so for binary variable let's say if the if the second entry is larger then the state of that variable is one and if the first state is larger than if the first entries larger the state would be zero so this is much easier now because we can do this pre for each site and in most cases the map solution is unique so we can do that okay so the algorithm in this case is the same as before we initialize we compute 
variable to factor messages factor to variable messages and we repeat this until all messages have been calculated and then we calculate the desired map solution i haven't specified the individual steps here because they are completely analogous to the sum product algorithm and i have summarized them on the last slides of this unit to help you also with the implementation of this algorithm so all the information is there now what we have assumed so far is that the graph that we consider is either a chain or a tree but it doesn't contain any loops however in computer vision many problems actually have loops if you look at a simple image grid like this four connected structure that we have discussed in the beginning of the lecture then of course there is loops in this graph already very early on if you just look at a very local part of this graph you have loops and so these assumptions that we have made in order to get an exact solution actually break however of course the messages are well defined also for looping graphs so we can just try to apply the same algorithm that's exact for trees and chains to loopy graphs and that's called loopy belief propagation so we can apply the same graph the same algorithm to loopy graphs as well but the downside of this is that we now lose exactness so it's only an approximate inference and in some cases there's even if in the simple case in the simple algorithm that we consider here there's even no guarantee of convergence so it might actually diverge or oscillate but often for many problems in computer vision this works surprisingly well despite these violations of the assumptions um now while the question is what if we have such a loopy graph how should we actually pass the messages and typically people either choose a random or a fixed order and a popular choice is that we first pass because it's a bipartite graph all the factor to variable messages and then we pass all the variable to factor messages and we repeat this for any iterations until we hopefully converge again there is no guarantee of convergence but we hopefully converge and so this can an advantage of this strategy is that there's a lot of parallelism that can be exploited because all the messages that are sent from factors to variables can be computed in problem so it's easy to pluralize this algorithm so here's a summary and these slides mainly serve as a guide or to help implementing the exercise task where i try to make everything concise on a few slides describing the entire algorithm the goal of the sum product belief propagation algorithm is to compute marginals of some distribution by passing factor-to-variable messages and variable-to-factor messages as we have seen or derived before to avoid very large values what you can do is you can in addition to the log representation which you should always use and which is used here subtract the mean of the messages after the message update in equation 2 to keep them around zero and similarly here the equation for the max product belief propagation algorithm which computes the most likely state where now the variable defector message hasn't changed but this message here has changed and the sum has been replaced by a maximum in the special case of a pairwise mrf this simplifies further and that's the case that you're going to consider in the exercise so here for instance the unary factor is simply the logarithm of the factor itself it's easy to see this because well there is no other variables except for the variable of that vector so the 
summation goes away and and also these terms here disappear because there's no no neighboring variables of that factor that are not the variable that we consider um and but also the pairwise factor simplifies because now there's only one neighbor so we have to just sum over the states of y which is the neighbor if we want to send from f to x and we have just this one incoming message that comes from y that arrives from y at vector f and the same for max product bp and then at the end we have the readout stage where we read of the marginal or the map state at each variable which is similar to the variable to factor message except that we sum over all the incoming messages so this is done here for the marginal and because this like the output here is not normalized we have to normalize it at the in the end um you can see that we have an exponential here because we're considering log messages so of course we need to first convert this into probabilities and then here on the right hand side this is what we called considering the max marginal so we're taking these so-called max marginals here this is the sum over this log messages and then we find the entry that is largest in that vector and this is a very generic overview of the algorithm the input is the variables and the vectors that are specified this is the definition of the problem we allocate the messages in this log representation then we initialize the messages to zero let's say which corresponds to a uniform distribution but you can also initialize differently and then for some iterations we update all factor-to-variable messages all variable-to-factor messages these are these equations from the previous slide color coded and then we normalize them by subtracting the mean and in the end we read of the marginal map state |
Computer_Vision_Andreas_Geiger | Computer_Vision_Lecture_41_Stereo_Reconstruction_Preliminaries.txt | hi and welcome to the fourth lecture of this computer vision course last time we discussed sparse 3d reconstruction techniques how to based on sparse correspondences or features reconstruct 3d structure sparsely from two or more views the topic of today is dense stereo reconstruction which is how we can obtain a more dense 3d reconstruction from just two images if the camera intrinsics and the camera extrinsics the camera poses are already known this lecture is structured into five units in the first unit we are gonna cover some preliminaries to the following units including how to bring the images into a suitable configuration such that matching is fast and also how to obtain depth from the actual measurements the so-called disparities that we are actually estimating using these stereo algorithms in the second unit we are going to discuss the most basic then stereo matching algorithm called block matching and then we move on to algorithms that use deep neural networks so-called siamese networks we also cover spatial regularization how to overcome some of the ambiguities that are present in this problem and finally cover the more recent end-to-end learning approaches that require a lot of data but produce state-of-the-art results today okay so let's start with the preliminaries how can we actually recover 3d from an image in general this is of course an ill-posed problem but there are several cues if we look at an image for instance what we can look at are occlusion cues if we look at this image here on the right then most of us will probably say well based on this configuration here the object this vertical object here is in front of this horizontal object in the back now this is a mechanism that has been exploited by artists and you're probably all familiar by the famous asher paintings so this is one cue how to obtain 3d structure from an image another cue is parallax and that's closer to the stereo problem that we're going to discuss today but it's also something that you can obtain from one image alone where if you look at an image and you see that an object is moving relative to another object for instance by i'm seeing a mirror reflection of that same object then you know that these objects can't be located at the same depth so for example here we have this pole with the lamp on top and we have the moonlight and so they are clearly displaced here while in the mirror reflection they are co-located and this is called parallax and this principle is also used in stereo that we're going to discuss today but using two cameras another queue is of course perspective if you look at an image and we know that some things in that image are purple parallel then we can infer something about the perspective effect and also the depth then one thing that we can also use this so-called accommodation a code accommodation means focusing on a target because we do have lenses that are not sharp everywhere we also we as humans need to actually focus on a specific object and if we focus on a specific object then two things happen one thing is that we have to adjust the lens and in the human eye that's done with muscles as you can see here there's two different configurations on how the lens in the eye is stretched and because there is receptors inside these muscles we can recover a signal on how these muscles are actually what is the state of these muscles that's also something that goes into of course the 
focusing process so we can measure the focus state of the lens and the other thing that happens is that different areas in the image if focusing on a particular point get blurred differently so if we look at this point here then maybe the background gets blurred and now this effect can also be used in an artistic manner for instance in this example where if you see this image and i ask you is this a real image it will probably tell me well it looks like a real image but it's it's more likely to be a toy scene something that you would observe when looking at a toy scenery through a big lens and that's true because the lens that you would need in order to create such a depth of field effect as we can see here would be insanely large for a real scene so this effect here has been synthetically created and it's actually a real image that is sharp everywhere and has been captured with a relatively small aperture compared to the scene and then finally what we're going to talk about today is stereopsis where we use the principle of triangulation and take an image of a scene from two different locations from two different viewpoints and by observing points in the scene move differently far by observing this parallax effect and by knowing the location of the cameras we can then triangulate that point in 3d we can follow these rays and see where these rays intersect and that gives us the 3d point now why is by not binocular stereos is a good idea well there is some examples in um history of species that do not have two eyes there's some species that have more than two eyes actually and there's some very rare cases like here that have only one eye but in general most species have two eyes and so it seems like this is a established system it's the minimal configuration with which we can perceive depth relatively robustly and so that's something that we probably also want to exploit if we want to make machines see there's also some examples where the principle of stereo perception has been exploited in a special way in a way that um the so-called baseline which is the distance between the cameras has been artificially enlarged in order to obtain a better depth perception and remember that when you do triangulation then the larger the relative viewpoint the more precise the more certain the triangulated 3d point is and so if you put the cameras further away or if you put the focal points further away then you will get a better sense of depth and that has been the principle has been used by the stock 8 i'd fly and also by these devices like the stereoscopic range finder that has been used in uh also in the military in order to determine the distance of a target from the observer by adjusting prisms in this tube in order to triangulate distance now here in this course we are going to covering the estimation of depth or equivalently disparity which is inverse depth disparity is the relative displacement that we observe between pixels in the two images that we have captured of the same static scene and we're trying to do this as densely as possible so the goal is that for ideally for every pixel that we have in the input image let's say in this reference left image here we want to find a depth map that's color-coded in this disparity map here on the right-hand side and as you can see in this case the scene has not even been captured at the same instance in time but the majority the large parts of the scene are static um and so that the majority of the scene can be reconstructed despite the fact that there is 
some people here moving around now given that we have reconstructed such a displacement map such a disparity map that we're going to talk about a lot in this lecture we can of course also recover the depth because it's it's basically just the inverse of the disparity as we will see and then we can take all of these pixels for which we have such a depth value obtained and we also know the color value because we have the color in the reference image and then we can project this into a 3d point cloud and for this particular example this has been done with a very old stereo algorithm that we have been publishing more than 10 years ago it's actually a real-time algorithm was already real time in 2010. we obtain a 3d reconstruction like this this is a reconstruction that's relatively dense and comparably precise at least at the time while being real time and covering the majority of the scene and only being reconstructed from these two images so to summarize what is the task what is the goal we want to construct a dense 3d model from two images of a static scene or a mostly static scene alternatively we can take two images at the same point in time if we can synchronize the cameras for example in a stereo sensor that's mounted inside a vehicle many modern vehicles have cameras inside and some of them do have stereo cameras these cameras are triggered electronically such that they capture the scene exactly at the same point in time and then of course the scene is at that point in time it's static with respect to the two cameras so the pipeline is as follows we first calibrate the cameras intrinsically and extrinsically as we discussed in the last lecture then we rectify the images given this calibration and what rectification means we're going to discuss today it's basically the process of bringing the images into a format that's more easy to process using these stereo matching algorithms and then we compute this displacement this disparity map for the reference image for every pixel in the reference image we ask well how much does this pixel have to move um to or how much does this pixel move um the freeze uh in in order to land on the on the new location in the right image in the target image how much is the pixel displaced between these two image then typically there's a step that removes outliers using some consistency or occlusion test and then we can obtain the depth from the disparity using the parameters of the camera and finally we construct a 3d model for instance via a simple a concatenation of the 3d points that would be the simplest method and meshing them or via more advanced methods such as volumetric fusion and meshing as we'll discuss them in lecture 8. 
then let's see how such a stereo matching algorithm at a broader level could be integrated into a bigger system into a entire 3d reconstruction pipeline so what do we do here we have a not just two images but a set of input images from which we want to obtain a 3d model and then a typical procedure is as follows we take these images and we compute the camera poses using incremental bundle adjustment let's say as we discussed in lecture three and then once we have the camera poses and we know which of these views are adjacent to each other we can compute dense correspondences as we'll discuss in this lecture for each adjacent view using the ap polar geometry that we know then once the cameras are calibrated and so we get depth maps for each of the images and these are two and a half d representations so in order to combine all of these into a 3d model we can use algorithms such as depth map fusion that lead to a coherent 3d model where we can extract a mesh that takes into account all the observations that we have made so the topic for today is really this this part of the pipeline where we assume the camera has been calibrated intrinsically and that also all the camera poses have been estimated precisely which is typically the first step and then we can go and start and find adjacent views and try to do then stereo matching or across the json fuse if we do this for two images as discussed in this lecture this is called binocular stereo matching if we do it for more than two views and we'll discuss this later in the lecture this is called multi-view stereo so both options exist using only two views is a little less precise of course is a little bit more ambiguous but it's typically faster so both problems are heavily researched still in in the community and then the final model could look like this this is a very old model that we have produced and it's a model that has been produced by simply stacking all the point clouds together so no sophisticated fusion algorithm but you can already see you can already get a pretty good idea of what the 3d structure should look like and it's grayscale because these were grayscale cameras you can see these little triangles here these red triangles here are the camera poses illustrate the camera poses that have been used for each of these camera posts we have estimated depth map this was done using a stereo camera and then we have accumulated all the point clouds into a larger 3d reconstruction okay now something that's really important for performing dense stereo reconstruction is that we know the a bipolar geometry and that's what we have discussed already in the last lecture and let me just recap that with a single slide so here on the right is an example of an api polar geometry where we have the epipolar plane that is connecting the 3d point projected onto both of these image planes and the 3d point is projected to the left image plane with a wire camera calibration matrix k1 to the right image plane via k2 and then once we have estimated the a bipolar geometry which can be expressed by the essential matrix e tilde which is a homogeneous matrix image correspondences expressed in homogeneous coordinates are related by this ex simple expression here x 2 transpose e x one equals zero where these x tildes here are the uh the ray directions where we take the pixel location and multiply with the inverse of the intrinsic calibration matrix k for each of the cameras and then one important property of this apipola geometry is that the corresponding the 
correspondence of any pixel x1 in the first image is located on the apipolar line in the second image and that apipolar line is given by the essential matrix multiplied with that point in the first image or the fueling direction to be more precise and then vice versa the correspondence of some point x2 must be located on the apipola line l1 in the first image and then also the apipolar lines intersect at the ap poles so here's another example where the 3d point has changed we can see how this plane forms a pencil of planes and all of these planes actually pass through the ap poles which means that also all the avapola lines pass through the ap poles that's what we have discussed last time now the most important property of the apripolic geometry that we're going to exploit today is the fact that i've just mentioned that if i take a point in the left image call it x1 and i want to determine where that point is located in 3d space if i want to triangulate that point and i have to find the correspondence of that point in the right image in order to do so then that correspondence must be located on the epipolar line so instead of searching through the entire second image domain we just have to search along the apipola line remember that if we want to if we don't know the relative camera poses we want to estimate the apollo geometry then we don't know this property so then we really have to do sift matching for instance to all possible feature locations in the right image but now we just need to search along the apollo line and that's much more efficient so now we can do this densely we can query for every pixel in the left image all pixels along each of the corresponding api polar lines in the right image and find the best possible pixel that in some form as we'll discuss later corresponds best to that pixel that looks most similar to the pixel in the first image and this reduces the corresponding search problem to this much simpler 1d problem so as an example for vga images if we wanted to for that point here we wanted for that pixel here we wanted to find the correspondence in the right image and we wouldn't have such a simplification to a 1d search constraint then we would have to search for all the pixels in the right image which for vga resolution would correspond to 300 000 pixels so we have 300 000 hypothesis that we need to do a comparison to in terms of little image patches let's say now if we just have to search along the apripolar line and let's assume the ap line is roughly horizontal then we have roughly 640 pixels only so factor 480 less computation which is significant of course now one important concept that we'll introduce in this unit is image record image rectification what happens if both cameras face exactly the same direction which means that the image planes are lying in the same plane in the same 3d plane in other words the image planes are coplanar what that means is you can already imagine what's happening then is that the epipoles they will not lie inside the image domain anymore because the baseline obviously can't intersect image planes anymore because the baseline is now parallel to these two planes so what happens is that these ap poles e and e prime d or e1 and e2 they go to infinity and all the epipolar lines become parallel to each other and if we do it right they also become horizontal so they become parallel to each other horizontal so we we just have to search now correspondences along image rows and this significantly simplifies implementation we don't 
have to search across slanted lines across slanted epipolar lines in the target image but we can search along image rows which is much simpler to implement and that's why rectification is important and rectification is typically applied in order to make these stereo matching algorithms binding binocular stereo matching algorithms it's only working for two images can only do this for two images unless all of the cameras are located all of the camera centers are located on a line otherwise you can't do that for more than two images but if you have two images then you can exploit this property and that's what almost all binocular stereo matching algorithms do now this is nice but it's very difficult to actually manufacture a stereo setup where these cameras are exactly facing the same direction because just a slight deviation from that direction which is easy to introduce by just some mechanical slack that is introduced is already resulting in a couple of pixels error and so matching along apripolar lines will not work anymore because the april lines are shifted by a few pixels and so the patches will not align anymore and these matching algorithms are very sensitive to that so you need to really make sure that these api polar lines are really pixel accurate or sub pixel accurate so what can we do if the cameras or the images are not in this required setup setup well the nice thing is that there is a trick we can actually rewarp these images through a rotation that maps both image planes to a common plane parallel to the baseline in other words we can bring the non-rectified setup into a rectified setup and that's called this process is called rectification and the nice thing about this is that this for this rotation around the camera center because we're just rotating the the camera into a new virtual camera around the camera center the optical center these two points here we don't need to know the 3d structure similar to where we have estimated homographies for stitching panoramas by taking pictures from the same location and rotating the camera this is also just the homography that we can apply here a rotation homography so without knowing the 3d structure which we don't know we can actually warp the images to bring the cameras into this rectified setup um which is really nice so here's an example so this is the setup from before and now we're rotating these cameras virtually by warping the images such that the ap pools move to infinity and the epipolar lines become parallel and horizontal so here we have the apipolar line for x1 for example lies on the same image row as x1 in the other image and the apollo line of x2 lies on the same image row in the first image now how does this work how can we make a bipolar lines horizontal let's first look at a simplified scenario for that simplified scenario let's assume that both intrinsic both calibration matrices and the rotation matrix are equal to the identity matrix and that the translation vector between the two cameras is just a vector in x direction so from the left camera from the first camera we go t millimeters or meters into the x direction but then the other coordinates the y and the c coordinates are empty for r0 for that translation vector and so in in this what we want to con film now is that in this simplified setup this is the goal um the apollo lines are actually horizontal this is already directified setup if this setup would occur then we would be done in this for this particular case the essential matrix is very simple remember 
For this particular case the essential matrix is very simple. Remember, the essential matrix is the skew-symmetric matrix corresponding to the cross product with the translation vector, multiplied by the rotation matrix. Because the rotation is the identity, this becomes just the skew-symmetric matrix of t, and because the translation vector is so simple, all entries of that matrix are zero except the ones involving its x component. Now, as the calibration matrices are the identity matrix, two corresponding points in augmented vector notation are related by the epipolar constraint. Remember from the previous lecture that we had x-tilde there, because we assumed an arbitrary calibration matrix and had to compute x-tilde by multiplying x-bar with K inverse in order to get the viewing direction; here we do not need to do that, because the calibration matrix is already the identity, so we can directly write x-bar, and otherwise this is simply the epipolar constraint, which must be zero. If we plug in the essential matrix, we obtain the following expression. We first multiply x-bar-1 with this matrix, where y1 denotes the y component of the first point, the second element of the x-bar-1 vector. Then we take the inner product of x-bar-2 with the resulting vector, denoting by y2 the second element of that vector: the first element is multiplied with zero, the second element y2 gets multiplied with minus t, and the last element of the augmented vector, which is of course one, gets multiplied with t times y1. So we are left with the expression t·y1 minus t·y2, and this must be equal to zero because it must satisfy the epipolar constraint. What we see is that for this very simple setup, which we constructed artificially, y1 equals y2. In other words, the two images of the 3D point x — its two projections — must lie on the same horizontal line; they are on the same scan line, the same image row. The projections of the 3D point lie on the same image row: this is the rectified setup.
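To make the algebra of this simplified setup easier to follow, here is the same derivation written out compactly; the notation follows the lecture, with the corresponding points in augmented vector notation written as x̄ᵢ = (xᵢ, yᵢ, 1)ᵀ:

```latex
% Simplified setup: K_1 = K_2 = I, R = I, t = (t, 0, 0)^T
E = [\mathbf{t}]_\times R =
\begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & -t \\ 0 & t & 0 \end{pmatrix},
\qquad
\bar{\mathbf{x}}_2^\top E\, \bar{\mathbf{x}}_1 = 0 .

% With \bar{\mathbf{x}}_i = (x_i, y_i, 1)^T the constraint expands to
\bar{\mathbf{x}}_2^\top
\begin{pmatrix} 0 \\ -t \\ t\,y_1 \end{pmatrix}
= t\,y_1 - t\,y_2 = 0
\;\;\Longrightarrow\;\; y_1 = y_2 .
```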
Now, what if this is not the case — how can we take an arbitrary setup and rectify it? To rectify an image we can compute a rectifying rotation matrix that rotates the cameras virtually such that this particular setup is fulfilled. What do we have to do? We have to align the x-axis of the camera coordinate system with the translation vector, so that it points toward the other camera center — toward the epipole, in other words. Then we want to introduce as little distortion as possible for the other two axes of this new, virtual camera coordinate system, and of course the camera coordinate system must always be a proper right-handed coordinate system, so all coordinate axes must be perpendicular to each other. With these constraints we can calculate the rotation matrix. We parameterize it by the vectors r1, r2 and r3, which are the rows of that rotation matrix. r1 simply points in the direction of the translation vector; of course the magnitude of the translation vector is in general not equal to one, so we have to normalize it to obtain a proper unit vector. r2 is the cross product of the z-axis of the original camera coordinate system with r1, which gives us the y-axis of the new camera coordinate system, and the third row, the z-axis of the new camera coordinate system, must be the cross product of the first two, because otherwise it would not be perpendicular. This is one choice — there are more choices — but it is the simplest and the best choice if the cameras are already roughly aligned, and we will see an illustration of this on the next slide. Now, as the epipole in the first image is in the direction of r1, it is easy to see that the rotated epipole is an ideal point: we simply multiply r1 with the rectifying rotation matrix, and because the first row of that matrix is r1, and r1 has magnitude 1, we get a 1 as the first entry; because the other two rows are perpendicular to r1, the remaining entries are zero. So after rotating it with the rectifying rotation matrix, the epipole becomes an ideal point, a point at infinity — it is a homogeneous vector whose last element is zero, which means it lies at infinity, and it lies at infinity in the x direction because it has a one in the x coordinate. Therefore, applying this rectifying rotation R_rect to the first camera leads to parallel and horizontal epipolar lines, as desired. Here is an overview of the entire rectification algorithm. We first estimate the essential matrix, then we decompose it into t and R and construct R_rect as above. To warp the pixels in the first image we use this expression: we take a pixel in the first image, apply the inverse of the calibration matrix of the first camera, apply the rectifying rotation, and then apply the calibration matrix of the virtual camera, which we can choose arbitrarily, in order to obtain the projected point. Similarly we can do this for a point in the second image, except that we also take into account the rotation between the first and the second camera — the extrinsics — because the overall rotation there combines the rectifying rotation of the left camera with the relative rotation between the left and the right camera, so that both image planes are coplanar after this warping. A little remark: K is a shared projection matrix — it must be the same for both images — but it can be chosen arbitrarily; typically it is chosen close to one of the calibration matrices, say K1. And in practice we do not compute this forward transformation but the inverse transformation, because taking all the pixels in the original image and projecting them into the virtual image is not ideal: there may be points that get projected on top of each other and regions that get stretched, leaving missing pixels in the target image. What is better is to go through all the pixels in the target image, this new virtual image, and query the source pixel in the original image by applying the inverse transformation — we simply invert the expression. This point will in general not lie at an integer location, of course, so in the original image we do an interpolation, bilinear or bicubic or some other interpolation technique, in order to find the RGB color value that we have to insert at the target pixel for which we have queried the source.
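To make this construction concrete, here is a minimal NumPy sketch of the rectifying rotation and the warping homographies described above. The variable names (t, K1, K2, K_new) and the numbers are my own illustrative assumptions, and the handling of the second camera's warp is only an outline of what the lecture describes, since the exact rotation convention is not spelled out here:

```python
import numpy as np

def rectifying_rotation(t, z_axis=np.array([0.0, 0.0, 1.0])):
    """Build R_rect whose rows are r1, r2, r3 as described above."""
    r1 = t / np.linalg.norm(t)        # new x-axis: points toward the other camera center
    r2 = np.cross(z_axis, r1)         # new y-axis: old z-axis crossed with r1 (little distortion)
    r2 /= np.linalg.norm(r2)
    r3 = np.cross(r1, r2)             # new z-axis: completes a right-handed frame
    return np.stack([r1, r2, r3])     # rows of the rotation matrix

# Example: baseline mostly along x, slightly misaligned
t = np.array([0.30, 0.01, -0.02])
R_rect = rectifying_rotation(t)

# The epipole direction r1 maps to an ideal point in the x direction
print(R_rect @ (t / np.linalg.norm(t)))   # ~ [1, 0, 0]

# Warping homographies (K1, K2: original intrinsics; R: relative rotation between the
# cameras; K_new: shared intrinsics of the virtual cameras, e.g. chosen close to K1)
K1 = K2 = K_new = np.array([[500.0, 0.0, 320.0],
                            [0.0, 500.0, 240.0],
                            [0.0,   0.0,   1.0]])
R = np.eye(3)
H1 = K_new @ R_rect @ np.linalg.inv(K1)
# Whether R or R.T appears below depends on the chosen rotation convention; the lecture
# only states that the relative rotation must be included for the second camera.
H2 = K_new @ R_rect @ R.T @ np.linalg.inv(K2)
# In practice one warps with the inverses of H1 and H2, sampling the source images
# with bilinear (or bicubic) interpolation at the back-projected locations.
```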
So here is again a graphical illustration of the calculation of the rectifying rotation matrix. Let's assume we want to compute it for camera 1 and then adapt camera 2 accordingly, based on the equation that includes the additional relative rotation R. We have the camera center here — I did not draw the second camera because it is irrelevant here — and we have the principal axis, the z-axis of this camera; we can see that the epipolar line is not horizontal in the image but slanted, and the epipole lies inside the image domain. What we do now is compute the desired new x-axis of the new coordinate system as the vector pointing toward the second camera center, and then we compute the cross product of the z direction of the original camera system with this new vector in order to obtain the new y-axis of the new camera system. By doing this we make sure that we introduce little distortion, that the camera stays oriented roughly the same way — we could also rotate the camera with an in-plane rotation, but we do not want to do that; we want to introduce as little distortion to the original image as possible. Then we have the third axis, which is perpendicular to the first two. So we have computed the x, y and z axes of the new camera coordinate system; this is the rotation that gets applied to the original coordinate system, and by doing that we obtain a new image in which the epipolar lines are horizontal. Here is an example. At the top — this is actually the right image on the left and the left image on the right, but that does not matter — we see the epipolar lines before rectification, and at the bottom you can see the perspective transformation, the homography, that has been applied in order to warp the two images such that the lines we see above become parallel to each other and horizontal in the image domain. If we look at a particular point in this image, we find the corresponding point along this epipolar line in the other image — that would be the correct correspondence — but if we do this in the rectified setup, we just have to search along a single image row: correspondences are located on the same image row as the query point. Here is another example of a rectified setup. I show you the first image and the second image, and what you can see, if I toggle between the two, is that points that are far away move little and points that are nearby — like this point on the backlight of the car — move more. This motion of the points is called disparity, and because of this relationship — points close to the camera move a lot, points far away move little — we can estimate depth from disparity. There is in fact an inversely proportional relationship between disparity and depth: the depth is proportional to 1 over the disparity. Here is an illustration of a depth map estimated by some algorithm, where you can also see, with the color coding at the bottom, that points that are far away have little disparity — they move little — and points that are nearby, in the yellow, green and red regions, have a larger disparity; in this case the pixel motion is between 0 and 160 pixels. The pixels actually move to the left: if we look at the left image and then at the right image, a pixel in the left image has moved from its current location to the left.
But we typically still count the disparity as a positive value, despite the motion being to the left. Now, given that we have these disparities calculated, what we want to do is recover depth from the estimated disparities. What we know in this rectified setup — which is quite nice — is that the left and right rays must intersect, as both lie by definition on corresponding epipolar lines: because we are searching along scan lines, we are searching on the same epipolar plane, and so the rays must intersect. So in this particular setup — here we have the cameras again, now in a bird's-eye view, where we do not show the y coordinate, just x and z — we have the image planes and the camera centers of the left and the right camera, with the image planes parallel to each other in this rectified setup, and there is a 3D point at distance z. The disparity is the combination of x1 and x2, the relative displacement in the images, from here to here basically, if we were to toggle between the two images. Because we have this very simple setup, the depth estimation also becomes a very simple formula. Taking the disparity d as x1 minus x2 — x1 is here and x2 is actually negative, so if this were 10 pixels and that were minus 10 pixels, then the disparity, the displacement, would be 20 pixels — the following relationship holds. We look at these two similar triangles, the red one and the blue one. The blue triangle gives us the following constraint: z minus f — which is this distance here — divided by b minus d, the baseline minus the disparity, is equal to the entire z divided by the entire baseline b, because the triangles are similar. We can slightly rewrite this equation by multiplying both sides with the denominator and solving for z, and so we see the inverse relationship: z is proportional to 1 over the disparity — the depth is proportional to 1 over the disparity. In particular, it is equal to the focal length f of this rectified setup, of the new virtual camera matrix, times the baseline b, which we know from the extrinsic calibration, divided by the disparity d that we estimate using a stereo algorithm, discussed in the next units. What is particular about disparity estimation is that the depth error delta z grows quadratically with the depth, which means that if we estimate depth from these disparity maps for objects that are very far away we are very imprecise, while for objects that are close we are very precise. The proof of this is very simple and will occur in the exercise. |
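As a small numerical illustration of the z = f·b/d relationship derived above — the focal length, baseline and disparity values here are made-up example numbers, not values from the lecture:

```python
import numpy as np

def depth_from_disparity(d_px, f_px, baseline_m):
    """Depth z = f * b / d for a rectified pair (d and f in pixels, b in meters)."""
    d_px = np.asarray(d_px, dtype=float)
    return np.where(d_px > 0, f_px * baseline_m / d_px, np.inf)  # zero disparity -> at infinity

f, b = 500.0, 0.3                                # assumed: 500 px focal length, 30 cm baseline
disparities = np.array([160.0, 20.0, 2.0, 1.0])
print(depth_from_disparity(disparities, f, b))   # ~[0.94, 7.5, 75.0, 150.0] meters

# A one-pixel disparity error matters far more for distant points: going from
# d=2 to d=1 changes the depth by 75 m, while going from d=160 to d=159 changes
# it by less than a centimeter - the quadratic growth of the depth error with
# depth mentioned above.
```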
Introduction_to_Robotics | Lecture_12_Evolution_of_Robotics.txt | Welcome back. So, we will continue the introductory part of this robotics course. So, in the last class I briefly mentioned about the various applications of robots and then how people are actually using robotic in various fields. And we saw that because of the varied applications and the or all people from different works of life we have a difficulty in defining the robots, so for some time it maybe a hobby for many people, sometime it may be an engineering discipline or it's more of a technology used in the industry. So, one definition is not really going to suit for the robotics and that's why we saw that the earliest definition or the definition existing quite some for quite some time, it is like a robot is a software controlled mechanical device that uses sensors to guide one or more end effectors through programmed motions in a workspace in order to manipulate physical objects. So, that is more suitable for an industrial robot and then the current generation reports do not really fall into this definition that is why we got a new way of defining robot as the intelligent connection of perception to action. So, this is what actually we saw in the last class. So, when I say intelligent connection of perception to action, it basically says that, I have a perception of doing something which is not possible for me in the normal situation of using the current level of technology or the current level of products available I am not able to do, but I have a perception of doing that. And if I can actually make it happen through some intelligent connection then we call it as robotics. So, to give an example assume that I have a perception or I have a perception of getting my lunch delivered to my office simply by pressing a button or activating something, so I do not want to go home, I don't want to take my car and then go home for lunch and then come back instead, I would like to have my car just when I press a button I want my car to go to my home collect my lunch and then bring it to my office somehow, if it is I mean that is my perception. So, I have a perception of having my lunch delivered to my office, so this is my perception. Now, the question is that, if whether I can do it through an intelligent connection? So, one way of again have a assistant or somebody to do this or I can actually have a driver to drive my car and then come back with lunch or I can have a delivering agency doing this. So, these are all possible way to get my lunch without me going out of my office, so that is my perception. So, my perception is I do not want to go out of my office, I want my car to get my lunch. And as I mentioned there are many ways to do this. But if you look at there are no intelligent connection in any of these because my perception of getting lunch and then delivering the lunch I said the action, the action is basically getting the lunch without me going out of my office. Now, if I can have a intelligent connection, for example, if I actually convert my car into a fully autonomous car and then program my car to go to my home and collect my lunch and then bring it back to my office, then there is an intelligent connection, I am actually connecting my perception to action through an intelligent means. So, that kind of an intelligent connection if you can see in any of the systems or products then we call this part of robotics, so we call that as the robotics technology. 
So, this is the way we can define robotics, because any other definition will not fit all the types of robots existing in the field. That is where we say that if you can have an intelligent connection of perception to action, then we call it robotics. And to what extent you can call it robotics depends on the extent of that intelligent connection: how intelligent your system is, or how intelligently you are connecting this perception to action through some means — actuators, sensors, control or whatever it is. If that intelligent connection is there, then we call it robotics. So, this is the way robotics is defined nowadays. Of course, you can always question this definition, because again it does not really define what is meant by an intelligent connection, how intelligent it has to be — whether it can be simple remote control or has to be fully autonomous — so all those questions are there. But we will not get into those details; in general we can say that if there is a perception and there is an action, and these two are connected through intelligent means, so that you are achieving this perception to action through an intelligent connection, then we can call it robotics. So, that is the way we can define robotics now. So, let us briefly look into the history of robotics: how the robotics technology evolved over a period of time and what the current status is. If you go back to the fourteenth or fifteenth century, it was Da Vinci who actually brought something like a robot into the field — Leonardo's robot, which was something like an armoured knight that moved like a human, designed to make the knight move as if there were a real person inside. So, that was the mechanical device that looked like an armoured man, proposed by Da Vinci in 1495. And it was only in 1920 that the Czechoslovakian playwright Karel Capek introduced the word robot in his play Rossum's Universal Robots, and the word comes from robota, which means tedious labour; so robot actually means tedious labour. This play introduced the word robot and we have continued to use it for many years now. And then it was Asimov — most of you must have heard about Asimov, the writer — who published Runaround, in which he defines the three laws of robotics, and this at a time when we did not actually have any robots; in 1942 these three laws of robotics were published. We will see what these three laws are; they are more of historical importance. Then a lot of things happened: in 1951 a teleoperated articulated arm for the Atomic Energy Commission was used, because in nuclear installations handling of nuclear fuel is an issue, so they needed teleoperation that could operate the device from a remote location. So, in 1951, in France, they brought out this kind of device, and that was regarded as one of the major milestones in force feedback technology. And again in 1954 the first programmable robot was introduced — the "universal automation", planting the seed for the name of the future company, Universal Automation, or Unimation, the robotics company — and they introduced these programmable robots for industrial applications. And of course in 1962 General Motors purchased the first industrial robot from Unimation, that is, the universal automation company Unimation, and installed it on a production line.
The manipulator was the first of many Unimates to be deployed, so it was somewhere in 1962 that we had the first robot coming into industry for practical application. Around the same time, in 1965, the homogeneous transformation applied to robot kinematics was published by Denavit and Hartenberg, and that is actually one of the major milestones in the kinematic analysis of robots: they proposed a methodology to analyse the kinematics of robots and introduced the homogeneous transformation methods. After that a lot of development took place and many industries came up with robots, like the PUMA and the Stanford Arm; then Brooks Automation started introducing a robot with the SCARA configuration, and Fanuc also started working on industrial robots. And somewhere in the 1990s — in parallel with the industrial robot — a lot of people started thinking of various other kinds of robots. The industrial robot was a fixed-base one, and people started thinking of mobile robots, walking robots and so on. A six-legged walking robot was introduced somewhere in 1994, in 1995 Intuitive Surgical came up with their medical robots, NASA's Pathfinder mission started in 1997, and ASIMO, the humanoid robot, was introduced in 2000. In 2001 the Space Station robot, built by Canada and known as the Canadarm, was introduced. After 2000 a lot more happened: from 2001 to 2020 we had many applications being developed by various industries and academic institutes — underwater robots were introduced, then space robots, robots for elderly care, healthcare robotics — so a lot of development took place. This is just to tell you that the history of robotics is not very old: it was somewhere around 1960 that the first robot came, so the field is only about 60 years old. But it has spread very far, with wide applications, and a lot of research and development is still taking place; despite the great interest and the many people working on it, we still have a long way to go, because the technologies and the systems available are still far less adequate than what we actually expect robots to do. So, this is just to tell you that research and development is still going on, and a lot of work is to be done in order to make good autonomous robots to serve humanity. Now, I mentioned the laws of robotics. As I told you, these were proposed by Asimov — he presented the three laws in his story as coming from a Handbook of Robotics — and they are more of historical importance, but still, as robotics engineers, you should know that somebody proposed these laws, and they still have some relevance in terms of human behaviour and humanity. The first one is basically: a robot may not injure a human being or, through inaction, allow a human being to come to harm. So, no robot should be allowed to bring harm to a human — that was the first law. The second one: a robot must obey the orders given to it by human beings, except where such orders would conflict with the first law. So, whenever an instruction is given to the robot, the robot should follow the human, whoever is giving the instruction, provided it is not in conflict with the first law; that means if you ask a robot to injure a human being, the robot should not follow that instruction.
And then the third one is a robot must protect its own existence as long as such protection does not conflict with the first or second law. So, that is the second sorry the third law, a robot must protect its own existence as long as such protection does not conflict with the first or second law. So, the robot should not destroy itself or it should not take an instruction to destroy itself by someone, because that as per the third law it should not it should always try to protect its own existence. As long as it is not in conflict the first or second law. So, this is the these are the three laws proposed by Asimov, long back. But someone thought that these are not sufficient to protect the humanity from robots, so it does not talk about humanity as a whole and it's possible that a robot may be in control of a nuclear station or a very lethal weapon. And it can actually wipe out the humanity by action or inaction. That is why a zeroth law was added later to make sure that the robots are robots the humanity is protected or a humankind is protected from the acts of acts or non-acts of robot. And the zeroth law says that, a robot may not injure humanity or through inaction allow humanity to come to harm. So, as a humanity total human species should not come to harm through action or inaction of a robot. So, this was added as a third or zeroth law of robotics. As I told you it is of more of historical importance still people I mean nobody will really checks whether the robot follows the laws 1, 2, 3 etc, there is no checking or standardization based on the laws. But as a moral principle or ethical principle of design of products, we still ensure that the robots do not injure human or robots do not harm the humanity. But many times it is not really feasible to follow all these laws because technically speaking a medical robot or a surgical robot is used to cut open the human body or to do some surgical procedures in human body. And technically speaking it is actually injuring the human though it is controlled by a doctor, so we cannot really say that the robot should not injure a human being in that sense and that is why it is not possible always to ensure that all these laws are followed, but we look at the spirit of the law and then see whether we are able to follow the spirit of this law and make sure that the human as an individual or humanity as a whole is not getting affected by the robots action or inaction. So, that is about the laws of robotics. So, now let us look at the field how the evolution actually happened in the in this pictorial representation. So, this is from a paper published in IEEE magazine, so you can see that if you look at the growth the evolution of robotic research as I told you it was somewhere in 1960 s the industrial robots actually started coming into the market to us, so this was the period where actually the industrial robot had very good growth and lot of robots came into the market and lot of industries started using industrial robots. And it was somewhere in the same time somewhere in 1970 s, because of the success of the industrial robots to do many task in the industries people started using or thinking about can we use this for other applications also not only in industry and that is where the mobile robots walking robots and humanoid robots were actually started getting attention and many research centres and few industries started working on these robots. 
But there was not much of growth in that area because most of the focus was on industrial robots, most of the people were looking at the application of industrial robots, especially in the automobile industry and all. And it was somewhere in 90 s and beyond everyone realized that the industrial robotics technologies almost maturing and there is nothing much you can do because whatever we are doing we can do is being done in the industries because lot of robots are being used. And there was improvement taking place in terms of the load carrying capacity controlled and sensors and things like that, but there was not much of research or development taking place in terms of new products coming to the market. And therefore it was actually saturating and people realize that the technology being developed for robotics, because lot of people were doing research in terms of control, development of controllers, development of sensors, actuators etc. And the impact of those developments in the industrial robot was not very significant and that is where people started thinking, can we do this? Can we take this technology to other fields? Or can we take the technology to application areas where we have where we can bring the robot more closer to the human? And then we started this area of field and service robots that included underwater robots, personal robots, then construction robot it called as field robots, then security robots, medical robots, assistive robot, etc etc domestic robot, surveillance et cetera. So, the current focus of robotics research is to see how we can use this robotics technology for an application, the field or a service application, can we take this technology use this technology or integrate this technology and the system in order to get a new application? That is where the focus of current robotics research is more on the field and service robotics. If you can actually find an application and develop a system a robotic system to suit that application then you are actually contributing to the field of robotics. Otherwise, if you develop the technologies and then we are not able to use them effectively, then we are not able to take the take forward the robotics field and that is why lot of research is research and development is taking place in the field of field and service robots that includes robotic application in any field, so you can name any field and if you can find a robotic solution, then we are actually contributing to the growth of robotics. So, if you are interested in robotics, once you take this basic course on robotics which actually introduces you to the very basic fundamentals, your focus should be on to see how can you take it forward or how can you use this knowledge to develop robots for field and service applications. I hope you got the fundamental principle of the robotics, how the robots a robotics field is growing and what evolution is taking place in this field. Just to tell you a few things about robotics, so there are many myths about robotics or people because as I told you there are lot of hype created in the media about robotics so there are lot of myths about robotics also. So, this was actually from IEEE Spectrum 2014, so one of the major myth is that the robots are intended to eliminate jobs. So, the first look if you see then you will see that there were a lot of industries were actually robots replace the human operators, may be true if you look at only that aspect. 
But then they actually create a lot of jobs in many other fields and therefore the it was not it did not it really eliminate jobs, but actually it actually moved the jobs from the shop floor to some other place. So, that all that actually happened. So, there was therefore the robots are intended to eliminate job is a myth, in fact, it is actually to enhance the capability of a human and to enhance the productivity and to ensure that the quality of production and quality of output increases with the use of robotics technology. So, elimination of job is not at all true and it is a myth. And one of the important facts about robotics is that manufacturing and logistics must adopt robots to survive. So, if the industry has to survive, it has to adopt the robotics technology because automation plays a major role and robotics is one of the key elements in industrial automation, which actually allows the industries to enhance their productivity improve the quality of production and that will actually help them to survive in the market. And another fact is that the autonomous robots are still too slow, so just I told you, we have a long way to go to get a fully autonomous robots, so though there are lot of research going on both in terms of technology as well as in terms of products, we are still far away from having a fully autonomous robot which can function the way we want because most of the autonomous robots are too slow nowadays mainly because of the our limitations in sensing perception and control. And robots are too expensive also is a myth, because the cost of robot is actually coming down and looking at the capabilities of the robot the capability of robot increases and the cost is also coming down and therefore the overall reduction cost is very high and therefore the notion that the robots are too expensive is actually a myth. And this still remains as a fact the robots are difficult to use and this is where actually we need to do lot of work to make the robots easy to use. Still we do not I mean our robots are still not that friendly with the human, we still do not have full trust on the robots, so we keep the industrial robots away from the human, we try to have fencing and other things and the current trend of developing cobots some of you must have heard about this, cobot is basically a collaborative robots, so the robots are slowly getting replaced with cobots where the robots can actually work with human and the robots are made in such a way that it do not really hurt the human or cause injury to the human, so that is basically the cobots. Though this kind of development is taking place there are again lot of challenges and we are having difficulty in using robots for practical application. So, these are the facts and myths about robotics and robotic technology, so we need to understand that like any other technology robotics has also got lot of advantages and lot of limitations. And the focus is basically to see how can we overcome the shortcomings of robotics technology as well as the robots and then make it more and more user-friendly, more and more human friendly and that will lead to the acceptance of robotics were many more applications. So, every robotics engineer s focus would be to see how can we develop better applications and how can we make it come more closer to the human being and make it more useful to the human kind. Now, let us look at the enabling technologies for robotics. 
So, we mentioned that robotics is an interdisciplinary field and then you need to have an understanding of interdisciplinary subjects or multidisciplinary subjects you need to understand in order to learn robotics as well as to do research or development in robotics field. So, if you look at the I mean robotics is also part of automation, so if you look at the pentagon automation, so this is known as the pentagon of automation, where we see that these five elements actually contribute to the automation field. So, the most important part is the actuators. So, any automation we need to have a motive power to move things and that actually comes from the actuators. So, the actuators being an electrical actuator or mechanical or hydraulic whatever it is, so this actuators provide the necessary power for the system. And then we need to have sensors in order to get data and then based on this data we can get the actuators provide the necessary power. So, sensors make some become an important element in the automation pentagon. And the other one is a processor, a processor is the one which actually process data and then take decision based on the intended motions as well as the current situation. So, it actually collects the information from the sensors and process the information and then give the instruction to the actuators to do work. And in order to make this possible we need to have a network of communication, because we need to communicate between various elements in the system, so we need to have a network and of course we need to have a software in order to link them together and then have a unified way of functioning. So, these five elements are known as the pentagon of automation and robotics also basically depend on all these five elements, if you look at a robot you can see all these elements are present in the robot also. And the scientific discipline which actually provides you knowledge about all these elements is basically we call it as the mechatronics, where we have the mechanics the design and manufacturing, informatics, microcontrollers, electronics and electro-mechanics. So, the mechatronics stream really covers all these areas and the student who actually has got some understanding of mechatronics will be able to contribute well to the robotics field. That is not mean that you need to be an expert in mechatronics, but the point is that if you are if you want to be a good robotics engineer or a robotic researcher, you need to have an understanding of these elements, you cannot be in isolation, ok I know only software, so I do not want to be only a programmer for a robot that actually not going to help you as a designer or a researcher because you need to know what is happening inside the robot, what how is it functioning and to understand that you need to have some understanding of this technology areas. And the challenge is basically how do we actually make sure that the students can actually learn all these field. There is no straightforward answer to that question, but one thing is that we start with some basics of these field and then you build upon that and then try to ensure that you are able to understand what is happening in other fields. So, you cannot remain as a mechanical engineer or an electrical engineer or a computer science scientist and still want to be a robotics engineer, you need to come out of your own domain and then make sure that you understand these areas also to some extent so that you can contribute well to the field of robotics. 
Now, we will so that is basically to tell you that an understanding of various discipline or an interdisciplinary knowledge is essential for the robotic engineer. So, let us go to the robotics field again and then have a very quick look at the various classifications of robots, I showed you few videos in the beginning just to tell you that these are the various robotics application. But let us have a little bit more close look at this classification, so we can say that this the fine classification is industrial robots. And then one is the field and service robots. And then the we can say entertainment educational robots. So, this is a small area which actually does not have too much of interest for us, but this is these are the two areas where actually you can focus your attention. And under the field and service robots you have a large variety of applications like wheeled robots, walking robots, humanoids, climbing, crawling, aerial robots, medical robots, agriculture robots etc. Let us, just quickly go through this applications, the classification. So, the industrial robots as I mentioned they are the kind of robots used in the industry, especially in the manufacturing industry for operations like pick and place, welding, assembly, painting etc etc. And there are various categories within the robots industrial robot, which we will see later when we go for the details of kinematics and other aspects, so you can see these are typical industrial robots, so up to this and then you will be attaching an end affecter at the tip and then it becomes a fully functional robots. And there are different classification also within this, so this is known as a SCARA robot. And these are articulated robot arms and this is the Canada arm which is there in the space station, so space station international space station as put a manipulator arm for doing repair work and pick and place operations, so that is known Canada arm. Again it is a kind of industrial robot configuration. And these are some of the applications pick and place, assembly, welding, painting, machining, etc. So, I show you this video that the two robots working together in order to do some machine tending and then helping for I mean pick and place another operations. And this is actually a welding robot, this is an assembly robots, which are widely used in industries for various application, basically to automate the process in the industry. And what are the major features of industrial robots? Again this will be discussing it later also, so mostly these are six axis robots and connected like a serial chain like a human arm we have joints, then we have links and then we have joined then link then join like that, so that is basically the serial way of assembly of about assembly of the links and joints. And most of them are electrically actuated nowadays, but there are hydraulic robots also when you did need to have large load carrying capacity, then we go for hydraulic actuation. And having a robot alone is not sufficient we need to have something called a robotic work cell, because a robot has to do some work which has to have some kind of peripheral equipment in order to make sure that the robot is able to do its task, so basically manipulator is the main robot parts and then you have this end effectors, then conveyors sensors etc vision, force, so all those things actually become part of the robotic work cell. And most of the robots are programmed through special robotic programming languages. 
So, most of the manufacturers provide a programming language so you can actually use that language and program the robot to do different tasks. And one of the advantage is that once you set up the robot, then it runs without any major deviation. So, it may take some time for you to set up everything and then make sure that it do its job perfectly. But once it is set up then you can actually leave it to the robot and the robot will work continuously 24 hours 7 days a week without major variations or major deviation from its performance characteristics. Of course, the mechanical damages and wear and tear will cause some deviation, but you need to as long as you take care of those aspects, then the robot will be able to work continuously without measure deviation as well as measure interventions. So, the wear and tear and then other mechanical damages warrants the calibration of robots, so maybe once in a year or so you need to calibrate the robot and then make sure that you are program and the parameters what you obtain from the mechanical parameters what you use in the programming they are correct or there is no variation, so that can actually be done using the calibration of robots. And we can do online or offline programming again online programming you can actually program the robot as a robot and the robot is doing its work or we can do an offline programming using some offline tools and then port the program to the robot at a later stage and they start working. So, important aspect as engineers, what we are interested in is in the kinematics, dynamics and control. So, these are the three important aspect of robots as a designer will be interested. Of course as a user you do not really worry about this, but as a designer as an engineer, you need to know about the kinematics, dynamics and control of this robots. And that will be the focus of the course, of course, the dynamics and control will be not be that detail in this particular course, but of course they will be you will be having additional courses available which talks about dynamics and control, so here we will talk more about the kinematics of the industrial robots. Coming to the field and service robots, as I told you there are different kinds of field and service robots, so wheeled mobile robots are the one which actually very popular and it actually the research started very long ago and lot of development took place in this field and it's defined as a robot capable of locomotion on a surface, solely through the wheel assemblies mounted on it and in contact with a surface, so that is known as the wheeled mobile robots. So, you can have various configuration, you can have two wheeled robots, you can have three wheeled six-wheeled, you can have robots for different terrains, so lot of development has taken place in the field of mobile robots or wheeled robots, so this is actually a robot for space application. And an extension of that mobile robot wheeled robot is now the autonomous cars, it is actually an extension of wheeled mobile robot. Now, we are actually using this technologies what we developed for mobile robots and then porting it to a real auto mobile and then see whether we can actually get autonomous operation of these cars. 
So, important issues in the wheeled mobile robots are the design of the wheels and the wheel geometry and configuration and an important aspect is the stability of wheeled robots, so you cannot have always the stability assured and then we need to make sure that the robot is stable under various operating conditions and the maneuverability and controllability are the two aspects when the it comes to the design we need to ensure that it has got good maneuverability and good controllability also. And this actually shows the various ways in which the actually the we can have different types of wheel, the standard wheel and or a castor wheel based robotic design can be done. Quickly go through the other fields also underwater robots, robots which are being used for underwater applications, so these are some of the commercially available robots and this is a student robot which is used for autonomous robot, so there are two categories one we call just remotely operated robots and an autonomous underwater vehicle. So, remotely operated vehicle and autonomous underwater vehicle. A remotely operated vehicle is operated on a remote location with the cable is connected to the robot and to the operator and operator sits at a different remote location controls it, AUV is fully autonomous, you do not need to have any pilot or a controller sitting and controlling it. So, it is defined a mobile robotic device designed and developed to work in underwater environments to accomplish specific tasks, which are normally performed by human operators are known as underwater robots. So, as I mention the two categories are remotely operated vehicle. So, remotely operated vehicle will be having a tether a cable which will be connected to the operator and then this thrusters will be used to propel it and using the cameras and other information other sensors the information will be relate to the operator and operator can control the robots. So, these are some of the commercially available remotely operated vehicle. So, ROV technology has come to a stage where it the lot of commercial applications and there are lot of commercial robot available in the markets and people are using it for many applications like inspection of underwater structures, pipelines, cables, etc etc. So, autonomous vehicle is extension of remotely operated vehicle, so instead of having it control from a remote location you program everything in the robot itself and then make sure that the robot goes from one location to another location and carry out all the tasks and come back without any human intervention. So, that is basically the autonomous underwater vehicle. So, it needs to be programmed and it requires a fool-proof navigation control and guidance system on boards to meet the mission accuracy requirements. Let me skip this. So, this is a video of a autonomous underwater robots this was actually developed for Indian Navy and we were actually part of the development of this robots the control navigation and guidance algorithms were developed by us so this shows the sea trial of these robots carried outs in the Bay of Bengal. So, once it is released from the ship you can switch on the system and it starts moving and it goes to the depth indented depth and then carry out its tasks and then comes out. 
Now, you can see it being released from the ship, and once it is given the command to go — or, based on the program, it decides at what time it has to start — you can see the thrusters getting activated and it starts going down into the water. Of course, after some time you no longer see it; it comes up again after some time and you collect it when it surfaces, or you can actually program it to come back to whatever location you want. So, that is basically the autonomous underwater robot. Let me skip this. Another important area is robotics for healthcare. The healthcare area is getting a lot of technologies from the robotics field, and a lot of robots are being developed for healthcare applications. Many times people ask why we need a robot in the hospital, or they feel that a robot is going to replace the doctor and then we will be under the control of robots. It is not like that; the point is that we are actually using the robot as a tool to help the surgeon or the healthcare provider in day-to-day activities or specialised activities such as surgery or rehabilitation. In a typical surgical robot, what actually happens is this: in normal laparoscopic surgery the surgeon holds the tool and interacts with the tissue directly, whereas in robotic surgery the surgeon uses a master robot, the master robot is connected to a slave robot, and the slave robot holds the tool and does the procedure. So the surgeon is not directly connected to the tissue but is connected through something called a master-slave robot. This master-slave robotic system, or telerobotic system, is what is used in robotic surgery, and it helps the surgeon to improve the accuracy of the procedure as well as to reduce fatigue and other related effects. This shows a typical surgical robotic system: you have the surgeon console, where the surgeon is sitting at this location, and here is the master manipulator. Using the master manipulator he decides what kind of motion he has to make — whether he has to do a cutting or a suturing — and he makes that motion here on the master; the master is connected through a controller to the slave robot, the slave robot is holding the tool, and whenever the surgeon makes a motion on the master, the slave makes the same kind of motion at the patient and performs it there. That is how the telerobotic process is carried out. The surgeon is able to see the site through a camera and a visual feedback system, and it can also have force feedback, so depending on the forces acting at this point the surgeon will get the force feedback at his end. Both the visual feedback and the haptic feedback help the surgeon to carry out the surgery in a very effective way. This shows a typical system — this is the da Vinci robotic surgical system, one of the only surgical robots commercially available in the world currently. Another application is in the rehabilitation area, so I will quickly go through that application. There are two types of rehabilitation robots: one is known as upper extremity, for the upper part of the body, and the other is lower extremity robots, for the lower part of the body.
If you want to have a rehabilitation exercises or rehabilitation for a person who has bought a problem with his upper body or lower body, we can use robotic technology to do for rehabilitation or for getting some treatments. So, these are some of the rehabilitation robotic devices available in the market and currently under development. So, there are a lot of ways in which we can use a robotic technology to help the people who require rehabilitation. So, when your rehabilitation or any other rehabilitation requirement can be done with the help of robots. This is an exoskeleton which can be used for helping the people to walk when they don't have capability to walk because of some physical issues. And the other one there is the bionic arm, so you can see somebody who has lost his upper body parts we can use mechanical devices to can be attached or robotic device can be attached to the body and then we can use the robot signals from the body to activate these devices. Of course, I am just giving you a glimpse of what is existing and you of course you will be able to get more details either through courses additional courses or from internet you will be able to get information. This one this video I already showed you that how this device can be used to help the person to carry out normal activities with the help of a robotic device. Finally, the last one is the aerial robotics. Aerial robotics basically a robotic device which can actually it is an autonomous mostly autonomous or mostly autonomous which can actually carry out tasks without the human intervention. So, we can actually have different kind of aerial unmanned aerial vehicles, so you can have it like a winged one. So, with fixed wings or you can actually have it a vertical a kind of lighter than air and heavier than air system. So, in the lighter than air you have this airship, blimps and hot air balloons. And then the heavier then air you have this fixed wing, flapping wing and rotorcraft. So, these are the one which are not really we called as robots, but if you properly instrument it we can make it as a robotic device also. Because since they are lighter than air they can actually go up in the air and then we what we need is the control of its motion, but then heavier than air you have this fixed wing, flapping wing and rotorcraft. Fixed wing is like our normal aircraft when you convert that into an autonomous thing then it becomes a unmanned vehicle, flapping wing like birds and other insects you can have and rotorcraft is the vertical take-off and landing type of vehicles. And there are a lot of domestic robots are also coming up, so like Roomba was introduced as a vacuum cleaning robots and then you have this wheel chairs to assist people, so these are known as the domestic robots and people are working on these to make it more and more user-friendly. Currently they are not so friendly with the people and not so useful, so the question is, how do we make it more and more user friendly? Before I conclude I would like to give a small reading assignment to you. So, I want you to go through this paper the evolution of robotic research I showed you a picture from this paper in one of the slides, I want you to go through this paper and then prepare a short report on the evolution of robotic research and what you understood and probably your own estimate or your own understanding of what will be the future of robotics based on this evolution that is happening. 
So, just before conclusion, I will like to talk about the few things what is basically the hardware and software components of robots. So, we have this hardware part as the mechanical subsystem where we have the arm, gripper, body and wheels and then we have the electrical system which we call the motors and computers and sensor systems also. So, this part is the hardware part of any robotic system where it is an industrial robot or a field or service robots you will be having this as the hardware in the robot. So, without this hardware you cannot really have a robots. And then of course you need to have the software to make sure that the robot works well. So, you have this modelling software you have the planning and the perception then controlled simulation and an understanding of these two is very essential for any robotic engineer. So, what are the hardware s to be used? And then what are the software's to be used? Need to be understood very well and that actually leads to many topics in robotics. So, the topics as listed here kinematics is one of the first and foremost thing what you need to understand, it deals with the spatial locations and velocities of a robot end effectors and its internal joints. So, we have forward and inverse kinematics which is most important whether you are a computer scientist or a civil engineer or a chemical engineer and if you want to be in robotics field, you need to know that kinematics of robots. Then comes the statics which actually analyses the forces and the static condition, dynamics talk about how do you actually activate the robot? What kind of forces are needed or forces of thrust are needed in order to make the robot move? And in connection with that you have many things like trajectory control that how do we actually make sure that the robot moves in the desired fashion that includes the path planning also. And if it is to happen we need to have a lot of lot sensors and then the sensors need to collect the data and then create a perception of the environment, that is sensing and perception. And then we have the task planning to create a task for a robot and to execute the task and that includes the modelling of the world or the environment and then you have the programming and simulation also. So, these are the major topics that need to be covered in the robotics or robotics an engineer need to have a clear understanding of these topics to a great extent and therefore we start with the kinematics in this course. Of course, kinematics and to some extent we will consider the statics also. We will not be able to go deep into any other areas in this course, but the courses which are available later we will be able covering many of these topics also. So, in the next class onwards we will start the kinematics of manipulators, which basically talks about the spatial locations centre, position and velocity relationships of a robots with respect to a base frame. And that is though we will be discussing about kinematics of industrial robots, kinematics of mobile robots and the other robots also can be understood once you have the basic knowledge about the kinematics of industrial robots. So, from the next class onwards we will start the topic kinematics. So, let me stop here we will meet in the next class. Thank you very much. |
Introduction_to_Robotics | Lecture_61_Principles_of_PMSM_Control.txt | we started looking at permanent magnet synchronous motors and we said that it is distinguished from the brushless dc motor by having a sinusoidal emf that will be induced in the rotor induced in the stator when the rotor begins to rotate so we have seen therefore that this will mean that you need to apply the sinusoidal voltage to it in order that you can have a meaningful interaction on the electrical side right so if you need to apply a sinusoidal voltage which needs to come from an inverter then we have seen that the devices the inverted devices have to be switched in a manner that the duration of on time of the devices is go it needs to be varied in a s in a way that reflects the variation of the sinusoid amplitude right so this is then called as that is the inverter must be operated under sinusoidal pulse width modulation in the earlier case where you had dc motors and the brushless dc motor in these two cases also we had used pulse width as a means of conveying information regarding how much voltage needs to be applied right so there also we have a control on the width so this is just pulse width which is is being modulated so this is called therefore as pwm and this is called as spw how does one do this in a manner which is analogous to the way we saw here you have to take you can take a high frequency waveform like this triangular waveform and let us say this is your zero level what one does is then have a reference sinusoidal waveform i mean somewhere you will have to have a way of indicating what is the sinusoid you want to reproduce at the inverter output and therefore you need a reference for that sinusoid so you take that reference for the sinusoid and make a comparison with this waveform and so in a manner similar to what we have seen earlier what you do is you have a comparator and to this comparator you give the high frequency waveform here and to the other input you give the reference and this comparator will then generate an output which is going to switch between 0 and 1 are low and high and that low and high is going to depend on these amplitudes these intervals right so in the way in which we have drawn this will mean that you will get a high pulse here and then this high pulse will be slightly longer duration then this will be slightly longer duration and so on and it will increase and then decrease right so this can then be given to the device on the upper half of the inverter leg and then you generate an inverse which is given to the lower divide okay so if you do this then you will be able to generate a system that looks like this and therefore you get a sinusoidal moving average and the electric motor responds only to the sinusoidal moving average because the other term which is there which is dc the motor does not respond to dc because all the phases have the same level of decent so there will be no current flow due to the dc component but you will have current flow due to the sinusoidal components and the way it is done is then the inverter has three legs then you have one face of the motor second phase of the motor and third phase of the motor so if you call this leg as r and y and v then the reference sinusoids used for r y and b legs are phase shifted by 120 degrees therefore you are able to generate at the output of the inverter effectively three sinusoidal supply voltages that are phase shifted by 120 degrees the motor induced emfs are also phase shifted by 120 degree and therefore there 
is a meaningful interaction between the output of the inverter and the electric motor you need not because if you do it like this with a zero and this sinusoid the very fact that you are going to be switching an inverter leg that looks like this introduces a dc offset because the output from this can only switch between 0 to vdc you cannot have a negative you know negative output from the leg right so this way also can be done but if you look at the actual implementation that happens inside inside a digital system then in the digital system how this is done is you have a register which goes which is incremented from 0 to some high value let us say ff reflect right so which means that if you are having this is with respect to time this is the register number then the register number goes on shifting from zero and because of this you can't have a high frequency waveform that is going to go negative right so in a digital implementation that is exactly what you will do that you will level shift the ac waveform by half this amplitude and then do a comparison of the numerical value of the amplitude level shifted with this waveform that is also level shifted right so you still get the same sort of output in any case so this would be the manner in which you would do it in a digital implementation now if you look at the induced emf in the inside the machine that is going to be alternating right it will go high will go high and it will reach back 0 then it will go negative go to negative maximum and then come back so that voltage waveform is going to be alternating and therefore you have to give an input voltage to the motor that is also alternating it has to go negative okay now it has to go negative but you went there in s place class it has to go negative but we are offsetting it and if we offset it this component does not cause any impact on the machine because all three phases have it right therefore the motor will respond only to this component which does go negative right it has been level shifted by dc that's all and that level shifting is harmless to the motor because you're level shifting all the faces right therefore the motor doesn't respond the motor responds to whatever is alternating and therefore it works right so this is fine but we have another difficulty that the induced emf that is happening inside the motor is going to be dependent on the rotor angle just like we drew this waveform for the earlier electric motor this is for the brushless dc and we said that the induced emf is going to depend on the angle of the router right similarly in this case also the induced emf depends on the angle right therefore if you are going to generate a sinusoidal waveform from the inverter that sinusoidal waveform that is generated also has to be dependent on the rotor angle right otherwise you cannot have a synchronization between the two right therefore if you need to generate a sinusoidal waveform that depends on the rho triangle you must know the value of the rotary unlike the earlier case where the induced emf if you had determined this instant if you had determined if you have identified this instant then from this point onwards the induced emf is basically flat leaving aside the phase that is going to vary the other two emfs are going to be flat therefore there isn't anything that is going to change unlike this case in the case of the in the case of the sinusoidal emf machine the emf is going to change instant to instant as the rotor is going to rotate the emf value also goes on becoming 
different and therefore you cannot operate this machine without knowledge of the rotor position instant to instant in the earlier case it was sufficient to know at one instant then you only need to know where the rotor is when the emf begins to change again right until that time you know that it is fixed therefore there is no problem but whereas here the emf is going to change instant to instant and therefore you need to know the rotor position instant to instant so that you can relate ok if it is at this angle i would expect that much of emf given the speed right so therefore you need in this case continuous rotor angle information therefore it is usually necessary that you put a rotor position sensor in this machine so if you really want very high precision operation then it is necessary that you have a rotor position sensor without that this motor cannot be used so having said that however the philosophy of motor control operation still remains the same in the case of the earlier machine we did draw the block diagram of control circuit we said that whatever happens here is still the same as that of the dc machine the only difference now is that you have a different motor so the inverter has to be controlled by the hall switch information we were able to derive a digital implementation of that in this case also what we have is the same sort of arrangement that you have maybe a speed reference which may come from an angle reference and error and all that but i am just leaving that out and then you are going to compare the actual speed and then we need a controller that decides what should be done based on the error and based on the error in speed what the controller will say is how much electromagnetic torque needs to be generated so that is then the reference for electromagnetic torque this electromagnetic torque has to be then compared with what is the actual torque that is being generated in the motor this is the actual torque signal and this is then acted upon by a controller which gives an output this output is going to tell the inverter how much voltage to apply this is therefore the same as in the case of the dc machine until this point right but however here there has to be a difference because the motor is no longer dc and you need to generate a sinusoidal waveform and therefore there is one block that sits here which which implements the methodology to make pmsm look like dc machine okay so there is some interface that is going to sit inside your digital system that accepts the signals from the synchronous motor as a output signal and then it implements certain equations that make the whole thing look like a dc machine as far as the closed loop control is concerned the motor is not dc mode and then you get this block then gives as output three reference sinusoids which is then given to a pwm modulator which gives all the six signals required for switching the inverter so this goes to the inverter and this inverter connects to the motor that we have the motor shaft has a device to sense rotor position so this is what is done so this controller is then the torque controller this is speed controller now nevertheless this motor is an ac motor and we are going to be generating an ac voltage here in the case of dc machine we saw that the armature input current that flows will generate its own magnetic field but it does not impact the main magnetic field in the system because they are oriented at different angles but we cannot ensure that in the case of an ac machine there is a 
rotating magnet how does one ensure that the magnetic field if you leave it as it is the magnetic field generated by the flow of current in the stator does not oppose the magnetic field that is generated by the rotor itself right so that has to be explicitly implemented in this case and therefore there is another input that is given here which is a field controller and this field controller takes what is the equivalent of field current and then gives a reference field current this reference field current is usually zero in most cases because you don't want the stator to generate any magnetic field the rotor is doing that job right you do not want to either increase the magnetic field of the rotor or decrease the magnetic field of the rotor under normal operation so you say that i do not want the stator to generate any magnetic field that is going to oppose the main magnetic field generated in the rotor so you keep that zero whereas you focus on the other parts alone right so this is the way normally the structure is given and in order to do this job of implementing a methodology to make the synchronous motor look like a dc motor this requires the rotor angle information so that is used there and once you have the rotor angle information from that itself one can compute the speed after all speed is nothing but the derivative of the angle angular velocity is nothing but d by dt of the angle itself so if you know the angle you can get the speed right so this is the way in which this kind of a motor control system would work would look like of course as i mentioned you can get speed through another controller if you want to finally give reference position and then you take the reference feedback of the position itself so plus minus this would be the structure if you want to finally control the rotor position if that is the variable of interest to use so that's why i said the rotor will produce magnetic field we do not want the magnetic field produced by the stator to oppose this magnetic field no not a stray c if you are going see if you look at the way uh stator is arranged right stator is a cylinder right so stator is a cylinder and it has large number of slots in which lot of conductors are arranged so you're going to have so you have a slot here you have a slot here slot slot like this it is there all around and then you are going to have some conductors that are placed all in these slots and interconnected in some manner right let's not get into how that is done but if there are a large number of conductors which are interconnected in some manner and you are going to send a certain flow of amperes through this these conductors it will generate a magnetic field you can't avoid it right basic physics cannot be avoided you are going to have certain conductors that they are going to have some current and therefore they will produce a magnetic field okay at the same time you have the rotor which is also producing its own magnetic field now the question is will the magnetic field produced by these by the flow of amperes in these conductors how does it look like when seen from the rotor i mean is it opposing that magnetic field or is it at an angle 90 degrees to that magnetic field or is it at some arbitrary angle so that some part of it opposes the main field some part is not right so this is something one has to do i mean you have the option to arrange the magnetic field whichever way you want it because you are controlling the flow of state current right so now let us say that you have a 
magnet this north and south it generates a magnetic field okay now it can be shown that if you have a magnetic field oriented in one direction in this case depending on where you are the magnetic field has a certain orientation and if you are going to have another magnetic field this is let me call b1 and b2 if you have another magnetic field oriented in some other direction and there is certain angle between them then there is a force that is generated which you obviously must have seen you take you have this magnet you put another one here then it gets attracted right so when you have a magnetic field and you have another magnetic field there is a force of attraction or repulsion as the case may be and that force of attraction or repulsion depends on the angle between them if the fields are completely aligned then there is no force which will make them move if there is a force it should move right if they are completely aligned there is nothing to move so no force is generated so as you separate them then there is a force tending to attract them to each other so one can with the analysis one can then see this is proportional to v1 into b2 into sine of the angle between them right so now the issue is if you have the rotor that is generating a particular magnetic field and then you have the stator that is generating a particular magnetic field the question is how should you orient the stator field with respect to what is generated by the rotor right if you orient them in the same direction then there is no force produced on the rotor to rotate and therefore electromagnetic torque is zero right whereas if you can orient it at an angle 90 degrees to b1 then you get maximum torque because b1 into b2 into sine of 90 degrees is 1 therefore all the flow of current that you send into the stator is useful to produce torque whereas if you have a certain angle between them then all the current that is sending that is being sent in is not useful to generate electromagnetic torque because it is diminished by the factor which is sine of the angle you understand so we need to do something to say what should be the nature of the magnetic field that is generated by the stator now no oh okay in this case therefore if you remember i said that e r into i r plus e y into i y plus e c into i c is what gives you a fixed output power correct you can if you take these two equations e r i r e y i y you just do e r into i r plus e into a you will find that the result is a fixed number right now e r exists because the motor is going to be rotating there is an induced emf whether you want it or not in the earlier case we did not energize one phase because the emf in that was varying while the other two emfs were not varying correct and if it is not varying it looks like dc and therefore if you give a dc excitation dc current flows so e r into i r plus e y into i y is already a fixed number whereas in this case if you don't energize one face and you consider e r into i r plus e y into i y e e e b don't energize then you will not get it as constant right so in this case all three phases must be energized all the time and with the sinusoidal voltage applied so that the sinusoidal current flows so you cannot say that i will ignore one phase and energize the other two okay i forgot to mention that good that you brought it out so here this requires that all three phases be energized all the time right so now if all three phases are to be energized all the time then comes the question of how much current at what phase angle 
it should flow now how much current decides the strength of the magnetic field at what phase angle it should flow decides the orientation of the magnetic field and therefore you have an option how you want to do it right and that is what is going to determine whether you are going to have the field generated by the stator opposing any part of the field generated by the stator opposes the rotor field or does not oppose the rotor feed right so you generally say that you don't want to oppose the rotor field because why it simply does not make sense at least at the first loop right there is already a magnetic field why do you want to diminish that magnetic field and send extra current to generate that required amount of electromagnetic torque right so you leave that field as it is and then do it this control ideally if done in analog domain should be instantaneous it should be done all the time because the rotor position is going to be changing right but that is not feasible in an actual system so usually the control loop is executed at loop is executed at switching frequency which may be probably 15 kilohertz all the way down to maybe about 5 kilohertz depending on what switching frequency you choose right we have already seen what are the considerations for switching frequency you would all the time like to have as high a switching frequency as you can if you want to go to 100 kilohertz then it's very good right but there are certain other problem if you want to switch at very high frequency then the losses in the inverter start increasing right therefore the efficiency of the system starts coming down so you cannot really afford to go too high how high you can go depends on the hardware that you have so um we we discussed about devices right so we said that there are different varieties of devices one is called as a mosfet another is igbt so these are generally made of silicon and these devices therefore you can switch at certain rates for example fed if you are going to use a mosfet then it can accommodate even up to 100 kilohertz without generating too much of losses whereas if you use igbts then 100 kilohertz is a little too high because these devices have more loss associated one can go to switching frequencies of 50 to 20 kilohertz 15 to 20 kilohertz right but the difficulty is that igbts are the ones that are available for higher voltage levels so if you want to operate your electric motor from a dc bus dc input voltage of let's say 600 volt then you have to go for igbt because at 600 volt if you select a suitable fet it will have more loss than the igb right so normally nobody selects fits for high voltage applications whereas if you are going to look at low voltage application let us say you are going to be operating a small robotic contraption which is going to operate from a dc voltage of lets say 50 volts then you will not choose an igbt you will go for pets right at the same time there are also newer devices that are appearing for example devices made of silicon carbide that is s i c is called these devices have much lower losses than the other two and therefore if you are going to be using sic you can operate higher voltages at 100 kilohertz right so it's a question of what sort of device you are going to use what voltage levels are involved sic devices are of course as of today more expensive than the other two so you will have to take a call what is it that one wants to use yes so the ultimate explanation of why magnetic fields are at all being generated is because there is 
some internal flow of current electrons moving around atoms is what generates magnetic states that is the final physics which is understood as of now i think tomorrow somebody might come and upset the whole thing saying it's a new phenomenon so yes so why fields are getting attracted itself i think i don't think there is an intuition behind it right it is an observed phenomenon that if you have magnetic fields it attempts to attract and it if you by analysis inside the machine one can show that the electromagnetic force is then given by this expression but i will not be able to give you an intuitive answer as to why magnetic fields attract or repel right i don't i'm not aware of any such explanation also that's been given it's a observed phenomenon you just have to take it i think axiomatically that magnetic fields have the ability to attract so this is then what happens so the loop is executed at this kind of speeds now there are there is the so we have seen so far the dc machine and then we saw that we saw a brushless dc machine and then we have seen a pmsn at least since that we have seen three motors then the question that will arise is if you want to choose a motor for a given application which one do you choose out of the three right so let's look at the advantages and disadvantages of the motors as we have seen until now this motor the advantage of this motor is that it is simple to operate right it's very easy all you need to do is connect the dc voltage to it and then you are done if you are not really too much concerned about speed and accuracy position and all that all you need to do is just take a dc voltage connect to it it will work you don't have to do anything at all other than this if you want to have some minimal amount of speed control in the lab you just want to run something you want to adjust the speed etc it's very simple again just connect a resistor to it just connect a resistance in the armature circuit and your job is done you can vary that resistance a little bit here there and then you can get whatever speed you want so if you want to get i mean for example you have an idea okay and you just want to evaluate the idea to see whether it works or not the best way is to take a dc motor connect the supply to it adjust the voltage if necessary if you have the control or put a resistor in series operate your system whatever you want to operate you can verify whether it is working or not working so from that point of view this is really a very simple machine to use and do in the lab even if you want to do more sophisticated control operation let us say you want very accurate positioning etcetera etcetera then it is still closed loop implementations is easy it is not a very difficult motor to do control on so these are all advantages of the dc motor what are the disadvantages the major disadvantage is that it has a brush and commutator this arrangement means that size is larger for the motor than the other two varieties plus it requires to be maintained because there is a moving part and there is a fixed part there is bound to be friction you cannot avoid it and if there is going to be friction there is going to be material where you cannot avoid it and if there is material where the material will erode after some time there will be no material left so you have to replace it okay that is something that is unavoidable and therefore it has to be done then the other difficulty with the brush commutator arrangement is that there is likely to be arcing if you open up an 
electric if you open up a small machine which is going to take half an ampere or less and then see whether there is arcing it may not happen right if you are going to take a larger machine where several amperes are going to flow and its going to be broken i mean you are going to have a commentator action then there may be arcing the arcing phenomenon also depends upon how much load is being operated on at what speed the motor is running so it's not a simple operation and it depends right so arcing is going to be there there are no other major disadvantages with the dc machine right but this disadvantage is enough to say that i don't want to use this because the very fact that arcing is going to be there on the one hand restricts the domain of usage of the dc machine you cannot use it in environments where spark can be hazardous right and you cannot use it therefore in environments where you find it difficult to access and how will you go and replace all these things that are going to be eroded that you need access for for doing repair or doing maintenance so if it is going to be located in some place where you cannot even reach then there is no use of having this motor and secondly we have seen that it is slightly larger because that entire arrangement has to be accommodated right so you cannot say that i have only this much space somehow can i fit my dc machine into it you may not be able to make a dc machine to fit that on the other hand if you go to brushless dc machines the advantage that the brushless dc machine has is that it is brushless so all the drawbacks that are associated with the brush commutator arrangement in the earlier case are simply not there now right but the disadvantage is that being an ac machine it requires an inverter so if you have a electric motor like this in the lab and you say that well i want to do some experiment let me just take this machine and see if i can connect it and use it unfortunately you can't right it's not like a dc machine where you simply take the dc voltage and connect to it it will run it requires an inverter it requires all position switches so some basic semblance of control circuitry must be there right so rigging up that control circuitry getting it to work and all that will require some effort not that it is impossible but it requires some effort so this very fact that it is an ac machine makes it very difficult to use under very simple situation you need the required setup for operating this machine yeah i am talking about motor being available control not being available if you are going to go to a vendor and buy a motor it is quite likely that the vendor will only give brushless dc motor today you may not possibly get a dc motor at all right right there may be some cases where dc motor is sold so if you are going to get a motor plus controller everything available then there is no problem you just connect the dc supply as it is and it works but you must be aware that this has more sophistication as compared to the dc motor right there is in order to make it operate you need some basic sensors you need an inverter that is going to switch and some control logic must be there right there are many occasions where people have come to me and asked i have obtained this from something and it's not working it is making noise can you tell me how it is you know how we can set it right i have no clue because i do not know what is there inside right so if you are going to go with a electric motor and a drive that you purchase as a whole as a 
block you have to live with what you get right you have already given the money you've got it right so if if it is not working well you try to see why it cannot be working do this do that and all that but you can't repair it whereas if you are going to take the motor you are going to build all these things then you know what what's happening inside if you who have designed it you can do something about it right so yes if you are going to buy the motor plus controller which will be the case if you buy it along with an application right then yes you don't have to bother about it you just give supply and get it working that's all the disadvantage also is that you have a ripple thought whether the ripple torque is a serious disadvantage or not depends on the application i am not saying that triple torque is always a issue it may not be an issue under some cases so that needs to be evaluated so these are generally used for not used for high precision stroke performance applications and this motor also requires to be specially designed if you really want a nice trapezoidal induced emf right one has to take care to design this appropriate whereas if you go for this the advantage is that you get smooth torque smooth within quotes because you are going to be giving a switching waveform there will be some high frequency ripple you can't avoid it okay so within that constraint yes you have a fairly smooth plot that can be generated and therefore it is very suitable for high performance applications the disadvantage and these two machines are small they are the smallest among all the motor types that are known and available today for use the disadvantage is that the control structure is complex it is fairly complex because of this block which is used to make the motor look like a dc mode but these two machine varieties have another big big disadvantage is that we use we use magnets that's a disadvantage because magnets are expensive so these motors are definitely more expensive than dc motors so if you are going to look at a low cost motor you will probably not go with bldc or synchronous motor you will use a dc motor if you want only low cost is the main object so these are expensive further because magnets are used these are sensitive to temperature if your application requires operation in a high ambient temperature environment then you have to think clearly whether this motor is the right one right that needs to be appropriately designed if you want to use a high temperature you want to use it in a high temperature environment right if you lose the ability to generate a magnetic field then the motor is as good as dead right so we'll stop here and look at it in the next |
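To make the sinusoidal PWM generation described earlier in this lecture concrete, here is a minimal sketch of how three level-shifted sinusoidal references, locked to the measured rotor angle and spaced 120 electrical degrees apart, can be compared against a triangular carrier to decide which upper devices of the three inverter legs are on (the lower devices get the complements). The modulation index, carrier frequency and the exact level-shifting convention are assumptions made for illustration, not values given in the lecture.

```python
import numpy as np

def spwm_gate_signals(theta_rotor, t, m=0.8, f_carrier=10e3):
    """Upper-device gate commands (True = ON) for the three inverter legs.

    theta_rotor : electrical rotor angle in radians (from the position sensor)
    t           : current time in seconds (only used to build the carrier)
    m           : modulation index, 0..1 (assumed to come from the controller)
    f_carrier   : triangular carrier frequency in Hz (assumed value)
    The lower device of each leg is driven with the complement of these signals.
    """
    # Three sinusoidal references locked to the rotor angle, 120 degrees apart,
    # level-shifted so they stay between 0 and 1 as in a digital implementation.
    phases = np.array([0.0, -2.0 * np.pi / 3.0, -4.0 * np.pi / 3.0])
    refs = 0.5 + 0.5 * m * np.sin(theta_rotor + phases)

    # Triangular carrier between 0 and 1 at the switching frequency.
    frac = (t * f_carrier) % 1.0
    carrier = 2.0 * frac if frac < 0.5 else 2.0 * (1.0 - frac)

    # Comparator: the upper device is ON while its reference exceeds the carrier.
    return refs > carrier

# Example: rotor at 30 electrical degrees, some instant within a carrier period
print(spwm_gate_signals(np.deg2rad(30), t=1.23e-4))
```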
Introduction_to_Robotics | Lecture_42_Power_Electronic_Switching_and_Current_Ripple.txt | In the last class, were trying to look at how one can generate dc, which is of a different value as compared to what was there in these course. And we saw that you can generate a dc which has a different value that is some 100 volt. If you want a dc of 25 volt, one can generate that by using a switch and operating it in this manner. So, if you keep it on for one fourth of the times and the remaining 75 percent you keep it off then on an average you get the required dc value applied across the resistor. So now, the question is, if you are going to do this then it means that if you plot through the resistance, if you want to look at how much flow of current is there with respect to time, then that is also going to look like this. And this waveform is not accepted. While the waveform of voltage is all right, we do not really desire that that needs to be smooth for this application. For certain other applications, perhaps you may want the really smooth dc waveform that is okay. But in this particular case, this is not so important, but this is important and not acceptable. Now, the question therefore is, how do you get a smoother waveform for i? Now, if you take the case of a dc motor, what we really have is, let us say that this is the 100 volt that is available, what you have in the dc motor is a resistance of the armature, call it Ra and then there is an inductance of the armature, call it La and then there is an induced emf. If the motor is rotating at a fixed set speed then the induced emf is simply equal to some gain times the speed from k into omega. And therefore, for a fixed set speed operation, you can represent that by another dc source. This is the induced emf, I will call it Eb. So, this is induced emf, this is armature inductance, this is armature resistance. Now, normally for reasonably sized machines, this armature resistance is not likely to be very large, I suppose if you take a small motor which you operate from some very low voltage dc, those will have a very high armature resistance, but any dc motor that is of higher output power, you will find that the armature resistance is really small. We would like it to be small because any resistive element is a dissipative element. You do not want to waste loss in a motor, so these are designed so that armature resistance are small. So, for the purpose of analysis, what one can do initially at least is to neglect the armature resistance, say that, that is not there. Therefore, as far as the motor is concerned, seeing from the outside of the motor, it looks like an inductance in series with its source. Now, I am saying armature resistance is neglected for the purpose of looking at the dynamics of the motor, not for the purpose of V is equal to Ra into i plus k into omega, that is the steady state expression, where you cannot neglect armature resistance and say that V is equal to k into omega. So, one needs to do, one needs to understand where to make what approximations. Now, if this is the situation, we need to understand what will happen now if you apply this kind of switch control. So, let us say you are having a switch and this is close, what will now happen if you close the switch? 
So, if you close the switch, basic electrical circuit theory, which I am sure everybody would have done at some stage or the other, says that if this is Vg and this is Eb and you close this switch, a very simple circuit is the result, which is your Vg, there is an inductance and then there is Eb. This is all there is in the circuit. And you want to find out how much current will flow. The equation is very simple, it simply says La into di by dt equals Vg minus Eb, that is all. L di by dt is the voltage across the inductor, which you very well know. This is an elementary electric circuit. And therefore, we can easily conclude that di by dt equals Vg minus Eb divided by La, which means that this is a fixed number; as long as the motor is operating at a constant speed, it is a fixed number. That means the inductor current will increase with a fixed slope. That is what it means. So, you are going to have the inductor current starting from some value. It may not start at 0, of course; if you are going to switch on the motor initially at 0 speed, the motor was not at all operating, and you turn the switch on, there was no current flowing, the current will increase from 0. But let us say you have done several such switching operations; after that, if you see, there was some initial current flowing and this current will now go on increasing. That is what this equation says. Now, we are going to turn the switch off after some time, and the question is what happens if you switch it off? So, if you turn the switch off, then it means that you are interrupting the circuit. And if you interrupt the circuit, what will happen to this flow of current that was happening? Professor: What will happen to this current i, if you switch off? It will go to 0. That is very simple. The moment you open the switch, there is no more circuit. There is no route for this current to flow. You are forcing the current to go to 0. But this is not at all a good idea for this circuit. Can anybody guess why? Student: Half L i squared will be dissipated, will be gone. Professor: Half L i squared will be dissipated, will be gone. It will result in a spark. L di by dt: di by dt is minus infinity here, you are just switching it off. And L di by dt will generate a huge voltage spike, and that voltage spike will come across the open switch. It will result in a spark if it is a mechanical switch. But we cannot use a mechanical switch here, we will use an electronic switch. And if a huge voltage comes across the electronic switch, the switch will burn. So, that is not at all a good idea to interrupt the inductor current like that; we need to allow the inductor current to continue flowing, but nevertheless make the voltage across this zero. So, one simple solution to that is to put a diode here. How does the diode help? If there was an inductor current flowing and you now switch this off, this inductor is trying to generate a voltage in this manner. And if you have a diode here, that attempt to generate the voltage in the negative direction will automatically switch on the diode, because the diode now becomes forward-biased. Therefore, this diode will turn on, and a good semiconductor diode is as good as a short circuit. So now, you go to an equation which says, La into di by dt equals, Vg is no longer there, so instead of Vg it is now zero because you have shorted this, zero minus Eb, which is simply minus Eb.
Now, therefore, di by dt has now become minus Eb by La, which now means, instead of this current falling to 0, it will now have a negative slope, which means the current will fall. And after some time, you are going to turn the switch on again, which means the current will increase again, and then you are going to switch it off again, so the current will fall, and so on; like this one can go on and off. So, this current waveform is obviously much better. We saw that right at the beginning, where was that? We said whether we can allow this kind of current waveform and we said that yes, you can allow. What are the things you need to look at? Whether the magnitude of the ripple is small enough and the frequency of the ripple is high enough. Now, in this case, the magnitude of the ripple is certainly under your control, because when you close the switch, the circuit looks like this and this current is going to increase. And therefore, you know how much increase is happening. If it goes beyond a certain value, you can always switch it off and allow it to fall down. So, the magnitude of this ripple is certainly under your control, you can control it to whatever you want. Similarly, the rate at which you are going to switch, this duration, that is also under your control because that is decided by how many times you are switching the switch on and off in a given interval of time. Therefore, if you do this operation, both magnitude and frequency of ripple are in your control. So, in this case, for example, if the motor is going to operate in a steady manner and the circuit is also going to operate in a steady manner, then it means that this waveform has to be in such a way that if it starts from some initial value i naught, and during the on time of the switch it rises to some value, let us say i f, then during the off time of the switch it has to come back to the same value i naught and repeat in this manner. This is what would then be a stable steady operation. And therefore, if this ripple is delta i, how do you compute delta i? Delta i is then equal to Vg minus Eb divided by La multiplied by the on time, where this is then the on time and this is delta i; it should also be equal to Eb by La into t off in magnitude, because during the off time the slope is minus Eb by La, and during the on time the slope is Vg minus Eb by La. So, it means, for a given Eb, which implies for a given speed of operation, if you want smaller ripple, smaller magnitude of ripple, how can you achieve smaller magnitude of ripple? If you look at this expression, either you have to decrease t on or increase La. That is the simplest trick. So, you can choose to do either of them. Either you can choose to decrease t on or you can increase La. How can you increase La? La is the armature inductance, you cannot go inside and modify the machine, so what you can do is put an inductance outside the machine. That is one way. The other way is to decrease the value of t on. So normally, what you do is, you do not put an inductance outside because that is extra cost, and it may be big depending on how much inductance you want. So, the best way to do it is to decrease the value of t on. But you cannot just decrease the value of t on, why? Because you want a certain armature voltage. And the value of the armature voltage is Vg multiplied by t on divided by T.
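As a quick numerical check of the ripple expressions above, the snippet below evaluates delta i and the average armature voltage for one assumed operating point; the supply voltage, back-emf, inductance and switching frequency are illustrative values only (the 100 V and 25 V numbers echo the example used in this lecture).

```python
# Ripple and average voltage for the switched armature circuit (Ra neglected):
#   delta_i = (Vg - Eb) / La * t_on   (rise while the switch is ON)
#           =  Eb / La * t_off        (fall while the freewheeling diode conducts)
#   V_average = Vg * D, with duty ratio D = t_on / T.

Vg = 100.0     # supply voltage (V), as in the lecture's example
Eb = 25.0      # back-emf at this speed (V), assumed
La = 2.0e-3    # armature inductance (H), assumed
f_sw = 10.0e3  # switching frequency (Hz), assumed

T = 1.0 / f_sw
D = Eb / Vg                      # duty ratio giving zero average inductor voltage
t_on, t_off = D * T, (1.0 - D) * T

rise = (Vg - Eb) / La * t_on     # ripple accumulated during the ON interval
fall = Eb / La * t_off           # ripple removed during the OFF interval

print(f"V_average = {Vg * D:.1f} V, D = {D:.2f}")
print(f"ripple: rise = {rise:.3f} A, fall = {fall:.3f} A (they match in steady state)")
print(f"doubling f_sw halves the ripple: {(Vg - Eb) / La * (D / (2 * f_sw)):.3f} A")
```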
If you simply decrease the value of t on and keep the overall duration same, then it means the average voltage will come down, that is what I am talking about is, if you look at the circuit here, how do you get the average voltage? V average is equal to 100 multiplied by if this is the on time t on and if this overall duration is capital T, this is multiplied by t on divided by T. You do not want this to change. You want 25 volt average applied to the motor. And therefore, if you want smaller ripple and you want to decrease the value of t on in order to keep the average voltage same, you have to decrease the value of capital T as well. So, this ratio of t on divided by T, this ratio is called as Duty ratio. So, you need to keep the duty ratio fixed in order to get a certain voltage. And therefore, if you want to decrease t on decrease t on, which therefore means decrease capital T, which implies therefore, switching frequency to be made higher. So, important thing is that the major conclusions that we have are, it is necessary to use switching circuits. Why it is necessary to use these kinds of circuits with switching things? Students: to maintain average voltage. Professor: You want to maintain average voltage but you could have got an average voltage by putting a series resistor as well. Student: keep the efficiency high. Professor: To keep the efficiency high, to keep the losses low. So, it is necessary to use these kind of switching circuits. Switches cannot be mechanical. You cannot use a mechanical switch what you see here. Then you will have to put somebody there to switch it on and off at some frequency. Nobody will do their job for you. You cannot use any automatic mechanical switch also. Mechanical switches cannot really be switched at great speeds. So, you will have to go necessarily for electronic switches, electronic switch. Preferable to have higher switching frequencies. How high is a high switching frequency? What is high for you may not be high for me, what is high for me may not be high for somebody else. So, you will have to look at what is the switching frequency, which this electronic switch can handle. Any electronic switch will have to go from on state to off state and off state to on state again, that is the switching action. And every switching action results in a certain loss, every time the switch is turned on, there is a certain loss, when the switch is off there is a certain loss. One has to make an assessment of what is the loss that is happening as you increase the switching frequency and what is acceptable to you. So, one has to make that assessment to decide what kind of switching frequencies are admissible. In the systems that are implemented today, what you will find is frequency is in the range of in the range of 10000 hertz to about up to 20000 hertz are widely used. So, if you do all this then you can get a low ripple current, which might satisfy your need. But nevertheless, you have to estimate, What is the ripple current? How to estimate the ripple current? This is the way to estimate the ripple current, where you have seen an example of how that can be done. Now, in reality, there is an armature resistance. So, what is the difference that armature resistance will bring about? The armature resistance, if you have it then during the on time instead of the equation that we have here, we will have a slightly modified equation. So, this is the equation that we saw, if there is no an armature resistance, you have one more term which is plus R into i. 
This will mean that it is a simple first order equation, we do not need specific knowledge of electrical engineering to solve it. So, let us not get into a solution therefore, but now there is, you know what a first order equation solution looks like. This is going to go like that. Earlier, we had a simple linear increase of inductor current. Now, it is not going to linearly increase, it is going to slowly increase. So, if you arrive at the mathematical expression for this, the slope initial slope is equal to this Vg minus Eb divided by L that is what you will find is the slope at start. So, it is going to increase much more slowly as compared to what we thought was the case without the resistance. Now, as the value of the resistance becomes smaller then this is going to be determined by the time constant of the circuit, and that is La by R. If the time constant is larger, then it is going to take a longer time to reach the final value. So, this is what is going to happen and therefore, this rate of rise may be actually a little slower than what we want. But in any case, if this number is going to be large, we are not going to wait until that time to switch it off, you want high switching frequencies. So, it is very-very likely that you will be switching off somewhere here itself. So, you are not going to wait till it reaches the steady value of some level. So, in view of the fact that you are going to switch off very fast, again the approximation that we have made by neglecting the resistance is a good enough approximation in order to determine this. So, this, the semiconductor diode that goes here is called as a Freewheeling diode, because it allows the inductor current to freely circulate when the main switch is off. So, one important another inference is that switching circuits need freewheeling diode. So, what are the electronic switches that are used for this? MOSFET is one. What is the expansion of MOSFET? Metal Oxide Semiconductor Field Effect Transistor. MOSFETs are of various variety. Do you know what are the varieties of MOSFETs? Student: N-Channel and P-Channel Professor: N-Channel and P-Channel MOSFETs. Then there are sub-classification within that, what is that? Student: enhancement mode MOSFET or depletion mode MOSFET Professor: Each of them is, can be either enhancement mode MOSFET or depletion mode MOSFET. So, what is normally used is the N-Channel enhancement mode MOSFET in these applications. It is just a name just to be aware of such name. But as far as the circuit operation is concerned, it acts like a switch or it is made to act like a switch. So, this is used usually in low voltage applications. Low voltage applications meaning, if you are going to have a circuit, which is going to operate from a dc voltage of let us say about 100 volts, MOSFET is a good choice or less. But if you are going to go to higher voltages, normally when you look at heavy duty electric motors, which are going to be operated directly from mains, this is not a device that you will use because the voltage levels are much higher. There, you would use a device called as the IGBT. Do you know what that expansion is? Insulated Gate Bipolar Transistor. So, mostly, these are the 2 devices that are used. These two devices are what are called as Fully Controllable Switches, Fully Controllable Devices, that means you can decide when you want to turn on this device and when you want to turn off that device, completely both operations are in your hand. 
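As a side check tying together the first-order dynamics with armature resistance and the freewheeling-diode action described above, here is a small forward-Euler simulation of the switched armature circuit; all component values, the ideal-switch and ideal-diode models, and the simulated duration are assumptions for illustration, not parameters from the lecture.

```python
import numpy as np

# Assumed values, for illustration only
Vg, Eb = 100.0, 25.0      # supply voltage and back-emf (V)
Ra, La = 1.0, 2.0e-3      # armature resistance (ohm) and inductance (H)
f_sw, D = 10.0e3, 0.30    # switching frequency (Hz) and duty ratio
dt = 1.0e-7               # simulation time step (s)

T = 1.0 / f_sw
i = 0.0
i_hist = []

for k in range(int(20e-3 / dt)):          # simulate 20 ms (many time constants)
    t = k * dt
    switch_on = (t % T) < D * T           # ideal main switch
    if switch_on:
        v = Vg                            # source connected to the armature
    elif i > 0.0:
        v = 0.0                           # freewheeling diode conducts
    else:
        v = Eb                            # diode blocked, current stays at zero
    di_dt = (v - Eb - Ra * i) / La
    i = max(i + di_dt * dt, 0.0)          # diode cannot carry negative current
    i_hist.append(i)

last = i_hist[-int(T / dt):]              # one switching period at steady state
print(f"mean current ~ {np.mean(last):.2f} A, ripple ~ {max(last) - min(last):.2f} A")
```

Because the electrical time constant La / Ra here is much longer than the switching period, the ripple over one period still looks essentially linear, which is why the simpler resistance-free estimate above is usually good enough.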
The other most frequent semiconductor device that is used is the diode. And the diode is not a fully controllable device; in fact, it is an uncontrollable device. Why is it uncontrollable? You cannot decide when you want it to turn on and you cannot decide when you want it to turn off. The circuit conditions decide when it will turn on and when it will turn off. So, every circuit that you are going to have in order to operate the electric motor will comprise either the MOSFET or the IGBT and the diode, just like what we have seen here. So, there is a diode and there is another switch, and that switch will then be replaced by an appropriate electronic switch. The enhancement mode N-channel FET is represented by a symbol like this. And this terminal is called the Gate, this terminal is called the Drain, and this is called the Source. The IGBT is represented by a symbol like that. So, this is called Collector, Emitter and Gate. So, when it is going to be used in the circuit, the device will be used as follows. Let us say if it is a FET, then you have the drain. So, this is the drain terminal, this is the source terminal and then you have the gate, and then you have the diode and whatever else comes on the other side. So, the FET is operated by applying a suitable voltage across gate and source. The IGBT is operated by applying a suitable voltage across gate and emitter. So, if you apply a voltage that is high enough, it will turn on; the FET will behave like an on switch between the terminals drain and source. So, current will flow that way. The IGBT will behave like an on switch between the two terminals C and E. That is how one can make this circuit operate. So, having understood that you can have a switching circuit in order to generate whatever average voltage you want, you can always adjust the duty ratio in order to get whatever armature voltage you want to apply to the motor. So, we drew this figure some classes ago. Yeah. So, you had the motor, which we know is the dc motor, and then this block is what we have now been speaking about: a high power electronic switch based electric circuit that is accepting a source of electrical input power. And this signal, whatever goes here, this signal is a signal that says how the switches are to operate. So, depending on the instructions given by that signal, this block then delivers a suitable output power to the motor. Now, having understood that, if you are going to connect a motor here with a suitably high frequency of switching, you can adjust the speed of the motor to whatever level you want. But now the question comes, there are a few more things we need to look at, that is, what will you do if you want to reverse the motor? If you have a robotic arm that is going to pick up an object and place it somewhere, that arm has to go back, otherwise it cannot take the next job. If it has to go back then all the operations will have to reverse, which means all motors have to perform exactly the opposite of whatever they did. And which means that they now have to rotate in the opposite direction. So, how do you make this motor rotate in the opposite direction? Student: Apply reverse voltage. Professor: Apply reverse voltage. So, to reverse the motor, what you need to do for the dc motor is apply reverse voltage. And how do you apply reverse voltage here?
Professor: If you have the output voltage here, V knot, which is the voltage applied across the motor or Va, V armature and this is Vg, we have derived an expression that Va equals Vg multiplied by duty ratio. And what can be the maximum value of duty ratio? Student: 1. Professor: 100 percent, 1. So, D is a variable that goes up to 1. And what is the minimum value? 0. So, D varies only between 0 to 1. If you want negative voltage here, you have to apply negative duty ratio which is just not possible. Duty ratio cannot be negative, it only lies between 0 and 1. So, what else can you do to apply negative voltage? Student: Put another circuit. Professor: Yes? Student: Put another circuit. Professor: Put another circuit. This circuit is incapable of giving negative voltage to you. Please note that you cannot say, I will reverse Vg . That would not work. Why it would not work? Why can you not just slip Vg and apply to this? Student: The diode will short circuit the whole thing. Professor: The diode will short circuit the whole thing. The moment you turn the switch on, both the switch and the diode will blast in your face because you are short circuiting the voltage source using those switches. So, you cannot say I will reverse the supply voltage and then attempt to get negative voltage across the motor. So, that is not going to be this easy. So, in reality, therefore, you apply a modified version of the circuit. That circuit looks like this. So, you have the dc voltage, and what we did was we had a switch and then we had a freewheeling diode. And from this point, we connected the motor. I am just redrawing whatever we did earlier. And this motor was connected at this point, it was the connection. It is just now as such, it is a same circuit redrawn in a different manner. Now, if you want to apply negative voltage, what one can do is take this and put a switch here, and then convert this path to a diode. So, if you want to operate it in one direction, where you want to control this switch, then you want current to flow like this. When the switch is on, see here, when the switch is on, you had the current flowing like this through the motor, it comes here. So, in a similar fashion you want the current to flow, so the current flows like this through the motor, and it has to flow back like this. But unfortunately, the diode blocks you cannot allow this current. Therefore, what you do is you put another switch across this and turn that also on. Therefore, if you operate for example, switch number 1 and switch number 2 together, then it will allow current flow from 1 through the motor through switch number 2. And it will close the path. So, you have this current flowing here, coming here flowing through this switch and then coming back that closes the path. This is for the mode of operation as we had seen earlier. Now, if you want to reverse the direction, what should you do? You need to apply negative voltage to the motor. Negative voltage means, whatever out of the 2 terminals of the motor, whichever was being applied to Vg on the higher side, that has to go to the opposite side now. So, what one can do is instead of operating switch number 1 and 2, you operate 3 and 4. So, which means that current will have to flow like this through the switch number 3, come back here and you want to flow here, but this diode is now blocking. And therefore, what you do is you add another switch across this. So, by this manner, one can attempt to get the motor rotating in the opposite direction also. 
But it so happens that, that alone is not sufficient, you will need diodes across this as well. Let us not get into reasons, why you need a diode, you need a diode. So, ultimately therefore, you have a nice-looking arrangement consisting of 4 switches and 4 diodes. This is called as a H-bridge circuits. So, using this kind of arrangement then you can apply voltage that is positive or you can apply voltage that is negative to the motor. When we say you can apply voltages in both directions, what we mean is average voltage across the motor. Not the instantaneous voltage. Instantaneous will also be negative. But what is more important is that we need to control the average voltage across the motor. So, it is important to therefore know how you will determine the average voltage and why is it that it will cause reversal of the motor, how is it that it is going to cause reversal of the motor. I think, we will see that in the next class. We will stop here. |
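Here is a minimal sketch of the switch-pair selection and duty-ratio logic for the H-bridge just described, assuming the simple scheme in which pair (1, 2) is modulated for positive average voltage and pair (3, 4) for negative; the switch numbering follows the lecture's description, and other modulation schemes (for example complementary switching of each leg) would give a different duty-to-voltage relation.

```python
def h_bridge_command(v_avg_desired, Vg):
    """Duty ratio and switch pair for the simple two-pair H-bridge scheme.

    Positive desired average voltage : modulate switches (1, 2)
    Negative desired average voltage : modulate switches (3, 4)
    Assumes |v_avg_desired| <= Vg; the magnitude of the average armature
    voltage is then duty * Vg, and its sign is set by the pair chosen.
    """
    duty = min(abs(v_avg_desired) / Vg, 1.0)
    pair = (1, 2) if v_avg_desired >= 0.0 else (3, 4)
    return duty, pair

# Examples: +25 V and -40 V average from an assumed 100 V dc bus
print(h_bridge_command(+25.0, 100.0))   # (0.25, (1, 2)) -> rotation in one direction
print(h_bridge_command(-40.0, 100.0))   # (0.40, (3, 4)) -> rotation reversed
```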
Introduction_to_Robotics | Lecture_83_Particle_Filter.txt | Welcome to Lecture 3 in Week 10. And so, in this lecture, we are going to look at what are called non-parametric filters. So far, we have been looking at Gaussian filters for state estimation. And the Gaussian filters essentially assume that your belief distribution is of a specific functional form. In this case, it was, we assumed that it was a multivariate Gaussian and it is described by a fixed set of parameters in this case, which was like mu and Sigma. So even though we looked at a couple of variations of this, the fundamental underlying assumption is that there is a specific functional form, that describes the belief distribution, and that is given by a fixed set of parameters. So these are called parametric. So now, what we are going to look at are what are called non-parametric filters and there are many, many non-parametric filters. We will look at a specific one. So the idea behind parametric filters is that, idea behind non-parametric filters is that they, they do not assume a fixed functional form. They can look at arbitrary distribution as I can see here. So this is an arbitrary distribution, does not look like a simple Gaussian. And so, the idea behind non-parametric filters is that the complexity of the function that we want to represent, the density, the belief, belief distribution that we want to represent, the complexity could vary to fit the distribution. So it is not like it is a fixed complexity function like the Gaussian distribution. It is not of a fixed functional form but the function could vary and typically depending on the kind of samples that we draw from the distribution, we kind of increase or decrease the number of parameters that we use for describing the function itself. So even though we call it non-parametric, does not mean there are no parameters, it just means that there is not a fixed set of parameters which we use for describing the function. Advantage of this kind of non-parametric filters is that they allow us to look at complex and arbitrary in some cases, arbitrary distributions and we are not restricted to Gaussian distributions. So if you remember, the problem with Gaussian distributions was that there had to be one hypothesis which we thought was most likely and we had small amount of noise around that hypothesis. So we could not look at a hypothesis that supported us having multiple modes. So it had to be a unimodal belief state and so, the non-parametric filters allow us to get around that. So what are going to look at in this lecture is specific, non-parametric filter called the Particle Filter. So in the particle filter, instead of representing the belief distribution as a function, I am going to assume that the distribution itself is represented by a set of samples that are drawn from the distribution. So the curve f describes the distribution that I am trying to represent, and the way I represent the distribution is by just storing a set of samples that were drawn from the distribution. Notice that this distribution is defined on a, on one dimension. So just this x-axis is the dimension, the y-axis is actually the density for the, for the values of the x. So for each value of x, this, the y-axis represents the particular density for the distribution. 
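To illustrate the point that a set of samples can stand in for the density itself, the sketch below draws M particles from an assumed bimodal mixture (not the distribution in the lecture's figure) and prints a text histogram: dense regions of the state space collect many particles, sparse regions few.

```python
import numpy as np

rng = np.random.default_rng(0)
M = 5000                                   # number of particles

def sample_from_f(n):
    """Draw n samples from an assumed bimodal density (mixture of two Gaussians)."""
    use_first = rng.random(n) < 0.3        # 30% of the mass in the first mode
    return np.where(use_first,
                    rng.normal(-2.0, 0.5, n),   # narrow mode around x = -2
                    rng.normal(1.5, 1.0, n))    # broader mode around x = +1.5

particles = sample_from_f(M)

# Dense regions of the state space collect many particles, sparse regions few.
density, edges = np.histogram(particles, bins=20, density=True)
for d, lo, hi in zip(density, edges[:-1], edges[1:]):
    print(f"[{lo:6.2f}, {hi:6.2f})  " + "#" * int(60 * d))
```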
So, essentially, what I am doing here is, I am sampling according to this distribution represented by f and therefore, you can see that wherever the density is high, I get a lot of samples, wherever the density is high I get a lot of samples, and wherever the density is low, I get fewer samples. And you can see intermediate densities, I have intermediate number of samples. So basically, instead of trying to represent this whole thing as a functional form, what I do is, I just represent a set of points, x in this case. I just store a set of points x which in an indirect way represent the distribution. That makes sense? So in the case of the particle filter, I store a set of samples drawn according to the distribution as opposed to storing like say, something like mu and Sigma of the distribution. So instead of storing like a set of parameters for the distribution, I am going to store a set of samples that were drawn from that distribution and that will be the representation I have for my belief distribution. So I am going to make sure that we are specializing this discussion to beliefs but the notion of particle filters can be used in a variety of other settings as well. Okay so moving on. So in the particle filter, like I said, you take a set of samples and these samples are called particles. So these samples are called particles. These samples are called particles and we denote them by x1 to xM right, this is basically a set of M particles, capital M particles. And each particle is, in some sense, we can think of as a hypothesis, as to what the true world state may be at time t. It is just a guess if you will, of what we think is a true world state at time t, and the way we generate these particles is by sampling from this distribution. So a particle at time t is sampled from the distribution which is probability of xt, given z1 to t, and u1 to t. This is basically all the observations I have seen from the beginning of time until time t and all the actions I have performed beginning of time until time t. So basically what is this? This expression is a belief state. So where do I believe that I am at time t? So this is essentially what our belief state is and so, the set of particles are sampled from our belief state at time t. Now, as M tends to infinity, M meaning the number of samples that I draw, the number of particles that I maintain, then this set of particles becomes a very good approximation of the underlying density. So the denser a sub-region of the state space as populated by the samples; the denser the population of samples, the more likely is it that the true state falls into that region. So it becomes more of a proper representation of the underlying belief distribution. So the particle filter essentially represents the belief as a set of samples drawn from the belief distribution. We do not actually explicitly represent the belief distribution as a functional form here. Now, as with the earlier versions of the Bayes filter algorithm, so here is a version of the particle filter algorithm. And again, we update the belief of xt recursively from belief in xt minus 1. And as before, the particle filter algorithm takes as input the current action ut, the current measurement or the current observation zt, and our representation of bel xt minus 1. If you remember, initially in the Bayes filter algorithm, bel xt minus 1 was just supplied as it is and in the Gaussian filters, bel xt minus 1 was supplied by, yeah, mu t minus 1 and Sigma t minus 1. 
In this case, bel t minus 1 is supplied by the set of particles xt minus 1. If you remember, that is what we said. So the, the distribution any point of time is represented by the set of particles xt. So, so we start off with xt minus 1 as the set of particles that represent belief xt minus 1. So notice that, so what do I mean by the particles representing the belief? That means that these particles xt minus 1, there will be M such particles, so these m particles were obtained by somehow sampling from the belief distribution at time t minus 1. Now, I start off by initializing both my xt bar and xt which are two sets of particles, and this you can guess, xt bar is going to represent bel bar of xt, so the bel bar at time t and xt is going to represent bel of time; bel at time t, bel xt. So xt bar which is script xt bar, it is going to represent bel bar of xt, and script xt is going to represent bel of xt. So that is what we are going to do. So I will just run through the steps and then later in the next few slides, you will see how this implements the actual belief update. So what I do is initially, so the lines 3 to 7, so lines 3 to 7 are running through each, each of my particles once. So my particle here is a sample. So each of my particles is going to run through once. So what it does, the first, so what it does in line 4, so what is happening in line 4 is that, what is happening in line 4 is that I take my previous particle and take one sample, which is the mth sample, I am taking the previous particle from my belief xt minus 1. And then apply the action ut which I know. I apply the action ut and I look at the distribution of the resulting state. I am looking at the distribution of xt and I am just going to sample an xt m from this distribution. Basically, I am going to perturb this particle by the action. I am going to move it forward assuming that this is the actual state at time t minus 1. I am going to make a prediction as to what would be the state at time t and I am going to sample from that. So for every particle that I was using to represent xt minus 1, I am going to sample one particle now to get (myself), get me xt, get the new xt. Now, this is a little tricky part here. So what I am doing now at step 5, so what I am doing at step 5 is essentially trying to account for the observation probabilities or the measurement probabilities. We know that zt was the actual measurement that we made and I am assuming that xt m is the real state. So I am looking at, okay, hey, if xt m was really the correct state, xt m was really the correct state what is the probability that I would have made this observation. I am going to assign that as what is called as a weight or the importance weight for the sample xt m. So I have wt m which is the importance weight for the sample xt m. And then I have, I am just adding to the bel bar representation, bel bar xt representation, the sample xt m, so the actual, actual particle as well as the weight. So I am going to add the particle and the weight to the bel xt, xt m, or rather xt bar. It is not quite, it is not quite bel bar, it is actually something; so the xt m incorporates the movement. xt m incorporates the movement and the wt m actually incorporates the observation. So the xt bar is not quite a representation of bel bar because it already incorporates the effect of the observation through the weights. So in fact, I could just stop here. 
I could just stop here and say that, hey, look now, I started off with set of particles which were xt minus 1, I had 1 to m particles. Now, I have a new set of particles, xt m, and a set of weights that tells me how likely is that the true state to be that particle. Makes sense? This probability tells me how likely is it that xt m, if xt m was the true state would have given rise to the observation zt and now, I have this weight that tells me, okay, how likely is it that this is the real state given that you have observed xt. So it is, in a way I can still construct, reconstruct my belief, I can approximately reconstruct my belief using these weights and these particles. Then what is going to happen is that since these particles are now going to contain, suppose I come to a point where a particle has a very low weight. Because it says, hey, look, it is very unlikely that this particle is something that would have given rise to this observation. Or it is just a small probability event of you know, big noise. So I am trying to move forward move 1 meter but suddenly I move forward 2 meters. I mean, it could very well happen that somebody pushed the robot or something so that it moved forward 2 meters but it could be a very, very low probability event, you know. But then, let us say, that is a sample I draw. Right now I have gone forward 2 meters but the robot has really not gone forward 2 meters, it has gone forward only 1 meter. And then I make a measurement I am going to say, huh, this measurement is quite unlikely. But then what is going to happen? I am going to have a location that says the robot is 1 meter away from where it really is with a very low weight. But then when I go back and try to do a resampling of this when I make the next action, I will still keep pushing this particle forward and I am going to give lower and lower and lower weight to it till it becomes vanishingly 0. So what will happen is because there is so much noise in the system, many of the particles' weight will go to 0. And so, I will be carrying around a lot of particles whose weight is 0 and it is not a great filter then. It is not a great representation of my belief. So if I have, a lot of my particles have weight 0, then I am basically using only one or two samples to represent my distribution and I could be very wrong in that. We will see that in a minute when we go to the actual illustration of this algorithm. So how do I overcome this? So I know, I told you that already my xt bar has enough information that incorporates the observations but the set of particles are not really great. So what I do is something called resampling. So this, this phase, lines 8 to 11 of the algorithm, I do what is called resampling of the particles. I have the same, I have M particles now, each having some weight. Now, what I will do is I will resample M particles, capital M particles again. So new set of particles that I am drawing but the probability that I will draw one of the earlier samples, so remember that I am going to draw samples here with replacement from xt bar. What I am doing in line 9 is drawing samples with replacement from xt bar. So people, all of you are familiar with sampling with replacement. So we are going to do sampling with replacement from xt bar. So what we do there is we will pick a number i with a probability that is proportional to wt i. So what is wt i? It is the weight of the ith particle in my set xt bar. So I have computed M weights. So this wt i is the weight of the ith particle in my xt bar. 
So I am going to sample an i proportional, probability proportional to wt i. Basically, that means that I am going to take each wt i divide it by sum of all the ws, and that gives me the probability that I will sample i. Now, once I have drawn a sample, I am going to add the corresponding particle. The one that is here, xt i, I am going to add it to xt; script xt. So I am going to that M times because M is the size of the particles that I am maintaining. So I will draw M sized particles like this. Notice that these particles I am drawing are exactly the same set of particles here. Just that some particles might be drawn more than once and a few particles could just be left out in this new set M. Why could particles be left out in the new set M? Because they have low probability and particles could be repeated in the new set M because the probability of the you know, the probability mass in that particular state in the belief state is very high. And then I do this for M times so that I get a new set of samples M and then I return xt. So remember, so the last thing that return is bel xt so this is basically bel xt. So this is not quite neatly broken down into the first step where I compute bel bar and the second phase where I compute bel. So the first phase here is where I actually, you know, do my prediction as well as accounting for my observation. My prediction and the correction computation is already done in the first loop. So that is my prediction computation and that is my correction computation. I am already done that in the first loop. And what I am doing here in the second loop is essentially resampling the particles so that I actually get a more, more appropriate representation for bel. So as we said earlier, so the set of particles that I get from the last step. So is essentially my representation for bel of xt and so, the one I get in step 4 is kind of, so step 4 is that. So the set of particles is kind of my representation for bel bar. So that is not yet, that is not yet incorporated in my observation value. So it is a representation for bel bar. And what I finally get from line 9 is my, is my final representation for the belief of xt. So let us look at, let us look at; I will skip this line, I will come back to this or let us go over the slide, it is fine. So like we said, like lines 8 to 11 implement what is known the resampling or the importance resampling process. And as we said, the algorithm draws M particles with replacement from the set xt bar. Remember that xt bar is really not bel bar t. So actually, the original set of particles that we sampled in line 4 is our bel bar representation, not now. Now, what we do is essentially doing something like this. So our bel of xt is now sampled, is now kind of equal to the observation probability, the probability of zt given xt m times bel bar xt. So each particle that I am sampling here is going to be, sorry, so we have to edit the last part when we start talking about resampling. So I am going to redo the whole thing for this slide, okay. So like we, like we looked at, when we were discussing the algorithm, lines 8 to 11 implement what is called resampling, and sometimes it is called importance resampling. And again, I am going to draw M particles with replacement from the set xt bar. Remember xt bar is not really the set of particles representing bel bar. So bel bar is essentially, whatever we computed in line 4 gives you bel bar and xt bar is the particles with the weights. 
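Putting lines 3 to 11 together, here is a minimal sketch of one particle filter update: propagate every particle through the motion model, weight it by the measurement likelihood, then resample M particles with replacement in proportion to the weights. The 1-D state, the additive Gaussian motion noise and the `likelihood` interface are assumptions for illustration, not the lecture's exact models.

```python
# Minimal sketch of one particle filter update (prediction, weighting,
# importance resampling), for a 1-D state with assumed Gaussian motion noise.
import numpy as np

rng = np.random.default_rng(1)


def particle_filter_step(particles, u, z, likelihood, motion_noise=0.3):
    """One update: particles ~ bel(x_{t-1}), action u, measurement z."""
    M = len(particles)

    # Lines 3-5: sample x_t^m from p(x_t | u_t, x_{t-1}^m), then weight by
    # w_t^m = p(z_t | x_t^m).
    predicted = particles + u + rng.normal(0.0, motion_noise, size=M)
    weights = likelihood(z, predicted) + 1e-300   # guard against all-zero weights
    weights = weights / weights.sum()

    # Lines 8-11: resample M particles with replacement, proportional to w.
    idx = rng.choice(M, size=M, replace=True, p=weights)
    return predicted[idx]                          # particles ~ bel(x_t)
```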
And like we mentioned, this is already encoding your bel xt in some way but the particles are not the right set of particles. Therefore we get the better set of particles when sampling with replacement. And so, before I sample with replacement, they were distributed. The particles were distributed according to bel bar xt that is what we said. So the xts were representation of bel bar already after line 4. But after the resampling, they are distributed according to bel of xt, more or less. So that is basically the idea here. And so, let us look at how this is going to work. Here is the simple belief representation. You can see that already this is something that a simple Gaussian filter cannot do. So we are basically having a bimodal distribution. So the state x here is just a single line. It could be anywhere on this x-axis. The robot could be anywhere on the x-axis. And I am going to assume that there are these two peaks. I am not sure whether the robot is here or whether the robot is here. Of course, I am going to have some baseline probability that could be anywhere on the line because of the noise. But most of my probability mass is now concentrated around these two modes. And the way I would do my representation; now, here would be a lot of particles will represent, will be present here, and another set of particles will be present here, and there will be very few particles elsewhere. So maybe there are 1 or 2 in other places but most of the particles will be present in these two modes. So this will be my representation of this kind of a belief state using a particle filter. Now, let us assume that I do the action, move right. I do the action, move right. So what should happen? So what am I going to do is I will pick one particle, so this is what we do. I pick one particle from the set of particles that represent my belief. This is my probability of xt given ut comma xt minus 1 m. So that is basically the line that we are trying to look at here. Just let me go back so that you can see what we are talking about. So what I am doing here is step 4. So I am going to look at the probability of xt given that I am applying ut to xt minus 1 m. Some particle, some mth particle at time t minus 1; I am going to pick that, I am going to apply action ut and I am going to sample xt. So that is basically what we have done here. So I have taken, I have taken a particle here so you can call this, you can think of this as xt minus 1 m, and I have applied ut which is just this action, go right. I have applied ut here and then I have sampled from the resulting distribution, assuming that it ends up here. So this will be my, this red dot will be my new xt. So this is xt minus 1 m, this is xt m. So it is basically the step. So every, for every particle here, so I will pick that, I will apply the transition, whatever is the action and I will sample from the resulting distribution and I will get a state. Great, and now I am moving on. So what is the next thing I have to do? Once I have done that, basically, I end up with this as my belief distribution. You can see that. So from these two being my modes, now I have become, these two have become my modes. And I have basically moved. So basically, what does it mean? The set of particles that were here have moved here. The set of particles that were there have moved here, and the set of particles that were here have moved here. That make sense? And you can see that it has flattened out a little bit. It is not as peaked as it was here. 
It has flattened out a little bit. Why is that? Because there is some noise in the movement. So when you try to move one particle from here forward, it does not always move the same distance forward. There will be some kind of a region of uncertainty around the end. So it might end up here, it might end up a little bit earlier or it may end up a little bit later. So the motion need not be deterministic and there will be some kind of a smearing out of the particles. And that is essentially what you see here and why this is, why this looks a little bit more flat. Next what I have to do is account for the, I have to account for the observations. So now, this is what my observations tell me. What does my observation tell me here? So okay, so whatever observation I am seeing now, so maybe I am seeing a patch of red on the wall, or I am seeing a door on the wall and things like that. So that door or patch of red could only occur in these two places. I think there is a door here on the wall and there is a door here. There are two doors here, there is door here and there is another door here. And so, now, even though I have moved to the right, I have moved to the right and I actually still see a door. Let us say, my observation says that, hey, there is a door in front of you. Now, we know from the way the observation works that the door can only be seen in one of these two places. We know that the door can be seen only in one of these two places. So what does that mean? The robot has to be either here, right in front of this door or it has to be in front of this door. But we started off with the belief that had robot in front of one of the two doors and now, I moved right, correct and again I see a door. So I know from my observation probabilities that it has to be one of these two places but I know from my motion model it has to be one of these two places likely, therefore, I am most probably here right and most probably here. So how does that work in this particular case? So now let us pick a few particles now. I am going to tell you how this whole importance resampling part works. Now, we already saw how we projected one particle and got this, so we got this particle here and how we projected, now we could have projected this forward and let us say, we got another particle here, I am just picking 3 particles as samples. And then, this particle could have come from either the motion models telling me that a particle has not moved at all, there is some probability of xt being the same as xt minus 1. Maybe that is a chance of a failure of the action, or it could be that there were some stray particles here or there to accommodate for background noise, and one of those particles kind of happened to be here. Just whatever is the reason, I am going to pick these 3 particles as samples to tell you how the importance resampling is going to work. So when I start off, when I do the prediction update, when I am actually moving these particles forward based on the action, all of these start out with an equal weight. So all of these would have notionally equal weight, I have not assigned weights yet, But now, I am going to assign weights. So how are my weights going to be? They are going to be, looking at the probability, I am going to add the probability that the observation happens here. So literally, so the weights will now, there will be a lot of weight for this guy because I have my, so this is my posterior. 
I have updated this, so this is my prediction bel bar and this is my, it is going to be the overall weight assigned to this particle. So these now, sizes have become proportional to the weight and these you can see that, have actually gone down from their default value. These weights have become very, very small. Now, what I am going to do is I am going to resample these particles according to these weights. When I resample the particles according to these weights, then what will happen is I will, more and more particles will end up in this region as opposed to the other regions and I can see that even though this was my bel bar update, after I do the reweighting, after I assign the weights and I do the resampling this becomes my bel. So this is, this was my original bel. This is bel of xt minus 1, this is bel of xt, and this is bel bar of xt. Make sense? So this is exactly how the particle filter will operate. So I will have a set of particles, to begin with. I will have a set of particles, to begin with, I will pick each particle from here, I will apply my action, and I will sample from the resulting location. So I will have a distribution over where this particle could end up, so I am going to sample from that. So I am going to basically take this set of particles and I am going to move them and so, I am going to get something like this. So there will be a slightly you know, spread out distribution that looks like that. And then, I am going to accommodate my observation probabilities. I am going to accommodate observation probabilities and observation here, as you know, was that I saw a door, therefore, I am going to accommodate that. And wherever the particles correspond to you know, the motion being reliable and the observation matching what you see, those are going to get high weights. Now, I am going to resample these particles. These particles should have now moved out toward here. I am going to resample them according to the weights. So probability proportional to the weights and I will end up with something that looks like this. There will be lot, lot of particles in this space. So many of these particles will end up coming here and fewer and fewer particles would go here. Basically, I will be resampling the same particle multiple times because as you can see, that if my density here is going to be much higher than what I had as densities here, so more fraction of my M particles, my capital M particles, a larger fraction of those should end up in, should end up in this region. A larger fraction of those particles should end up in this region. So how is that going to work out? So a lot of particles that are here will receive a small weight and when I resample, many of them will not even get sampled. Remember I am sampling with replacement so if these particles have a small weight, they will not get sampled. But the particles that are here, and because they have a large weight now, all the particles here, they have a large weight, will get sampled again and again. Remember, I am sampling with replacement. Even If I sample high weight particle once, it goes back into the pool so I might actually sample it again. So many of the particles that correspond to this part of the state will keep getting sampled again and again, so I will be repeating those. So I am basically going to make my, the particles, set of particles get concentrated around this region, maybe there will be one or two occasional particles that are spread out but most of my particles are going to be here. 
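To make the corridor illustration concrete, the sketch below reuses the `particle_filter_step` function from the earlier snippet on a 1-D corridor with two doors: the belief starts bimodal at the two doors, the robot moves right, a door is seen again, and the surviving particles concentrate on the door that is consistent with both the motion and the observation. The door positions, the "door seen" likelihood and all numbers are invented for illustration.

```python
# Continuing the sketch above on the two-door corridor example.
import numpy as np

rng = np.random.default_rng(2)
doors = np.array([2.0, 6.0])          # assumed door locations on the corridor


def door_likelihood(z, particles, sigma=0.4):
    """p('door seen' | x): high near either door, small floor elsewhere.
    The observation z is implicitly 'a door is in front of me'."""
    d = np.min(np.abs(particles[:, None] - doors[None, :]), axis=1)
    return np.exp(-0.5 * (d / sigma) ** 2) + 1e-3


# Bimodal initial belief: the robot is near one of the two doors.
particles = np.concatenate([rng.normal(2.0, 0.3, 500),
                            rng.normal(6.0, 0.3, 500)])

# Action: move right by 4 m; observation: a door is seen again.
particles = particle_filter_step(particles, u=4.0, z="door",
                                 likelihood=door_likelihood)

# Particles that started near x=2 land near the door at x=6 and get large
# weights; particles that started near x=6 land near x=10 and fade away.
print("posterior mean ~", particles.mean().round(2))
```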
So the next time I am going to do a forward or a backward action or whatever motion here, that activity is going to basically be applied to this part of the state space and less likely to the other parts of the state space. So mostly, the particles will get concentrated in one region of the state space. So I hope that makes sense. So what I am going to do now is just go back and look at the algorithm once more, so that it makes, it is now people can see it more clearly. So what we do here, we start off with a set of particles that represent belief at time t minus 1. I have my action at time t and I have my observation at time t. We saw all of these. The action was to go right, observation was see a door. And now, for every particle that I have in x, script xt minus 1, I am going to take each particle one at a time, that is, I am taking a particle, I am applying action ut, I am going to apply action ut and then pick a new particle xt m. So that is a particle corresponding to t minus 1. And this particle essentially, you can think of it like, hey, I am moving my state from xt minus 1 to xt because I did action ut. So that is basically it. So at this point, at step 4, we saw that the second belief that we had, that the bel bar is formed by doing this. The next thing I do is assign weights depending on how likely it is that I am going to see an observation given that this is the particle that I have drawn. And it turns out that, so the places where we can see a door is the one that we are likely to pick and so, the weights for those are very high. And then I am going to assign the, I am going to add the weights to my particle and create xt bar and I am going to resample from that. And when I resample from that, I am more likely to sample in particles from regions which correspond both to the motion prediction as well as to the observation probabilities and that is essentially what you saw here. So this is what happens when I do the resampling. And so, that basically what the particle filter does. So particle filters are great but there are some challenges with using particle filters. So the first challenge is we use a finite number of particles. So you know that we said that when M tends to infinity, when capital M tends to infinity, the particle filter approximates the true distribution. But for a finite number of particles, there are all kinds of problems that come in and we typically say that the representation is biased because of the small number of particles that you are using. And if there is more nuanced, more complex distribution, you might not have enough particles to represent the whole region. But then what we typically find in practice for most applications that we are interested in, in the robotics domain is as long as M is reasonably large, and say even 100 or maybe in the 100s if you will, this bias effect is negligible except in very, very rare cases. And therefore, for all practical purposes, you can still use the particle filter but you have to use a large number of particles. You cannot just say that I am going to use 3 particles, 4 particles; you will have to use a large number of particles to make sure that you do not get into the biases associated with the finite number of particles. So the second problem is there is a high variance that comes in when you are using a particle filter because of the randomness that we are having in the resampling phase. 
So what happens is, if you remember what we saw here in this particular case, a lot of these samples that were retained in your bel representation were essentially the same sample being repeated. So, and this is something that will always happen. Suppose I had like 10 samples here and I had 10 samples here, when I do the resampling, I am going to get, say, 18 samples here and 2 samples here. And to, just to keep the numbers small; so I had 10 here and 10 here, now I do the resampling, I am going to get 18 here and I am going to get 2 here. So that 2 is not a problem but if the 18 is going to have a lot of repeated copies of the same 10. So basically, what is happening is I am kind of reducing the variety in the, in the particles. Now, that is fine. You will think that reducing the variety in the particles will reduce the variance. It does but that is for a specific run of the particle filter. If I give you the same sequence of observations and same sequence of actions and observations and ask you to rerun the particle filter, just because there is a randomness here in which particles I choose to resample, I might end up with a different set of particles being repeated. Again, the variance could be low for that particular run but across runs which particles are retained and which particles are used for continuing to update my thing could be very different because I am going to concentrate on a small number which are keep repeated, which are repeated. I am going to concentrate on small number of particles that are repeated. So this could lead to a huge variance in the estimate. So one way of getting around that is to say that, hey, this I am getting this only because I am resampling so often. So every time I resample I kind of reduce variety in the particles. So why do not I resample you know, maybe at a lesser frequency. So I can reduce the rate of resampling. Instead of resampling after every movement, maybe I will resample after every 10 movements or every 20 movements or every 30 actions, instead of resampling after every action. So that is one way of doing it. Then there are other things called the low variance samplers for particle filters which we could use and I am not getting into that. I mean it is discussed in the book if you are interested in looking at it. But then, when I am going to reduce the rate of resampling I have to be very wary of some things. So we already saw that if we resample too often, we lose the variety in the samples. But on the other hand, if we resample too infrequently, if I say I am going to resample every 100 actions or every 200 actions or something like that, what will happen is the problem that I pointed out earlier, you remember? I said, what happens if we do not do resampling? I can still reconstruct my belief just by using the particles I get for my bel bar and the weights, but what will happen is as I keep doing these updates again and again and again, the particles are more likely to move into low probability configurations and eventually, the probability is to go to 0 of the particle being there and that is going to create a problem because my set of particles will become not very representative. I will only have a very small set of particles, not the M that I thought I have. So if I resample too infrequently, we will have majority of the particles in low probability configurations. So we have to figure out what is the right rate of resampling so that we can maintain this and it comes from you know, it is lot of factors that go into it. 
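One common criterion for deciding when to resample, rather than resampling after every action, is to watch the effective sample size of the weight vector and resample only when it falls below a threshold. This criterion is an addition here, not something covered in the lecture, and the threshold of half the particle count is just a typical default.

```python
# A common (assumed, not from the lecture) rule for resampling less often:
# keep the weights between updates and resample only when the effective
# sample size of the weights drops below a threshold.
import numpy as np


def effective_sample_size(weights: np.ndarray) -> float:
    w = weights / weights.sum()
    return 1.0 / np.sum(w ** 2)        # equals M for uniform weights, 1 if one particle dominates


def should_resample(weights: np.ndarray, threshold_fraction: float = 0.5) -> bool:
    return effective_sample_size(weights) < threshold_fraction * len(weights)
```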
It depends on the complexity of the motion model, how noisy your sensors are, how noisy your actuators are, and so, typically, it takes you a few you know, iterations before you can get this, this whole thing correct. And it comes, it becomes easier with more experience using these. And particle filters again are very, very popularly used in many of these state estimation problems. And there are couple of other challenges. So one problem is that, so the bel bar and the bel distributions can become very, very different when you know when we have sensors and actuators that have different levels of noise. Here is an extreme example. Consider the case where the sensors are almost deterministic and the motion has some noise. So now, what happens is I move forward. I move forward and then my, my motion model is going to smear my particles out over a range. But let us say that my sensor is very accurate. It can tell me exactly what the x and y, or the x-coordinate is for the robot. Now, since my motion model is noisy and it has smeared out the particles, most of the particles are going to have a weight of 0 because they are not at the right x, y coordinates. In fact, given the way this whole thing is going to operate, almost all the particles will have a weight of 0 because maybe 1 or 2 of them will actually end up at the exact right coordinates, most of the others would be slightly off. And since my sensor is deterministic and gives me the exact x-coordinate, all the others, the probability of z which is at specific x-coordinate given the location of the particle would be 0. So what is going to happen is my weights almost all fade away to 0. And now, sampling with replacement is just not going to make any sense. Maybe there are one or two particles or, which will have a non-zero weight. I will basically be repeating those particles again and again. Now, this will be very, very different from what is actual belief that I wanted to represent. And in fact, what happens is, maybe in one step you get that but as you keep repeating these updates multiple time, you have all your weights go to 0 in which case you will have no particles to resample. So one of the ways that, one of the simple ways that people address this, it typically it comes from having sensors that are too focused and, but you are not able to predict the motion correctly. So what people do is they assume that the sensors are noisier. Basically, you look at your probability of z given x and you add a little bit of noise to it. You do not use the actual probability of z given x, you add more noise to it than is actually there in the sensor. And this allows you to get around. This is a very cheap and simple way of making sure this divergence problem does not happen. And the last problem, last challenge, I should not say last problem, last challenge in implementing particle filters is something called the particle deprivation problem, or particle deprivation challenge. So typically, I mean we have been talking about one dimensions and other things because easy to draw but typically, you are operating in a very high dimensional space. So a lot of different state variables that you have to take care of while representing the robot. And any finite, any small finite set of particles, even I am talking of 100, 200, or something like that it will always be sparse in a high dimensional space. 
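Before moving on to particle deprivation, here is a small sketch of the "pretend the sensor is noisier" fix just described: evaluate the measurement likelihood with an inflated standard deviation so that particles slightly away from the measured value still receive non-zero weight. The Gaussian sensor model and the inflation factor are illustrative assumptions.

```python
# Sketch of weight computation with artificially inflated sensor noise,
# assuming a Gaussian range-style sensor model for illustration.
import numpy as np


def inflated_likelihood(z, particles, sensor_sigma=0.01, inflation=3.0):
    sigma = inflation * sensor_sigma              # act as if the sensor were noisier
    return np.exp(-0.5 * ((z - particles) / sigma) ** 2) + 1e-12
```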
So when I am randomly resampling things, when I am especially, when I am initially, when I am doing my random resampling, so it could, I could very easily come to a point where particles that are close to the true state vanish. And so, instead of being at the exact correct state, instead of being close to the true state. I might actually sample things that are slightly farther away, and given the space is high dimension, so everything is sparse. There will be only a few particles, to begin with, that were very close to the true state, and now, they will vanish because of the random resampling. And I could start sampling slightly farther away from the true state but then, once I start projecting those particles into the future, they will move further and further away from where the actual state of the robot is and I might come to a point where I might not have any particles close to the true state for me to make any reasonable belief updates. I might actually be completely deluded into thinking somewhere else. So one way to address this is to not resample M particles every time, resample something less than M and then add a small number of new particles, sampled uniformly over the space. And just randomly you know, reset some of the particles instead of where I was projecting it and going, just randomly reset the particles to somewhere over the space and then continue that. So your importance resampling, instead of sampling M states, M particles will sample numbers M minus k where k is the number of particles you add to refresh the particle set. And you do not have to do this every time step. If you do this every time step then you are actually losing the whole effect of the particle filter. You do this, say, every few time steps, maybe every 10 time steps, or every 100 time steps. Or you could do something more cleverer. You could look at the variance of the particles. If you find that the particles are all you know, kind of clustering around one state, then you could say, hey, I am going to do a resampling. Then I am going to do this refresh of the particles. So you could say, okay, the variance of the particle space is getting low. I am going to just stop and resample or add a new set of particles; not resample, it is refresh. I am going to just refresh the particles and so, I am going to take k new particles and then add them randomly over the space, and then I will start updating. Hopefully, it will cover up for any, any problems that could be there. So particle filters are very powerful. They allow us to model arbitrary distribution, they are simple to implement. And then, in many cases, people have very, very efficient implementation of particle filters, so that you can do this at run-time so the robot movement is not impeded by all the particle computations that you have to do. And other thing is I did not say anything about what the motion model and what the observation model should be. I mean they could be anything, I do not have to assume any specific form for it as long as it is easy to compute so that I can plug that into my lines 4 and 5 of my algorithm and I can do this. As long as it is in some form where it is easy for me to sample from. And there are some caveats, there are some things that you have to be careful about but given advantages that you get of using particle filters, so it makes sense to look at these challenges and make sure that you are addressing them. 
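Here is a minimal sketch of that refresh idea: resample only M - k particles in proportion to the weights and inject k fresh particles drawn uniformly over the state space. The 1-D bounds and the choice of k are placeholders for illustration.

```python
# Sketch of importance resampling with a periodic refresh of k particles
# drawn uniformly over an assumed 1-D state space [low, high].
import numpy as np

rng = np.random.default_rng(3)


def resample_with_refresh(particles, weights, k=10, low=0.0, high=10.0):
    M = len(particles)
    w = weights / weights.sum()
    idx = rng.choice(M, size=M - k, replace=True, p=w)    # usual resampling step
    fresh = rng.uniform(low, high, size=k)                # k uniformly injected particles
    return np.concatenate([particles[idx], fresh])
```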
And like I said, there are easy ways to address these challenges and there are more robust ways to do this as well. And so, we just go ahead and use particle filters. I will stop here. |
Introduction_to_Robotics | Lecture_33_DC_Motor_Equations_and_Principles_of_Control.txt | In the discussion yesterday, we were looking at the DC motor and attempting to understand what are the basic physics behind the operation of the motor. So, this gives an elementary view of how the motor is made and the physics behind the operation is given. Now, if you see here, we had put the rotor this is a drawing, rough drawing of the rotor and we had said that you will have an electrical conductor here and a conductor here, which is going to produce a force when you have an electric flow through it due to the presence of a magnetic field. But, you have so much more surface available, where you can put more conductors and therefore, the ability of the field to allow the rotation increases much more, if you can therefore put more conductors all around. So, that is indeed what is done, so you have conductors everywhere and all of them are going to run alongside, along the axis and they all have to be interconnected in some appropriate way, so that the whole thing works in the way you want. So, important that the interconnection be done appropriately otherwise, you would not get anything of use from the machine, so this whole arrangement of laying out the various conductors and interconnecting them in an appropriate manner that entire arrangement is known as the armature and the arrangement on the other member which is not going to rotate the main role of the other member which is not going to rotate is of course to house the entire thing and it is also the member that generates the magnetic field. So, the assembly which is going to generate the field inside the machine is then actually called as the field itself. So, apart from the magnetic field, the entire assembly is also called as a field arrangement. So you then have, in a sense you have the machine comprising of the stator and the rotor, which houses the field, the assembly and then, the armature assembly, so the stator has the field, the rotor has the armature. Now, the field in the DC machine which is used for these kind of applications if at all the DC machines is used is obtained by having magnets and this is basically large number of conductors placed around the rotor and connected in a suitable manner, which is there and it is to this armature assembly that external voltage is applied, the external voltage that is applied to make the motor rotate or to make the rotor in the motor rotate is applied to the armature and that supply is a DC supply that you are going to apply, that is why it is called as a DC motor. But, one would then see that, this is really a difficult job, because now let us say that, these are the two terminals, electrical supply terminals to which you have to supply a DC voltage and it is the rotor that is going to rotate, so how can you supply a DC voltage to an entity that is all the time rotating it will not feasible as it is, if you supply two leads to these two terminals and it is going to rotate, you will soon have a situation, where everything becomes intertwined and it will simply stop operate or something they simply snap off. So, in order to avoid this situation, there is a fairly ingenious arrangement that is done. 
In the system that, you have another cylinder, which may be very short in length and it contains many segments like that, all these are insulated segments from each other, like these are all made of, these are made of copper and all these are insulated from each other, therefore each of these segments is an independent segment and then what you do is, this is linked to the shaft of the machine. So, you have the shaft here it goes to the machine and the entire armature assembly fits here. So, you have the entire armature assembly fitting there and at the, at one end of the shaft near the armature assembly, you have this other cylinder as well on top of which there are many segments that are put. So, it means that if the rotor is going to rotate, then this cylinder along with the many segments will also rotate, because they are all fixed on the same shaft and then you have these two outputs from the armature you have two terminals which are going to come out, you make a connection of that to these segments itself from where else it do not. Now it mean that, as this is going to rotate this output from the armature lead that is going to come is actually linked to this cylinder and this is connected to another segment and therefore, the entire thing will rotate without any difficulty nothing will get you know intertwined or anything like that, this is one fit mechanical assembly and then what you do is, you place something else with slides on top of this, where you have another arrangement here which is a fixed block, rectangular or a block like that and you take out the lead from this. Similarly, what you do is, you place another block here and you take out another lead from there. So, this block that you have put slides on top of the rotating surface and therefore, both this cylinder is going to rotate this block does not rotate, it is the sliding surface here. Similarly, this also slide therefore on the one hand this member and that member they rotate whereas, this member and that member are stationary. Since, they are not going to be rotating you can take this wire lead and make a connection to a DC supply these two will not get intertwined because, the two blocks that we have put are stationary. So, now this arrangement the cylinder here, the cylinder here this is known as the, this is this one and these two blocks are called as they are known as brush. So, the brush and the arrangement of commutator as it is called is the one that enables you to make a electrical connection of a DC source to an armature that rotates. So, with this arrangement then, you can have a DC machine that has a rotating armature to which you can connect a stationary DC supply to it through a wire and the whole system will then move. So, yesterday some of you are asking about the DC machine equation that we wrote, that says V equals I into R plus K times P. Now, this equation is a steady state equation which means that, if you give a supply to a DC machine, which is steady DC supply then, the motor will start rotating, initially if it was not rotating the motor will accelerate run up to some speed and then will continue running at some speed, at what speed it will run that is an issue that we have to address still. So, the motor will pick up accelerate and run at some speed. So, when it is running at that, some steady speed, which you call as omega that is given here then, it will draw a steady flow of amperes I. 
Now, the relationship between that steady speed, steady current that flows and the steady voltage that is applied is this equation. But, if you were looking at determining the behaviour of acceleration of the motor then, this is insufficient, this is not going to give you what will happen there. Now, because you have an armature that is sitting there which contains, large number of conductors interconnected in some manner and so on. So, electrically the whole thing in addition to a resistance also looks like an inductor. Therefore, the inductance effect also has to be brought into the equation and then, the circuit equation becomes instantaneous voltage that you may apply is equal to a resistance current, resistance drop plus an inductance effect plus the induced EMF. So, this is a equation describing the electrical circuit performance of this machine and then, you have the mechanical side which says the electromechanical forces or electromechanical torque is given by K time instantaneous current I. So, these two equations together describe, what is happening inside the machine and what is happening outside the machine is that, the total moment of inertia that is rotating into d omega by dt is the accelerating torque, this accelerating torque is equal to the generated torque Ki minus, whatever is the load torque TL. The load torque is everything that opposes the motion, it may be due to friction, it may be due to damping, it may be due to a fixture that, you are applying outside if the robotic arm is going to move then by enlarge, it is an inertial torque that you are talking about because you have a mass that is accelerating in some manner. So, that inertia will of course figure in the entity J. If for example, the robotic arm is going to lift the heavy object then that is a mass that exerts its own weight. Therefore, that weight part of it will then figure in the load torque that you are having, apart from that the mass may also affect the inertia. So, one has to look at what is the mechanical system to which it is disconnected and look at how that mechanical system is going to reflect in the movement of the motor as far as electric motor see this is all that is. So, one has to determine that part of. So this equation, I mean for example you may have an expression like this K times i and it could be plus instead of the K times i minus some B times omega, if you want to have, if there is a viscous kind of drag effect, that is there and then, maybe some friction torque which is TL not, may not vary with respect to speed and then, maybe something else as well from T1. It could be more involved also, what is this form of the equation as I said has to be determined based on what the application here. But, this mechanical equation is one of the reason why, we said that even though you may have an armature current that goes in the form of ripples. This mechanical equation provides sufficient attenuation to the ripples that may be there in the flow of current i, so that the speed may not really see that kind of ripple. So, as I said in the first class itself one needs to then look at what is an acceptable ripple that is there in T and what is the amplitude of ripples that is allowable in i, so that one cannot say that I need always a pure DC to flow through the machine that is impossible. So, based on this then, one can attempt to DC how the motor itself is going to behave with respect to the voltages that are applied under the flow of current across. 
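As a minimal sketch of these equations, the snippet below integrates the electrical equation L di/dt = v - R i - K w together with the mechanical equation J dw/dt = K i - T_load, using forward Euler and a simple viscous load B w. All parameter values are placeholders, not data for any particular machine.

```python
# Minimal simulation sketch of the DC motor dynamic equations:
#   L di/dt = v - R*i - K*w        (electrical circuit)
#   J dw/dt = K*i - B*w            (mechanical side, viscous load assumed)


def simulate_dc_motor(v=24.0, R=1.0, L=0.5e-3, K=0.05, J=1e-4, B=1e-5,
                      t_end=0.5, dt=1e-5):
    i, w = 0.0, 0.0
    history = []
    for step in range(int(t_end / dt)):
        di = (v - R * i - K * w) / L      # electrical equation
        dw = (K * i - B * w) / J          # mechanical equation
        i += di * dt
        w += dw * dt
        history.append((step * dt, i, w))
    return history


if __name__ == "__main__":
    t, i, w = simulate_dc_motor()[-1]
    print(f"after {t:.2f} s: current ~ {i:.2f} A, speed ~ {w:.1f} rad/s")
```

Running it shows the transient the steady-state equation cannot show: a large inrush of current at the start, then current falling and speed settling once the back EMF K w builds up.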
So, these two equation, that is equation 1 and this equation 2 form the basis of an understanding of how one can go about attempting to control the motor operation. So, the second equation says that the electromagnetic torque is decided by how much flow of current is there and the first equation then say how much voltage needs to be applied in order that a certain amount of current can be made to flow. So, it is important to understand or note here that, the electromagnetic torque is simply a scaled version of i, this may not be case in all motor. In this particular motor, the design and the geometry of the machine is such that electromagnetic torque is simply proportional to the flow of current. Remember that, we said that, the motor operates only when there is a magnetic field and the fact that there is a magnetic field in the entire thing is embedded in this number K, because K was the strength of the magnetic fields multiplied by the geometrical parameters of the machine and therefore, you must understand that if there is no magnetic field, there is no torque. Further, this expression two also says, that if you are going to send the flow of current i the level of magnetic field inside the machine is not affected by that we have lumped it together as a number of K which is not going to change which means, that we are saying that K is independent of i. So, we are really saying in this expression that, if you send an armature current i it does not change the magnetic field inside the machine, it simply results in electromagnetic forces which cause the rotor to rotate, this is a very important aspect of the DC machine which lands the utility of the DC machine itself. In fact, there are other varieties of machines as we see as we go along and the manner in which one attempts to control those machine is to somehow get it to try and behave like a DC machine. So, the DC machine is you know in a way that, this sort of sets the benchmark that this is how a motor should perform, why it is so is because of the form of expression to, which says that, if you send the current i it immediately reflects in an electromagnetic torque and the field does not change because of that. Now, why I am emphasising field does not change is that you know that the flow of electricity, flow of certain amperes to a conductor also generates its own magnetic field. So, one may ask like there is a magnetic field sitting inside the machine already. Now, you are sending a flow of current which will also generate a magnetic field why will not this magnetic field alter the strength of the existing magnetic field inside. Fortunately, what we are saying by this expression is that the design and the geometry of the machine is such that, that does not happen. The magnetic field that is generated, we are not saying that no magnetic field is generated by the flow of current i, physics says that if flow current there is a magnetic field you cannot change the laws of physics. But, what is happening is that the direction in which that magnetic field is generated is not aligned in the direction of the main magnetic field that is at an angle 90 degrees, so no component of the generated field due to current i is affecting the main magnetic field, this is one important aspect of the DC machine arrangement which lengths its utility and advantage for the purpose of control. So, having said this then, if you look at any actuator, what is of importance is that, how much of generated torque it will give you as a function of speed. 
Ideally, you would like to have a situation like this, this is my speed and this is toque, I would like to have maximum torque available at all speed from the actuator. Anybody is familiar with engines in automobiles. So, this is a sort of engine torque that you would want that irrespective of what speed you are operating at, you would like the engine to give the same torque, so that you can accelerate irrespective of what speed you are in. But unfortunately, engines do not give this kind of ability. It depends on the sort of engine it may have something like that, depending on speed. So, you find that as speed increases the generated torque from the engine may go down. Similarly, in the case of motors also you cannot expect that, the motor will generate the same torque will be capable of generating the same torque always by itself by itself. By itself meaning, how are you operating the motor you are giving a DC voltage to the motor and the motor starts rotating as the motor rotates at different speeds the question is, is it generating the same torque. A look at this equation will say that, it is impossible because, the amount of electromagnetic torque which the motor will generate, depends on how much flow of i is going to be there and i is determined by the speed as well. So, if the motor is going to speed up, input voltage being fixed. Obviously, this i will have to reduce. So, you find that the generated torque in the motor will generally come down with T. But however, this general and intuitive understanding does not suffice for really implementing something, you need to have some equation and therefore, what one can do is use these two equations and deduce, how the speed versus torque graph will look like Professor: Sure. So, we will just come to that in a minute. So, if we want to get this expression, then what we need to do is to take this first equation V equals R into instead of i I will put it as Te by K and then, plus K times omega. Which then means that V minus R into Te by K equals K times omega. Which then means that, omega equals V by K minus R into Te by K square. So, this then represents a relationship between speed and the generated electromagnetic torque inside the motor and one can draw this as a graph. So, if I put Te here and then P here, so this equation is a simple linear equation. You see that, this has a slope of minus R by K square and a y intercept of V by K. So, it is the V by K here and then, a line here which is minus R by K square will be flow. So, one needs to understand certain things about machine. Any actuator but, in this particular case an electric motor. So, we have seen that, Te is equal to K into i. So, the issue is what is the maximum torque that this motor is capable of developing. Obviously, the maximum torque is dependent on how much flow of current you can give to the motor and how much current can they give to the motor it will depend on the size of these electrical conductors. So, you have somebody has designed the motor with some conductor size or something and depending on that one will then, say that I can allow maybe 10 ampere to flow or maybe 100 amperes to flow, whatever that is. So, there is therefore, an absolute limit for this you cannot go beyond that, this is the maximum torque that can be generated. But, never the less you can extend this graph and then go to the point, this point is then saying that speed will be 0 at that point and this is therefore, called a stall torque, stall means the motor stall means motor stop. 
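As a small sketch of the line just derived, w = V/K - (R/K^2)·Te, the snippet below evaluates its two end points: the y-intercept V/K (the no-load speed discussed next) and the torque at zero speed K·V/R (the stall torque), which may well exceed the torque the conductors are actually rated for. The numbers are placeholders.

```python
# Sketch of the speed-torque characteristic w = V/K - (R/K**2) * Te.

def no_load_speed(V, K):
    return V / K                        # speed when Te = 0 (the y-intercept)


def stall_torque(V, K, R):
    return K * V / R                    # torque when w = 0; may exceed the rated torque


def speed_at_torque(Te, V, K, R):
    return V / K - (R / K**2) * Te      # the linear speed-torque line


if __name__ == "__main__":
    V, K, R = 24.0, 0.05, 1.0           # placeholder machine parameters
    print("no-load speed    :", no_load_speed(V, K), "rad/s")
    print("stall torque     :", stall_torque(V, K, R), "N*m")
    print("speed at 0.5 N*m :", speed_at_torque(0.5, V, K, R), "rad/s")
```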
So, if the motor were required to deliver that much torque then, it means that, the motor will simply not rotate. Now, the question is whether, the maximum torque that the motor is capable of is less than the stall torque or more than the stall torque. For example, you may have designed the motor to take higher flow of current, the armature conductors may be higher, but due to some other reason, you are saying that no, I will not allow the motor to take more than certain value of current in which case the maximum current that if you take at 0 speed becomes a limit intake. And then, this represents a certain value of speed and that is the speed at which the motor will run if there is no torque needed to be delivered by machine and therefore, this is called as no load speed. In an actual machine, it may not be feasible to get the machine to run such that, it does not generate any torque if it does not generate any torque that means, it does not require any flow of amperes into it i equal to 0 and a y equal to 0 and no torque if generated, the question is, how do you overcome friction and all those things? So, in practise it may be necessary to allow some little bit flow of current in which case the speed, the no load speed is not really when T equals to 0, but then T equal to some small number, it is for response to friction equivalent. So, now the issue is you have a motor and you are going to connect it to some load. Now, let us say that you are looking at a situation of underwater robot, underwater robotic application in an underwater robotic application, which is let us say you are looking at underwater mobile robotic application this fellow has to move first and how will you get it to move you will have, you will have to have certain, you will have to have a propeller. Apart from that, the robot may be required to do something else, maybe take some layers of sand, maybe take some you know material that is there below etc, you may require other appurtenances to the robot like an arm and so on. But, if you look at this kind of situation where, first you look at the actuator which is going to make the robot move you need to have these blades that are connected to the shaft to enable it to move inside or move underwater and let us say this is a DC motor and let us say you are going to supply a certain voltage to it, this voltage V is what is given here V. So, for this motor if there was no load it would have run at that speed, now the question is having put these blade onto the shaft at what speed will it run, reduced is a very good guess. But, the question is how do you determine how much speed Student: intersection of speed v/s torque plot Professor: Intersection of. Student: speed v/s torque plot Professor: Yes. So, first of all, you need to know what is the speed torque graph of this load. In order that, the blades rotate underwater at a certain speed the blades need to be supplied with a certain amount of mechanical torque. Just like the fan, if you want the fan to rotate at a certain speed it requires a certain torque, because the fan has to oppose the flow of air in this case, it has to oppose the flow of water. So, you require for these blades, for these blades you require a certain speed versus torque behaviour. Only if you supply this much torque this blade, set of blades will run at that speed. 
So, now having connected both of them together on a single shaft how do you determine at what speed it will run; they both have to run at a particular speed such that the motor will generate a suitable torque which is what is required for the blade to run at that speed. If the motor is generating more torque it means the system will accelerate, if the motor generates less torque it means the system will decelerate. So, it must happen only at one point where the torque required for the blade is the torque generated by the motor, that obviously means that the system will run at a point where the two curves intersect and therefore, this is the point at which the entire system together will work. Remember that the system together will work at a point where the torque required by the load is the torque generated by the motor. It is not determined by the point where the two speeds are equal. Obviously, the two speeds are equal because you have connected them together, that will always be the case. Now, having seen this the question will then be if you want to make the robot move at a faster pace, what do you do? It means that the speed of operation has to be different. You want the speed to be higher, that means the intersection has to happen at a higher speed. So, then you start looking at what is it you need to do in order to get the intersection to happen at a higher speed. You cannot change the graph of the load, propeller is propeller, somebody has designed it and fitted it, you cannot go underwater and change the propeller when you want to change the speed. So, you have to control the motor, that means this graph which you have, this graph which you have, this is the graph of the motor and one has to change that graph in some manner, in order that the intersection occurs at a higher value of speed. If you want to run it at a lower speed, then you need to change it such that the intersection occurs at a lower value of speed, then the question is how do you do it. So, we see that the slope is given by minus R by K square and the slope is one of the ways of changing the point of intersection; if you make it slope more the speed obviously will reduce, if you make it slope less the speed obviously will increase. So, for adjusting the slope you can see if you can change the value of R. But that is also very impractical because you cannot go into the motor and say that I will now change the armature conductors in a motor that is rotating, that is not possible. But you can add a resistance outside, effectively increasing the value of R. So, one of the ways therefore is to say change R. But what can you do to change that: you can only increase the resistance, you cannot decrease it, you do not have negative resistances that are available outside, so you cannot therefore decrease the value of resistance, you can add resistors to increase it. But if you increase it what will happen, the slope will increase and the speed will only reduce. The other thing that you can do is to change V; if you change V then looking at this equation of speed versus electromagnetic torque, V is present only in the y intercept, the slope is not affected by V and therefore, if you change V then it means that the y intercept alone will change. So, increasing V will result in a graph that looks like this with the same slope, decreasing V will result in a graph that looks like this with the same slope.
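Before the voltage discussion continues below, a small sketch of the operating-point idea from this passage may help: the steady speed is where the motor's line meets the load's torque demand. The propeller-like load model T_load = c*omega**2 and all constants here are illustrative assumptions, not values from the lecture.

```python
# Find the speed at which motor torque equals load torque (the intersection point).

V, R, K = 24.0, 0.5, 0.1     # same illustrative motor values as before
c = 1e-4                     # assumed load coefficient, N*m/(rad/s)**2

def motor_torque(omega):
    # Motor line rearranged: Te = (V - K*omega) * K / R
    return (V - K * omega) * K / R

def load_torque(omega):
    return c * omega ** 2

# Bisection on speed: surplus motor torque means the system would accelerate,
# a deficit means it would decelerate, so the answer lies where they balance.
lo, hi = 0.0, V / K          # between stall (omega = 0) and no-load speed
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if motor_torque(mid) > load_torque(mid):
        lo = mid
    else:
        hi = mid

print(lo)                    # operating speed, about 140.8 rad/s here
print(motor_torque(lo))      # equals load_torque(lo) at the intersection
```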
So, this means that either you get a speed of operation that is higher or you get a speed of operation that is lower, this is certainly something that you can do. Because voltage is what you give externally to the motor and you can give whatever voltage you want. So, for the mechanism of speed control, the best method is to change the voltage to the motor. If you change the voltage to the motor and you want to increase the speed, obviously increasing speed means acceleration, that means you generate more torque. So, if you increase the voltage more current will flow into the machine, the motor will accelerate and the speed will settle down at a new point at which some required flow of current will happen. So, that is going to be the dynamics. But as far as the variable to control goes, that will be speed, and that will be done through the applied DC voltage. So, this graph then provides you the basis of determining what is it you can control in the motor in order to change the speed of operation. So, this graph is called the speed-torque characteristic of the motor. So, just like we said that there is a maximum flow of armature current for which the machine has been designed, similarly, there is a maximum voltage that the machine has been designed for; you cannot say that I will increase the motor speed by increasing the voltage and go on increasing the voltage. Somewhere it will flash over and the motor is gone after that. So, there is an upper limit. So, whenever you select an actuator, an electric motor or some other actuator, one always has to contend with the ratings of the motor. The rating will say what is the maximum voltage that can be applied, what is the maximum flow of input current that the motor can take, what is the maximum speed at which the motor can run. So, these three are going to be limiting conditions. Whenever you look at an application and you need to select an actuator, you need to keep these things in mind. So, we have said that looking at this we have concluded that change in applied DC voltage is the most appropriate one for controlling the speed of the motor. There are other ways: if you take any book, if somebody is going to take a book on DC motor control, apart from changing the applied voltage to the armature people talk about field control, where you can change the level of magnetic field inside the machine, that is also feasible. So, is there a way to change it? In this geometry, whatever we have drawn, how is the magnetic field established? By means of a magnet, and will you be able to change the field generated by a magnet? Once you fix it, that is it, it generates whatever field it generates. So, in this geometry it is not feasible to do any field control. But if you want to do field control it is possible by having a magnetic field generated through electric current, then you can control that current and vary the magnetic field the way you want. But that is not normally done in these kinds of applications. So, we are not going to discuss that aspect of it. So, now the question is, we have said that voltage control or more specifically armature voltage control is the desirable mode of operation. The question is, how? Now, why are we asking how? Why can we not say that yes, you have a motor, you require 5 volts now, apply 5 volts, you want 15 volts now, apply 15 volts; is there any difficulty with this, you see? Student: it depends on the application Professor: Yeah.
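Continuing the armature-voltage idea, here is a minimal sketch of choosing the voltage that places the intersection at a desired speed, using V = K*omega + R*T_load/K from the same relationship; the load model and constants are the same illustrative assumptions as before.

```python
# Voltage needed so that the operating point lands at a desired speed.
# Assumed values: R, K as before; load torque modelled as c*omega**2.

R, K, c = 0.5, 0.1, 1e-4

def required_voltage(omega_des):
    T_load = c * omega_des ** 2           # torque the load needs at omega_des
    return K * omega_des + R * T_load / K

print(required_voltage(100.0))   # 15.0 V for 100 rad/s with this assumed load
print(required_voltage(200.0))   # 40.0 V: a higher speed needs a higher voltage
```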
So, you will have if you are going to look at a application that is let us say a robot that is mobile, I have mobile robot. How do you get the source of energy on this, you will have battery, battery is the source of electrical energy on both, battery will come if you are looking at let us say, lead acid battery. Lead acid battery is a box that you get with one terminal here, another terminal here, how much voltage will it give at this terminal? Student: 12 volt Professor: 12 volt it will give, if you want you can take one more box, there also you get the same output voltage, you can connect them in series and then how much voltage you get here? 24 volt you will get. So, maybe what you can do is, you can say that in order to operate the motor I will connect either one battery or I will connect two batteries in series, you can have some switched arrangement by which you say that I will connect only one or I will connect both. But, you have a big problem there, that is you will get either 12 volt or you will get 24 volt, what if your motor for that particular point of operation require 18 volt, what will you do? You cannot get it by this kind of switching the bank of supply arrangement. So, one of the ways we do is will be let us say for example, you have a source of 24 volts which maybe two batteries in series, do not alter that interconnection keep this way we have connected the batteries is fixed and you have a motor here by the way, the symbol of a DC motor is this. So, these two rectangle, they represent the brush and the circle of tools represents the rotor M stands for the fact that, it is a motor. So, that is a representation of DC motor that is how you draw it in a circuit diagram. So, the simplest way is to simply connect them together, which means you are supplying 24 volt for the motor. But you suddenly want to decrease the speed of the motor and therefore, you cannot give 24 volt anymore. So, what you will do is maybe, what you can do is connect a resistance here, which is like adding an extra resistance to the circuit, so adding a resistance can be seen some two view point, one is you are changing the slope here or you are also reducing the voltage that is available here. So, either way one can look at it is just a view point. But, affecting the slope is a more holistic way of looking at. But, if you do this, not only if you now want to reduce the speed of operation, this will work what it will do? It will give you a new graph that looks like this, it will give you a new graph that looks like this and you can get the point of intersection what you want. But, do you see any difficulty with this mode of operation, what do you think is an advantage, what do you think it is an disadvantage Student: its a disadvantage Professor: Yes. Now, that you are having a resistance and there is going to be current flowing through it, it will dissipate heat, it will generate heat and dissipate heat which means, that in order to run the motor at the particular speed that you want. Let us say you require that, the motor just needs to generate 50 watts. Ideally, you would expect that, if the motor is going to give 50 watt of output the source needs to give 50 watts to the motor. Maybe the motor will have some loss to instead of a 50 watts you give 60 watts, I have some loss. But, if you put a resistor there instead of 50 watts, that you give to the motor you may have to give 100 watts. So, it was very lossy, very inefficient way of doing things. 
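A quick sketch of the loss argument above, using the 50 W / 60 W / 100 W figures mentioned in the lecture as rough illustrative numbers.

```python
# Why series-resistance speed control is wasteful: much of the input power just
# heats the external resistor. The figures follow the rough numbers quoted above.

P_out = 50.0      # mechanical output power the load needs, watts
P_motor = 60.0    # power into the motor allowing for its own losses, watts
P_in = 100.0      # power drawn from the source with the series resistor in place

P_resistor = P_in - P_motor   # heat dissipated in the external resistor
efficiency = P_out / P_in     # overall efficiency of this scheme

print(P_resistor)   # 40.0 W lost as heat in the resistor alone
print(efficiency)   # 0.5, i.e. only half the battery energy does useful work
```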
So, one does not normally go for this kind of an approach; this is not desirable at all. So, the question is, if that is not desirable, what next? So, that we will see in the next class. |
Introduction_to_Robotics | Lecture_101_Localization_Taxonomy.txt | Hello, everyone and welcome to the final week of lectures in the intro to robotics course. And as before, we will continue looking at the algorithm and computer science aspects of it. So, we have looked at, you know, what constitutes the notion of state, and then we looked at recursive state estimation, looked at motion models, we also looked at mapping problems, measurement models, and also looked at how to estimate maps that far. So, this week, so we will first consider the problem of what is known as the mobile robot localization problem. That is, the question is, given that you have a map of the environment, a map, in this case, it looks like this. So, there is a lot of walls and a few doors in between, let us say you are given the map of the environment in some form, find what the pose of the robot is, relative to the map. So, we already looked at this environment before in one of the examples, we are just looking at it again, in the context of localization, and then the second set of lectures, we will examine what are called path planning strategies for robot locomotion. So, the idea of path planning is you are given a map. And you are given a desired location that you would like to reach, let us say is in the top corner. How would you find the efficient path at least how to find a feasible path to go from your start location to the end location? So, given the map, and your models, motion models, and measurement models, and so on, so forth, so under your localization strategy? How do you form a path from start to the road? So, that is the second question that we will look at in these sets of lectures. To start off, so a large fraction of the localization algorithms, a majority of the localization algorithms are essentially an extension of what we have seen as the recursive state estimation problem. In fact, many of the localization algorithms are a version of the bayes filter algorithm that we already seen, just like we looked at the map estimation algorithm was a version of binary bayes. And many of these localization algorithms would also be versions of the bayes filter algorithm, we look at the very, very basic version of it. And for more, more complex versions of these algorithms, I leave you to study by yourself. That is not the goal of the course, the goal of the course is just to give you a very brief introduction to the variety of different problems that you will be solving algorithmically when you are working with robotics. So, we will, we will start off by looking at a taxonomy of the localization problems, some kind of a classification of the localization problems, what are the various dimensions under which these localization problems can be classified. So, in fact, these are specific problems themselves. So, and each algorithm would address these problems in a variety of different ways. And we will also point out how the markov localization algorithm handle some of these problems. So, the first dimension is the initial knowledge that the robot may process relative to the localization problem. So, what does the robot know? And when it starts, so that that tells you what is it that you want to do? What are the, what are the challenges that you would face? And the second is the nature of the environment in which the robot is operating. 
And the third is whether the localization algorithm actually controls the robot, the motion of the robot or is it just passively observing what is happening and so these are the three major dimensions. And finally, you could also have, whether you are working with one robot or multiple robots also, is gives rise to some interesting challenges, so, we will see in a bit. So, the first one is what we will call local versus global localization. So, local localization, that means that I already have some idea of where the robot is. And I would like to be more precise in my localization in that locality, I would like to get a more precise estimate of the pose. And I have some idea that what the initial poses, so basically, as the robot moves starting from the initial pose, I would like to be able to track it with as little error as possible. So, that is what the local localization essentially means. And it kind of manifests itself as the position tracking problem. So, in the position tracking problem, we are a little bit more aggressive. We assume that the initial position of the robot, the initial pose of the robot is completely known. And then what you do is, you accommodate the small noise in the robot motion and the sensors that you have, and continue to localise the robot, continue to update the position of the robot assuming that the initial process known. So, this is the position tracking problem. And as you can see, is very similar to the Gaussian filters and the Bayesian filters we looked at earlier. And you start off with, in fact, many of the examples we looked at, we assume that we did not know the initial position, we were like, add a probability of 0.5 for both the initial positions a robot could be being, but you could start by knowing the initial position, and then tracking this. And we assuming that the error is usually small, otherwise, the error is very large. Then you would have difficulty tracking the robot, you assume that the measurement error and the movement error are small. And, and since the uncertainty of the robot is essentially confined to a small region near the robots, true pose, because you assume that you know, the true pose, and the uncertainty is confined to the region near the robots true pose, you can in fact, these are cases where you can use a Kalman filter very effectively with a single Gaussian also, or even extended Kalman filter, you can use those efficiently with a single Gaussian, because you are not really interested in accommodating multiple hypotheses. But you are only interested in tracking this one true pose with small amounts of error around it. And because the uncertainty is confined to a small region, this is called a local localization problem, or local problem. That makes sense, this is exactly the kind of situations which the Gaussian filters were or more most appropriate for this kind of position tracking problem. So, the second class of localization problems are called Global localization problem here, I am going to assume that the initial pose of the robot is completely unknown, I just know that the robot is initially placed somewhere in its environment. But it does not know where it is. Or even if it knows, it might have multiple positions where the robot could start at. So, we do not know exactly where the robot is starting. And since this, a there could be multiple hypotheses that I would like to keep track of, and b the error could be anywhere, that my initial guess could actually be very off. 
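To make this contrast concrete before it is elaborated further below, here is a minimal particle-based sketch of the two kinds of initial belief; the workspace bounds, particle count, and noise levels are illustrative assumptions.

```python
import numpy as np

# Initial belief for position tracking (pose known, small local uncertainty)
# versus global localization (pose unknown, uniform prior over the whole map).
# Poses are rows (x, y, theta); bounds and noise are assumed values.

N = 1000
X_MAX, Y_MAX = 10.0, 10.0
rng = np.random.default_rng(0)

# Position tracking: uncertainty confined to a small region around a known pose.
known_pose = np.array([2.0, 3.0, 0.0])
tracking_particles = known_pose + rng.normal(scale=[0.05, 0.05, 0.02], size=(N, 3))

# Global localization: the prior is spread uniformly over the entire workspace.
global_particles = np.column_stack([
    rng.uniform(0.0, X_MAX, N),
    rng.uniform(0.0, Y_MAX, N),
    rng.uniform(-np.pi, np.pi, N),
])
```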
So, we as we call this a global localization problem, because I do not have bound upper bound on what the error is, so I could be anywhere in this space. And these are the problems for which Gaussian a single Gaussian based filter would have problem and things like the particle filter are more appropriate. And, in fact this is a much more difficult problem than the position tracking problem and includes the position tracking problem as a special case, as you can see. So, the initial position of the robot is unknown. Yes, but it could be, you know, in one of few places, which could even be one, if it is in only one place to start off with, then that becomes a position tracking problem, but then, so this is the one way of handling this, of course, is to just set the prior probability that you have to something uniform across the entire space. But then uni model probability distributions tend to have difficulty handling such a large prior. And we have to look at things like particle distributions, particle filters. So, that is the global localization problem. And there is another interesting variant of the global localization problem which we call as a kidnapped robot problem. So, the kidnapped robot problem is actually a while variant. So, in some sense, it is it just makes the problem very difficult. What you are saying is a guy while the robot is operating, even if it has finally managed to localise itself somehow, and knows where the true position is it can suddenly get quote unquote, kidnapped, so basically you turn off the sensors of the robot pick it up, and place it somewhere else. So, now what happens is the robot is thinking that it is somewhere else with a very, very high probability wherever it localise itself, because the robot was kidnapped and teleported to some other location. It does not know that that transition has happened. It is truly in a different part of the state space, but it continues to believe that it is where it was originally localising itself. So, it makes it so to accommodate for this possibility of the kidnapping, you have to assume it is a global localization problem. You cannot continue to do position tracking. So, at every point of time you should be able to, you know, arbitrarily reposition the robot to another part of the state space. Now, except in sci fi movies, why would anybody want to kidnap a robot? Kidnap robot problem is just a fancy way of saying that I could have, you know, arbitrary errors in a localization problem because of some unlucky sequence of noise measurements, or unlucky sequence of, you know, actuator failures that happen. And therefore, the algorithm falsely thinks that you are in a different part of the environment, but it continues to get new measurements that kind of contradicts its original localization. And therefore, we would have to somehow recover from that earlier erroneous estimation. Is it clear so when we say kidnapped robot problem not really, that the robot is being you know, blindfolded and kidnapped or anything like that. It is essentially a way of telling, testing whether the algorithm that you have, can recover from arbitrary errors, arbitrary global localization failures, instead of putting you in the one corridor in the third floor, it might put you in another corridor on the second floor of a building, or if you look at typical and office spaces, you might be thinking that you are next to somebody's cubicle, you might be next to a completely different part of the office. Because all the cubicles look the same. 
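One standard safeguard against this, which the discussion returns to a little further on, is to keep the belief from ever collapsing completely by periodically replacing a small fraction of particles with uniformly drawn poses. A minimal sketch of that idea follows; the injection fraction, map bounds, and the (x, y, theta) particle layout are assumptions for illustration.

```python
import numpy as np

def inject_random_particles(particles, fraction=0.05,
                            x_max=10.0, y_max=10.0, rng=None):
    """Overwrite a small random subset of an (N, 3) particle array with poses
    drawn uniformly over the map, so a kidnapped or badly localised robot can
    eventually be recovered by later measurement updates."""
    rng = rng or np.random.default_rng()
    n_random = int(len(particles) * fraction)
    if n_random == 0:
        return particles
    idx = rng.choice(len(particles), size=n_random, replace=False)
    particles[idx, 0] = rng.uniform(0.0, x_max, n_random)
    particles[idx, 1] = rng.uniform(0.0, y_max, n_random)
    particles[idx, 2] = rng.uniform(-np.pi, np.pi, n_random)
    return particles
```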
And sometimes when I go to IT offices, I get lost, so the robot can certainly get kidnapped. In some sense, it can, it can localise itself catastrophicaly, in a different part of the state space. And so the question is, can you recover? It is different from global localization? Because in global localization problem, I know, the robot knows that it does not know where it is. Because it is a belief distribution is fairly large. But this problem, the robot believes it knows where it is. The belief distribution could very well be a spike, could very well be a delta function in a particular location, the robot knows for sure, that is where it is. Now, if I put you in a different if I kidnap the robot, it is impossible for it to recover, because the belief has to have room for it to recover. So, one way of handling these kinds of kidnaped robot problems should be to you know, periodically, if you are using something like a particle filter, so periodically to you know, put particles randomly all over the workspace and then try to see if you can recover the true pose by doing this kind of smoothing of the probability distribution, smoothing of the belief distribution. So, that is something that you have to keep in mind that you should never have a if it is possible for you to have this kind of localization errors, global localization errors, then you should make sure that your belief estimation algorithm never becomes too certain. It always keeps the possibility open of having some unknown pose be the true one. So that is always the, that is the challenge in designing algorithms to accommodate for the kidnapped robot problem. So, moving on. So, this is one dimension, so this is how, whether you think you are, whether you know how much, whether you are in which part of the state space or whether you have to assume for a global localization. The second direction is looking at static versus dynamic environment. So, what do I mean by that? Static environments are those where only the robot is moving, or none of the other quantities are changing, everything else in the environment is the same, the object is there, map is fixed, every object in the map is fixed, nothing moves. And only the position of the robot changes, these are static environments. And these are the, again, the kind of situation that we have been looking at so far in the ball in the motion model, in the sensor model as well as the map, the state estimation algorithms you have seen so far. On the other hand, you could have dynamic environments. So, dynamic environments are environments where objects other than the robot, could also have varying locations or varying configurations. Like these are configurations that change over time. And even here, there are two kinds of, there two kinds of changes that are that could happen. So, one or changes that are transient, you know, like, like I was, I do not know, if you remember the example I was giving you in the sensor noise model, I said sometimes the paper could just fly in front of the sensor and cause it to make a short reading. So, these are very transient noises, transient motion, and they do not really, you know, they do not really affect your localization, they do not really affect your localization. And you could just treat them as noise, so this is basically what we are saying here. So, changes they do not persist over time, they are very transient, they are there for a small time that is there and then again, it goes back on the floor. 
So, these are not of relevance to the localization problem, or of the to the path planning problem. And therefore, we can just treat these as noise, and then account for that in our model, account for that in our model. So, that is exactly what we did in the sensor model, we accounted for these kinds of transient movement or transient changes in the variables, and then just decided to treat them as noise that is where the short noise came in to play in the sensor model that we had. So, and similarly any kind of this kind of temporarily blocking the path of the robot could also be modelled as noise in the motion model, like I am trying to move, but then there might be a small small probability that I fail, the not because, just because my you know, wheel sleep or something like that, it could also be because somebody was just standing in front of me and that one, one second, and after that, they moved away. So, when it tried to move, there was a noise in the transition. What we are really interested in or, you know changes that persist over time. So, what do I mean by that? Suppose I move furniture, I take a table from somewhere and put it somewhere else. Now what happens, the table is something which will block the path of my robot. So in some sense, some pathway got opened up, because the table moved, but some other pathway got blocked, because the table went there. So, if I was trying to put myself on the map and say, go next to the table, now where I have to go becomes very different, because I have to localise myself with respect to the table. And then I have to go there and if I move the table around, then my whole localization problem becomes complex, something that I have to redo all over again, not things like doors, If a door is closed, it is one thing if a door is open, it is a completely different situation. And likewise, if there is a significant change in the lighting condition, or if people are there and people are standing, and then I have to either move around them or I have to localise with respect to people standing there, and then they might be, they might actually move to a different location in the environment, and so on, so forth. So, there are a few obstacles, few objects, whose positions are of interest to me, both in terms of the localization problem, and also in terms of planning a path. In such cases, we have to figure out a way to handle the handle these objects. So, there are multiple ways in which you can think of handling these objects. One thing is to actually make these objects locations, properties as part of the state, if you make this object, locations of properties are part of the state, then basically have to keep updating the state whenever these objects move. And every time when I want to estimate the new state, I also allow to check whether these objects have moved. So, in effect, it becomes like a mapping algorithm also, where I am looking at a feature based map where the features are assigned to these objects. So, I am also estimating the map while I am estimating my location, so this becomes a mapping problem as well. But another way to do it is to assume that there is an independent mechanism that is updating the map. So, at every point of time, when you look at the map, I have the position of these objects clearly marked in the map, even though when they are moving around. And now I have to just accommodate my next estimate will have to accommodate the current version of the map and not the previous version of the map. 
So, I am going to assume that in this case, we can just assume that your localization can operate independent of the mapping problem and the updation of the map itself is done by an independent estimation process. So, there are multiple ways in which you can handle this dynamic environment. So, the third dimension we wanted to talk about, it is what is called passive versus active. Passive versus active approach for localization, this essentially pertains to whether the localization algorithm itself controls the motion of the robot or not. So, in passive localization, the localization model only observes what the robot is doing. And based on that, just based on these passive observations, it tries to estimate where in the map the robot is. So, the robot could be just moving around randomly waiting for some, some tasks to be assigned to it. Or it could be just going around trying to perform its everyday task, trying to maybe it is, it has to fetch some mail and deliver it or whatever it is supposed to be doing in office environment, or maybe in a factory shop environment, whatever is the task that is assigned to it, the robot is trying to execute that task. So, the control itself, the movement of the robot itself, is not aimed at localization, it is not trying to localise itself actively. But the localization algorithm just has to run on the robot, observe all the actions the robot is taking and all the sensor information the robot is getting. And look at how to do the localization. Again, in some sense, this is what we have been looking at so far, in the state estimation problem where the goal of the control was never to get the better state estimate. The goal of the control was something else, we do not know. We were always given the control. And we were passively observing, what was the sensor reading. And what was the control that was given in order to refine our belief state. So, that is what passive localization is. Active localization, on the other hand, is very interesting, active localization says that, hey I am going to move my robot. So that I will find out where I am very quickly. I will not try to perform everyday tasks. Because if I am, if I do not know where I am, I could run into some kind of hazardous thing, I could run into an obstacle, or I could break the robot, I could fall off, a ledge or something like that. So, I really do not want to go about it, because the cost arising from the badly localised robot is very high. So, what I do is I first control my robot and actually move in such a way so that I will minimise the localization error. And again, this has to be this has to depend on the belief state that I am in and the sensor readings and getting, based on that, I am going to move in a way so that I minimise the localization error. And then I move to some kind of like position tracking, kind of a mode, and then try to execute my everyday task. So, this is basically what we call it active localization, where at every point of time, you also try to make sure that your localization error is minimised. In some sense, this is how we operate, I mean, we just do not just start doing things without knowing where we are. If you are put in an unknown environment, the first thing we try to do is determine where we are before we decide what we are going to do next time. So, that is what basically the idea behind what active localization does. So, here is an example where active localization will perform much better than a passive localization. 
So here is the, you know, a corridor. So, where most of the corridor looks symmetric, when I look from here, for example, I am going to see two doorways, and open space to the left and open space to the right. Correct and I am going to see two doorway. So, I really do not know whether I am here. Whether I am here or whether I am here, or whether I am here, because all of these places look similar. Let me, let me take, lets I, over here, or here, or here, all these three places look similar, because there is a doorway to the top, there is a doorway to the bottom. And there is free space to the left, there is free space to the right. And whatever slight angles I look at, I will always see the same measurement. So, if I, let us say I start in the middle of the corridor, so I do not know whether I am actually in the middle of the corridor or not, because I could either be here or here. So, what I should do is move to one of these two places, one of these two places marked by a circle, because if I come to this circle, I know for sure because there is an obstacle to the. So, therefore I know that I am at the right end of the corridor, or if I come here, then I know I am at the left end of the corridor, and then from here onwards, I can just start, you know, moving to whichever room I want, I will know for sure, otherwise I will always have this ambiguity as to which room I am in. So, this way this allows me these two locations allows me to disambiguate where I am in terms of the symmetric corridor. So, even though the rooms themselves are different, so once I enter a room, I will probably know. But then that is, that is a waste, and assuming that there is some room one of these rooms is actually a dangerous room, I do not really want to go in there and I have my robot fry. So, we would like to make sure that such things do not happen. So instead of just hoping that somehow the movement of the robot will allow me to localise myself active localization, that just basically tries to move the robot to one of these places, so that you can localise quickly, and then go ahead and do the rest of the job. So, the last dimension that we would talk about is single versus multiple robots. So, the single robot case is the one that we have that is most commonly studied. And that thing that we have looked at also. But increasingly, it is becoming more common for people to consider using multiple robots system, because the space is large to cover and so you really like to have more than one robot. In fact, there are some very nice use cases of robot, teams of robots, you know, taking visitors through museums where one robot hands off to another robot at various points. And therefore you get this robot guided tour of the museum. So, there are things like that that people have been working on. So, you could treat this multiple robot localization problem as multiple independent localization problems. So, there is no need to actually consider this as a special case, just like if there are 10 robots, you have 10 single robot localization problems to solve. And you could do that. But what is interesting is, if I assume that the robots can detect one another, instead of just detecting the obstacles if the robot can detect another robot, there are a couple of things here, not only does the robot become another landmark, or another feature against which I can localise myself. 
But that robot also has a belief state about where it is, I could potentially get the belief state from the other robot, say, hey come, not only so if you detect another robot, you basically communicate the belief states to one another. And therefore, now suddenly, you find that I have a huge update to my belief, because a, I know where the other robot is. And b, I know that what the belief of that other robot where that other robot believes it is. Now, I know where I, this robot believes it is, and therefore I can combine both, and this opens up a lot of very interesting questions. So, what is the level at which the robots have to communicate to each other? You know, how do you accommodate the information of the robot? Can you ask the robot questions? Can you ask the robot question? Hey, I am trying to find this person, did you see him? A lot of interesting questions. Opens up when I start talking about multiple robot localization, and again, multiple robots and active localization makes it even more interesting problem, so in some sense, these four dimensions, we talked about capture the most important characteristics of the localization problems, there are other ways in which you can think of, you know splitting the localization question, but these are the primary variations that we have to worry about. Other kinds of variations could be depending on how noisy your sensors are, are they noise free, how many sensors do you have? How reliable are the sensors? What is the knowledge that you have off the map? What sort of a map do you have? So, all of these could potentially give rise to other interesting questions. |
Introduction_to_Robotics | Lecture_62_Encoders_for_Speed_and_Position_Estimation.txt | In the last class we had looked at the set of machines that we had seen, going from DC to BLDC to the synchronous motor with sinusoidal excitation, and we saw what are the advantages and disadvantages of each one. So, apart from this there is one more AC machine which is the induction machine, the induction motor. So, the induction motor is an AC motor, it does not have all the disadvantages that the DC motor has, you do not have anything like a brush commutator arrangement and therefore, it is definitely of advantage. The induction motor has these two varieties; one is called the wound field motor, I mean wound rotor motor, and the other one is called the squirrel cage motor. And if one uses this variety then you do not have any electrical circuits to energize from outside on the rotor and therefore it is completely enclosed inside and because of this mechanical design, the induction machine is perhaps the most robust of all the varieties. This has no magnets and therefore heating is not an issue, there is not any sort of concern with respect to the level of demagnetization, etc, all these have no role to play. And therefore, this machine is probably the most robust out of all the different varieties that we have seen. And in industrial applications therefore, by and large for larger capacities, this is the machine that they would use and historically what has been used. But however now, when you look at this induction machine, if you look at the size aspect, PMAC machines are the smallest and then you have the induction machine and then the DC motor. So, if you are looking at an application need where you really need the smallest electric motor that you can get, then you need to go for the AC machines having magnets. So, if size is not so much of a restriction and you are allowed to have bigger sizes then you would maybe go for an induction machine. So, how to operate the induction machine: it is basically the same as the synchronous machine, the loop structure, everything remains the same. So, we will not have any more discussion about that, but it is a little more involved to operate the induction machine as compared to the synchronous motor. Then the other most important thing is how expensive the motor is going to be. In this case, the induction motor most probably is the lowest, is the least expensive, and then you have the DC motor and then you have the PMAC machines which include the BLDC or PMSM. Therefore, one would look at all these aspects, that is the advantages and disadvantages that we saw, also which one is smaller, which one is larger, which is more expensive; all these will then decide which one you are really going to use in a particular application. So, we have seen that when you are going to look at the designing of this loop for the AC machine, it is very important to have some kind of mechanism to sense the rotor angle. And of course, you may need to have a mechanism to sense the speed of the rotor as well. Speed of the rotor is then indirectly an indication of how fast your application is going to move, whether it is going to move along a line or whether it is going to be a rotating application, whatever it is, speed of the motor is the one that is going to make it happen and therefore, speed of the motor is definitely an important indicator of how we design it. So, you will need to have some kind of a speed sensor.
So, you will need to have a speed sensor and in some cases speed sensor alone is not enough, you will need to have a rotor angle sensor. Now, this is what is required for the motor and you may require additional information for your application. For example, now, let us say you are going to have a robot that is going to move, somebody asked me this earlier if this robot is going to move then if you are going to say that I am going to estimate the velocity of this entire vehicle based on the speed of rotation of the motor, yes you can do it provided the wheel does not slip. So, as long as the wheel does not slip then the RPM of the motor is an indicator of the velocity of motion of this. But if you are going to have a slipping condition then the RPM of the motor is no longer a good enough indicator. Then if you are going to have slipping conditions then you need to have something else which will indicate what the speed of the entire system is. So, you may therefore need to have additional mechanisms to sense the speed which can be done in a variety of ways, so that is a different issue. In certain other applications for example, let us say you are going to have a robotic arm which is going to lift some object and put it somewhere then you need to have some kind of arrangement to sense that it has lifted the object, all that is there built into the robot definitely in some way, but how do you know whether this has moved and reached the end position? You can of course say that I will determine how the robot moves all the while because you know the rotor angle, you know the mechanical arrangement, therefore you can always convert it into some inquisition. But how do you know that no error has happened in between? So, you may need to then have an additional information regarding maybe you will need to put a sensor somewhere here, which will indicate whether the arm has reached there, and this may need to give information regarding where it is. So, you may therefore require in the overall application perhaps additional sensors depending on the needs of the application. For automobiles, you cannot depend on an external mechanism to sense, whatever you need to put has to be built in to the automobile. Yeah, so that is then a difficult issue. What you will need to do ideally is you will need to have some other mechanism to sense the velocity, maybe you can use air flow or an alternative will be, you somehow design your electric motor control system to ensure that there is no slip. So, which is what is usually done in the case of electric locomotives because there estimating the speed is very important because you need to go exactly and stop somewhere, you cannot all the time depend upon the fact that it may not have any slip or it may have slip. So, you have to estimate what the slip is going to be in some way and then you have to control the motor such that that does not happen. So, those are definitely required to be built in but the sensors that we will look through for some time is those that are required for the operation of the motor itself. So, if you are going to sense the speed of the motor, the simplest is to have an instrument or an equipment called tachogenerator. 
So, this by itself is an electric machine, it may be an AC motor, it may be an AC generator or it may be a DC generator, which means that if you are going to have your electric motor sitting here and then you have the shaft of the motor, you then attach another small electric motor to it, electric machine to it and you take its output and you ensure that this is well linked, there is no slip that happen in between. Then the shaft of this motor will rotate at the same speed as the shaft of the other motor which you are attempting to control, and if this is going to be a DC machine for example, as the shaft is going to rotate you will get a DC voltage induced in it. So, this can then be sensed and this is an indication of speed, we have already seen that for the case of DC motor the induced EMF is K times speed and therefore, as speed increases, so will induce EMF and therefore it gives an accurate indication of speed all the time, if the shaft is going to rotate in the opposite direction, then omega is reversing and therefore induced EMF will reverse and therefore you can also detect which direction the motor is operating. So, this is something that can be used always, but the difficulty here is that it is still a DC machine therefore, even if this is an AC machine, you are having a DC machine there which means that you have to replace the brushes, service it. So, all those requirements will be there to ensure that the machine operates the way you want it to be. So, though this is a very good mechanism to sense the speed, because you get instantaneous velocity either direction, there is sometimes hesitation to use this for actually sensing the speed. The other option is one can use an AC tacho, which means that this is an AC generator that you are going to connect which means that the induced EMF will be AC and this means that you will get an alternating waveform, whose amplitude depends upon speed and also frequency will depend upon speed. So, both of them are going to be speed dependent and therefore you can choose to detect in your circuitry either the amplitude or you can detect how frequently the AC cycle is going to change, anyone will indicate what is the speed. But if you want to also sense the direction of rotation, then it is not sufficient to have a single phase AC, you will need to have R, Y and B and if you have this, then you will have the other phase like that and the third phase will go like this. If you are going to rotate in the opposite direction, then instead of what you have as R, Y and B, which comes one after the other, what will happen when the direction of rotation reverses is you will have R first and then B and then you will get y. So, if you are able to detect the sequence in which the induced EMF occur that will give you an indication of what is the direction of rotation whereas, simply being able to detect the amplitude will tell you what is the speed at which it is going to be rotated so in this way one can detect the speed. And from speed you can get a sense of the rotor angle as well, how can you get that after all d theta by dt if this is the rotor angle, this gives you the mechanical speed and therefore, if you want to get the rotor angle itself, you simply integrate the detected value of speed. So, that will give you where the rotor is at any given instant provided where it is initially. 
So, if you know the initial position then from there knowing speed, you can always integrate and get what the rotor angle is, but the difficulty is going to be how to get the initial position. So, that is really not one cannot get initial position in these cases so these approaches can be used only if you are interested in speed and not in the rotor angle. So, the next most frequently used mechanism to sense the speed is what is called as an encoder. So, an encoder is of these two varieties; one is called as a incremental encoder, another is called absolute encoder. An incremental encoder can give indication as to what is the incremental angle over which the rotor is going to rotate, that means from now to the next instant, what is the incremental increase or decrease in the rotor angle but whereas, the absolute encoder is another instrument, another means to sense which at every given instance says what is the absolute angle of the rotor. That means, as soon as you energize the system if you switch on the supply, the absolute encoder says this is where the rotor is at an angle of 60 degrees or at an angle of 50.5 degrees whatever, but from the incremental encoder as soon as you switch it on you do not get anything. So, how does the incremental encoder work? Incremental encoders are again of different varieties, the most frequently used one is the one called as optical encoder. The optical encoder basically contains a disk having large number of slots around the circumference, and if you have a slot here and then you have also an LED positioned on this side and you have a detector. So, this is an LED and here you have a detector which means that in between if you are going to have a slot that is going to be located then the detector will be able to see the LED emitted by the LED, but if the rotor is going to rotate a little bit and the slot goes away, then the emission from the LED is not allowed to reach the other side and therefore, the detector does not see anything. So, if you look at the output then from this side as the rotor is going to rotate, you will get a high when the LED is able to be seen and when the LED becomes invisible because the disk has moved, then you get a low pulse, then you get high low and so on, it just goes on. So, you know the angle by which the different slots are distributed and because of that, when you get a signal, high level signal between that and the next high level signal you know precisely what is the angle through which the rotor has rotated. And if you know the interval between this height from low to high instant and you know the instant at which the next high edge occurs, then if you know this instant DT and let us say this is equal to D theta, then speed is given by D theta divided by DT, this is one way of doing it. If you know this angle let us say this is the initial angle, if you know this angle then because you know the incremental angle through which the rotor would have moved when the next edge arises therefore, you know what the next angle is? Therefore, if you know this angle, you know that the next angle must be equal to this plus the delta theta and then at the next stage it is equal to theta naught plus double of delta theta and so on. Therefore, if the initial angle is known, at every edge you know where exactly the rotor is. Therefore, this mechanism can be used as an incremental angle detection and it can also be used to detect speed. 
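A minimal sketch of the two quantities just described, the per-edge speed estimate d theta by dt and the accumulated angle theta naught plus n times delta theta; the PPR value, the known initial angle, and the edge timestamps are illustrative assumptions.

```python
import math

PPR = 2500
D_THETA = 2 * math.pi / PPR            # angle between consecutive rising edges, rad

theta = 0.0                            # assumed known initial angle theta_0
edge_times = [0.000, 0.002, 0.004, 0.006]   # example rising-edge timestamps, s

for t_prev, t_now in zip(edge_times, edge_times[1:]):
    speed = D_THETA / (t_now - t_prev)  # rad/s over this incremental step
    theta += D_THETA                    # rotor angle advances by one slot pitch
    print(t_now, round(speed, 3), round(theta, 5))
```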
So, speed for example, you can detect for every edge let us say high going edges between this edge and this edge you know the angle, you determine the speed. Alternatively, you can now anything that you make has errors. So, if you say that the angle between one slot and the next slot, you want it to be equal to this much and somebody has made the encoder and given it to you, you can be absolutely sure that this angle is equal to that much, there is always an error that is going to be there. So, in order to overcome the effects of this error, you do not determine the speed with respect to every edge, but rather you accumulate the number of edges and say that one is determined at every edge. The other way is, wait for a certain duration, count the number of edges. So, you know that so many edges have occurred, and on an average you hope that the errors that are there in the actual locating of the slots are averaged out and they go to 0, so on an average from this instant to that instant if so many edges are there then you know that it must have gone through certain number of angles. Then, you have therefore the total angle traversed divided by the duration of acquisition will then give you speed. If you want to locate the rotor angle of course, then you take the speed and integrate so that is the other way. So, if you have this kind of an encoder, speed information can at the earliest be obtained when the next edge occurs. If you have located this edge then until the next edge occurs, you do not know what the speed is. So, for example, if you are going to have a very slow speed of rotation for some robotic application, let us say you want the rotor to rotate at very low speed, therefore if you use this encoder you have detected one edge and then by the time the next edge comes, so much time would have gone by and in between that time you do not know what the rotor is doing, you do not know what speed it is rotating in and therefore, it may be in error. So, if you want to have accurate control of speed for slow rotations then it is required therefore, the number of slots in this should be higher so that even at low rotations you get reasonable number of edges, I mean you get edges at reasonable intervals. So, the number of pulses per revolution which is called as PPR, this is one important index of how good an encoder is going to be or how well it can be used or for which speed ranges it is applicable. So, for example, you have encoders that have PPR equal to 2500. So, it means you will have to look at it, the implication of this. So, this means that the angle resolution that it is going to have is 360 degrees divided by 2500, this is the minimum angle which you can minimum movement which you can detect and then you have speed. So, speed is going to determine how long it is going to take in order to go through this angle. And if the speed is too low and you are not satisfied with that, then it means that you cannot use this encoder, you will have to go for something that is higher. So, usually, if one wants a speed of less than about 100 RPM for industrial application you need at least 4096 or probably 10,000 PPR encoders. These will give you very accurate ability to control the speed because you have more frequent information regarding speed, the higher the number is the better your loop will perform, but then the more expensive it is going to be as well. So, you have to take a call regarding what is the accuracy requirement you have as again how much money you need to sink in for this. 
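A small sketch of the averaged estimate, counting edges over an acquisition window, and of the resolution versus edge-interval trade-off behind the PPR choice; all numbers are illustrative.

```python
import math

def speed_from_count(n_edges, ppr, window):
    """Average speed in rad/s from n_edges counted over `window` seconds; averaging
    over many slots smooths out small errors in individual slot positions."""
    return n_edges * (2 * math.pi / ppr) / window

def edge_interval(rpm, ppr):
    """Seconds between consecutive edges at a given shaft speed: at low RPM and low
    PPR the controller waits a long time between speed updates."""
    return 60.0 / (rpm * ppr)

print(speed_from_count(250, 2500, 0.1))        # about 6.28 rad/s, i.e. 60 RPM
for ppr in (2500, 10000):
    print(ppr, 360.0 / ppr, edge_interval(10, ppr))
    # PPR, resolution in degrees, and edge spacing in seconds at a slow 10 RPM
```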
At very high RPM is not an issue because you are going to get pulses. It will not be an issue because this is going to be handled by electronics. These output signals that are going to become high low signals are going to be given to some electronic circuits. And usually electronic circuits will not face a difficulty with high frequency. Yeah, that is true. So, you have to do your system design appropriately. I am not saying that any circuit will be able to handle anything. All I am saying is normally electronic circuits can handle high frequencies without any issue and therefore you can go for high PPR. No, You can. Usually if you look at very high accuracy applications for example, if you take machine tool applications, these are some of the most demanding accuracy needs. And here you have a requirement to make a step of 1 micron, which means that you are really looking at the angle of the rotor moving by a very-very small angle and then you want to stop. So, the motor energization is only for that much time and in that much manner, so that the rotor just moves by a small angle so you will need encoders that are very high accuracy in such cases. So, it is not well usually these kind of requirements high precision requirements arise only for a proper industrial application. So, yes as you go for higher accuracy industrial applications you will need, but there may be several applications where angle resolution is not so important, speed is what is important and you may not be operating at low speeds in all applications, 100 RPM is a very low speed usually, because I mentioned about how big the motors are going to be. Now, one important thing that is going to determine how big an electric motor is going to be is the level of torque. So, if you are going to look at a high torque motor then it means that more amperes has to flow through it which can be accommodated only if you have large armature conductors, which means the motor size will have to be big. So, if you are looking at a particular application and which says that in order to move my robotic arm let us say I require to have this much torque. So, the issue is whether you will have an electric motor that generates that much torque as such or you can choose a motor that will generate lower torque but step it up using some mechanism. So, usually you do not want to have an electric motor that is a high torque motor, because if you are going to use a high torque motor it is usually bigger and more expensive. So, you will choose a motor that is lower torque and usually as you go to high torque the RPM requirements are not so high. So, you will then choose a motor that is low torque but high speed and therefore the motor will be small, it will be lower cost and then you will use some mechanism to step up the torque and reduce the speed as well. So, this is what one would use and therefore when you say that you want a certain speed and a certain amount of load torque, one has to look at whether you want that speed at the load end or at the motor end. So, even if the load is going to be for example a very, very low RPM load, you may not want to use a motor rotating at that speed, I will give you an example. You have when you look at operation of bridges, there are different ways I mean let us say for example, you have a river that is going there, you have a hillside and you need to have a bridge and the river is in a way such that you need to allow some chips to go through, which means that this bridge has to be somehow let us say opened up. 
So, you will then do the design such that this is you are able to split it and this part can rotate this way, this part can rotate this way so the bridge opens up and the ship will go through and then you can close the bridge, so this is one way of having the arrangement. There are other ways of doing this as well. Now, the bridge when it is going to rotate, how fast do you think it will move? It is not going to you know, the angular speed would not be something like 1500 RPM. It is a huge mass that is going to move and therefore it will move at a very, very slow speed maybe to go from horizontal all the way to some particular angle, it may take 6, 7 minutes. So, that is a very low speed requiring huge torque, because that is a big iron, hopefully it is iron and that is going to be moving, it is a huge torque requirement on the motor and you will not have an electric motor that will generate that much torque, you can make it if you want but it will be a huge motor, very expensive, so, nobody will do that. Instead and if you see if it is going to move from this angle to this angle, it is not even going to complete one rotation, it is only going to go from 0 to maybe 45 degrees that is all. But the electric motor that is operating it would have finished many, many times rotating in order that this fellow goes from here to there. So, you are going to have a large gear ratio which will step down your motor speed and will reflect that the speed of the load. So, by the time the motor has done innumerable number of rotations, this bridge has only moved by that much. So, the motor therefore will be smaller and therefore, the motor may not really require to rotate at low speeds in most applications, there are certain applications where you do not want gears due to various reasons and you may want to have a motor that can rotate at low speed by itself. But one has to look at the economics of the whole thing and which one works out better for you. So, not many applications would be there where you really need to go to very low speeds and therefore you do not really need high PPR encoders all the time, it will be by and large sufficient to use an encoder like 2500 PPR or maybe even lower can survive. So, in this way then one can find out what is the angle and what is the speed. But then the question is, how do you get this initial angle? There must be some way to get the initial angle otherwise, you still do not know, you can get speed information nevertheless, because you know the incremental angles and you know the duration, you can get the speed. But if you want to get the actual angle, then there is another facility, there is another slot that is given in this, which is only one single slot in the region where it is located. So, you will have a mechanism to sense that as well. And therefore, this will detect the slot only once per rotation and that is then known as index pulse. So, this index pulse occurs once per revolution and you can therefore use that as an indicator of a reference angle because the encoder is physically fixed to the rotor of the motor which you want to control, it will always occur when the rotor goes at when the rotation goes to a particular angle because it is fixed, and therefore you can use these index pulse as an indicator of saying that is what I will consider the rotor angle 0 to be. And then from then on you can determine where the angles increase or decrease so this can be used as angle 0. Yeah, you can always detect this angle. But how do you know where to start? 
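Since the speed can be estimated purely from the incremental counts and the time over which they were accumulated, a minimal sketch of that calculation might look like the following; the PPR value, the sampling window and the use of a 4x quadrature count are illustrative assumptions, not values fixed by the lecture:

```python
# Minimal sketch: estimating speed from incremental encoder counts over a fixed window.

def speed_rpm_from_counts(delta_counts, ppr, window_s, quadrature_factor=4):
    """Speed estimate from the change in encoder count over `window_s` seconds.

    With both channels (A and B) decoded on every edge, each line on the disc gives
    4 counts, hence the quadrature factor; use 1 if only rising edges of A are counted.
    """
    counts_per_rev = ppr * quadrature_factor
    revs = delta_counts / counts_per_rev
    return revs / window_s * 60.0

# Example: a 2500 PPR encoder, 10 ms window, 1000 counts seen in that window
print(speed_rpm_from_counts(delta_counts=1000, ppr=2500, window_s=0.01))  # -> 600.0 RPM
```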
No, you really have a difficulty. Let us say my robotic arm is there, and the earlier instant when I operated it, it had moved all this while and I have stopped it at this point so I know where I have stopped it. I know in the sense the electronics inside the software that you have written knows where it is. But before I switch it on for the next time you enter the lab and you start discussing with all the people who are there and you lean on this arm while discussing and this arm moves a little. So, next time when you come and switch it on, this movement is not recorded by the electronics because it is de-energized. So, now how do you know where it is? Yes, so that is an uncertainty. But that is going to happen in one rotation, within one rotation of the rotor you will get an index pulse. Now, if your application's mechanical design is such that within one rotation of the rotor the arm is not going to move by a large distance, you may say it is okay. But if that is also an issue for you, then this cannot be used or you can use this encoder only in those applications where you do not want the instantaneous position to be controlled, but it is enough if you know the incremental angle, it is enough if you know the speed, if there are such applications, then this incremental encoder can be used. That is more difficult to build, you can do that, you are essentially saying that it is equivalent to saying all these are not spaced uniformly, but you can have a difference and because you know where the difference is, how you have designed it you know where the rotor is going to be. Yes, but it is then going to be a non-uniform split and it will give you difficulty when estimating the speed, you are not going to be waiting for the same angle all the time. And therefore, when you want to estimate speed, you will really have to wait for one full rotation to complete before you get a speed estimate. So, you have difficulties either way so I have seen an application where you know they have overcome this in some way that is the electronics even if you switch off there is an internal energy storage that they have given which will store all the data. So, that when you start next time assuming nobody has hand moved it, you know where the system is and you can start directly that is one approach provided nobody can hand move the system, nobody can upset or alter the system in between. So, the other difficulty however is that this does not give an indication as to what is the direction in which it has moved, because anyway you will get this kind of on-off signal, whether it was in one direction or the other direction. Therefore, in order to detect that as well you have one more set of slots, which are placed midway between these. Therefore, you get from the encoder a channel A output and then a channel B output as well which would look like this. If this is A then B would look like this which is phase shifted from A by an angle of 90 degrees. So, if this is 0 degree and 180 degrees and 360 degrees of the encoder pulse, it is not 360 degrees of rotation of the rotor, then this angle is going to be 90 degrees and therefore if you are now going to be moving in this direction, then you see that for A and B, at the first instance you have 1 and 0, and then you have 1 and 1, and then you have 0 and 1 and then you have 0 0, this is the sequence in which you are going to be detecting the edges, if the motion is such that you encounter edges and levels this way.
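A minimal sketch of how this A/B sequence can be decoded in software into a signed count (covering both the forward sequence just described and the reverse sequence described next); the class name and the software structure are my own illustrative choices, not something given in the lecture:

```python
# Minimal sketch: decoding quadrature channels A and B into a signed count.
# State sequences follow the lecture: forward 10 -> 11 -> 01 -> 00, reverse 10 -> 00 -> 01 -> 11.

# Map (previous AB state, new AB state) -> +1, -1, or 0 (no move / invalid transition)
_TRANSITION = {
    (0b10, 0b11): +1, (0b11, 0b01): +1, (0b01, 0b00): +1, (0b00, 0b10): +1,
    (0b10, 0b00): -1, (0b00, 0b01): -1, (0b01, 0b11): -1, (0b11, 0b10): -1,
}

class QuadratureCounter:
    def __init__(self):
        self.state = None   # last seen (A, B) packed as a 2-bit number
        self.count = 0      # signed position in encoder counts

    def update(self, a, b):
        new_state = (a << 1) | b
        if self.state is not None:
            self.count += _TRANSITION.get((self.state, new_state), 0)
        self.state = new_state
        return self.count

# Forward sequence from the lecture:
qc = QuadratureCounter()
for a, b in [(1, 0), (1, 1), (0, 1), (0, 0), (1, 0)]:
    qc.update(a, b)
print(qc.count)   # 4: one full quadrature cycle counted in the positive direction
```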
If you are rotating in the opposite direction, then you will encounter it in a different direction that means, let us say we draw this further. So, if you are going to rotate in the reverse direction, then you start with 1 0 here then the next one will be 0 0 and then it is 0 1 and then 1 1. So, you see that the sequence of occurrence of the levels, whether it is 1 or 0 is altered depending upon the direction of rotation and therefore, if you detect an appropriate thing, you know which direction you are rotating so, this is A and B again. And together with the index pulse occurring somewhere you have to have a means to say that I know what the absolute angle of the rotor is also, provided this operation is admissible that is one rotation in either direction should not cause any difficulty with your application. Then you can detect the index pulse and then from then on you can do the control as you want. So, we will stop here and then continue in the next class. |
Introduction_to_Robotics | Lecture_213_Manipulator_Jacobian_and_Statics.txt | Welcome back, in the last class we discussed about the Manipulator the Jacobian and how the Jacobian can be calculated as well as some of the issues associated with the use of Jacobian like the the Singularities issues, as well as the Inverse of Jacobian. And then, we found that the singularity can be identified by looking at the dexterity which can be calculated using the Jacobian and the Inverse Jacobian when there is when the matrix is non-singular we can use pseudo inverse to get the inverse and then solve the inverse problem and get the velocities, and we saw the method of using Resolved Motion Rate Control for robots where we used the pseudo inverse to get the velocities q dots or the joint trajectory using the cartesian velocity details so, this was what we discussed in the last class. Now, this Jacobian what we have can be used for many other applications also it s not only for velocity relationship we can use it for force analysis excetra, and you not to make it suitable for that analysis we actually, what we do is to we define this Jacobian. So, the previous Jacobian what we discuss is only mention it as a tool configuration Jacobian because, we were trying to relate the tool velocity to the joint velocity. So, you know to look at the Jacobian in a slightly in a different way, we define the Jacobian in this format. Suppose, you have this joints and this is the tool configuration. So, whenever we have a displacement theta k, we know that there is a displacement d phi or the change in the position orientation of the tool. So, we assume that there a an infinitesimal displacement of d q at the joint that will lead to a infinitesimal displacement of d u at the tool displacement and for the given operating point q at that particular operating point q this Jacobian can be defined as a relation between this d u and d q that is an infinitesimal displacement at the joint leads to a infinitesimal displacement at the tool tip or any infinitesimal displacement at the tool tip leads to a infinitesimal displacement at the operating point at the joint. And for any given operating point that is any given current configuration of the manipulator, you will see that you can relate this infinitesimal displacement with a matrix and that matrix is the Manipulator Jacobian. So, in a way it is the same as the tool configuration Jacobian also, what we are trying to say is that we are not really taking about the velocity per say here we are saying that very minor displacement can be related using this J q. Though our normal displacement J q and x are related using the forward kinematics here, we are telling that very small displacement can be related through the J q and these J q Jacobian the manipulator Jacobian can be used to represent this relationship so, that is the way how the manipulator Jacobian is defined. So, now once you have this relationship we can actually relate the displacement and therefore, we will be able to use it for the static analysis also. So, how much it is can deflect apply force here or there can be a displacement at this point and that may lead to some displacement here and that can actually use for finding out the relationship, the static relationship. So, that is the way how we can use the manipulator Jacobian. 
So, only a slight difference in the way we define it but, otherwise the concept remains the same, they are actually the relationship between the displacement at the tool tip and the joint but, here we are considering a very minute displacement so, instead of velocity we are saying a small displacement at a very short interval. So, with this definition we will try to now calculate the manipulator Jacobian. So, the principle is the same. What we do is the Jacobian can actually be defined as linear and angular velocity parts. So, you have the J q the Jacobian as the A q and B q two parts we divide them into two parts the first one is associated with the linear tool displacement and the second one is the angular displacement so, A and B. So, in a way we can say d p which is the displacement the positional displacement at the tool tip can be defined as A q d q and the d phi this minute displacement in the angular displacement is given as, B q d q where d q is the joint displacement and the d p and d phi are the tool displacement. So, this way if you define the Jacobian now, we can calculate A and B and then find out the complete Jacobian of the manipulator. So, A is for the linear tool displacement and B is for the angular tool displacement. Now, A k j as we saw in the previous example also the A k j is basically the partial derivative of p k q that is the position change in position with respect to the joint variable A k j q that is the top part so, you have the Jacobian and the top 3 rows for the linear velocities. So, the linear velocities part is actually obtained by taking the position with respect to q the partial derivative of position vector with respect to q is the A k j that is the elements of the linear part. Now, we have the angular part so, what is the angular velocity or what is the relation for angular velocity component of the Jacobian is this lower part B. Now, this B can be obtained so, B can be obtained as the k th column of B q that is the k th column so, this is k is equal to 1, 2, 3, etcetera, the k th column is obtained as B q is small b k q, small b k q is given as a constant zhi Z k minus 1 q that is the way how this B k is defined. So, this B k is a vector, it is a vector, and zhi is a constant, it is 0 if it is prismatic, 1 if it is rotary. So, that is the zhi parameter here and then Z k minus 1 is the rotation axis of the k minus 1 joint. So, Z k minus 1 is the rotation axis of the k minus 1 joint or we can say that Z k minus 1 is R 0 k minus 1 i3, i3 is the third unit vector, that is 0 0 1, so you can actually multiply by 0 0 1, this third unit vector, so R 0 k minus 1 multiplied by i3, this is the Z k minus 1. So, the first top part which is the linear velocity part that is obtained directly by taking the partial derivative so, you have the forward relationship T tool to base then you get p x, p y, p z and then you will be able to get the partial derivative but, for each column you have to take 1 0, 2 0, etcetera and then get the partial derivative here will be getting the partial derivative and getting the velocity relationship here then, that is p x with respect to q1, p x with respect to q 2, etcetera you will be taking then getting the relationship. Then, for the angular velocity part we will find out this B k q so, this is given as B k q, B q is B k q that is the vector B1, B2, B3, B4 that is the way how it comes.
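A minimal sketch of this b_k computation, assuming NumPy and assuming the rotation matrix R 0 k-1 is already available from the forward kinematics; the function and variable names are illustrative choices of mine, not from a specific library:

```python
# Minimal sketch of the angular-velocity column b_k = xi_k * z_{k-1}, where z_{k-1}
# is simply the third column of the rotation matrix R^0_{k-1} (i.e. R^0_{k-1} @ [0,0,1]).

import numpy as np

def b_column(R_0_km1, joint_is_revolute):
    """Angular part of the k-th Jacobian column.

    R_0_km1           : 3x3 rotation of frame k-1 expressed in the base frame.
    joint_is_revolute : True for a rotary joint (xi = 1), False for prismatic (xi = 0).
    """
    xi = 1.0 if joint_is_revolute else 0.0
    z_km1 = R_0_km1 @ np.array([0.0, 0.0, 1.0])   # third column of R^0_{k-1}
    return xi * z_km1

# For the first joint R^0_0 = I, so b_1 = [0, 0, 1] for a revolute joint:
print(b_column(np.eye(3), joint_is_revolute=True))   # [0. 0. 1.]
```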
So, the B1 q will be Z zhi Z0 that is what is the joint axis vector of the first frame that is the Z0 coming here and if it is rotary it will be 1, if it is prismatic then there is no rotation and it is 0 and there is no rotation and there is no angular velocity therefore, this B k will be 0 so, for whenever there is a linear joint or a prismatic joint, it has no contribution in the angular velocity of the tool tip that is what it says. So, if you look up this once again top of the manipulator point of view suppose, you have manipulator like this so, this is the manipulator now, it says that the linear velocity part how the position is changing with respect to the q1, q2, q3 I mean the joint variable so, you take this P x and then find out suppose, this is q1 what is the relation of P x with respect to q1, how the P x changes with respect to q1 is obtained by taking the P x or q1 that is what you can get. Now, the next question is when this q1 is moving, this is theta 1 or q1 when this is rotating what will be the angular velocity of this? that is what actually we are trying to find out and it says that if, this is rotating and the change in angular velocity will be as per this axis what is, this axis and that will be the way how it is changing if, it is with respect to Z axis though orientation will be changing with respect to that. If this says in different axis with respect to the origin then that will be a different way it is changing that is why it says that, the orientation the change of orientation with respect to first one will be 0 0 1 because, the axis is align like this, the Z axis is 0 0 1 and therefore, the orientation changes will be also like that. So, the orientation will change as this changes the orientation change with respect to this relationship that is what actually it says. Similarly, the second one how this joint axis is aligned with this accordingly whenever this rotation happens here, the rotation of the angular velocity at this point will be related to the orientation of this axis with respect to the base frame. So, we will try to find out what is the orientation of this axis with respect to this frame and that will be given as the second one here the z 2 the b2 will be the second joint axis. So, the second joint axis will be this one and this will be the n th axis that will be with respect to base frame. So, this is the way how you get the B k for the manipulator. So, it will be more clear when we take an example but, what we are trying to do is to get the first part now issues with this linear part the lower parts if it is a prismatic joint, a prismatic joint is not going to change the orientation of the tool and therefore, this B k will be 0. If it is prismatic this will be 0 but, if it is rotary joint then the rotation of that joint with respect to that axis will lead to a change in the orientation of the tool tip. So, the orientation of this axis is the one which actually determine the change of orientation at the tool tip. So, this B k will be the orientation of this axis with respect to the base frame so, that will be what we are getting here. So, by doing this multiplication we are trying to find out the we are doing this Z k minus 1 basic the basically the joint axis what is the joint axis and that joint axis orientation can be obtained from the rotation matrix that we have. So, we take the rotation matrix 0 k minus 1 and multiply with the i3 you will be getting the vector which is the Z k minus 1. 
So, basically there is the Z axis of the k minus 1 joint this is the way how we get the manipulator or compute the manipulator Jacobian. I hope, you got it. So, to explain it once again so, the B k the k th column of B or the B k is given as B k q is zhi Z k minus 1 q and Z k minus 1 zhi 0 if it is prismatic, 1 if it is rotary and the Z k minus 1 can be obtained by taking the rotation matrix 0 to k minus 1 rotation axis will take and multiply with i3, the unit vector in third axis and you will be getting the Z k minus 1 and this is k less than or equal to n. So, you will be getting J q like this the top part is the linear velocity part and bottom one is the angular velocity part B q so, this is way how we get the complete Jacobian matrix. So, in the previous example we do not consider this so, now we are considering the full Jacobian matrix for both linear and angular velocity. So, the algorithm is like this so, set T 0 0 is equal to I that is the Transformation Matrix T 0 0 that is 0 axis to 0 axis, 0 frame to 0 frame, the identity matrix. Then, compute b k q that is now, k is equal to 1 so, set k is equal to 1. So, I have b1 is zhi k that is zhi 1 R 0 0, k is equal to 1 and multiplied by i3 that is b1. So, b1 is zhi 1 R 0 0, zhi 1 is equal to 1 if it is rotary, 0 if it is prismatic. So, it is rotary joint then b1 will be i, 0 0 is 0 0 is the rotation matrix between zero th frame to zero th frame it will identity matrix so, it will be 1 0 0, 0 1 0, 0 0 1 multiplied by 0 0 1 so, what we will be getting is 0 0 1 so, bk will b1 0 0 1. Basically, says that the joint axis is aligned with the first Z axis so, it will be 0 0 1 that is b1 will be 0 0 1 and then compute T 0 k that is next is T 0 1 so, you calculate T 0 1 and find out what is T 0 1 so, that you can actually get from transformation matrix between first 10 0 th frames 0 th to first frame that is the transformation can be obtained T 0 1. Now, set k is equal to 2 then find out now k is equal to 2 then go to b2 and b2 will be again you will get this Zhi2 to R 0 1 so, this will be if it is rotary joint it will be 1, R 0 to 1 i3, R 0 to 1 will be rotation matrix which will tell you how the Z axis of the first frame aligned with respect to the base frame 0 and you will be getting this as again vector here that may be, sin alpha, cos alpha 0 or something like that depending on how the axis is aligned with the zero th frame. So, you will be getting this as the b2 and similarly you go for b3, b4, excetra, and finally get this A k that is the top part of the linear part where taking the partial derivative and form the J q. So, this is what actually will do in the computation Jacobian or the whole manipulator. So, will take an example it will be easy for you to follow how it is done. So, we will take the example of this Five Axis Robot Rhino and see how the Jacobian can be obtained for this robots. So, we need to have T the transformation matrices calculated for alpha is equal to 1 to 5. So, initially we need to have to calculate T 0 0, T 0 1, T 0 2, T 0 3, T 0 4, T 0 5, should be available that what we need to calculate all these transformation matrices. So, normally we do not do this for forward kinematics we do not need to do this 0 to 1, 0 to excetra but, we need to now because, we are interested to know what is how the joint axis 2 is align with the 0 axis or how the joint axis 4 is align with the joint axis zeroth axis this information we need and that s why we need to calculate these matrices T 0 on to T 0 excetra. 
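Before continuing with the Rhino example, here is a tiny sketch of how the linear (partial-derivative) part A(q) could be obtained symbolically; it uses SymPy and a planar 2R arm purely as an illustrative stand-in, not the Rhino robot itself, and the lecture later gives a numerical alternative that avoids symbolic differentiation:

```python
# Minimal sketch: the linear part A(q) as partial derivatives of the tool position
# with respect to each joint variable, done symbolically with SymPy for illustration.

import sympy as sp

q1, q2 = sp.symbols('q1 q2')
L1, L2 = sp.symbols('L1 L2', positive=True)

# Forward kinematics of a planar 2R arm (illustrative example only)
p = sp.Matrix([L1*sp.cos(q1) + L2*sp.cos(q1 + q2),
               L1*sp.sin(q1) + L2*sp.sin(q1 + q2),
               0])

A = p.jacobian([q1, q2])       # 3x2 matrix of partial derivatives dp_k/dq_j
sp.pprint(sp.simplify(A))
```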
So, this is need to calculated first are the transformation matrices from T 0 to i, and then we have off course the T 0 5 will give you the P x, P y, P z that is the position of the tool tip. Now, the b1 q so, what will have this b1 so, will be having a k and b k so, you will be having a 1, b1, a2, b2, excetra, that will be the Jacobian. So, this b1 will be R 0 0 i3 so, it will be always 0 0 1. So, this will be 0 0 1 if I take this as the Jacobian so, this b1 will be always i3 because, the joint axis 0 the first joint axis aligned with the base frame so, it will be always 0 0 1 base frame Z axis joint axis and Z axis are aligned so, you will be getting it as a 0 0 1. Now, we will find out what is b2 so, it is written here first part so, we will get j1 is the first column of the Jacobian it is given as j1. So, the first column of the Jacobian you take the partial derivative of P x with respect to q1 you will be getting j1. So, this is first derivative P x over q1 so, this is obtained as P x q1 q and P y q1, P z q1. So, P x is independent of theta 1 that s why you find out 0 here and you have the b1 as 0 0 1. Now, let us see the b2 so, we need to get b2. So, to get b2 what we need to do is R 0 1 since it is rotary joint is equal to 1 so, 0 1 i3 that is what we need to find out so, R 0 1 basically from T 0 1 you can get R 0 1 and it says that how this joint axis or how this joint frame is align with this frame. So, now this Z 1 and this is Z 0. So, how Z 0 and Z 1 are aligned it is obtained from this one R 0 1 i3 so, if you take R 0 1 it says what is the orientation of the coordinate frame to this coordinate frame and then we need to know what is the axis Z 1 how is it related to the axis of the coordinate frame Z 0 I mean, the base coordinate frame and that is obtained by getting this Z 1 so, Z 1 is obtained as like this. So, this is T 0 1 and as you can see here basically we are looking for this vector. So, this is the vector we are trying to get, this is basically the b1 and by doing this multiplication we are trying to extract this out. So, b1 is minus S 1 C 1 0 and that is why coming here is b2 as minus S 1 C 1 0 so, this is b2 0, b2 is minus S 1 C 1 0. I hope you got it so, what we have to do is to get T 0 1 and from T 0 1 get R 0 1 and then multiply R 0 1 with i3 you will be getting this minus S 1 C 1 0. So, once you are comfortable you do not need to really do the multiplication you just look at the approach vector and take that approach vector add it here so, that approach vector will be the one which will be using here. So, this is the way how you get the b2 the same will be repeated for b3, b4, b5. So, we will take T 0 1 now take T 0 2. So, T 0 2 from T 0 2 you will be getting this as minus S 1 C1 0 so, you can see that this one is minus S 1 C 1 0 and this next also will also minus S 1 C 1 0 because, the orientation of Z2 is also same Z1 so, it will be again the same way you will be getting it, because it will not change in orientation so, you will be getting it as this so, this is the way how you will getting it. So, then it will moving to this j3, j4, and j5 will be minus C 1 S 2 3 4, minus S 1 S 2 3 4, minus C 2 3 4. So, if you look at the T 0 5, T 0 5 is not given here but, if you look at our previous derivation of the forward kinematics you will see T 0 5 had the rotation matrix at the approach vector as this so, that approach vector is coming here as j5 q. 
Now, you have J q full J q S j1, j2, j3, j4, j5 so, these are the columns j1, j2, j3, j4, and j5 so, you have the complete Jacobian identified now, again you can see the Jacobian is a function of the joint angles. So, this way we can get both the linear velocity part and angular velocity part of the Jacobian. So, the linear velocity part is take the partial derivative of the position with respect to theta 4 or theta 5 you will getting the linear velocity part. I hope you understood how do we get the Jacobian. So, the top part or the linear part is the same as what we did and it is applicable to the previous Jacobian in the discussion what we had also but, the angular velocity part is the one which we are actually seeing as new one here and the angular velocity is basically that angular velocity part depends on the orientation of the joint axis. So, that is what you need to understand from conceptually, the conceptually what we are trying to say is that the angular velocity frame. So, the angular velocity of tool tip depends on the orientation of the joint axis so, if the joint axis this orientation and we are make a rotation about this joint axis then the angular velocity will change here depending on how this axis is orientated with respect to the base axis base frame. So, the orientation of the joint axis with respect to the base frame decides the angular velocity of the tool tip and therefore, we take that as the B k here and this orientation of the joint axis is the one which actually determines the angular velocity so, any change in this angle will affect the velocity here and as a function of how this orientation is there, how this two axis are orientated that is the way how you get it so, that is how we calculate the angular the Jacobian for a manipulator. So, we saw this calculation of Jacobian and we see that to do a partial differentiation to get the linear velocity part of the Jacobian and that may not be always feasible so, you need to manually do the partial differentiation and then write it but, there is a numerical way of doing this, a numerical computation of Jacobian. So, instead of going for the partial derivative and then finding out the Jacobian we can have a numerical computation. So, we will see how the Jacobian can be computed numerically. Now, look at the manipulator shown here so, you have a manipulator shown here so, this is the manipulator now, we want to find out suppose, this is the manipulator and these are the joint axis that you can see, these are the joint axis. Now, we can see that the Jacobian can be calculated as the cross product of the joint axis vector and the position vector of the joint to end effector. So, what it says that if I want to find out the velocity at this point then I can find out the vector here this is the joint axis vector and a cross product of this vector with the position vector, this position vector will give me the velocity relationship. So, that is what I actually says the cross product of the joint axis. So, this is the joint axis and this axis when I rotate with respect to this axis, this will be changing the tip will be changing, the position will be changing, and the changing velocity can be obtained as a cross product of the joint axis and the position vector, the position vector which connects the tool tip to the joint that actually decides the velocity of the tip. So, that is what actually it says can be calculated as the cross product of the joint axis vector and the position vector of the joint. 
Similarly, you want to find out what is the effect of this joint on the velocity of the tool tip, we can get it by looking at the axis of this and taking the cross product of this joint axis with respect to and cross product of this joint axis and the position vector. So, this is the position vector from this joint to the tool tip so, you can take the cross product with this you will the getting the velocity. So, that is what actually the principle of numerical computation. So, you get this J as J1, J2, J3, excetra, and then J1 the linear part will be this and the angular part will this so, the linear part can be obtained as b i minus 1 cross P n 0 P 0 n minus P 0 i minus 1. So, this actually represents the position vector from the joint to the tool tip. So, n 0 is from 0 to n and 0 to i, minus 1 is from if this you are tying from this point so, from here what is the position vector, what is position vector from this joint to this joint given by this. So, P n to 0 is the position vector form base to the tool tip and 0 to i minus 1 is from the base to joint from which we friends which we are calculating the velocity relationship that gives the position vector from i, minus 1 to the tool tip. So, this gives you the position vector and this gives you the joint axis i minus 1. So, the cross product of the joint axis joint axis to the with the position vector gives you the velocities that is what actually given here b i, minus cross P 0 n minus P 0 i, minus 1 that gives you the linear velocity part and then the angular velocity part we already saw i, minus 1 b k will set. So, we will be able to get set b i, minus 1, i, minus 1 is 0 to i, 0 0 1. So, this is the same thing what we discuss in the previous case also that is the joint axis vector. So, b i, minus 1 is the joint axis vector. So, what we need to do is to take the cross product of i minus 1 with respect to the position vector. So, take a simple example of here for a 3 degree of freedom manipulator. So, for 3 degree of freedom manipulator J1 will be b0 cross P 3 0. Suppose, this is a 3 degree of freedom manipulator consider this as a 3 degree of freedom manipulator. Now, what it is saying that J1 the first column of the Jacobian will be b0 cross P 0 3 this is b0 that is the first joint axis b0 that is the joint axis cross this position vector that is P 3 0. So, P 3 0 states that the 0 to 3 that is the vector representing zeroth joint to third joint that is P 0 3 and b0 is the joint vector and b0 is obtained by the from the rotation matrix the approach of the rotation matrix it will the b0 and then J2 will be b1 so, b1 is the this joint axis b1 cross this position vector P 3 0 minus P 0 1. So, P 0 3 is this one and P 0 1 is this one so, the subtracting that you will get this as P 1 3. So, P 1 3 will be the position vector of 1, 2, 3. So, P 0 3 minus P 0 1 is the position vector this one and this one the b1 is the joint axis. So, b1 is obtained from R 0 1 so, now you take the rotation matrix from 0 to 1 and then find out the approach vector that gives the orientation of this axis with respect to the base frame that becomes the b1. So, b1 will be the angular velocity part and b1 cross P 0 3 minus P 0 1 will be the linear part. So, this way you will be able to get the compute the Jacobian numerically. So, now you do not need to do the partial differentiation take the cross product and get the values using this methods. So, that is the way how you get the numerical computation of Jacobian. 
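A minimal sketch of this cross-product construction, assuming NumPy and assuming a list of homogeneous transforms T 0 1 ... T 0 n from your forward-kinematics code; all joints are taken as revolute here to keep the sketch short (for a prismatic joint the linear column would instead be b_{i-1} itself and the angular column zero), so it is only a starting point for the exercise given next, not a complete general implementation:

```python
# Minimal sketch of the numerical Jacobian described above:
#   linear part   J_i(1:3) = b_{i-1} x (p^0_n - p^0_{i-1})
#   angular part  J_i(4:6) = b_{i-1}
# `transforms` is assumed to be the list [T^0_1, ..., T^0_n] of 4x4 homogeneous matrices.

import numpy as np

def numerical_jacobian(transforms):
    n = len(transforms)
    p_0n = transforms[-1][:3, 3]                   # tool position p^0_n
    J = np.zeros((6, n))
    for i in range(1, n + 1):
        if i == 1:
            R_prev = np.eye(3)                     # frame 0 coincides with the base
            p_prev = np.zeros(3)
        else:
            R_prev = transforms[i - 2][:3, :3]     # R^0_{i-1}
            p_prev = transforms[i - 2][:3, 3]      # p^0_{i-1}
        b = R_prev @ np.array([0.0, 0.0, 1.0])     # joint axis z_{i-1} in the base frame
        J[:3, i - 1] = np.cross(b, p_0n - p_prev)  # linear velocity part
        J[3:, i - 1] = b                           # angular velocity part
    return J
```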
So, this is an exercise for you, homework, so write a program for numerical computation of the Jacobian for an n degree of freedom manipulator. So, you can actually calculate, I mean now you know the numerical method so, you can write a program to find out the Jacobian matrix because you have already written a program for the forward kinematics. So, you can actually use the same program to get this T 0 1, T 0 2, etcetera and from there you can actually get P 0 n, P 0 1, P 0 2, etcetera you can get and once you have that you will be able to get the position vector and from the rotation part you will be able to get the b vector and then do the cross product and then get the Jacobian. So, you can write a code, you can use C or Python or whatever language you like and then write a program for calculation of the Jacobian, that is about the computation of Jacobian so, we discussed about the importance of Jacobian and how it is useful for controlling the robot manipulator using the Jacobian inverse to get the joint velocities and then we saw how to compute the Jacobian analytically as well as by numerical methods. Now, we will just see how to use this one for static analysis. So, we found that the robot joint torques and the forces also need to be computed and the relationship between the robot joint torques and the forces and moments at the robot end effector is basically the static analysis. So, suppose you apply a force here or apply a moment or force here we want to know what is the torque to be acted upon these joints to balance that one or you want to apply force on the environment using the manipulator or I mean in static condition what should be the torque to be applied to apply that force. So, this is the static relationship or you are actually holding a weight here so, the robot has to hold the weight in a static condition we want to know what should be the torque needed to withstand the load acting on the tool tip that is the static analysis and this we can actually solve using the Jacobian. So, the manipulator Jacobian is defined as a relationship between the infinitesimal tool displacement and the joint displacement. So, the same thing applies: you have an infinitesimal displacement at this point that will lead to an infinitesimal displacement at this joint, and that actually can be compensated by a torque applied at the joint to reduce the movement of the joint. So, that is the way how you can use the Jacobian to analyse the static forces. So, we can actually write this as if the joint torques are given as tau and the forces and moments acting at the tool tip given by F so, if this is the force and moment acting at the tool tip then the joint torques tau 1, tau 2, tau 3 can be obtained as tau is equal to J transpose F so, that is the relationship tau is equal to J transpose F where, F is the forces and moments and J is the manipulator Jacobian. Now, if you know the forces and the moments acting, F x, F y, F z and moments M x, M y, M z, the 3 moments and 3 forces, then we will be able to find out what is the torque acting at the joint using the relationship tau is equal to J transpose F. So, this is the relationship we will be using to get the static analysis, where J is the Jacobian of the manipulator.
So, you can write this as tau 1, tau 2, tau n, depending upon the number of joints you have, n joints so you will be having the joint torques 1 to N, this is an N by 1 vector and this is an N by 6 matrix, the transpose of the Jacobian, and this will be a 6 by 1 vector F x, F y, F z, M x, M y, M z and the transpose of the Jacobian is used for getting this, the transpose will be an N by 6 matrix. So, that is the way how you do the static analysis. So, here also there will be some singularities in such situations because, we have tau is equal to J transpose F but now, here the singularity is that the rank of J transpose is equal to the rank of J theta and at a singular configuration there exists a non-trivial force F such that, J transpose F is equal to 0, there may be a situation when J transpose F is equal to 0 that is the torques at the joints will be 0 for some non-trivial force acting at the end effector or the other way also, the applied torques cannot produce a force in certain directions at the end effector, that kind of situation is known as a force singularity. In other words, a finite force can be applied to the end effector that produces no torque at the robot's joints and this is known as the robot lock up, that is you can apply force. Suppose the manipulator is like this and you apply a force here, a force is applied here and no torque is generated at the joints, that kind of a situation is known as the force singularity or the singular configuration in the manipulator and this situation is known as force lock up or the lock up condition where the torques at the joints are 0. And this can actually happen in many situations especially when it is in a fully extended position; in singularity conditions you will be able to see that such a situation happens. So, let us look at a case where actually this can be shown. Assume that a force is acting on the end effector; this planar manipulator is considered where joint 1 is at theta 1 position and joint 2, joint 3 are at 0 angle and you apply a force F here so, apply a force F here and its components are F C 1 and F S 1 so, the components in x and y direction can be F cos theta 1 and F sin theta 1. So, that is the force; now there is no moment acting at this stage, we assume that no moment acts. Now, theta 1 is some angle theta 1, theta 2, theta 3 are 0, a planar manipulator now, for this tau 0 can be written as J transpose F 0 so, we have tau that is the joint vector tau 0 is J transpose theta F 0 so, this will be the relation that you can get, this is the F 0 so, this is the J transpose F and F 0. Now, we can see that this is equal to 0 for the condition where this will be 0 this will be 0 because, minus F S 1 C 1 L1 plus L2 plus L3 plus F S 1 C 1 L1 plus L2 plus L3 so, this is 0 and this is 0, and this is 0. So, we will see that the tau 0 is 0 that means no torque is acting at these joints because of the force; now, it is very obvious from here because these are aligned, all these are aligned, and then any force in this direction is not going to produce any torque in the manipulator or any torque in the joints because the force is always in this direction and therefore, no torque will be acting on this to overcome this force, that force is not really reflected onto the joints and this situation is known as the lock up or the force singularity. So, that is another situation that will be encountered in the manipulator when it is in the singular configuration so, we can see that this is actually a boundary singularity also, it is actually occurring at the boundary of the workspace of the particular manipulator.
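As a quick numerical check of this lock-up condition, the following sketch builds the planar 3R Jacobian with the cross-product rule and verifies that tau = J transpose F vanishes for a force applied along the stretched-out arm; the link lengths and the force magnitude are made-up illustrative values:

```python
# Minimal sketch checking the force "lock-up" case: for the fully stretched planar 3R arm,
# a force applied along the arm produces zero joint torques through tau = J^T F.

import numpy as np

def planar_3r_jacobian(q, L=(1.0, 1.0, 1.0)):
    """Planar Jacobian (rows: vx, vy, wz) of a 3R arm, built with the cross-product rule."""
    L1, L2, L3 = L
    q1, q2, q3 = q
    # joint positions and tool position in the base frame
    p1 = np.array([0.0, 0.0])
    p2 = p1 + L1 * np.array([np.cos(q1), np.sin(q1)])
    p3 = p2 + L2 * np.array([np.cos(q1 + q2), np.sin(q1 + q2)])
    pt = p3 + L3 * np.array([np.cos(q1 + q2 + q3), np.sin(q1 + q2 + q3)])
    cols = []
    for p in (p1, p2, p3):
        r = pt - p
        cols.append([-r[1], r[0], 1.0])   # z x r for a revolute joint about z, plus wz = 1
    return np.array(cols).T               # 3 x 3 Jacobian

q = np.array([0.3, 0.0, 0.0])             # stretched-out configuration, theta2 = theta3 = 0
J = planar_3r_jacobian(q)
F = 10.0 * np.array([np.cos(q[0]), np.sin(q[0]), 0.0])   # force along the arm, no moment
tau = J.T @ F
print(np.round(tau, 12))                  # ~ [0, 0, 0]: no joint torque is produced
```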
Alright so, that is all about the Jacobian so, we will stop here the discussion. In the last few classes we discussed about the Jacobian, the inverse kinematics then we talked about the tool configuration, joint space velocity and we talked about the Jacobian so, how the tool configuration velocity and joint space velocity are related using the Jacobian and then we looked at the singularities issues, the boundary singularity and the interior singularity and finally we talked about the generalised inverse as well as the pseudo inverse and statics. So, we completed the discussion on the kinematics here so, we started with the basic mathematical foundations for representing the points and objects in space and then we talked about how this position and change of position can be represented by using transformation matrices and if, we have to represent the both translation and rotation we found that we cannot use 3 by 3 matrix we have to go for a higher dimension space. And then we defined the homogenous coordinates and homogenous transformation matrices and after going through the physical elements of industrial manipulator the type of joints and the type of configuration, we started the kinematic analysis and we talked about forward kinematics how the position of end effector to the joints and then we moved to the inverse kinematics where if we know the position of the end effector how can we find out the joint positions or the joint parameters. And while developing this we talked about the DH parameters and arm matrix and after discussing the inverse kinematics we moved, to the differential kinematics, we found out the relationship between the joint velocities and the end effector velocities. So, this relationship can be represented using the Jacobian matrix and then we saw how the Jacobian matrix computed for industrial manipulator for the linear velocity part and the angular velocity. And finally, we discussed how the Jacobian can be used for static analysis. So, with this I am completing the discussion on manipulator kinematics from the next class onwards you will be having a lectures by other faculty members. I hope you enjoyed this these lectures and wish you all the best. Thank You. |
Introduction_to_Robotics | Lecture_32_Principles_of_DC_Motor_Operation.txt | So, in the last class we had looked at need for actuators and the operation requirements that one may expect. So, and we saw the different varieties of actuators, you may take for hydraulic and then electric. What are the advantages that one has with respect to another, and then we therefore we said that there are lot of advantages you are having electric actuators. And the electric actuators are of different varieties, for the case of robotic application we usually say that these are electric motors, and these are of different varieties. There are you have electric machine or electric motors that generate rotational motion, and then you have electric motors that have basically linear motion. Usually when we talk about electric motors, this is what we refer to. This variety is rather rare, it is rather rare and not very much useful to industry; and therefore our discussion will be more focused on motors that produced rotational motion. But, if the end effect that you want with not rotational motion, we want something to move along straight line. Then appropriately we use mechanical linkages or a mechanical an accessory mechanical system to ensure that a rotational motion gets converted into a motion that is a linear, so that can always be done. For example, if you are looking at your automotive application. In the automobile you have you know the wind shield, wind shield is what between front, and you have blades that are going to remove the dust from the wind shield. That has an oscillatory operation, oscillatory in the sense that is not a reciprocating movement; it goes to one direction to another. Any idea how that is achieved? Professor: Yes, so you have this is actually you, this is an oscillatory motion, but this is achieved by having one motor which is rotating motor, a rotational type motor. And then there is a mechanical linkage which makes this into an oscillatory operation. So, you will have to use some such mechanical linkages you will in order to get this rotational motion, converted into some other kind of operation. But rotational electric motion, electrical machine are rather easy to use because the area of operation of the machine is really restricted. If you want to have a machine that is going to move all along the length; then you have to do the mechanical construction of the machines over such a large distance. So, you do have machines that go on linear operation, so these are sometimes used for material handling application. And often people are attempting to higher electric locomotive as well. Not yet implemented but it is at the research level where you have electric locomotives operating on the way on the which use linear motions. So, if you then take a if you take electric motors, these electric motors have let us say you have the motor. These motors have an electrical input and generate a mechanical output; and this operation as I am not sure as I mentioned last time, this happened in the presence of a magnetic field. And the what we essentially need to see, if this is the kind of system that we are going to have, and we want it desirable mechanical output from the system. That means we want to make a shaft of the machine rotate in a particular manner that you want. If you want to make it operate in a particular manner that you want the only thing that you can adjust is the electrical input. 
There is nothing else in the system that you can adjust, and therefore we are essentially looking at what is it that one can do. In order that the input that is effect to the machine is of an adjustable nature, and how one can go above adjusting it in order to get the kind of operation that one has looking at. And as we see that the electrical input could be in some cases an AC input; if the robot is not going to really move away from place to place. Then it makes sense to operate it with AC input, but on the other hand if it is going to be moving then it may not be very easy to have an AC input like this. In such situations you may prefer a DC input simply because energy storage can then be put on the robot itself and storage is available only in the form of the DC. So, you can use it in that manner as well, and therefore in this in the few actuators regarding electric actuators; what we will be looking at mainly is how the electric motors operated. And the nature of this circuit that is going to sit here, which will enable you to take whatever supply is available. And give it in a adjustable manner to the electric motor to get what you want. This is what we are going to look at and this block which will going to sit here, is essentially a block that it is having an electronic circuit. And this has to hand the full to electrical input that is going to be given to the machine so, this is a high power circuit. It is high power as oppose to the lower the sort of low power electronic circuit which is all your analog electronics, your op-amps and resistors and the different varieties of electronic circuits there. As against that this handles higher output power that is why it is called high power. So, we will therefore have to have some understanding of what the circuit is; the nature of operation of the standard circuit and how it will impact the electric motor. And also we said that you need to get some desirable operation out of that which means that the circuit operating it must first of all know what is happening, in order that it can impose what is desire. If you do not what is happening there how do you know how much adjustment you need to give to the supply. Therefore, in order to have that there need to be some information flow going from this end to the circuit. Which, will then enable it do decide what is it that you must do now to the input given to the motor to get a desirable performance. And therefore all these kinds of systems are basically feedback controlled system. So, you have information that is now flowing back from the output side, and then there needs to be some algorithm based on which you are able to look at what is happening at the output. And then you also know what you want depending upon the difference, you must be able to do something so there, there is then an algorithm. So, you may represent it in this manner that you have the motor and then you have the power electronics and this gets in source of power one may call it tools of electrical power. This is what energizes an electric motor and then you have a shaft which may be coupled to whatever application we want. This now needs to be given an input so you need to have something here, which say how this electronic circuit should act in order that the output performance is achieved. So, there is here therefore some controller and this controller needs to know what is required, so there is a reference. No, I clear it in this way and that requires an input of what is happening on the other side. 
So, this is the sort of system, this controller itself could be fairly sophisticated; maybe you need a digital signal processing. You need to implement it some kind of digital algorithm whatever needs; and this reference is what is going to tell, what you are supposed to be doing. So, if you are going to be moving a robotic arm to pick up this object from there, and you placed it somewhere else; then this action of asking the electric motor to rotate starting from this position. Rotate the hand is lifting up and then you rotate it further; so something to happen and then come down and place it. All this has to be given as the reference; where will this reference come from? This reference has to come from what the robot has to do. Somebody has to decide apriory, how the entire robotic system has to do these sort of program. So, this comes from a supervisory system that say how the overall system must behave. So, if you are going to have something that is going to move around in this room for example. And it then needs to locate whether there is an obstacle or not; then go somewhere and go around and then display and then return back. So, somebody now has to write an algorithm to find out if there is an obstacle or not; there has to be a sensor that detects the obstacle. And then if there is an obstacle, what is that you need to do? Which direction you should move? When you should move forward? All that will come in the supervisory system. Finally, after do looking at all these environments, you take a decision; yes, move forward. At that decision is implemented by this loop that we have shown. So, the actuator in the entire robotic system, the actuator is the final actuating element; it is not something that is going to involve itself in the decision of what were the robot must do. But, rather implement the decision of what needs to be done. So, we are then going to look at this end action how that can be implemented. So, this is broadly we are then going to look at and as somebody ask me last class, what about the books for understanding this in a maybe different way. So, I would suggest the book called Modern Power Electronics and AC Drives by an author called B.K. Bose. So, this is a book that gives lot of details about how such systems operate, and how you can design this kind of systems and so on. For our purpose however we are not going to go into the level of details that this book gives. We are looking at a very-very introductory can superficial level; so, if you want to get into some more understanding, this is probably the book. But, I am not really aware of a book that will discuss it at a level that we are going to handle in the course. This is what we are going to do is that a very-very superficial level; and you will not really find a book that deals with that level. But, however further information is available here; there is another book called Electric Drives by R. Krishnan is another book that you can refer. So, this is especially speaks about AC drives and this book has material on DC drives as well as AC drives. So, in order to understand I mean if you are going to be a sort of system integrator or you are going to be looking at designing an entire system having the robot. Then it is only necessary for understanding the role of the electric machine, and the role of the circuit that you are going to put here. How do these things operate and the basic operational behavior of these things, so that is what we will be looking at in these things. 
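Purely as a schematic of the information flow just described (reference from the supervisory level, measurement fed back from the sensor, controller deciding what the power electronic stage applies to the motor), one might sketch the innermost loop like this; all the names and callables are placeholders for illustration, not an actual drive implementation:

```python
# Schematic sketch (not a real implementation) of the feedback loop described above.

import time

def control_loop(get_reference, read_sensor, controller, apply_to_power_stage, dt=0.001):
    """One possible structure for the innermost actuator loop; all four arguments are
    placeholder callables that would come from the rest of the robotic system."""
    while True:
        ref = get_reference()              # decided by the supervisory / task level
        meas = read_sensor()               # e.g. speed or position fed back from an encoder
        command = controller(ref, meas)    # control algorithm acting on reference vs measurement
        apply_to_power_stage(command)      # the high-power circuit adjusts the motor input
        time.sleep(dt)                     # a real system would use a timer interrupt instead
```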
So, you go ahead, so let us start by looking at a simple electric motor. Electric motors of different varieties the simplest to understand the operation is what is called as a DC motor; and sure you might have a heard about it at some time or the other. And so this then has two basic structural parts, we are ofcourse looking at rotating machine and therefore you have one entity called as a rotor. Another called as stator, the rotor is the entity that rotates and stator as the name implies is stationary. So, you have the stator so the stator is a cylinder, where you have a cylinder with a hole all through; and the rotor is another cylinder that is inserted into this. So, this member the outer member which is called as I said stator is fixed in some way to a base you may put a bolt and fix it. And therefore the stator is stopped from rotating and this member which is inside, this is then called as the rotor, the rotor is free to rotate. And your application whatever you are going to connect is then connected to a shaft which extends from the rotor. It may not be as long as I have drawn, but there is a shaft which extends from the rotor. So, the basic idea of operation of the machine is very simple and same from the simple laws of physics which I am sure you are all aware of. The simple law of physics is that if there is a magnetic field that is going to be there. Let us say you have magnetic field that is oriented along the directions of the arrows that I have drawn. And then you have an electrical conductor that is located in the magnetic field that means I am saying that the magnetic field exists in this space which is going from up to down. And in that you have a conductor that is located like this field is going like this way. You have a conductor oriented like this and then this conductor, if you have let us say that you have a flow of electricity. This is flowing in this direction so you have the field flowing from up to down. You have a conductor and you have a flow of electricity that is coming out of the conductor in this manner. If this is the arrangement that is there, then we know from basic electrical that is basic physics that I can say this is a electrical conductor having flow of current. Then this conductor experiences a force, it experiences a force that somebody remember the equation linking force to the magnetic field to the conductor. I vi, vii, so the force is actually given by I into dl cross B is the equation. So, if you have a situation where as I have drawn, the field is going from up to down, you have this electrical conductor located here. How the angle how much is the angle between the magnetic field and this conductor, equal to 90 degree and therefore this means I into dl multiplied by B into sin of the angle between them which is sin 90 degree. And therefore this reduces to I into dl into B. This is the force experienced by a small length of the electrical conductor dl, but if you are going to have a situation, where all along the region of the magnetic field. The conductor is in the same direction then the net force that is experienced is simply I into l into B where l is the length of the conductor. So, this is your familiar equation B into i into l, so this is fairly simple and this is the basis of all electrical machine operations. So, the laws of physics based on which motors are built is very-very simple. 
Now, therefore if you can somehow make a field exist inside the machine that let us say goes like this; comes out this way, goes through this one and then comes out this way. Similarly, you make another field or you make a field that exist and goes this way. So, this field line goes around like this and closes, this goes around like this and closes. So, effectively then as far as the rotating member is concerned, it is going to see field lines coming down from this stator in this direction. And field lines going out in this direction and therefore if you sit on the rotor, and look at the space around you; you see lines of magnetic field that are entering from somewhere. And leaving from somewhere and therefore you would imagine that you have a source of field which is there outside for which North Pole is setting and South Pole is setting here. You see that North Pole is the region for which the field exists, and therefore that is a place where it exiting and it is entering somewhere here There is a North Pole here and South Pole here and I am seating right in the middle. Now, therefore since you have established a magnetic field that is going to do like this, and it is going to do like this. It is the situation very similar to what we have drawn there; there is a region now having magnetic field. And now in this space if you now locate an electrical conductor, let me draw a conductor here. I place the conductor here that is going to run all along the length of the rotor. I also place another conductor here which is going to run all along the length of the rotor, along the axial length of the rotor. And then let us say I make a flow of current happened such that it is dot here and cross here. If I draw a circle and represent here by a dot; it means that the flow of the current is out of this. And if I put a circle and represent it this way; it means the current is flowing in to that. If this is the case how one the force be experienced by this conductor, which direction will be the force? You have field line going from top to bottom; you have a conductor setting there with flow of current outside. So, dl is a vector that is going along the direction of I that is outside; B is down, so I dl cross B is I dl cross B is going to exert a force in this direction. How will the situation Be in the other conductor? You have the same direction of field, but the flow of I has now been reversed. And therefore you experienced a force in the opposite direction, which means in this direction. So, you now have a cylinder where a force is exerted in this direction and one hand and exactly opposite if I locate the other conductor. You have a force of acting in this direction and therefore how will this cylinder respond; it will begin to rotate. Why it will not move either this side or that side? Net force acting on it is zero; therefore there is no horizontal movement of this mass. It will only cause a rotation about the axis and therefore this rotor will begin to move; it will begin to rotate, rotation is caused. So, B into I into l if you now look at the way I am going so we somehow need to have a methodology of allowing a flow of current here and bringing it out into this. So, I have an electrical conductor here going all along the length; if I simply make a interconnection with this conductor and that conductor at the back. Then any current that goes in and come out through the other side. 
So, that is a very simple arrangement which means effectively that if you have a cylinder here and that is going to go along the length. Then I have a conductor here that goes all along the length of the cylinder, emerges out at this point. I have another conductor that goes all along the length of this cylinder, it emerges at this point. I simply short this two and therefore if I have this two as outputs what I can do is send the flow of current into this. This current travels all along the length and then comes back and goes back into the circuit, if I am going to connect here which will be the source of electric current. So, if I do this then I we have already seen that it results in rotation. So this is the basis of motor operation. Now, this is a force that is existing at this location. In order to cause rotation, you need a this force multiplied by this radius is called force into radius is mechanical torque. And you have this force which is going to generate one torque, and similarly that is going to generate another torque. Both of them bending to torque rotation in the same direction, and therefore the net amount of torque is B into i into l into r multiplied by 2. This is the torque that is going to be generated by having these two conductors located in this manner. So, then the issue is how do you generate the magnetic field first; so that is now fairly simple. What you do is in the region here, so the region here you fill it up with that is fill it up with by having a magnet. If you put a magnet that magnet is going to generate a magnetic field so that is your source of magnetic field. So, you will then have to put another magnet to have symmetry in operation. So, the magnet above will be arrange in such a way that this face is going to be North; and that is South and the magnet below will be arrange in such a way that this face is S and that is N. Which then means that you will have magnetic field generated which will come out of this and go into this and this will complete the circle in the iron of the stator. So, this is very simple that is going to be there. So, now in this equation therefore you have B into I, l into r; B is going to be generated, this is going to be generated by having a magnet; l and r are dimensions of the system. So, you really cannot adjust either B or l or r in operation. So, you cannot meddle any of them, but however you can meddle with i because this i is caused by an external circuit that we have connected here. And this is what is going to result in a flow of current here and then coming out of at this point; and therefore you can do something only with this flow of current i. And the goal therefore is if you are going to have an electric motor and you want to control it as per your need, of your of the robotic system. Then what you need to control is how much flow of current is there; so, the question is how to do that. Now, apart from this there is an another phenomena, now that if rotation is going to happen then there is another law of physics enters into the picture which is called as Faraday s law. It says that if you are going to have a magnetic field and some conductor is going to move then there will be an induced EMF there. And you can show that, that induced EMF is again equal to B into l into r multiplied by 2 multiplied by omega, which is the speed of rotation, speed of rotational motion. 
So, you see that there is an induced EMF, the induced EMF has an expression which look very similar that of the mechanical torque which is generated except that in this place you have i and here you have omega and therefore you can write an expression for the developed amount of torque as some numbers k multiplied by i. You can write an expression for the induced EMF as the same number k multiplied by speed. And this law also has an accompaniment which is called Lenz s law which says that the direction of induced EMF is to so as to, oppose the reason why it is there. And the reason why it is induced EMF is there is that these conductors are attempting to move. And why these conductors are attempting to move is because there is a flow of ampere i; and therefore the induced EMF will attempt to oppose the flow of i. So, looking at all this one can therefore develop a simple equation describing the electrical aspect of the DC motor. You are going to apply an input voltage v and this input voltage v result in a flow of current I which is going to flow in these two electrical conductor. So, if there is a flow of current in an electrical conductor, this conductor obviously has some resistance to it. And therefore this v has to be equal to the flow of current i multiplied by the resistance. But that is not all there is an induced EMF now existing which is opposing this flow of current therefore, V equal to I into R plus K into omega. So, this is a very simple equation describing the electrical part of the machine. The machine has a mechanical part as well which says that the developed mechanical torque; it is actually electromechanical torque is equal to K multiplied i. So, these two equations then together describe everything about this machine. In response to this torque you have a rotating mass which is going to move. Rotating mass means the moment of inertia et-cetera all those things which are there. Professor: Why? Professor: Right, so you are applying an input voltage V and this induced EMF is opposing that V and therefore the difference will result in a flow of current I, which is I into R. I just wrote it on the other side, so now this is going to result in movement which has inertia, other opposing forces and so on. So, it has to overcome all that and then start rotate. Now, therefore what we need to understand is how one can do some control on this. Usually when you talk of a DC motor, I am sure you might have seen small DC motors which you excited a low power DC sources that you supply a nice DC voltage to it, the motor starts rotating. Now, the question is looking at this expression, you may say that you want to supply a DC voltage to the motor. What you normally mean by DC motor? What generally it implies is that if you draw a waveform of this voltage V as the function of time. You talk about this voltage then you draw with respect to time. if one says It is a pure DC voltage what you would expect is a graph which look like this. It is a pure DC voltage, so we say for example a 100 volt DC then I would say that this is equal to 100. And for all time it remains exactly at 100 volts that is pure DC. Now, one can supply this kind of a DC voltage to the motor, in which case you would expect that we say draw a graph of i with respect to time. Then you would expect like this graph is also flats, if you draw some amount of flow of current maybe 5 ampere, 6 ampere or whatever it is, some current will flow. 
But, out of the two having an applied voltage that is flat and having a flow of I that is flat. Which one is more important? Which is more important? Is it necessary to have a flat voltage profile like this? Is it necessary to have a flat flow of current like this? Why would you say that you need a flat current? Torque will otherwise fluctuate and that electromagnetic torque term is the one that is going to result in motion. And you want the motion to be nice and small; you do not want an oscillatory motion. So, you would like that input current to be a flat profile. It is not essential that the voltage also needs to be flat. So, one needs to therefore look at maintaining the flow of current I to be as flat as you can allow. This is where what we discussed in the last class entered the picture that it may not be feasible to keep it like that; it may be feasible only to keep it like this. Why, we will discuss as we go along, but that is what may be feasible. And if that is what is going to happen then this i multiplied by K is going to give you electromagnetic torque which is also be like that. To the question is what is this amplitude that is allowable for you, for the particular application at hand and what is the frequency that is allowed for the particular application in hand. Given any application like you can always you should always be able to determine the high enough value of the frequency such that the mechanical system is immune to that. If you have a mass and you go on hitting it at small amplitude at high frequency after some time it will just not respond. So, what means to therefore determine what is this and what is this, and how do you get? Why should you go for this kind of an operation? Why it is that we are not able to get this? A smooth DC wave form. So, all this we will see in the next class, we will stop here. |
Introduction_to_Robotics | Lecture_73_Recursive_State_Estimation_Bayes_Filter_Illustration.txt | hello everyone and in this module uh we will start looking at exactly how the base filter algorithm will operate okay so as uh most of you would remember the base filter algorithm is going to have a prediction step right ah and followed by a measurement update or a correction update right so before we get into the working of the base filter algorithm so let us just look at a simple example where we would illustrate this right ah so we are going to call this the door world ah this example is taken ah directly from the textbook right so you have a robot right and so it has a camera and it is standing in front of a door okay so there is a door and the door could be in one of two states right door could be either open or it could be closed so remember that our state signal could be much more complicated than just this one door being open or closed just for illustration purposes so that the math and the computation that you would see on the next few slides is easy okay so we're going to assume that the state is described by exactly one variable which is the door and the values this variable can take is that it is open or it is closed okay so my x t right consists of just one variable it does not a vector and at each time t x could be either open or x could be closed okay is it clear now what is my z right my z is using the camera okay the robot can sense whether the door is open or closed right my z is also a single variable and it is what i sense the door to be right the door could be closed or the road could be open but it does not mean that the door is really open or close right the the camera could make a mistake for example there could be a reflection on the door right which makes me think that there is a light coming from the other side and therefore i could think that the door is open while the door is actually closed right sometimes the door could be open but i might actually it might i might be looking at a darker room and i might think that the door is actually closed right so but so the camera has some amount of noise it is going to tell me whether the door is open or whether the road is closed but i do not know for sure right so my x at every time instant is going to be either open or closed and my z also at every time instant is going to be open or closed whether i sense the door to be open or whether it sends it to be closed okay now the next thing i need to do is look at the actions right so i need to have a set of actions here and so i am going to assume that the robot right now to make things simple the robot doesn't move right it has only one action right that it can open the door right it can push the door and try to open it okay so again there could be some noise it might not push it hard enough right so you might have to push a little harder to get it to open right and there is some noise and of course as always the robot can choose to do nothing so really the robot has two action choices one whether it is going to push or whether it is going to do nothing you know one can just stand in front of the door and then keep looking at it right without doing anything to the door right so these are the two possible actions so two possible states open or closed two possible observations open or closed and two possible actions push or do nothing okay exactly so ah so let us look at how this world is going to look like for us ah so at when we start right so remember my x x is ah open or 
closed right so when i start i do not know anything about the state of the door right because i have not sensed anything remember we always said that x naught is your initial state if you have any prior knowledge you could put it there but otherwise you have not done any actions you have not made any sensing or any measurements right therefore i start off with thinking my belief right on whether the door is open or closed is basically half and half right so i encode it like this right so right so the belief that the door is open that is i i think the door is open with probability half right and i think the door is closed with probability half and this is true before i make any sensing actions okay is it clear before i make any sensing actions i think the door is open with probability half the door is closed with probability half now notice that this does not mean that there is a half half chance of the door being open or closed it just means that the robot thinks the door is open with a half chance or the door is closed with a half chance the door is either open or closed okay so that's that's something to ah keep in mind right right so now this is the initial belief that we start off with now i'm going to look at the dynamics of the world right so i need to know the sensor model and i also need to know the motion model if you remember right so the sensor model i am going to assume that the camera is a little noisy right so even if the door is open right there's a small chance that i might sense the door as being close so how do i encode that so my z remember is sense open right so it's going to going to think of the observation as the door being open and the door is actually open right so if the door is actually open the robot will think or robot will sense it as being open with only a probability of 0.6 right the door is open but the robot could sense it as being closed with a probability of 0.4 right it could be a variety of reasons the robot could think the door is actually closed when the door is open with some probability point where there could be some kind of occlusion on the other side of the door or or there could be something blocking your view of the door itself right and therefore you think that even if the door is open the probability that you sense that it is closed is point four right so your reliability if the door is open that you will sense it that it is open it's just greater than tossing a coin 6.6.4 okay it's a pretty noisy sensor so the other uh way right when the door is closed right so i have to look at how the sensor will behave when the door is closed as well right so when the door is closed i am going to sense it as being open with a probability of 0.2 i am going to sense it as being closed with a probability of 0.8 okay so that is a fairly more accurate sensing if the door is closed i'll sense it as closed with a very high probability which is point eight and if the door is closed i'll sense it as open with the probability point two i mean these are you know just numbers that we have for illustration purposes so you should not really be questioning this uh too closely as to whether these are realistic numbers or not but they are no no not too bad right so is it clear so now we have the sensor model and the center or the measurement model that gives you what is the probability of zt given x t right so x t could take two values z t could take two values therefore you have to specify four probabilities here right so when x t is open probability of sensing its open 
is point six when xt is open the probability that you're going to sense it is closed is point four okay likewise closed sense open point two closed since closed point eight okay great so at least the the the sensors don't completely mislead you so the right sense sensory value right has a higher probability okay so going to the next ah etcetera right and now we have to look at the motion model right so what is the motion what is the motion model here for us or the transition model here for us so i think of a particular state right so i am in a particular state xt minus 1 i do an action ut right and then i land in a state x t right so i start in x t minus 1 i do u t i land in x t okay so that's the that's our model right this is a markov assumption so we said we are not worried about history so i know what is the state i am in at t minus 1 i know what action i performed at time t so what is the state i am going to land up in time t okay so let us look at this again again we have to look at multiple cases so now for the first set of values i am showing on this slide i am looking at the action push right so what happens when i push okay so what happens when i push right so if the door was open to begin with and then i push what do you think is a likely outcome it's going to be open right this is by pushing it i am not going to close the door right i have to pull the door to close it i don't have a pull action i only have a push action so if the door is already open and i push it'll continue to stay as open right so that happens with probability one that means it's a certain event right if the door is open and i push it will continue to stay open and therefore the converse event right so the door is open and i push the door will close as a probability zero so it'll never happen so that's what it means right now look at the other case the door originally is closed and i push and the door opens i have a probability of point a right this is a fairly reliable action but the door is closed and i push that's a small chance of point two ah just for that the small chance of point two that the door could be closed could remain closed that could be the door could be stuck you are not pushing with enough pressure or could be some other source of noise that the door remains closed even even when i actually try to push the door open okay and finally ah for the motion ah so i could uh do nothing right that's the other action right so the first action was push and the second action is basically do nothing right and so uh if do nothing basically does not cost the world to change right so if the door was open and i do nothing the door continues to remain open uh if the door was closed i do nothing the door continues to remain closed okay right so this is the basically the the zero probability events or when the state changes the door was closed i do nothing and the door now becomes open never happens probability is zero right the door was open i do nothing the door now becomes closed never happens the probability is zero so both these events have a probability of zero and so if the door is open it continues to remain open if the door is closed it continues to remain closed if i do nothing okay so all of all is clear now right so i have my motion model right i have my measurement model and i also have my initial beliefs now we are ready to start computing our updated beliefs okay so let us assume that i starting off with a the initial belief that we had half and half and then an action that i do is do nothing 
i'm not doing anything to change the state of the world and then the sensory information i am getting is open okay so basically i did nothing i get a sensory information that is open now the way the uh the base filter algorithm works is i start off by first making the prediction step right so i first compute bell bar and i am going to compute bell bar of x 1 right the bell bar of x 1 is essentially looking at all possible values that x naught can take right and looking at the probability that x naught takes that value then i know what u 1 is u 1 has been given to me which is do nothing so what is the probability of x 1 given do nothing and that particular x naught so i have to do this over all x naught right so this basically gives me the bell bar x 1. remember in the algorithm that we wrote this x naught was actually the summation was actually written as an integral and since we are looking at discrete valued events right everything has only two possible values x has two values z has two values u has two values so everything becomes summation instead of integral here and therefore we are now summing over all possible values of x naught instead of integrating over x naught right so let us look at this a bit by bit so i am going to look at what is bell bar of x 1 equal to open right so i have to look at all values that x squared intake right so this was some particular value of x one here i am actually putting filling in a value here right the bell bar of capital x one equal to open is given by so the first value that i could should consider for x naught which is x naught equal to open is what we are considering here so belief of x naught equal to open since what did i believe was the probability that x naught will be open times the probability that x one is open given x naught was open and i did nothing right the action was do nothing ok is it clear so i am looking at bell bar of x 1 equal to open so the first value i will consider is i started off with x naught equal to open right that's x not equal to open action was do nothing and then i ended up with x one equal to open so what is the chance of that happening okay and then the second thing i have to look at is x naught equal to closed right i have to sum over all possible values of x naught right so i looked at x naught equal to open now i have to look at x naught equal to close and then again i have to look at the probability of x 1 equal to open given that u was do nothing and x was x naught was closed okay so i have to add up these two probabilities so if you remember our values that we had earlier so the belief of x naught equal to open and x naught equal to closed were both half so that is a 0.5 here and the probability that x 1 equal to open given that i started off with open and i did nothing right is 1 we saw that right if you do nothing the state doesn't change so the probability is 1 and given that i started with closed and i did nothing what is the probability that x 1 will be open 0 we said the state doesn't change right so basically the bell bar x 1 equal to open is basically 0.5 as it changed right and so likewise we do the same for bell bar of x 1 equal to closed okay of course you could always take it as 1 minus bell bar of x 1 equal to open but we will just walk you through the computation again so likewise you look at both values of x naught so you consider x naught equal to open first okay so belief of x naught equal to open right and so you start off with x naught equal to open you do nothing what is the probability 
that you'll end up with closed so that is 0 right and belief of x not equal to open was 0.5 and then x naught equal to close you start off with x naught is closed then you do nothing and then you end up with x 1 equal to closed and what is the probability of that happening we know it is 1 so that is 1 and belief of x not equal to close is 0.5 so the whole expression evaluates to 0.5 right so so you have bell bar of x one equal to open is point five bell bar of x one equal to close is point five and so it really does not change from whatever bell of x naught was ok so bell bar x one hasn't changed its not its not surprising because it is essentially this is our prediction update and our ut was basically do nothing right do nothing is supposed to not change anything in the world and therefore whatever belief that we had on x naught gets transferred to bell bar x1 so the interesting update now is going to be what happens when we do bell of x1 after we incorporate this sensing right so let us go on to the next slide all right so now i had to look at the measurement update so how does my measurement update work if you remember ah so i had this normalization factor right and then there is my bell bar of x 1 right the bell of x 1 is equal to bell bar x 1 times the probability that i will get my z 1 whatever is the sensory action i had given x 1 ok so that is essentially my measurement update right so let's start off with looking at uh let's start off with looking at bell x 1 equal to open right so remember x 1 can take 2 values i have to look at x 1 equal to open so that is equal to the probability of me sensing that it is open given that x1 is was open times my bell bar that x1 was open so what does it really mean right just let's just wait for a second and think about this so i want to know what my new belief is in the fact that my door is open okay i want to know what do i believe now about the door okay so i take whatever was the bell bar right after whatever action it took yeah so after whatever action i took the last time right so what is my what what is the probability that x1 equal to open what will do i believe is the probability that x1 equal to open right times what is the probability that i will actually get this sensory information or or i will actually make this measurement given x 1 equal to open okay so notice that bell bar is still my estimate of what is the probability of x 1 equal to open right it is not really the probability that x 1 is open it is only my html remember that always right its not its not a its not a system parameter its your estimate and and since we are updating in a very consistent fashion very soon you will know what is the true state of the world assuming that you have enough interactions with the in with the world okay right so now uh uh if you remember our probabilities right bell bar x one equal to open was point five ok and now if you remember i said when x one is open right we will sense it to be open with the probability of point six right so if you remember the previous ah the measurement update so this is the measurement update that we had right so when the door is open we will sense it when the door is open right we will sense it to be open with probability 0.6 when the door is open we will sense it to be closed with probability 0.4 you should remember these numbers right and when the door is actually closed we will sense it to be open with probability 0.2 right the door is closed we will sense it to be closed with probability 0.8 now notice that 
we have given that the robot has sensed the door to be open right so we know that the robot has sensed the door to be open right in the first first instance we have sensed the door to be open therefore the two relevant probabilities for us are these two okay so door is open what is the probability i will sense it to be open and if the door is closed what is the probability i will sense it to be open because that is basically what we have sensed now right right so remember since sensor to be open so we already looked at the prediction step now we are going to the measurement step right so the probability that i am going to sense it to be open given that it is actually open is 0.6 times 0.5 so this this quantity evaluates to 0.3 right i still have my normalization factor and will come to that in a bit right so this is the first update now what is the next thing i have to look at yes i have to look at what is the belief that x 1 is closed right i have to look at the belief that x 1 is closed and again i look at the belief bell bar that x 1 is closed and the probability that i am going to sense something as open or something since the door has open given that x1 is closed right so what is the probability that x1 is close for me it is 0.5 right so that is we already know that right this is our bell bar bell bar is 0.5 the probability i will send something open given that x1 is close right given that x is closed i will sense it to be open is 0.2 remember that the two probabilities i pointed out to you are 0.6 and 0.2 so that is the 0.2 probability so if i simplify this i get ah 0.5 transformed to is 0.1 times eta so the normalization constant if you remember i said we'll add up all the numerators and divide by the sum of the numerators so my eta is basically 0.3 plus 0.1 raised to the power of minus 1 that is basically 1 by 0.3 plus 0.1 so that's 2.5 so my final bell of x 1 equal to open is 2.5 times 0.3 and the belief that x 1 equal to closed is 2.5 times 0.1 okay so that's basically what we have here right so finally we have these two as our beliefs right right so belief that x 1 is open is 0.75 believe that x 1 is closed is 0.25 so remember what did we start off with we started off with a belief state that said x naught is open with 0.5 or x naught is closed with probability 0.5 then what we did we said do nothing now make a sensory action my u1 was do nothing and my z1 was sends the door to be open now given that my sensor is reasonably reliable so i update my beliefs now so my new belief is belief x 1 equal to open is 0.75 belief x 1 equal to closed is 0.25 mainly because i have sensed z 1 as door open ok now i can keep doing this now this will be my new belief that i will start off with i can iterate for the next time step so let us go over one more ah time step of this update now the new sensory information i am getting so i have starting off with my belief x1 and my next u i'm getting is push right so i'm going to push okay just to make sure that everything is open right so i think the door is open with probability 0.75 but i still choose to push the door and after i push i sense the door to be open okay so my u2 is push and my z2 is sends the door to be open okay u2 is pushed and z2 is sends the door to be open and so let us look at what our prediction step should be i have to first look at bell bar x2 is open right so that will look at bell bar x2 is open ah that's given by uh first start off with bell x1 so i have to look at both x1 open and x1 closed as my starting points right so i 
could x2 could end up as open because i started off with x 1 equal to open and did something that made it stay open or i started off with x 1 equal to closed and did something that made it open so i have to consider both outcomes right that i started off with open and then i made it open i started off with closed and then i made it open i had to look at both outcomes so what do i do now i start off with x1 equal to open right and then my action was push so i look what is the probability that x2 would be open given that i started off with open and i pushed right so we know that if you push when you start off with open at the probability one you will stay as open so that is what our model was earlier right and next component here is i look at i start off with close now i push what is the probability that it will be open right so we know that the push action is fairly reliable so if i push even when the door is closed there will be a pointed probability that it will end up being open right so i take the first one which is probability 1 times belief my current belief that x 1 is open which is 0.75 and this is 0.8 right the second term is 0.8 times 0.25 so i evaluate this i get a value of 0.95 don't don't rush this is still my bell bar right it's not my belief so my belief has not become 0.95 my intermediate my prediction update tells me that bell bar is 0.95 for x2 equal to open similarly i have to do this for x two equal to closed right and like i said because there are only two values you could have done one minus but we will not do that we'll just go through the full computation right uh so i start off with again x one equal to open right and i pushed so i am looking at whether x2 can be closed i know that that never happens right so that probability is 0 even though my belief that x1 is open is very high so that component contributes 0 right and likewise now the door is closed i push the door stays closed right that can happen right so we know that if the door is stuck or we are not putting enough pressure or something like that there are many reasons it could not work and that probability is point two and what is the probability that i actually start off with the door being closed that's point two five right that's what my belief was right so basically have point two into point two five which is point zero five this is my bell bar right so my bell bar that x 2 is open is 0.95 and my bell bar that x 2 is closed is 0.05 okay likewise let us do the measurement update right so i want to look at belief that x 2 is open so that basically starts off with the bell bar that x 2 is open times the probability that i will get sense open given that x 2 is open ok so if x 2 is open i will sense that it is open with probability 0.6 we saw that already right the x2 is open and since it's open with probability 0.6 notice that both in my first step and my second step i have sensed the door to be open right it could very well be that my sense my i could have started off with sensing it the door to be closed and now i could sense it to be open but in this particular example we have chosen the sensory input doesn't change so it sends open so these the values that we are using are also the same values we used in the first step right so i sense it's open given that it is actually open probability is 0.6 right and the belief bell bar that it is open right is 0.95 right and then i take the product in fact here i have not done the normalization computation separately so this whole thing evaluates to something like 
0.98 okay and similarly i look at the bell of x 2 equal to closed okay so now i take my prediction update a prediction value the bell bar x 2 is closed and then i look at what is the probability that i sense something as open if it is closed right and i know that probability is 0.2 and i know my bell bar value is 0.05 and i take this product right and then add just by the normalization constant i basically get 0.017 as my ah so you can see that i mean these are all approximations right so i get 0.98 as my belief that x 2 is open and it should be 0.02 if you round it up right the belief that x 2 is closed is that clear so we will give you some practice exercise where you can try out a different kind of update sequence for a similar kind of problem right so the thing to remember is in fact the door even now i don't know if the door is open or closed right so the robot doesn't know whether the door is open or closed in fact all of these computations there is still a 0.02 chance according to the robot's belief that the door is closed captioning not i could think that available the actual outcome has a very little likelihood of happening right i could start off by believing that the door is closed right with probability 0.99 and the door is open with probability 0.01 we started off with 0.5.5 right then it might take uh one more step of iterating through this before i i'm sure of what the new probability that that time the door is open right so right now i think the door is open but that might not be the uh case uh in in the real world so this is something to keep in mind and this is just an estimate of where i am okay so ah from the next set of lectures we will start looking at making specific assumptions about the form of the transition function right form of the measurement model and also the form of the belief distribution itself here i just looked at these as two numbers right just like a set of numbers the belief distribution was just two numbers what's the probability it's closed what is the probabilities open right the sensory distribution again was four numbers the transition was like again eight numbers right but all of these were just numbers right we didn't think of any functional forms right so next we will look at a function that can describe what the motion model is what the measurement model is as well as what the belief function is belief distribution is and then we look at how to operate with those and that allows us to do more more non-trivial inferencing okay |
Introduction_to_Robotics | Lecture_81_Kalman_Filter.txt | hello everyone welcome to the first module in week 10 so this week we are going to continue looking at recursive state estimation question right so in the last week we looked at the base filter algorithm right and so essentially uh so this is how it operated uh so we were given uh the belief in the previous state right and then we had the the control action ut and also the measurement or observation zt that we made and based on this uh we ended up computing uh the the revised belief for the state xt right for the time step t right and the way we did this is we first computed l bar x t which is essentially applying the motion model which is given by p of x t given u t x t minus one right applying the motion model on the belief state right on the prior belief state and then making a correction update or or a measurement update based on the actual sensory inputs that we get or the measurements that we need and there we use the measurement model which is p of z t given x t and with the bell bar that we computed in between right so this is essentially the structure of the base filter algorithm this is the same structure we will be following for all the algorithms that we will be looking at this week as well okay so one of the important things i want to remind you is that when we looked at the base filter algorithm it is more of a conceptual algorithm right so i did say that for practical implementation we will run into problems computing this eta so we have to look at simpler versions where we make assumptions about the model right so the base filter algorithm that we looked at we made no assumptions about the transition model right so it could be of any form right and similarly we made no assumptions about the belief distribution nor about the measurement model itself right so all of these were assumed to be arbitrary probability distributions or mass functions uh which we could then manipulate right and and the very simple example that we looked at they were all multinomial distributions in fact they were all binomial because they were just only two outcomes that we were looking at right and that's the setup that we had last week right so this week we will first start looking at a family of algorithms called the gaussian filter algorithms right so here the basic assumption with gaussian filter algorithms is that the the family of belief distributions that we are going to use right the belief distributions are going to come from a multivariate normal distribution right so the multivariate normal distribution here which will denote by the symbol n of mu comma sigma where mu is the mean of this distribution right and sigma is the covariance matrix right so you should be familiar with this ah if not earlier at least after looking at the revision videos from last week right so the density itself is given the probability of x for under the normal distribution n mu sigma is actually given by ah this expression as you know right so it's 1 by 2 pi root 2 pi really root 2 pi sigma and e power x minus mu squared by sigma right so this is what we know from the univariate and the the multivariate version is this and the idea behind uh the using this is that my belief state is going to be such that there is one true state right and i have some margin of uncertainty around the true state so if you look at the figure here so the true state will be somewhere there right right the true state will be somewhere there and i have some kind of uncertainty 
around what the true state is and that is represented by the gaussian distribution so that would mean that at every point of time my belief is such that there is some known true state and uncertainty around that right so this is fine so if you think about how we were doing it earlier right so earlier let us say that there were like five different states that the robot could be in i basically had five numbers right so which which looked at the probability that the robot is in each one of those states if you remember the door open door closed so the two states where the door is open or the door is closed and basically i had two numbers that represented the probability that i was in a state with the door open and the probability that i was in a state with the door closed right there is really there is no need to have any notion of similarity between these states because they were just numbers right there's just two two independent numbers uh actually i shouldn't say independent numbers which is two numbers that i were using uh that i was using for representing this right i could have many more such state not just two i could even have like a hundred different states which the robot could be in and i would end up using 100 different numbers for representing the probability but once i move to this kind of a gaussian setting so what is the most important thing apart from the fact that i am looking at a single true state and a margin of uncertainty around that and the other important thing is that i am also looking at a notion of distance in this state space right so the earlier when i was doing the other belief updates right i really did not have a notion of a distance the none of the probability distributions that i was dealing with had a notion of distance as an integral part of it but now when i get get into a gaussian family of distributions so the the notion of distance becomes very important because i have to compute what is the distance that x is from the mean right so very simply there itself so when i say there is a region of uncertainty around a true state that means there is a notion of distance that allows me to model this region of uncertainty right so far the base filter algorithm i didn't quite need to have a notion of distance right it's kind of moot right if you think about it uh in in almost any robotic problem that we are looking at we would have then we would have a notion of distance very natural notion of distance so it's not a very very you know critical thing to worry about but i just wanted to point out that from now onwards we will have to worry about this notion of distance between the uh in the state space right so we actually have to worry about a space that has a distance defined on it okay that would limit the kind of representations that we can use for the uh uh state represent the state space okay and another thing to note here is that in the earlier case right so when we had like five or six uh let's say we have five or six states in which the robot could be in and i was using five or six numbers uh to represent the probability distribution that could have been multiple states uh which the robot considered as equally good candidates right so my my belief distribution could have been multimodal right but when i actually look at simple gaussian distributions uh when is when you start looking at a single gaussian distribution right uh it actually turns out to be a poor choice uh when these kinds of multiple uh hypothesis could exist right if this the belief 
distribution could be multimodal then the gaussian cannot capture it right so all of this is great but the gaussian assumption is a very powerful assumption because it makes computation rather simple and it is not often the case that we end up in situations where there are multiple distinct hypotheses and therefore gaussian filters are a very popular family of filters that are used very widely in practice okay so the first gaussian filter that we are going to look at is called the kalman filter right so apart from assuming that the belief is ah belief is a gaussian right ah the kalman filter makes the following assumptions about the uh gaussian distribution right so so we already know that the dynamics both the the movement model the motion model as well as the measurement models right both of these are uh satisfying the markov assumption right ah but now we in addition we are going to assume the following for the motion model right the first thing we will assume is uh the system follows a linear motion model so what do i mean by that so the the the state transition expression is given by that right so if i am given x t minus 1 and i given u t so the new x is determined by a linear combination of ah a t x t minus one a but a t is just a set of coefficients for x t minus one and b t times u t right again a set of coefficients for u t right plus some kind of a noise to account for the stochasticity remember we had earlier we had the probability of right the state transition problem we had this probability expression right what's what's the probability of x t given u t and x t minus 1 right to account for this noise right account for the probability so we are going to assume that that is a noise term which is given by epsilon t right let me repeat it again so the dynamics is linear in the sense that x t is given by x t minus 1 times some coefficients 80 without basically constants plus u t which is the action that you take at time t times some other set of coefficients right so so x t this whole expression is linear in both x d minus 1 and in ut and to take care of the stochasticity we actually assume that there is an additive noise denoted by epsilon t and this additive noise is assumed to be zero mean and with the variance rt right it's assumed to be a gaussian with zero mean and a variance of rt right so this is what this notation is so epsilon t is assumed to be a gaussian random variable a gaussian noise random variable right with zero mean and a covariance of r t right so this essentially means that if you keep on repeating this transition many many times i mean i go to x from x t minus one i apply u t and i end up at x t right so the noise would average out right so so i'll basically have zero noise and then the linear model will be the correct prediction just just the linear part alone right so without the just this part alone would make the correct prediction in expectation so that's essentially what we are assuming right now the total state transition probability remember that is what we are interested in right the total state transition probability p of x t given u t and x t minus 1 can now be written as a gaussian variable whose mean is given by the linear part of the dynamics right a t x t minus 1 plus b t u t and the covariance is given by r t which is the covariance of the noise right so this is fairly straightforward right so i take a gaussian random variable and i add a deterministic fixed quantity to it right so therefore the overall distribution of the entire expression is 
now ah has a mean of that fixed quantity that i added because the original negative 0 mean and now and the covariance stays the same because there is no noise in the term here okay make sense and of course i can i can write it out in in a little bit more scary form but this is the actually the the the multivariate gaussian distribution written out with the mean a t x t minus 1 and and plus b t u t and the covariance of r t right it's it's rather straightforward right so don't get scared with this expression okay so just to recap we are assuming that the motion model follows a linear dynamics right so that is x t minus 1 and that is u t and the expression for x t is basically linear in both x t minus 1 and in ut plus an additive gaussian noise which is zero mean and with the covariance of rt okay so i hope i hope that is clear to people and so what are the other assumptions we make the second assumption we make is that the measurement probability right the measurement model should also be linear in its arguments right so what is the argument of the measurement model remember the measurement model we are looking at the probability of z t which is the measurement at time t given the state is x t okay so that translates to in a linear model that translates to this expression so z t equal some set of coefficients c t times x t plus a noise model delta t right now we had like epsilon t earlier now we will use a noise model that's given by delta t right and so delta t is again a zero mean gaussian with the covariance denoted by q t right so earlier when i was specifying the ah motion model right earlier let us go back when i would have specified the motion model i i would have needed to give you this distribution right so how did we do it when we did the example so for every value that u t and x t minus minus one could take we specified the probability for every value that x t could take right if you remember the sample example that we looked at right for every value that xt minus 1 and ut could take we and for every value that x t could take we specified a probability right but now our job is slightly simpler so we do not have to run over all possible values of this if i want to specify the state transition probability all i need to specify are rt a t and bt right so i have to specify a t i have to specify bt and i have to specify rt to specify the full transition model okay likewise uh for uh the measurement model i have to specify ct which is the coefficient linear coefficients for the measurement model as well as qt which is the covariance matrix of the noise that i add in the measurement model right so the overall measurement probability again is given by a gaussian with the mean is given by c t x t and the covariance which is given by q t just similar arguments as we had in the previous line okay so make sense and because i am adding a zero mean gaussian variable with q t as the standard deviation to fixed quantity so the overall probability distribution has a mean of c t and x t c t x t and a covariance of q t right now again i can write it out in the uh the full multivariate gaussian expression which is basically this right then what's the third assumption that i'm making i said i'm making three assumptions so the third assumption i'm making is that the initial belief distribution i start with right which is bell x naught if you remember right so that is something that i start with the initial distribution and earlier so we had actually thought of it as um something uniform right in the 
in the in the numeric example we looked at we had bell x naught equal to open as 0.5 del x naught equal to closest 0.5 again the belief was just a set of numbers right but here to make sure that our entire computation is tractable we assume that the belief distribution is also given by a normal distribution just like the noise that we had in the two models right and we assume that the mean is mu naught the covariance matrix is sigma naught right and so basically that's the expression so we have n of mu naught sigma naught and therefore bill x naught which was the is the distribution uh belief distribution at time 0 is essentially given by a covariance matrix with mean mu 0 and and variance covariance sigma 0 right and therefore you can you can see what's going to happen so bell x naught has mean mu 0 and sigma 0 so bell x 1 would have a mean of mu one and sigma one right remember that the measurement model and the uh the motion model right ah the transition models don't change they are given to you a priori right so you are using that and then you are only refining your belief over time right what will change are the actions and the measurements you take but the model themselves would stay fixed right or the distributions themselves would stay fixed right so your a a t b t and c t and your r t and q t don't change right but on the other hand the belief keeps changing at every time step because that is what you are updating in order to uh estimate the state right for in order to localize yourself as we will see later right so ah so your your belief distribution will keep getting updated so bell x naught is given by mu naught and sigma naught and well x one will be given by mu one and sigma one del x two would be given by mu two and sigma two and so on so forth and your entire base filter algorithm will now reduce to an algorithm where you update mu t and sigma t given mu t minus 1 and sigma t minus 1 and u t and z great is it clear so your belief distribution now is the one that gets updated therefore your mu naught and sigma naught will keep getting updated and so you will maintain your belief just by updating these two values okay so just to recap ah your system dynamics is defined by multiple quantities you have a t you have bt you have ct then you have rt and you have qt right so these are the quantities that need to be defined for specifying your system model that's both the transition probability and the measurement probability okay and for representing your belief you need two quantities which is mu and sigma so you start off with some basic assumption mu naught and you start and sigma naught right it could be a very very flat gaussian distribution if you will right without a very significant concentration at the mean right the sigma sigma naught can actually spread out the probability distribution as much as you want but the more informed your prior distribution is the more quickly you are going to converge to something meaningful and at every time step you use your system parameters and your current action and measurements in order to update your belief distribution as to where you are in the state and hopefully as you keep moving around right your belief becomes more and more concentrated around the actual state okay so and we'll see how this operates in a bit okay so ah like we said so we are going to estimate belief x x t based on belief x t minus 1 u t and set t right so very similar to the base filter algorithm so these are the steps of the kalman filter algorithm right and 
steps 2 and 3 right are actually updating bell bar right sips 2 and 3 right or updating bell bar right steps 4 5 and 6 are updating bell x t right so so 2 and 3 update bell bar x t 4 5 and 6 are for computing bell x t so i'm not going to get into the complete derivation of the kalman filter here right it's it's it's a little cumbersome and so what we would do is give you some material to read up if you're interested in looking at the derivation right alternatively i mean so you this is something that you could just you know take on rates right and then say oh yeah this looks like the right expression but i am not going to let you do that right so i'll give you some material so you can look it up yourself but so so essentially the point here is that this is more additional reading for you to look at how the derivation comes out right so the idea here is that uh so mu bar and sigma bar are going to represent my bell bar right and if you look at it it kind of makes sense right intuitively it makes sense so what is my mu mu mu t hat is right so i'm going to take my old mu t minus 1 which is i have that right because i know bell x t minus 1 right so bell x t minus 1 is represented by these two entities right mu t minus 1 and sigma t minus 1. so i know my mu t minus 1 which is essentially my my previous position right my expected position in the previous time step right so i am going to assume that hey look i am going to take mu t minus 1 right and i am going to apply u t to that right i am taking mu t minus 1 which is my previous position i am applying u t to that right and this is the linear part of my model right this is the linear part of the model if you remember that right so you had a times x t minus 1 plus b times u t right gives you x t plus 1 right plus there was this noise term epsilon t right that was giving you u ut plus so now what we are going to do is that epsilon t right is essentially given you given to you by rt right so what is what is that epsilon t going to do is going to basically make you more confused about where you are so your initial confusion was sigma t minus 1 right your initial confusion about value of r was sigma t minus 1 and rt is the additional noise right that you are adding right rt is additional noise you're adding that's a covariance matrix for the the transition right so that's additional noise in the transition so your sigma t bar basically is a function of your original noise where you are your original uncertainty about your belief that you are at mu t minus 1 plus the new noise that is introduced by the motion right so so mu t bar and sigma t bar together gives you bell bar of x t okay now that is basically your motion uh model or the prediction right the the belief leave after you do the prediction right and then that is your prediction update and now this is your correction update or your measurement update and essentially so this quantity here is called the kalman gain as we will note in the next slide as well this quantity is called the kalman gain and so your mu t bar which is where you assume you are is corrected right by the measurement that you make right so the z t is the actual measurement you make and c t mu t bar is the prediction the the measurement the linear part of the measurement model right that you have right remember so c t mu t bar is ok if i was at really at mu t bar okay and so this is the measurement i should have made if there was no noise right but zt is the actual measurement i make so that difference is basically uh going 
to change what my mean is right and likewise i have a complex function here that also tells me how much noise i have to add based on the actual noise that is expected in the measurements okay so that gives me my new sigma t so this part of the derivation is a little not too hard right it's just a little involved and i encourage you to just follow it offline okay so at the end of both these updates so this is the prediction update and that is the uh the measurement update so there's a prediction update and then you have the measurement update and after that you get the new belief state which is given to you by mu t sigma t right so this is exactly what we were saying earlier right so lines two to three are the prediction step right right and lines four five and six of the measurement update and the kt that we compute in line four as you saw earlier right is called the kalman gain so the kt is called the kalman gain right two to three are the measurement updates i'm sorry two to three are the prediction updates and four five and six are the measurement updates sorry right so i hope that's that's clear so the kalman filter makes very strong assumptions right it assumes a that the dynamics are all linear right the main component of the dynamics is linear right the motion model is linear the measurement model is linear so this is a very strong assumption that it makes and the second assumption it makes is that the noise or the probability distribution that you are seeing for both the motion model and the measurement model the the noise more is is gaussian and it also assumes that the belief is a gaussian distribution so basically there are three things so the linearity assumption right and the gaussian assumption on all the three models that you use okay so this makes the life little easier right so you could basically i mean if you think about it these updates are actually just simple matrix multiplications right you can very easily implement it on your favorite tool right matlab or whatever is it that you have this is a simple matrix multiplication and inverse operation so you can implement it very efficiently and you do not have to worry about the computational difficulty that i was pointing out in looking at the normalizing factor and it is also nice in a way because everything is gaussian right you are guaranteed that at every state your bayesian your at every state your belief distribution as well is gaussian right if you start off with the gaussian belief it will stay a gaussian distribution because your dynamics and your measurement model both have gaussian noise so this is the reason we make this strong assumption because it makes computation so simple okay and the reason the kalman filters are very popular kalman filters and their family of filtering algorithms are very popular is because these assumptions do work in practice right and or slightly slightly stronger versions of this ah work in practice right and so let us look at how the whole kalman filter thing will look like so let us take a simple one-dimensional localization scenario right so i have a robot that is moving on the horizontal axis right moving in x x coordinate i am interested only in looking at what is the displacement of the robot along the x coordinate right so i am just looking at this right i am not i am going to assume that robot does not move in any other direction right i am not worried about the effectors of the robot or anything so my state is only the displacement along this line just to make things a 
little easy and easier to visualize and to draw as well right and i also assume that the robot has a gps sensor that can basically query its location and and the sensor has some amount of noise obviously especially especially if the robot is indoor the sensor is going to have a lot of noise but for the time being let's assume that we have a gps sensor that gives you a rough idea of what the location is right and we are also going to assume ah whatever we had looked at earlier that the motion model is linear right so in terms of the velocity with which the robot moves and the original location so we can write a linear model as to what its new location should be and we are also going to assume that the measurement model is linear with gaussian noise right so this is how the kalman filter is going to look like so i am going to start off with some initial belief so let us look at one diagram at a time right so lets lets look at this one right thats the first uh just the initial belief just look at the initial belief alone right so this is the belief that i am starting off with so what does this mean this means that that is my mu t right that is my mu t and i have some kind of a sigma t that represents the current belief current noise in the belief or kind of uncertainty in the belief that i am at this ah x mu t right so that will be my mu t and i will have a sigma t here right now what do i do is i actually make a measurement so this is my first thing to do i make a measurement right so the measurement tells me hey look this is where you are likely to be so the measurements mean is here right the measurements mean is here and the noise in the measurement is this much so this is basically what my measurement model tells me right now once i incorporate this uncertainty into my location right so you can think of it as i don't move at all so my belief doesn't change because i didn't make any movement so no no no no op so my belief my location doesn't change so my bell bar is the same as my bell right bell x t minus 1 is the same as bell bar x t because i did no movement right and then now i do a measurement so given that this is the noise associated with the measurement update that i have made right now i re i basically incorporate my original noise which is this which is my remember now this is now my bell bar of x t plus my measurement right of these two together right so give me that as my belief state right okay so that is my belief right after this now what i do is i move to the right right so basically i say that i am moving to the right here okay now after i move to the right i have to now apply my movement model right so i know what my action is my action was moved to the right i apply my movement model now my movement model also has some kind of noise in it right so what happens is even though i was so certain about where i started out right this is where i started out this is my bell x t now right this is my bell x t now my bell bar x t plus 1 becomes a lot more noisier but at least you can see that the mean has shifted to where it should be so this is where the original mean was right this is where the original mean was now that it has shifted here and that is the effect of the linear part of the model and the fact that the variance is now increased right the variance is increased is because of the noise in the movement model right now once i have this right this is my bell bar i make a measurement right and the measurement gives me a mean that is somewhere here right that's what the 
measurement model tells me and that's the noise around the measurement model so putting these two together and that is my final belief at x t plus 1. so i started at t minus 1 with the belief of that then that was also because i didn't move that was my belief bell bar of x t x t right and then i make a measurement i integrate that and i get my bell x d then i make a movement here which movement was go to the right that kind of shifts my mu t minus 1 to mu t bar right and and also kind of spreads out the probability mass because of the uncertainty in the movement and then i make a new measurement so that is the new measurement that i make and that tells me this is the more likely location so i combine the bell bar and the measurement and finally get my bell x t plus 1 right so that's how the kalman filter will look will work so every every point of time you can see that all the intermediate computations that we do are all gaussians or normal distributions so that makes it easier for us to do the computation so far we looked at the the simple kanban filter algorithm which assumes everything is gaussian but as we know that you know the fact that the movement model and the measurement model are both linear is a very strong assumption so we look at ways of getting around that one by looking at a slightly different version of the kalman filter and then by looking at other more complex filters as well in the next few lectures |
Introduction_to_Robotics | Lecture_63_Stepper_Motors.txt | In the last class then, we were looking at sensor for speed and position. Mostly encoders are the ones that are nowadays used and these as we have seen, there are two varieties one is incremental encoder, that is what we saw in the last class, how the incremental encoder would work. And then the other variety that we have is called the absolute encoder. The absolute encoder and so if you have a disc and you have, you are looking at a certain angle, then you have slots that are located at certain distances, which then serve as a means of encoding the angle information. So, if you have for example, an 8 bit encoder, 8 bit absolute encoder, then you have 360 divided by 2 power 8 angles which can be there which you can independently locate. And in each of those angular intervals, you then have a specific way of arranging the slots. So, that as soon as you switch it on, depending on which of the slots are open, which positions are not open, you can immediately determine where you are. So, that is the idea behind this. So, if you need absolute angle information without having recourse to locating the index pulse and then adjusting all that, one can use this for an absolute input. So I mean, instead of taking 8 bits, I mean for example, let us say you have a 3 bit absolute encoder then these can be 000, 001, 011 and so on can go all the way to 111. So, it means that if you have a disc and you have certain angle. So, let us say that you are going to put an LED detector in this, that is along the disc you put one detector here, one detector here and one detector here. It means that in the region here you either have a slot or you do not have a slot. Similarly, in the region here you either have a slot or you do not have a slot and similarly here. Therefore, in the sector adjacent one maybe if you have a slot, if this angle is going to is going to represent 000, then it means that you will not have a slot here, there is no slot here and there is no slot here. On the other hand, when you go to the next one, if it should be equal to 001 then there is no slot here, no slot here, but you have a slot here. So, the output from output from the detector there will give 1 in the next angle, it would give no output. Then in the next one you may have 0 for no slot, 11, so you have slot you have slot. So, if the disc is now going to rotate to be there at this angular disc position, then you would have an output as 011. So accordingly, as the disc is going to move, you will get different varieties of outputs. And depending on what you detect, you can immediately know at which angle you are. And therefore, this can be used as a measure of absolute time. But then the difficulty is you cannot really afford to go in this order you cannot go in this arrangement because if you look at this, you have the let us read 000, 001, 011 and then you get to 100. Going from here to here is a difficulty because now you need to have 1 0 and 0. Now since you have to make this disc and you have to have a slot here and these two slots must exactly end at this location hand you must have no slot here. Which means that if there is an error, if you are going to have an error, then you may have a situation, for example, you go from 011. 
And if these two slots do not end exactly at the same location, then you could have a situation where you have this is going to go from 0 to 1, it may not have gone from 0 to 1, there is a small interval where the slot is slightly, this slot may be a little shorter, in which case you still have 0 existing for some time. So, you go from 011 to 001, that is assuming that this slot has not happened exactly at this end, but it is a little further away, that means the 0 here will last for a little longer time. So, you still have the 0, what did you have, 011. So, it starts with let us say 011 and this is going to stay as equal to 0, because for some reason this slot has become a little shorter. Then from 011 it goes to 0 here again and this has now become from 1 to 0. So, it means that the next one goes to 0 and it may go to therefore 001 or if the next slot also has an error it may go to something else. So, because of this you do not want more than one slot to change from one sector to another. So, this occurs because you are intending to change this from 0 to 1 and at the same time you want to change from 1 to 0, which may not be easy to achieve when you actually make the disc. So, instead of having such an arrangement, you allow only one of them to change, which means that you will have to switch over to what is called as gray code, which is an arrangement of this in a way that only a single bit is undergoing a shift from 0 the 1 or from 1 to 0. So, if you arrange it that way, then you will then be able to determine the actual angle as soon as you switch on the system. So, this is then an absolute encoder and one can use this. It should be 010, but even that is a difficulty because 1 is going to go to 0 and 0 is going to go to, you are right, it be 010. So this is therefore, one way of sensing the absolute angle. Another device, which is used for this angle detection is known as the resolver. So, here for example, the difficulty is that if you are going to have an optical way to measure it, then this entire thing must be enclosed in such a way that dust cannot enter because if you are going to have small slots, and then LED which is going to emit and you have another electronic which is going to, going to detect and then if this is not completely well enclosed, if there is some amount of ambient disturbance, some sort of dust and if it is enters into this then your encoder will stop functioning. So, optical encoders have this difficulty that the environment has to be very good, or the encoder has to be very good, impervious to even small amounts of data. But on the other hand, if you are going to use electromagnetic equipment, which is like what we saw here, especially if it is in AC Taco. Here, you do not have a difficulty because you are only looking at induced EMF and dust is not an issue at all. So, one can go a little bit more into this and have another entity known as resolver. In this case, this is again an electromagnetic equipment you have a stator and you have a rotor, coils are there on the rotor, I will just show it as two leads and you give a high frequency excitation to this. Stator has two windings displaced by 120 degrees, displaced by 90 degrees and the rotor is then linked to your system, electric motors for which you want to determine the rotor angle. Then, because you are giving high frequency excitation, both the two entities here, they are going to have induced EMF and so they will also have high frequency induced EMF because this is a high frequency excitation. 
But on top of that, since this is also going to rotate, the induced EMF that is going to be there will also depend on how the rotor is oriented with respect to those two. And because of that, if you look at the high frequency induced EMF in the first winding let us say 1 and 2 here. If you look at the induced EMF at 1 with respect to the rotor angle, you will have an induced EMF that has a sinusoidal envelope like this. This envelope occurs because of the variation of the rotor angle. So, if this high frequency excitation is going to lie along the same axis of 1 then the induced EMF will be highest, as it is going to rotate further then the linkage due to this high frequency excitation at this that 1 will decrease and because of that there will be an envelope for the high frequency excited, high frequency induced EMF. And similarly, for the next one, since it has been displaced by an angle equal to 90 degrees. So, if you are having low value of induced EMF here, the next one would have a high value of induced EMF and therefore, that would go like. So, this has a sinusoidal envelope that also has a sinusoidal envelope and you can then have these two signals that are going to come from 1 and 2 and you send it through an envelope envelope detector, this also through an interval of detector, that is an electronic circuit. This will then give as an output a sinusoidal waveform, this will give a core sinusoidal waveform. Student: Hows the stator winding arranged. Professor: Stator winding is like a normal AC machine, it will be a distributed arrangement. So, here therefore, if you know the amplitudes of the envelopes then at any given angle of of the of the rotor this would have an amplitude equal to A sine theta, this will have an amplitude A cos theta and therefore, you can see. So, this is let us say y1 and y2 and then the angle can be determined as inverse tan of y 1 by y2 at any given instant. You have to contend with the situation that the denominator may go very close to 0 that you can address it in some manner. So, in this way one can determine the absolute angles of the rotor using this particular measuring occurs. So, this equipment incidentally is fully electromagnetic and therefore, it does not have the difficulties of optical mechanisms as we saw earlier and therefore where you have an environment which a lot of dust this may perhaps be a very good approach. Professor: See, you need to have an oscillator outside, it will involve bulbs, that you cannot avoid. So, this is then an approach to detect the angle as well. So in all these, so the angle all the instruments or the devices, which we said can be used for angle measurement are to be used here. In order to sense the angle of the rotor, which is required in order to make this AC motor look like a DC motor for the purposes of control. We have drawn a block here, which I said you do something inside the block, which ensures that whatever happens this looks like a DC machine on this side. And for that purpose you need the rotor angle, that rotor angle can be obtained in this manner. Note that in the case of a sinusoidal EMF AC machine, you do not need all switches like the case of the other one, you do not need hall switches here, you need the angle instant to instant because you need to define the sinusoid by which you have to excite and for that therefore you need instantaneous angle. And that is being implemented by having these kind of mechanisms, encoders to sense they are. 
Professor: See the Hall Effect, this is you see that the resolver is able to give you an output like this which is a sinusoidal output after you do the envelope detection. But a Hall Effect sensor like the one that we had used for the BLDC machine. So, this Hall Effect, Hall switch it only says whether the field is high or low that is all. You cannot infer what the angle is based on the outputs, it is simply high. So, this is not a mechanism that you can use to detect the angle of the rotor, the only information that we are getting from this is that when it goes from low to high you know something about the rotor position. So, it is only at that instant you know where the rotor is, after that you have no idea. Whereas, if you take the other one you have a amplitude output that is going to vary and therefore, you know where the rotor is. So, that is what happens here. But there are some applications which are low performance still. And you may not want to use encoders. Encoders are also sometimes depending on how robust you want the encoder to be. If you really go for high quality industrial grade encoders they may be as expensive as the motor itself. So, it is an expensive expensive thing to have. And encoder also means that you need additional space on the motors because you have to have a motor shaft. So, if you need to use an encoder this is important. You need to have you need to have the electric motor and the motor has shaft, this shaft is going to connect to the load. So, this is known as the drive end, of the motor because that is the end of the shaft that is going to be driving the load. You cannot put the encoder here, I mean obviously, we said that the encoder must be linked mechanically to the shaft, so that the encoder rotates at the same speed encoder or this one. So, this must rotate at the same speed as that on the shaft, which means that they will be mechanically linked together. And since you are going to connect this side to the load, you cannot afford to put the encoder here there is no space. So, if you want to have an intruder on the machine, then it is necessary that you need to have an extension of the shaft on the other side. So, this side is called as the non drive end. So, this is usually called as NDE and this is DE. So, at the non-drive end you need to have a shaft extension or a way by which the encoder can make a physical mechanical connection of the shaft. And then you need to have an arrangement, the encoder obviously has to sit here that will have its own shaft and you make a mechanical connection. But you cannot leave the encoder hanging like that at the end, there must be an arrangement to fix it appropriately. And therefore, you need to have some arrangement that fixes the encoder on to the motor itself. And then there is another difficulty that if you are going to have an encoder which is made by somebody as you are going to get an encoder as another small piece that you need to fix to the motor, this encoder must be aligned such that it is shaft is exactly in line with the shaft of the electric motor and not shifted this way or that way. So, fixing the encoder is not at all an easy job. So, the first thing is the encoder itself may be expensive depending on how you want the encoder to be. If you want a low cost encoder which is not so immune to all the disturbances, you can get very low cost encoders and you can put them onto the electric motor it will work. The only thing is ambient conditions etc you need to take care it may not be very accurate. 
If those inaccuracies and difficulties are acceptable for you, there are low cost encoders that are available, you can get something for very low cost. But still the aspect of fixing is important, however low cost it is, you have to fix it in such a way that the shafts are aligned and locating that in such a manner is not at all an easy job and therefore, the demand is difficult. So, in applications where you do not want the sophistication of an encoder and indeed where you do not even want to, if you can avoid the encoder altogether that is then good. But you cannot use any of these kind of machines because they need the rotor end. So, one other electric motor that is used in such cases is is known as the stepper motor. The main advantage of this is that it moves in steps and that is an advantage as well as it is a disadvantage. It moves in steps, but it moves in well defined steps. So, that is the major advantage of this machine. So, essentially if you are going to have a machine that has a stator and the rotor, so the stator then has slots like that and so on and the rotor has extensions like this. So, this means that if you are going to have some coil placed around here and you energize this, then the rotor will move in such a way that some tooth which is most adjacent to it will snap on and align itself with that. So, it works based on alignment, it works based on the idea that magnetic magnetic circuit, that means, the flux path will realign in such a way that inductance becomes maximum which then means that the air gap is minimum. So, one can represent this in a manner which is spread out so if you have the stator like this and then you have the rotor which is let us say like this and if I am going to have a coil there and energize this, this means that the rotor will immediately snap into this angle. So, this will move by a small amount and snap into that angle and you must have designed it in such a way that when this moves and snaps into this angle, the next one obviously, the whole thing is going to move and there is an asymmetry there now. So, after you excite this, then when you excite the next one, this should cause a similar amount of displacement here and therefore, if you go on exciting it in steps, first this coil, then this coil then this coil it will move in steps as you excite it. So, that is why this is then called as a motor that moves in steps. Professor: Uneven yes, that means the gap here and here cannot be the same. If this width this width is the same this gap, this gap is the same it will just lock and after that you cannot do anything. So, you must have different number of slots here as compared to the rotor, so that there is an asymmetry that is there and because of that asymmetry when one of them is aligned, the next one is not aligned. And therefore, when you energize this this draws into alignment and the next one now becomes unaligned. Then you energize this that draws into alignment, so it moves in steps and steps have well defined angle because the spacing that you provide here and the spacing that you provide here are fixed by design and therefore, the rotor will move in predefined steps. Student: does the rotor have any coils. Professor: The rotor does not have any coils, nothing is there. Professor: It depends on the speed with which you want to rotate, it depends on the speed with which you want to rotate and the number of such faces that are disposed around the machine. So, it varies definitely yes. Student: what is the rotor material. 
Professor: Rotor is usually made of iron. So, there are different varieties here. So, you may have the rotor made of just iron, it may be iron plus magnet, this will obviously increase the amount of force with which it is going to align whereas, in the first case it is not so high a force. Now an advantage of this kind of machine is also that once it aligns it is rather difficult to make it unaligned because you are providing an excitation here and it has aligned, so it will tend to hold. Therefore, even if you have a load that is going to make the rotor attempt to make the rotor to rotate, this fellow will hold it and you have a holding effect, which if you want to reproduce in the other kind of machine is not so easy, you have to have a control system which attempts to do that. Whereas, in this case, there is no big closed loop system that is necessary because the movement is well defined. Professor: Rotor does not have anything, it is just iron. On the stator, you have an electronics that will drive this coil, you have to give a signal saying energize this coil and then you de energize this coil and energize this coil and de energize, energize, so you must have a circuitry outside and some arrangement to switch to all that has to be done. Student: So the backlash is minimum. Professor: Backlash backlash is minimum, yes, one can say that. See the difficulty here is so I will just come to the difficulties before that, the advantage. So, one important thing therefore is the movement is in well defined steps. This is one major advantage. And therefore, where this motor can be used, then you do not need position sensing. Why you you do not need to sense the position. Because if you somehow have an independent way of determining where you are starting, then depending on number of sub signals that you have given to energize, first energization given, now you know that the rotor is moved by delta theta next energization you give a further amount of that angle, third energization another angle. So, number of such steps that you have given to indicate what is the angle by which the rotor has moved. So, you do not need to have any special mechanism to detect the rotor angle, so you do not need this. Then holding at a given angle is feasible. So, these are all some of the good aspects about these machines. So, where you want to have a simple arrangement, you do not want to have a sophisticated control system for the motor, this is probably what you want to do. So, let us say that you have a system like this, you have a motor you have a shaft which is then connected to your drum and you have a rope that goes around this and you have a mass. So, if you now want to stop this mass from moving up or down how will you do it? The rotor has to be held. How will you hold it? If you are going to use any one of the other varieties of motors, you need to have a control system that detects what is the angle and then energizes the motor such that that angle does not change. Which means you need a pressure sensor and then error detection mechanism and then that mechanism has to energize the control circuit and whichever circuit you are going to have of across that, all that has to be done. Whereas here, if you are going to energize 1, 2, 3 and you want to stop, you just energize 3 and stop. You do not need to do anything else further, the motor will simply hold. 
So, all this of course has to be within the ability of the machine, you cannot put something that is beyond the capability of the machine and say hold and expect it to hold, it will just go on. So, the disadvantages then are movement we said is in well defined steps, provided there is no slip. If for example, let us say you want the rotor to rotate so you are giving one by one and you want a high acceleration, then it means you have to move through very fast. And it may so happen that it may miss some alignment. If your rate of if your acceleration is too high as compared to what the motor can really do, so if you really want to accelerate first then there may be a slip and if there is a slip, you have lost information about the angle, you do not know how much it has slipped, because there is no other feedback mechanism and this is gone. Then if you want the rotor to move by a step, what you are actually saying is that you are going to energize something on the stator and the rotor will then snap into the next location. And when the rotor is going to snap into that location, no mechanical system is an ideal mechanical system. Imagine if you want something will snap into location it means that that must have a high acceleration and the velocity must then go to 0 exactly at that position, which is impossible. So, you will have a certain overshoot and when it overshoots, because you have energized this it will tend to snap back into alignment and therefore, it will tend to rotate back and when it rotates back there will be an undershoot. So, when you attempt to energize this and move it into a particular location, the actual movement of the rotor angle, suppose it was here and you want to move it to the next angle you will have something like this. Now, that must be acceptable for your application. And because it is now going to resonate like this, if your acceleration is too fast, let us say by the time it finishes all this oscillation you give the next step, then you will lose control of the system completely, it will just only begin to oscillate and make noise and nothing will happen the way you require it to be. So, these oscillations also have to be considered, therefore you cannot have too fast and increase in speed and that therefore has to be considered. So, though it is going to move in well-defined steps, one has to consider these aspects as well. However, there are some other advantages to overcome this, I mean other ways to overcome this. This is happening because you are discreetly energizing one phase, waiting for some time, and then discreetly energizing the phase face, switching the first one off. But however, if you do not switch off one phase and switch on another, you maintain some excitation here and as you gradually decrease one excitation, increase the other one, then if you do that, you can do much better. I mean for example, if this is the angle by which it is going to move, when you go from one phase energisation to the next phase, you can also manage to hold the rotor anywhere in between, if you give a mix of excitation to both of them. You give an appropriate smaller excitation to one and a larger flow of current in the other, then the rotor will be held somewhere in between. So, that is then known as doing my Microstepping. So, you can do that and attempt to hold the rotor anywhere else. And if you are doing that, then these oscillations are not likely to occur because you are moving in a gradual manner. 
But one has to be slow enough, you cannot do it very fast, because again, it might slip. So, the use of these kinds of machines is therefore restricted to low torque, low power applications muscle usually less than about 1 kilowatt or so. And if you go more than that, you do not even get these kinds of machines. So, where your application requirement is small, you do not require much, you know mechanical torque to move your load and it is okay if you move in small steps, then these are probably the best machines use for robotic applications where the discreteness of movement if it is not much of concern to you. You want very smooth movement then this is ruled out. Student: is it restricted by what the motor can do? Professor: Yes. So, it is restricted by what the motor can do that is what I said. How the motor, what the motor is designed to operate? Well, it is not just one place like this. If you really look at the motor design, if you say one phase you are energizing, this will not mean just just one single tooth. This face will also go around somewhere else and alignment will be such that one tooth pair, one tooth pair, another tooth somewhere else is going to be aligned. So, the rotor is really held in different locations around it, so that it is not so bad, but nevertheless, everything has its own limits, that is what I mean. So, with that, then we will close the discussion on electric actuators for robotics. And the goal was to give you an overview of what are the various varieties of motors, what sort of control systems are involved and the various descriptions that you may be hearing when you looked at these kinds of actuators and the importance of various things, that was the overall goal. And we also have given some idea about how we are going to select a particular actuator, which are the considerations you will have to look at when you want to say that I will choose, I have this particular application in mind, how do you then decide whether you will choose a DC motor or an induction motor or a synchronous motor, BLDC, stepper motor, what will you select. So, which are the considerations, so that also we have looked at. And some information on the high frequency operation, what is the impact of high frequency manner of switching. Which is necessary because today, if you want to look at a high efficiency operation you have to go for switched mode control, you cannot afford to say that I will give an analog voltage or whatever you want to the motor, it just does not work, you cannot do it. So, what is the impact of that and how you can configure a control system, all this we have seen and I hope this will be useful to you when you really have some application at hand. So, we will close at this. |
Introduction_to_Robotics | Lecture_71_Introduction_to_Probabilistic_Robotics.txt | Hello everyone, and now we are going to be looking at the Computer Science module for the Introduction to Robotics course. My name is B. Ravindiran, and I am a faculty in the Computer Science Department, in IIT Madras. So, during the next few weeks, we will be looking at various issues that have to do with how robotic systems perceive their environments through sensors, and how they act and affect the environment through various actuators. Sensors could be as you have looked at, over the course, already, the sensors could be things that are, you know, from ultrasound, could be infrared, could be cameras, could be more touch sensors, bump sensors. So, there are variety of ways in which the robots look and sense their environment. In fact, in some ways, the amount of sensory information that a robot can get is more rich than what a typical human looks at. And it becomes more challenging to operate with this kind of rich sensory information as well. And then you have a set of actuators, it could be wheels, as you can see this humanoid robots, it could be limbs, or it could be more articulated mechanical arms, like in factory assembly lines. So, there are a variety of circumstances under which different kinds of actuators are used. And these are typically used to manipulate the environment. Now this is one of the core things in robotics. There is a system that perceives environment through sensors, and then operates and affects environment through their actuators that are part of the robotic system. So, what makes a lot of robotics, the algorithmic make aspects of robotics challenging is that, there is a significant level of uncertainty associated with both of these parts of it, there are sensors and their actuators, and the sensors could be highly unreliable. So, you cannot say that the robot knows exactly where it is just by looking at the sensory input. There are two reasons for this uncertainty. So, one of these reasons is that the sensory information is not complete. So, for example, say I am using an ultrasound sensor, that ultrasound, let us say have 8 ultrasound sensors pointing around 8 different directions. So, in each of those directions, the ultrasound sensor is going to tell me where is my nearest obstacle. But that is usually not enough for me to localize or locate myself within a room, I do not know exactly where I am, just because I have this sensory information that is coming in to me. So, the sensory information is incomplete, and therefore it is unreliable in that way. Another reason it is unreliable is due to a variety of different disturbances or different noise sources that might be there in the environment. The sensor itself, the actual electronics, actual mechanics that goes into the building of the sensor could have some amount of stochasticity, some amount of noise in it. Therefore, even though I make the same measurement, I stand in the same place, and I measure distances again and again, I might get slightly different distance readings, it is not that it is going to be stable all the time. The second thing, there are stochastic disturbances in the environment itself. So, I am trying to measure the distance to a wall and it could very well be that somebody just walks in front of the wall for a brief while. And if I measure the distance exactly at that instance, the distance might be much shorter to the wall than it is actually. 
And I do not really have a mechanism by which I can include all such unmodelled disturbances in the environment, and then make decisions out of them. So, I typically, you know roll them up into what we call noise or stochasticity and then we try to, you know accommodate for all these disturbances by looking at these kind of noisy models. So, and this is just, just in figuring out where you are. And if anyone has tried anything with robots, or even trying a simple line following robot, you know that the actuators that are there even a simple like a gear system that turns wheels, is not completely accurate. It is going to take, you know quite a bit of calibration before we figure out, if I say move 1 meter forward, or 1 foot forward, how much does the robot actually move? So that is going to take a little effort to figure out. And again, there are many, many sources of noise and it could be that like a gear slips a little bit when you are trying to move, and maybe depending on how much you have lubricated it and things could work slightly differently. And so this, the accuracy of this actuators also, if I say do something, whether the robot actually did it, is not guaranteed. So, we have to account for those kinds of inaccuracies and those kinds of noise as well. Now there is a story that I would like to tell my class. So, we were working with a very simple like a robotic platform. So, the goal of this platform is that I have a ball in the middle of the platform, and I have multiple degrees of freedom, I can tilt the platform in X-Y and the opposite directions. And the goal is to make sure that the ball stays in the middle of the platform. So, it turns out that we come up with a very good controller, and then we start executing it. But after a while, the controller starts failing, then I took as a, I mean we were really confused, what the heck was happening, it works fine for a bit, and then suddenly it starts failing. It turned out that the gear assembly that we were using had very high quality plastic gears. And after several hours of operation, the plastic gears started wearing, and therefore the controller needed to be adjusted to account for that. And since we were not getting a very accurate feedback, in fact, we had no way of measuring the wear on the gear. And because we were not able to model that, so even though we had a good controller, it kept failing every so often. And we were able to redesign the controller based on the measurements that we could make, but then that again, did not stay stable. So, there are many, many such complications that arise. And so what we are going to be looking at through the next few weeks, is how to build robotic systems that can robustly handle this kind of noise and uncertainty. So, I would say that, we are going to be looking at what some people call as Probabilistic Robotics, or Probabilistic Algorithms of Robotics. And the textbook there is mentioned on the webpage is again called Probabilistic robotics and that is going to be dealing with these issues in detail. So, the textbook is very extensive so I am not going to be covering the entire book, it is just not possible in the few weeks that we have available to us. So, what I will do is I will touch upon these topics, and I will look at representative approaches for each of these topics. In some cases, I will go in fab detail. 
In other cases, we will stay at a more higher intuition level, so that you can later on our follow up either by reading this book, or by doing additional courses that take you in depth. So, remember this is after all an introduction to robotics course and we are already packing in a lot of material in this course. So, the first topic that I look at is what we call recursive state estimation. And then under recursive state estimation, so we will look at a variety of different approaches. And so the two main things that I will be looking at are filters for state estimation based on certain Gaussian assumptions, which we called Gaussian filters. And then we will also look at a one class of nonparametric filters that allows us to go away from making any specific assumptions about the system. And then, we will look at both motion models that model the noise in the motion as well as sensing models that model the noise in the sensory systems. Again, I will be looking at very specific examples in both of these and also try to introduce you to the general principles based on these specific examples. And then we will look at two related problems, one is mapping, mapping is basically trying to figure out how the environment around you looks like. So, if you, after we look at path planning, I will also very briefly I will not put it down here in the roadmap, because it is going to be a very brief introduction of learning with robots, learning on robot, robotics platforms. And I will specifically talk about a paradigm called reinforcement learning, but this will be a very, very brief introduction to reinforcement learning itself. So, the first thing we will start looking at is this problem of recursive state estimation. Recursive state estimation is one of the core problems in robotics, so if you look at how a robot behaves in its environment, you can see that there is a robot here, it is in a very complex environment, there are obstacles around it and there are people here who are moving obstacles, and the robot is going to get inputs from all kinds of sensors. You can see a ring of sonar here, and there is a camera on the top and there are bumps sensors, if somebody actually, you know, touches at the bot, it will tell you that there is contact, so all of these, that is a rich set of sensory input that is coming into the robot. And so, with this data, so indoor, it basically has to figure out where it is in the world. So, this is like a world model that the robot has. And it has to figure out exactly where in this world model is the bot? So, so there are obstacles, and there are like walls that are modelled and the robot has to figure out exactly where in this the bot is and in what direction it is moving, at what angle, what orientation it is facing. So, you have to have things like the X, Y coordinates for the bot and the theta the angle at which it is facing, and the velocity with which it is moving. If it is accelerating, what is acceleration. And if it has an arm on it, what are the various angles at which the arm is positioned, and so on, so forth. And so, all of these are information that the robot needs to decide what it is to do next. So, this information, so the information that the robot needs to reliably make decisions about what to do next or how to behave in this environment. So, we call these as our state information or state variables. 
And sometimes the state information is not directly measurable, as we saw here, so I really need to know the X-Y coordinates of the bot and the orientation it is facing. And even if I have, even if I have a GPS sensor, because I am indoors, GPS sensor is going to be pretty inaccurate. It is just going to tell me within a very broad region where I am, and it is not going to tell me exactly where I am with relation to various obstacles, or humans and other people that I am interacting with in the environment. And therefore, we need a mechanism, where we look at all the sensory data. Suppose I do not have a GPS data, because it is indoors, and it is very noisy, I would have to use all kinds of sonar information that I get and as well as any video information, any visual information that I have, in order to figure out what my state is, so this is a huge challenge. So, the recursive state estimation problem looks like this. So, given all the sensory information I have, and knowledge of what is it that I am doing in the world, the sequence of actions that I have tried to take in the world, can you tell me what is my exact state right now. So that is the recursive state estimation, it is recursive, because I start off with making an estimate for what my state is, then I make an action. And then I make new measurements of my world, I make new measurements of where I am. So, I have an original estimate of my state, so I am in this particular location, I am in this particular location, and I am facing north, let us say that is my initial estimate and then I say, I am going to move 3 meters north, and then I make another measurement. Now, I have to figure out where exactly am I, based on my previous estimate, the measurement that I made and the movement that I have made. So, all of these together helps me estimate, re-estimate my state. So, this is where the recursive part comes in, I use the previous estimate of my state and the action, and the new measurements in order to get a second estimate of the state. So, this is the recursive state estimation problem. And even if you start off with a very noisy estimate of where you are in this world, what exactly is your state, as you do a few iterations of this, you typically end up refining your estimate about your current exact location and orientation, other state variables, and that makes you, that allows you to make better and better decisions over time. And so move on, trying to make this a little bit more formal in the next few slides. And then we will go to the next lecture for the actual algorithms for estimating this. So, the state consists of many, many variables, here I am, I have marked x0 to xt, which is a history of a state here. So, x0 is the state at time 0, x1 is the state of time 1 and x2 is the state at time 2, all the way to xt, which is the state at time t. And so, what will each of these x's consists of, here is an example of things that I already mentioned a few more here. The first is the robot pose or location in the world; the x, y and theta. And if there is more than x, y, I mean, if you can also move in the third dimension, it probably is x, y, z and theta and then I have the configuration of actuators. So how much has the wheel turned, I mean, if there is any kind of other batteries that could, any motors that could move so what is the angle of the motors. 
Or if I am looking at arms and links in the arms, what are the different, you know relative angles of these limbs of the arms and so on and so forth, so that is another part of the state. Remember all of these constitute one x, so x naught could potentially have many, many, many components, x 1 could have many, many components, and so on and so forth. And the next category of items that can go into a state description is the object location, the surrounding object, where is a table, is there a table next to me, and where is the wall, and so on, so forth. These object locations could be static, in which case, sometimes you just represent them as part of the map. But these object locations could also be moving around, in which case, you would like to put it as part of your state itself. And likewise, not just the locations of the surrounding objects, you could have the velocity as well, if the objects are moving. So, in which case, you have to keep continuously updating the location, and possibly the velocity of this other objects, and any kind of internal measurements that the robot is making. So, things like the internal health of the robot, could be things like battery life, could be things like time to service, could be wear on motors, or wear on gears, and so on, so forth. So, there are a variety of different things you could think of that essentially would be required for you to make decisions about your behaviour in the work so that you are going to accomplish whatever goal it is that you have set out to reach. So, we would consider, in this case, if a state representation is complete, if it has all the information that you need for making decisions, and we will also assume that the state is going to have what we call the, what we call as the Markov property. We will assume that the state has what we call the Markov property and this means that the state xt has enough information for me to make decisions without having to worry about everything that went before xt. So that means I do not have to worry about x naught, x1 all the way up to xt minus 1, if I know what is xt. Likewise, u’s as we will see in the next slide, u is used correspond to the actions that I take, so I do not have to worry about what are all the past actions I took, as long as I know that I have ended up in state xt. And likewise, I do not have to worry about all the past observations, all the measurements, z measurements, I do not have to worry about all the past measurements I have made, given that I have ended up in state xt. So, this we call as the Markov property. And so, we are going to assume that our states are complete, and that we have enough information for us to make decisions, and that our states satisfy the Markov property, so that we do not have to worry about the past. And so, making the rest of it formal, so the interaction, so we are going to assume that z1 to zt are about the measurement data. So it could be the camera image, it could be ultrasonic sensor outputs or it could be bumps sensor readings, contact sensor readings, it could be wheel encoder readings that tells you how much distance you have travelled, it could be tachometer readings that tell you how fast motors are rotating, it could be speedometers odometers, whatever is the measurement data that you have, all of this could go into one z. 
Likewise, as we saw with the state, z1 is the set of measurements I make at time 1, z2 is the set of measurements I make at time 2, so each one of z1, z2, all the way to zt, each one of this could be a set of measurements. And likewise, I am going to assume that you have a set of control actions that you could do. So, it could be a moving the wheels of a robot, it could be a robot motion, in whatever mechanism it is, it could be actions about manipulation of objects, or it could be actions that just, you know, flex joint or something like that. It could be at some granularity of grasping objects; it could be at the granularity of just flexing joints. There is a variety of things that you could do with respect to these control actions. So likewise, you have control actions. So, u1 to ut so remember that x denotes the state, z denotes measurement data, and u denotes the actual action that I take. And each of these set of control actions u1, u2, etc. could consists of multiple actions that you are doing, it could have something to do with the robot motion, it could have something to do with manipulation of objects or manipulation of the joint, it could be flexing the joint, or it could be the torque that you supply to motors, it could be about grasping an object, or it could be about moving specific joints in the hand, so the actions could be at multiple different granularities. And you do not have to perform all possible actions, action components at every time step. So, you have the sequence of actions, again, u1 is the action you took at time 1, u2 is the action or set of actions you took at time 2 all the way up to ut. So, one small notational thing here, I am going to assume that the first state you start out is x naught. So, you know that you are in x naught, then you take action u1 and then after that action you make a measurement z1 because x naught plus u1 would have moved you to x1. So, this is how the dynamic is going to be, I start in state x naught, I am not making any measurement here, there is some known state typically, some state at x naught that I am going to start at, or sometimes I do not, but we are not measuring anything, we first make an action and then we move to x1, but I do not know what x1 is, all I get from the robot point of view, all I get is a set of measurements, which I call z1. So, from our estimation point of view, the first set of measurements I get or z1 and the first action I have performed is u1, but the state has started out was at x naught. So, z 1 corresponds to measurements made at state x1, is it clear? So our state's notation is going to run from x0 to xt, where state notation is going to run from x0 to xt, while the action notation will run from u1 to ut, and the measurement notation will also run from z1 to zt. And some, when I want to denote the entire set of measurements, x0 to xt, I will then use this notation x0 colon t. Or if I say x0 colon t minus 1, that means all the way up till t minus 1 but does not include xt, so that is the notation that we will use. Likewise, just like we did for x, we do that for u, and for z as well. And so, all of this is fine. Now we have the notations for x, u and z but then we really need to model the system itself, so the stochastic system, in fact, we call this as a dynamical stochastic system of the robot is going to be described by two quantities. 
So, the first I call as the state transition probabilities, the first or the state transition probabilities and then the second set of quantities we are looking at or what are called the, the measurement probabilities. So, we have the state transition probabilities, and we have the measurement probabilities. So, what do the state transition probabilities tell us, the state transition probabilities tell you, given that I have gone through x0 to xt minus 1 already, this is all the states that I have already visited, 0 to t minus 1. And given that I have taken these actions u1 to ut, I have taken actions u1 to ut and I have made measurements z1 to zt minus 1. Please notice that z1 to zt minus 1, these are all the measurements I have made in the past, these are all the actions I have taken and this is the all the states that I have visited. What is the probability that I will be in a specific state xt? So, let us assume that the state xt, when the state variable basically measures whether I am in a particular room or not. So, if I say in the past, I was not in the room, and now my action is enter the room, then the probability that I am in the room now should be very high. But there is also some chance that I might not be in the room because the door could be closed or something could have just stopped me from entering the room or the door could be too narrow, whatever it is, so there might be some small chance. So, I can say the probability that I am in the room, given that I was not in the room and I entered the room is say 0.9 or something like that. Now, if you remember the Markov property, the Markov property says that the history is not important. Therefore, the state transition probability that we have becomes something much simpler, instead of looking at the entire history of states that I have visited, and the actions and the observations I will only look at the last state I am at, xt minus 1. Notice the difference here, here it was x0 colon t minus 1, which means the entire history from time 0, here it is only xt minus 1, so that means it is the just preceding state and the action ut. Remember, when I am in x0, I take u1, I go to a x1 that is what we, that is what we thought about. So likewise, here, so when I am at t minus 1, I take action ut, so I will end up in xt. So, this state transition probability, so this expression tells us, this expression tells us what is the probability I am going to transition from xt minus 1 to xt when I perform action ut, assuming the Markov property, this is the trace transition probability. So here is one question that you might have. Hey, why is it that my zt does not figure in this expression, why does zt not figure in this expression because zt really does not cause the transition. If I am in xt minus 1 I take ut I go to xt, I mean a zt is useful for us to estimate what xt should be but zt does not really cause xt. So, the state transition probabilities and the measurement probabilities that we are going to talk about next, describe the system itself, describe what is actually happening in the system. While zt is useful for me to estimate what is happening in the system, so there is a difference, so I do not worry about the estimation problem right now, I am just defining the system equations. Therefore, in the state transition probabilities, zt does not figure out, is it clear. So, state transition probability is probability of xt given xt minus 1 and ut, so that is what we will use, we will assume that things are Markov throughout. 
Even though we will go back to the non-Markov case and then when we are simplifying things, we will make the assumption of Markov property. And therefore, things fall out, just pointing out to you that most of the development that we will do for the rest of the course, assume that your system is Markov. So, the second component that we spoke about, that we mentioned earlier it the measurement probability. So, what is the measurement probability, it is a probability that I will make a specific measurement zt given the history of states x0 to t, the history of actions you want ut, and the history of observations z1 to zt minus 1. So, this is basically my measurement probability, this tells me, what is it that I will see once I have gone from x0 to xt and taken actions u1 to ut and made the previous observations, z1 to t minus 1. So, I am using observations and measurements interchangeably. So, bear with me. And so that is basically the other measurement probabilities, that is the second component of our system. Assuming that you have the Markov property, we know that the observation that you make at time t depends only on the state at time t, it does not depend on what action I did to come to xt. So, even ut is no longer relevant for looking at the probability of zt because ut’s impact on zt is only through what xt occurs. Once I know what is xt, I do not need to know what action bought me to xt and I certainly do not need to know the previous observations and the previous states and the previous actions I took. So, I can simplify the measurement probability significantly into probability of zt given xt. So, notice that we have now written out our system dynamics in terms of a state transition probability, which is probability of xt given xt minus 1 and the action you took ut, and the measurement probability, which is probability of zt given xt. And we will be repeatedly using both of these system equations in order to derive multiple quantities in the next few lectures. Thank you. |
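To make the two system quantities concrete, here is a minimal Python sketch of one generative step of the model, x_{t-1}, u_t → x_t → z_t, using a transition table and a measurement table; all the probability values and sensor names are illustrative assumptions, not taken from the lecture.

```python
import random

# Illustrative tables: transition p(x_t | x_{t-1}, u_t) and measurement p(z_t | x_t).
P_TRANSITION = {("outside", "enter"): {"inside": 0.9, "outside": 0.1}}
P_MEASUREMENT = {                       # a noisy "am I in the room?" sensor
    "inside":  {"detect_room": 0.8, "no_detect": 0.2},
    "outside": {"detect_room": 0.2, "no_detect": 0.8},
}

def sample(dist):
    """Draw one outcome from a {value: probability} dictionary."""
    r, acc = random.random(), 0.0
    for value, p in dist.items():
        acc += p
        if r <= acc:
            return value
    return value  # fallback for floating-point rounding

x_prev, u = "outside", "enter"
x_t = sample(P_TRANSITION[(x_prev, u)])  # state transition: depends on x_{t-1} and u_t only
z_t = sample(P_MEASUREMENT[x_t])         # measurement: depends on x_t only
print(x_t, z_t)
```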
Introduction_to_Robotics | Lecture_23_Industrial_Robot_Kinematic_Structures.txt | Good morning, welcome back to the discussion on robot kinematics. So in the last class we briefly talked about the degrees of freedom. As I mentioned that in general, the degrees of freedom are the set of independent displacements that specify completely the displaced or deformed position of the body or system. But in robotics, we define it as the independent, this is the number of directions that a robot can move a joint. So, if our joint can move in 1 direction, we call it as 1 degree of freedom. So if you have four joints and each joint can actually move in 1 direction, then we call this four degrees of freedom robot like that. So that is the way how it is defined in robotics. Now, as you know, human arm has got seven degrees of freedom, because we have 3 joint motions here at the shoulder and then we have 3 joints motion or 3 independent motion possible at the wrist and we have 1 for the elbow here. So we have like this seven degrees of freedom for our human hands. So we can call this as the pitch, yaw and roll of shoulder. So pitch, yaw, and roll of shoulder and elbow allows for pitch, and the wrist allows for pitch, yaw, and roll. So all of us know that we can actually use these 3 joints to position the wrist at any point. So we can actually move this here and then in any plane, so we will be able to position the wrist in the space using the first 3 degree of freedom, then the next 3 degrees of freedom can be used for orienting the tool. So I have a tool like this, I can actually use this for orienting it in the space. So the three of these movements would be necessary to move your hand to any point in space. And if you don't have the other things, if you want to have it in different angles then we need to have the orientation that is the way how these 3 plus 3 works for positioning and orientation. Now, if a robot that has mechanisms to control all six physical degrees of freedom is said to be holonomic that is normally any object in space has got six degrees of freedom. So, if we can actually provide six degrees of freedom for the robot to control all these, then we call it as holonomic 1, so that is the way how it is defined. Now a object with fewer controllable degree of freedom than total degrees of freedom is said to be non holonomic. Suppose your object has got, suppose something has got only four degrees of freedom and if we can control all those four degrees of freedom, then we call it as holonomic. Suppose the controllable degrees of freedom is less than the number of degrees of freedom, then we call it as non holonomic system. And similarly, if you have more number of degrees of freedom to control, I mean controllable degrees of freedom is more, then we call it as a redundant system. So, that is how it is defined, if you have more controllable degrees of freedom, then total degree of freedom is said to be redundant. So, we look at the human arm, we have got seven degrees of freedom, but to control this object in space, we need only six degrees of freedom because it has what only six degrees of freedom, but we have seven degrees of freedom to control it. So, therefore we call it as our human hand as a redundant system. So, that is the way how we define this in robotics. So, we can help holonomic system, non holonomic systems and redundant systems. Let us look at for robotics in the case of robots. 
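A small sketch of the holonomic / non-holonomic / redundant classification just described, comparing controllable degrees of freedom with total degrees of freedom; the example numbers are illustrative.

```python
def classify(controllable_dof, total_dof):
    """Classify a system by comparing controllable DOF with total DOF,
    following the definitions used in the lecture."""
    if controllable_dof < total_dof:
        return "non-holonomic"
    if controllable_dof == total_dof:
        return "holonomic"
    return "redundant"

print(classify(6, 6))  # e.g. a 6-axis arm controlling a rigid object in space -> holonomic
print(classify(7, 6))  # e.g. the human arm: 7 joints for a 6-DOF task -> redundant
print(classify(2, 3))  # fewer controllable DOF than total DOF -> non-holonomic
```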
So, we know that this positioning is done by the 3 joints at the shoulder and the orientation is done by 3 joints at the wrist. So, let us see how this is actually happening in the case of an industrial robot. So, we saw that industrial robots are used for positioning and orienting the object and therefore, we need to have the six degrees of freedom in space. Now, if you look at the positioning of the object using a robot, the positioning of the end effector in the 3D space, because 3 degrees of freedom either obtained from rotation or displacement. So we can actually position this object in space because what we need is positioning in X, Y, and Z. So this can be done by 3 degrees of freedom. So the first 3 degrees of freedom of the robot will be normally used for positioning the object in the space or positioning the wrist of the robot in space. So that is how we get the first 3 degrees of freedom for positioning. Then, we can get the orientation at the tip so this tool, I mean the wrist, if there is a wrist and then you attach a tool to this, then these 3 orientation that is the roll, pitch and yaw, as I told you with respect to x, y, z axis, so you can actually say this is the pitching motion and this is the yaw motion and this is the roll motion. So, it is roll, pitch and yaw can be obtained by using another 3 degrees of freedom. So, the 3 degrees of freedom of the robot will be used for positioning or the 3 joints of the robot will be used for positioning and the another 3 joints of the robot will be used for orienting the object or orienting the end effector in the 3 dimensional space. So, positioning and orientation can be obtained by 3 plus 3 degrees of freedom. So, now we look at okay how this 3 plus 3 degree is achieved in the robot. So, as you know, the robot will be something like this, you have a base and we call this as the link 0 that the base is considered as a link 0. Then you have a joint attached to this link, I mean this 1 base 1 that is the joint 1 and then you have a link 1 between the link joint 1 and the joint 2, so, this is the joint 2 so you have another joint here. So, this joint 1 and joint 2 connected by a link, then link 2 is connecting joint 2 and joint 3. Similarly, link 3 will be connecting joint 4, etc. so it will be going like this. Now, if you can see that the joint 3, the three joints joint 1, 2, and 3 will be used to position the wrist of the robot. So, the first 3 joints will be used to position the wrist at one point, and then the wrist will be having 3 degrees of freedom to orienting it. So, these are the first 3 joints; join 1 join 2 and join 3 that can be used for positioning. So, as I mentioned earlier also there will be joints and links in the robots. And these joints provide that degree of freedom so each joint will be giving one or more degrees of freedom, so normally there will be having only 1 degree of freedom per joint, that is why it says, when you have six degree of freedom, it is known as six joint or six axis robot or six joint robot like that. So we will be having joints and links and the first 3, joint 1 join 2 and join 3 will be used for positioning the wrist in space. So, the joints are supposed to give the relative motion between the links. So, you can have a joint 1 which provide a relative motion between link 1 and link 0. 
Similarly, joint 2 provides a relative motion between link 1 and link 2, joints provide the relative motion and links are the rigid members between the joint, so we call this as the links because they are the rigid members. Most of the industrial robots have this rigid links, of course, you can have flexible links also in some special cases. So this is the link which actually connects between the joints and then that is actually a rigid element. And important point is that these joints can actually be a linear or rotary type. So, you can have a rotary joints, where there will be a relative rotation. So, you can have a rotation with this joint. So, this can actually rotate, this joint can rotate and then this link can actually move so that is the relative rotation, so that is the rotary joint, but we can have a prismatic link also, where one can actually move the other one, this can actually move, the top link can actually move over this one. So you can actually have a linear motion between the links so you can have either a rotary motion or a linear motion for the joints. Whatever maybe the, weather it is rotary or linear, it will be giving you the relative motion between the links, so the two links will have a relative motion because of that joint. Most of the robots use rotary joints but there are many robots which actually use linear or known as prismatic joint also. And each joint provides you the degree of freedom, each joint gives a degree of freedom for the robots. And most robots poses five or six degrees of freedom. So as I mentioned, normally we need to have or in general, we need to have six degrees of freedom to manipulate objects in space. Therefore, most of the robots will be having six degrees of freedom, but does it mean that we need to have always six degrees of freedom, it depends on the applications also, suppose I have an object which is actually moving only in one direction, so I do not need to have all the degrees of freedom. We look at the requirement of the application accordingly we can say that we can have a number of degrees of freedom, it can be 4, 5, 6 or you can actually have 7 also depending on what actually you want to achieve. So, that is robots have five or six degrees of freedom. And as I mentioned it consists of 2 sections; the first section is for positioning and the second section is for orienting. So, we call this first section which is for positioning of the objects in the robot's work volume, we call it as the body and arm assembly. So, the first 3 degrees of freedom is used for positioning the wrist or the object in the 3D space so that is known as the body and arm configuration, so that is the body and arm configuration. The first 3 joints and links are known as the body and arm configuration. The second part is known as the wrist assembly that is for orientation of the objects. So, like in the human arm also. So, we have up to wrist we can say that this is we are using for positioning. So, we want to position the object wherever you want using this and then use the wrist work orienting. So it's the same way, the robot also can be classified or divided into 2 parts, first the body and arm, the second one is the wrist assembly. So, the first 3 degrees of freedom is provided by body and arm configuration, and the second one is provided by the wrist assembly. So, we have body and arm assembly and wrist assembly so, this basically provides you 3 plus 3. 
And again, depending on the requirements, not necessarily always it should be 3, positioning will be always 3 because you need to have position in 3 dimensional space, but orientation may not be needed in all the cases. So sometimes it will be less degrees of freedom also. But whatever it is, the positioning part is the body and arm assembly and orienting body is the wrist assembly, that is the way how it is defined. So this actually already explained. So, you have the base, link zero joint 1, joint 2 and link 2, okay. So, that is of course, you will be having joint 3 for the third one. So, the series of joint and link combinations. Now, as I mentioned there can be different kinds of joints. So, normally rotary and prismatic are the two types of joints. So, we call this prismatic as the translational motion. So, translational or prismatic joints. So, this is an example for prismatic joints, you would be having a motion, so we can have a relative motion of the link because of the joint. So, there can be different ways you can actually assemble it, this is the one, this is the other one, okay? So, you can actually have, this is P type joint or prismatic joint. So, it can be L or O, L is the linear one, this is orthogonal one, so orthogonal one is that you are getting output link is actually this one, here the output link is same axis that is why it is different, but otherwise both are giving the prismatic motion, so linear orthogonal type. And then you have this rotary motion joints. The Rotary is an example for the what i mean this is the CAD model of a prismatic joint. Now, you can have rotary joints normally specified as R type joints. So, prismatic is P and rotary is R, so these are the two major types of joints R and P, rotary and prismatic joints. And within this rotary we can have variations, so I mean again all provide the relative motion rotary motion between the links, but the way you assemble you can actually get different motions like you can have this one is the normal rotary motion and this is the twist motion and this is known as a revolute motion. So, this basically the way the links are assembled between the at the joints makes the difference and that is why you get this R, T and V joints. Now, these are all single degrees of freedom. So, you can see that each one provides you 1 degree of freedom, because each one has one relative motion is possible. Here it is rotary twist or revolute joint, you will be getting the same kind of I mean 1 degree of freedom. Now, you can have other types of joints also, that is you can have a cylindrical joint where you can have a sliding and turning. So, that actually gives you 2 degrees of freedom, can actually give one 1 sliding and 1 turning that kind of joins are known as cylindrical joints. Or you can have a screw motion or a helical motion joint can be there and can have a spherical joint also, a spherical joint actually provides you 3 degrees of freedom because it can actually rotate with respect to all the 3 axis. And you can have a planar one also again, it's a motion in the plane so, you can have 2 degrees of freedom. So, these are the other possible types of joints, but not really common, not used very much in the industry. But these are all possible, you can actually have joints of this category also. So, important one, the prismatic joint and the rotary joint. So we call this I mean, these are the two major joint types that you can see in the industrial robots. 
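The joint types just listed, collected in a small Python sketch together with the degrees of freedom each one provides; the helper function for summing a chain's DOF is hypothetical and ignores any closed-loop constraints.

```python
# Joint types mentioned in the lecture and the DOF each contributes.
JOINT_DOF = {
    "prismatic (L / O)": 1,   # linear or orthogonal sliding
    "rotary (R)": 1,
    "twist (T)": 1,
    "revolute (V)": 1,
    "cylindrical": 2,         # sliding + turning
    "screw / helical": 1,
    "spherical": 3,
    "planar": 2,
}

def total_dof(joint_sequence):
    """Sum the DOF contributed by a serial chain of joints."""
    return sum(JOINT_DOF[j] for j in joint_sequence)

print(total_dof(["rotary (R)", "rotary (R)", "prismatic (L / O)"]))  # 3
```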
And the representation, so it is basically the rotary joint is represented like this, so this is the way how you represent the rotary joint. This actually shows the axis with respect to which it is rotating. Now, we mentioned about the body and arm architecture. In the body and arm architecture which is used for positioning the wrist, so that is the first 3 degree of freedom which is used for positioning. Now, the robot architecture are classified based on this body and arm architecture, body and arm configuration. By assembling these 3 joint, first 3 joints in different ways or choosing the type of joints in the first 3 joints, you will be able to get different architecture for the robots. So, let us see what are the different architectures possible and how these architectures leads to different types of robots. So, it is a combination and disposition of different kinds of joints that configure the robot kinematic chain. So the kinematic chain that is the assembly of the joints and links makes the robot configuration and since we have different types of joints, and by arranging these joints in different ways, you will be able to get different configuration for the robot and that is known as the robot architecture. There are few commonly used architectures. So, we will see what are these architectures in this section and see what kind of workspace it leads to and what are the complications or what are the advantages of using different architectures. Now, we know there are 3 joints. So, the first 3 joints, 3 joints are used for the body and arm configuration. So, that is the one which actually use for positioning. So, these three joints can be either R or P, it can have either an R architecture or R joint or a P joint. So, you can have an option of having these joints in different combinations. So that is what actually gives you different architectures. So, five common body and arm configuration for industrial robots are there in the market. There are five ways actually these are arranged and that actually gives you five body arm configuration for robots, and the first one is known as a polar coordinate body and arm assembly that is you can actually represent the position of this by using a polar coordinate and so it actually gives you configuration of RRP, two rotary joints and one prismatic joints gives you a polar coordinate body and fixed arm. So, this is 1 rotary joint and this is 1 rotary motion, okay, 1 rotation like this, and 1 rotation like this and then a motion, inward or outward motion okay, that is a linear motion. So, these 3 RRP helps the robot to position the tip at any point in 3D space, because you can actually cover this plane and this plane using these two joints and within this plane you can actually move in and out so that you will be able to cover the distance in other plane also. So, this way, you will be able to position the tip using these 3 joints, I mean RRP configuration that is known as the polar coordinate robot and arm assembly. The second one is, the cylindrical body and arm assembly. So, here you can actually replace one of these R with a P. So, if you do RPP, that is you have a rotary joint like this, which actually cover this plane, and then you can actually move in and out. And then you can go up and down also. So, that actually covers the whole 3 dimensional space can have RPP configuration and that is the cylindrical body and arm assembly. 
So, we have RRP and RPP, and then another one is the Cartesian coordinate body and arm assembly, which has three prismatic joints. You can go up and down, side to side, and in and out, so with all prismatic joints we get x-direction, y-direction, and z-direction motion, and that is known as the Cartesian coordinate robot, or Cartesian coordinate body and arm assembly. The fourth one is the jointed-arm body and arm assembly; since we already have RRP, RPP, and PPP, the remaining combination is RRR, where all three joints are rotary. Most industrial robots actually fall into this category, because all three rotations are possible: one rotation at the base, one at the shoulder, and one at the elbow, and using these three rotary joints of the jointed-arm body and arm assembly you will be able to position the wrist in 3D space. So this is the jointed-arm assembly, RRR. And the last one is a special category of robot which we call SCARA, which is basically a Selective Compliance Assembly Robot Arm. It is actually an RRP robot, two rotary joints and a prismatic joint, so it is something like the polar coordinate configuration, but the difference is that the joints are assembled in a different way: the joint axes are arranged differently, so we get a different configuration. This configuration allows you to get compliance in one of the planes; in one plane you can have a large compliance because of how the joint axes are arranged, and that is why it is known as the Selective Compliance Assembly Robot Arm, SCARA. So these are the five body and arm architectures possible using the R joint and the P joint: we use rotary and prismatic joints, arrange them in different ways, and we get five configurations for the robots. We will look into these configurations in detail and try to find out the differences between them, how their workspaces change, and how their kinematics are affected by the configuration. So we will discuss this in the next class. Till then, goodbye. Thank you. |
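Summarizing the five body-and-arm configurations described in the lecture above, written as the sequence of the first three joints (R = rotary, P = prismatic); a minimal sketch, not part of the lecture material itself.

```python
# Five common body-and-arm configurations and their first three joints.
BODY_AND_ARM_CONFIGS = {
    "polar coordinate":           ("R", "R", "P"),
    "cylindrical":                ("R", "P", "P"),
    "cartesian coordinate":       ("P", "P", "P"),
    "jointed-arm (articulated)":  ("R", "R", "R"),
    "SCARA":                      ("R", "R", "P"),  # same joints as polar, different axis arrangement
}

for name, joints in BODY_AND_ARM_CONFIGS.items():
    print(f"{name:28s} {''.join(joints)}")
```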
Introduction_to_Robotics | Tutorial_2_Probability_Basics.txt | One of the important concepts in probability theory is that of the random variable. A random variable is a variable whose value is subject to variations. That is, a random variable can take on a set of possible different values, each with an associated probability. Mathematically, a random variable is a function from the sample space to the real numbers. Let us consider some examples. Suppose we conduct an experiment in which we roll three dice and are interested in the sum of the outcomes. For example, the sum of 5 can be observed if two of the dice show up 2 each and the other day shows up as 1. Alternatively, the sum of 5 can also be observed if one die shows up as 3 and the other two dice show up 1 each. Since we are interested in only the sum and not the individual results of the dice rolls, we can define a random variable, which maps the elementary outcomes, that is the outcomes of each die roll to the sum of the three rolls. Similarly, in the next example, we can define a random variable, which counts the number of heads observed when passing a fair coin three times. Note that in this example, that random variable can take values between 0 and 3, whereas in the previous example, the range of the random variable is between 3 and 18, corresponding to all dice showing up 1 and all dice showing up 6. Consider the previous example experiment of tossing a fair coin three times. Let X be the number of heads obtained in the three tosses. That is, X is a random variable, which maps each elementary outcome to a real number representing the number of heads observed in that outcome. This is shown in the first table. The first row lists out each elementary outcome and the second row lists out the corresponding real number value to which that elementary outcome is mapped, that is the number of heads observed in that outcome. Now, instead of using the probability measure defined on the elementary outcomes or events, we would ideally like to measure the probability of that random variable taking on values in its range. What we are trying to say here is that when we defined probability measure, we were associating each event, that is, subset of the sample space with a probability measure. When we consider random variables, the events correspond to different subsets of the sample space, which map to different values of the random variable. This is illustrated in the second table. The first row lists out the different values that the random variable X can take and the second row lists out the corresponding probability values, assuming that the coin tossed is a fair coin. This table describes the notion of the induced probability function, which maps each possible value of the random variable to its associated probability value. For example, in the table, the probability of the random variable taking on the value of 1 is given as 3 by 8. Since there are 3 elementary outcomes in which only one head is observed, and each of these elementary outcomes has a probability of 1 by 8. From the previous example, we can define the concept of the induced probability function. Let Omega be a sample space and P be a probability measure. Let X be a random variable, which takes values in the range X1 to Xm. The induce probably function PX on X is defined as PX, X equals to small xi equals to the probability of the event, comprising of the elementary outcomes, small omega j such that the random variable X maps small omega j to the value xi. 
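A minimal sketch of the induced probability function for the coin-tossing example just defined: the random variable maps each of the eight elementary outcomes to the number of heads, and the induced PMF collects the corresponding probabilities.

```python
from fractions import Fraction
from itertools import product
from collections import defaultdict

# Induced probability function of X = number of heads in three fair coin tosses.
pmf = defaultdict(Fraction)
for outcome in product("HT", repeat=3):   # the 8 elementary outcomes HHH, HHT, ...
    x = outcome.count("H")                # random variable: omega -> number of heads
    pmf[x] += Fraction(1, 8)              # each elementary outcome has probability 1/8

print(dict(pmf))   # 0 -> 1/8, 1 -> 3/8, 2 -> 3/8, 3 -> 1/8
```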
The cumulative distribution function or CDF of a random variable X, denoted by FX of small x, is defined by FX of small x equals the probability of the random variable taking on a value less than or equal to small x, for all values of small x. For example, going back to the previous random variable, which counts the number of heads observed in three tosses of a fair coin, the following table shows the intervals corresponding to the different values of the random variable X along with the corresponding values of the cumulative distribution function. For example, FX of 1 equals 1 by 2, because the probability that the random variable X has a value of 1 is 3 by 8, the probability that the random variable X equals 0 is 1 by 8, and therefore the probability that the random variable X takes on a value less than or equal to 1 is 1 by 8 plus 3 by 8, equal to 4 by 8, or 1 by 2. A function is a valid cumulative distribution function only if it satisfies the following properties. The first property simply states that the cumulative distribution function is a non-decreasing function. The second property specifies the limiting values: the limit as x tends to minus infinity of FX of x equals 0, and the limit as x tends to infinity of FX of x equals 1. The third property specifies right-continuity, that is, no jump occurs when the limit point is approached from the right. This is also shown in the figure below. A random variable X is continuous if its corresponding cumulative distribution function is a continuous function of x; this is shown in the second part of the diagram. A random variable X is discrete if its CDF is a step function of x; this is shown in the first part of the diagram. The third part of the diagram shows the cumulative distribution function for a random variable which has both continuous and discrete parts. The probability mass function or PMF of a discrete random variable X is given by fX of x equal to the probability of X equal to small x, for all values of small x. Thus, for a discrete random variable, the probability mass function gives the probability that the random variable is equal to some value. For example, for a geometric random variable X with parameter p, the PMF is given as fX of x equals 1 minus p raised to the power x minus 1, into p, for the values x equals 1, 2, and so on, and for other values of x the PMF equals 0. A function is a valid probability mass function if it satisfies the following two properties: first, the function must be non-negative; second, the value of the function summed over all values of x should be equal to 1. For continuous random variables, we consider the probability density function. The probability density function or PDF of a continuous random variable is the function fX of x which satisfies the following: the integral from minus infinity to x of fX of t dt is equal to the cumulative distribution function at the point x. Similar to the PMF, the probability density function should also satisfy the following properties: first, the probability density function should be non-negative for all values of x; second, integrating over the entire range, the probability density function should integrate to 1. Let us now look at expectations of random variables.
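A small sketch showing the step-function CDF obtained by summing the PMF of the same coin-tossing random variable; note that F_X(1) = 1/8 + 3/8 = 1/2, matching the value worked out above.

```python
from fractions import Fraction

# PMF of X = number of heads in three fair coin tosses.
pmf = {0: Fraction(1, 8), 1: Fraction(3, 8), 2: Fraction(3, 8), 3: Fraction(1, 8)}

def cdf(x):
    """F_X(x) = P(X <= x), obtained by summing the PMF over all values <= x."""
    return sum(p for value, p in pmf.items() if value <= x)

print(cdf(-1), cdf(0), cdf(1), cdf(2.5), cdf(3))   # 0, 1/8, 1/2, 7/8, 1
```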
The expected value or mean of a random variable X denoted by expectation of X is given by integral minus infinity to infinity x into fX of x dx. Note that fX of x here is the probability density function associated with random variable X. This definition holds when X is a continuous random variable. In case that X is a discrete random variable, we use the following definition. Expectation of X is equal to sum overall x, such that probability of x greater than 0. That is we consider all values of the random variable for which the associated probability is greater than 0, x into fX of x. Here, fX of x is the probability mass function of the random variable X, which essentially gives the associated probability for a particular value of the random variable, thus leading to this definition. Let us now look at an example in which we calculate expectations. Let the random variable X take values minus 2, minus 1, 1, and 3 with probabilities 1 by 4, 1 by 8, 1 by 4, and 3 by 8 respectively. What is the expectation of the random variable Y equals to X square? So in this question, we are given one random variable, the values which this random variable takes, and its associated probabilities. What we are interested is in the expectation of the random variable Y, which is defined as Y equals to X square. So what we can do is, we can calculate the values that the random variable Y takes along with associated probabilities, since we are aware of the relation between Y and X. Thus, we have Y taking on the values 1, 4, and 9 with probabilities 3 by 8, 1 by 4, and 3 by 8 respectively. Given this information, we can simply apply the formula for expectation and calculate the expectation on the random variable Y. This is as follows, giving a result of 19 by 4. Another way to approach this problem is to directly use the relation Y equals to X square in calculating the expectation. Thus, expectation of Y is simply the expectation of the random variable X squared. So in play, in the formula for expectation, instead of substituting X, we substitute X square. Thus, we have sum of over all x, x square into probability of X equal to x. Calculating the values, we get the same answer of 19 by 4. Let us now look at the properties of expectations. Let X be a random variable; a, b, and c are constants, and g1 and g2 are functions of the random variable X such that their expectations exist, that is, they have finite expectations. According to the first property, expectation of a into g1 of X plus b times g2 of X plus c is equals to a times expectation of g1 of X plus b times expectation of g2 of X plus c. This is called the linearity of expectations. There are actually a few things to note here. First of all, expectation of a constant as equals to the constant itself, expectation of a constant times the random variable is equals to the constant into the expectation of the random variable, and the expectation of the sum of two random variables can also be represented as the sum of the expectations of the two random variables. Note that here, the two random variables need not be statistically independent. According to the next property, if a random variable is greater than equals to 0 at all points, then the expectation is also, expectation of that random variable is also greater than equals to 0. Similarly, if one random variable is greater than another random variable at all points, then the expectation of those random variables also follow the same constraint. 
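The worked example above, verified in a short sketch: E[X^2] is computed both by first building the PMF of Y = X^2 and by applying E[g(X)] = sum over x of g(x) p(x) directly; both approaches give 19/4.

```python
from fractions import Fraction

# X takes -2, -1, 1, 3 with probabilities 1/4, 1/8, 1/4, 3/8.
pmf_x = {-2: Fraction(1, 4), -1: Fraction(1, 8), 1: Fraction(1, 4), 3: Fraction(3, 8)}

# Way 1: build the PMF of Y = X^2 first, then take its expectation.
pmf_y = {}
for x, p in pmf_x.items():
    pmf_y[x * x] = pmf_y.get(x * x, Fraction(0)) + p
e_y_1 = sum(y * p for y, p in pmf_y.items())

# Way 2: use E[g(X)] = sum_x g(x) * p(x) directly, without building pmf_y.
e_y_2 = sum((x * x) * p for x, p in pmf_x.items())

print(e_y_1, e_y_2)   # both 19/4
```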
Finally, if a random variable has values, which are, which lie between two constants, then the expectation of that random variable will also lie between those two constants. Let us now define moments. For each integer n, the nth moment of X is mu dash of n equals to expectation of X raised to the power n. Also, the nth central moment of X is mu N equals to expectation of X minus mu raised to the power n. So the difference between moment and central moment is, in central moment, we subtract the random variable by the mean of the random variable or expected value. The two moments that find most common use are the first moment, which is nothing but mu dash equals to expectation of X, that is, the mean of the random variable X and the second central moment, which is mu two equals expectation of X minus mu raised to the power 2, which is the variance of the random variable X. Thus the variance of a random variable X is its second central moment, variance of X equals to expectation of X minus mu whole square. Note that mu is just the first moment, which can be replaced. So it can be replaced by expectation of X. Thus, we have variance of X is equal to expectation of X minus expectation of X whole square. By expanding this term and applying linear expectations, we will finally get variance of X equals to expectation of X squared minus square of the expectation of X. The positive square root of variance of X is a standard deviation of X. Note that the, when calculating variance, the constants act differently when compared to the linearity of expectation. This is a very useful relation to remember. Variance of aX plus b is equal to a square into variance of X, where A and B are constants. The covariance of two random variables X and Y is covariance of X comma Y equals to expectation of X minus expectation of X into Y minus expectation of Y. Remember that the variance of a random variable X is nothing but the second central moment. Thus, the variance of a random variable measures the amount of separation in the values of the random variable when compared to the mean of the random variable. For covariance, the calculation is done on a pair of random variables and it measures how much two random variables change together. Consider the diagram below. In the first part, assume that the random variable X is on the x-axis and the random variable Y is on the y-axis. We note that as the value of X increases, the value of Y seems to be decreasing. Thus, for this relationship, we will observe a large negative co-variance. Similarly, in the third part of the diagram, we can see that as the value of variable X increases so does the value of the variable Y. Thus, we see a large positive covariance. However, in the middle diagram, we cannot make any such statement because as X increases, there is no clear relationship as to how Y changes. Thus, this kind of a relationship will give zero covariance. Now from the diagram, it should immediately be clear that covariance is a very important term in machine learning because we are often interested in predicting the value of one variable by looking at the value of another variable. We will come to that in further classes. Closely related to the concept of covariance is the concept of correlation. The correlation of two random variables X and Y is nothing but the covariance of the two random variables X and Y divided by the square root of the product of their individual variances. Basically, correlation is a normalized version of covariance. 
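A minimal sketch of the variance, the Var(aX + b) = a^2 Var(X) rule, and the covariance/correlation definitions above, using the earlier example PMF; the companion variable Y = 3X + 5 is an invented, perfectly linear example, which is why its correlation with X comes out as 1.

```python
import math
from fractions import Fraction

def expectation(g, pmf):
    """E[g(X)] for a discrete random variable with the given PMF."""
    return sum(g(x) * p for x, p in pmf.items())

pmf_x = {-2: Fraction(1, 4), -1: Fraction(1, 8), 1: Fraction(1, 4), 3: Fraction(3, 8)}
mean_x = expectation(lambda x: x, pmf_x)
var_x = expectation(lambda x: x * x, pmf_x) - mean_x**2        # Var(X) = E[X^2] - (E[X])^2
print(mean_x, var_x)                                           # 3/4, 67/16

# Check the rule Var(aX + b) = a^2 Var(X) with a = 3, b = 5.
a, b = 3, 5
pmf_y = {a * x + b: p for x, p in pmf_x.items()}
mean_y = expectation(lambda y: y, pmf_y)
var_y = expectation(lambda y: y * y, pmf_y) - mean_y**2
print(var_y == a * a * var_x)                                  # True

# Covariance and correlation of X and Y = 3X + 5.
joint = {(x, a * x + b): p for x, p in pmf_x.items()}          # joint PMF of (X, Y)
cov = sum(p * (x - mean_x) * (y - mean_y) for (x, y), p in joint.items())
corr = float(cov) / math.sqrt(float(var_x) * float(var_y))
print(cov, corr)                                               # 201/16, 1.0
```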
So the correlation will always be between minus 1 and 1. Also, since we used the variance of the individual random variables in the denominator for correlation to be defined, individual variances must be non-zero and finite. In the final part of this tutorial on probability theory, we will talk about probability distributions and list out some of the more common distribution that you are going to encounter in the course. Before we proceed, let us considered this question. Consider two variables, X and Y, and suppose we know the corresponding probability mass function fX and fY corresponding to the variables X and Y. Can we answer the following question? What is the probability that X takes a certain value small x and Y takes a certain value small y. Think about this question. If you answered no, then you are correct. Let us see why. Essentially, what we were looking for in the previous question was the joint distribution, which captures the properties of both random variables. The individual PMFs or PDFs in case that random variables are continuous, capture the properties of the individual random variables only but miss out on how the two variables are related. Thus, we define the joint PMF or PDF, FX, Y as the probability that X takes on a specific value small x and Y takes on a specific values small y for all values of X and Y. Suppose we are given the joint probability mass function of the random variables X and Y. What if we are interested in only the individual mass functions of either of the random variables? This can be obtained from the joint probability mass function by a process called marginalization. The individual probability mass function thus obtained is also referred to as the marginal probability mass function. Thus, if we are interested in the marginal probability mass function of random variable X, we can obtain this by summing the joint probability mass function over all values of Y. Similarly, the probability mass function of, the marginal probability mass functional of random variable Y can be obtained by summing the joint probability mass function for all values of X. Note that in case the random variables considered here are continuous, we substitute summation by integration and PMFs by PDFs. Like joint distributions, we can also consider conditional distributions. For example, here we have the conditional distribution fX given Y, which is the probability that the random variable X will take on a some value small x, given that the random variable Y has been observed to take on a specific value small y. The relation between conditional distributions, joint distribution, and marginal distributions is shown here. This relation should be familiar from the definition of conditional probability that was seen earlier. Note that the marginal distribution fY of y is in the denominator and hence, it must not be equals to 0. The overall idea of joint, marginal, and conditional distributions is summarized in this figure. The top left-figure shows the joint distribution and describes how the random variable X, which takes on nine different values is it related to the random variable, Y which takes on two different values. The bottom-left figure shows the marginal distribution of random variable X. As can be observed in this figure, we simply ignore the information related to the random variable Y. Similarly, the top-right figure shows the marginal distribution of a random variable Y. 
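A small sketch of marginalization and conditioning on a toy joint PMF, following the relations just stated; the table values here are illustrative assumptions, not the numbers in the lecture's figure.

```python
from fractions import Fraction

# A small joint PMF f_{X,Y}(x, y); values are illustrative and sum to 1.
joint = {
    (1, 1): Fraction(1, 8), (2, 1): Fraction(1, 4), (3, 1): Fraction(1, 8),
    (1, 2): Fraction(1, 4), (2, 2): Fraction(1, 8), (3, 2): Fraction(1, 8),
}

# Marginalization: f_X(x) = sum over y of f_{X,Y}(x, y).
f_x = {}
for (x, y), p in joint.items():
    f_x[x] = f_x.get(x, Fraction(0)) + p

# Conditioning: f_{X|Y}(x | y) = f_{X,Y}(x, y) / f_Y(y), here for y = 1.
f_y1 = sum(p for (x, y), p in joint.items() if y == 1)
f_x_given_y1 = {x: p / f_y1 for (x, y), p in joint.items() if y == 1}

print(f_x)            # {1: 3/8, 2: 3/8, 3: 1/4}
print(f_x_given_y1)   # {1: 1/4, 2: 1/2, 3: 1/4}
```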
Finally, the bottom-right figure shows the conditional distribution of X, given that the random variable Y takes on a value of 1. Looking at this figure and comparing it with the joint distribution, we observe that in the bottom-right figure we simply ignore all the values of X for which Y equals 2, that is, the top half of the joint distribution. In the next few slides, we will present some specific distributions that you will be encountering in the machine learning course. We will present a definition and list out some important properties for each distribution. It would be a good exercise for you to work out the expressions for the PMFs or PDFs and the expectations and variances of these distributions on your own. We start with the Bernoulli distribution. Consider a random variable X taking one of two possible values, either 0 or 1. Let the PMF of X be given by fX of 0, that is, the probability that the random variable X takes on a value of 0, equal to 1 minus p, where p lies between 0 and 1, and fX of 1, the probability that the random variable X takes the value 1, equal to p. Here p is the parameter associated with the Bernoulli distribution; it generally refers to the probability of success. So in our definition, we are assuming that X equal to 1 indicates a successful trial and X equal to 0 indicates a failure. The expectation of a random variable following the Bernoulli distribution is p and the variance is p into 1 minus p. The Bernoulli distribution is very useful to characterize experiments which have a binary outcome, such as tossing a coin, where we observe either heads or tails, or, say, writing an exam, where you have pass or fail. Such experiments can be modeled using the Bernoulli distribution. Next, we look at the binomial distribution. Consider the situation where we perform n independent Bernoulli trials, where the probability of success for each trial equals p and the probability of failure for each trial equals 1 minus p. Let X be the random variable which represents the number of successes in the n trials. Then the probability that the random variable X takes on a specific value small x, given the parameters n and p, equals n choose x, that is, the number of combinations of observing x successes in n trials, into p raised to the power x, into 1 minus p raised to the power n minus x. Note that here x is going to be a number between 0 and n. The expectation of a random variable following the binomial distribution equals np, and the variance equals n into p into 1 minus p. The binomial distribution is useful in any scenario where we are conducting multiple Bernoulli trials, that is, experiments in which the outcome is binary. For example, suppose we toss a coin 10 times and want to know the probability of observing three heads. Given the probability of observing a head in an individual trial, we can apply the binomial distribution to find out the required probability. Suppose we perform a series of independent Bernoulli trials, each with a probability p of success. Let X represent the number of trials up to and including the first success; then the probability that the random variable X takes a value small x, given the parameter p, is equal to 1 minus p raised to the power x minus 1, into p. This definition is quite intuitive: essentially, we are calculating the probability that it takes us small x trials to observe the first success.
This can happen if the first x minus 1 trials failed, that is, with probability 1 minus p and the trial succeeded, that is, with probability p. A random variable which has the, this probability mass function follows the geometric distribution. For the geometric distribution, the expectation of the random variable equals to 1 by p and the variance equals to 1 minus p by p square. In many situations, we initially do not know the probability distribution of the random variable under consideration but can perform experiments which will gradually reveal the nature of the distribution. In such a scenario, we can use the uniform distribution to assign uniform probabilities to all values of the random variable, which are then later updated. In the discreet case, say the random variable can take n different values. Then we simply assign a probability of 1 by n to each of that n values. In the continuous case, if the random variable X takes values in the closed interval, a comma b, then its PDF is given by fX of x given parameters a comma b equals to 1 by b minus a, if x lies and then close interval, a comma b, and 0, otherwise. For a random variable following the uniform distribution, the expectation of the random variable X equals to a plus b by 2, and the variance equals to b minus a square, b minus a whole square by 12. A continuous random variable X is said to be normally distributed with parameters mu and sigma square if the PDF of the random variable X is given by the following expression. The normal distribution is also known as the Gaussian distribution and is one of the most important distributions that we will be using. The diagram represents the famous bell-shaped curve associated with the normal distribution. The importance of the normal distribution is due to the central limit theorem. Without going into the details, the central limit theorem roughly states that the distribution of the sum of a large number of independent, identically distributed variables will be approximately normal, regardless of the underlying distribution. Due to this theorem, many physical quantities that are the sum of many independent processes, often have distributions that can be modeled using the normal distribution. Also, in the machine learning course, we will be often using the normal distribution in its multivariate form. Here, we are presented the expression of the multivariate normal distribution, where mu is the D-dimensional mean vector and Sigma is D cross D covariance matrix. The PDF of the beta distribution in the range 0 to 1 with shape parameters, alpha and beta is given by the following expression, where the gamma function is an extension of the factorial function. The expectation of a random variable following the beta distribution is given by alpha by alpha plus beta. And the variance is given by alpha beta by alpha plus beta whole square into alpha plus beta plus 1. This diagram illustrates the beta distribution. Similar to the normal distribution in which the shape and position of the bell-curve is controlled by the parameters mu and Sigma squared, in the beta distribution, the shape of the distribution is controlled by the parameters alpha and beta. In the diagram, we can see a few instances of the beta distribution for different values of that shape parameters. Note that unlike the normal distribution, a random variable following the beta distribution takes values only in a fixed interval. 
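Minimal sketches of the Bernoulli, binomial, geometric, and univariate normal formulas given above, including the lecture's example of three heads in ten tosses; the numeric check of the geometric mean E[X] = 1/p uses a truncated sum, so it is only an approximation.

```python
from math import comb, exp, pi, sqrt

def bernoulli_pmf(x, p):                 # P(X = x), x in {0, 1}
    return p if x == 1 else 1 - p

def binomial_pmf(x, n, p):               # probability of x successes in n Bernoulli(p) trials
    return comb(n, x) * p**x * (1 - p)**(n - x)

def geometric_pmf(x, p):                 # first success on trial x (x = 1, 2, ...)
    return (1 - p)**(x - 1) * p

def normal_pdf(x, mu, sigma2):           # univariate Gaussian density with mean mu, variance sigma2
    return exp(-(x - mu)**2 / (2 * sigma2)) / sqrt(2 * pi * sigma2)

# Probability of exactly 3 heads in 10 tosses of a fair coin (the lecture's example).
print(binomial_pmf(3, 10, 0.5))          # 0.1171875

# Approximate check of the geometric mean E[X] = 1/p by truncating the infinite sum.
p = 0.25
approx_mean = sum(x * geometric_pmf(x, p) for x in range(1, 10_000))
print(approx_mean, 1 / p)                # both close to 4.0
```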
Thus, in this example, the probability that that variable takes a value less than 0 or greater than 1 is equals to 0. This ends the first tutorial on the basics of probability theory. If you have any doubts or seek clarifications regarding the material covered in this tutorial, please make use of the forum to ask questions. As mentioned in the beginning, if you are not comfortable with any of the concepts presented here, do go back and read up on it. There will be some questions from probability theory in the first assignment, so hopefully, going through this tutorial will help you in answering those questions. And note that the, we will be having another tutorial next week on linear algebra. |
Introduction_to_Robotics | Lecture_212_Differential_Relations.txt | Hello, good morning, welcome back. In the last few classes we discussed the forward and inverse kinematics of manipulators. Forward and inverse kinematics are basically position relationships: we were trying to see how the joint positions are related to the tool position, so that if we know the tool position we can get the joint positions, and if we know the joint positions we can get the tool position. But in most robot applications we are not only interested in position, we are also interested in the velocity relationship. For example, if you have a robot and the tool tip is at one point, you may want it to move to another position with a particular velocity in the Cartesian space, either a constant velocity or a velocity profile defined by the user; that is, you want the tool configuration to move with a specified velocity. If that is the case, we need to move the joints, and the joints also need to have some velocity in order to produce that Cartesian velocity. So if we want a particular X dot, we need to know what theta dot corresponds to it and how the two are related. That is the velocity relationship, or differential relationship; in manipulator kinematics we call it the differential kinematics. We will try to see how, once we have the position relationship, we can derive the differential relationship for the manipulator; that is going to be our discussion. We will talk about the tool configuration velocity and the joint space velocity. The joint space velocity is basically theta dot, and the tool configuration velocity is X dot, which contains the linear velocities along x, y, z as well as the angular velocities in the Cartesian space. What is the relationship between theta dot and X dot? When we develop this relationship we will see that the two are related through a matrix, and that matrix is called the Jacobian matrix of the manipulator. We will talk about the tool configuration Jacobian as a velocity relationship, and we can mention that the manipulator Jacobian is the generalized form of this Jacobian, which can be used for other applications also, not only the velocity relationship but the force relationship as well. As we go through this, we will see that if we know theta dot we can get X dot, and the other way is also possible: theta dot can be represented as a function of X dot, and we call that the inverse relationship. When we have this kind of relationship, we will see that in some situations within the workspace of the
manipulator there may be points where you cannot move the manipulator because of certain constraints, and we call those the singularities in the workspace. So we will talk about singularity and how it is related to the tool configuration and joint space velocities. When we talk about singularities there are basically two types, the boundary singularity and the interior singularity, which we will discuss. We will also talk about something called a generalized inverse, because when we try to find the inverse relationship for the velocity we will face some difficulty in getting the inverse of a matrix, and then we will use a generalized inverse to solve this problem; the pseudo inverse is one form of the generalized inverse. Finally, we will talk about statics also, where we will use the relationships developed in the differential kinematics for the static analysis of the manipulator. So this is going to be the discussion, and of course we will solve examples as we discuss all these topics. I already mentioned the differential relationship: we have theta, the joint parameters or joint positions, and we have the position of the tool tip, which has a position vector and an orientation, so you have the position and orientation of the tool in the Cartesian space, known as the tip location in Cartesian space, and you have the tip location in joint space, which is represented by the corresponding joint angles. As we know, they are related through the forward kinematics: X is the forward kinematics of theta, and theta is the inverse kinematics of X. If you know X you can apply inverse kinematics and get theta, and if you know theta you can get X using the forward kinematics. That is the position relationship we have. Now what we are interested in is getting X dot and theta dot. The reason is that the robot path planning problem is formulated in the tool configuration space, because we are interested in the velocity of the tool, and most of our problems are formulated as a velocity of the tool or the end effector; but the robot motion is controlled in the joint space. The motion is always controlled at the joints; we cannot control the tip directly. Though we want the tip to have a particular velocity, we cannot command that velocity directly, we need to command the joint velocities. So we can control the theta dots, and from them we need to get X dot, and what kind of relationship exists between the two is important for us to command theta dot. How do we command the theta dots to get a desired X dot? I have a desired X dot and I want to know what theta dot will actually give me this X dot, so that I can move the joints with a particular velocity and get the desired X dot at the end effector. This can be obtained as follows. We represent x = w(q), where w is the relationship between the joint variables and the tool configuration and q is the joint variable; we refer to theta when the joint variable is an angle, but we use q as the generic joint variable, whether the joint is prismatic or rotary. So we say that x is w(q), a function of q; instead of writing the forward kinematics as a function of theta, we write the
tool configuration as x = w(q), where w is the tool configuration function and q is the joint variable. Now, if we want X dot, we will be able to write X dot as a function of q dot, and the relation between X dot and q dot can be written as x dot = J(q) q dot, where J is known as the Jacobian, and it can be obtained by taking the partial derivative of w with respect to q. So when x = w(q), you will be able to write x dot equal to the partial derivative of w with respect to q, times q dot; that is the relationship. So x dot = J(q) q dot, where J is the matrix which relates the joint velocity to the Cartesian velocity, and this matrix is known as the Jacobian matrix of the manipulator, or the tool configuration Jacobian, because it relates the tool configuration velocity to the joint velocity. Now, J will be a 6 by n matrix, where n is the number of degrees of freedom: we have three linear velocities and three angular velocities, so the tool configuration velocity has six components, and J is 6 by n. The element J_kj of J(q) is given by the partial derivative of w_k(q) with respect to q_j; those are the elements of the Jacobian. So once we have the forward relationship x = w(q), which is basically the forward kinematic relationship, we can take the partial derivatives of that relationship with respect to the joint variables, and that gives the Jacobian matrix; then X dot is the Jacobian multiplied by q dot. That is the velocity relationship you can derive from the forward kinematics: J is the Jacobian which relates the tool configuration velocity to the joint velocity, x dot = J q dot, so if you know q dot you can get x dot by simply multiplying q dot by J. Here k runs from 1 to 6 and j from 1 to n, where n is the number of degrees of freedom. This can be written in matrix form as follows. X dot consists of x dot, y dot, and z dot, the three linear velocities in the x, y, and z directions of the Cartesian space, and omega x, omega y, omega z, the angular velocities about the x, y, and z axes; these are the three linear and three angular velocities. You get each element by taking the partial derivative of w1 with respect to q1, q2, up to qn, where w1 is the relationship for px: since px is a function of q, we can write it as w1(q), and taking the partial derivatives of w1 with respect to q1, q2, q3, ..., qn gives the first row of the Jacobian, which corresponds to the x velocity. For the y velocity you take the second relationship, py = w2(q), and take the partial derivatives of w2 with respect to q1, q2, and so on. This is the way you get the elements of the Jacobian matrix. Now you can see that X dot is a 6 by 1 vector, with three linear velocities and three angular velocities.
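A small numerical sketch of the definition J_kj = partial of w_k with respect to q_j, approximating the Jacobian by finite differences; the tool configuration function used here is a placeholder planar 2R arm with unit link lengths (only the linear part), not the lecture's example.

```python
import numpy as np

def tool_config(q):
    """Placeholder tool-configuration function w(q): x, y position of a planar 2R arm
    with unit link lengths (an assumed example, not the lecture's manipulator)."""
    return np.array([np.cos(q[0]) + np.cos(q[0] + q[1]),
                     np.sin(q[0]) + np.sin(q[0] + q[1])])

def numerical_jacobian(w, q, eps=1e-6):
    """J[k, j] ~ d w_k / d q_j, approximated with central differences."""
    q = np.asarray(q, dtype=float)
    m, n = len(w(q)), len(q)
    J = np.zeros((m, n))
    for j in range(n):
        dq = np.zeros(n)
        dq[j] = eps
        J[:, j] = (w(q + dq) - w(q - dq)) / (2 * eps)
    return J

q = np.array([0.3, 0.5])
q_dot = np.array([0.1, -0.2])
J = numerical_jacobian(tool_config, q)
x_dot = J @ q_dot                 # tool velocity from joint velocity: x_dot = J(q) q_dot
print(J, x_dot, sep="\n")
```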
q dot is an n by 1 vector, because you have n joints and therefore n joint velocities, and so the Jacobian is a 6 by n matrix. It need not be square; the dimension depends on n: for a 7 degree of freedom manipulator it would be 6 by 7, for 5 degrees of freedom 6 by 5, and for a 6 degree of freedom manipulator it is a 6 by 6 matrix. We will take a few examples to see how the Jacobian can be developed for a manipulator; first we look only at the linear velocity part, and later I will explain how the angular velocity part is calculated. For a rotary manipulator the relationship is x dot = J(theta) theta dot, and you can also go the other way: if the joint velocities theta dot are known, x dot follows directly from J(theta) theta dot, whereas if the Cartesian velocity x dot is known and you want the corresponding theta dot, you obtain it by inverting the Jacobian, theta dot = J inverse x dot. So, in the workspace, suppose you want the tool tip to move from one point to another and you specify the velocity with which it should move; then you can find the corresponding joint velocities using theta dot = J inverse x dot. That is the inverse velocity relationship. Although going from theta dot to x dot is a direct problem, computing theta dot for a given x dot creates some issues, which we will discuss. One immediate issue is that we have to take the inverse of J, and when J is non-square, how do we take the inverse? For a five degree of freedom robot J is a 6 by 5 matrix, and we know that to calculate an ordinary inverse we need a square matrix. Does that mean this relationship is applicable only to six degree of freedom robots, or can other robots also use it? That is one immediate problem. Another problem is that, even if J is square, not all matrices can be inverted: depending on the properties of the matrix, its rank or its condition number, you may find that it cannot be inverted. These are the two major problems we face when calculating theta dot from x dot, and we will discuss them later; first let us see how to compute J for a manipulator, and then we will discuss the problems associated with inverting the Jacobian.
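As a quick illustration with my own numbers (not the lecturer's), when J happens to be square and well conditioned the joint rates can be recovered directly; np.linalg.solve is used here rather than forming the inverse explicitly.

import numpy as np

J = np.array([[-0.9, -0.4],
              [ 1.1,  0.3]])     # an assumed 2 by 2 Jacobian at some configuration
xdot = np.array([0.05, 0.0])     # desired Cartesian velocity of the tool tip

qdot = np.linalg.solve(J, xdot)  # same result as inv(J) @ xdot, i.e. theta_dot = J^-1 x_dot
print(qdot)

The non-square and singular cases are exactly the two difficulties raised above; they are taken up again when the pseudo-inverse is introduced.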
We will take a very simple example first to see how to calculate the Jacobian; we know it is a differentiation with respect to theta. Take a one degree of freedom, single-link planar manipulator, with tip position p and joint angle theta. We can write px and py, and then x dot will be something multiplied by theta dot, where those elements are the partial derivatives of px and py with respect to theta; here x dot means px dot and py dot. Now, px = r cos theta and py = r sin theta, so taking the partial derivatives, px dot = −r sin theta theta dot and py dot = r cos theta theta dot. Writing this in matrix form, [x dot; y dot] = [−r sin theta; r cos theta] theta dot, and that column [−r sin theta; r cos theta] is the Jacobian of the single degree of freedom manipulator — we are talking only about the linear velocity for the time being and will discuss angular velocity later. The same principle is followed whether the manipulator has six or seven degrees of freedom. So let us take the three-joint planar manipulator that we already discussed during the forward and inverse kinematic analysis. From the forward kinematics, with link lengths L1 and L2 and joint angles theta 1, theta 2, theta 3, we have px = L1 c1 + L2 c12, py = L1 s1 + L2 s12 and pz = d3. To get px dot we take the partial derivatives of px with respect to theta 1, theta 2 and theta 3, and similarly for py and pz, so x dot, y dot and z dot come out as a 3 by 3 matrix multiplying the 3 by 1 vector of joint velocities. Taking the derivatives, the first row is [−L1 s1 − L2 s12, −L2 s12, 0] and the second row is [L1 c1 + L2 c12, L2 c12, 0]; the entries of the third column are zero because px, py and pz do not change with theta 3. So that is the Jacobian for the linear velocity part of this manipulator.
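Here is a small numerical cross-check of those entries (my own script and numbers; only the px, py rows and the theta1, theta2 columns are checked), comparing the analytic Jacobian with a central-difference approximation.

import numpy as np

L1, L2 = 0.8, 0.6
th1, th2 = 0.4, 1.1

s1, c1 = np.sin(th1), np.cos(th1)
s12, c12 = np.sin(th1 + th2), np.cos(th1 + th2)
J_analytic = np.array([[-L1*s1 - L2*s12, -L2*s12],
                       [ L1*c1 + L2*c12,  L2*c12]])

def pos(a, b):
    # forward kinematics (px, py) of the planar arm
    return np.array([L1*np.cos(a) + L2*np.cos(a + b),
                     L1*np.sin(a) + L2*np.sin(a + b)])

eps = 1e-6
J_numeric = np.column_stack([
    (pos(th1 + eps, th2) - pos(th1 - eps, th2)) / (2*eps),
    (pos(th1, th2 + eps) - pos(th1, th2 - eps)) / (2*eps)])
print(np.allclose(J_analytic, J_numeric, atol=1e-6))   # expected: True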
So this is how you get the Jacobian for a manipulator: whether it has three degrees of freedom or five, use the same principle — take the px, py, pz relationships from the forward kinematics, take the partial derivatives, and assemble the matrix. Here is one more example, a four-axis SCARA manipulator, just to make sure you are thorough with the process. Doing the forward kinematics you get px = L1 c1 + L2 c(1−2), py = L1 s1 + L2 s(1−2) and pz = d1 − q3 − d4. In this case the third joint is prismatic, which is why its variable is written as q3, while d4 is a constant. Apply the same principle: take the partial derivatives of px, py and pz with respect to theta 1, theta 2, q3 and theta 4, the joint velocities being theta 1 dot, theta 2 dot, q3 dot and theta 4 dot. If you do this you get the Jacobian: for example the first element is −L1 s1 − L2 s(1−2), and the remaining entries follow in the same way. So you can obtain the linear part of the Jacobian by taking the partial derivatives like this; I hope you have understood the method. Now let us briefly talk about the issues with the Jacobian when we try to do the inverse. As I mentioned, if you know x dot you can calculate theta dot by taking the Jacobian inverse, theta dot = J inverse x dot, but in many cases you will have difficulty calculating J inverse, because J is a function of theta. As you saw in the previous examples, as the theta values change the elements of the Jacobian change, so the Jacobian is not fixed for a manipulator: for every different position of the tool tip you have a different matrix. And there may be positions in the workspace where J becomes non-invertible — that is, as the manipulator moves and the theta values change, there may be configurations where the Jacobian cannot be inverted, even assuming for the time being that J is square. This happens because the Jacobian loses its rank: if it is a 6 by 6 matrix, its rank may come down to 5 at some points, and then you cannot invert it. Such situations are common in the manipulator workspace and are known as singularities: at certain points in joint space the Jacobian loses its rank for some values of theta, because the number of independent rows and columns reduces — the matrix is still 6 by 6, but it no longer has six independent rows or columns.
When the rank drops in this way, say from 6 to 5, such situations are called joint space singularities, and you will not be able to invert the Jacobian at that point. And when you cannot invert, you cannot find theta dot: you have an x dot, but at that particular configuration J inverse does not exist, so theta dot cannot be calculated — or, looked at from the other side, for any finite theta dot you will not produce the desired x dot. You may command the joints with very high velocity and still not get the desired Cartesian velocity; in fact, to produce that x dot the joint velocity would have to be infinite. Such situations are known as joint space singularities, and that configuration of the manipulator is called a singular point; there can be multiple such points in the workspace, not necessarily a single one, and as the manipulator approaches one of them the Jacobian loses its rank. So, the Jacobian matrix J(q) is of full rank as long as q is not a joint space singularity; at a singularity J(q) loses its rank. How close the manipulator is to a singular point can be measured by something called the dexterity measure. We define the dexterity as the determinant of J transpose J, or of J J transpose, depending on whether n is less than or equal to 6 or greater than 6: if n is greater than 6 we use J J transpose, otherwise J transpose J, and the determinant of that square matrix gives the dexterity of the manipulator. In the general case the tool configuration Jacobian is less than full rank if and only if this square matrix is singular, and for redundant manipulators the 6 by 6 matrix J J transpose must be used. A manipulator is at a joint space singularity if and only if the dexterity is zero, that is, when the determinant of J transpose J (or J J transpose) is zero. So, when the manipulator starts from a point which is not singular the dexterity has some high value, and as it moves towards the singular point the dexterity comes down to small values and finally becomes zero at the singular point; in that sense the dexterity is a measure of how close the configuration is to a singularity.
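A tiny illustration of that rank statement (my own code; the 2 by 2 position Jacobian of a planar two-link arm is used in place of a full 6 by 6 matrix): at theta2 = 0 the two columns become parallel and the rank drops from 2 to 1.

import numpy as np

def J(th1, th2, L1=1.0, L2=0.7):
    s1, c1 = np.sin(th1), np.cos(th1)
    s12, c12 = np.sin(th1 + th2), np.cos(th1 + th2)
    return np.array([[-L1*s1 - L2*s12, -L2*s12],
                     [ L1*c1 + L2*c12,  L2*c12]])

print(np.linalg.matrix_rank(J(0.3, 1.0)))   # 2: regular configuration
print(np.linalg.matrix_rank(J(0.3, 0.0)))   # 1: arm fully stretched, singular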
Suppose you have the manipulator workspace and there is a singular point somewhere inside it; at that point the dexterity is zero, and in the neighbourhood of that point the dexterity has very small values, so you will have difficulty moving the manipulator through that whole region — as the tool approaches it, the required theta dot keeps on increasing. If you want the tool to move with constant velocity along a path that passes through this region, you will have difficulty maintaining the desired velocity, and if the path has to go through the singular point itself, the motion will effectively stop there and you will not be able to pass through. That is how the dexterity measure is used to find the dexterous workspace of the manipulator: every manipulator has a dexterous workspace where the dexterity is high and other regions where it is low. When we design the manipulator we look at the dexterous workspace and ensure that most operations are done within it, and when planning a path we plan it so that the low-dexterity and singular regions are avoided while the robot moves to the target. That is the importance of knowing the dexterity, and since J is a function of theta, you can evaluate the dexterity at every point in the workspace and make a plot of the dexterous workspace. Now, besides this interior kind of singularity, there is something called a boundary singularity. A boundary singularity occurs when the tool tip is on the surface of the work envelope: the tool tip has reached the outermost position it can reach, so if you command a velocity pointing further outwards it cannot move in that direction — you have reached the maximum reach — although it can still move in the other directions, along the boundary or back inwards. That is also a kind of singularity, and we call it a boundary singularity of the manipulator; when the singularity occurs inside the workspace we call it an interior singularity. Boundary singularities exist for all manipulators: once the tool tip has reached the boundary, you cannot have velocities in particular directions. The other one is the interior singularity, which we will look at shortly.
To explain how a boundary singularity is calculated, look again at the manipulator for which we derived the Jacobian. Given the Jacobian, we can find the boundary singularity by computing the dexterity and seeing when it becomes zero. Evaluating det(J transpose J) in terms of the joint angles, the dexterity comes out in terms of the factor L1 L2 s2, and it is zero when s2 = 0, that is, when sin theta 2 = 0. So the manipulator is singular when theta 2 is 0 or pi, and these are the boundary singularities. This is the case, for example, for the SCARA robot: when the arm is fully extended, theta 2 = 0 and the tool is at the boundary, a singular configuration; when the arm is fully folded back, theta 2 = pi and the tool is again at the boundary, again a boundary singularity. So theta 2 equal to 0 or pi gives the boundary singularities for this manipulator, and this is how we find boundary singularities in general: compute the dexterity and find where it becomes zero. Now, about the interior singularity — this one is more troublesome; the boundary singularity is not really a problem, but the interior singularity is. It is formed when two or more joint axes come into a straight line, so that the effect of rotation about one axis can be cancelled by a counteracting rotation about the other. In such a situation you can have joint velocities while the tool tip does not move: the Cartesian velocity becomes zero even though the joint velocities are nonzero, and the tool configuration remains the same even though the robot moves in joint space. Such situations are known as interior singularities. Let us take the example of the five-axis Alpha robot and see how an interior singularity happens there. Consider the configuration where the joint vector is q(beta) = [q1, −beta, 2 beta − pi, −beta, q5], with beta between 0 and pi/2. In this configuration, if q1 makes a rotation and q5 makes an equal and opposite rotation, the position and orientation of the end effector do not change at all, even though the joints q1 and q5 are moving.
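Tying this back to the boundary-singularity calculation just above, here is a short sweep (my own script, assumed link lengths) showing det(J transpose J) for the two-link planar arm collapsing to zero at theta2 = 0 and theta2 = pi; for this arm it equals (L1*L2*sin(theta2))**2.

import numpy as np

L1, L2, th1 = 1.0, 0.7, 0.3

def J(th2):
    s1, c1 = np.sin(th1), np.cos(th1)
    s12, c12 = np.sin(th1 + th2), np.cos(th1 + th2)
    return np.array([[-L1*s1 - L2*s12, -L2*s12],
                     [ L1*c1 + L2*c12,  L2*c12]])

for th2 in (0.0, np.pi/6, np.pi/2, np.pi - 0.05, np.pi):
    dex = np.linalg.det(J(th2).T @ J(th2))   # dexterity measure det(J^T J)
    print(round(th2, 3), round(dex, 6))      # falls to zero at th2 = 0 and th2 = pi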
Such a configuration is an interior singularity: provided a3 = a2 and a4 = 0, J(q) loses full rank along the line q(beta), and q(beta) represents the interior singularities of this articulated robot. These situations arise when the joint angles and the link parameters align in such a way that the tool tip does not move even though the joints are moving; this particular robot has an interior singularity of exactly that form. Here is an exercise for you: for the three-axis planar robot, show that if a2 = a1 then q = [q1, pi, q3] is a locus of singularities, and identify which axes are collinear in this case; try to see why this happens. So those are the singularities of a manipulator. We said that whenever J is not invertible because the joint variables are such that the matrix loses its rank, we have a singularity; the other problem with J inverse is that when J is non-square you cannot take an ordinary inverse at all — when it is a 6 by 5 matrix, for example. So there are two situations: J loses its rank and the inverse does not exist, or J is not square and an ordinary inverse is not defined; the second situation arises for manipulators with a number of degrees of freedom other than 6. In that case we need to obtain an inverse by some other means, known as the generalized inverse: a non-square matrix can be "inverted" using a generalized inverse. It is defined as follows: if A is an m by n matrix, then an n by m matrix X is a generalized inverse of A if it satisfies at least one (or both) of the following properties — (1) A X A = A, and (2) X A X = X. If we can find such an X, we call it a generalized inverse of A, and then, if J is m by n, we can find an n by m matrix satisfying these conditions and use it as the inverse. A very commonly used generalized inverse is the Moore–Penrose inverse, also called the pseudo-inverse and written A plus: instead of writing A inverse we write A plus, and it satisfies all four of the properties listed here.
That is, A A+ A = A, A+ A A+ = A+, (A A+) transpose = A A+, and (A+ A) transpose = A+ A. The Moore–Penrose inverse, or pseudo-inverse, satisfies all of these conditions, and therefore A+ serves as the inverse of the non-square matrix A. How do we compute it? Provided A is of full rank, A+ = A transpose (A A transpose) inverse when m is less than or equal to n — and of course if m = n it reduces to the ordinary inverse — while if m is greater than n, A+ = (A transpose A) inverse A transpose. So the pseudo-inverse is obtained either as A transpose (A A transpose) inverse or as (A transpose A) inverse A transpose, depending on the dimensions. You can see what we are doing: A A transpose (or A transpose A) is a square matrix, we take its inverse, and then pre- or post-multiply by A transpose; this way we get a pseudo-inverse even though A itself is not square. So even if the Jacobian is not square, we can get an inverse using the Moore–Penrose inverse: writing it for the Jacobian, J+ = J transpose (J J transpose) inverse when m is less than or equal to n, and we can then use J+ to solve for the theta dots. A non-square matrix is therefore not an issue; we handle it with the pseudo-inverse, the generalized inverse, and use it to calculate the joint velocities when the Cartesian (tool) velocity is known. Here is a small example: suppose A is a 2 by 3 matrix, so m = 2 and n = 3, and its rank is 2, full rank. To get the pseudo-inverse we form the square matrix A A transpose, which is 2 by 3 times 3 by 2, hence 2 by 2, take its inverse, and pre-multiply by A transpose: A+ = A transpose (A A transpose) inverse, a 3 by 2 matrix. So A is 2 by 3 and A+ is 3 by 2, and once you have it you can use it in the velocity calculation with the Jacobian to get the joint velocities. What is the application of this method? It is known as resolved motion rate control, which you will learn about in more detail later.
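A quick check of those formulas (my own 2 by 3 matrix; the numerical entries used in the lecture are not reproduced here), verifying that A transpose (A A transpose) inverse matches numpy's pinv and satisfies the four Penrose conditions.

import numpy as np

A = np.array([[1., 0., 2.],
              [0., 1., 1.]])               # assumed full-rank 2 by 3 matrix (m=2, n=3)

A_plus = A.T @ np.linalg.inv(A @ A.T)      # A^T (A A^T)^-1, valid since m <= n
print(np.allclose(A_plus, np.linalg.pinv(A)))            # True

print(np.allclose(A @ A_plus @ A, A),                    # A A+ A = A
      np.allclose(A_plus @ A @ A_plus, A_plus),          # A+ A A+ = A+
      np.allclose((A @ A_plus).T, A @ A_plus),           # (A A+)^T = A A+
      np.allclose((A_plus @ A).T, A_plus @ A))           # (A+ A)^T = A+ A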
In resolved motion rate control, you want the manipulator tool to move with a particular velocity in Cartesian space: in the workspace I specify that from this point to that point the tool should have a particular x dot and y dot, or a constant velocity along the path. Given that desired Cartesian velocity, I want to send commands to the joints so that the tool actually achieves it; the required joint velocities can be obtained as q dot = J+ x dot, and by commanding those velocities the robot's motion is controlled — that is resolved motion rate control. Formally: let x(t) be a differentiable tool configuration trajectory which lies inside the workspace and does not pass through any workspace singularities, and let J(q) be the 6 by n tool configuration Jacobian with n less than or equal to 6; then the joint space trajectory q(t) corresponding to x(t) is obtained by solving the differential equation q dot = (J transpose J) inverse J transpose x dot, that is, q dot = J+(q) x dot. You obtain the joint trajectory from this relationship and move the joints at the calculated velocities. This scheme was introduced by Whitney in 1969 and is known as resolved motion rate control, because the motion specified in tool configuration space is resolved into its joint space components. So this is how we can use the Jacobian and its inverse to obtain the joint trajectory and control the manipulator joints in order to get the desired Cartesian velocity at the tool tip. That is all for today; we will stop here. Next we will see how to compute the Jacobian for both the linear and the angular velocity, and how the manipulator Jacobian is defined so that it can be used for other applications apart from velocity — we can use it for force analysis as well. We will see that in the next class. Thank you. |
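As a closing illustration (entirely my own sketch: a planar 2R arm with assumed link lengths, a straight-line tool velocity, and simple Euler integration), this is what the resolved-motion-rate update q_dot = J+ x_dot looks like in a loop; np.linalg.pinv also covers the non-square case discussed above.

import numpy as np

L1, L2, dt = 1.0, 0.7, 0.01

def fk(q):
    # forward kinematics: tool position of the planar arm
    return np.array([L1*np.cos(q[0]) + L2*np.cos(q[0] + q[1]),
                     L1*np.sin(q[0]) + L2*np.sin(q[0] + q[1])])

def jac(q):
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-L1*s1 - L2*s12, -L2*s12],
                     [ L1*c1 + L2*c12,  L2*c12]])

q = np.array([0.5, 1.0])
xdot = np.array([0.05, 0.0])                 # desired constant tool velocity

x_start = fk(q)
for _ in range(100):                         # one second of motion
    qdot = np.linalg.pinv(jac(q)) @ xdot     # q_dot = J+ x_dot
    q = q + qdot * dt                        # command the joints at this rate
print(fk(q) - x_start)                       # roughly [0.05, 0.0]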
Introduction_to_Robotics | Lecture_52_Control_of_the_Brushless_DC_Motor.txt | In the last class we had started looking at the brushless DC machine. This variety of machine belongs to a group called permanent magnet AC machines (PMAC), which in turn belongs to the group of synchronous motors. The brushless DC machine — indeed the entire synchronous motor family — is an inverted arrangement compared to the normal DC machine: the field arrangement is on the rotor, while the armature is on the stator. Because of this arrangement the induced voltages are alternating, and hence you cannot connect a DC supply directly to the machine; you therefore make use of the circuit we discussed in the last class, called an inverter, which accepts DC on the input side and delivers an alternating output voltage to the motor. We also discussed how it is supposed to operate: in the BLDC machine, where the induced emf waveform has a flat top, you look at which two phases have a flat induced emf and connect those two phases to the DC supply, while the phase whose induced emf is changing from one level to another is left open. If you do this, the machine looks like a DC machine to the DC supply at all points of time, and hence for operational purposes — estimating performance, deciding how much load it can take, designing the control loops — you can treat it as roughly equivalent to a DC motor and carry out the design accordingly. I had asked you to work out, for each interval, which two switches will be on; so let us look at the waveforms again. Mark the angles 30, 60, 90, 120 and 150 degrees, then 210, 270, 330, 360 and 30 again. The R-phase voltage is flat at an amplitude A for 120 degrees, changes level over 60 degrees, stays flat at −A for 120 degrees, and so on; the Y-phase is the same waveform shifted by 120 degrees, and the B-phase is shifted by a further 120 degrees. So each phase is flat for 120 degrees on the higher side and 120 degrees on the lower side, with 60-degree transitions in between, and during the interval when a phase is undergoing a level change we do not use that phase. Now draw the inverter with phases R, Y and B and number the switches 1 through 6. In the first 60-degree interval we marked, we want to use the R-phase and the Y-phase, and therefore switches 1 and 6 are kept on. In the next 60 degrees switches 1 and 2 are on; in the next, 2 and 3; then 3 and 4; then 4 and 5; then 5 and 6; and then you come back to 1 and 6. So every 60 degrees exactly two switches are on, and you can see how the pairs change.
That means that, going from 1, 6 to 1, 2, device 2 conducts instead of device 6; from 1, 2 to 2, 3, device 3 takes over from device 1; and then it goes on to 3, 4, then 4, 5, then 5, 6, and so on — that is the way the switches are sequenced, with each device conducting for 120 degrees. The issue, then, is how we decide when to switch these devices. In the motor there are usually sensors fitted, called Hall effect switches, which detect the magnetic field in their region of influence and give a digital output depending on what sort of field is present: if the field is N the output is high (say 5 V), and if it is S the output is 0. So if we place Hall switch H1 around the machine at a location such that H1 gives a high output exactly when the induced emf in phase R starts becoming flat, it will then remain high for 180 degrees, because the machine presents a north pole for 180 degrees and a south pole for 180 degrees. Similarly, H2 is placed so that its output is phase shifted by 120 degrees — it goes high exactly when the Y-phase emf becomes flat — and H3 goes high when the B-phase emf becomes flat. Using the information from these three sensors it is feasible to determine the instants and durations for which each device must be turned on. For example, device 1 is on during the intervals where the pairs 1, 6 and 1, 2 conduct; if we implement the algorithm "whenever Hall 1 is high and Hall 2 is low, turn on device 1", then device 1 receives its switching pulse during exactly those intervals. Similarly, device 2 needs to be on during the next two intervals, and the condition for that is "Hall 1 high and Hall 3 low", which can be written as the Boolean expression H1 AND (NOT H3). So the signals to turn the devices on can be obtained using logic gates: appropriate AND, OR and NOT combinations give a simple circuit to switch on the appropriate devices depending on the rotor angle. Notice that the Hall switch outputs give you rotor angle information only at 60-degree intervals — you learn where the rotor is only when one of the switches changes state; in between you have no idea where the rotor is, and for this scheme you are not really interested, because those are the instants at which devices are turned on or off.
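A sketch of that decode logic (my own; it assumes the Hall placement described above, where H1 goes high for the 180 degrees starting when the R-phase emf becomes flat, and the last four expressions are extrapolated by symmetry from the two worked out in the lecture — a different sensor placement would need a re-derived table):

def switch_signals(h1, h2, h3):
    # gate commands (S1..S6) for the six inverter devices, from the three Hall switches
    return (h1 and not h2,   # S1 = H1 AND (NOT H2)
            h1 and not h3,   # S2 = H1 AND (NOT H3)
            h2 and not h3,   # S3 = H2 AND (NOT H3)
            h2 and not h1,   # S4 = H2 AND (NOT H1)
            h3 and not h1,   # S5 = H3 AND (NOT H1)
            h3 and not h2)   # S6 = H3 AND (NOT H2)

# in the 30-90 degree interval of the waveforms, H1 = 1, H2 = 0, H3 = 1,
# which selects devices 1 and 6, as required
print(switch_signals(True, False, True))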
Professor: During this slope — you mean during this region? Let us set this up properly. Say you have the stator of the machine and the rotor, and on the rotor you have placed two pieces of magnet material: one face of one magnet is North and the other South, and correspondingly for the other magnet. Freeze the rotor at some instant of its rotation and imagine walking around the circumference of the stator, starting at one point and coming back to the same point, measuring the magnetic field as you go. What will the shape of that field be? A sine wave? Why should it be a sine wave — not everything that varies is a sine wave. These magnets are uniformly magnetized, which means the flux density anywhere on the surface of the magnet is the same. The air gap in between is what sees the magnetic field coming out, and as long as the air gap is small, whatever flux density exists at the surface of the magnet is what the air gap — and the person walking around — sees. Now, when we say we measure the magnetic field, which quantity do we measure? Recall from basic physics that the magnetic field can be described by the flux, by the flux density, usually given the symbol B, or by the magnetic field intensity, given the symbol H, and B and H are linked by the permeability of the material: B = mu H. A good indicator of the magnetic field is the flux density, so that is what we plot. With a uniformly magnetized ring, the flux density in the air gap is by and large flat over the region facing one magnet, by and large flat with the opposite sign over the region facing the other, and in between it has to go from one polarity to the other, so there is a region where the flux density reverses. That is roughly the shape of the flux density in the air gap, plotted against the angle as you travel around. Now suppose the Hall switch is placed at the point where, at this frozen instant, the flux density is zero. As the rotor begins to move a little one way or the other, the Hall switch will encounter two different situations: in one case the flux goes from the rotor towards the Hall switch, in the other from the Hall switch towards the rotor, because the sign of the magnetic field reverses.
So, if the rotor now moves, this magnetic field distribution moves either to one side or the other, depending on the direction of rotation. Suppose the distribution moves to the right with time; then the Hall switch at that point starts seeing a field which is negative, and its output remains 0 — for how long? For 180 degrees of rotation, because as the entire waveform moves past, that point sees negative flux density for 180 degrees; then the flux direction reverses and the output goes high. If, on the other hand, the rotor rotates in the opposite direction, the output is high for the first 180 degrees and then goes low. Are you able to imagine that? And note that this is not related to the induced emf at all — that is what I was trying to get at; you are asking what happens to the Hall state when the induced emf changes, and it is not related to that at all. Professor: Not at all — those are waveforms of induced emf, not of flow of current. The induced emf waveform depends purely on the rotor angle, and the Hall switch output also depends purely on the rotor angle. What we are attempting to do is place the Hall switch at a suitable position around the circumference such that its going high synchronizes with the emf becoming flat, that is all. You could have placed it anywhere on the circumference — it is a circular stator, you can put it wherever you want; we choose a particular place only for ease of operation, just to get a simple switching expression, otherwise it would not be easy. Professor: Yes — say this is the north-pole flux density and this is the south-pole flux density. Professor: No — in the first case the flux density pattern is moving: you are standing at one point and the rotor is going to rotate, which means the flux density distribution we have drawn, a distribution in space at a given instant, itself moves as the rotor rotates. If the distribution moves to the right, that point sees the negative flux first — right now it sees zero flux density, and if the distribution moved the other way it would see the positive flux. So it starts by giving a 0 output, and only after the rotor has rotated by 180 degrees does it see the flux density reversed and give a high output. Like this, one can go on and determine the switching functions for all six devices. Now, as I mentioned earlier, we had looked at DC motor speed control, and the same sort of structure will be used even now; what is the difference? Let me redraw the scheme.
So, what we have now is the BLDC motor connected to the inverter, with the Hall switches mounted on the machine to sense the rotor. The Hall switch outputs go to a block of digital logic, and that digital logic produces six signals, each of which determines when a particular device in the inverter is to be on or off; these are the switching signals for the inverter devices. The rest of the closed-loop structure is the same as before: you have a position reference, a speed loop and the current loop, all of that remains as in the DC motor case. The difference comes at the final stage, the block that takes the output of the last controller and converts it into switching control signals. Earlier we implemented this by comparing the controller output with a high-frequency ramp signal to generate an on/off switching waveform, and here we do the same thing — but instead of giving that on/off signal directly to the inverter devices, we take the controller output, pass it through the modulator to get a high-frequency on/off signal, and combine it with the commutation logic. The on/off signal simply says that, in order to apply the voltage the controller is asking for, the selected devices cannot be on all the time; they have to be switched with a certain duty ratio so that a lower average voltage is applied — that is the meaning of the controller output. So we take each of the six signals generated by the digital logic and AND it with this on/off signal, and the result goes to the inverter: the logic signal for device 1 ANDed with the on/off signal becomes the switching control for device 1, the logic signal for device 2 ANDed with the same signal becomes the switching control for device 2, and so on. This means that even though the digital logic says that in a given region device 2, say, should be on, it is not allowed to be on continuously; whether it is on or off at any instant is dictated by how much voltage the controller wants to apply. With this arrangement we have speed control of the brushless DC motor, and this is the sort of motor usually used in drones, simple robotic devices, unmanned vehicles and so on. There are, however, certain limitations: whether switch 1, say, is actually kept on or off during its interval depends on the high-frequency signal obtained by comparing the ramp with the output of the controller.
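A toy sketch of that gating stage (my own code and values, purely illustrative): the comparison of the controller output with the high-frequency ramp produces a single on/off signal, which is ANDed with each commutation signal before reaching the inverter.

def pwm_on(controller_out, ramp_value):
    # the enable signal is high while the controller output exceeds the ramp
    return controller_out > ramp_value

def gated_signals(commutation, controller_out, ramp_value):
    # AND the common on/off signal with each of the six commutation signals
    p = pwm_on(controller_out, ramp_value)
    return tuple(c and p for c in commutation)

commutation = (True, False, False, False, False, True)    # devices 1 and 6 selected
print(gated_signals(commutation, 0.6, 0.2))   # ramp below controller output: 1 and 6 driven
print(gated_signals(commutation, 0.6, 0.9))   # ramp above controller output: all devices off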
So, during its 60-degree region device 1 is being operated — perhaps switched on and off repeatedly — but during that 60-degree interval you do not know where the rotor is: the rotor angle information is available only when a Hall switch changes state, from low to high or high to low; in between you simply do not know. The difficulty this causes shows up in the electromagnetic torque that is generated. Every time the conduction changes from one switch pair to the next — from 1, 6 to 1, 2, for instance — there is a disturbance: if you plot the generated electromagnetic torque versus time, it does not pass smoothly through the commutation instant; invariably there is a dip around it. After 1, 2 you go to 2, 3 and there is another dip, then to 3, 4 with another dip, and so on; in some cases, depending on the speed, you may instead see the torque rise at the commutation instant. So what you have is a ripple torque whose frequency depends on the speed, and this is one difficulty with this motor: you cannot avoid the ripple, because one phase is being switched out and a new phase brought in, and during that handover there is a disturbance in the generated torque. If this ripple is not going to cause any problem — it is only a ripple in the generated electromagnetic torque, and whether it has an impact on the speed and rotor position depends on the inertia connected and the kind of system — then it is acceptable; for low-performance applications it is just fine. But when you go to really high-performance applications, where the rotor angle needs to be controlled accurately, this sort of ripple will cause a deviation in the rotor angle and may not be desirable at all. In such cases you do not use this machine; you go for a permanent magnet synchronous machine, generally abbreviated PMSM. The BLDC motor has a ripple torque, and a ripple like this can affect the accuracy of the rotor angle: you are applying force to the rotor, and all of a sudden there is a decrease in torque, so for a moment you cannot move the load with the same effort; the rotor angle does not advance at the same rate and there may be a sudden lag. Whether that matters — whether the rotor actually slows down during that time — depends upon what moment of inertia is rotating.
If you have a large enough moment of inertia, a small dip in torque will not be felt at all; the inertia simply carries the rotor through. So if there is a large inertia, or if you do not mind the small speed dip, there is nothing wrong with this kind of system. For a simple, low-precision application like a drone, the propeller has to rotate at by and large a fixed speed, and a slight lag for a few milliseconds may not matter at all. But consider certain other applications, say machine tools. In a machine tool you are required to locate the job accurately: all of you have cell phones, and the outer shell of the phone, made of aluminium, is produced by taking an aluminium block and removing the unwanted material to create the required shape. Metal is removed by a cutting tool, and for that the block has to be positioned very accurately so that the finish and the shape are exact. In such a situation, a dip in the generated electromagnetic torque will have an impact on the dimensional accuracy of the finished job, and this kind of ripple is definitely not acceptable; one needs to go for a higher-accuracy system, which is the PMSM. Construction-wise it is the same as the BLDC variety: the field arrangement is on the rotor and the armature on the stator, no change in that; but where the BLDC machine has a flat-topped induced emf, in the PMSM the induced emf is sinusoidal — a sine wave with respect to time. This brings more difficulty in operating the machine, and to understand why, we need to go back to the BLDC machine and look at, say, the interval where switches 1 and 6 are on. If 1 and 6 are on, how does the machine look? You have the DC source; the circuit goes through device 1, into the R-phase of the machine, comes out through the Y-phase, and back through device 6 — that completes the circuit, with all other devices off, so the B-phase is completely open. Drawing the equivalent circuit for that interval: the DC source, device 1, the R-phase, the Y-phase and device 6 in a loop, and in this region the induced emf of the R-phase is greater than zero while the induced emf of the Y-phase is less than zero, which means the R-phase emf is marked with its positive terminal at the line end and the Y-phase emf, being negative, with its positive terminal at the star-point end. This is the simple equivalent circuit of the inverter–machine system during that particular interval. Student: Are the two emfs in the same direction?
Professor: Pardon? Student: Are the two emfs in the same direction? Professor: Yes, they are, and here is why — you see that the Y-phase emf is negative, and how do we interpret a negative emf? Think of the machine windings: there is an R-phase winding with many turns in the armature, a Y-phase winding and a B-phase winding, and the way the machine is wound, one end of all three phases is shorted together inside the machine, while the other ends are brought out as the R, Y and B terminals to which you connect the inverter. These phases have induced emfs, and when we say the induced emf of a phase is negative, we mean that the internal (star-point) end is positive with respect to its external terminal; when the emf is positive, the external terminal is positive with respect to the star point. That is exactly how the two emfs have been drawn in the equivalent circuit, so around the loop they act in the same direction. Now, with this circuit, the source is DC and the two emfs together behave as another DC source, and there will certainly be some resistance from the many turns of wire in the armature; therefore the flow of current is also DC — it is the DC supply voltage minus the total back emf, divided by the total resistance. So during the first 60-degree interval, when the R-phase emf is high and the Y-phase emf is low, the current flowing into the machine is also flat; the fact that the current is steady, ideally, is ensured by the fact that the circuit is a DC circuit, with nothing in it undergoing a change. That is one of the reasons why the machine looks like a DC motor. But when you come to a sinusoidal emf, this no longer applies: you do not have a DC voltage behind the terminals but a sinusoidal one, and that brings about certain other differences in operation, which we will briefly see in the next class.
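For a feel of the numbers (all values here are my own, purely illustrative), the steady current in that equivalent loop follows directly from the loop equation during one 60-degree interval.

# loop during the interval when devices 1 and 6 conduct:
# V_dc = i * (2 * R_phase) + e_R - e_Y, with e_R = +E and e_Y = -E on the flat tops
V_dc = 24.0    # assumed DC link voltage, V
E    = 8.0     # assumed flat-top emf magnitude per phase, V
R_ph = 0.5     # assumed resistance of one phase winding, ohm

i = (V_dc - 2 * E) / (2 * R_ph)
print(i)       # about 8 A of steady current during the interval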
Introduction_to_Robotics | Lecture_210_Inverse_Kinematics.txt | Good morning, welcome back. So, we will start the discussion on Inverse Kinematics today. In the last few classes we talked about the forward kinematics, starting from the coordinate transformation matrix and how do we go to identify the DH parameters and then using DH parameters how do we get the transformation matrix and then use it for forward kinematics analysis. And we saw few examples in the last few classes and see how to use the forward kinematics for getting the position and orientation of a manipulator that can be expressed as a function of the manipulators parameters, the joint parameters as well as the link parameters that is what we discussed. So, today we will start the inverse kinematics problem by briefly explain to you what is inverse kinematics problem. So, we will see the details of this problem and how that can be solved for manipulators. So, what we are going to cover in this is basically the problem definition, so we will try to define what is the inverse kinematics problem all about. And then we will see something called solvability, so it is not that necessary that always the inverse problem can be solved, so it can be solved under certain conditions only. So, we will see the existence of solutions and then the problem of having multiple solution, so some cases there would not be any solution and some cases there will be multiple solutions, so we need to see what is that solvability. And then we look at the solution strategies, there are different strategies to solve this, so we look at the closed form solutions, either algebraic or geometric solution and then we look into the, I mean then there is another method called numerical solution, we will not go into the details of numerical solution just for to tell you that that also is a method for solving inverse kinematics, but we will be talking only about the closed form solution. And then we go into the velocity relationship, of course that is not the inverse kinematics part, so we look into the velocity relationship, that is the relationship between the joint velocities and the tip velocities, there we discuss about the Jacobian and the singularity. And finally, we will talk about the statics also, so the force relationship under static condition. So, that is going to be the discussion for the next few classes. Now, we already mentioned about the inverse problem, what is the manipulator inverse problem. As you know the manipulator tasks are normally formulated in terms of the desired position and orientation, so as I told you, so I have an object here, I have an object here, this is an object, now I have a coordinate frame assigned to this object, so that I know the relative position and orientation of the object with respect to the base of the robots. Now interest is to see how can I use this robot to pick this object from here and you to pick this object from here and place it somewhere here, I want to place it somewhere here and I know this position also, so I can define this position where I want to place. So, I have this object in this position and I want to pick up this using this robots, so if I had to bring this tool to this position, what should be my joint angles is the problem in inverse kinematics. So, that is the way normally are the manipulator problems are formulated. So, we knowing the joint angle and get where to where will it reach the forward kinematics is useful but not in the practical scenarios like this. 
Here we are interested to know, what should be the joint angles to reach here and similarly what should be the joint angles for the tool tip to reach here to drop this point object here. So, that is what is the requirement in the practical application. So, that is the what says that they are normally formulate in terms of the desired position orientation. So, we have a desired position and orientation for the tool and we want to know how can I reach the how can we reach that position and orientation using the joints. And that is the inverse problem that is of interest to us. So, we have something from here to pick and then somewhere else, so how to reach these positions are the interest to us. And a systematic closed form solution applicable to robots in general is not available, that is this problem cannot be generalized, because each robot will be having its own way of reaching that position. And solving that position or getting the joint positions to correspond to that position cannot be generalized because of the robot configurations, we will see why the what the problem is later, but there is a problem here, it cannot be generalized and unique solutions are rare and multiple solution exist. So, many times we will be having multiple solutions, that mainly because suppose you have a point here or assume that this object is here and you want to reach this point, so you can actually reach from here, you can actually reach from here and reach here, or you can actually reach like this also. So, it can actually have an elbow-up or elbow-down kind of a solution. So, this kind of multiple solutions are very common in inverse. And because of all these you have this inverse problem as a very difficult then the forward problem. So, forward problem was very generalized for any set robots configuration we can have a general solution and therefore we are not able to we are able to get a generalized solution. But in the case of inverse, this is not possible, because of a various reason which we will see later and unique solutions are not there, so all this actually leads to the inverse problem as a complex one compared to the forward problem. So, now if you want to solve this inverse problem, so we can look at what is happening in the forward relationship, so we have this arm matrix, which we discussed in the forward relationship, which represents the position p and orientation of the tool in base coordinate frame as a function of joint variables q. So, now when we do a forward kinematics where q is the joint variable here, so q the q the set of joint variable, then we can actually represent the position p, so this position vector p and it's orientation, orientation of this tool frame can be represented using this rotation matrix and the position vector Rq and pq rotation matrix and the position vector can be used to represent this. Now, what is this p? p is basically the position of this point, position of this point p and represented, in terms of Px, Py and Pz, so we represent it as Px, Py, Pz and then we have a relationship Px is something, py is something, this is what we get in the forward relationship and this will be a function of the q. So, we know that it is a function of q depending on the type of robot, it will may be a function of q1 to qn. And this also is a function of q1 to qn. Similarly, R also we have this as a vector, I mean three by three matrix, so you have the normal approach and sliding vectors, so you can call it as nx, ny, nz like that. 
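To make this forward mapping concrete, here is a minimal Python sketch (illustrative only, not from the lecture) for a 2-link planar arm: given the joint variables q it returns the position vector p and a rotation matrix R, which is exactly the q to (p, R) relationship that the inverse problem has to invert. The link lengths l1 and l2 are assumed example values.

```python
import numpy as np

def forward_kinematics(q1, q2, l1=1.0, l2=0.8):
    """Return (R, p) of the tool frame for a 2-link planar arm.

    q1, q2: joint angles in radians; l1, l2: assumed link lengths.
    """
    phi = q1 + q2                                 # tool orientation in the plane
    R = np.array([[np.cos(phi), -np.sin(phi)],
                  [np.sin(phi),  np.cos(phi)]])
    p = np.array([l1 * np.cos(q1) + l2 * np.cos(phi),
                  l1 * np.sin(q1) + l2 * np.sin(phi)])
    return R, p

R, p = forward_kinematics(0.3, 0.5)
print(p)     # the (Px, Py) that the inverse problem would start from
```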
And each one also we saw that this also is a function of joint variables, so nx is equal to function of joint variables, ny is a function of joint variables like that. So, we have these equations available us, now the question is, how can we actually use Px? Suppose we know Px, how can I find q1 q2 q3? So, we have to start from the forward relationship and then solve this relation to find out the q. So, that is the requirement in inverse kinematics. So, given the desired position p and orientation R of the tool, find values for the joint variables q which satisfy the arm equation? So, now if suppose this p is given to you, now you want this p to be reached here, you say that okay this is the p which I want to reach and I have an orientation like this, if this is known, if this p and this Rq is known, how can I find this q corresponding to this p and R? That is the inverse problem. So, given a desired position p and orientation R for the tool find values for the joint variables q which satisfy the arm equation? So, this is the inverse kinematics problem. So, if I to represent this relationship between the direct and inverse kinematics, you can see that if you have the joint space parameters q, if you know this q1 to qn that is we called as joint space, joint variables. Now, we use the kinematics or the forward kinematics to get the tool configuration space Rn, which actually is the p and R, that is the position vector and the rotation matrix can be obtain. So, for any value of q, you will be able to get the position and orientation using the forward kinematics we called it as a tool configuration space. Now, if you apply this inverse kinematics to this suppose you have, do you know p and R then you apply inverse kinematics and you get the joint space variables. So, use this tool configuration space information using inverse kinematics, then get the joint space is the inverse problem. So, you can see that if we want to solve the inverse kinematics, we need to have a direct kinematics relationship, because that direct kinematics relationship is the one which is used to solve the inverse kinematics problem or we need to know the forward relationship of the manipulator to solve the inverse kinematics, the arm matrix should be known to us. So, now look at the solubility of this relationship, so as we know if p and R are known then we in use some method to solve for q, so that is the inverse requirement. Now, see how the p and R are represented, suppose we have this relationship for the arm matrix, it says that px, py, pz and nx, ny, nz and sx, sy the normal sliding and approach vectors normal sliding and approach vector and the position vector is the one which is given to us, because we assume that we know this or this is known to us. And we want to solve for the joint variables which corresponds to that one. Now, in the case of normal case of forward kinematics, how is it represented? This Px, Py, Pz will be represented like this, for example I taken one robot configuration and for that robot we will see that Px is equal to this one and Py is equal to this one and pz is equal to this one. So, we can see that there are can be three relationship three equations can be written for that particular manipulator we have a relationship at Px is equal to C1 multiplied by something, Py is S1 multiplied by something and Pz is 215 minus something something, so that is the three equations that you can have. 
Now, similarly we can write ax is equal to minus C1 S234, ay is equal to minus S1 S234, az is equal to minus C234. So, like this we will see that we can actually get 9 equations from here and 3 equations from here, so we have totally 9 plus 3, 12 equations from this matrix relationship. The arm matrix is given, so Px is equal to this, Py is equal to this, and like this we have 9 plus 3, 12 equations. And using these 12 equations, we need to find the joint variables q1 to qn. So, you have q1 to qn to be found out, because if Px is given, suppose I have Px is equal to 170 and Py is equal to 50 and Pz is equal to 50, now I have to write 170 is equal to this, 50 is equal to this and 50 is equal to this, and solve it to find out what will be the values of theta 1, theta 2 etcetera which satisfy this condition. So, that is the way we need to look at this. So, we have 9 plus 3, 12 equations and n unknowns; n may be 6 or 5 or whatever it is. So, we will see that there are 12 equations and you have n unknowns to solve. So, things seem to be simple, because we have 12 equations and we have only n unknowns, and this n will normally be 6 or 5 or whatever it is. So, that seems to be an easy problem to solve. But if you look at this closely you will see that there are 12 equations and n unknowns, with n equal to 6 for a 6-axis robot. If you look closely, these three equations for Px, Py, Pz are independent, because you can actually place the end effector anywhere in the x y z space within the workspace; Px, Py and Pz have no direct relationship between them, wherever you want to place it you can place it. So, these 3 equations are independent, you have 3 independent equations here. But these 9 equations are not independent. The reason is that they actually represent three vectors, the normal vector, sliding vector and approach vector, and those are orthogonal unit vectors. But out of these, nx, ny, nz are not independent; there is a relationship between nx, ny and nz to satisfy the unit vector condition. So, nx square plus ny square plus nz square is equal to 1, similarly sx square plus sy square plus sz square is equal to 1, and ax square plus ay square plus az square is equal to 1. Those are the unit vectors in the three orthogonal directions. So, these three equations that you are seeing here are not really independent, and similarly these three are not independent, and these three are not independent. So, effectively we have only one independent equation from here, one from here and one from here. So, we can have only three independent equations from the rotation part; this rotation matrix gives you three independent equations, and though we have nine equations, since they are not independent what we can use is only three independent equations. So, these three independent equations from this part and the three independent equations from the positioning part give you totally 6 equations, so you can actually get 6 independent equations from the arm matrix. So, using the arm matrix you will be able to get 6 independent equations to solve for the n variables. So, we have 6 equations and n variables, so if n is equal to 6 or less than 6 we can solve the problem.
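The claim that the nine rotation entries carry only three independent pieces of information can be checked numerically; the sketch below (illustrative, any rotation parametrization would do) builds a rotation matrix from three angles and verifies that its columns, the n, s, a vectors, satisfy the six unit-norm and orthogonality constraints.

```python
import numpy as np

def rot_zyx(a, b, c):
    """A rotation matrix built from three angles (illustrative ZYX composition)."""
    ca, sa, cb, sb, cc, sc = np.cos(a), np.sin(a), np.cos(b), np.sin(b), np.cos(c), np.sin(c)
    Rz = np.array([[ca, -sa, 0], [sa, ca, 0], [0, 0, 1]])
    Ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])
    Rx = np.array([[1, 0, 0], [0, cc, -sc], [0, sc, cc]])
    return Rz @ Ry @ Rx

R = rot_zyx(0.4, -0.2, 1.1)               # columns are the n, s, a vectors
print(np.allclose(R.T @ R, np.eye(3)))    # True: unit norms plus mutual orthogonality
# R.T @ R = I packs six scalar constraints, so the nine entries of R carry
# only 9 - 6 = 3 independent pieces of information.
```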
But when you see n is equal to 7 you would not be able to solve because you have only 6 equations and 7 unknowns cannot be solved. So, you need to choose one of these you have to select and then you have to get other 6, so that actually leads to multiple solutions depending on the values of the first assumption. So, effectively we have only 3 independent equations and 6 independent equations and n unknowns and nonlinear they are nonlinear equations and then difficult to solve. So, 6 independent equations and n unknowns and nonlinear equation, we need to solve them in order to get find out the joint variables, but they are not that easy to solve because they are nonlinear equations and n is more than 6, again you will have difficulty. So, that is the difficulty with the inverse kinematics, but that actually tells you the solvability aspect of the inverse problem, we have 6 independent equations and n unknowns and the relationships what you are seeing here are the nonlinear equations and not that easy to solve. Now, we have to see how to address this problem for manipulators. So, first thing we look at this, is there a solution existing or not? So, we know that there are 6 equations and 6 unknowns, suppose if 6, n is equal to 6, so we have 6 equations and 6 unknowns, so we can actually solve the equations that is possible, but if there is a solution existing or not? If there is a solution existing then only we can solve it, so if we have to solve using the 6 equation first thing we need to ensure is that, yes there is a solution existing for it and therefore start solving it. And how do we ensure that there is a solution exists? So, we look at this a manipulator is solvable if all the sets of joint variables can be found corresponding to a given end-effector location. So, a location is given to you and if you can find the all the joint angles corresponding to that location, then we call it as a solvable one, that means we need to get all the sets of joint variables which can actually reach to that position and orientation. So, then if we can find it then we call this solvable. Now, there are few conditions under which we can say this is possible, so not all positions are solvable and there are some position which cannot be solved also. So, first we look at, what is the necessary condition to have a solution? That is if you want to have a solution, what is the minimum condition to be satisfied? What is the necessary condition to be satisfied? The first point is that suppose you have a robot like this, suppose this is the robot, i will say it's workspace is something like this, just roughly mentioning, this is the workspace that the robot has. Now, it is a program and I may not be knowing that what is the workspace it can reach or what is the all the points I may not be knowing, suppose I give a position here may be it may be possible to solve, but if I say that I want to reach this position then it is definitely not possible to reach this position because it is beyond the workspace of the manipulator. So, the necessary condition to have a solution is that it should be within the workspace that is without that condition you cannot never can never solve the problem. So, first condition is that the tool point within the workspace. So, the tool point what you want to solve should be within the workspace, that is the first condition. 
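The workspace condition is easy to state in code for the simple planar two-link arm used later in this lecture; the sketch below (link lengths are assumed example values) just checks that the target distance from the base lies between |l1 - l2| and l1 + l2.

```python
import math

def in_workspace(px, py, l1=1.0, l2=0.8):
    """Necessary condition only: the target must lie in the annular workspace
    of a 2-link planar arm with assumed link lengths l1, l2."""
    r = math.hypot(px, py)
    return abs(l1 - l2) <= r <= l1 + l2

print(in_workspace(1.5, 0.5))   # True: reachable, an IK solution can exist
print(in_workspace(2.5, 0.0))   # False: beyond l1 + l2, no solution exists
```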
Now, we know that if I want to reach this position, I will be able to reach this position and if I want to get a particular orientation also, so I need to have orientation desired orientation and desired position, then I know that there should be 3 degree of freedom for positioning and I need to have another 3 degree of freedom for orienting, that means I can get any orientation and any position within the workspace using these 3 plus 3 degree of freedom that is 6 degree freedom. Suppose I have only 5 degree of freedom, suppose I do not have I mean you have only 5 degree of freedom, then it is clear that I would not be able to reach the orientation needed, I can use the first 3 degree of freedom to reach the position, but then I have only now left with only 2 degree of freedom, out of 5, 3 used for positioning, so I have only 2 degree of freedom for orientation, that means I would not be able to get all the orientations I can control 2 orientation and the third one will be depending on the other 2. So, I cannot really say all the three orientations I want and then only have only 5 degrees of freedom, it is not possible, then you would not be able to solve. So, that is the condition, if n is greater than or all 6 to have any arbitrary orientation, so if I had to have any arbitrary orientation of the tool, I need to have minimum 6 degrees of freedom or more than 6 degrees of freedom. So, if n is equal to 6 or more, then I can have all the arbitrary orientation. And if I have only 5 degrees of freedom, then I say that I need to get all the orientations, it is not possible, that means I would not be able to solve for any arbitrary orientation if there is a there is only 5 degrees of freedom or number of degrees of freedom is less than the less than 6. So, that is the second condition necessary condition to have the solution. So, first condition is that the tool point should be within the workspace. The second condition is that end should be 6 if I want all the orientations achieved. Now, another condition is that the tool orientation is such that none of the joint limitations are violated, so all the physical all the robots will be having some physical limitations in joint movement. So, there your wrist will be having roll pitch and yaw axis and each one will be having some joint limits, so maybe roll can have all the 360 degree, but pitch and yaw cannot have a 360 degree rotation because of the physical limitations. Now, if your orientation says the orientation what you want actually can be achieved if the joint limitations are violated or will be a getting a solution, but if you say that the joint has limits then you would not be able to get the all the desired orientation. So, we need to ensure that the tool orientation such that none of the joint limitations are violated. So, in these three conditions, these three necessary conditions to be satisfied, if you want to get a solution, or you to have a to make the manipulator solvable. Now, that is the necessary condition, so there are two kinds of solutions, so if you know that this these necessary condition is satisfied, then we can actually solve for this, we can solve for the solutions. And the two methods of solutions are one is known as closed form solution, the other one is numerical solution. 
So, closed form solution is that you can use the algebraic expression or the analytical expression that you already developed using forward kinematics, so you can use these analytical expressions and solve for it, that is basically known as the closed form solution. So, the analytical method can be used to get a solution then we call it as a closed form solution. So, analytical method can be either this equations or you can have a geometry method also, so two methods are there either you directly solve this algebraic equations or you go for a geometrical approach and then find out the possible joint angles which will provide the position and orientation. So, that is the closed form solution and the other one is known as the numerical solution where you do an iterative you search and then find out the solution. So, you start with some value for each joint angle and then search all other joint angles which can actually lead to the desired position and orientation. And if you do a search and at some point you will find that this meets the requirement. So, that is the numerical methods which is an iterative and time-consuming methods. So, whenever the closed form solution is not possible we have to go for time-consuming method of numerical solution. Similarly, when you have large number of degrees of freedom you have multiple solutions then also you may have to go for a numerical solution to get most feasible solution. So, how do we know that there is a closed form solution existing or not? That is the second question, so the first question was that whether it can be solved or not. Now, if it can be solved, which method to be used whether go for a closed form solution or a numerical solution. And to get this closed form solution we call there is a sufficiency condition, we say that there is a sufficiency condition to get a closed form solution. So, any manipulator can be solved for its inverse using closed form methods if it s satisfy the conditions given here, that is known as the sufficiency condition. Sufficiency condition says that, if three adjacent joined axes are intersecting or three adjacent joint axes are parallel to one another, then we will be able to solved the inverse kinematics using the closed form methods. And if this condition is not satisfied then you would not be able to solve it using closed form method you need to go for a numerical method, that is the situation or that is the condition that says sufficiency condition where you can solve this using closed form methods. And what does it say? So, suppose you have this axes, like this, you have one axes here, one axes here and one axes joint here and all axes are parallel then all the adjacent joint axes, so all the adjacent axes all the three adjacent joint axes are parallel then you can have a closed form solution. Otherwise if the three adjacent joint axes, suppose you have one axes here and then one axes like this and one axes at this point which actually again intersect the this three, suppose you have one axes and another axes here which actually like this, than these three axes are intersecting, if the three adjacent axes are intersecting then you will be having a closed from solution. So, this is the necessary sufficiency condition to get a closed form solution. 
So, whenever you get a manipulator to be solve for inverse kinematics first thing you need to check is whether the closed form solution existing or not, the closed form solution is not existing then you would not be able to solved it using algebraic or geometric methods you have to go for numerical method only. And form where this condition comes you can actually see later when we try to solve, so probably if you remember the forward kinematics relationship, if there are three adjacent joint axes, you might have seen that it actually comes like theta 2 plus theta 3 plus theta 4 format or we write it as theta 2, 3, 4. So, that is the form it was coming in the arm matrix or the arm equation and that is the reason why we need to have this kind of parallel axes, so that you will be getting it this format, so it can actually be solved. They are not this way then you will be getting 2, 3 and then theta 4 as another one then you have to solve for 2, 3 and 4 separately that actually leads to problem in the solution. So, this will be more clear when we try to solve the equations, but the relationship comes from is this condition comes from that relationship develop or it actually happens in the forward kinematics. Similar, is the case with the intersection also intersecting the joint axes, so these are the necessary and sufficiency condition for getting the solution. So, necessary condition for existence of solution and sufficiency condition for existence of closed form solution. And as I mention there will be a multiple solutions, so when it is a redundant robots, so when you have n is equal to 7 or more then we called it as a redundant robots, because what we need in space is 6 degree of freedom and if you have more 7 degrees of freedom or 8 degrees of freedom, controllability of freedom then it becomes a redundant robot. So, whenever the robot is redundant then you have multiple solutions, because you can 6 only use can solve, so 7, 8 and all you need to make assumptions and then for each value of theta 7 or theta 8 you have to find for theta 1 to theta 6, so that actually leads to large number of multiple solutions. And another one is the elbow-up and elbow-down solution which I already mention so you have an elder elbow-up solution like this so you can actually reach this position form here or you can actually reach the position form here, this is the elbow-up or elbow-down solution, that also leads to multiple solution. So, always there is no unique solution you will be most of the time you will be having multiple solution for the inverse problem, that talk about the basic requirement for the system to be solvable. Now, as I told you in the closed form solution algebraic and geometric approach is there, so the algebraic approach is, obtain the equations from the matrix, so the 12 equations that we already mentioned, so obtain the matrix equations the scalar equation from the matrix and then try to solve them that is the algebraic approach. And in this geometric approach we will not be discussing because geometric approach normally is more of a graphical approach so you know the position and orientation to be reached and you know the link length and other orientation, other parameters, so you try to geometrically construct the manipulator position and orientation to see what position of this joints this will actually reach the required position. And then find out the joint angles corresponding to that. So, we will not be going to that part we will be looking only at the algebraic approach. 
So, here we do this by getting the scalar equation from the matrix form, so as I mention there will be 12 equations, so we can actually write down 12 equations using the matrix that is the arm matrix. Now, these 12 equations always in the I mean there are nonlinear equations and consist of trigonometric identities parameters and therefore we need to use some tricks to solve them. So, they cannot be solved very easily, there are a lot of difficulties in solving these 12 equations, so we use some trigonometric identities to combine two equations, first square them and then add them eliminate certain variables. So, there are different tricks to be used for example, you have something like a cos theta plus b sin theta and then we can solve a1 b1 like this and then probably you can actually when you add them cos square theta plus sine squared theta will come and then we will be able to eliminate some of this parameter that is what actually says. So, such as first square them and add them and eliminate certain variables, so the trigonometric identities are like sin square theta plus cos square theta is equal to 1. So, such trigonometric identities can be used and we can actually get them by combining equations, so you square them add and you will get this and then you can eliminate some of the variables that is the trick one we can use. And then the trick to be used is that you use the substitution, so you can substitute for this trigonometric variables with some other and then convert them into polynomials. So, the trigonometric equations can be converted to a polynomial and then solve the polynomial that is another way to solving. So, for example, you substitute u is equal to tan theta by 2, then cos theta can be written as 1 minus u square by 1 plus u square and sin theta can be written as 2u by 1 plus u square. And then whenever the cos theta sin theta is coming you substitute these parameters these values and then convert that into a polynomial equation and then solve the polynomial and then get the theta. So, finally we need to get theta, that is the trick 2 and then another trick to be used is that find out expressions for both sin and cos of joint angle theta, so you write sin theta you find out what is sin theta, suppose you want to sin theta 1 is equal to k and then you find out cos theta 1, so cos theta 1 you try to write it as the square root of 1 minus k square, so that will be cos theta 1. So, you have sin theta 1 and cos theta 1 and then use a function called A tan 2 to get the theta value. So, A tan 2 sin theta cos theta, so we write it as a function called A tan 2 function normal are tan is not used but instead you say A tan2 function. And that can actually give you a unique expression for theta. So, when you use the normal arc tan, so theta can actually be anywhere here, but then when you use a arc tan it will not clearly tell in which quadrant it lies, because arc tan value will be the same but the quadrant will not be specified. But if you use an arc tan 2 function Atan2 function you will be able to clearly tell which quadrant it lies, so you will be able to get the theta value uniquely defined using arc tan 2 function, I will explain this when we consider the examples, how to use the arc tan 2 function, what is the importance of arc tan 2 function in solving this equation. So, to solve these equations we need to use some of these trigonometric identities, or trigonometric tricks which actually helps us to simplify the equations and then solve them without much of difficulty. 
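The half-angle substitution of trick 2 can be carried out symbolically; the sketch below (using sympy, with a, b, c as generic coefficients) turns a cos(theta) + b sin(theta) = c into a quadratic in u = tan(theta/2), whose two roots give the candidate joint angles.

```python
import sympy as sp

u, a, b, c = sp.symbols('u a b c')

# Trick 2: with u = tan(theta/2),
#   cos(theta) = (1 - u**2) / (1 + u**2),  sin(theta) = 2*u / (1 + u**2),
# so a*cos(theta) + b*sin(theta) = c becomes a polynomial in u.
expr = a * (1 - u**2) / (1 + u**2) + b * 2 * u / (1 + u**2) - c
poly = sp.expand(sp.cancel(expr * (1 + u**2)))
print(sp.collect(poly, u))   # a quadratic: -(a + c)*u**2 + 2*b*u + (a - c)
# Solving the quadratic for u and taking theta = 2*atan(u) recovers the two
# candidate joint angles.
```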
So, that is about the method or the way to solve the inverse problem; so let us take a very simple example of a 2 degree of freedom planar manipulator. So, this planar manipulator is given with a link length l1 and a link length l2, let us say, and this angle is assumed to be theta 1 and this one is theta 2, and as I mentioned, the first thing to do to solve the inverse kinematics is to get the forward kinematics relationship, and that will give you the arm matrix. So, we will do the same process, we assign the coordinate frames X0, Y0, X1, Y1, X2, Y2 etc. and then see how to get the forward kinematics. So, now we need to get this forward relationship; if we take this as Px, so the Px is actually this one, this is the Px and then this is the Py. Since this is a simple 2 degree of freedom planar manipulator, we will be able to write this Px and Py very easily, so Px can be written as l1 cos theta 1 plus l2 cos theta 12. Similarly, Py can be written as l1 S1 plus l2 S12. Now, we need to solve for theta 1 and theta 2, so we need to find out, for any value of Px and Py, how you can get theta 1 and theta 2; that is the position part, we are looking only at the positioning part here just to give a feel of how we solve it. So, theta 1 and theta 2 are to be solved. So, here is equation 1 and this is equation 2; now how do we solve for theta 1 and theta 2, because we have only these two equations, l1 C1 plus l2 C12 and l1 S1 plus l2 S12. So, what do we do? We will try to use some of the tricks to solve this. So, one of the tricks is basically to square and add these two equations; if I square and add them, then I will be able to write Px square plus Py square is equal to l1 square plus l2 square plus 2 l1 l2 C2, because the l1 C1 squared and l1 S1 squared terms combine to become l1 square, similarly you get l2 square, and the cross terms give 2 l1 l2 C2. And we can get C2 as Px square plus Py square minus l1 square minus l2 square, divided by 2 l1 l2. So, we have solved for C2. So, once you have C2 we can find out theta 2; but instead of using cos inverse directly, we do not use a cos inverse to get theta 2 because that will not tell you in which quadrant the theta lies, and out of the four quadrants we want to know clearly in which quadrant theta lies. And therefore what we do is we will try to find out S2 from here, so S2 can be obtained as plus or minus the square root of 1 minus C2 square; that is the way we can get S2. And once we have S2 and C2, then we get theta 2 as atan2(S2, C2). So, this is the way we can get theta 2, and why are we doing this: because as I mentioned, the function atan2 denotes a four-quadrant version of the arc tan function, it allows us to recover angles over the entire range of minus pi to pi, so it will clearly tell what the value is between minus pi and pi. And how does this atan2 function do this? Suppose you take atan2(y, x); it will look at the values of x and y and then decide in which quadrant the angle lies. So, if x and y are both 0, then it is undefined; if x is greater than 0 and y is equal to 0, then atan2 is 0; if x is less than 0 and y is 0, then atan2 is pi; if y is less than 0, then the angle lies between 0 and minus pi; and if y is greater than 0, then it lies between 0 and pi.
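A quick numerical check of that quadrant behaviour (standard library only): points in opposite quadrants give the same ratio y/x, so plain atan cannot separate them, while atan2 of the pair can.

```python
import math

# Points in opposite quadrants give the same ratio y/x, so plain atan cannot
# separate them; atan2(y, x) looks at both signs and returns the angle in the
# correct quadrant over (-pi, pi].
for x, y in [(1, 1), (-1, -1), (-1, 1), (1, -1)]:
    print((x, y),
          " atan:", round(math.atan(y / x), 3),
          " atan2:", round(math.atan2(y, x), 3))
# (1, 1) and (-1, -1) both give atan = 0.785, but atan2 returns 0.785 and
# -2.356 respectively, so the two quadrants are distinguished.
```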
So, this way it clearly tells you which quadrant it lies between 0 and minus pi or minus pi and 0 and pi and 0 and minus pi. So, this ay you will be able to get the theta clearly defined which is in this four quadrants, so that is why we go for a tan 2 function instead of going for a cos inverse or sin inverse or simple a tan function. So, this is the way we need to get theta 2, so we will get theta 2 using the a tan 2 function. So, S2 is plus or minus square root of 1 minus u 2 square and theta 2 is a tan 2 S2 C2. And now we can see because there will be two values for S2, you can get two values for S2, so one is for plus and one is minus. Now, you will be getting two values for theta 2 also, you get theta 2 for one value of S2 and theta 2 for another value S2. Now, if both the values are actually lying within the joint limits of the manipulator, then you have to take both the values as a solution and if is outside the joint limit, then you can eliminate and then proceed to select the find the next joint angle. But if both are in the joint limits somewhere both happens to be in the joint limit that shows that there can be two ways the theta 1 value and theta 2 in this case and you have to use both the values for solving the next joint angle, so you will be getting multiple solution that way. So, that is how do you get who you get the theta 2, now you need to solve for theta 1, so you have to substitute for this Px value, Px, l1 C1 plus l2 C12 that relationship and then solve it for the joint angle next joint angle. So, we will take a more complex one and then go to the solution, get this one I just want to tell you how do you use the tricks to solve for the joint angle so we will see look at solving for theta 1 and other joint mean if you have more joint angles how do you solve, so we will take more complex problems and then try to solve it. So, this is the three degree of freedom case, so we assume that there is another joint here which actually projecting out I mean this manipulator is projecting out from the plane and with the distance d3 and then you have this other joints 1, 2 here joint 1 joint 2 and you have the joint 3. So, here since it is projecting out and there is no other thing, so the orientation will be given by this, so always the axis is always vertical, so we will be having nx, sx, ny, sy and 0 0 1, 0 0 1 as the orientation matrix, orientation of the tool will be always in this format, nx ny xx xy and the position will be Px Py Pz. So, Px Py is the position and Pz is because of the d3 which is projecting up. Now, if we have to solve this problem we will see so first we will assign the coordinate frame X0 Y0, X1 Y1 and X2 Y2 and then tool tip will be having the final coordinate position X3 which is actually projecting out. So, we can see Pz will be always d3 because it is projecting out and then it's actually rotating, so it will be d3 distance Pz, so you can actually find out the parameters here the DH parameters, so first we need to solve the forward kinematics. So, to solve the forward kinematics we will find out the parameters a d alpha and theta, so here is l1, l2 is a and d3 is this is 3 and then alpha is 0 because all the joints are axis are parallel and theta is a variable, so theta 1 theta 2 there variables. 
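Before moving to the three-joint arm set up above, here is a sketch that pulls the two-link solution together, returning both branches of S2 as just discussed; the theta1 expression uses the k1, k2 substitution that the lecture develops a little later, and the link lengths are assumed example values.

```python
import math

def ik_2link(px, py, l1=1.0, l2=0.8):
    """Both inverse kinematics solutions (elbow-up and elbow-down) for the
    2-link planar arm; l1, l2 are assumed example link lengths."""
    c2 = (px**2 + py**2 - l1**2 - l2**2) / (2.0 * l1 * l2)
    if abs(c2) > 1.0:
        return []                                     # target outside the workspace
    solutions = []
    for s2 in (math.sqrt(1.0 - c2**2), -math.sqrt(1.0 - c2**2)):
        theta2 = math.atan2(s2, c2)
        k1, k2 = l1 + l2 * c2, l2 * s2                # substitution used later for theta1
        theta1 = math.atan2(py, px) - math.atan2(k2, k1)
        solutions.append((theta1, theta2))
    return solutions

print(ik_2link(1.2, 0.6))   # two (theta1, theta2) pairs; drop any that violate joint limits
```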
Now, if you write this forward relationship using the transformation matrices you will be able to find the forward relationship or the arm matrix, and it will be like this: your Px will be l1 C1 plus l2 C12 as we saw in the previous case, Py will be l1 S1 plus l2 S12 and Pz will be d3. So, this will be the position vector for the tool, or the position of this coordinate frame, given by the relationship l1 C1 plus l2 C12, l1 S1 plus l2 S12 and d3. Now, the orientation will be given by this matrix: C123, minus S123; S123, C123; and 0 0 1. And you have these three equations for Px, Py, Pz and then nx, ny, sx, sy, so you have 4 plus 3, 7 equations, and what you need to solve for is theta 1, theta 2 and theta 3. So, you have these three equations plus two equations here, so five equations are there, and we can actually solve it easily. And we will write this relationship now, so we will write Px is equal to this, Py is equal to this, Pz is equal to this, and then we will write nx is equal to C123, ny is equal to S123 etcetera. And similarly sx is equal to minus S123, sy is equal to C123; so this way we will write the relationship and then we will try to solve it. So, as we saw in the previous case also, Px we will be getting as l1 C1 plus l2 C12, Py is l1 S1 plus l2 S12, so if you square and add them you will be able to get theta 2; so theta 2 can be solved using this method. And then theta 1 2 3 can be solved using this relationship, because you have C123 and S123, so you can actually use that and find atan2(S123, C123), and you will be getting theta 123. So, theta 1 2 3 can be solved and theta 2 can be solved using these two; theta 2 can be solved directly from here and theta 1 2 3 can be solved from here, the way we saw how to get theta 2 in the previous case. Now, the question is, how do we get theta 1 and theta 3? So, these are the relationships: you have nx, sx, ny, sy and Px, Py. So, we have these 6 equations written here, and since Pz is a constant we do not write that as a separate equation. Now, squaring and adding 5 and 6, we get theta 2, so C2 can be obtained. And by using the same method that we discussed in the previous example, we can get theta 2 as atan2(S2, C2). So, theta 2 can be obtained. Now, using the other equations, and I already explained how you use atan2 to get theta 2, we can actually solve equations 5 and 6 for theta 1. So, now theta 1 is to be solved; theta 1 cannot be solved directly, so we need to have a substitution here. So what do we do? We do a substitution here saying that Px is equal to k1 C1 minus k2 S1 and Py is equal to k1 S1 plus k2 C1, where k1 is equal to l1 plus l2 C2 and k2 is equal to l2 S2. So we will make a substitution like this, k1 is equal to l1 plus l2 C2 and k2 is equal to l2 S2. And when you do this substitution, then we can write Px is equal to k1 C1 minus k2 S1, so the previous equation can be expressed like this because of this substitution. And then we again substitute k1 is equal to r cos gamma and k2 is equal to r sin gamma, where r is equal to the square root of k1 square plus k2 square and gamma is atan2(k2, k1). So, this kind of a substitution is needed in order to solve for theta 1, because we have this relationship Px is equal to l1 C1 plus l2 C12, and to solve that one we need to have a substitution like this.
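The substitution just introduced can be verified symbolically; the sketch below (sympy, illustrative) confirms that with k1 = l1 + l2 C2 and k2 = l2 S2 the forward-kinematics expressions for Px and Py collapse to k1 C1 minus k2 S1 and k1 S1 plus k2 C1, which is what allows theta1 to be pulled out with atan2 in the next step.

```python
import sympy as sp

t1, t2, l1, l2 = sp.symbols('theta1 theta2 l1 l2', real=True)

# With k1 = l1 + l2*cos(theta2) and k2 = l2*sin(theta2), the forward-kinematics
# expressions for Px and Py collapse into the k1, k2 form used above.
px = l1 * sp.cos(t1) + l2 * sp.cos(t1 + t2)
py = l1 * sp.sin(t1) + l2 * sp.sin(t1 + t2)
k1 = l1 + l2 * sp.cos(t2)
k2 = l2 * sp.sin(t2)

print(sp.simplify(px - (k1 * sp.cos(t1) - k2 * sp.sin(t1))))   # 0
print(sp.simplify(py - (k1 * sp.sin(t1) + k2 * sp.cos(t1))))   # 0
# Writing k1 = r*cos(gamma), k2 = r*sin(gamma) with r = sqrt(k1**2 + k2**2) and
# gamma = atan2(k2, k1) then turns these into r*cos(gamma + theta1) and
# r*sin(gamma + theta1), from which theta1 is extracted with atan2.
```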
And when you do the substitution, we will get Px by r is equal to cos gamma cos theta 1 minus sin gamma sin theta 1, and Py by r is equal to cos gamma sin theta 1 plus sin gamma cos theta 1, and from there we can get cos of gamma plus theta 1 as Px by r and sin of gamma plus theta 1 as Py by r. And since we know gamma, gamma is actually given as atan2(k2, k1), and k2 and k1 are known because k2 is l2 S2 and k1 is l1 plus l2 C2, both are known, so we will be able to get gamma; and then we get cos of gamma plus theta 1 as Px by r and sin of gamma plus theta 1 as Py by r. And using these two relationships we will be able to get theta 1. So, gamma plus theta 1 can be obtained as atan2 of Py by r and Px by r, and since atan2 depends only on the ratio and r is a constant, we can write gamma plus theta 1 as atan2(Py, Px), and therefore theta 1 is atan2(Py, Px) minus atan2(k2, k1); k2 and k1 are known, therefore theta 1 can be solved. So, this is the way we can get theta 1 solved, and once theta 1 is solved, theta 2 is already solved and we can get theta 1 2 3; theta 1 2 3 can be obtained as atan2(ny, nx) as I mentioned, and theta 3 can be obtained as theta 1 2 3 minus theta 1 minus theta 2. So, this is the way we can solve; we have solved for theta 2, theta 1 and theta 3 here, so all the joint angles have been solved for this particular robot. So, this is the way we solve the inverse kinematics problem for a manipulator. So, you may find it a bit difficult that you have to do a lot of substitutions here; so how do we know what kind of substitution is to be made? And this can actually be addressed by a different method where we do not really solve all these; we will use some standard trigonometric equations and then you can solve it. So, we will discuss that in the next class, how we can use standard methods to solve the inverse kinematics without going for too many substitutions; we will discuss that method of solving in a simple way and we will see a few more examples in the next class. Thank you. |
Introduction_to_Robotics | Lecture_51_The_Brushless_DC_Machine.txt | So, we had been looking at the DC motor operation and control and we said that the DC machine has certain difficulties due to which it is not really used much in the field now a days if you want to select a new machine, however the ability remains remains very much there, if you look at tools in your labs, because this is still the simplest machine that you can just take and evaluate some action that you got, you do not require elaborate setups and so on. If you just want to test something in the lab it is okay to connect a resistor in the armature in external resistance to the armature and attempt to adjust the speed, it will work. We will not be able to have a resistor always connected for the normal operation of the machine because it is inefficient, but if you are just looking at a few hours use in the lab they evaluate something now then it still okay. So, if you look at it from that part of view then the the machine is would be something definitely a of use, but not in the industry. In the industry where DC machines are used it is used because they have been historically used and we do not want to replace it, so we just go on with such situation, but as I said new installations people would rather go for something else. And the next best option is the one that is called as brushless DC machine. And if you see why brushless have been used in the DC machine at all, the main aspect why brushes have been made necessary is that you have a stationary. So, you require a brush and the commutator arrangement, because you have a stationary supply, stationary power source, which has to be linked to a rotating member, which has to energize a rotating member and it is the facilitated facilitate that, that you require this arranged. So, if you need to have a brushless DC machine, then it is necessary, so if it has to brushless, then it implies that stationary source is of course that all sources inherently are stationary, you do not have sources that keep rotating by themselfs. So, so so since sources are necessarily not going to a rotating, it then means they have to energize the stator only, difficulty comes because you have a stationary source and in the DC machine that needs to energize something on the rotor, so if you do not, if you want to avoid this arrangement of brush and all this, then the source must supply the stator, there is no other alternate. So, if the source has to supply the stator that means that the field arrangement should go on to rotor. Therefore, what you have is that you have the stator which contains the armature that means you are going to have that is the inner circumference of the stator and in the inner circumference of the stator you will have slots, so I just draw one slot here. And we had earlier for the DC machine we had earlier put we had earlier made an arrangement like this, where the armature conductors were placed around the circumference of the rotor, now instead what happens is that they would be placed around the circumference of the stator. So, we would now have conductors going into all these things, these slots and you have for example, a conductor here then this conductor goes all along the length, exits out of the cylinder and then it makes a connection with this conductor which similarly runs along the length and then comes up. And then there are interconnections in different ways in order to make flow of current happen through all. 
And you then have a rotor, you have a rotor and on the surface of the rotor you can put magnets. So, this phase of the magnet maybe north that is south, then this is south and that is north. So, it is then means that the magnetic field lines go out of the North Pole here and enter into the South Pole here and this rotor then contains a shaft which can then be connected to load and you can make the load rotate. So, this is the arrangement and if this is the arrangement what we see is that if this rotor is now going to rotate in some direction, then the field lines which are going to be linking this one turn, for example that is going to vary with respect to time and if it vary be with respect to time you have an induced EMF which is proportional to the rate of change of flux linkage. And since this rate of change of flux linkage is always going to be there as the rotor keeps rotating, so let us consider for example, you have this conductor which is there here goes along this length you have the return conductor, so this is connected and you have the field like this, at this particular instance when you have a magnetic field going upwards and then returning back like this. Therefore, in this case, flux cutting across the plane of the loop is equal to 0, because this loop if it has to have flux that is cutting across the loop then it has to flow this way. But in this rotor position, there is no flux flowing in the horizontal direction, all flux is going like this, therefore the flux linkage is 0. But if the rotor is now going to rotate and this S comes over here, then you will have all flux lines going in this direction, no flux lines going in the along the loop itself and therefore you have maximum linkage of field. And then when the rotor is going to rotate and S is going to come over here, then the field lines would be going like this and then again the linkage is equal to 0. And when S comes over here, then the flux lines will be going like this, then the flux linkage will be highest but in opposite direction. So, you have as the rotor is going to rotate, flux linkage goes from 0, to maximum, 0 to negative maximum, back to 0 and then goes up. Which therefore means that the induced EMF e will be an alternating EMF. Because d phi by dt is going to change signs, therefore since the induce EMF e is alternating this implies that you cannot connect a DC source to it, it is not feasible you have an alternating source you cannot put a DC source on it and get anything useful out of that. So, if this is an alternating source, then you want to connect an alternating source that is the only way and this alternating source as you can see the rate at which it is going to alternate one full full one full alternating cycle is going to depend on one full rotation of the rotor, what if the rotor is going to rotate by one full 360 degrees, then the flux linkage goes from 0 to maximum, 0 to negative maximum to 0. And therefore, how many times will it alternate per second? It will depend on the speed of the rotor, if the rotor of rotating slowly, then the number of alternating induced EMF s will over a certain durations will be smaller, if it is rotating at high speed when the number of times it will alternate will be both. So, you also see that since induced EMF is proportional to the rate of change of flux linkage, the EMF will be an AC waveform, frequency depends on speed, higher the speed, higher it is going to be, amplitude also depends on speed, d psi by dt. 
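Both of these dependences, the frequency just mentioned and the amplitude discussed next, can be seen in a short numerical sketch; it assumes an idealised sinusoidal flux linkage psi(t) = psi_max cos(omega t) with an arbitrary psi_max, purely for illustration.

```python
import numpy as np

psi_max = 0.05     # assumed peak flux linkage per phase, Wb-turns

def emf(t, omega):
    """e = -d(psi)/dt for psi(t) = psi_max*cos(omega*t)."""
    return psi_max * omega * np.sin(omega * t)

for omega in (100.0, 200.0):                      # electrical speed in rad/s
    t = np.linspace(0.0, 2 * np.pi / omega, 1000)
    print(f"omega = {omega:5.1f} rad/s   peak EMF = {emf(t, omega).max():5.2f} V"
          f"   frequency = {omega / (2 * np.pi):5.2f} Hz")
# Doubling the speed doubles both the peak induced EMF and its frequency.
```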
So, if the rate at which the flux linkage is going to change is more, then the magnitude of induced EMF is also higher. And therefore these things mean that you cannot put a DC source on this; as we said earlier, what we need to put is an AC source, that is, you need to connect an AC source whose magnitude and frequency can be controlled in synchronism with the induced EMF. It has to be done in synchronism with the induced EMF; you cannot supply a voltage of a different frequency to this motor, then it will simply not run. In this case it is very important to do that. And so how does this then work? So, one of the ways in which this induced EMF is generated is like this. These machines are best designed as 3-phase machines, that means all the different conductors that are put around the circumference of the stator are split into 3 groups, and each group is then called one phase. And therefore, we then talk of the induced EMF in each phase. So, one of the ways in which that can be done is like this; the induced EMF can be made to look like this. So, this is a trapezoidal EMF and not a sinusoidal induced EMF. And the way I have drawn it, this axis is with respect to time, this is induced EMF, and this happens as the rotor rotates; that is, with respect to time it is understood that the rotor is rotating at a fixed speed, and as the rotor gets moving there is an induced EMF in the stator phase. And since the rotor is moving at a fixed speed we are drawing with respect to time; you can also scale this in terms of omega into time, which then means the angle through which the rotor has moved. So, if that is the case, I can draw this to be 0 degrees and then this is 30 degrees, 60, 90, 120, 150, this is 180 degrees, 210, 240, 270, 300, 330 and 360. So, I am drawing this at intervals equal to 30 degrees. Then the next phase is designed to start 120 degrees later, that means this way; so this is a waveform that goes on continuing, it would have existed here, so this waveform then looks like this, and then there is one more phase which is phase shifted further by 120 degrees. This machine is interesting if this is the way the waveform is going to behave. We said that the magnitude of the induced EMF is simply proportional to speed, and the frequency is also proportional to speed; both are proportional to speed. So, we have drawn this waveform for a particular speed of operation. Now, the nice thing about the machine is that if you look at, let us say, this interval, this is a 60 degree interval; in that 60 degree interval you find that one phase is going from one amplitude level to the negative amplitude level, whereas this phase and this phase are at a fixed level of amplitude. And if you consider the next 60 degrees, one of the phases that was at a fixed amplitude level now begins to change, and the other two phases are at a fixed amplitude level. So, in this manner, if you look at 60 degree blocks, take the first 60 degrees, then the next 60 degrees, and the next 60 degrees; let us call this the r phase, this the y phase and this the b phase. Then in the first 60 degrees we have b varying, r and y fixed. In the next 60 degrees we have y varying, r and b fixed. In the next 60 degrees we have r varying, y and b fixed. Further, let us write this again: in this 60 degrees we have b varying, r and y fixed.
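The 60-degree pattern described above can be tabulated directly; in the sketch below the first four rows follow the intervals worked out in the lecture, while the last two are filled in by the obvious symmetry of the waveform, which is an assumption about the figure rather than something stated in the transcript.

```python
# Conduction pattern per 60-degree block of electrical angle.
SECTORS = [
    # (start_deg, end_deg, open (varying) phase, conducting pair, line emf)
    (  0,  60, 'B', ('R', 'Y'), 'e_RY = +2A'),
    ( 60, 120, 'Y', ('R', 'B'), 'e_RB = +2A'),
    (120, 180, 'R', ('Y', 'B'), 'e_YB = +2A'),
    (180, 240, 'B', ('R', 'Y'), 'e_RY = -2A'),
    (240, 300, 'Y', ('R', 'B'), 'e_RB = -2A'),   # assumed by symmetry
    (300, 360, 'R', ('Y', 'B'), 'e_YB = -2A'),   # assumed by symmetry
]

def sector(theta_e_deg):
    """Return the conduction row for an electrical angle in degrees."""
    theta = theta_e_deg % 360
    for row in SECTORS:
        if row[0] <= theta < row[1]:
            return row

print(sector(30))   # B is left open, R and Y conduct, line emf is +2A
```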
So, it is quite evident therefore that in every 60 degrees that we select like this one of the phase EMF varies with respect to time, the other two remaining are remaining fixed. Now, if you look at these two intervals, in both these two intervals you have b varying, r and y are fixed, but what is the difference? The difference is in this interval the potential of r is greater than 0, the potential of y is less than 0, in this case the potential of r is less than 0 the potential of y is greater than 0. Which than means that in this interval if you take the difference e r y that means the potential difference between r phase and y phase if you take this will be greater than 0 and this potential difference e r y will be less than 0. If you call this amplitude as A when this amplitude is minus A, then this potential difference e r y is 2 times A, whereas it is minus 2 times A here. Similarly, you can work out for all the intervals. Now, we have seen that if there is going to be a flow of current i into the machine. Now, in the case of DC machine, in the case of DC machine, we said that if there is going to be a magnetic field that is going to be there at the place of the rotor conductor, the conductor carries a certain direction of current and there is a magnetic field. Then there is a force that is exerted on the conductor, which is given by this ideal cross B and therefore this conductor tend to move. Now, you look at this situation now you have the exact inverse of this, by row of physical arrangement, never the less. You are having a same situation here as well that there is a magnetic field that is going to go, like this and there is a conductor which can carries some current you can supply some current it is now on the stator you can connect what you want with it. So, will this conductor experience the force? Will it experience? It has to, I mean that is the law of physics, there is a magnetic field, there is a conductor, there is a current, there has to be a force. But now the difference is this is in the stator and what is the meaning of stator? The stator means you are not going to allow it to rotate, you are going to prepare bolt and bolt it to the base plate, bolt it towards, so what is going to remain fixed, than what will be the effect of this force? It will attempt to make this it will attempt to make this move but it is not going to move it is going to be fixed. But every force has an equal and opposite force which will now be felt by the one that is generating the magnetic field. And therefore the reaction force the rotor will now going to rotate. If you allow it rotate, if you are going to put a clamp on it and rotate that also, then it will not rotated, but nobody will do that, the whole idea is to get something to move and therefore you are behaving the rotor free to rotate, therefore we are keeping this fixed and the reaction force now causes this fellow to rotate. So, the the first magnitude is still the same, why? Newton's law says, it is an equal and opposite force. So, the first magnitude that is felt by the rotor will still be the same, but it will now start rotating the rotor instead of the stator because you have chosen to put the stator fixed. However, if somebody is going to ask what will happen if I leave the rotor free and I leave the leave the stator free, then both will start rotating, one will be rotated in one direction, one will rotate in other direction, that is all difference. 
In the other machine also the same thing would have happened — the stator would have rotated in one direction and the rotor in the other; after all, the laws of physics are the same. But there too the stator is kept fixed, and here also the stator is fixed. The difference is that in the earlier case the stator happened to be the member generating the field, whereas here the rotor is the one generating the field; that is all. In both cases the stator is fixed and therefore the rotor begins to move. So it is still the flow of current i which is the cause of the movement, and the same expression we derived earlier continues to hold: i acting along with the flux density B, over the length l of the cylinder, multiplied by the radius of the cylinder, gives the electromagnetic torque. And what we want is an electromagnetic torque that does not vary with respect to time: if you send a fixed current, we want a fixed torque, not something that varies with time, because we want to rotate a load with a smooth torque — smooth within quotes, as we have already seen. So if we want the torque to remain fixed, and we find that the induced EMF in this b phase is changing with respect to time in the first 60 degree interval, then it does not make sense to energize that phase. Whereas the r and y phases have a fixed induced EMF, so if you send a flow of current into them, that is what is likely to generate a fixed torque. Therefore, in this 60 degrees you select r and y for energization and keep the b phase open — do not do anything with it, just leave it open. In the next 60 degree interval r and b are fixed, so energize r and b and keep y open; in the third 60 degrees energize y and b and keep r open. When you come to the point where you again have to select r and y and keep b open, what is the difference? The sign of the induced EMF has changed, so you have to energize it in the opposite sense. So how does one do this? Say this is your motor with the r phase, the y phase and the b phase, and you have a DC supply. In the first 60 degrees you want to connect r and y such that e r y is greater than 0; that means this terminal of vg can be connected to the r phase through a switch, and y is connected here through another switch. So in the first 60 degrees e r y is greater than 0 and we are connecting vg with the same sign, therefore the flow of current is opposed by the induced EMF that is there, and it results in an active flow of electrical power from the source to the motor. In the next 60 degrees you want to energize r and b keeping y open, so you keep y open; and what is the nature of the voltage e r b? It is still greater than 0, so you connect r like this and connect b here. e r b is greater than 0 and you are connecting vg greater than 0, so the induced EMF again opposes the applied voltage, there will be a flow of current, and active power flow can take place. In the next 60 degrees you want to energize y and b keeping r open; the potential difference e y b is greater than 0, so you keep r open and connect y here; with e y b greater than 0 the induced EMF again opposes the source voltage, and a flow of current will happen.
In the next 60 degrees b is going to vary, therefore you want to keep it open, and you want e r y to be less than 0. So what can you do? You keep b open, and since you want e r y less than 0 you take this terminal and connect it here; now e r y is less than 0, because vg is connected to this point and r is connected to that point. And by now you get a hang of it: you put one more switch here, and if you have a circuit that looks like this, you can generate all the possible sequences that you want. These are all semiconductor switches — maybe MOSFETs, maybe IGBTs — and as before you need to put a freewheeling diode across each one of them in order to ensure that the flow of current is not interrupted. This circuit is called an inverter, because it takes DC on one side and delivers an AC waveform on the other side. Why is it delivering an AC waveform? Obviously, you can see that the voltage that needs to be applied keeps reversing. So that is the functionality achieved by this circuit. The operation, then, is that in each 60 degrees the circuit of the machine always looks like this: you have the DC source vg, and then the armature of the machine, which can be pictured as a resistance in series with a DC EMF whose magnitude is 2 times A. In every 60 degrees that is how it looks, and therefore, as far as the DC source is concerned, the motor looks like a DC motor. If in every 60 degrees that is the equivalent circuit, the DC source does not see any change at all from the conventional DC machine. What we have done is, by flipping the arrangement of the field and the armature of the DC machine and putting an inverter here, we have made the entire set look like a DC machine to the source. And that is why this is called a brushless DC machine. The only drawback is that you are now applying a fixed voltage vg to this so-called look-alike DC machine, and we know that if you apply a fixed voltage to a DC machine it will run at a fixed speed, depending on the load characteristic and what the load demands. Now, if you need to control it, what we saw earlier is that you have to reduce the voltage applied to the machine. How do you reduce the voltage? Use the same methodology as before: in this case the full DC voltage is applied to the machine, but if you have a switch you can operate it at some rate with some ON time and OFF time, so that a reduced value of DC voltage is applied. So during each 60 degrees — for example, number the switches 1 to 6; in the first 60 degrees r and y are energized, so switches 1 and 6 are turned on — you do not keep them ON all the time. Within that 60 degree interval itself you operate them at some rate such that they are kept ON and OFF repeatedly. Then only the average voltage is applied, which is what the motor responds to, and therefore by varying the duty ratio you can change the average voltage applied to the motor, and thereby control the speed of the motor. I would ask you to spend a little bit of time with this to see what exactly is happening.
So, essentially, for every 60 degrees one pair of switches is ON; once every 60 degrees one switch is turned OFF and another is turned ON; and the switches in each 60 degree interval can be operated with high frequency PWM to control the motor voltage and hence the speed. This is the manner in which this circuit is operated in order to control the motor speed. I will leave it to you as an exercise to figure out, in each 60 degree interval — which we have already discussed, so go back and put it down — which of the switches will be ON and which will be OFF. Obviously two switches are going to be ON in every 60 degrees, and the others will be OFF. Now, the switches that are ON depend upon which phase EMFs are flat and which one is varying, and that in turn depends upon omega t, the variable on the x axis. This omega t is nothing but the rotor angle, which means the switches to be kept ON depend upon the rotor angle. So you need the rotor angle information at every instant, or at least once every 60 degrees, to decide which switch to turn ON and which to turn OFF — you need to know which switches are to be kept ON in each 60 degrees and when the 60 degree interval starts, so that you can turn the switch ON at the right time. How to do that? We will see in the next class. |
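A sketch of the exercise set at the end of this lecture, under assumed conventions: the six-step commutation table as a lookup from the rotor-angle sector to the energized phase pair, plus the duty-ratio averaging used for speed control. The switch numbering (1/4 on the r leg, 3/6 on the y leg, 5/2 on the b leg) and the sector alignment are common conventions assumed here; only the first entry (switches 1 and 6 for the +r/−y interval) is stated explicitly in the lecture.

```python
# Hypothetical switch numbering: 1/4 = upper/lower device of the r leg,
# 3/6 = y leg, 5/2 = b leg (a common convention, assumed here).
# Each entry: (60-deg sector index, phases energized, switches ON).
COMMUTATION = [
    (0, "+r -y", (1, 6)),   # e_ry > 0, b left open (b phase EMF is ramping)
    (1, "+r -b", (1, 2)),   # e_rb > 0, y left open
    (2, "+y -b", (3, 2)),   # e_yb > 0, r left open
    (3, "+y -r", (3, 4)),   # e_ry < 0, b left open
    (4, "+b -r", (5, 4)),   # e_rb < 0, y left open
    (5, "+b -y", (5, 6)),   # e_yb < 0, r left open
]

def switches_on(rotor_angle_deg):
    """Pick the conducting pair from the rotor angle (one commutation every 60 deg)."""
    sector = int(rotor_angle_deg % 360) // 60
    return COMMUTATION[sector]

def average_voltage(v_dc, duty_ratio):
    """Within each 60-deg block the ON pair is chopped with PWM; the motor
    responds to the average of the applied DC-link voltage."""
    return duty_ratio * v_dc

if __name__ == "__main__":
    for angle in (10, 70, 130, 190, 250, 310):
        sector, pair, sw = switches_on(angle)
        print(f"rotor at {angle:3d} deg -> sector {sector}, energize {pair}, switches {sw}")
    print("average voltage at 40% duty on a 48 V link:", average_voltage(48.0, 0.4), "V")
```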
Introduction_to_Robotics | Lecture_26_DH_Algorithm.txt | Last class we discussed about the kinematic parameters of industrial robots. And we mentioned that there are 4 parameters. The first one we called as joint parameter. Joint parameters define the relative position and orientation of two successive links. And they are the joint angle and joint distance. These are the two joint parameters. And they are defined as this is the joint distance and the angle at which it has to rotate is known as the joint angle. So, that is defined as the rotation about the z k minus 1 to make axis x k minus 1 parallel with axis x k. So, you have x k and x k minus 1 are the two, k and k minus 1 or the two coordinate frames, so x k and x k minus 1. So, how much you have to rotate x k minus 1. So, how much this has to be rotated to bring it parallel to this is known as the joint angle and that is with respect to the z k minus 1 axis. So, with respect to this axis, so, with respect to this axis, how much you have to rotate this, you are to make it parallel to x k is known as the join angle. And the joint distance is, so how much you have to move with respect to x z k minus 1, how much it has to be moved to make x k minus 1 intersect with x k and that is known as the dk or the joint distance. So, this joint distance and the joint angle completely define the relative position and orientation of these two links. So, that is how it is known as joint parameters that define the position and orientation of two successive links. So if you have any number of links, you can apply the same rule, you will be able to get theta k and dk, that is the joint angle and joint distance. So, this is what we saw in the previous class. And also I mentioned that, if you consider it as a single degree of freedom, either D or theta will be a available. So if theta is a variable, it is a rotary joint. And if D is a variable, then it is a prismatic joint. So one of these will be always constant, the other one will be a variable. So that is what we saw in the previous class about joint parameters. And then we discussed about the link parameters also. So the link parameters basically define the position for position and orientation of two successive joints. So we have two joints here. So how these two joints are relative positioned and oriented is given by the link parameters. So one parameter is the link length, ak, the other one is the link or the twist angle, which we call as alpha k. So, these are the two link parameters, and they define the relative position and orientation of the two successive joints. And link length is the distance. So, link k connects k minus 1 to k and then x k, referred with respect to x k that is the common normal between the axis x k minus 1 and k and the are the parameters of link, link parameters are defined with respect to x k, that is the x axis of kth joint. And that would be normally a common normal between z k minus 1 and z k. So, x k will be always a common normal between z k minus 1 and z k and the parameters are defined with respect to that axis x k. Now how they are defined? The ak, the link length, we call this as the link length is the translation along x k needed to make z k minus 1 into z k. So z k minus 1 and z k, how much it has to translate along the x k. So, if this is the x k along xk how much this has to move and that is a two intersect the zk access and that is known as the link length ak. 
And twist angle again measured with respect to x k, if this is x k, twist angle will be measured with respect to x k, what is the rotation of z k minus 1. So, how much this k minus 1 which is like this how much it has to be rotated to make it vertical. So, in this case it is vertical, so how much it has to be rotated with respect to x k is basically the twist angle. So, twist angle alpha k is the rotation about x k needed to make z k minus 1 parallel to zk. So that is basically the twist angle alpha k, rotary joint, it is like this, this joint. The other one is this joint, so one is this, other one is this. So, you have a vertical axis, rotational axis and you have a horizontal axis. Now, the question is suppose this is x k, so, how much this has to be rotated to make it like this. So, that is the twist angle and how much this has to travel in order to intersect with this is the distance link length. Yeah. So, these are the four parameters. So, we have now A k alpha k and then we have theta k and dk. So, these are the four kinematic parameters associated with every joint. Pardon? which one? You mean acute in the sense. Now what you meant to say plus or minus or 90? You want you are asking whether it can be 60. See, technically it can be 60 or anything but normally we will make it into 90 degrees. The alpha will be normally multiples of 90 only, because then you can actually have either direction of motion. If you make it 60 or something and be you can still do it, theoretically you can do but your kinematics will be much more complex, because alpha or depending on alpha, you may not be getting the xy another degree of freedom completely. Because then you have a one joint like this and one joint like this and actually moving into a different plane. But if you make it 60 you will not be able to completely move to that different plane. So you may not be getting a full motion in that case. So normally we will make it as 5d or multiples of 90 you can have. No positive or negative depends on from which direction you are looking at. So it is convention is that anti clockwise is always positive. No, no no, I will tell you this is xk. Now you look from here, if you look from this point, x k. Now I am rotating it like this, so it is a clockwise motion. So it will be minus 90 sorry. Yeah, minus ninety, anti clockwise is always positive. So we start, I am looking for x k. And then it has been rotated like this. So it is more a clockwise motion. Therefore, it will be 90, minus 90 degree, anti clockwise is taken as positive. Any questions. Anyway we will be going through this because when you do the forward kinematics multiple of this will come. I did not get your question. No, no we are always saying z k minus 1 to zk. See, we are telling that make axis z k minus 1 parallel with the z k. So we are always rotating zk minus 1 to make it parallel to zk. So these are the four parameters and we saw that theta k or dk one of this will be variable, any one of this, one will be a variable and other will be constant. And this ak and alpha k for a given manipulator. Once you decide to design the manipulator, this will be constant you cannot change this because the robot configuration, whether it is a Cartesian or articulated or whatever the configuration, we will fix this, because the link length everything will be fixed for the design. So, once we have already designed the robots, then ak and alpha k will be constant. 
So, out of these four parameters, only one parameter will be varying for every joint, so, that can be either theta or d. So, it is a rotary or prismatic joint. So, that parameter only will be varying others will be constant for a given manipulator. So, that is the thing, out of the four, out of these two these are always constant, because they are part of the mechanical design. So, we have already designed it, so, the mechanical design decides what should be the link length and what should be the twist angle. So, basically where do you want the joint. You want a joint like this or like this will be always decided and what should be this length also will be decided by the mechanical design. And therefore, you will see that they are mechanical parameters and that will be always fixed for the given manipulator. And the only thing that you can change is this joint angle. So once you change the joint angle, then only the position varies. So if you want to change the position at this tip, then I can change any one of this angle, I can change this angle or I can change this angle or I can change this angle. Of course, this changes the angle I do not change the position but I change the orientation. So, this way always there will be only one parameter which will be varying, all others will be constant. Out of these four, three will be constant. So, this actually shows an animation but I think I already explained all this to you. I am not sure whether it will work let me check. Somehow it is not working but do not worry. It is working now. So basically it talks about the getting the common normal and the x axis and then finding out the distance travelled in along the z k minus 1. So, this is along the z k minus 1 travel that is the dk and this is along the x k, which is the a k. And the angle between this axis and this axis is alpha, again with respect to measured with respect to this x. So these are sometimes known as DH parameters also, Denavit and Hartenberg parameters, because Denavit and Hartenberg, long ago they came up with their methodology for getting the systematic kinematic analysis of serial linkages. So they are called as DH parameters also. So we will be using these parameters in the forward kinematic analysis. So to summarize, we have theta, d, a and alpha and this will be variable for revolute joint. And if it is a prismatic joint and all others will be fixed. So for an n axis robot. So now we know that for each axis, you will be able to get the four parameters. So when you have a n axis robot, you will be having four n parameters associated with the robots. So, if you have a 6 axis robot, you will be able to see you will be able to get for 24 parameters as part of the kinematic parameters of the robot. For each axis we will be having these four and these parameters will affect the position and orientation of the tool. When you have multiple joints, each parameter will be having its own influence on the final position and orientation. So the question is, how do we actually get the final position or how do you get the kinematic configuration of the robot and that actually will be getting by this. So, these 6 n parameters, sorry 4 n parameters decide the kinematic configuration of the robot. Any question on the parameters, the kinematic parameters? Yes. Numbering in the sense k. Oh yeah, we are coming to that. 
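The bookkeeping just described — 4n kinematic parameters for an n-axis arm, of which only the n joint variables ever change once the arm is built — can be captured in a small data structure. A minimal sketch; the numbers in the table below are made up purely for illustration.

```python
from dataclasses import dataclass

@dataclass
class JointParams:
    """DH-style kinematic parameters of one joint/link pair."""
    theta: float          # joint angle (rad)  - the variable if the joint is revolute
    d: float              # joint distance (m) - the variable if the joint is prismatic
    a: float              # link length (m)    - fixed by the mechanical design
    alpha: float          # twist angle (rad)  - fixed by the mechanical design
    revolute: bool = True

# a made-up 3-axis arm: 4 * 3 = 12 parameters, but only the 3 joint variables change
arm = [
    JointParams(theta=0.0, d=0.30, a=0.00, alpha=-1.5708),
    JointParams(theta=0.0, d=0.00, a=0.25, alpha=0.0),
    JointParams(theta=0.0, d=0.00, a=0.20, alpha=0.0),
]

def set_joint(arm, k, q):
    """Write the joint variable q into joint k: theta if revolute, d if prismatic."""
    if arm[k].revolute:
        arm[k].theta = q
    else:
        arm[k].d = q

set_joint(arm, 1, 0.5)   # move joint 2 by 0.5 rad; a and alpha never change
```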
Now we will be discussing about how do we actually systematically assign the, find out the parameters of a n degree of freedom robot, will discuss that, it is part of the kinematic analysis. So before going to the kinematic analysis, just want to tell you this. So we mentioned about the three angles that we can represent as the orientation, yaw, pitch and roll. So this yaw, pitch and roll basically represented in terms of three angles, but we can actually represent it as a, each one as a vector also. So the configuration of the tool in Cartesian space is normally represented using a 3 by 3 matrix. So the orientation, so we will be having one vector here, another vector here, the vector here. And this one we call as the normal sliding and approach vectors because these YPR, yaw, pitch and roll change the orientation, and the orientation can be represented as a vector, and they are the normal sliding and approach vectors, N S and A. So the approach vector is along the z axis and this is along the y axis and this is along the x axis. So that normally is the x axis, with respect to x axis and this is the y and this is z, so that is the normal sliding and approach vectors. So normally if you want to specify the orientation of a tool in Cartesian space, what we do is we will write it as nx, ny, nz, similarly sx, sy, sz, ax, ay, az. So, this is the orientation that we can specify in Cartesian coordinates. And if they are aligned with the base frame, then you will be getting it as 1 1 1 0 0 0 0 0 0. So, if the tool frame and the base frame are aligned, you will be getting it yes 111, if they are not aligned, you will be getting this kind of a matrix non zero elements here and that depends on how much you have rotated, the coordinate frame is rotated with respect to the axis or yaw, pitch and roll. So you can represent either using the angles or using a vector like this. That is where so most of the time will represent the final orientation that we want the tool to have in terms of these vectors. And then we will try to find out what joint angles will actually give you that kind of motion. So, once you understood this, what we need to do is to go to the, so I already mentioned these things so it is not clear. So approach vector is aligned with the roll axis. And sliding vector is orthogonal to the approach vector and yaw, pitch roll motions or motion rotation about normal sliding and approach vectors. That is what this one says. So we found the, I mean, we saw how to get the configuration of a robot based on the type of joints and the number of joints. And then we saw what are the parameters that actually will define the kinematics of robots. Now, the question is, by knowing these parameters, how do we actually develop the forward kinematic relationship or the kinematic relationship? So, first we talk about forward kinematics and then we go to the inverse kinematics problem. So, as I mentioned in the previous class. So, I already mentioned about this direct or the forward kinematics problem. So, given the joint variables of a robotic manipulator, now we know what is joint variables because we define the parameters and we found that either theta or D can be a variable. So, if you know these joint parameters corresponding to each joint. So, if you have an n axis then you will be n joint variables. And if you know these joint variables how to determine the position and orientation of the tool with respect to a coordinate frame attached to the robot base. 
So, this is the basic forward kinematics problem or the direct kinematics problem. So, what we know is theta and of course, we need to develop or formulate the kinematic problem in such a way that though the 6 n parameters are involvrd. So, we know that 4 n parameters are involved totally out of these 4 n we have only n parameters varying. So, the other 3 n parameters will be constant. So, if we know these 3 n parameters, we can have a formulation where these parameters can be assigned, then you will get the final relationship for the forward position of the robot. That is what we need to develop. So, how do we do this? So, we need to have a very concise formulation, the general solution to the direct kinematics problem. So, for any robots, what are maybe the configuration of the robot, we should be able to develop this relationship raised on the parameters that we identified. So that is the a concise formulation, which we call as the forward kinematics relationship, or forward kinematics problem or directly kinematics problem. So how do we do this? Suppose we have a robot configuration given to us, first thing, what we need to do is to assign a coordinate frame to the robot. So, the problem is that suppose this is the tool and this is the base of the robots, so I can fix the face somewhere. I want to know for different joint variables of the theta, I call the generalized variable as theta for various values of the theta, what will be the position and orientation of the tool is the problem that we are trying to address or trying to formulate. So, as a first step. So, we call this as the Denavit and Hartenberg representation. So, that is the method we will be following Denavit and Hartenberg or DH representation. So, the first step to do is to assign coordinate frame for all the joints because we know that when this joint is moving all this gets affected, when this joint is moving all this gets affected and with respect to the base the position changes with respect, i mean whenever there is a change in any one of these variables. So, to take care of all the joint variables, what we do? We try to assign coordinate frame at every joint. And finally, we try to find a relationship between this coordinate frame and this coordinate frame using coordinate transformation matrices and get the relationship. So, suppose this is the robot configuration given to us, what we do is we will try to make a simple free body diagram and say that it is a home position of the robot because robot can take any position this angle can be anywhere this can actually change. So, it can be in any position, it can be like this, it can be like this or it can be like this. So we consider home position and then try to assign the coordinate frame. So, what I will do, I will assume that there is a joint here and there is a link here then I know there is a joint here and then there is a joint here and there is a joint and like this and then I finally say, this is the robot configuration. So, I will make a home position, it can be any position does not really matter, you can actually have whatever. If you want put it like this, or you want to put it like this you can have. So, I will have a home position like this and then try to assign coordinate frame for this configuration. So, first one is I will assign a, okay this one i already explained. So this, we call this, we number of the joints from 1 to n, starting with the base and ending with the tool, yaw, pitch and roll in that order. 
So we say that, this is J 1, this is J 2, J 3, you can write 1, 2, 3, 4 etc. That is the joint and the last joint will be nth joint. So, if you have 6 degrees of freedom you will see that 6th one is the roll axis. So the roll joint is the 6th one or the last one and the 1 to n. So start with J 1, J 2, so this is 1, 2, 3 etc. That is the first step. Now, so I will remove this. Now given the link numbers also. So, we will start with the link 0 and link 1, link 2, link 3, etc. So link 0, 1 and 2 connected by joint 2 like this. Now, see, the robotic researchers follow different styles. Someone put this as link 1 and then put as link 2 also. So, if you are following a particular convention you can keep following that, but if you have followed some other convention then you need to keep following that. Finally, the results will be the same, whatever maybe the link number you are taking the results will be the same. So, for example, if you refer Craig, Craig is the book or you refer Schilling, there will be some difference in the way the coordinate frames are assigned and the numbering is done. So, I follow the Schilling convention that is why I am doing this link 0, 1, 2 etc, but if you follow Craig there may be slight difference, but the formulation, final formulation will be the same. So, but try to follow one of these otherwise, you may find it difficult to mix of the things. Now. So, J 2, J 3 etc., we will have till J 6 whatever it is. Now, first thing to do is to give a base coordinate frame assignment. We need to assign a base coordinate frame based on your convenience, you can have it as whatever way you like, but you need to follow the right hand rule. The right hand rule is that if this is x, then this is y and this is z axis. So, this is known as the right hand rule. So, you always follow the right hand rule for assigning coordinate frame. So, if I put this as x, so, this is x 0, z 0, y 0 will be inverse or it will be. So, you will always follow this rule. So, if I fix this as x, y, z then it will be easy for us to analyze it later also how the rotations take place. So, yeah, this is my x axis, this is the z axis and this is the y axis. So, first thing is, so we will call this as the base coordinate frame and we call it this is l0, there is a base coordinate frame l0. So assign a right handed orthonormal coordinate frame l0 to the robot base, making sure that z0 aligns with the axis of joint 1, that is very important. So, you do see what is the axis of rotation, here you can see the axis of rotation with respect to this the first joint rotates with respect to this axis. And therefore I will align the z axis along that and then x axis, I can take x axis this direction or this direction, any direction. For my convenience I taking towards the opposite to the direction of the robot, just for convenience only, so that I will get it as a positive value for the position. But you can actually decide x0, whatever way you would like to do it, preferably to take it in the positive direction. So, this will become x0 and then y0 indicates you will get. So you know how to get the z axis and then x can be decided based on your convenience, then you get the y axis. I said k is equal to 1, so this is l0, x 0, y 0, z0. That is the base coordinate frame l0. Now, k is equal to 1, we get the coordinate frame l1, so we need to get l 1 as the next coordinate frame. And the next coordinate frame basically represents the joint 2 or with respect to what point the joint 2 rotates. 
So we are defining a coordinate frame l1. So let me remove this, okay that is z0, x0 and y 0 frame. So the first thing is align the z axis with the joint axis and then get x and y based on the right handed coordinates frame. The next one is align axis with the axis of joint k plus 1. So now find out what is the next z axis or z 1 and align the axis of joint, I mean zk with the joint k plus 1. The joint 2 is 1. So we need to find out what is the joint axis here. What is the joint axis? Inward or outward. You can take any direction you can take outward or inward. So I am taking for the time being I am putting it like this. So I will say this is my z1, that is the z1 axis. Now, we need to find out the origin of the coordinate frame l1. So, this is part of l1, but the origin need not be at the joint. So, there is no need to have the origin of the coordinate frame at the joint, it can be somewhere else also. So, to get the origin, what we do is we try to find out where this z0 and z1 are intersecting, where z0 is intersecting with z1. And that will be the origin of the next coordinate frame. So, locate lk, so now k is equal to 1, so, locate l1 at the intersection of z k and zk minus k. So, z k and z k minus 1 intersect at this point and therefore, your origin of l1 will be that point. So now we got the origin and the z axis, so the next question is you need to get a. In case they are not intersecting find a common normal between z k and k minus 1 and see where z k is intersecting with that common normal. So if they are parallel, we will see that case next case. So if they are parallel, they are not intersecting, that you find a common normal, and then see where zk is intersecting with that common normal that will be the origin of the coordinate frame. Can we have? You mean when can we have such a situation? No, no, see look at this joint. I am coming to that so, I do not want to jump to that. We will see that case where actually they are parallel. So, lk we fix the origin of l1 and we identified the z axis also. Now, we need to find out what is x1, what is the direction of x1. So, x1 should be always normal to, orthogonal to look z0 and z1 that is a requirement. So, xk to be orthogonal to post zk and zk minus 1. So, now, this is one direction, the other one is in this direction, so, it has to be in this direction only, x k should be along this direction. Because it should be orthogonal. So, your z0 is this direction and z 1 is in this direction. So, it has to be x k should be this direction. So, your xk will be this direction and then now you have a xk. So, this is xk yeah, this is xk, this is zk and therefore, your yk on the right handed rule y k will be like this. So, yk will be this. So, your y axis will be like this. So, x1, z1 and y1 you identify the coordinate frame. So, always follow the right hand rule, then you will always get it. Just now your z is like this. So, this is the x is and then y will be downwards. And again they are parallel point xk away from zk minus 1. So, they are parallel, move it out of focus, both will be the same direction if zk and zk minus 1 are in the same direction, then away from this you point it, that is the direction you can take. So, we found that we can follow this algorithm to get the first joint and its coordinate frame and then the second one joint and its coordinates and can you obtain. Now, we can follow the same thing, change k is equal to 2 and you will be again getting that next one. 
So, you can keep on doing this till you reach the last one, the last one we will do it in a different way. So, till k is equal to, so yk from right-handed coordinate frame lk and then set k is equal to k plus 1 and as long as k is less than n, go to step 2, continue. So now, suppose I want to get l2, l2 is the next coordinate frame, that is at J3 you have l2. I do not know where actually l2 is, but I know coordinate frame is l2. So, I need to find out the coordinate frames associated with the joint 3. So, what will I do? What is the first step? First step is align zk with the axis of joint k plus 1. So, what is the joint axis here? Assume that this is the same kind of joint, so, I will be having z2 here. Yeah, see when you say it is outside basically I am telling it is 180 degrees apart. My z axis is rotated by 180 degrees. I can take this, this direction or this direction it is optional, but normally we try to reduce the rotation in the frame but you can still have it as this direction or you can even assume this direction nothing wrong in assuming this direction. Basically I am telling, here I am telling there is no rotation, if it is this direction I am telling there is a 180 degree rotation, z x, between two z axis, z1 and z 2 180 degrees apart. So, we got the z axis. Now, we have to find the origin of the coordinate frame. How do you get the origin? Intersection of z1 and z2. So, you want to find out the intersection of z1 axis and z2 axis. Where is the intersection? There is no intersection. So what will we do? Because they are parallel. So when they are parallel, what is the strategy, common normal. So, you find a common normal and then see where the common normal intersect z2. So the common normal will be always this and it intersects here and therefore, your origin will be l2. So, that will be the origin l2. Now, you assign x axis, and because x should be away from z1 because they are parallel. So, x should be away from z 1, so we make this as x2 and then do the same as y2. keep on doing this till we reach k is equal to 6. So, if this has 6 degree of freedom so k is equal to 6 we got follow this, tell k is equal to 5 we will be able to do this. So, we will be keep on assigning the coordinate frames like this. And when k is equal to 6 you have to follow a slightly different one, basically that is the tool points coordinate frame. The last one will be the tool point coordinate frame because l0 is the first joint, so l6 will be the last point, it is the tool point coordinate frame. So this is the way how you get it and the last one, set the origin ln, that is the l6 that is the last origin of the coordinate frame, last coordinate frame at the tooltip set the origin at the tooltip. So, I will set the origin l0 or l6 at the tooltip, l6. Align z 6 with the approach vector. So, the approach vector is the roll axis. So, if this is rotating or rolling, then this will be my z axis, that is the z axis. last one, l6 or l5 depending on the number of joints, if there is a 5 degree of freedom, then it will be different 6 degrees of freedom. But n is equal to 5 or 6 depending on there is a 5 or 6 of freedom robot. Then approach vector yn with the sliding vector. So, the sliding vector is this, so the open or close section is known as an sliding and the normal 1. So, based on the xyz, this 2, y and z you will get the x also using the right handed coordinate frame, coordinate rule. So, this way the last coordinate frame will be decided based on approach vector, sliding vector and the normal vector. 
So, z, y and x will be approach, sliding and normal vector. So, that will be decided based on that other rules. Till then we will follow the rules what we discussed and the last one will be assigned like this. So, finally, you will be getting the point, all the coordinate frames assigned. So we will get to how to coordinate frames assigned like this. That is our first step in getting the DH formulation for forward relationship. Any questions? No, it need not be a unique one. The assignment of coordinate frame need not be unique, but the final formulation will be unique for that particular robot. Whatever you do your alpha, theta d and all will decide how you do. For example, instead of this direction if you have decided in this direction, this alpha will be 180 instead of 0 and that will actually affect your final calculation. So, that way you can have different assignment of coordinate frames and your DH parameters also maybe slightly different because of that, but the final result will be the same. See this is not complete because it is actually a 5 degree of freedom robot, I have just shown only 3 joints here. 2, yeah, yeah. So that is why I am telling if it is 5 degree of freedom then it will be l5 that will be coming at the frame. But assign coordinate frame will be assigned based on this. Whatever maybe the number of degrees of freedom available, which is only 2, but still we need to have a coordinate frame with xyz coordinates. Because the final orientation depends on the way it is oriented with respect to x y z axis. Any other questions? So, now, the final requirement is that we need to find out this position. So assume that this is the tool position we are interested, we need to know how this actually moves with respect to this. So we need to have a relationship, how this coordinate frame will change its position and orientation with respect to this base, that is to be decided. So, we can actually say this is a position vector, there is a position vector, which actually decide the position of the tooltip and there is an orientation which decides what is the way in which this is oriented. We can see the z axis like this, but here it is like this. So, there is a rotation of this coordinate frame with respect to this coordinate frame. And that actually depends on all these joint motions, all these joint parameters will affect the position and orientation. We need to have a concise formulation which should take care of all these elements and then give you a formulation. So, I will take another 5 minutes to tell you a few more steps involved in this formulation, then we will stop here. Now, we define a few points. Some virtual points, thats not physically any relevant, but to get the DH parameters we define some points. So, we define point Bk. So that Bk, now k is equal to 1. So, bk will be B1 at the intersection of xk and zk minus 1 axis. So, x 0 and x 1 and z 0 where they are intersecting, that point we define as B 1. So, you can see that the x1 is here, x1 is here, z0 is here. They are intersecting at this point, so we call this as b1. Now if you want to find b2, what will we be b2? Where will be B2? x2 and z1. Where are they intersecting, x2 and z1. Where will it be? This is x2, this is z1. So it be here only, right? So b2 also will be here, it has got some significance in calculating the parameters. So like this you get b1, b2, b3, so this will be b3 and then it will go on like this. So, we will be getting all the Bks. Now, once you have this B1, B2, B3 etc. 
How do you get theta k? Theta k is the angle of rotation from x k minus 1 to x k, measured about z k minus 1. So theta 1 is the rotation of x 0 to x 1 measured about z 0: you look along that axis and see whether the rotation is clockwise or anti clockwise. Here, what is the rotation from x 0 to x 1? It is 0, so in this current configuration theta 1 is equal to 0. It can change, of course; if you rotate the joint, x 1 will be in a different orientation and you may get 90 degrees. Similarly, theta 2 is the rotation from x 1 to x 2 measured about z 1, and again it is 0. Since it is in the home position everything has come out as 0, but if you rotate this joint or that one you will get different joint angles; whatever the rotation is, you can get it using this rule. Next, d k is the distance from the origin of frame l k minus 1 to the point b k, measured along z k minus 1 — that is the joint distance. So d 1 is the distance from l 0 to b 1 measured along z 0, which here is some length. Similarly you can get d 2, but you will see that b 2 and l 1 are at the same point, so the distance from l 1 to b 2 is zero and d 2 is equal to 0. Then a k is the distance from b k to l k measured along x k. So a 1 is the distance from b 1 to l 1 measured along x 1 — and what is that distance? It is 0, so in this case a 1 will be 0. What will be a 2? It is the distance from b 2 to l 2 measured along x 2; b 2 is here and l 2 is here, so this length will be a 2. And finally, alpha k is the angle of rotation from z k minus 1 to z k, measured about x k. For alpha 1, you look along x 1 and ask how much z 0 has to rotate to become parallel to z 1: z 0 was pointing like this, and it has rotated like this to come here, so seen from x 1 it is a 90 degree rotation in the clockwise direction, and alpha 1 will be minus 90 degrees. Similarly, z 1 to z 2: you can see they are parallel, there is no rotation, so alpha 2 will be 0. For z 2 to z 3 you apply the same logic and you will get the parameter. In this way you will be able to get all four parameters — theta k, d k, a k and alpha k — for every joint. So if you have 6 joints, a 6 axis robot, then for each k equal to 1 to 6 you will be able to get all these four parameters.
So finally, you will be able to create a DH matrix or DH parameter matrix of that particular robot. So, for a given robot configuration, you will be able to get this DH parameters using the DH algorithm what we discussed. So, this is the way how we assign coordinate frames for the manipulator and get the DH parameters for all the joints. So all the joints, we first assign coordinate frame based on the right handed coordinate rule and the method mentioned. And then once you have this then, then assign bk. And based on this you get all the parameters, joint parameters for the, joint and link parameters for the robot. And that will give you the first step in getting the kinematics, forward kinematics solved. So please go through this, you may find it a bit complex by looking at it. But once you understand the basic principle, then you will be able to get it very easily, just by assigning coordinate frames, you will be able to get the parameters. Now we will do a few examples to make sure that you are comfortable with this. And then we will be having a tutorial session on Thursday to solve some problems also. Any questions? All right. Thank you. So I will meet you tomorrow. |
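As a preview of how the DH table obtained above gets used (the lecture defers the actual forward-kinematics composition to the following sessions), here is a sketch under the usual Denavit-Hartenberg convention: each row (theta, d, a, alpha) defines one homogeneous transform from frame L_{k-1} to L_k, and chaining them from base to tool gives the tool position and the n, s, a orientation vectors. The 3-axis table below is invented for illustration, not taken from the lecture's example.

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Transform of frame L_k with respect to L_{k-1} built from the four DH
    parameters (the standard DH composition is assumed here)."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

# made-up DH table for a small 3-axis arm: rows are (theta, d, a, alpha)
dh_table = [
    (0.0, 0.30, 0.00, -np.pi / 2),
    (0.0, 0.00, 0.25,  0.0),
    (0.0, 0.00, 0.20,  0.0),
]

T = np.eye(4)
for row in dh_table:
    T = T @ dh_transform(*row)        # chain base -> tool

print("tool position w.r.t. base:", T[:3, 3])
print("tool orientation (n, s, a columns):\n", T[:3, :3])
```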
Introduction_to_Robotics | Lecture_22_Homogeneus_Transformation_Matrix.txt | Good morning. Welcome back to the discussion on Robot Kinematics. In the last class we discussed about Coordinate Transformation Matrix. That is if you have a mobile coordinate frame and a reference coordinate frame, how do we map these two coordinate frames using Coordinate Transformation Matrix was the discussion we had in the last class. And we found that the transformation can actually be represented using a 3 by 3 matrix. And the point in the mobile frame and the point, point in the mobile frame can be mapped to the point in the reference frame using a relationship where we use this PF is equal to a matrix A multiplied by PM, where PM is the point in the mobile frame and then P is the PF is that same point represented with respect to the fixed frame. And this A is known as the Coordinate Transformation Matrix. And we saw this Transformation Matrix for a fundamental rotation motion and we call that as a Fundamental Rotation Matrix. And if there are two coordinate frames and if they are rotated with respect to a axis, then you will be able to get the rotation matrix like this, this 3 rotation Fundamental Rotation Matrix. The rotation with respect to the first axis or the second axis or the third axis, you will be able to get this rotation matrix like this. So, this is the Fundamental Rotation Matrix. Then we saw that suppose we have multiple rotations happening in the coordinate frames, then we can actually use a Composite Rotation Matrix to get the complete Transformation Matrix. So, that is the Composite Rotation Matrix. Here, if you have a sequence of fundamental rotations about unit vectors of either a fixed frame or the mobile frame, you will be able to get the Composite Rotation Matrix using an algorithm. And this algorithm we saw that if the rotation is with respect to the axis of the fixed frame, you have one procedure if it is with respect to the mobile frame or the rotation of the mobile frame is with respect to its own axis, then you will be having a different algorithm to choose. I mean different way of computing the Composite Rotation Matrix. So, if the mobile frame is rotated about the kth unit vector of F that is with respect to fixed frame, then you do a pre multiplication or if it is rotated with respect to the mobile frame kth vector on its own frame or its own axis then it is post multiplication. By using this pre or post multiplication, you will be able to get the Composite Rotation Matrix irrespective of the number of rotations you make. So, you can have any number of rotations, you will be able to get the final rotation matrix using this algorithm. So, this was what we discussed in the last class. So, let us see how this is actually used for something called a Yaw-Pitch-Roll Transformation. So, Yaw-Pitch-Roll Transformation is one thing something which is commonly used in the mobile, Sorry in the manipulator Kinematics, because the rotation of the tool point or the list of the manipulator is represented using the Yaw-Pitch and Roll motions. So, it is rotation with respect to X, Y and Z axis. So, X, Y and Z axis is the Yaw-Pitch-Roll that is you have a wrist like this. So, this is the Z axis, this is the X axis and this is the Y axis so, this is how it is defined. So, you have a Yaw motion with respect to the X axis, you have a Pitch motion with respect to the Y axis and you have a Roll motion with respect to the Z axis. So, that is known as Yaw-Pitch-Roll transformation. 
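A small sketch of the composite-rotation algorithm recapped above: fundamental rotations about axes 1, 2 and 3, pre-multiplied when the rotation is about a fixed-frame axis and post-multiplied when it is about the mobile frame's own axis. The particular sequence of rotations and the test point are made-up values for illustration.

```python
import numpy as np

def fundamental_rotation(axis, angle):
    """Fundamental rotation matrix about axis 1 (x), 2 (y) or 3 (z)."""
    c, s = np.cos(angle), np.sin(angle)
    if axis == 1:
        return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])
    if axis == 2:
        return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def composite_rotation(steps):
    """steps: sequence of (frame, axis, angle) with frame 'fixed' or 'mobile'.
    Fixed-axis rotations pre-multiply; mobile-axis rotations post-multiply."""
    R = np.eye(3)
    for frame, axis, angle in steps:
        Rk = fundamental_rotation(axis, angle)
        R = Rk @ R if frame == "fixed" else R @ Rk
    return R

# example: two rotations about fixed axes followed by one about a mobile axis
R = composite_rotation([("fixed", 1, np.pi / 4),
                        ("fixed", 3, np.pi / 6),
                        ("mobile", 2, np.pi / 3)])
p_mobile = np.array([0.1, 0.0, 0.2])
p_fixed = R @ p_mobile            # PF = A * PM, as in the lecture
print(np.round(R, 3))
print(np.round(p_fixed, 4))
```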
Suppose you make a Yaw-Pitch-Roll rotation of the wrist and you want to know the transformation of the tool point with respect to the reference frame; that is the Yaw-Pitch-Roll transformation matrix. So this is the tool, and you have the yaw axis, the pitch axis and the roll axis. If you make a yaw motion, then a pitch motion, then a roll motion, all done with respect to the fixed frame, then we get the composite as R3 R2 R1, because R1 is the rotation with respect to the X axis, R2 with respect to the Y axis and R3 with respect to the Z axis, and since all rotations are about the fixed frame each new rotation pre-multiplies: R of theta, the YPR matrix, is R3 R2 R1. We already know R1, the fundamental rotation matrix with respect to the first axis, the X axis, and similarly R2 and R3. Multiplying them out, the YPR rotation matrix has entries like C2 C3, S1 S2 C3 minus C1 S3, and so on — that is the rotation matrix you get once you compose R3, R2 and R1. Now let us take an example of this transformation before we move on to translation. Suppose we rotate the tool about the fixed axes, starting with a yaw of pi by 2, followed by a pitch of minus pi by 2, and finally a roll of pi by 2. What is the resulting composite rotation matrix? You can try this. What we do is start with R equal to I; the yaw is about the first axis, so we pre-multiply by R1 of pi by 2; then the pitch, R2 of minus pi by 2, again pre-multiplied; then the roll, R3 of pi by 2, pre-multiplied. So R is R3 of pi by 2 times R2 of minus pi by 2 times R1 of pi by 2. Now substitute pi by 2 into R1, minus pi by 2 into R2 and pi by 2 into R3, and you will get the composite rotation matrix. Next, suppose the point P at the tool tip has mobile coordinates 0, 0, 0.6. Assume PM is 0, 0, 0.6 and we want to find PF after the YPR transformation, with the angles given as 45, 60 and 90 degrees. So apply the values 45, 60 and 90 for theta 1, theta 2 and theta 3 in the YPR transformation matrix and multiply that matrix by PM; the result is PF. Let me give this to you as an exercise: find out how the point 0, 0, 0.6 is represented with respect to the fixed frame after the yaw, pitch, roll transformation. Alright. So far we have discussed the rotation of a coordinate frame with respect to the axes of either the reference frame or its own frame. But as we discussed, the motion of a coordinate frame is not always a pure rotation.
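Before moving on to translation, here is a quick numeric check of the two exercises set above, assuming the composition R = R3(roll) R2(pitch) R1(yaw) for fixed-axis yaw-pitch-roll as stated in the lecture; the helper names are my own.

```python
import numpy as np

def R1(t):  # yaw: rotation about the first (x) axis
    c, s = np.cos(t), np.sin(t)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def R2(t):  # pitch: rotation about the second (y) axis
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def R3(t):  # roll: rotation about the third (z) axis
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

# exercise 1: yaw pi/2, then pitch -pi/2, then roll pi/2, all about fixed axes
R = R3(np.pi / 2) @ R2(-np.pi / 2) @ R1(np.pi / 2)
print(np.round(R, 3))

# exercise 2: YPR angles (45, 60, 90) degrees applied to the tool-tip point PM = (0, 0, 0.6)
t1, t2, t3 = np.radians([45.0, 60.0, 90.0])
PF = R3(t3) @ R2(t2) @ R1(t1) @ np.array([0.0, 0.0, 0.6])
print(np.round(PF, 4))
```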
So, if you have this is the fixed frame or the reference frame and this as your mobile frame, this mobile frame can actually rotate with respect to one of the axis, and it can actually translate also, it can actually translate to here and then can have a rotation also. So, this is the rotation angle and this is the translation. So, you can actually translate by P and it can rotate. So, that would be the new M1, M2. Call this M1 dash M2 dash. So, now suppose we want to represent the both the rotation and translation. So, you have a rotation and you have a translation also of the frame, because this frame has rotated and translated. And we know that the rotation can be represented in the three-dimensional space, the rotation can be represented using a 3 by 3 matrix. Now, suppose we have a Translation also, then how can we represent this translation using this 3 by 3 matrix it may not be possible to represent the rotation and translation using a 3 by 3 matrix though we are in the three dimensional space, we may require to have a higher dimension space to represent the rotation and translation. But all of our points are normally represented using 3 dimensional coordinates. So, you represent the P, suppose you have a point P then we will say the coordinates are X Y Z. So, and if you have to represent this as PM, and then the PF also will be 3 dimensional. So, PF also will be 3 dimensional with an X Y Z. So, the rotation transformation can be represented using a 3 by 3 matrix so, we have no issue in multiplying this and getting this 3 by 3 matrix and multiplying. But if you want to have represent it represent the translation and rotation then we are not it is possible to do with the three-dimensional space. So, we need to go for a higher dimensional space to represent both translation and rotation of the frame to represent the points in 3D space, if the coordinate frames are translating and rotating in 3D space, then a three dimensional space is not sufficient to represent the points we need to go for a higher dimensional space and this higher dimensional space is known as the Homogeneous coordinates. That is, so if you had to represent characterize the position and orientation or the point related to the coordinate frame attached to the base, so but both rotation and translation are needed. Now, if you have to represent this, well rotation can be represented by a 3 by 3 matrix. So, the rotation can be represented by a 3 by 3 matrix It is not possible represent Translation by the same. Therefore, we need to go a higher dimensional space and which we call it as the Homogeneous coordinates. So, we call this as the Homogeneous coordinates. So, this is known as the Homogeneous coordinates. What is how is the homogeneous coordinate defined? So, what we do we will define a point in space in a 3D space, a point can be defined using a 4 dimensional vector, so that is the Homogeneous coordinate. So, let q be a point in three-dimensional space and F be an orthonormal coordinate frame of R3. So, we have a point q in three-dimensional space and we have an F in orthonormal coordinate frame R3. Then if sigma is a non 0 scale factor, then the homogeneous coordinates of q with respect to F are denoted as qF is sigma q1, sigma q2, sigma q3 sigma. So, this is the way how we define a Homogeneous coordinate the point p, point q can be represented as sigma q1, sigma q2, sigma q3 and sigma. So, this is the way how you can represent a point in four-dimensional space. And we take sigma is equal to 1 for convenience. 
So, the point p point q can be now represented as q1 q2 q3 1. So, this is known as the Homogeneous coordinate of a point. So, we are not making any major changes, we are simply saying that we can represent a point in space at in a three-dimensional space, a point can be represented using a four-dimensional vector. And the first three is the coordinates of the point in three-dimensional space and the last one is unique that is the one is the last element in the vector. So, this way, we are saying that any point in three-dimensional space with respect to a fixed frame can be represented using a four dimensional vector. So, that is known as the Homogeneous coordinate of a point q. So, once we have defined this as a four dimensional. Now, we can actually represent the transformation of the translation and rotation using a 4 by 4 Matrix. So, now we can actually go for a 4 by 4 Matrix because your q is a 4 by 1 vector. And therefore, we can use a 4 by 4 matrix for transformation, because earlier it was a three-dimensional vector. Now, we have a four dimension one, so we will go for a 4 by 4 matrix to represent that represent the rotation and translation of a coordinate frame with respect to fixed axis or a mobile axis. So, the homogeneous transformation matrix now, we have a homogeneous coordinate frame, homogeneous coordinates therefore, we can actually define a homogeneous transformation matrix. And if a physical point in three-dimensional space is expressed in terms of homogeneous coordinates and we want to change from one coordinate frame to another, we use a 4 by 4 transformation matrix That is, you can have a 4 by 4 transformation matrix to represent the coordinate transformation from one frame to other frames. So, the if the space is expressed in terms of homogeneous coordinates and one coordinate frame the other coordinate frame transformation can be expressed using a 4 by 4 matrix and that matrix is known as Homogeneous transformation matrix. So, previous one what we saw was a three-dimensional rotation matrix. Now, when we convert that into a four-dimensional space, we call this a Homogeneous transformation matrix. So, in general, if T is given by this a rotation matrix, a position vector P and sigma and eta transpose. So, the general structure of the homogeneous transformation matrix will be like this, you have a 3 by 3 rotation matrix. So, you will be having a 3 by 3 rotation matrix, which represents the rotation of the coordinate frame, then you have a position vector, a vector which represents the translation of the coordinate frame and then you have one which represents the sigma of the homogeneous coordinate and then here something called an Eta a vector. Eta transpose here, then this eta is known as a perspective vector and normally set to 0. So, this perspective vector will be set to 0 and this will be sigma and therefore, you will be having this as the 4 by 4 homogeneous transformation matrix. And this transformation matrix can be used for calculate finding out the position of coordinate the point in three-dimensional space, because now PF can be written as this transformation matrix T multiplied by PM, because PM is again homogeneous coordinate, PF is homogeneous coordinate and this is a 4 by 4 matrix. So, this way the transformation of coordinate frame when there is a rotation and translation will be able to represent using this 4 by 4 matrix and the first 3 by 3 part of the T represent that rotation. 
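A minimal sketch of the 4 by 4 homogeneous transformation just defined: assemble T from a rotation R and a translation p (with the perspective vector eta set to 0 and sigma equal to 1), write a point in homogeneous coordinates, and map it from the mobile frame to the fixed frame via PF = T PM. The rotation angle and the test point are illustrative assumptions.

```python
import numpy as np

def homogeneous(R, p):
    """4x4 homogeneous transform from a 3x3 rotation R and a 3-vector translation p
    (perspective vector eta = 0, scale sigma = 1)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = p
    return T

def to_homogeneous(q, sigma=1.0):
    """Homogeneous coordinates of a 3D point: (sigma*q1, sigma*q2, sigma*q3, sigma)."""
    return np.array([sigma * q[0], sigma * q[1], sigma * q[2], sigma])

# made-up example: mobile frame rotated 90 deg about the z axis and shifted by (1, 2, 0)
c, s = np.cos(np.pi / 2), np.sin(np.pi / 2)
R = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
T = homogeneous(R, np.array([1.0, 2.0, 0.0]))

PM = to_homogeneous([0.5, 0.0, 0.3])   # point expressed in the mobile frame
PF = T @ PM                            # the same point in the fixed frame
print(PF[:3] / PF[3])
```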
This vector represents the translation and this is the sigma and this is a eta transpose which is a eta is a perspective vector normally set to 0. So, this is known as the homogeneous transformation matrix. I hope you understood. So, what we are trying to do is to represent the point in homogeneous coordinates and then get a 4 by 4 matrix to represent the rotation and translation of coordinate frame. So, now the, the translation of the frame can be represented using this vector rotation can be represented using this matrix, and therefore, you have the 4 by 4 matrix. So, in terms of robotic arm, P represents the position of the tool tip, with respect to R. R its orientation. So, in terms of robotic arm, you can say, this is the orientation of the tool and this is the position of the tool with respect to the reference frame. So, that is the way how we look at the homogeneous transformation matrix. Now, since we have a 4 by 4 Matrix to represent the transformation, we can actually convert the rotation matrix, the 3 by 3 rotation matrix can be represented using the homogeneous coordinates, and then we call it is homogeneous rotation matrix. Or we can see that the fundamental rotation matrix can be represented as a homogeneous rotation matrix by assuming that the translation is 0. So, this is your fundamental rotation matrix and this is the translation which is 0, that does not translation the pure rotation. So, the fundamental rotation matrix can be represented as a fundamental homogeneous transformation rotation matrix by setting this to 0. So, now your fundamental rotation matrix or fundamental homogeneous rotation matrix is a 4 by 4 matrix so, hereafter we will be using only 4 by 4 matrix, and therefore, if it is only pure rotation, we will make it as a 4 by 4 homogeneous rotation matrix. Similarly, you can get a fundamental homogeneous translation matrix also which can be represented like so, you have a sorry this is the translation if there is only translation is there, then we have P1, P2, P3 as the translation along X Y Z axis. And this will be a unit this will be identity matrix, because that is there is no rotation. So, this will be identity matrix and you have a translation vector. So, P1, P2, P3 would be translation and translation P is known as the fundamental homogeneous translation matrix okay. So, this is the fundamental homogeneous translation matrix. Now, if you can see that how it is represented, so if initially you have the fixed frame f1 f2 f3 then you have the mobile frame M1, M2, M3. Now, if this is translating P1, if the mobile frame is translating P1 along f1 then you will get just a new mobile frame. That is P1 along f1 then you have P2 along f2 and P3 along f3, so this would be the final position of the coordinate frame, if there is no rotation, this would be the final position of the coordinate frame and this transformation of the coordinate frame along f1 f2 and f3 can be represented using this fundamental homogeneous translation matrix. So, this is the fundamental homogeneous translation matrix. So, now the rotation and translation can be represented using a 4 by 4 homogeneous matrices. So, you can have fundamental homogeneous rotation matrix and fundamental homogeneous translation matrix and if you have composite rotation, we need to follow the composite rotation composite transformations, we follow the composite transformation principles and then get the composite transformation matrix. 
So, again you will be getting the homogeneous composite transformation matrix. So, we will see how to get that one so, if you have a okay before that we go to the composite let us talk about the inverse homogeneous transformation matrix also. So, if T be homogeneous transformation matrix with rotation R and translation p between two orthonormal coordinate frames, and if eta is equal to 0 and sigma is equal to one, then the inverse of the transformation is given as T inverse is R transpose minus R transpose p. So, you do not need to really go for finding out the inverse using the normal matrix inversion principles, what you need to do is if you have I mean if these conditions are satisfied, T is a homogeneous transformation matrix and they are orthonormal coordinate frames, then we can write T as T inverse a so, T inverse will be so first you take the R matrix and get the transpose of R because T will be R and p and then you take this minus R transpose p. So, take minus R transpose and multiply it p be a vector and then you have 0,0,0,1. So, this will be the inverse of T. So, the inverse of a homogeneous transformation matrix can be directly obtained by using this principle R transpose and minus R transpose p. So, that is the inverse transformation. Now, if you have a suppose we will take a simple example for the transformation matrix and then see how do we get the transformation matrix suppose, in this the composite homogeneous transformation matrix, so, as I told you when you have multiple transformations, rotation translation again translation rotation etc. Then you will be able to get the Composite Rotation Matrix by the principle of Composite Transformation Matrix that we already discussed. So, for the sequence of actions translation of M along f2 by 3 units that is there is a translation of M along f2 second axis by 3 units, and then rotate M about f3 by 180 degrees and find the okay find the Composite Rotation Matrix? Find the composite rotation transformation matrix from this one? So, you have one translation and one rotation, you have to find out the composite transformation and rotations are all about fixed axis. So, it is both f2 and f3. So, you can find out what is the composite rotation. So, what we do? The principle is that you assume it as I this identity matrix first and then the translation along of f2 by 3 units. So, this T 3, 2 represents the transformation matrix or the fundamental homogeneous translation matrix with respect to second axis by 3 units. That is T 3,2 and then pre multiply that with the rotation matrix, fundamental homogeneous rotation matrix about the third axis of fixed frame by an angle 180 degree. And if you find out these individually these two and then multiply you will be getting the composite homogeneous transformation matrix. So, if you write this T 3,2 so T 3, 2 will be will be equal to, so you can actually write it as so 1 0 0, 0 1 0, 0 0 1 because there is no rotation it is a fundamental translation and it is transition with respect to the second axis. So, you have 0 3 0 then 0 0 0 1 so, that is the fundamental translation, homogeneous translation matrix. Similarly, you can get R180 third axis. So, this will be we can write it as, so the translation is not there so, it will be 0, 0, 0 and this will be 0, 0, 1 and then the rotation with respect to the third axis, so you will be getting it as 0 0 1 0 0. Now, this will be cos 180 minus sin 180, sin 180 cos 180. So, this will minus. 
So, that is the way how you will be getting the translation I mean the matrices for rotation and translation. Then you multiply these two and get the Composite Rotation Matrix. So, this is how we get the Composite Rotation Matrix. So, let us look at physically how it is actually happening. So, now to get an idea what is happening it what is happening. So, let us assume that you have a fixed frame. So, this is f1 f2 f3, f1, f2, f3. Now, I consult this as initially they are aligned M1 M2 M3. So, this is M1 M2 M3. So, the sequence of action is translation of M along f2 by 3 units. So, this M is translated along f2 by 3 units. So, assume that it is reached here. So, it has reached here now, so this 3 units and this is the new position of the frame. Now, what happens this then rotate about M, rotate M about f3 by 180 degree. So, now this coordinate frame is rotated about f3 so, this is f3 and it is rotated 180 degrees. So, what is happening, so, it is not rotating with respect to its own axis. If the mobile frame was rotated with respect to its own axis, then it will be rotating with respect to this point only. So, only thing this will actually come to the side, but since the rotation with respect to f3 this will be rotating all the way like this and coming up to here 180 degree. So, this will be the position and orientation this will be the position of the coordinate frame after this because it is rotated by 180 degree with respect to this axis. So, it can be actually rotated. So, the coordinate frame is somewhere here. So, it should be rotated like this and it will be coming this side. So, that is the way how it moves because the rotation with respect to this axis, so you will be getting a rotation. So, this is the way how the coordinate frame will be moving. Now, when you do this kind of a transformation and therefore, suppose there was a point p here. Now, this point p will be moving here and then that point p will be reaching somewhere here. So, the point pf now you calculate pf, pm will be here some assume that it is 1 0 0 initially now it we actually moved out the way this set so, we will be getting a completely different coordinate point for the p with respect to the fixed frame. So, this is what actually happens when you do the coordinate transformation. Okay, so, it is shown here the same thing to actually moved all the way up to here, so we will be getting it as M2, M3, M1 like this. So, this is what actually happens when we do the transformation okay. So, assume this is the work for you, now reverse the order of transformation and find out the transformation matrix. Suppose, the rotation was done first and then translation was done, what will be the Coordinate Transformation Matrix? Are they going to be the same or there will be difference? You can check what will happen when you reverse the transformation reverse the order of transformation, then you can find out the composite transformation matrix at the end of these transformations. I hope you got the point what we are trying to say that whenever you have a moment of the frame, the mobile frame with respect to the reference axis there will be a, always there will be a matrix associated with the this transformation which can be used for calculating the position and orientation of the new coordinate the coordinate frame with respect to the reference frame. And if you are to represent both rotation and translation in a translate in the matrix, then we have to go for a higher dimension space. 
So, we use a 4 by 4 matrix to represent the transformation and this 4 by 4 matrix is known as the Homogeneous Coordinate Transformation Matrix. And there are many other transformations also we talked only about this homogeneous transformation. So, there is something called a Screw Transformation, Screw Pitch etc. So, I leave this as a self study topic for you if you are interested you can actually refer to some standard textbooks and understand, what is Screw Transformation? So, this is a class work or probably I will give this as an assignment later, suppose, we rotate tool about the fixed axis starting with the Yaw of minus pi by 2 and translation of 10 about X followed by a pitch of pi by 2 and a translation of 20 centimeter about Z and finally, a roll of pi by 2 and the translation of 30 centimeter about Y. What is the resulting composite homogeneous transformation matrix? So, there are a lot of transformation, there are many transformations taking place. So, you need to follow the same principle of Composite Rotation Matrix calculation. So, follow the principle whether it is rotation, I mean, the moment is about the fixed axis or the mobile axis. And then accordingly decide whether it is a pre multiplication or a post multiplication. And keep doing this till you look at the last transformation, and then find out the Coordinate Transformation Matrix. And finally, to verify that you can actually plot the movement, so you can use the same method I explained the previous slide. So, create the mobile frame here. Initially they are aligned and then see whether what will happen when it is each one each transformation takes place. How is it moving? How is it rotating? How is it translating? And then see whether it actually matches with what you are actually seeing in the Coordinate Transformation Matrix. So, take any unit vector and then see whether you are able to get the correct result. So, that is the homework that I will be giving you later. Okay, I am just skipping this for the time being okay. Now, let us see why are we learning all these transformation matrices? What is the importance of having this transformation matrix in about kinematics? So, as I mentioned earlier in one of the classes, so of course you have a robot like this you have a joint here and you have a joint here and then you have another joint. So, this is the assume that these are the joints for the robot is another joint. Now, we know that our interest is to see what is this happening to the tooltip? So, because this is the one which actually we will be using for manipulating objects picking and placing an object and all. So, we are interested to know what is the coordinate of this particular point? So, what we will do we assume that there is a coordinate frame attached to this. So, we have a coordinate frame attached to this. Now, as these joints move, this point will be moving so, after some time this may be coming like this or it may be coming like this because of the movement of these joints, so we can actually this can actually take place you can actually move around and reach many places within the work this space. And this coordinate frame will be something like this. Now, here it will be something like this. So, it will be having this coordinate frame here. Now, I am interested to know what is happening to the, what is happening to this point? This point. what is happening to this point as these joints move? And how do I get this? I need to have a reference frame. 
So, I will have a reference frame here, I will say that, this is my reference frame. So, this is my reference frame, I will call this as a f1, f2, f3. And then I will say, this is my mobile frame, I will call this as M3, M2, M1 etc. So, I have a frame here. Now, I am interested to know what is the position and orientation of this frame with respect to this as these joints move. So, we have a now we have a transformation problem, how this coordinate frame is getting transferred or transformed because of the various motion and then how can I represent the position and orientation of this tool with respect to the base frame. So, the robot kinematics is basically looking at the position orientation and velocities of this coordinate frame with respect to this frame. And for that, we need to represent the relationship between relationship between this frame and this frame using a transformation matrix. So, we need to have a transformation matrix which maps the tool point and the base. So, we call this as the tool to base transformation. So, what is the transformation from this tool to this base is of interest to us. And then we represent that as a 4 by 4 matrix. So, we represent this as a 4 by 4 Matrix the transformation from the tool frame to the base frame. And how is it moving because if we want to get this transformation matrix, I need to know how much it is rotated and how much it is translated. So, that is my interest. So, we got this can actually have can actually rotate and then translate with respect to the base frame and if I know the rotation and translation then only, I will be able to get this. So, the relation between this and this I can get if I know the how much it is translated and how much it is rotated. And this translation and rotation take place not because of one joint, because this translation rotation can take place because of rotation with respect to this joint or this joint or this joint, this joint or this joint or a combination of these joints also, because all these joints can actually move and whenever the joint moves, there is a movement of the coordinate frame here. Because any joint movement will affect the tooltip and it starts moving either linear or a translation or rotation it can happen. So, we need to know how these are related suppose it rotates if this one is rotating, then there will be a rotation and translation of this. So, I need to know how much it is rotated and translated so, that I can get this rotation transformation matrix. Similarly, this also. And to do that, we cannot directly get it. So, we need to look at each joint Okay, how much this has rotated. So, how much this has rotated with respect to this or what is the relationship between this rotation and this moment or this rotation and this moment we need to know. So, all these can actually be represented using again using transformation matrices. So, what I will do I will assign a coordinate frame here and then find out what is the relation between this coordinate frame and this coordinate frame. Similarly, I say another coordinate frame here and find out what is the relation between this and this when there is a rotation about this. So, now I know there is only rotation between these two, these two can only rotate, rotate. So, I will find out what is the relation between this coordinate and this coordinate frame and it is rotate. Similarly, I will do this and for all these, all the joints I will try to find the relationship and finally using all these transformations. 
I get the final transformation between the tool and base. So, I will be getting a transformation matrix connecting this point to this point using the Coordinate Transformation Matrix. So, that is the way how we will be using the Coordinate Transformation in the Kinematic analysis of the Manipulators. So, to know this we need to know how the robots are being constructed and what are the ways in which these joints are arranged? And then only we will know how to actually get this transformation matrix or how the transformation matrix can be generated for each of these joints. And to know that, we need to have some basic understanding of manipulators their physical construction and what are the major design parameters that will affect the transformation matrix. And unless, we know that it is difficult for us to understand the transformation taking place between coordinate frames. So, what we are trying to do is the next one or two classes, I will go through the basic mechanical features of the Manipulator. Try to find out what kind of joints are used and what kind of actuators can be used and how this can actually leads to different kinds of construction of robots and the different robot classification happens because of the joint and then the positioning and then how this affects the overall workspace of the manipulator and some basic fundamental understanding of the Manipulator we will have in the next few classes. And then we will move forward with the kinematics using the Transformation Matrix. So, that is what we are going to do in the next few classes. So, probably I will give you some basic introduction today and then we will continue in the next class. So, if you look at the industrial robots, to the whatever we discussed was more on the mathematical foundation for understanding the kinematics. So now, let us look at the components of an industrial robot. As you can see, this is known as the manipulator. So, the industrial robot is normally known as a manipulator. A manipulator is the one which actually manipulate physical objects in the 3D space. And that is why it is known as a manipulator industrial manipulator. So, that is the, the main element of the robotic system and industrial robot. And anything that can be attached to the tip of these robots, and this is the tip of the last joint or the wrist point whatever you call, and that is known as the end of arm tooling. So, you can have a grasper for grasping object or you can have a welding tool or you can have a paint gun. So those, those things can be attached to here and that is known as an end of arm tooling. And this will be having a motor and links to control it. I mean to move the joints and the other important element is the controller of the robots. So, the controller is the one which actually give the necessary commands to the joints to move depending on what you program you can actually make the program and store the program and depending on the instruction, it will actually give the commands to the joints. And there is something called a teach pendant. A teach pendant is something which can be used for teaching the robot about the position and orientation of in the 3D space or you can use it for simple programming of the robot or effectively teach pendant can be used to control the robot and to move it move its joints and the way the operator wants. And that is connected to the controller so the how the instruction comes through the controller, controller will process the information and then give the necessary commands. 
So, these are the major elements of a industrial robots. So, you have the manipulator, the end of arm tooling, controller and a teach pendant. And we can have some external axis. So, it is not always used, but we can actually assemble this robot onto an axis like this, so that you can actually move along that axis. So, you can actually some kind of a mobility for the robot by adding some additional motion capabilities that is that external access. So, you can add something and this robot can actually be placed onto this platform and then there can be another mobility for it. So, the robot can actually move along this in this platform. So, it getting more work space capabilities. So that is the, these are the basic elements of an industrial robots. Now, the basic robot morphology, we talk about few things, there is the Kinematic Chain, and Degree of freedom. And this actually leads to the kinematic chain and the degrees of freedom lead to different architectures for the robots. And depending on the architecture, you have different workspace for the robots also. So, we will see some basic features here, what are what are the different kinematic chain that you can have and how the degrees of freedom for the robot can be defined and basically how this actually leads to different architectures and workspace. And payload and precision are the again the features of mechanical the features of the robots. So, as I mentioned, the industrial manipulator is basically a kinematic chain, when I say kinematic chain, so you can see that there will be a base and to which link will be attached, there will be a base and a link will be attached. And then there will be a joint then there will be a link and joint, so like this it will be attached. So, this is known as a Kinematic Chain. The chain of links and joints is known as the Kinematic Chain. Now, when you have it as a serial connection of this link and joints. So, this is the link join, link join like that. So, there is a serial connection, then we call it a Serial Manipulator. So, if the manipulator is obtained by serially connecting the links and joints, then we call it as a Serial Manipulator or normally it will be an open Kinematic Chain. So, the chain will be their link joint, joint link, joint link like that and open Kinematic Chain. So, that is known as a Serial Manipulator. So, most of the industrial robots are serial manipulators. Okay sometimes we call this Anthropomorphic robot, like a human hand like now that is why it is known as Anthropomorphic. So, you can see that the all these robots are serial robots because you have a join here then a link here then join link, join like this an open kinematic chain. So that is the Serial Manipulator. And another category that is available in the industry is known as a Parallel Manipulator. So, when these are not connected serially and we connect them parallely then we get a Parallel Kinematic Chain. So, parallel kinematic chain is that you have a link here, you have a link here and they are connected parallely using different joints. So, this is one link this is another link, so link one and two are connected through joints then this is known as a Parallel Linkage. So, this parallel linkage leads to Parallel Manipulators. So, these are some example for the Parallel Manipulators. So, you have one base link and then their movable link and they are connected through these joints. 
Now, this link can actually move It can actually have all the six degrees of freedom are all the three-dimensional space motion can be possible and this is the Parallel Manipulator The first type of industrial robots where this type Serial and most of the 90 percent of the or 95 percent of the industrial robots are Serial Manipulators. But you can actually see parallel manipulators also in the industry nowadays, they are having some specific applications and when that can actually have a lot of weight carrying capacity etc. So, there are so finding some applications in the industry. So, these are the two major categories of industrial manipulators that is Parallel and Serial based on the Kinematic Chain. So, based on the kinematic chain, we can classify them as Serial Manipulators and Parallel Manipulators. And most of the literature that you see in the I mean whatever is published, you will see most of them related to Serial manipulators. But nowadays in the last few years, we had a lot of people doing work in the area of Parallel Manipulators also. So, that is about the, the Kinematic Chain. Now, as you can see here, there will be there will be a lot of joints and links, but then how many links and joints we should have. So, what is the criteria for deciding that number of joints and links? Because we can I can have three links and three joints or I can have 5 links and 5 joints. So, what should be the optimal number or what should be the way we decide these numbers that actually is decided based on something called the Degrees of Freedom. So, every robot will be having a degrees of a degree of freedom specified based on its kinematic characteristic, characteristics. So, in general, the degrees of freedom are the set of independent displacements that specify completely the decide or default position or body of the system that is a general definition for degree of freedom. So, we normally say that any object in space, so if you take any object in space, it has got some degrees of freedom. So, I hope all of you know how many degrees of freedom this object has. If you take any object in 3D space, you can see that it has got many degrees of freedom. But in three-dimensional space, we say that he does what 6 degrees of freedom. So, any object in 3D space has got 6 degrees of freedom. And there are 3 motions X, Y, Z directions, and then three rotation with respect to the X, Y, Z axis. So totally, we have 6 degrees of freedom for the object. Now, in the case of suppose, now if you want to manipulate this object in space, so if I have to manipulate this object in space, I need to move it to X direction, Y direction, Z direction, and then rotate, I need to have minimum 6 degrees of freedom for my hand also, because if I my hand cannot have, if my hand is not having six degrees of freedom, I would not be able to manipulate this object And therefore, we say that in general, a robot also need to have 6 degrees of freedom, because then only it can actually manipulate objects in 3D space. And therefore, we define the degrees of freedom for robotics as. So, in robotics degrees of freedom is often used to describe the number of directions that a robot can move a joint. So, the degrees of freedom defined for in robotics is defined as the number of directions that the robot can move a joint. 
So, if I can, if a robot has got a joint, it can actually move in one direction, then we call it as one degree of freedom, it can actually move in with respect to two axis, then we will call it as two degrees of freedom like that, that is the way how it is defined for robotics. And therefore, we have, a human arm is considered to have 7 degree of freedom. That is, you have 3 degrees of freedom here and another 3 degrees of freedom here. So, we have three degrees of freedom in at the wrist and three degrees of freedom here and we have, another degree of freedom here. So, we have 7 degree of freedom for the human arm. And the shoulder gives you three degrees and the wrist allows another 3 degrees of freedom. So, 3 of first 3, or the any 3 of 3 of these moments allow us to place the wrist in one location in the 3D space, I can actually position it wherever I want in 3D space. And then this allows me to have the orientation also so this way, the position and orientation. So, I can have, I can position this object anywhere in the 3d space using these 3 joints. And then I can orient it whatever way I want using the other three degrees of freedom. So, I have 3 plus 3 6 degrees of freedom. And since we have the objects are having six degrees of freedom, and we need to have six degrees of freedom for the robot to manipulate objects, all the industrial robots need to have minimum 6 joints, guess each joint assuming each joint is 1 degree of freedom. We need to have a minimum 6 joint for the robot to have 6 degrees of freedom so that it can manipulate physical objects and a robot that has mechanism to control all 6 degrees of freedom. So, a robots can have all 6 physical degrees of freedom he said to be holonomic that is, we need to have minimum 6 and if it has got all the 6 degrees of freedom, then we call it as a holonomic robot. And robot that has mechanism to control physical degrees is said to be holonomic. An object with fewer controllable degrees of freedom than total degree of freedom is said to be non-holonomic. So, whenever it has got, less degrees of freedom than the controllable I mean it has less controllable degree of freedom, then the actual degree of freedom that object has, then we call it as a non-holonomic object. And whenever that is more degree of freedom, then we call it as a redundant object. So, we can have an holonomic robot or a non-holonomic robot or a redundant robot depending on the degrees of freedom. So, whenever the robot has got enough degrees controllable degrees of freedom as required for the motion of the object in the 3D space, then we call it as a holonomic robots. Whenever it has got more degrees of freedom, it is redundant. Whenever it has less degrees of freedom, we call it as non-holonomic robot, so thats about the degrees of freedom. So, most of the industrial robot has got 6 degrees of freedom because we need 6 degrees of freedom to manipulate physical objects in 3D space. So, I will stop here. We will continue this discussion in the next class, how these degrees of freedom are used for positioning and orienting the objects in 3D space, so let me stop here. Thank you very much. |
Introduction_to_Robotics | Lecture_94_Range_Finder_Measurement_Model.txt | welcome to the final lecture of week 11 and so in this lecture we look at the last component that we need for making our estimation models work right so our state estimation models work which is essentially the measurement model so remember the measurement model tells you what the what is the probability of z given x t right what is the probability of zt given x t and if you are using a map is given x t comma m right so the measurement model tells you what is the probability of z t given x t comma m right and so what i am going to do in the next few slides is basically look at one specific kind of sensor and develop a measurement model for that sensor alone right and so we are going to look at what are called range sensors in the following slides right but whatever principles i'm talking about now right so they basically can apply to other kinds of sensors as well right whether it is a camera sensor or a barcode operated landmark detector and so on so forth right so in fact there's a funny story but which one of my students did this when when there was this job to build a robot for to move around in office space in a company while people were trying to come up with very complicated algorithms for localizing the robot figuring out which room or which cubicle the robot was on he came up with a very simple solution he printed unique barcodes for each location and then equipped the robot with the barcode reader right so the robot just moves to a particular cubicle reads the barcode and figures out exactly which cubicle it is in didn't have to worry about all the cubicles looking similar right so sometimes if you get the right engineering solution problems become easier than trying to come up with something more sophisticated but anyway so getting back to the main lecture here so the the idea here is i'm going to look at a typical sensor so it could be for example an ultra sound sensor right so here is a here is a illustration of that so there is a mobile robot and a corridor and its basically has a rangefinder with these multiple ultrasound detectors running off in different directions right so typically each of these ultrasound detectors is going to return the distance to the the nearest object in the direction of the scan right so you can see that each of these rays here is one direction in which the ultrasound sensor is scanning and most of the cases it is returning to you the distance of the nearest obstacle right in some cases it fails right there is an obstacle here it fails to detect there are a couple of cases where it fails to detect objects in other cases it basically stops shot right so even though there is no object here it basically returns that there is an object so some kind of failure of these sensors could also happen but typically it tries to detect what is the distance to the nearest obstacle in the direction of the scan and if there is no obstacle it is supposed to give you back whatever is the maximum value for the sensor range right in that case you know that in that direction of scan right there are no obstacles right so that is basically what the range sensor is going to give you and let us see how you can put together an actual model for this range sensor right so just like we saw now right many sensors generate more than one numeric measurement value so so if i look at a rangefinder as a sensor it's going to give me the all the ultrasound readings that we saw right so we saw multiple ultrasound 
readings right so each one of this is going to return back a specific range value and so when i say that i am using a single sensor which is range sensor it could still correspond to a vector of measurements right so we are going to assume that these are z 1 to z k and at every time t i am going to have all of these capital k measurements available to me right and so at every time step t i am going to make this measurement and i am going to use z sub t k small k right for a specific individual measurement so it could be a specific ah value so this could be k equal to one this could be k equal to two this could be k equal to three and so on so forth right so for specific values i will be looking at each one of these measurements right and the other thing that we are going to make life easier right so we are going to make an assumption that will make life easier is that i am going to assume that each of these different values that my range finder returns are independent right so probability zt given x t comma m is actually equal to the product over k of each of the individual sensors probability that z t 1 given x t comma m times probability of z t 2 given x t comma m and so on so forth right so that is basically what i am going to assume to make my life easier you can see that almost surely this is not true but we're just going to make that assumption so if i if the map had not been given to me right if i didn't have a map right the probability that this being 2 is even lesser right because because without knowing the map if i hit an obstacle in a in a particular range then it's quite likely that i will see an obstacle in a slightly displaced right if i see an obstacle in the direction theta equal to 5 right i am going to likely see something in the direction theta equal to 6 right but given that i know the map right so when i say i know the map that means i know where the obstacle is exactly right uh in in such a case when you know where the obstacle is right the the probability of me getting a uh uh reading that obstacle is there right when i'm pointing at five degrees or six degrees is independent of whether i got the obstacle reading when when i pointed at six degrees right so because i know that there is an obstacle at that distance because of the map if i didn't know then this this would have been a harder independence to write okay so this this this gives me a little bit more leave it here okay so what we are going to do is if you look if you if you remember that we talked about multiple different kinds of errors that were happening right in some cases there was an obstacle that it was missed in other cases there was no obstacle but it still returned an obstacle and so on so forth so i am going to break down the kinds of errors right into four different quantities right so i'm going to say there is a measurement noise a small measurement noise right basically this is due to things like you know temperature variations or the sensor is getting little bit heated or even atmospheric variations and things like that so that could cause a slight change in the reading right so that we call as a small measurement noise right and then the second kind of errors are due to unexpected objects the map says there is no object but i there might be an object and therefore i'm sensing something which i'm when i'm expecting to not sense anything right and the third kind of error is due to failure to detect an object right could be because the suddenly the reflective index is is very high or the object 
is too black and i'm not able to actually get any reflection out of it so whatever depending on the kind of sensor that you are using there are many reasons why the sensor might fail to detect an object so how to accommodate for that and the finally the last thing is despite however clever i am there is always a chance that something goes wrong and how to accommodate for that right so i am going to say that the model that i have the probability of z t given x t comma m is a mixture of all these four uh sources of error right so the the overall stochasticity in the measurement comes from all these four and uh we will look at how to model each of these densities right before we go on so just a small notation when i say z t k star right suppose there is an object in the direction that the kth sensor is looking at time t then z t k star is the true distance of the object okay so ztk is the actual measurement okay ztk is a measurement that is going to be influenced by all of these errors right so ztk is a measurement that is influenced by all of these errors and ztk star is the true value of the distance of the object okay ztk star is what ztk is trying to measure but it is getting corrupted by all these noises okay it is just the notational thing we will see how we use it in the next slides right so the first thing we are going to look at is a small measurement noise right so the sensor roughly gets the range to the object it roughly gets the distance to the nearest object in the direction of the sensor right but then the sensor could have a limited resolution right so it could basically be rounding off because it does not have enough resolution there could be things like like i said earlier the the sensor could heat up right or there could be some kind of an atmospheric effect that's affecting the signal and so on so forth multiple reasons right so what how how we model this is as a very narrow gaussian whose mean is ztk star i don't know this but for the modeling purposes i am assuming that zt k star is the true distance right and i have a standard deviation for this gaussian which we call as sigma hit remember this has to be unidimensional gaussian right because i am measuring a single variable this is a distance to the object so it's a you need a it's like a univariate gaussian not a multivariate question right so i'm going to model it like this so z t k star is the the actual distance and i'm going to model it as a narrow gaussian and remember that so we'll denote this by p here right and remember that this measurement value right cannot exceed z max right it cannot go below zero either like the distance is zero right i mean so it can't go below zero and z max is the maximum value that this sensor can measure so it can't go beyond zero or z max and therefore it is not really a gaussian right because we are going to truncate it so between 0 and z max if ztk actually the the value that you are plugging in here if it is between 0 and z max then the probability is given by a gaussian which we call ztk so where ztk star is the mean and sigma hit is the variance of this gaussian and we add a normalization parameter eta so that we compensate for the probability mass that was cut off beyond z max and beyond zero basically we renormalize this by dividing by this uh the the area under zero to z max so that's basically what it is okay and if it's outside the zero to z max range the probability is zero so it can never happen okay so now we say this is the distribution p here so this is the noise that 
comes from small measurement errors if there was no error i would have measured it as zt k star the second source of error is due to unexpected objects right so why is this happening so typically we are assuming that maps are static right i am assuming that there are objects at certain positions in the map and i'm not updating the map on a very frequent basis right but typical environments which mobile robots are operating could be dynamic i mean there could be other robots moving around there could be people moving around or even things like paper flying around and stuff like that right and these are objects that are not contained in the map right and but can make the rangefinder give you a very short reading suppose there is an object at this distance right from the robot but there is a paper flying somewhere in between right so the robot is taking reading it will hit the paper and it will come back it will not get to the actual distance of the object right so it gives you a much shorter reading than uh the the actual distance to the object ztk star right so one way to think about this okay let me put all these moving objects into the map right or putting put all these moving objects into the state right so the map is there but i can put them into the state and i'll estimate their location as well right but this is very hard right i mean come on paper flying around all those things it's hard to so what we'll do is instead of that we'll just treat it as a sensor noise right we'll treat it as sensor noise so what do i mean by that i'll just say occasionally my sensor might give you a wrong information ah we'll have to account for that as well right so one thing you should note here right when i start reading this as a sensor noise right so remember that i have i have a cone right so if there is an object that is passing by very close to the robot it is very likely to be sensed because it is going to block my sensor quite likely right but if it is flying away flying farther away from the robot right this is the cone if it's flying close i am very likely to sense it right if there is a moving object that is farther away from me right i might not sense it right depending on the quality of the sensor the resolution and things like that i might actually miss this object and then most of my uh my ultrasound emission might go past it and actually hit the true obstacle right so what it really sums ups it so the closer the object the moving object is to the robot or is to the sensor the more likely that i will sense that object therefore the likelihood of sensing these kinds of unexpected objects decreases with the range right the closer you are the more likely that i'll sends an unexpected object right so what we do to accommodate for that is to treat the whole thing as an exponential distribution right so the closer i am to the robot the closer i am to a distance of zero right that's a higher probability so the farther i am i have a lower problem and notice that beyond z t k star i don't care right because i would have hit the actual obstacle so i don't care if there is a you know unexpected object after the obstacle right i am not going to see that so only before the obstacle i am likely to see the unexpected object right so i expect to see an unexpected object before the obstacle right therefore uh so this should this is this is again uh incorrect uh so this will be between zero uh ztk less than zt k star okay not zmax it should be zt k star and so i am going to model this as an exponential 
distribution with the parameter lambda short and again i have a normalizing factor because after z t k star i set the probability to 0 right so all this probability mass has to be redistributed before z t k star and and we'll take care of that right so so this is the second model and what is the third one right so the third one is when i fail to direct the object altogether right so what happens when i fail to detect object that means that i am completely missed object so whatever reason right so it could be because the sensor failed right it just stuck somewhere so my sensor keeps just returning the maximum value right or it could fail because the object was very good at absorbing the light that it was emitting and therefore it just didn't get any bounce back and therefore it just assumed that there is nothing in that direction and that till the range of the maximum range the the thing is free right so at no no conditions will i actually accept a reading that is greater than the maximum range right i i will not accept a reading greater than z max because it doesn't make sense because nmax is the maximum that the sensor could look at right and so what we'll do is we'll just model it like a short noise right so it's just a point mass at z max right so point mass is z max and this is essentially the probability that i completely miss the object at ztk star right so i'll just basically return the value of z max so now the thing is probability that z t k is equal to z max is one it's zero otherwise right so for this this component of it just assigns some additional mass to z max okay now we are coming to the last one which is the random measurement right so it could basically miss the whole thing or it could generate some kind of a phantom reading when the bounce of walls i mean so that could be i could detect something as much being much closer than it is actually is right there could be some kind of a crosstalk some interference with with other sensors that makes me put it at some some random location right so i'm not going to overthink this i'm just going to say that hey look after all of this careful consideration there is a chance that i might miss i could put the object anywhere between 0 to z max right i could put the object anywhere between 0 to z max so i'll say the probability is one by z max this is basically uniform distribution so what are the things we have four different things here right so we had a gaussian then we had exponential we had a short noise right like a like a impulse function and now we have a uniform distribution so there are four different probability distributions and the actual probability of z t given x t comma m is actually a mix of all these four distributions as we will see here right so the four distributions i basically look at it as a mixture right that is p hit p short p max and p rand right so p max is when i actually miss the reading altogether p short is when i hit an unexpected obstacle sp randis i make a random error and p hit is when i am actually measuring it correctly but i have a small measurement error and each of this is weighted by a corresponding z value here z hit z short z max and z rand right and the condition is they are all positive and they sum to one right so that p is a probability distribution right and so this is for uh the individual ray z t k right just just the one one ray k i have to do this for all k right and so for the set of readings that i am going to get since they are all independent i am going to compute the 
probability for each measurement right each k and then take the product right so the the final value q that i returned is the product of all the probabilities computed from each one of the 1 to k sensors that i have in my range sensor right so that's basically the probability of z t given x t comma m so likewise we do this kind of computation for various kinds of sensor models and i mean the book talks about a couple more and if you want to get other flavors you can read the book right so you can just look at the book but as far as i am concerned i am i'll be happy if you understand the rangefinder measurement model thoroughly and because others are all simple expansions of this okay thanks |
Introduction_to_Robotics | Lecture_84_Binary_Bayes.txt | so welcome back to the the fourth lecture in week 10 and we are going to continue looking at non-parametric filters right so if you remember we started looking this filters as a way of doing recursive state estimation right so what we are going to look at in this lecture is a very special case right where i really really want to know what my state is what my current state is right and i am going to keep on making repeated measurements right i am going to continue to refine my estimate of where i am right such problem settings right are called problems with static state right so i am basically i am going to assume that i am either going to right i i i have i am not changing my state right my state is fixed and i am going to only repeatedly make measurements until i'm sure about my state and we're going to look at a very specific ah instance it's it's it's a non-parametric filter as you can see from the slide uh because it's going back to the original base filter setup right not making any assumption about what the distribution is so i'm just going to assume that it's a problem with the static state but i'm also assuming it's a binary state problem right so even though now i i'm presenting to you in the context of uh you know recursive state estimation we will see later that this binary base filter with static state has other users right and it's more important in those contexts that we will see later on but i am just introducing it here for you so that it stays with all the other filters that we are studying right so the goal here is to look at problems that are formulated as binary state problems basically i have a state variable x right that can either be true or false right so i so my states are either x or not x right x can be the true or it can be false right and so the robot just needs to estimate what is the value of x is x true or whether x is false i mean it could be something as simple as okay is there a door here or the no door here it's or door open or not open or as you'll see later is there an obstacle in the cell that i am looking at or is there no obstacle in the cell right is is it clear so what the the space in front of me is it clear or not clear it's basically that's it right so it's a binary state estimator right door open door closed right you know do i have fuel or no fewer so it's a single indicator variable which is either 0 or 1 and i am assuming that i am going to make multiple measurements until i am satisfied with my estimate of that right now since the state is a static state right so the actions don't really matter right so the actions don't play a role here right actions don't play role because the state is static right so static meaning the actions don't affect the state right and you can see that by the fact that we don't put any any in time index on the actions but the observations still have a time index so my belief state at time t so notice that it doesn't say bell x t anymore it says belt t of x that is because my belief still keeps changing right so notice that my belief state is going to be only two numbers whether x equal to zero or whether x equal to one so the probability of x equal to zero is one number probability of x equal to one is another number right and so on that estimate is going to change with time so therefore i have a time index on the belief but the x itself is static therefore i removed the time index from x right and so the belief instead of being the probability of x given 
all your observations and all your actions from the past is basically the probability of x given your observations alone right given all the observations you made up till from time 1 to time t what is the probability of x in this case what's the probability of x being true right and so in many in these problems you should remember that so 1 minus bell of x gives me bell of x bar because if x is not true x has to be false right so 1 minus bell of x gives me bell of x bar and this is true for every time t right so that's basically the the problem setup that i am currently looking at so one of the things that we do as we will see uh in a bit uh is uh the belief right instead of representing it as the probability distribution directly i uh represent the belief as something known as the log odds ratio right so uh in you know in probability theory the odds of an event x right is basically defined as the ratio of the probability that x happens divided by the probability that x doesn't happen right so this is something that's uh that's quite familiar with some uh people mean if you if you look at the outcome of say sporting events and things like that people say what are the odds of that happening right so when the odds here actually refers to the ratio of p of x divided by p of not x right and in our case we can say that it is p of x divided by 1 minus p of x so this is the odds of x happening right and log out obviously is going to be the log of p of x divided by 1 minus p of x is that clear so the log odds is essentially log of p of x divided by 1 minus p of x and we are going to denote that by the symbol l right so l of x equal to log of p of x divided by 1 minus p of x now i am just going to give you the base filter algorithm for the log odds representation right just to tell give you the motivation as to why we are looking at the logs representation and then we will actually go back and derive this right so remember what does my the base filter algorithm do it takes my current belief which is l t minus 1 here which is the log odds at time t minus 1 right and my current action and my current observation right the action at time t and observation at time t but i do not need my action at time t because i have static state therefore i ignore the action i only take my observation at time t right and so l t obviously is going to be my belief at time t is basically this additive expression right so i am going to look at l t minus 1 which is my belief at time t minus 1 times log of this expression right which is the probability of x given z t divided by 1 minus probability of x given z t okay minus log of p of x divided by 1 minus p of x so what is this p of x so this p of x is essentially my prior probability of whether x is true or not right i mean so p of x is a probability that x is true when i have not seen any observation so this is my initial belief as to whether x is 2 or not and and i keep keep basically adding or subtracting this odds right my initial odds on x every time i make the update right so we'll see why that's the case as we as we go along in the next few slides right so i'm just going to read the expression again so my belief at time t which is represented as a log odds so l t is equal to the belief at time t minus 1 which is lt minus 1 plus log of the probability that x is true given zt has happened so zt is observation at time t divided by 1 minus probability x is true given z t which is basically the probability that x is not true given z t so this is basically the log odds of 
p of x given z t right minus log of p of x divided by 1 minus p of x where p of x is the probability that x is true before i have seen any observations you can think of this as the log odds this this whole expression can be thought of as l naught right the belief at time 0 log odds of the belief at time 0 is essentially what this expression is so you can think of this as lt equal to lt minus 1 plus log of this expression minus l naught right and once i have done this computation i just written lte notice it is fairly straightforward right it's a very simple additive expression and then the nice thing about it is because i'm working in this additive space and with these the logs right i can handle more effectively numbers that are very very very small right so if i if i'm in a situation where the log odds of something being true or being false is very small right or if the probability of something being true or false is very small using the log odds expression right allows me to be more stable in my updates numerically more stable in my updates and allows me to handle this more elegantly so that is the reason we go in for this log odds and later when we look at where we use these especially in places like map estimations right we will see that the probabilities do tend to be very small and therefore using this kind of the adaptation of this binary base filter is very useful in such cases right so one thing i want you to again notice is that uh in the updates here right i'm sorry so in the update here we are using probability of x given z t normally our measurement model is written as probability of z t given x t right normally we will look at what is the probability of the measurement happening given x but in this update you are using what is the probability of x given the measurement has happened right so and so this is called an inverse measurement model right and the reason we are using the inverse measurement model here is because our state is actually very simple right instead of in the normal case our state would be a very could potentially be a complex vector right very high dimensional state space but in this case we're talking about binary state right so x is either true or false right so it's a it's a fairly simple state but whereas the observations could potentially be very complex so let us go back to one of our old examples right the observation could be an image from a camera right and our state could be whether the door is open or closed it's a binary state right open or closed but the input right the z could be a full blown image from a camera or like a small small bit of a video from a camera right and therefore the z could be very complex so if i'm going to learn the forward model right if i'm going to represent the forward model then i basically we have will have to represent the distribution over a very very complex space right which is the space of the observation this is based space of all images that my camera could capture that's a fairly complicated uh endeavor right so what we do here is because the state is so simple we try to see if we can work with this ah this uh inverse measurement model right so we have to be so one of the things that you will be finding out right as we keep going along is that we are learning a lot of tools right there is nothing like that's one single tool that's the best thing to use at every point right and and depending on the application depending on the situation that you are actually using these tools you have to pick whatever is the 
best one for you so in this case because the state is simple observations are complex i would i prefer to use a inverse model as opposed to the usual forward measurement model okay uh so now uh so so once i have the log odds ratio so i will will come to that in a bit ah the rest of the slide just ignore that for the time being just look at this um just just just just edit everything i said after this slide came on okay so i keep saying that the log odds ratio is the belief right but if you want the actual belief distribution the belief distribution is the probability of x given z 1 to t right so i can recover the belief of x at time t by just doing this right so 1 minus 1 by e power l t of x or lt if you remember is a log odds ratio or time t so how did that come about ah just take you back to the original definition so if you look at this definition right so l of x which is our log odds is equal to log of p of x by 1 minus p x now if i take if i take the log to that side so that becomes e power l of x right and then i take the 1 minus p x to that side so that becomes e power l of x minus e power l of x times p x so i bring that back here so i'll get p of x into 1 plus e power l of x right take that back that side so i'll get e of x e i'm sorry e power l of x divided by 1 plus e power l of x i'll simplify that to get this expression right so if you take this up you can see that it's 1 plus e power l of x minus 1 so that that will go away so you'll get e power l of x divided by 1 plus e power l of x so that's that's essentially the belief expression just a little bit of algebra to recover the belief from the log odds therefore that's the reason i keep saying that the log odds ratio is essentially the belief because you can easily go back and forth between the one between between each other and the reason we keep it this log odds is that our updates are nice and simple uh additive updates right so remember that bill x is the probability of x given z1 to t right so del x at time t is a probability of x given z one to t right so i am going to use the base rule just now if you remember that so i am going to take just z t alone right so you can you can think of this as probability of x given z one to t minus 1 comma z t right so i'm taking that as my p and i'm moving things around so i'm going to rewrite this as probability of z t given x and z 1 2 t minus 1 times probability of x given z 1 to t minus 1 divided by probability of z t given z 1 to t minus 1. 
okay now we have the markov property right so the barcode property does not go away so as soon as i have x right my zt is no longer dependent on my previous measurements right so i can remove this i can go back to my usual measurement model which is which is probability of zt given x so given my markov assumption i can go back to my old measurement model which is probability of zt given x and the rest of it carries over right now let me try and simplify the measurement model also right i can apply the bayes rule again to the measurement model which is probability of zt given x and that gives me my inverse model which is probability of x given z t times probability of z t which is the kind of the unconditioned probability of making that measurement set t divided by probability of x which is my prior probability before i have made any measurements right you remember we already saw that in our base filter algorithm so we this is where it gets introduced here right now there are a few things which i don't want to compute right remember i don't want anything where z t is the variable over which i am defining the distribution because z is a very complex space i would like to get rid of any dependence on or rather any any place where i have to compute a probability over z t that's the reason we went we wanted to use the inverse model right so i'm happy with this but i don't like this right nor do i like this right so probability of zt given z1 to t minus 1 is even more complex to determine than probability of z t right so i want to get rid of these somehow so let me plug this back into the expression and see what we can do right so now substituting my base expansion of probability of z given x right i get this right probability of x given z 1 to t which is my belief of x our belief of x at time t is equal to probability of x given z t right so probability of x given z t into probability of z t divided by probability of x into right so this part is done into probability of x given z 1 to t minus 1 divided by probability of set t given said 1 to t minus 1. so those those parts come in here right so i have probability of x given z t times probability of z t divided by probability of x right here again there are remaining terms from the expansion earlier right probability of x given z 1 to t minus 1 divided by probability of z t given z 1 to t minus 1 right this is for the probability that x is true i can write something similarly for the probability that x is false and i would get these quantities right the probability of not x given z t times probability of z t probability of not x given z 1 to t minus 1 the whole divided by probability of not x and probability of z t given z 1 to t minus 1. 
so here is where our thing comes in right so i am going to take the odds so which is probability of x divided by probability of not x right given z 1 to t that's right so this is basically the odds of our belief representation right probability of x this is probability of x is true under the belief at time t this is probability of x being false under the belief time t so i take that and that is my odds please edit that so so now given that's odds i start dividing this right what is the nice thing when i start dividing all these zt terms which are common to both the x and not x will go away right so this zt goes away and this term also goes away so what i am left with is probability of x t i mean sorry probability of x given z t divided by probability of not x given z t probability of x given z 1 to t minus 1 divided by probability of not x given s z 1 to t minus 1 and this will go up and that will stay down so i will get probability of not x divided by probability of x and we all know that probability of not x can be written as 1 minus this so i basically get this expression right so all the not x parts get simplified as 1 minus x now what do i do i take logs on both sides right so this is my log odds for the belief at time t so that's lt of x equal to log of px given zt divided by 1 minus px give it zt this is essentially the the the second term that we had in the base filter expression that's the log odds for the the inverse measurement model for the current measurements at t and then i get log of p of x given z 1 to t minus 1 divided by 1 minus p of x even z 1 to t minus 1 what is that that is lt minus 1 of x that's my previous belief right so that's that's my previous belief and then i have this term which is log of 1 minus p x divided by p x and i can flip it around i take minus log p x divided by 1 minus p x which is my l naught of x as we saw earlier so that is essentially my whole expression so lt of x equal to l t minus 1 x plus the log odds of the inverse measurement model minus l naught which is the log odds of the initial belief before i make any measurements so that gives us the expression for the uh the binary base filter update so we'll go back here and so that's the binary base filter and it might seem a little silly here because we are looking at looking at a single state variable but later on one of the applications that we look at for this algorithm is where there are many many many such binary variables that we are trying to estimate in the world at the same time and having a more convenient uh additive updates right uh or very useful okay and that's it for this week |
Introduction_to_Robotics | Lecture_27_DH_Algorithm.txt | So, in the last class we discussed about DH Algorithm, how do you assign the coordinate frames and once you assign the coordinate frame, how do you get the DH parameters. So, for each joint axis there will be one coordinate frame assigned including the base frame and then looking at the coordinate frames you can identify the DH parameters. So, this was what we discussed. So, we saw the algorithm as first number the joints from 1 to n starting with the base and ending with the tool yaw, pitch and roll. So, a right hand orthonormal coordinate frame to the robot base, making sure that set 0 aligns with the axis of joint 1 that is the first coordinate frame you assign and then align the next joint axis, Zk with the joint axis and then identify the origin of the coordinate frame. So, align the axis, then identify the origin Lk at the intersection of Zk and Zk minus 1 and if they are not intersecting, identify a common normal and the common normal intersection with Zk will be the origin of the coordinate frame. Then select Xk orthogonal to both Zk and Zk minus 1 and if they are parallel point Xk away from Zk minus 1. You get Z axis and X axis and the origin, now follow the right handed coordinate frame and assign yk to form the complete coordinate frame Lk. So, that is the way how you assign the coordinate frame. So, continue this, till you get the last, till you get to the last point where n is less than k, k is less than n and for k is equal to n you assign the coordinate frame at the tooltip based on the roll pitch yaw axis. So, the normal approach and sliding vectors will be the XYZ and therefore, you will be getting the last coordinate frame also. And once you get the other coordinate frames, then you go for the bk, point bk, which will be the intersection of Xk and Zk minus 1. So, b1 will be X1 and Z0, b2 will be X2 and Z1. So, like that find out the intersection and if they are not intersecting, use the intersection of Xk with a common normal between Xk and Zk minus 1. So, this way you will be able to get all the bks and then you can find out theta as the rotation from Xk minus 1 to Xk measured about Zk minus 1 and then dk as a distance from the original frame Lk minus 1 to Bk. So, Lk minus 1 to bk will be dk measured along Zk minus 1 and then compute ak as a distance from point Bk to the origin of frame Lk measured along Xk. So, that will be the ak. So, you have dk, ak and theta k and finally alpha will be the Zk minus 1 to Zk, the angle of rotation from Zk minus 1 to Zk measured about Xk. So, this way, you will get all the four parameters associated with each joint and depending on the number of joints you will be getting 4 by n parameters associated with the manipulator, that is the first step in getting the forward kinematics of this manipulator. So, few points. So, as I mentioned earlier, this need not be unique because you can have the Zk direction differently and therefore, you may get a different assignment of coordinate frame. So, need not be a unique assignment of coordinate frame. But finally, the parameters and if you use the parameters properly you will be getting the final forward relationship will be the correct one and another important point is that this Xk should always intersect with axis Zk minus 1 when k is equal to k is less than n. 
So, we found that Xk should be orthogonal to Zk and Zk minus 1 that was one condition we mentioned and if they are parallel, you have keep it away from Zk minus 1 and when one condition, another condition is that it should always intersect with the axis XZk minus 1. That is, if you have a link like this and joint here and another joint here. So, you will be having one Xk here, one coordinate frame here and then you will be having another Z axis here, so I put it as Z axis is like this, is a Z axis Zk minus 1, Z0 and Z1. So, now, you can see that this Xk if you, it can be normal like this also, it should be orthogonal to both. So, this is orthogonal it is away from here that rule is applied, but apart from that, we should ensure that this Xk intersect with the axis Zk minus 1. So, if you do this, in this way if you assign Xk, it will not intersect and therefore, what we need to do is, you have to assign a X axis in this direction. Xk will be assigned like this, this will be Xk minus 1 and this will be Xk. So, it actually intersect with the Z axis. So, this will be Xk minus 1 and Xk, you will see an angle theta. So, this theta will be a permanent, it is not a variable, but there will be a constant theta always should be existing between these two. So, that is basically what it says intersect with the X axis, intersect with the Zk minus 1 axis. So, this is what I already explain and as I mentioned it is not unique. So, the direction of any of the Z axis could be reversed and therefore you will be getting a different assignment of coordinate frame also. But, once you assign the Z axis then you should follow that it should actually be right handed coordinate frame and all other rules are applied. So, hope you understood, how do we actually assign coordinate frames and then get the DH parameters. Any questions? So what we will do, we will take an example, a very simple example which I already indirectly showed you. We will take a real robot and its parameters we will try to find out, an existing commercial robot. So, this is a 5-axis articulated robot. So what is an articulated robot? Anybody? What do you mean by articulated robot? Yeah, all the joints are rotational, then we call it an articulated robot, that is all. So now, this is alpha 2, its a commercial robot manufactured by one company. So, this is the configuration of this robot. So we can see, so there is a vertical joint, it is known as the shoulder joint, there is a joint here and there is a shoulder here and there is a, this is a base joint, which is actually shown here. So, base joint is this one, there is a base joint, then you have a shoulder joint, then you have an elbow joint, then you have a tool pitch and then you have a tool roll. So, there is no tool yaw axis that is why it has 5 degree of freedom, it does not have all the 6 degrees of freedom it has only 5 degrees of freedom and the dimensions are given like this from the base to this shoulder is 215, and from this shoulder to the elbow is 177. 8 and then elbow to pitch is 177. 8 and then the pitch to that tooltip is 129. 5. So, these are the dimensions. Now we need to see, we need to find out what are the DH parameters. So, if I want to know the tip position, position of this tooltip with respect to the base for any value of this joint angles, I need to find out a relationship and that relationship can be developed only through DH parameters. So, we will try to find out how to get the DH parameters for this robot. So what is the first step? What? Base frame. 
So first step, probably you can make it as a more like to find a home position, cause all these positions, you will be having different joint values. So, we will take a home position a comfortable home position and then try to find out the DH parameters. So I will take it as, suppose this is a home position I will say and then I will say this is the shoulder joint, this is the elbow joint, this is the pitch joint and this is the vertical, the roll axis, roll joint and this is the tool. So this one we, I call this as the, a home position of the robot. You can assume any home position as I mentioned in previous class. So, this one I will assume as a home position. So now, I have a joint here which actually can rotate with respect to this, I have another joint here, I have a joint here, I have not join here and I have a joint here also, it is for the roll. Yeah but it is a roll axis, so there may be a joint here, which I do not know where actually this, it can be anywhere. So, I put this as a roll joint and then finally the last point will be the tooltip, this is the tool. So, the first step is to number the links and joints. So, we normally give this as the link 0, the 0th link and then 1, 2, 3, 4 and then one more 5 because this is a thing that, we call it as a fifth link. Then we have the joints. So, I will put this as the joint 1. So, I will call this as joint 1 and I call this as joint 2, joint 3, joint 4 and I have a joint 5 which is the roll axis or the roll joint. Depending on the mechanical if I know the exact point where the motor is fixed and the joint is fixed then I can do it, but for the time being you can assume it anywhere this joint can be anywhere in this land, so we do not know. So, that is the first step you give the numbers, link and joint can be numbered J1, J2, J3, J4, J5, etc, that is the joints and then the link 0, 1, 2, 3, 4, 5. You just 6 degree of freedom, you will have 1 more joint, sixth joint also will be there. Now, next question our next step is basically to assign the base coordinates frame. So, we need to give a base coordinate frame. How will you assign the base coordinate frame? What is the procedure? Chinmay knows it right? No. What will you do? First thing is find out their axis. Any coordinate frame, the first step is to identify the Z axis and how do you find out Z axis? Student: Rotational axis. Professor: Rotational axis. So, what is the rotational axis the first one, this is the rotational axis. So, that going to be the, your Z0 axis and once you know the Z0 axis then you need to find out the X 0, what is the X axis. So, X axis should be orthogonal to it and in this case I can assume it in the positive direction, I can assume it in any direction I can actually assume it this way or I can assume it this way also. So, that is up to me your first one. So, I assume this as, this way because I want to operate the robot in this XZ plane, I assume that my operation plane is XZ. So, I assume that this is the X0 axis. So, I have this X and Z axis. So, what will be the Y axis, Y axis how do you get? Basically on the right hand coordinate frame. So, this is X, this is Z, if this is X, this is Z, this is Y. So, it will be inside so towards the, so I will put it as, so X0, Y0, Z0, done. I hope all of you are clear about this, there is nothing complex here. Just you need to identify the Z axis and then all the other things you can get it. I can actually think of an X axis in this direction also, then the Y will be in the other direction. 
So, I can actually have that option for the time being I am just taking it for convenience. So, what is the next step? So, we have got the base coordinate frame L0. So, I call this frame as L0, coordinate frame L0. Now, next step, I have two find out the next coordinate frame, next joint coordinate frame I have to find and what will you do Z axis, first thing is Z axis and Z axis is always assigned based on the joint rotational axis. So, this is the rotational axis, this is the point. So, I assume that this is the Z axis, I am mean with respect to that axis it is rotating. I put it as Z1. I put it here. Once you assign the Z axis what is next, not X axis. We do not know where the origin is. So, origin need not be at the joint always. So, do not take it for granted that the joint will be always the origin. So, we have to find out where is the origin of this coordinate frame and this origin of coordinate frame is obtained by taking the intersection of Zk minus 1 and Zk. So, find out where they are intersecting. Zk minus 1 and Z k, they are intersecting at this point. So, we will assign the origin L1 at this point that is the origin of the coordinate frame here. Now X, X1 should be orthogonal to Z0 and Z1. So Z0 and Z1 should be orthogonal. So I will take it like this. Since I took my first X in this direction, it is always good to take in the same direction. So, it will be always along the axis, otherwise you will get end up with the DH parameters differently. So, this is how you can take the X1. So what will be y, downward, so you will get the Y axis like this. X naught, how will you get it. Yeah, so first you fix X naught. So, X naught I can actually say that X naught is in this direction or this direction I can say. I have an option. Yeah you can. So, basically when I do this, I am telling that initially I assumed the robot to be in this plane but now I am saying that my X is in this direction my robot is actually in this plane. That is what I am telling. So, there is a theta already there with respect to the previous one that is our only thing. So, does not that really matter whether you are take the X0 this direction or this direction, Y also will change, accordingly the parameters also will change later. So, do the same thing for the next joint. Look at the axis. So we will find out the Z axis which is this joint. So, we got the coordinate frame for this Y1. Then do this, repeat the same thing for the next one also. So, when you do this. Yeah, you can actually assume it in this direction also. So, that is what you are saying. Right? That rule is only when they are not, they are parallel. If they are parallel, it should be away from that one, away from Z0. Your question is whether this X can actually go in the other direction? No no okay. Okay we are coming to that, you are jumping. So, this is Z2. So, Z1 and Z2 and origin the next is origin where is L2. So, this is L1, this is L2. So, where is the origin of the second coordinate frame L2, what do we get, intersection of Z1 and Z2. So, where is the intersection, are they intersecting? No they are not intersecting. So, what do you do? You find a common normal and the intersection of common normal with Z2 will be the origin. So, your L2 will be, origin will be here and then you can assume X2 here since all the X2 are same way, I mean you are assuming this direction you can assume this X2 as this way and therefore, Y2 will be this, X2 Y2 Z2. 
So, please assign next one, next coordinate frame please assign, let me see whether you are able to do it. Pardon. Coincide. Where is Z axis? This is Z axis and where will be the origin, Z2. Correct right then where will be the origin, origin is intersection of Z2 and Z3, they are not intersecting, they are parallel. So, take a common normal and find out where the common normal intersects Z3 that is your L3. So, this is L2, L3. Now, you have a joint which is along this direction. So, the next joint we do not know where the origin is, where the coordinate frame is. So, we try to find out what is the next Z axis. What will be Z4, what will be the direction of Z4? What is Z4, basically what is that the joint? Student: Roll. Professor: Roll. What is the roll axis? This tool is rotating, so this tool is rotating with respect to an axis, what is that axis? So, that axis is this one. That is the Z4. Direction of Z4 is along this. Where will be the origin of the frame, where will be the origin? Intersection of Z3 and Z4. Basically that is the way how you get the origin. So, here Z3 and Z4 are intersecting at this point and therefore, your L4 also should be at the same point. So for clarity, I will just mark it somewhere here, because I do not want to show both the frame there. So, I will share that it is the same point but I am trying to mark it. So, this will be Z4, X4 will be orthogonal to both. So, it can be up or down, you can take in both directions. So, basically we are telling the tool can actually go up or go down, that is basically we are saying. So, you can actually assume it is either up or down and accordingly you give a Y axis also. So, you can actually find out, I mean first find out the Z4 axis and then decide that your X4 axis and once you identify X4 you identify that Y4 axis. So, I assume X4 as up and then Y4 will be in this direction. You can assume this direction and Y4 can be other directions, just basically we want this to be orthogonal. I hope you understood this point because I have found that many people make mistakes here. They do not know where actually the origin is. So, they will put somewhere here in origin, saying that there is a joint here also I will put an origin here. See there is no joint, there is no origin there, the origin of the coordinate frame should be always the same point in this particular case and many a cases where the roll axis is there, you will find that this is happening and there is a roll, roll, pitch, yaw. So, most of the time you will find that this is the situation, because roll is the last joint, so always you will be having this kind of a system. But you have 3 dimensional case no. So, our theta is only giving you one plane, but this may be already moved to another plane and then start moving there. So, how do you actually relate that position to this. So, we need to have the Cartesian representation here and you have only one joint variable. So, not possible to represent it that way, got it. So, now, the last coordinate frame, when k is equal to n, k is equal n is k is equal to 5. So, you need to have a coordinate frame L5. So, this is L3 and L4. This is L4, so your frame L5, where will be the L5? So, L5 is always assigned at the tooltip. So, the origin will be always here and the coordinate frames are assigned based on the normal sliding and approach vector. So, normally is X sliding is Y and approach is Z axis. So, normal sliding and approach vector. So, approach vector is basically along the tool rotational axis. 
So, if this is the tool or this is the tool. So, this axis is known as the roll axis, so you will be calling this as the approach vector. So, this will be Z5 and then Y is basically the sliding and then this will be the normal vector. So, a normal, sliding and approach, so this will be X, this will be Y. They are not joints, L6 is, L5 is not a joint, it is a coordinate frame. We are not saying that it is a joint, we are saying it is a coordinate frame you have to assign to get it. So, L0, L1, L2 represent the coordinate frames, not the joints, but some of them will be attached to the joints, the last one is not attached to any joint. So, any robot if you take n degree of freedom robot, you will see n plus 1 frames, because we need to have a tool frame also. Pardon, L4, L4 is for the roll joint, roll of the tool. So, because there is a motor here which actually provides you the roll motion and that is basically L4. Because this is pitch motion, then there is a roll motion which we need to find out. So, that is provided here. L5 basically tells that it is a tool tip point, which is of our interest. So, we want to know where the tooltip is moving. So, we assign a coordinate frame there and tell that wherever the coordinate frame goes, that is the tooltip position. So, that position of the origin that coordinate frame is the tooltip position. So, that is what actually we mean by that. So, what is next? What is the next step? Next step is basically to find out something called bk. So it is an imaginary point, which we want to assign in order to get the parameters. So, what is bk, bk is the intersection of Xk and Zk minus 1. So, bk is the intersection of Xk and Zk minus 1. Now look at X1 and Z0, so b1 is X1 and Z0, b2 is X2 and Z1, b3 is X3 and Z2 like this. So, find out the intersection of X1 Z0. So, X1 Z0. So you have the b1 here. Where will be b2? b2 will be the same point because X2 and Z1 intersect at the same point. X2 is this one and Z1 is this, so X2 and Z1 intersect that b1. So, this will be b2 and then you will be having b3 here, because Z2 and X3 and b4 will be here, b5, what is b5? X5 and Z4, where will they intersect? X5 and b, this is X5, X5 and Z4 where do they intersect? Student: Tooltip. Professor: Tooltip. So, you have b5 here. So, we have b1, b2, b3, b4 and b5. So, now we are ready to get the DH parameters. For each joint we will be getting a DH parameter. So we can actually have a table theta, a d and alpha, joint 1, 2, 3, 4, 5. So theta a, d, alpha these are the parameters associated with joints and we will be able to get the parameters. So, in this case which one is the variable and which one are, which are constants? Which one will be a variable here? Student: Theta. Professor: Theta, because all are rotary joints, so all thetas will be variable. So, we do not need to really worry about the actual value of it because depending on this configuration it may change. So any theta, it can take any value of theta, so in this configuration you can find a value, but when it changes the configuration theta will keep changing. So, we do not really need to know the theta this stage, so we will be make it theta 1, theta 2 as variables, theta 1, theta 2, theta 3, theta 4, theta 5. But if you want you can find out what is the angle between X0 and X1, X1 and X2, X2 and X3, X3 and X4 that is the way how you get the joint angles, you can calculate it. There is no offset in this case. So, for this home position, you will see that there is a 90 degree theta, for this position. 
Yeah, so mean you can actually, that is what, you can actually find out because this X3 and X4, this is X3 and this is X4, so there is a rotation. So, you will find there is an angle between X3 and X4, X4 and X5 it will be there. So, why I am saying is that our whole aim is to get a relationship in terms of theta. So, this is need not be considered as offset, because this offset depend on what how you actually just assumed it. Student: For home position. Professor: Yeah, so for home position, so you can say home position it is 90 degree that we can say in this case. So, for the time being, I will make this as a variable and I assume that there is no need to calculate the value per say in this case, because it is a variable in any case. For different configuration, you will get different values of theta. So, I do not really worry about the current value for this home position, because this is just a arbitrary position I have taken from this position. Therefore, I am not really worried about the actual value here now, but I can, if I want I can find it out, no issues. Now, the next one is, let us go for dk. So, let us find out what is dk. How is dk defined? dk is the distance between L0 and, Lk minus 1 and bk measured along Zk, Zk minus 1. Can you check? So, basically this is L0 the distance from L0 to b1, L0 to b1 measured along Z0 is basically d1. So, d1 is measured as the distance from Lk minus 1 to Bk along Zk minus 1. So dk, this will be the d1, so d1 you will see that the distance from here to here, measured along this axis is bk, which is given as 215 here. So, d1 will be 215. What is a k? ak is the link length, ak is defined as the link length measured along Xk and Lk to Bk. L1 to b1 measured along X1, that is a1. Please check your notes. What is ak, so ak is the distance from, what is it? No one has the notes. What is ak defined? bk. Yeah. bk to Lk along Xk. So, b1 and L1 are at the same point and therefore, you will see this as 0. a1 is 0, b1 is 215, what is alpha 1? Alpha 1 is the angle of rotation from Zk minus 1 to Zk measured with respect to Xk. So, alpha 1 is X0 to X1, Z0 to Z1 measured about X1. So, what is the angle X0 to X1, Z0 to Z1? Z0 to Z1 measured with respect to X1 is alpha 1. So, alpha 1 is Z0 to Z1 measured with respect to X1. Now look at this, this is X1 and this is Z0 and this is Z1. So, now you have a X axis, you have an X axis like this. So, this is the X axis and this is the Z1. So, originally this X0 was like this, Z0 was like this and then with respect to this it is rotated like this. So, you look from X1, this is X1 with respect to X1 the rotation is like this. So, it is clockwise 90 degree it has rotated. So, Z0 has rotated by an angle 90 degree in the clockwise direction with respect to this axis. So, what will be the angle, what will be alpha 1? Alpha 1 will be minus 90. So, alpha 1 is minus 90, you have to do the same procedure for all the joints, then you will be getting all this parameters. Yeah, each one we have to see, Z3 and Z4 with respect to X4. So, now what we will do we will assume this Z4 and X4 or we will go one by one or we want to jump to this. So your Z, so this is this one, Z4, X4 and Y4. Now, the Z3 was here. Now we have to measure with respect to X4, the rotation with respect to X4 to be measured. So, it was like this, Z3 was like this it has rotated like this. Now, if you look at from this point, this moved from, this Z has moved from here to here. So, if you look from here, it is again a clockwise rotation of 90 degrees. 
So, you will see this as minus 90 degree rotation. So, again alpha will be minus 90. So, this is the way how you look at the rotation with respect to the X axis and then find out what is the alpha here. Because we have now all the parameters, we have these Bks. So, using Bk we can always find out dk and then find out the ak, also alpha can be find out with rotation of Z axis with respect to the X axis. So, this way if you do all these steps you will get the parameters as this way. So, you will see this d is 215, alpha is minus 90, a1 and a2 will be this one, this is a1, a2 and a3, a1 is 0, this is a2, a3 and there is no a4, a4 is 0 because there is no length over here, link length, because origin is at the same point, therefore you do not see an a4 and then you will find that d5 is this distance. This is the d5 distance and alpha is minus 90 and this minus 90 which I already explained how you get the minus 90. So, from here this Z3 to Z4 with respect to X4 is actually a clockwise rotation of 90 degree and therefore you have a minus 90 rotation. You want me to explain again, have you understood? Is there anyone who needs one more round of explanation of getting these parameters? Then I assume that you have understood this and how to get the parameters. So, what we will do, tomorrow I will give you some examples in the class. I already, these questions are there in the moodle. So, please try to solve it and then come to the class and then you see whether you are able to get the right answers, then submit in the class. So, I want you to submit the answers to make sure that you have understood the procedure for assigning coordinate frames and getting DH parameters. Because one of the most important task in manipulator kinematics is getting the parameters. Once you got the parameters then things are very straightforward. You just need to substitute this values in your relationship you will get the parameter, I mean the forward relationship. So, why do we need to know all these parameters because you can actually give the relationship. So, once you have these two coordinate frames here, so one coordinate frame here and another coordinate frame here, assume that this is Z1, X1 and Y1. So, the relationship between these two coordinate frames can always be expressed as a function of these parameters and therefore, the relation between these two coordinate frame can be represented using a transformation matrix, which will be 1 to 0. I can find out a transformation matrix a homogeneous transformation matrix, which relates this coordinate frame to this coordinate frame and that can be expressed using this four parameters theta, d, a and alpha and so, up to here also if there is a coordinate frame here, I will be able to again represent it using this four parameters. So, what are this four parameters or how they are related. You can say that how much it is moving along Z axis the coordinate frame is represented by d. How much move moves along the X axis is given by a, so the coordinate frame can move in this direction I mean in one axis and the another axis also and then it can actually rotate with respect to its own axis, about X or Z axis. So, theta represents the rotation with respect to Z axis and alpha represents rotation with respect to X axis. So, you can have two rotations and two translations, which will completely define displaced position of this coordinate frame. 
I can say that this coordinate frame is obtained by having one translation, then another translation, one rotation and another rotation will bring this coordinate frame completely to this point. So, if I have a coordinate frame here, I can get this coordinate frame, this coordinate frame can be obtained by four transformations, four individual homogeneous or fundamental homogeneous transformations will make this coordinate frame to this coordinate frame and therefore, I can say that, this is explained here. So, you have this Z axis here, this coordinate frame here. Now, how this coordinate frame is becoming to this coordinate frame how they are actually, how can I say that the transformation of coordinate frame, you can say it translates along Z axis and then translates along X axis then it rotates this and then there is a theta rotation which I cannot show here. So, it can rotate with respect to Z axis and you get a rotation about theta also. So, all the transformations can be represented using this four fundamental operation, which is rotate Lk minus 1 about Zk minus 1 translate Lk minus 1 along Zk minus 1, translate Lk minus 1 along Xk minus 1 and rotate Lk about alpha. So, these four fundamental transformations make the transformation of this coordinate frame to this coordinate frame and therefore, if I can find out this four transformation matrix using this parameters theta k, dk, ak and alpha k, I will be able to find out what is the transformation from this coordinate frame to this coordinate frame, if I know these parameters. And same way I can do this for any number of coordinate frames and finally, I will be able to get the transformation of this coordinate frame to this coordinate frame using the fundamental transformation and this one, we call it as the arm matrix, a matrix that maps the frame k to k minus 1 coordinate is known as the arm matrix, homogeneous matrix. So, we will see how to get this matrix and then represent the transformation using the four parameters. So, that is the next step. So, we can represent between these two coordinate frames by one matrix, the next coordinate frame if I have another coordinate frame here I can represent this transformation by another coordinate frame, like this I will be able to get all the coordinate frame transformations and finally, I can get the transformation from this frame to this frame. So, this is the transformation which relates the coordinate frame, one coordinate frame to the other coordinate frame. So, we will discuss this in the next class, how to do the transformation get the forward relationship. So, please come prepared for tomorrow. The tutorial questions are already there, come prepared. You have to submit the answers at the end of the class. Thank you. |
Introduction_to_Robotics | Lecture_211_Inverse_Kinematics_Examples.txt | Hello, welcome back. So, we are discussing the Inverse Kinematics of Manipulators, in the last class we briefly mentioned about the method by which we can solve the inverse kinematics and we took a very simple example of a 2 degree of freedom planar manipulator to show how the equations can be solved and then we consider a 3 degree of freedom manipulator a planar manipulator as shown in the slides. And we identified the DH parameters and then we got the forward kinematics relationship. And once we have the forward relationship, we write down the equations which actually represent the matrix and the using the arm equation we will try to solve the solve for the joint variables. So, here you can see that PX was l1 c1 plus l2 c12 PY is this and then nx, ny, sx sy, so this is the way how we get the equations. So, we write down these equations and then try to solve for the joint parameters. So, we start with these two equations PX and PY and we found that the PX square plus PY square if you write you will be able to get this in terms of c12. So, we will be writing this as c2 is this c2 is PX square plus PY square minus l1 square plus l2 square divided by 2 l1 l2. So once we get c2, we do not directly get (theta), use c2 for getting the theta 2 we will try to find out s2. And from s2 we will using s2 and c2 we will find theta 2 using the function atan 2. So, this is what we discussed in the last class the atan 2 gives you the value of joint angle in the correct respective quadrant. And then, so once you get theta 2, we will go for solving for theta 1, so solving for theta 1 is not that easy, so we need to do some substitution, so what we do? We assume that k1 is l1 plus l2 c2 and k2 is l2 s2, so we assume this because since we know c2 and s2 and we can say that k1 is l1 plus l2 c2 and k2 is l2 s2. And then we write PX is k1 c1 minus k2 s1 and PY is k1 s1 minus k2 c1. So, now this is in terms of theta 1, so PX and PY are expressed in terms of theta 1 here. Since k1 is known k2 is known this would be only these are all constants. Now, we cannot directly again solve this directly, we need to do one more substitution, so what we do, we will write the k1 as r cos gamma and k2 as r sin gamma, where r is k1 square plus k2 square and gamma is a tan 2 k2 k1. Again, since k1 and k2 are known, we will be able to get r and gamma. So, we will write k1 is r cos gamma, k2 is r sin gamma. Now we will write this equation PX is can be written as PX by r is equal to cos gamma cos theta 1 minus sin gamma sin theta 1 and PY by r is this one and therefore cos gamma plus theta 1 can be obtained, similarly, sin gamma plus theta 1 can be obtained and since we know this, since we have this, we will be able to get gamma plus theta 1 as atan 2 of this. So, we will be writing this gamma plus theta 1 as a tan 2 PY by r PX by r and so it will be PY PX and theta 1 is a tan 2 PY PX minus a tan 2 k2 k1. So, this is the way how we get theta 1. So, now theta 1 and theta 2 we obtained, now we need to get theta 3, so to get theta 3 we will find out theta 1 plus 2 plus 3 and then find out theta 3. So, theta 1 plus 2 plus 3 can be obtained from this relationship, so ny nx can be used to get theta 1 2 3 because ny is sin theta 1 2 3 and this is cos theta 1 2 3 so there will be a tan 2 of this will be giving you theta 1 2 3. And once you get theta 1 2 3 you would be able to get theta 3 as theta 1 2 3 minus theta 1 minus theta 2. 
So this is the way how you can actually solve for all the join angles theta 1, theta 2 and theta 3. Now you can see that there are lot of substitutions we did to get this. So, one question will be how do we know what to be substitute and how to we actually solve it? So, this is one challenge in inverse kinematics because for each manipulators the equation will be different, there is there is no standard equation that you can see for the manipulators depending on the number of degrees of freedom depending on the configuration of the robot, your PX the relationship PX, PY, PZ extra will be completely different and the if you have to solve this we need to know how to approach the problem and then how to solve it, that is a bit complex that is why the solution inverse itself is a difficult area. To make it simple, we normally what we do is to identify some trigonometric identities and then or standard trigonometry relationship and then get the standard solution for that, so whenever we can actually convert this equations to some standard form, then we will be able to easily solve it, or there are standard solution for the such standard forms so we will try to convert these equations to some standard forms and then solve it. So, we will they see what are the standard form that we can use, or what are the commonly used equations in this kind of equations and then how do we solve it. So for example, if you have a is equal to b sin theta as a form, if a is equal to b sin theta then we can write sin theta is equal to a by b and cos theta is square root of 1 minus a by b square and therefore, theta can be written as atan 2 a by b square root of 1 minus a by b square. Similarly, a is equal to b cos theta, that also we can theta is equal to atan 2 square root 1 minus a by b square a by b, so this is the way how we can actually get the solution, these are simple one so we can directly get it. Now, suppose we have another one as a is equal to b cos theta c is equal to d sin theta, so here a is b cos theta c is equal to d sin theta. So, again we can write sin theta is equal to c or d and cos theta is equal to a or b so you will be getting theta as atan 2 this format. So, these are all standard formulations and similarly next one is a sin theta plus b cos theta is equal to 0, suppose we have relationship like this a sin theta plus b cos theta is equal to 0, then we can write theta is equal to atan 2 minus ba it is atan 2 minus ba because we can actually get it as a sin theta is equal to minus b cos theta, that is why we get it as theta as atan 2 minus b a. So, another important formulation is a sin theta plus b cos theta is equal to c, a sin theta plus b cos theta is equal to c is a formulation with a format or a standard equation, then solution will be theta is equal to atan 2 a b plus atan 2 plus or minus a square plus b square minus c square comma c, that is a sin theta plus b cos theta equal to c. So, whenever you see an equation like this or whenever you are inverse problem comes up of the equation or you can actually bring down that equation into this format, then you do not really go for substitution, you what you need to do is to use this rule and then get the theta because this is a standard solution for this equation. So, what we are doing is atan 2 a, b plus a tan 2 plus or minus square root of a square plus b square minus c square comma c, so that is the solution. 
Now, suppose you have something like a cos theta i plus b cos theta j equal to c and a sin theta i plus b sin theta j is equal to d, then the solution is this one theta i is this and theta j is this this and the solutions come from using the substitution and then solving it since it is standard equation that is a standard solution, so we do not need to really solve it, we can just substitute this and then get the answer. So, this kind of thing will be normally coming like a cos theta 1 a cos theta 1 plus b cos theta 2 is equal to c something like that is the format here. Similarly, a sin theta 1 plus b sin theta 2 is equal to d, if this is the format, then this is the solution for theta 1 and theta 2, that is what actually it says. And you will often encounter this kind of equations in inverse problem. Another one is, this is a sin theta plus b cos theta is equal to c, a sin theta plus b cos theta is equal to c, a cos theta minus b sin theta is equal to t, that is the relationship, then solution is this one, ac minus bd ad plus bc a and c are known b and d are known therefore you can directly get this as theta is equal to this. So, this is again format of a s1 plus b c1 equal to c and ac 1 plus, ac 1 minus minus b s1 is equal to d that is the format here. So, theta 1 can be obtained using this relationship theta 1 is atan 2 ac minus bd ad plus bc, so that is the way how we can solve this equation. The next one is another format a cos theta i plus j theta i plus theta j plus b cos theta i is equal to c a sin theta i plus theta j plus sin theta is equal to d, that is you have a cos theta 1 plus theta 2 plus b cos theta 1, so it is a cos I will write it as 12 a cos 12 plus b cos 1 equal to c similarly, a sin 12 plus b sin 1 equal to d. So, this is a very common thing that you will be seeing in the inverse kinematics ac 12 plus bc 1 is equal to c and as 12 plus bs1 is equal to d. So if this is the format, c12 stands for c theta 1 plus theta 2 cos theta 1 plus theta 2. So, if this is the format that you are having, then the solution is this, theta j so theta 2 will be this and the cos theta j is this and sin theta j is this, therefore you will be getting this as atan 2 sin theta j cos theta sin theta j cos theta j. And then theta i is atan 2 rd minus sc rc plus sd where r is a cos theta j plus b s is equal to a sin theta j so this is the way how you will be getting the solution. So, this was the same thing what we actually we saw in the previous example also where we tried to substitute for all these values and then try to get the solution, but now if you get this kind of a formulation, then directly you can write the solution as theta i and theta j in this format, you do not need to really go substitute and then solve for it. So, in the previous example actually we solved it and then got the same result what we are seeing here like PX is l1 c1 plus l2 c12 and PY is l1 s1 plus l2 s12 it is the same format that you can see. So, l1 c1 plus l2 c12 l1 s1 plus l2 s12 and then we saw that c2 is PX square plus PY square minus l1 square plus l2 square by 2 l1 l2 which is the same as this and the same way we actually solved it for solved for s2 also by substitution. So, once you have this equations in this format or if you can bring down the equations into any of these standard commonly used equations, then you can use the solution of that equation to get the to solve the problem. 
So this is, again you do not need to remember these equations, I mean this standard formulations and all, so you can actually take it as a reference and then use this reference and then solve for the inverse, but you need to bring this the equations in the inverse problem to any one of this format and then you will be able to solve it. So, with that background let us take an example of an industrial manipulator and then see how to solve for the inverse kinematics of this manipulator. So, this is a 5 axis articulated arm Rhino XR-3 we have discussed this in one of the example in forward kinematics. Now, you want to solve this for its inverse kinematics, that is if I know this position I want to find out what should be the joint angles to make this tool point reach is this position or whatever may be the position I give this PX, PY, PZ I want to know what should be the joint angles to reach this position, that is the solution and that is the problem here to be solved. And once you have a solution for a given manipulator, that is there always no you do not need to worry about that manipulator again because every manipulator first we need to find a solution then that can be applied for any other situation. So, it is all very specific to the manipulator, so if you if you design a new manipulator, you need to see whether you can actually get inverse solution either by close form method and if you can get into a closed-form, then you can solve it and then use it for all other applications. So here, this rhino RX-3 is an industrial robot 5 axis robot and the DH parameters are given here, so we can actually find it out from the method that we discussed already. So try to find out the DH parameters and then get the forward kinematics solution, so that is the first step in solving the inverse we need to have the forward kinematics solved so that we will be able to write down the relationship the arm matrix to be solved. And for that we need to get the DH parameters and then use the DH parameters to get the transformation matrix and then get the arm matrix. Once you have this relationships, so let us see how do we get this, so we can actually write it as like this, so you have PX will be getting the forward relationship like this PX is equal to c1 multiplied by a2 c2 plus a3 c23 plus a4 c234 minus d5 s234. And this will be PY equal to s1 multiplied by the same factor and PZ is d1 minus a2 s2 a3 s23 a4 s234 d5 c234 and then and next so this is PX, PY, PZ then you can see nx is this, ny is this, ax is this, ay is this, of course you can get others also from the forward relationship. It is a 5 axis robot so any arbitrary orientation is not possible, you can solve for 5 joint angles here, theta 1 to theta 5 can be solved. So, the first question is about the solubility of the manipulator, so we know that the necessary condition is that the points should be within the workspace and there we do not give any arbitrary orientation. So, in this case you cannot have arbitrary orientations because it is only 5 degree of freedom, so we can specify only two orientations and then the third one will be automatically obtained. 
So, the sufficiency condition for closed form solution is the next one to be checked, so if there is not sufficiency condition is not satisfied, then we will not be able to solve, this relationship will get but the sufficiency is not satisfied, then you would not be able to solve it using algebraic methods, and how do we check this the conditions sufficiency condition for closed form solution is that the 3 adjacent joined access should be parallel or 3 adjacent joined access should be intersecting at one point. So these are the two conditions, any one condition should be satisfied in order to have a closed-form solution. So, now if you look at this you can see this is the first axis z0, then you have this z1, then z2, z3, z4 and this is the z5 and these are the axis 1, 2, 3, 4, 5 axis this is the final coordinate axis. So, these are the joined axis up to this is the joint axis. Now, we know that the intersection or parallel we have to find out, so we can see this one so z1, z2 and z3 they are actually intersecting, they are parallel so all the three joined axis are parallel they are adjacent, so the adjacent joined axis are parallel and therefore we will be able to get a closed-form solution for this manipulator, so that is the first thing that you can get the closed form solution for this manipulator because it satisfy the sufficiency condition for closed form solution. And since it satisfies, we know we can actually solve this equation, these equations can be solved. And one important point you can actually see here is that since they are parallel this 1, 2, 3 are parallel, so this is basically theta 1, this is theta 1, theta 2, theta 3, theta 4, theta 5 so this is the joint angles. Now, since there is 2, 3, 4 that is the joint 2, 3 and 4 are parallel you will see some relationship like here, c234, s234 so you will be able to see this. So, whenever the axis are parallel adjacent joint axis are parallel you will be getting this is as the compound angles theta 2 plus theta 3 plus theta 4. And once you have this as 2, 3 like this then it is easy to solve. Suppose they were not parallel, so theta 4 was not, I mean this axis was not parallel, then you will be able to getting this as d5 s23 and s4, so this will be d5 s23 s4 if it is in this condition or it was in this format, then it be difficult to solve this algebraically and that is why we say that when they are parallel adjacent joined axis are parallel we will be able to solve it because they are coming as a compound angle and you will be able to solve it, otherwise this difficult to solve that is from where the sufficiency condition appears. So now, we got these equations and we know that we can write this PX is this, PY is this and then this one, now the question is how do I get theta 1 to theta 5 as a function of PX, PY, nx, ny etcetera that is the question. So, let us see how to solve it but you get to look at this equation and then try to find a method that is now standard procedure the only thing that we can do is look at these equations and then see what how we can solve it. But one thing is sure that we can solve, yes it is a it is satisfying the sufficiency condition so we are sure that it can be solved. So we will take this, the first two equation, so we will see that PX is this and PY is this and now looking at this equation, we will be able to see something can be done to solve it. So, what is the thing that we can do? Now what is PX? How we can write PX c1, can we write c1 from here? C1 is PX by something s1 is this one. 
So, we can actually see that PX over PY so we can say PX over PY or PY over PX, so PY over PX is equal to s1 over c1. So we can see this because this factor is same for both, so if you take you divide this equation, then you will be getting this as s1 c1 which is nothing but tan theta 1. So, we can actually use this one and then get theta 1 as atan 2 PY PX, so theta 1 can be easily obtained as atan 2 PY PX. So if PY and PX are given to you, you can easily find out what should be the theta 1 for this robot to reach the this side PY and PX. Now, if you look at the manipulator if you look at this manipulator then you will see that, see this is in the x0 xz plane, so this manipulator now it is shown in xz plane. Now, if you know that if it has to come out of this plane, so if this is the z plane, then if it has to come out and then reach at y position, the first join has to move, that is by join moving this theta 1 it will be coming out of the plane and it can actually reach a y position. So, whatever is the y position given, that is decided completely by this joint angle theta 1 and that is very clear from your this relationship also, so theta 1 it is basically depending on what is your PY value and PX value you want to reach that will be decided by theta 1, so you can come out of the plane using theta 1 only, that is the PY PX. So, you got the theta 1 now. Now we have to solve for theta 2, 3, 4 and 5. So, let us see how do we solve it, so look at this relationship, ax is given as minus c1 s234 ay is given as s1 s234. So, if you now c1 is known and s1 is known because theta 1 is known so we know what the what is theta 1 cos theta 1 what is sin theta 1. So, we can write it as s2unwUt3kkgvE34 is ax minus ax c1 plus ay s1 because this ax is minus c1 s234 this ay is minus s1, so you multiply ax with c1, then it will be c1 square and this will be ay s1 will be s1 square and if you do this, s234 minus of this, you will be getting s234 and then c234 can be obtained from minus az. So, az is minus s234 c234. So, we know s234 and we know c234 and therefore we can get theta 234 as atan 2 minus ax c1 ay is 1 minus a. So, we do not need to do too much of substitution here, we can directly look at the equations and then see it can be solved, so we get theta 234 from this relationship. So, we have theta 1 now and then we have theta 234 but we do not know what is theta 2, theta 3 and theta 4 but we know the total of 2 plus 3 plus 4 is this. And again, if you look at the manipulator we can see decide the ax, ay and az that is the orientation in the plane will be decided by theta 2 plus 3 plus 4. So, the approach vector is actually decided by theta 2 plus 3 plus 4, that is ax, ay and az so we can see ax, ay, az is used to get the theta 234, of course theta 1 also besides that one because it is coming out of the plane, so theta 1 and theta 234 completely decide the approach vector for the manipulator. Now, so we have theta 1 and theta 2, 3, 4 now look at other equations, we have nx is equal to c1 c234 s5 and ny is s1 c234 c5 minus c1 s5. So in this case, we know this c1 and c234 only unknown is c5. Similarly, s1 is known, so s5 is not known, here also s1 is known c234 is known c5 is not known, here s5 is not known. Similarly, here also sx is there minus c1 s234 s5 s1 c5 sy is minus s1 c234 s5 minus c1 c5. So, these are the these are the two equation nx, ny, sx, sy. 
Now, if you write this as if you take nx s1 so now you multiply nx with s1 and ny with c1 and then subtract, you will get it s5 because this will be nx s1 will be c1 c234 s1 c5 and s1 square s5 and this one will be ny c1 will be s1 c234 c5 c1 minus c1 square s5 and when you subtract this two terms will actually cancel, what you are getting will be s1 square s5 c1 square s5 and when you add you will be getting only s5. So, you will be getting this is as s5 is nx s1 minus ny c1 and the same way if you substitute sx s1 so it will be getting sx s1 minus sy c1 as c5. So you will get s5 as this and c5 as this and since you know nx I mean since you know s1 you can easily find out s5 and c5 and therefore theta 5 can be atan 2 sy c5 and sy is this one nx s1 minus ny c1, so theta 5 also can be obtained from the orientation part and you know theta 5 actually decide the orientation, I mean the normal, approach vector will be, the sliding and the normal and sliding vector will be decided by this, so you will be getting it as theta 5. So, theta 5 is atan 2 s5 c5. So, we got theta 5 also, theta 1, theta 2, 3, 4 and theta 5 we obtained but still we do not have theta 2, and 3 and 4. So, we have to see how to get that one. Again we have to look at this equations and then see how can we solve this. Now if you look at this part, PX part so look at this PX part, so in PX so PX is equal to c1 multiplied by something and we know now theta 1, we know theta 2, 3, 4 we know this 2, 3, 4 also. And therefore, we will be able to write PX over c1 is equal to or px or c1 plus d5 s234 minus a4 c234 is equal to a2 c2 plus a3 c23. So, we will be able to write it like this and we know this term we know and this term we know because theta 2, 3, 4 is known and therefore we will be able to write down this and c1 is also known theta 1 is also known, so this becomes a constant now, I mean this is not is a constant now and this is only what is a2 c2 plus a2 c23 we do not know. So, we can write it as a2 c2 plus a3 c23 is equal to a constant b, we will say b is equal to a2 c2 plus a3 c23. And in this one we know 234 and this 234 and therefore we will be able to write this as PZ again we will be able write this as a2 s2 plus a3 s23 is equal to another constant d, that is PZ from using the PZ relationship, we will be able to write down the these two equation using these two equation we will be able to get these two relationship and that is given here. So, we can write this as, so PX so P equation 1 gives a3 c23 plus a2 c2 is equal to this one and equation 3 gives a3 s23 plus a2 s2 is equal to this one. So, we get these two relationship and now you can see that they are actually falling in a standard form as a standard equation that we discussed earlier. So this is actually in the like it like a3 c23 plus a2 c2 c equal to a constant a3 s23 plus a2 s2 is equal to another constant. And this is a standard format that we saw in the one of the slides earlier and since this is a standard equation, we can use the standard solution for that. So, this is actually in the format of a3 a cos theta i plus j b cos theta i is equal to c format. So, this is the same format what you can see here, theta i plus j is theta 1 plus 2, that is this c23 and this is b cos theta 2. So, cos theta a cos theta 23 cos 23 b cos 2 is equal to c format, similarly this also. So and since we know this c, d and a and b, we can write down sin theta j cos theta j is this and sin theta j is this and theta j can be obtained like this. 
And similarly theta i theta 2 can also be obtained. So, theta 3 can be written using this method and theta 2 and theta 3 can be obtained using this relationship. So, we will be getting cos theta 3 as c square plus d square minus a3 square minus a2 square by 2 a3 a2 cos theta 3. And sin theta 3 will be 1 minus cos square theta 3, so that you will be getting sin theta 3 and cos theta 3, therefore theta 3 can be obtained as atan 2, that is the way you get the theta 3. Now, theta 2 can be obtained again from here, you can see theta 2 will be atan 2 rd minus sc rc plus sd. So, apply the same r and c because we know r, r can be calculated a cos theta or cos theta j and plus b s is a sin theta j and therefore we will be getting theta 2 as atan 2 rd minus sc rc plus sd, where r is a cos theta 3 plus a2 s is a3 sin theta 3 and therefore we get theta 2 also here. So, theta 2 we got, theta 3 we got so we got theta 1, theta 2, theta 3 and theta 5 and plus we have theta 2, 3, 4 also. So, the only thing what is remaining is theta 4 and theta 4 can be obtained, of course from theta 234 you subtract 2 and 3 you will be getting theta 4. So, that is the way how you can get theta 4, so theta 4 can be obtained so theta 4 is equal to, you can write theta 4 is theta 2, 3, 4 minus theta 2 plus theta 3 because we have solved for theta 2, theta 3 and we know theta 2, 3, 4 also and there theta 4 also can be obtained. And this way you will be able to get all the joint angles solved for this manipulator. So, that is the way how we get the inverse solution for a manipulator. Now whatever maybe the manipulator configuration whether it is 6 axis or a 5 axis or a 4 axis we will be able to solve the equations provided it may satisfy the sufficiency condition for closed form solution. If the closed form solution is not existing, there is no point in solving it you would not be able to solve it, you may have to go for a numerical solution, so that is what actually we have to do to solve any manipulator inverse problem. So, as you can see it is not a standard formulation for any particular manipulator, you have to look at the equations of the forward equations or the matrix and then see what is the best way to solve the equations. I hope you understood the principle of, I mean how we actually how we actually solve the inverse kinematics problem. So, this is an home work for you, so you can consider a 4 axis SCARA robot, it is ADEPT one SCARA it is a commercial industrial robot and we need to solve for its inverse problem. So, we can so this is the configuration, so you can see there is an axis here, and axis here there is an axis here and then there is a rotational axis also here, these are the 4 axis so you have one rotation another rotation and one up and down motion and a tool throat so for joint axis. And the parameters are given, so the first step is the first step is to so draw a home position, assume a home position and then identify the DH, assign axis assign all the joint axis and the coordinate frame and then find out the DH parameters and then find out the forward relation or the arm matrix. Then, write down the equations and solve. So, when we before going for the solution, see whether you can actually get a solution or not. So, the necessity condition is that the three joint axis should be parallel or three joint axis adjacent joint axis should be parallel or the adjacent joint axis should be intersecting. 
So, in this case you can see that these 3 joint axis are parallel, so all the adjacent joint axis are parallel and therefore it will be always able to solve it with a closed-form solution. So, that I hope you will be able to solve it, so please try to solve this example and then if you have any difficulty, please let me know. So, just giving you the DH parameters here going to make your life easy. So, these are the joint parameters, so you can see a alpha pi in this case because its direction is changed. So, we discussed about the forward kinematics and inverse kinematics, what time the manipulator kinematics. We need to see, where are we applying all these things and what is the significance of this in the real industrial application? So, the inverse and forward kinematics and of course the coordinate transformation matrix are widely used in industry, though the user may not be really knowing what is happening inside but as a designer or as an engineer, we should know where actually we are applying all these things. So, to give you a very brief idea of what is all these kinematics doing in the robot or in the industry, let us take a very simple example of a robotic works cell. So, I as I mentioned in one of the classes, a robot alone cannot do any work or we need to have something around the robots as a system in order for the robot to do some meaningful work. And that work environment is called as a robotic works cell. So, a robotic works cell typically consists of a robot and then a sensor, some sensors to sense the presence of objects and then some mechanism to convey things from the robot or to bring something to the robot and take something from the robot or some kind of palate or something to place objects or something should be there around the robots in order for the robot to do some meaningful task. So, in this typical works cell, we are consisting of a robotic inspection, a robotic inspection and sorting of components. So, though it is not a purely robotic inspection per se but there is an inspection of object and the robot is used to sort the object based on whether this it is good quality or a bad quality product, so that is basically the robotic inspection and sorting work cell. Now, this assume that this is the work cell, so you have a robot here a manipulator as you can see here, this is the manipulator with the some 4 degree or 5 degree of whatever the degrees of freedom can imagine and there is the this is the base of the robot and this is considered the work cell and I mean the work area of the robot and we assume that there is a pass bin and a reject bin, this is the pass bin and there is a reject bin. And we assume that there is a conveyor which actually brings the work piece from some other location to here and then the robot picks the object and places in the pass or reject bin with the help of sensors to identify the condition of the object. So, now the robot will be the parts will be coming in this work area and there is a camera here, camera is placed at a fixed location here a camera is placed here if at a fix location and this camera is collecting the information from the object. So, it actually do a visual inspection of the object and if that is perfect in terms of dimension or whatever the checking it is doing and if that is good one, then the camera gives the information to the robot and robot picks the object and place it in the pass bin and if it is not good, it is placed in the reject bin. 
So, that is the robotic cell, a very simple robotic works cell, of course you can have much more complex work cells but to explain the working of this cell I am just showing this. So now, assume that this object is coming here, camera is at this location and his part need not be at the same location every time, it may be in different positions and in different orientations. So, the part may be coming in different maybe something coming like this, but it will stop here and then the camera captures the image and then passes the information. So, here we can see that the camera takes the information from here the position of the object and its orientation as well as the condition of the object good or bad. And that information is passed to the robots and the robot goes there and picks the object and then place it here or here and then comes back. So, this is the work to be done. Now, how is the robot able to do this? Because the camera takes him image of this object or takes the image and then find out its position, the position and orientation of the part with respect to the camera base. So, you can actually get the position of the object, so I say the object is p or the part, the part the position of the part with respect to camera. So, what is the position of the position of the part with camera? So I would say, what is the position of the part with respect to camera can be obtained and then the robot has to this information is passed to the robot base. Then the robot has to calculate what is the position of the part with respect to its base? So, we need to get the position of the part with respect to the base. And then, once we get this position of the what with respect to the base, the robot will find do an inverse calculation to find out how to what should be the joint angle to reach this position and then it places picks the of object from that location, and then it does a another inverse calculation to see how should be the joint angle to reach here or there depending on where actually it is be placed. And then the joints move and then it place here and it comes back. So, here you can see there is a inverse kinematics is involved and the position of the part need to be converted to the robot based frame, so we need to have a coordinate transformation from this base to this base. And to convert that one, we need to know what is the position of the camera with respect to the robot base? And again, we need to convert that transmission need to be found out and accordingly we apply this here. So, assume that the T part to camera is known, so the part to camera so the T part to camera is obtained like this, something like this and then we need to know what is the base to camera, so what is the camera position of the camera with respect to base because this is already fixed, camera is in a fixed position so will get the T base to camera as this one. And then T part to camera is obtained from the camera image frame. Now using these two we need to find out what is a location of part with respect to the base and that can be obtained by taking this t part to basis to camera to base multiplied by part to camera which will give you part to base, that is the transfer transmission part to base. And you will be getting this as the location of part with respect to base. 
So, we can see that by taking camera to base and part to camera, we will be able to get the position of the part with respect to the robot base, though the robot know that the point is the object is at 30 distance in x and 15 distance in y and 1 unit in z axis. Once we know this and orientation is known, then the tool need to be oriented to this orientation and the position has to be reached at this location and that is to be done using an inverse kinematics application. So, we know this subject grasping by the robot, so the tool gripper orientation is given like this and therefore the tool to base will be like this. So, now the position of the object is this but the gripper is in this orientation, so therefore the tool to base transformation is given by this 1 minus 1 minus 1 30, 15, 1. So, this is the tool to base. And if you know the robot configuration, then we will be able to use this robot configuration to find out the inverse of the do a inverse calculation and find out what should be the joint angle theta 1, theta 2, theta 3 etc. to reach this position and get this orientation. And once that is known we will be able to move the robot to this position and then of course can continue this to place here and here. So, this actually shows how we can use the transmission matrices and forward and inverse kinematics to get some work done in the robotic work cell. So, this is a very simple explanation or extra example for use of kinematic analysis in the robotic work cell in order to get some simple work done. So we will stop here, we will continue the discussion in the next class, so till then bye. |
Introduction_to_Robotics | Lecture_103_Path_Planning.txt | hello everyone so this week so far we considered the problem of mobile robot localization we looked at the localization taxonomy we also looked at the markov localization and we saw a little bit on how to do this localization given a feature based map right most of the algorithms are looking at grid based map right now for the next lecture uh we are going to look at uh path planning strategies for robot motion okay so what do these path planning strategies mean the path planning strategies means given a map right and under localization algorithm so figure out a path right that is feasible right so that i can get from some starting location let us say here to some ending location which is say in the top right corner of the world right so there is some obstacle here in the middle so all of this is known to me in the map and my goal is to find a path a feasible path right starting from a given location right and going all the way to the destination which here is in the top right corner right so here we are going to assume that the map is given to us right and whatever discussion i am doing will be in the context of a a a location-based map right like a grid based map or anything but then you could think of other kinds of strategies as well right but the thing with path planning is it is good to know the open space as well as the occupied space in the uh in the world right so in the map so that makes it easier for me to plan a path a feasible path would always take my robot through open space right i mean you must have seen this in c space how you look at avoiding collisions and things like that so feasible path would do that right so i'm going to look at strategies for searching for feasible paths and then it happens that the algorithms that we will look at all actually look at optimal paths basically they return the shortest path to go from one location to another and you can think of other algorithms where in more complex environments where looking at all possible pathways are not really that feasible right even examining examining a good fraction of the pathways are not feasible there are more probabilistic path planning algorithms right uh which with a high probability give you a good path right but they'll return a feasible path right so those are not those are for the future so for things that you could look at later right so we'll start off with a very simple path planning algorithm called breakfast search right breadth first search is an algorithm that was originally designed for traversing all any kind of graph data structure starting from a node right so in our case the graph data comes from the connectivity in the map right suppose this is like my grid map and let us say that each one of these squares is a grid right so i can say that each grid cell right is connected to eight of its neighbor so you could think of this as a graph where each node has eight neighbors so some nodes will have only three neighbors right some will have fewer right so so this was this one has only ah five neighbors right this node has only five neighbors and this node has only three right and so nodes that are here actually have neighbors that are not reachable right so so like that so you can think of this free space here as a graph right and so breadth first search allows you to search for a path on this graph right so you can think of that each node has eight neighbors mostly right and and then i have i have one node which is the start node for 
me which is here and one node that is the end node uh which is there and i'm going to do breakfast search right basically i am going to search this graph till i get a get to the goal right so here is how breadth first search will operate right so given a start vertex v and a graph g so in this case the graph would be a representation of the free space of the ah of the map right right so basically it starts with v right it puts v into some q right so it's some some kind of a data structure q and as long as there are nodes there in queue right as long as q is not empty it's going to take the first element from the queue that's what dq means it will take the first element and if v is the goal then say return hey i found the goal right but if v is not the goal then we look at all edges that go out of v right so what i mean by all edges that go out of v so in this case it will be all the nine all the all the eight eight uh neighbors right so take a node so it'll be all the eight neighbors that are there for that node v okay ah for all the a all the edges that go to v right then if w has so so you call this vertex w right so if w is already not labeled as discovered right then label w as discovered say hey now i have seen w if it's already been seen okay ignore it right but if it is not been seen before then i will say now i have seen w right and i maintain this parent data structure so that i can actually recover the path in the once i reach the goal right and then i will say that the parent of w was v so what does this mean that w was visited for the first time when i expanded the graph from v right and now i put w in the queue and i keep going right so when will q be empty either ah so when will this loop end basically the loop will end if i find the goal or i have explored all the vertices in the graph right there are no more vertices for me to see okay we explored all the vertices in the graph and there are no more vertices for me to see right ah so i will be saying vertex and node all you know exchangeably to denote a cell in the in the graph right so so each each cell here each cell in the free space becomes a node in the graph and if there are the adjacent cells to that or connected by an edge in the graph right so ah i will say sometimes node i will say vertex that's a common terminology in a computer science so that i will go back and forth right so is it good so let us look at pictorially how this is going to look like so here is my graph so you can think of this as a place where each cell has either two or three neighbors right ah one two or three neighbors right so this is a map in which each cell has one two or three neighbors i am starting from the cell marked green right so that is my start node right then what i do is ah i first visit the node to the right right and then i mark that as visited right then i'll add these two nodes to the cube right so first i'm sorry when i visit this i'll add this and this to the queue let me let me pick up so when i first visit this node i am going to put this in the queue and this in the queue book and i am going to put this in the queue and this in the queue these two nodes go into the queue right then what i do i look at the first node in the queue which is this right so i'll pick up so i'll pick up this node in the queue right that's the first node in the queue and then i will add this node and this node to the queue but remember this is already there in the queue so that is the number one right so this will be number one in the queue this 
will be two this will be three and this will be three okay so i have one i have two and i have three so what will be the next node that i will take up for expansion exactly so the note that i marked 1 right now what will happen in the queue i will have this is 1 this will be 2 and that will be 3 in the queue so the next node i will take up is the 1 on the bottom left right so the the node that i consider will be that and now i don't add anything more to the queue so my q will still have this as 1 and that as 2 i will continue with the 1 right and that is not going to change much right and then i'll go to the next one now what happens is the queue was it would have become empty but i am going to add these two to the queue now and therefore i will continue i'll finish that right and if any of these had been the goal i would have stopped much before right so here is just an illustration for how breadth first search will look at every node in the graph right and if there is a goal somewhere in between before this i will find it and it is guaranteed as long as the edges don't have any weight as long as the edges are all having the same weight right either they don't have any weight or is another way of saying it is they all have the same weight right then i am guaranteed to get to the get the shortest path to the goal guarantee to get the shortest path to the goal so how is that so if suppose you remember whenever i whenever i mark a node as visited right whenever i mark a node in green i'll add a parent to that node right so what will be the parent of this node the parent of this node is that right and what will be the parent of that node that is that suppose i want to reach from here to here now what i want to look at is i'll go to the the target node i'll keep looking at the parent of the parent of the parent of the node until i reach the source node so that is the way i will reconstruct the path right suppose i want the path from here right to here from here right so i will keep going and from this one i look at the parent which will be this and for this i look at the parent which will be this for this i look at the parent ah that is the source node that i wanted right so given a source node i can find the shortest path to any target node by just following the parent marking that i did in my graph okay other than in my ah data structure so that's basically how this algorithm will operate right and so it looks at every vertex and every edge at least once right that doesn't look at it more than once right every vertex it looks at only once and and every edge it looks at only once and therefore the time complexity is order of v plus e right and it it needs to store uh uh the pointers and other things right the parent pointer and other things for every every vertex like whether it has been visited and also the parent so therefore it requires storage of the order of v right and so bfs is a complete method right so a method is described as complete right if it is guaranteed to find a goal state if the path exists right if there is a path from the start to start state to the goal state then a complete algorithm is always guaranteed to find it and the breadth first search is complete right but an equivalent algorithm called depth first search where i just keep going so if i find a node has a child i'll just go to that child first right so i will not go to the other nodes in the queue so if the node has a child i'll keep going to the child and then after i finish with the child i'll come back and 
i'll see if there is another child to the node right if you think of the graph this way so instead of instead of doing so if you think of breakfast search it went in this order right depth first search would go in this order so this this node will be looked at right will be marked as visited only after this node is complete in fact this node will be mapped as visited only after this node is completed so it so death research could get stuck exploring a wrong path if it is arbitrarily deep you know when if if the path is infinitely long right then it will never come back and find a goal that is here but first search will find a goal that it's a finite depth even if the graph is infinitely large so if there exists a path breakfast search will find it so that is why what we mean by saying breadth first search is a complete algorithm but the problem with breadth first searches it expands a lot of nodes it visits a lot of nodes looks at the neighbors of a lot of nodes before it moves on to the next right it expands a lot of nodes right for every node it expands all the children so it looks at all the children puts them in the queue before it moves on to the next level right that so for every level it is going to expand the children of all the nodes at the previous level even if it is very clear that it is not going to give a short path right so this is a little tricky especially when i start having weights on the edges because all edges are not the same anymore and i would really like to follow edges that are more likely to give me optimal paths and keep the edges that are less likely to give me optic optimal paths for later right so for this kind of weighted graphs right where the edges have some cost right so people look at other algorithms so the most popular of this is something called the dijkstra's algorithm so the dijkstra's algorithm ah is essentially it finds the shortest path between any two vertices a and b in a graph ah and um this works on uh graphs that have weights right i mean i mean from uh if you want to be more picky about it it works when graphs have positive weights and not negative weights but then we are talking about path planning here right so paths have positive cost they are not going to have negative cost for us and therefore uh dijkstra's algorithm works perfectly all right right and so for among all the vertices that i haven't visited so far right the extras picks the vertex with the lowest distance right and calculates the distance from that vertex to each of the unvisited neighbors of that vertex right and updates the neighbor's distance if it's smaller so we'll just see how it works right so just keep look at the algorithm here at at any point of time ah we start off with the full graph right and so we have the uh the vector except q uh which has to be examined right just like we had the q earlier for bfs we have the q right and for every vertex we start off by saying the distance from the source to that vertex is infinity because i don't know right i have not found out the distance from the source to that vertex so it might as well be infinity so i am just putting it at a very large distance right infinity means a very large number i mean of course i can't represent infinity itself right it means that probably the largest number that i can represent in my computer or whatever right and then i say that the previous node in the graph like we had the parent right in bfs the previous thing is now starts off with undefined because i don't know how to reach to v right 
and now i put all the vertices in q so i have i need all the vertices in q and then at the end of it the distance to the source is 0 because i know how to get to the source i am starting there right and then just like we had in bfs i'll say with q is not empty right so i'll pick the vertex in q right that has the minimum distance u remember what is distance the distance u is the distance of the vertex u from the source right how much what is the cost of the path starting from source to get to u is what i mark as distance u now notice that at any point in the algorithm the distance u might not be the correct shortest distance but at the end of the algorithm the distance you will be the shortest distance from the source okay so now i so what do i find now first i pick the u that has the shortest distance so far that i have estimated from the source right and they'll take the u i'll examine this you know now u is going to become visited right so what i'll do is i'll take the u from q right and for each neighbor of u right so i look at how f how far i have come from the source to reach you right and how much more i have to go to reach v from u okay so i'll compute that so i call that as the current alternate distance okay if the alternate distance that i have computed is already is less than the distance i already have stored for v remember i start off with infinity but it's possible that from some other u prime i would have visited v already right from some other u prime i could have found out a short shorter path than infinity to v right but now what is going to happen is if this alternate length i computed is shorter than the original length i had for v original distance i had for v then i will change that i will also change the parent of v to u because this is the one that gives me the shortest distance right and then finally after this algorithm completes i'll return distance and that gives me the shortest path length shortest path length from every from the source node to every vertex in the graph right every vertex in the graph right that is what distance gives me and what does previous give me for every node in the graph right previous tells me what is the previous node in the shortest path from the source to that node right suppose i want to look at previous of you previous of you will tell me what is the shortest path i mean what is the node just before you in the shortest path from the source to you okay so that's basically uh what these things are so again this little confusing so let us look at it in a in a pictorial form right so i start off with a as my source right so the vertices are labeled 1 2 3 4 6 right and i start with 1 as my source vertex right and 5 as my destination vertex remember i was trying to get from a to b right so 1 is my source 5 is my destination that i really like to get right in fact i can run it on the whole graph so the extras actually gives me it's called the single source shortest path problem so i start off from 1 it will give me the shortest path to all the vertices if i run the algorithm completely right and remember i have the full graph right so these numbers right now that you see are the edge weights so so 7 here means to go from 1 to 2 i have a weight of 7 that means i have to you know spend 7 units whatever it is i have to spend 7 units to go from 1 to 2 and likewise to go from 2 to 4 i have to spend 15 units right so i really like to get to 5 in the shortest cost possible right the shortest cost possible and it's not immediately just looking 
at it in a glance it's not clear what is the shortest path and so we have to do some computation right so now i start off with 1 right and then what i do is i look at the neighbors right so what are the neighbors 2 3 and 6 right so i'll update the distance of 2 to 7. right i'll update the distance of 3 to 9 and it'll update the distance of 6 to 14 because that is the path clause cost from going from here to there and the previous cos was infinity therefore i can replace it and for all of these right two three and six i'll say the previous node is one okay the previous will be one okay now which is the node i should be looking at next the one that has the least cost so far right so which will be 2 right so i look at 2 next then i look at the distance from 2 to 3 that is 10 right so the total here is 7 plus 10 so the total to get to 3 from 2 through 2 is 17 right so 17 is more than 9 therefore i will keep the 9 so the previous of 3 is still 1 the previous of 2 is 1 right but i will keep the distance here as 9 right and then i look at next neighbor of 2 i look at is 4 so i have a 7 here that i have a 15 here so the the distance of 4 is going to become 22 and the previous of 4 will be 2 right remember that so previous of 2 3 and 6 are all still 1 the previous of 4 is going to become 2. okay so now and the distance of 4 is 22 which is 7 plus 50. okay great now i am done with 2 right so what is the next node i should consider 3 right because 2 has a cost of 22 3 has a distance of 9 and 6 has a distance of 14. so next node i will consider is 3 so 1 and 2 are already been considered so i will already completed so i will not look at 1 and 2 i look at only 4 as a possibility right so remember 4 has a distance of 22 already now if i come to 4 through 3 so i will have a distance of 9 plus 11 right which is 20 which is less than 22 therefore i will replace the distance of 4 as 20 and what other change i will make i'll so previous of 4 earlier was 2 now previous of 4 will become 3 okay so previous of 2 3 and 6 are all 1 and now previous of 4 was 2 earlier now it will become 3 because that is the one i have used for getting the 20 computation okay next what will i do i look at the neighbor 6 right so what do you think is going to happen so for 6 it is 9 plus 2 which is 11 right and earlier we had a distance of 14 therefore i'll replace that for 6 as well so i'll make it 11 and not only that the previous of 6 from 1 will now become 3. it turns out that getting from one to six through three is shorter than taking the path directly from one to six whatever could be the reason right so there could be ah slightly uh some obstacles here so where i have to spend more energy to get through while going from going via one might going by three might be a easier path for the robot to move right so whatever reason so it's just an arbitrary graph here so i am not assuming any kind of triangle inequality or anything to hold right that could potentially be like a small ramp here which might actually take more energy for me to consume while going through 3 might avoid that ramp okay so now 3 is done which is the node i should look at next which is 6 and so the neighbor of 6 that i will consider is 5 right and because other neighbors of 6 are all done so when you look at the neighbor of 6 so going from through 6 to 5. 
so 6 already has a distance of 11 right so i don't have to go back in the past and examine i already know how much distance i have to travel to get to 6 so it is 11 and i have to travel a further 9 to get to 5. so the total distance to get to 5 is 20 right so 6 is done and we are actually overdone completely right because the other node i have to look at is 20 is 4 right other node is 5 but 5 is what i am interested in right so i have reached 5 and there is no way that i can get to 20 i mean i can get to 4 better than 20 because getting to 5 already needs 20 and vice versa right so i can't get a cost of 20 better than 20 to reach 5 because to reach 4 itself i need 20 right now this is done and so what is the best path to get to 5 remember that my parent of 5 right or the previous of 5 is going to be 6 because that is the one that i used here so from 5 i'll trace the path back right the previous of 5 is six previous of six is three previous of three is one so the path is one three six five is the shortest path to reach five and likewise one three four is the shortest path to each four and others others are directly connected to one so i've basically computed the shortest path to reach every one of the nodes here in the graph but then i don't have to look at all the nodes in the graph if i had a specific destination in mind if i was my destination right and suppose there is a whole graph out here right i do not have to look at it because i have reached my destinations i do not have to examine any further neighbors of 4 primarily because 4 has a cost of 20 right so i don't have to examine anything further okay so let's quickly uh look at illustration of this right so i'm waiting for the animation to restart so that we can start from there so cut off all this part right so notice that we start at this corner right so then we start examining all these nodes so all the red ones are the ones that are done and the blue ones are the ones that there are yet to be visited right and then we continue here and then when you hit the obstacle you'll see that all of these nodes i don't have any further neighbors to examine so they're all being stored and then these are all having so the color coding is basically the the distance right and so the the the redder you are the closer you are to the goal and greener you are farther away and then eventually you will reach the uh you will reach the goal right so this is essentially how the dijkstra's algorithm is going to work right in fact here since the cost is all uniform is very similar to a star search but this is how the extras i'm sorry since the costs are all uniform it's very similar to breakfast search but in the case of uniform cause dijkstra's could potentially work very similar to breadth first search okay so if you think about how long dijkstra's algorithm is going to take if i am using only you know regular arrays to implement this it's going to take order of v squared because every time i have to go through it to find out what is the what is the minimum cost vertex right or minimum distance vertex every time so that i can expand but i can be more efficient if you use something called the min key that's always going to return to me the minimum cost vertex with the with the small cost right the constant cost in which case i can be more efficient here right so we do not have to really worry about analyzing the algorithm for full efficiency and of course the space complexity is still order of v the same things that we stored earlier you have to store 
now right whether you have visited node and what is the uh previous note right so so we still have to store that so the challenge with dijkstra's algorithm is that it always looks at how far you have come so far right on the other hand if i have knowledge about how far i have to go also some kind of an estimate of how much more distance i have to travel maybe maybe i can be little bit more efficient in my search right so the idea of being able to use this kind of you know rough estimates guess guesses as to how far more i have to go and so on so forth or what are sometimes called as heuristics right is what we use in the next algorithm we look at which is called a star search right so not only it looks at the cost of the path so far right which is which will store in a variable called g but it also uses what is called a heuristic to estimate the rest of the cost needed to reach your goal right so this heuristic we will call as h right and so at any point of time when i want to select a node right a star selects the node that has the smallest g of n plus h of n right notice that when you start at the source right you have no g of n is zero h of n is the entire distance to the goal right when you are at the goal h of n is zero because you reach the goal g of n is the distance from the source to the goal right at any point in between you would have had some distance you have traveled to get there which is very similar to what the dijkstra's algorithm has but then you also have some guess for what is the rest of the distance you need to go and a star always uses both right it tries to select a node n right that minimizes g n plus h right in dijkstra's what did we do in some sense we are picking a node that has the least g of n right here we look at g of n plus h of n and pick the node that has the least g f n plus h of n right and notice that the heuristic function is problem specific there's nothing very generic about i can't just say okay here is a here is a heuristic function that will work all the time right and the heuristic function is also goal specific sometimes right because if i change the goal from the top right corner to the top left corner of the room i might i have to use a different heuristic right and we would like heuristic functions which are called admissible right so admissible is the the t term here so we would like heuristic functions that are admissible right so admissible functions are those uh that never overestimate the actual cost of getting to the goal it should not say okay if it's going to take you say 10 10 units effort to get to the goal your heuristic it's okay if the heuristic says it takes nine units or even okay if the heuristic says it takes five units right the heuristic should never say it will take 12 units right if the true cost is 10 the heuristic should never say 12. 
it's okay if the heuristic says 5 right because that means you will explore it anyway because you think it's better path than what it actually is but if you think it's worse than what it actually is there's a small possibility that you might never explore it and therefore you might not get a the least cost path you might not get optimal path to the goal right so since you don't want to miss the optimal path to the goal right we assume that your heuristic is admissible right so let us look at algorithm the algorithm looks a little complex but it is not right it is very similar to what you had in the in the dijkstra's algorithm case except that instead of looking at the g score which is essentially the distance from the start to the node i look at the the g score plus the h of the neighbor like the g score of the neighbor plus the h of the neighbor which is essentially the estimate of the distance to go the h is initialized a priori right h is something that is given to you the function h is given to you as part of the algorithm specification so all you need to know is the start on the goal and the whole graph that you are traversing right and the most of the paths are similar right so i start off with saying that all the distance is infinity right and then i start off by saying all the f score is infinity right and then i say that all the f score has to have so the the f score of the start state is the h right remember i was telling you that uh the the start this uh start state it basically is all the distance to go and the goal state uh is is got zero heuristic but that comes from the thing so and i also start off with my g score for the start to be 0. why is that because right i mean i'm already there at the start state right so there is no g score g remember g is the distance to the node and h is the distance from the node to the goal right and then basically we just keep updating the we pick the node that has the least g score plus h core right we pick the node that has at least g score plus h core and then use that for expanding our search right so just look at this here how it is going to work so here is a simple graph right here is a simple graph and i have given the h score for each node here in brackets right so h of a is 4 h of d is 4.5 this is 2 here is 4 and here is 2. notice that this is essentially a over estimate right so if you think of this right so if the cost is actually 5. 
so the overestimate part we have to we have to delete okay so so we are given the h scores here in in brackets here so remember that the h is actually a underestimate only then it is an admissible heuristic and if you think about it so the distance from d to the goal is actually five if you think on the graph right and we do not know that yet if i know what the shortest path is easy but then my heuristic says it's only 4.5 and likewise the distance from a to the goal is certainly much larger than 4 but it's ok because it's still an admissible heuristic right so and then these brown numbers that you see here are the actual distances of these edges these are the weights of the edges and so i start off at node s right and i'm going to compute the f score for both a and d right so the g score for a is 1.5 and the h is 4 and likewise the g score for d is 2 and the h is 4.5 so i'll take the sum right 1.5 plus 4 gives me 5.5 and 2 plus 4.5 gives me 6.5 and therefore i am going to pick a as the next node to expand right now i will expand a now what are the nodes that i have to consider i have to look at b right what will be my g score for b the g score for b is 3.5 right so g score for b is 3.5 so which is basically the g of a plus the distance from a to b which is 3.5 and my h core of b is 2 so the sum of these 2 is 5.5 so my f score of b is 3.5 plus 2 which is 5.5 and remember my f square of d was 2 plus 4.5 therefore it is still 6.5 right so um okay that's a bad thing so now what i'm going to do see notice that my admissible heuristic on this path right is lot lot lesser so i'm just being more optimistic about the distance from a to the goal from b to the goal uh while i'm being more realistic right so this is the correct h score this is the correct distance and this is more or less correct so between five to four point five right so the guys who are more optimistic i'm exploring them first but i'll eventually get to d as well right so what i'm going to do next is explore b now the two options that i have to consider are c and d right so what is the g score for c it's the g of b which is 3.5 plus 3. so 6.5 is the g score for c and the h score for c is 4 right so i because i'm right next to the goal right so i can always initialize it to the correct distance right therefore my f of c is 6.5 which is the g score plus 4 which is the h score it gives me a total of 10.5 and my d was 6.5 ah so i'm going to finally expand my d so i am not expanding my c i am going to finally expand my d and therefore what do i get my new competitors are e and c c we already know is 10.5 what about e e is the g score of d plus the distance which is 3 so 2 plus 3 which is 5. so g score of e is 5 and the h score of e is 2 so the f score is 5 plus 2 this is 7 okay and therefore i'm going to expand e next right 7 versus 10.5 i'm going to expand e next and so e's g score is 5 and the distance to g is 7 of course the h score of g is 0. 
so i get 7 and i have reached g right so and now i'll backtrace the parent node remember we had the parent so the parent for g was e the parent for e was d the parent for d was yes and therefore that's my shortest path okay it's a very simple like simple example so that you can see what is happening right and notice that we never expanded c if i had done breadth first i would actually expanded c also before getting to g right but i never expand ah c right and also the problem here is because my heuristic function was a little off right if i had been a little bit more clever about this heuristic right this heuristic instead of being 4 let's say it had been 7 right which is more more likely right already it's even 7 is an underestimate because the distance from a to g is 9 so even if my heuristic had been 7 i would have gone down this path right because with this path uh my total distance itself is seven and here uh my hq my my f score would have been eight point five so i wouldn't i wouldn't have expanded this whole path right because my heuristic was a little off i ended up doing the this path a lot right before i got down to the correct path to get to the goal so we'll see how this works in this particular example again waiting for the start just okay so here the heuristic i am using is the direct distance to the goal like like just draw a diagonal line whatever is the shortest path to the goal assuming there is no obstacle right and you can see that when i started here it directly expanded that line because this is the shortest path to the goal and my heuristic says that hey this is going to take you to the goal very quickly therefore i only expanded that path right and now after that i'm just going to expand this whole region behind the obstacle but once i got to a point notice here now i am expanding the whole region ah that's blocked by the obstacle because it's still trying to go to the shortest path to the goal then once i reach a point where i get a direct line of sight to the goal right there is no obstacle here anymore right it will now only go through this path that path stops and only this is expanded and then i finally get to the goal right and this give this is the path to the goal okay so a large fraction if you remember in the dijkstras we had expanded almost up till here right in dijkstra's we had expanded a region almost up till here before we found the path shortest path to the goal and in a star we are finding it while a large number of nodes are still not expanded so it turns out that this is actually a reasonable heuristic to have and quite often uh that is what the heuristic that we will employ in the path planning problems is essentially the distance as the crow flies to the obstacle and to the i mean to the target and that will always be a admissible heuristic okay right and the time complexity of a star really depends on the heuristic how hard is the heuristic to compute right and and the the point is if b is the branching factor right branching factor is the number of neighbors that we typically on an average have so in our case the number of neighbors was eight right so the branching factor was eight and on the shortest path to the goal let us say is 10 steps right if i am doing a dumb breath first search i will really be expanding something of the order of 8 power 10 the huge number of nodes right but then if i'm using a star i can prune away a large fraction of those because uh they are too far away on the by as measured by the heuristic so anything that's 
going to take me farther away from the goal i will ignore right and therefore something like a star where the heuristic allows me to uh you know kind of focus my search is a much more efficient way of finding paths right in fact if you in in practical algorithms right the practical implementations a star and for most most uh you know simple robot navigation problems ah so these were the these are the kind of the workhorses like bfs a star and dijkstras are the workhorse algorithms for path planning but in more complex environments people prefer what are called probabilistic path planning algorithms where you do small amounts of sample of the of the world to figure out whether you are going the right direction and these return feasible paths and might be optimal but they don't typically don't guarantee optimality they return feasible paths but they do so much much more efficiently than something like a star or dijkstras so we have looked at multiple topics here but we have covered that covered these topics at a very very high level so starting from the notion of state representations and state estimation bayesian filters with different variants of base filters looking at measurement models motion models map building localization and path planning and we have again touched upon these topics just to give you a flavor of uh what are the various things that go into building good robotic systems right and this in conjunction with whatever else you have seen already in the course should be should be enough to get you started right but no means sufficient uh for you to um say that i know how to do robotics now right and uh largely the way to way forward is to get your hands dirty i mean start building small robot models and try to play around with those okay thank you |
Introduction_to_Robotics | Lecture_28_Forward_Kinematics.txt | So, in the last few classes we discussed about the DH parameters and then saw how do we actually assign coordinate frames and then using coordinate frame how do you, how do you find out the DH parameters basically the four parameters theta, a, d and alpha. So, today we will see how do we actually use this one for developing the forward kinematics relationship for an industrial manipulator. So, the purpose of forward kinematics is basically to map the coordinate frame at the tooltip to the base frame here. So, if any point represented with respect to the tool frame, you should be able to represent that with respect to a base frame and there will be many joints in between the manipulator tooltip and the base frame. So, how do we use the DH parameters and then using the DH parameters, how do we develop a matrix called Arm matrix to map the coordinate frames and then use these coordinate frame to get the forward relationship is the question that we are trying to address. So, the Arm matrix is defined as a homogeneous matrix that maps frame k to k minus 1 coordinates. So you have a kth frame, you have a kth frame and a k minus 1 frame. How do we actually map this two frames using a matrix is and that matrix is known as the Arm matrix. So, we use a homogeneous transformation matrix, which is a 4 by 4 matrix which includes the all the joint parameters and the link parameters as well as the DH parameters, what will be using for getting the transformation and we saw that we can use the four parameters that is d, theta d, a and alpha. So, these are the four parameters, that we are, that is of interest between two coordinate frames. We will see how can we actually develop this arm matrix using this formula. So, assume that this is the base coordinate frame, and then you have another coordinate frame at this point given by XYZ coordinate here. So, now here you can see this is the Z axis, this is the X axis and this is the Y axis. So this coordinate frame, base coordinate frame and this two coordinate frames we want to map or we have to find the transformation how this coordinate frame is transformed to this coordinate frame using this parameters. So, how do we actually represent this. So, what we can do is we can first look at, assume that this two coordinate frames are initially aligned together, that is this X0, Y0, Z0 and XYZ are initially aligned, so initially they were actually the same XYZ here. Now, assume that this coordinate frame rotated by an angle theta, assume that this XYZ rotated by an angle theta with respect to the Z0 axis. So, if it is rotated, then you will see that this will become your X, the rotated X and it is rotated with respect to Z axis. So, Z remains same and then your Y will be something like this. That is the first parameter theta. You can see that the transformation of this two coordinate frames, first it is rotated with respect to Z0 axis by an angle theta and then it has translated along the Z axis by a distance d. So, it reaches here. So, first it rotated by an angle theta. So by an angle theta it rotated X0 to X, with respect to Z0 it rotate by an angle theta and then it translated by an angle d to reach here and then it translated again along its X axis. So, this frame I am just showing it like this because, it will be along the link, so it is translated along X by an angle by a distance a and then it rotated by an angle alpha with respect to X axis. 
So it rotated by an angle alpha, so the Z axis has actually come here and you get this frame here. So four transformations take place for this coordinate frame to reach here with this orientation: it rotates by an angle theta about the Z0 axis, then translates along the Z axis by d, then translates along the X axis by a distance a, and then rotates about X by an angle alpha to get the final Z axis. By these four individual transformations this coordinate frame is carried onto that coordinate frame, so the mapping between the two frames can be represented using these four transformations. That is why we need the DH parameters: to describe how this coordinate transformation happens from the original coordinate frame. So this is what we are trying to understand here. We have this transformation; it moves and finally reaches here. So: rotate Lk minus 1 about Zk minus 1 by the angle theta k, translate along Zk minus 1 by dk, then translate along Xk minus 1 by ak, and then rotate Lk minus 1 about Xk minus 1 by alpha k. All of these are with respect to its own frame — the transformations are with respect to the current frame, not the base frame — and therefore, to obtain the full transformation as a composite transformation, what we need to do is a post-multiplication. So you will be getting this k to k minus 1 transformation, that is, how frames k minus 1 and k are related, or how any point expressed in frame k can be expressed in frame k minus 1, by using this transformation k to k minus 1, which is the product R, T, T and R (rotation, translation, translation, rotation). These are the four individual homogeneous transformations we use to get the composite transformation from frame k to frame k minus 1. This is known as the Arm matrix, the homogeneous transformation matrix that maps the kth frame into the k minus 1 frame — from the source frame to the destination frame. So this is the coordinate transformation matrix. Between any two adjacent coordinate frames of the manipulator we will be able to find a transformation matrix with these four parameters included. Out of the four, only one will be a variable, so finally you will be getting a matrix with one variable, and that relates coordinate frame k to frame k minus 1. This is basically the Arm matrix, the homogeneous transformation matrix between two adjacent coordinate frames of the manipulator. So how do we get it? First you write down the individual matrices: you know how to get the fundamental homogeneous rotation matrix about the Z axis, then the translation matrix along Z, the translation matrix along X, and the rotation matrix about the X axis. All four fundamental rotation and translation matrices are known; multiply them and you get the final transformation matrix, the k to k minus 1 transformation. This is the final DH matrix which relates adjacent coordinate frames through the parameters theta, alpha, a and d. You can see it is a function of all these parameters, and k to k minus 1 is obtained by this relationship; and we saw that if we have multiple coordinate frames, you will be able to get theta 1 to theta n.
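Written out, that product is the standard closed-form Denavit–Hartenberg arm matrix. The slide itself is not reproduced in the transcript, so the matrix below is reconstructed from the composition just described:

```latex
T^{\,k-1}_{k} =
\begin{bmatrix}
\cos\theta_k & -\sin\theta_k\cos\alpha_k &  \sin\theta_k\sin\alpha_k & a_k\cos\theta_k \\
\sin\theta_k &  \cos\theta_k\cos\alpha_k & -\cos\theta_k\sin\alpha_k & a_k\sin\theta_k \\
0            &  \sin\alpha_k             &  \cos\alpha_k             & d_k \\
0            &  0                        &  0                        & 1
\end{bmatrix}
```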
Similarly, a1 to an, etc, for any and manipulated frame and therefore we will be knowing all this information and therefore, we can actually find out T1 to 0, that is if you have a manipulator and the base frame is 0, the next joint is 1 then we can find out the transformation from the first joint to the base frame by as a T1 0 and similarly, we can actually get T2 1, T3 2, etc., Tn to n minus 1 we will get because we know these parameters and therefore, we will be able to get these transformations, between adjacent frames you will be able to get as a transformation matrix n to n minus1. So, the only thing what you need to remember is this matrix. If you cannot remember then you need to go to the fundamental matrices and then multiply it, otherwise if you can remember this then no need to worry. Just what we need in forward kinematics is only this particular matrix. Once we know this matrix everything is done, because it is only substitution of the DH parameters into this matrix then you get the transformation. Any questions? So, now you know about coordinate assignment, you know how to get DH parameters and we know how to, if you know DH parameters, how do we get that transformation between two adjacent coordinate frames. So, that is the basic requirement for forward kinematics. Now the forward kinematics is basically. So, if you want to do the inverse also you can get. So, if you want to k minus 1 to k if you are interested in getting k minus 1 to k, it is basically an inverse of this one and the inverse of homogeneous transformation matrix can be obtained by taking this R transpose and minus R transpose P that is what we saw in the previous class. So, the same application can apply here and you will be getting it as inverse transformation also k minus 1 to k also we can get. That is known as the inverse transformation. Again, no need to, if you know this then you can actually get it directly by taking the transpose, R transpose n minus R transpose T, you will be getting this vector here. So, now let us see how do we get the direct kinematics or the forward kinematics of the manipulator. So, as I mentioned the point of interest is this point that is the tool tip. We are interested in knowing what is happening to this coordinate frame. Because we assume that the coordinate frame attached to the tooltip is this one and whatever happens to tooltip the coordinate frame will be rotating or translating. So, we will be able to get this point and the orientation of that particular frame by getting a relationship. So, we are interested in knowing this vector P. So, that is how far this tooltip is from the base frame that is the position of this tooltip. So, the position vector is of interest to us and the orientation of this frame with respect to the base frame. So, once we have this, we have the complete definition of the tooltip with respect to the base frame. We want to know the position vector and the orientation of the tool frame, then we have the complete information of the tool with respect to the base frame. So, we are interested in knowing this information, what is Px, Py, Pz that is this tooltip with respect to the base frame. So, that is the interest and then what is the orientation of this frame with respect to this frame. So, assume that this is Zn, this is Zn. So, we are interested in knowing how this Z0, Zn and X0, Xn and Y0 and Yn are aligned. 
So, if they are aligned you will be getting on 100 010 001 if they are aligned, otherwise you will be getting this as a normal sliding and approach vectors corresponding to the orientation of the tooltip. So, this is finally what we are interested in. We want to know what is the position of the tool and its orientation with respect to the base frame, whatever may be the joint position. As a function of this joint variables we want to get this Px, Py and Pz. This is our what we are looking for. So, how do we get this, it is by looking at that joint variables q1 to qn. So, I define the joint variables as q in this case, because it can be theta or d any one of this can be a variable. So, I can take as a q1. So, we have n variables, because an n degree of freedom robot, have got n variables. So, given the values of joint variables, so the end-effector location, that is the position and orientation in the Cartesian space of the robot base frame. So, effectively we are telling that we want to get the transformation n to 0, what is this transformation is of our interest. Here, this is the nth frame, this is your Ln and this is your L0. So we are interested to know what is the transformation of nth frame to 0th frame. So, this is the transformation we are interested and once we know this transformation, we can easily find out what is the position because the this one gives you the position and this gives you the orientation. Now, the only catch is that, in between you have many points and there are many variables. So, we need to find out how n to 0, how can we calculate n to 0 by using the individual transformation. So, now, we have 1, 2, 3, etc to n. We want to get Tn 0 and we can easily find out what is T1 0, what is T2 1, what is T3 2 can be obtained from individual transformation. So, if we know the individual transformation, we can use this individual transformation to get Tn 0 by simply multiplying the transformation matrices and finding out what is the relationship. So, we know T1 0, so I will write it as T1 0 multiplied by T2 1 multiplied by T3 2 n minus 1 gives me Tn 0, that is the only thing we need to know. So, if it a n degree of freedom joint and if you can assign coordinate frame, find out all of these parameters, you can find out the new the transformation between adjacent coordinate frames and using this individual transformation, you can find out the final transformation T into 0 using this is a relationship 1, 2, 3, etc. So this is the, and once you have this n 0, you will be having the forward relationship. That actually this matrix will tell you what is the relationship of this coordinate frame to this frame and we know that each one of this is a function of the joint variable q. So, this would be a function of not only joint variables, all the four parameters, this will be four parameters, this will be having the four parameters like this. So, this function finally this will be an expression Tn 0 will be an expression, which has got all these parameters in that expression and out of this only one of this will be variable. So, you will be having 6 variables and you will be having all of this as constant. Finally, you will be getting Px as a function of all these parameters, you will get Py as a function of all these, Pz as a function of all this. This is the way how you begin forward relationship, the position and orientation of the tooltip as a function of the joint parameters and the link parameters for the DH parameters of the kinematics. So, Tn q will be 1 0, 2 0 etc. 
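A minimal sketch of that chaining step in code is shown below. The DH-table layout (one row of fixed d, a, alpha plus a joint-type flag per joint) is my own choice of bookkeeping, not a prescription from the lecture, and the closed-form helper is repeated here so the snippet runs on its own.

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    # Closed-form DH matrix (equivalent to multiplying the four elementary transforms).
    ct, st, ca, sa = np.cos(theta), np.sin(theta), np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [ 0,       sa,       ca,      d],
                     [ 0,        0,        0,      1]])

def forward_kinematics(dh_table, q):
    """Compose T_n^0 = T_1^0 @ T_2^1 @ ... @ T_n^(n-1).

    dh_table: list of (d, a, alpha, joint_type) per joint,
              joint_type 'R' -> theta = q_i is the variable,
              joint_type 'P' -> d = q_i is the variable (theta offset assumed 0 here).
    q:        list/tuple of joint variables.
    """
    T = np.eye(4)
    for (d, a, alpha, jtype), qi in zip(dh_table, q):
        if jtype == 'R':
            T = T @ dh_transform(qi, d, a, alpha)
        else:
            T = T @ dh_transform(0.0, qi, a, alpha)
    return T   # 4x4: rotation block gives orientation, last column gives tool position
```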
So, now we are using the tool to base. So, this is tool and this is base, so 1 to base 2 to 1, 3 to 2, etc, n minus 1 will be the forward relationship for the manipulator. That is if you know all the DH parameters, you will be able to find the relation between the 2 lambda base as the 4 by 4 matrix representations. Any questions? So, since I mentioned that we can have the first three degrees of freedom for positioning and the next three degrees of freedom for orientation, you can actually represent this also in this way, between wrist and base and the tool and wrist. Only for convenience because the positioning is done by the first 3 joints and the orientation is done by the next three joints, we can represent this as two transformations, wrist to base and the tool to wrist, only for convenience, otherwise, it is the same. Only they have up to three we are bifurcating and then making it as two. So, tool to the wrist suppose this is the wrist point, then this will be three and then the first the other three will be deciding the wrist position. So, the wrist position is decided by the first three joints and orientation of the tool is decided by the next three joints. So, finally the equation will be like this, the tool to base you will be getting it as say 4 by 4 matrix with this as the position vector and this is the orientation matrix. So, your P will be this and your orientation matrix will be like this. So, that will actually having this three vectors, this three vectors will be the approach, normal, sliding and approach vectors. So, this would be the normal, sliding and approach vectors. You will be seeing this as three vectors and that will be one first will be the normal vector, sliding vector and approach vector and you will be having this three positions Px, Py and Pz. So, that gives you see the P and then gives you the orientation of the tool frame with respect to the base frame. So, this is what we will be getting at the end of this transformation we are doing individual transformation between coordinate frames and finally finding out the transformation from base to the tool. We will take one or two examples to make sure that you understand this. So, let us take an example of Alpha two Arm which we already discussed in the class. How to get the DH parameter? So, we will not go into that stage now, we will see how to get the transformation matrix and how do we represent the position of the tool. So we are interested in this tool position and orientation. So, we will be assigning a coordinate frame here as you can see there and then you will be having coordinate frame here and a coordinate frame here. So, this are the two coordinates frame will be assigned. We want to find out what is the relation between this coordinate frame and this coordinate frame as a function of all these joint parameters. Now we have different joint and link parameters, we have multiple parameters, we need to represent the relationship between this two coordinate frame as a function of all these parameters. Also, that is the forward relationship. So, the first step in forward relationship is assigning the coordinate frame, we saw how to do this. So, you assign first coordinate frame here, next one here, then next one here, here and the next origin will be the same place, next coordinate frame and finally your fifth coordinate frame will be at the tooltip. We are interested in finding out what is T5 0. 
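Before working through the Alpha II example, here is a tiny sketch of how one might read off the position vector and the normal, sliding and approach vectors from such a 4-by-4 tool-to-base matrix in code; the function and variable names are mine.

```python
import numpy as np

def decompose_pose(T):
    """Split a 4x4 homogeneous transform into position and n/s/a axes."""
    p = T[:3, 3]                          # [Px, Py, Pz] of the tool in the base frame
    R = T[:3, :3]                         # 3x3 orientation block
    n, s, a = R[:, 0], R[:, 1], R[:, 2]   # normal, sliding, approach columns
    return p, n, s, a

p, n, s, a = decompose_pose(np.eye(4))    # trivial check: tool frame aligned with base
```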
So, this is what we are interested in and that should be as a function of all these parameters, this is what we need to develop as a forward relationship. So, we assign the coordinate frame and then get the DH parameters based on what we discuss in our previous class. So, this is the parameters that we will get the table of DH parameters that we have. So, theta seems the all are joint variables I mean all are rotary joints we make this as variable and write the equation or the relationship 5 0 as a function of theta and since these are all constant, we will put the RDC to actually to the equations, the direct relationship. T5 0 can be obtained by first finding out what is T 1 to 0. So, we find out what is T 1 to 0 and then we write what is T 2 to 0 then T 2 to 1 then T3 to 2 etc. So, since we have this parameters come we can easily write down the relationship T1 2 to 2 to 1, 3 to 2 etc and then you multiply this and get the final relationship. So, once you have these DH parameters. The first thing is to go for 1 to 0. So, 1 to 0 can be obtained by substituting this values of theta 1, alpha 1, a1 and d1 by substituting here. This is your Arm matrix or transformation matrix 4 by 4 transformation matrix between 2 adjacent frames. So, this is what I showed you in the previous slides. Now substitute this values and then get what is the T1 to 0. So, what will be T1 to 0 assuming theta 1 not known, so theta is a variable. So, of cos theta 1 can be written as C1, then sine theta 1 is s1, then this is a1. So, a1 is given here 0 and theta 1 is not known, so this also will be 0, a1 is 0, so this will be 0 d1 is there, so you can write it is 215 or you can write this d1 also. Finally, you can substitute, so it will be 215 and then what will be this one, alpha 1 is minus 90. So, cos minus 90 is, what is cos 90, 0. So this will be 0, this will be 0 and sine minus 90, that is sine 90, sine 90 is 1 sine minus 90 minus 1 yeah. So, you will be getting it as the matrix here, minus S1. So you will be getting it as, okay we have written it as d1 itself, you can substitute the d1 the numerical value can be given or you can write this d1. So, this will be the T1 0 matrix. Same way you can get T2 1, you can get T3 2 also. So T2 1 and T3 2 also can be obtained by simply substituting these values. You do not need to do anything other than that, just substitute these values and then get the matrix 2 to 1 and 3 to 2. So, finally the wrist to base. So, if I bifurcate this into wrist and base, wrist and tool. So, wrist to base is T. So, if I write this as wrist to base, it will be 1 to 0, 2 to 1 and T3 2 that is going to be the wrist to base transformation. So, multiply these three matrices that is going to be the wrist to base relationship. That is the position of the wrist with respect to the base and the position of the and the orientation of the wrist frame with respect to the base can be obtained by this transformation matrix that is wrist to base. So you need to multiply this, this is maybe somewhere you make mistakes. But nowadays with all this symbolic computation, you can do it in a computer and then easily get it. So, you can see here this is a2C2 and a2S2, this a3C3 and a3S3 for the positions and if you go back to the pictures and then try to understand you will be able to understand why it is coming as a2C2 and a2S2, because we are trying to go up and down in the same plane. So, your X will be a2C2 and Y will be a2S2. So, we will get this as this relationship. 
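Since the lecture points out that symbolic computation makes these substitutions and multiplications painless, here is one way the T 1-to-0 step above might look in SymPy. The values alpha1 = -90 degrees and a1 = 0 follow the discussion; keeping d1 as a symbol (rather than substituting 215) is my choice, and the helper name is mine.

```python
import sympy as sp

theta1, d1 = sp.symbols('theta1 d1')

def dh_sym(theta, d, a, alpha):
    ct, st, ca, sa = sp.cos(theta), sp.sin(theta), sp.cos(alpha), sp.sin(alpha)
    return sp.Matrix([[ct, -st * ca,  st * sa, a * ct],
                      [st,  ct * ca, -ct * sa, a * st],
                      [ 0,       sa,       ca,      d],
                      [ 0,        0,        0,      1]])

T10 = dh_sym(theta1, d1, 0, -sp.pi / 2)
# Expected form, since cos(-90) = 0 and sin(-90) = -1:
# [cos(theta1), 0, -sin(theta1), 0 ]
# [sin(theta1), 0,  cos(theta1), 0 ]
# [0,          -1,  0,           d1]
# [0,           0,  0,           1 ]
#
# Chaining further links and applying sympy's trigsimp element-wise (e.g. via
# .applyfunc(sp.trigsimp)) is how combined terms like cos(theta2 + theta3) appear.
```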
So, once you multiply this, you will be getting the wrist to base as a 4 by 4 matrix and you will see this one expression, just got cos theta 2 and theta 3 and theta 3. So, C23 stands for cos theta 2 plus theta 3. So, this is what actually S23 C23, this is cos theta 2 plus theta 3 is given as cos 2 3 nearly sine theta 2 plus sign theta 3 is given as S2 3. So, this wrist to base, so if you go back to that diagram, it was something like this one join here, one join here, one. This was the wrist point. If this is the wrist and this is the base and then you have this two degree of freedom here, the roll and the pitch. So, the wrist point here you can see this is the P position of the wrist and the orientation of the wrist is basically the frame attached to that. So, this says that here the position of the wrist is a2C2 plus a3 C2 3 minus multiplied by C1 that is the Pxw and this is Pyw and this is PzW, that is the position of wrists in x y z with respect to the base frame is given by this and the orientation of the wrist with respect to the base frame is given by this matrix. So, this is the position vector of the wrist point, this is the orientation of the wrist frame with respect to base. Same way you can get from tool to the wrist also by multiplying 4 to 3 and 5 to 4, we have only 2 degrees of freedom for the wrist. So it will be, this can be obtained as T5 to 3, so i could adjust this so if 4 to 3, 4 to 3 and the 5 to 4. So, that will be the tool to wrist. So, 5 is tool, we will be getting this two matrices, multiplying you will be getting and you can see it is a function of d5 and S4 similarly C5 and S5. So, it is a function of theta 5 and d theta 4 and theta 5, function of theta 4 and theta 5 and d5 is constant. So, variable will be theta 4 theta 5 and d is a constant. We will see C4 C5 minus C4 S5, S4 C5 minus S4S5 this will be the position of the tooltip with respect to base frame, with respect to wrist frame and this is the X position of the tooltip with respect to wrist frame, this is 5 position with respect to wrist frame and Z position is here, because rotation with respect to Z axis of that positon that is 0. So, now we got this two transformations that is wrist to base and then tool to wrist. Finally, we are interested in knowing what is the transformation from tool to base? So, this is what we are interested in. Now, multiply these two matrices, multiply this two matrices, you will be getting the tool to base transformations, which gives you the complete forward relationship of the manipulator and when you do this, again it will be a the expressions will be slightly longer in this case. So, you will be getting this as tool to base as this matrix relationship. So, this is the final transformation matrix between the tool frame and the base frame, you will see that the Px is given by this. The position of the tool with respect to the base Px is given by this, this is Py and this is Pz. So, Px is equal to cos theta 1 multiplied by 177.8 C 2 plus 177.8 C23 minus 129.5 S 234 will be the Px. Now we can see that the position of the tooltip Px is a function of the theta 1. So as a first joint changes, its Px changes, as theta two changes, it changes, theta 3 changes, position changes, theta 4 changes and it is not a function of theta 5, because theta 5 is the roll axis, the roll axis is not going to change the position of the tool with respect to the base frame. 
That is why Px is not a function of theta 5, but it is a function of theta 1, theta 2, theta 3, and theta 4 plus are the DH other parameters, fixed parameters. Similarly, here you can see Py and this is also Pz. This is the orientation of the tool frame with respect to the base and here you will see it changes with respect to all the joint angles. So, theta 1 to theta 5, it is a function theta 1 to theta 5. So, any changes in any one of joint angles will change the orientation of the tool with respect to the base frame. Pardon. Column. This one, yeah, so this one right? Yeah. So why is it so? So what is this vector? This is an approach vector. So approach vector is, so I am holding this, this is the approach vector and any rotation of this, so it is not going to change. So, it is going to be the same one. So, the 5 is not going to affect along that axis the approach vector is not going to change. This vector will remain same even if I rotate like this. But if I rotate with respect to this then the vector is changing. So, that is why approach vector is not a function of theta 5. So, you can actually go back to the diagram, yeah, now look at this manipulator. So, this is the approach vector and even if you rotate with respect to this, the vector remains the same vector is not going to change and similarly, you can see the position of this is a function of theta 1, theta 2, theta 3 and theta 4, theta 5 is not going to affect the position, because theta 5 is again roll. So, the position will not get changed, by simply be rolling this having that roll rotation, roll motion. So, that is why it is the position is independent of theta 5 and orientation of course gets affected. Now you see all these parameters are actually during the relationship, 129.5, 177, alpha has been taken already into account. So, finally you have the relationship. Now what is the benefit of having this relationship? Now you do not need to worry about the orientation of the robot or anything. Now you have a complete mathematical representation of the position and orientation of the tooltip. For any value of theta, you can actually find what is the position and orientation of the tool. Whatever maybe the values if they are within the limits of the rotation, you would be able to find out the position. Now if we have a manipulator, we have a robot and we want to find out its, where are the what are the points it can reach, you simply substitute the values of the theta 1, theta 2, theta 3, theta 4 whole range you will be able to see all the points where actually it can reach, that it actually becomes workspace of the manipulator. So for any value of theta, you will be able to get this relationship, I mean the position and orientation of the tool can be obtained for any value of theta. So, this is the forward kinematic relationship for this particular manipulator. So, for any given manipulator, you will be able to find a relationship like this. Once you have designed the manipulator, then this will be fixed, only the theta or the d can change, all others will remain same. So, you will be able to represent the position and orientation of the tooltip as a function of the joint variables as a theta or d and get the position and orientation for any value of this joint variables and that is known as the forward kinematic relationship. Because you know theta or d, you find out the position and orientation and then find out the relationship. Any questions? Fine. 
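The workspace idea at the end of this passage can be turned into a quick numerical experiment: sweep each joint over its range, run the forward kinematics, and collect the resulting tool positions. The joint limits and step counts below are placeholders, and dh_table / forward_kinematics refer to the earlier sketch; none of this comes from the lecture itself.

```python
import itertools
import numpy as np

def sample_workspace(dh_table, joint_limits, steps=10):
    """Brute-force sample of reachable tool positions (a coarse workspace estimate)."""
    grids = [np.linspace(lo, hi, steps) for lo, hi in joint_limits]
    points = []
    for q in itertools.product(*grids):
        T = forward_kinematics(dh_table, q)   # from the earlier sketch
        points.append(T[:3, 3])
    return np.array(points)                   # N x 3 array of [Px, Py, Pz]

# Hypothetical limits, e.g. +/- 90 degrees on every joint of a 5-joint arm:
# pts = sample_workspace(dh_table, [(-np.pi / 2, np.pi / 2)] * 5, steps=8)
```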
So, if you have understood this, please try to solve this, I will help you. So, please look at this six-axis robot. You have to find the position you have to find the position, orientation of the tool at the soft home position shown below for the six-axis articulated Arm Intelledex 660T. So, this is a commercial robot Arm known as Intelledex 660T. It has got six-axis. Only difference is that you have a roll here, a shoulder roll here compared to other robots and then you have a tool roll also here and you will see that there is a tool pitch here, but there is not tool yaw in this case. You do not see a separate tool yaw axis here. But instead you will have a shoulder roll in this case. So, it has got six-axis. So, do you think it has all the 6 degrees of freedom or they are, is it a redundant manipulator or manipulator with the less number of joints than required. Because you do not see a yaw joint here, yaw axis. So, you will be able to position the tool at any arbitrary yaw? Yes or no? You understood my question or not first say? So, whether you consider this as a holonomic one or a non-holonomic one. So, one advantage here is that we can actually use this shoulder roll, the shoulder roll can be used to convert this axis to a yaw axis. So, you if this is the current elbow or this is the elbow you have a pitch. Now, this is the pitch axis but if I want a yaw, I just rotate it like this, and then do this. Now, I got yaw axis. So, this can actually act as a pitch or yaw axis by using a shoulder roll. So, basically, you are having twist roll axis and you will be given a pitch axis, so you will still be able to control the yaw because you have an additional roll here, which can position it using the, can control yaw using this two joints. So, effectively just got all the 6 degrees of freedom, so it can actually be it is actually ergonomic robot. Yeah, so that is the question I am going to ask you. So, let us see how to assign the coordinate frames here. First question, the first thing to do is to assign coordinate frame. So we know that how to do this. So, we will start with this. So, this will be Z0, X0 and I call this as L0. Now, you have a shoulder roll and a shoulder, we call this as a shoulder pitch, this is shoulder roll, this is shoulder pitch and this is elbow pitch and then tool pitch and then tool roll, that is the way how it is defined. So, we need to look at each, you can assign this as the first one or this as the first one depending on how you assign this. Either you take this as the second axis or this as the second axis, then that shown as the third axis. Now we have to see, the same principle you should follow. So your axis will be like this your Z will be like this and this will be the Z. So, you will be getting a frame here, the origin will be there and once you have the Z axis, then you assign the X axis and Y axis based on the principle, it should be orthogonal to this and do go with the same procedure or follow the same rules, you will be getting the coordinate frames. So, I am assigning this as the Z1 axis and you have the X1 and Y1. Because X1 should be orthogonal to Z0 and Z1, therefore you take X1 in this direction and then you assign Y1 and the next joint, what will be the next joint. The origin, you have to see first the origin, so the axis will be this one, that will be the Z axis and this is the Z axis. So, they intersect at the same point and therefore we will be having it at the same origin, it will be at the same origin and then go with the same rule. 
So, you will be getting it as Z2, I have taken this in this direction. As I told you I can take in this direction or this direction, so any direction you can take. So, this is Z2 X2 and follow the procedure, you will get Z3, Z4 and then finally, your Z5 and Z6, Z5 and of course you have to assign here also, yeah. So, this is the way how you assign coordinate frame. So, I have not done anything new here, whatever we discussed in the previous classes, I just did it without going into the details, how did you get each one of this. So, as you know Z2 this axis, this axis and this axis, they are parallel, the joints are parallel. Therefore, you get Z2, Z3 and Z4 in the same direction and then since the origin is to the same point, Z5 is in this direction, the tool roll is in this direction and therefore Z5 and ZX, Z4 intersect here at the same point. So, we take the same origin here and then assign the other axis and finally the tooltip will be here Z6. So, that is the way how you get the, time to stop. So, this is the thing. So, now get DH parameters, so I am giving this to you. So, please find out the DH parameters and bring the DH parameters tomorrow, when we will see how to develop the forward kinematic relationship for this. So, once you get DH parameters, then there is nothing to do other than the matrix multiplication. So, get the forward kinematics relationship for this robot. So, see you tomorrow morning. Thank you. |
Introduction_to_Robotics | Lecture_31_Overview_of_Electric_Actuators_and_Operational_Needs.txt | I am Doctor Krishna Vasudevan from Electrical Department and we will be first looking at the subject of Electric Actuators for the purpose of robots. So, let me write that down. So, let us look at actuators and that is what we are going to talk about. Actuators are essentially elements in the system, elements in the system, to create motion. There are of course, various ways in which you can get actuation. There are hydraulic actuators, there are pneumatic actuator and there are electric actuator. apart from , i mean these are the main variety of actuation system that are used in industry and maybe you have studied about some of these earlier. Now, of course, when you look at the area of robotics one needs to pay attention to the needs of the domain of the robotics, and choose actuators that are able to meet the application need. So, let us start with some examples, examples of robots. I am sure you would have seen or heard or whatever about various application of robot. Any suggestion, where have you seen use of these kind of system? Student: Drones. Professor: Drones, yes drones are one widely used currently, a remotely operated system now and we had a very famous examples of a drone attack few days ago. Anything else? Student: Industrial robot. Professor: Industrial robots, can you be more specific? Student: Robotic arms. Professor: Robotic arms, for what purpose? Student: Pick and place. Professor: Pick and place robots are one of the most frequently used in industries. Anything else? Student: Assistive device. Professor: What? Student: Assistive device? Professor: Assistive device. Assistive devices in the sense, can you be more specific? Student: Prosthesis. Professor: Prosthesis? Student: Yes. Professor: Yes, they are assistive devices, yes, so we would not really call them in, I mean fit them in the place of robots because when you say robots we are talking about more autonomous system. Any other examples of robots? Student: Mobile. Professor: Mobile robots used for what? Application. Student: Terrain. Professor: Terrain, mapping. Student: Modular. Professor: Modular? Can you be more specific what do you mean by modular applications? Student: They are building their own shape. Professor: Modular, that is you are saying building their own shape. We have robots in medicine that can perform surgery. Student: Remotely operated. Professor: Remotely operated? Student: Yes. Professor: Yes definitely remotely operated, not yet, we are not yet in a state where we can keep yourself a robot. We are not there basically. So, these are all many of the systems that are there, and certainly there are lot of needs for remotely operated systems. So remotely operated systems for example it could be army vehicles, armored vehicles where you want to send a remotely operated vehicle in to the enemy domain and do some sort of surveillance perhaps which is I mean, you have already listed the example of drones which do that but drones are for aerial survey. Whereas you may have remotely operated armored vehicles on land which does the same thing, which enter into an enemy domain and do some kind information as they can get. So, in all these situations the goal is to have some sort of autonomous ability, probably remotely supervised approach. 
And if you look at the design of all these kind of system, one then look for, so look at all these applications, it is important that they are having movement which is the essence of having a robot, you wanted to achieve something. And since, you are looking at this kind of an application it is necessary that size of any system that is used on board, size must be reasonably small. Of course, ideally you would say that I wanted it to be as small as you can make it no size at all, zero size and achieve some actuation but that of course, is impossible, so you would do in the best feasible kind of system. And if, if you have applications that are not going to be something that requires movement, physically from one place to another, if you take for example, industrial system, industrial robots we had one variety which we have mentioned that is equipment that are going to take some object and then put it somewhere else. So these are, these mechanisms do not move from location to location, they are stationery and their objective is to move an object from one place to another, so they have robotic arm that gets hold of one object and then shifts their location to some other, some other location, robot itself does not move. But on the other hand you have another application for example, if you look at the aspect of material handling, material handling robots. So, here the idea is that you may want to shift some material from one place to another which is a long distance away, so you have a system, you may have and there are various ways of achieving it, one could be a sort of box type thing with wheel here and the object that is located here, and this box moves wherever it needs to go, I am sure you would have seen these kind of applications somewhere. So, applications in industry are for variety things. Then you have also application for underwater, underwater robotics. So, you have systems that go underwater look at the environment there, maybe attempt to, attempt to locate something which has accidentally fallen off. So, in all these type of situation you need size to be reasonably small because if you are going to look at a system that is moving, obviously anything that is going to move has to shift itself first apart from whatever load is going to be there. So if you do not want the actual system itself to be extremely heavy, you want it to be able to take the load with the system, on the other hand if you have these kind of systems you are not really concerned so much about how big it is because all the while it is going to stay in one place and therefore you may have a little bit of delay in sort of selecting the actuator and selecting the other system. So, one important thing as I mentioned is therefore how big the system is. The second important thing is that your system should meet operational needs. There is no use in having a system that is small but it cannot meet your load requirement, but if you want to lift this phone and put it there, you need to lift a certain weight, for that you need to have a certain actuator or a certain step and that has to be moved so the actuator can be capable of accommodating the movement which makes the arm move from one place to another and then put it there. So, range of movement is important, speed of movement is important all that is going to decide how, how much is the capacity of the actuator. And then, you do not want to have an uncontrolled system, you cannot have an actuator, fit it and say you do what you want. 
So you need to have an ability to determine how this system is going to actually move, instance to instance. So, you need to have an actuator that has very good ability for control. So, all these three aspects one, two and three are major aspects in determining what sort of actuation system you want. The other important aspect in any actuation requires energy, without a source you cannot achieve any movement, therefore you also need to look at what kind of sources are available, sources for energy, what sort of sources are available at a particular location for a particular application that you can make use of. So, if you look at examples for this, let us say you have an industrial robot which is stationary. One other use for an industrial robot which is at fixed location is robots used for, used for welding. Have you seen that? In assembly line especially if you go to automotive industry, assembly lines are filled with these things, robots that bend in all kinds of forms and then apply the requirement on the weld. If you look at these kind of applications, they are always located at a fixed place in an assembly line, they do not need to move, they do not need to do anything else, other than bend their arms and things that are determined apriorly. You do not go every time and tell the robot to move like this. So somewhere it has been decided apriory that this robot will start with its arm in a particular position then move through few angle and rotations and all that, and they go and stay there for some particular time until the weld process is over, and then come back in a certain way. So these are, since they are always fixed in a particular way and the industry is filled with electrical, electrical sources in the sense electricity in available everywhere, it is most appropriate to build this out of electrically actuated systems. Electrically actuated in the sense at least the first source of activation would be electricity, the end result would still be achieved using something else. So, there are actuating elements known as electro hydraulic system. So they are essential hydraulic systems but the first level of actuation is by, is by electrical means. So, the electrical entity, the electrical actuated entity then releases the flow of oil and that oil is going to ultimately result in the movement of the arm. So, it maybe electro hydraulic but then the initial form at least may be electricity because its widely available. But on the other hand, if you look at an underwater robot, how do you actuate this? Student: we can use a wired connection Professor: That is very difficult. You do not know at what depth you are. If you are not going to get a fixed amount of this thing at all, initially there is no measure when you are on the surface, as you go down you get increasing amounts of thing, so that is very difficult. Anything else? Student: Battery is there. Professor: Battery is probably the best bet as of now. Otherwise you could have some kind of oil, fuel, diesel etcetera stored within that and then operate an engine which you then give to the actuation to whatever you want to move. And that is another option, but I do not think they are used in this underwater application, you do not want to take fuel all the way down. But in majority of the situation where the depth is not very high, they do not give the storage mechanism at all inside. 
They are simply having a long reach of electrical supply which is going to be available on shore, onboard and you simply make a long electrical link to the underwater robots and it draws electricity from the shore. So this could be a way in which it could. If you want an autonomous operation then you need to definitely have some kind of storage mechanism onboard, so that is what we have. Here you have as I mentioned, industrial electrical supply. Then I mentioned remotely operated armored vehicles. In this case it is remotely operated that means you are looking at an armored vehicle located somewhere and then you going to be sitting far far away inside your own area and this enemy, this remotely operated vehicle is going to go into the enemy domain and you need to operate from there which means that it is impossible for you to have a long electrical link for several long distances. Therefore there must be a local energy storage and this is the armored vehicle. Therefore the best form of energy storage will be fuel itself. So in this case what you have onboard is basically diesel engine. But this engine has to first of all move the vehicle that is one thing. Apart from that the vehicle may have lot of other accessories for detecting many things. You may need to, you may need to detect whether you have any explosives located underground for which you need some sort of sensor. And those will need to be actuated by some arm. So you have to lift the sensor and then maybe rotate it adjacent to the arm, move it a little bit maybe. Therefore there are further actuation means for moving other accessories that are located on the board. So, how does one move that? Ultimately the source of all that maybe from the diesel engine. So in these kind of applications usually the way it is, it happens is, this then is used to generate electricity and then that is used to operate whatever else we want to operate as an electrical accessory. So, there are various forms in which energy storage or energy usage could be available and one needs to select the appropriate mechanism for that. Now, if you look at this aspect, ability to have good control you need to have systems where it can respond the way you ask it to. So if you look at the way systems are designed today, most of the systems that require some sort of a control depends upon electronics. You want to write some software, you have some algorithm which you know, which you want to implement and which says that this arm should move from this to this location, to this part with certain velocity profiles etcetera. So this is an algorithm and the easiest way to implement the algorithm is to implement it in the electronics, some digital electronics. FPGS MCs so on. So this is exactly what you have, an electronics forms the basis of implementing very sophisticated algorithms. And once you have that you may then use the end result of your algorithm which is available in electronics to actuate the non electronics system that is fine, but it is easier if there is an electric system in hand which we can control. And it so happens that when you look actuators that you want to control, as of today the actuating systems that give you the best ability of control are electric system. So, from that viewpoint you have electrical actuators as the best means that is available. 
Then if you look at the system that meet the operational need, you need to select the capacity of the actuator that is apropriate for the particular use at hand, which means that if you want to lift this phone you need a certain capability from the actuator, if you want to lift this entire bench, obviously your actuator needs to be bigger. So one needs to select the size and the capability accordingly, there also you find that electric systems are fairly good, it means that output or not really our, the mechanical torque that it is able to generate divided by the weight of the system, is what is going to determine when the efficacy of a particular actuator that you select. And electric actuators are not very bad in this case, they do fairly well and are able to meet most requirements. However, if you look at this number and you want to select the best out of the lot, you will find that hydraulic systems offer the best options amongst all varieties. They, they are usually the smallest for its output load capability that you want. So, if you look at the earlier days when actuation systems were required, they were invariably hydraulic systems. But the world is sort of moving away from hydraulic systems mainly due to the fact that hydraulic systems are difficult to maintain at a big new system, you have to have oil which is high pressure, and then that oil has to circulate through lot of orifaces and you have to maintain the oil to be highly consistent, no dust etcetera and you have to take care of leakages which will make the operation very difficult. So, because of all these things you want to move towards electric actuators which are much much easier to maintain and operate. So, due to all these kind of situation, so if you look at the first one, hydraulic are the best, as far as size is concerned, but due to other things that we just now discussed we want to move toward electric actuator. So, if you then look at electric actuators itself, electric actuators are today mainly electric motors. There are sometimes other valves that are used, electromagnetic valves, which can either release the fluid flow or stop the fluid flow. But unless that is the role of, of that is the operational need and mostly one looks at electric motors. And when you look at electric motors one needs to take recognition of the facts that what is the source of energy that is available, in what form? Electricity, that is available, electricity is available in either in DC or AC and one may at the outset say that I would choose the motor that is appropriate for the form of the source. You have AC source, the AC motor, a DC source the DC motor, that approach works well if you want to look at very small application. So, if you look at electric motors, I am sure you would have all at least had some initial introduction to electric motors. So, electric motors are basically electro mechanical energy conversion devices which then accept an electrical input and deliver a mechanical output, or this is mechanical side of link and this is electrical side of link. And en electric motor therefore enables an interaction between these two sides. You make the electric motor or electric machine in a more sort of generic sense is an appliance or a device that can change the electrical energy and convert it into mechanical or take mechanical and convert to electrical, both ways are feasible for any (electric) electric, electro mechanical device. 
And this interaction happens in the, that is, is enabled by the presence of magnetic fields inside the system. So, unless you have magnetic field this entire thing does not happen. So first of all you need establish the field inside the system. So, if you have such a system then it is feasible to control the electrical side and get what you want on the mechanical side. Ultimately as far as robots are concerned, you are interested in the mechanical side of things and to achieve what you want, you need to have a good ability to control the electrical side of things. There are many different varieties of electric motors that are available. So, broadly these motors are called as, one is DC motors and the other is AC motors. In AC motors there are different varieties, one is called as the induction motor, another is the synchronous motor. The synchronous motor, there are other variety which are called as Brushless DC motor, Permanent Magnet Synchronous motor. There are yet another group of machines which are known as Variable Reluctance Machine. So under this group what you have are Stepper Motors and Switched Reluctance Motors. And now there is another variety which is also making its appearance that is called Synchronous Reluctance Motors, reluctance motors. So these are broadly a large range of available electric motors for used in your actuation, used for actuation. Now, the question is, which one will you use? Now, if you look at the need for actuation I said earlier that any system that you are going to select must meet the operational needs. What do you think will be an operational need? From the actuator, what do you think will be an operational need? Student: Torque requirement. Professor: Torque requirement is the primary need. So torque capability is a primary need. Any sort of movement requires due to generating a force which in again a rotating system is the mechanical torque. Now, if this is what you need let me give you few examples, let us say that with respect to time, I am going to draw a graph of mechanical torque, actuator 1 gives you, I mean this is T equal to 0. That is when you start, when you start the actuator and you find that the mechanical torque goes like that. Let us take the table to generate the mechanical torque that looks like this. In the next case, I have another actuator 2. This is torque again, the next actuator generates the mechanical torque that looks like this. Which one will you choose? Student: The stable one. Professor: Why did you use the word stable? Student: the other one have oscillations. Professor: So, there are oscillations, I do not know whether it is decent enough to call it as unstable. But so you think that the first one is better, why? How do you think that the oscillation, the second one will impact your operation? Student: there will be vibrations. Professor: There will be vibrations. Student: Object may get damaged. Professor: Object may get, may have some damage in a few cases, but what does it mean? You said that there may be vibration. What determines whether vibrations will be there? Definitely this mechanical torque is having oscillation. Fine. But is it going to imply that you will have vibrations? Is there any thing else between this and the vibrations that you will see? Is there anything else that is going to determine how much vibration will be there? 
Damping is the important aspect. You are going to have an actuator that is generating this torque, but whether all of that results in vibration of the system depends on certain other elements in between. For example, take the case of a robotic arm that is going to pick this up and place it there, and in my arm I have an actuator that in the first case produces a torque like this and in the second case like that. The first case is obvious: there is no difficulty, there is a smooth mechanical torque, it goes and settles down. Now in the next case there is a slight oscillation. So how do you think the system will move? Is it really going to move like this? Student: There we have damping later. Professor: Whether you have? Student: Damping later. Professor: Damping later. So the actuator sitting inside, at the joint that is going to have the rotational motion, is the one producing this kind of mechanical torque, and the motion we care about is not of the actuator but of your arm. Between the actuator and the motion of the arm sits your mechanical system equation, which contains a lot of mass, and therefore there is a moment of inertia which reflects onto the actuator. That moment of inertia, the motion of the arm, the weight and so on will mean a substantial attenuation of this ripple — even if there is no damping, substantial attenuation of that ripple may be there in the actual motion that is felt. Student: So it depends on the natural frequency. Professor: It depends on the natural frequency. It may. So, exactly. There are these undesirable things; the question is how much. There are likely to be losses in the system due to this, there may be undesirable resonances, but the question is at what frequency — whether this oscillation frequency is far, far away from your mechanical system's resonance frequencies, and whether it causes an undesirable oscillation in the rotor speed itself. Now let us say you have another actuator, a third actuator, which generates something like this. How is this different from the second one? Student: Higher amplitude. Professor: Higher amplitude, lower frequency. So which one do you think is a little more difficult, the third one or the second one? Student: Third one. Professor: Third one. Why? Because the amplitudes are higher, definitely, and we may then match the mechanical resonance frequency. Usually mechanical resonance frequencies are low. You do not see mechanical resonance happening at 100 kilohertz, it never does that; if you want a 100 kilohertz resonance in a mechanical system you are looking at the nano scale, a very, very light system which is unsuitable for robotic applications. Mechanical systems, when you look at them, usually have mass and weight, and therefore the resonance frequencies are fairly low. Therefore the last system that I drew is more likely to cause difficulties in operation compared to the second one. So, given the mechanical system, you can always determine what ripple frequencies are allowable such that they do not cause a difficulty. We must also remember that we are looking at an engineering application and not a physics exercise. If you look at it purely as analysis, as physics, when you want an output equal to x you are looking at exactly x — x plus or minus zero — as the output.
But when you come to engineering, if you say you want an output of x, you are looking at x plus or minus something, which is acceptability. So acceptability is what is of intrest not the fact that it should be exactly equal to some number. For example, if you say that this mechanical torque I want to be equal to 50 Newton meter. As an engineer, you would immediately ask how much is the accuracy. You want to get 50 point 00000 or you can 50 with some allowance there, so you always say plus or minus some delta. So, if this is the requirement that is physically achieved, you can always say depending on the application I want this value to be equal to 0 point 1 Newton meters, maybe the application require or maybe the application require 0 point 01 Newton meters, but it is never equal to 0. So you have always a certain allowance in your application which can then determine how much of ripple magnitude is allowable. And then comes the question of how much ripple frequency is allowable which then consideration based on your mechanical system design, so based on that one can allow. Therefore we are now saying that it is not necessary to have this kind of system, if you get it, very good, nobody complains. But it is okay to have this kind of system, provided you are within your design limits, design requirement. So, this is, this therefore is a very important aspect in system design that such a behavior is acceptable, why we will see later as we go on. So, this is one important aspect, ability for generating torque. The next one is speed of movement. It should be capable of moving at whatever speed you want it to move. Having said that, now let us say that you want ultimately this robot arm to go from here to there, how much time would you like it to take to go from here to there? Do you want it to move in one second or do you want it to move in let us say fifteen seconds, how will you determine that? Student: based on weight. Professor: Weight of what we are moving definitely has to be considered, but how long it is going to take to move from here to there is usually an operational need. For example, if you are at an assembly line and you have objects that are moving along the belt and the robot has to pick this up and put it in a box. So this is one of the things in an assembly line, where you want this entity to assemble the output, the end output you want to lift it and put it in a box and close it and send it over for dispatch, so this is one application. So you want this to take it out, put it in a bag. And this entire operation of taking this from here, locate it in the box and bringing the arm back must be finished by the time the next object comes, otherwise it is going to miss norms. So you need in a determined operating the rate of movement of the belt then you know how much time you have in order that this arm goes from here to there and then comes back. So, that when determining the speed with which you wanted to move. Usually given a mechanical system you do not foresee that you require to move from this, from here to there, and bring it back in one second. That is too high a speed for a mechanical system, You can build a mechanical system but the actuation need to be very very large, you are looking at very fast movement, faster the movement move your actuation need. So usually speed, the end speed of, speed which you want to achieve in comparison to the motor speed that are available to you would be rather slow. 
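As a small worked example of the pick-and-place timing argument earlier in this passage (the numbers are made up, not from the lecture): if parts arrive on the belt a fixed distance apart, the time between successive parts is the budget for the whole pick, place and return cycle, and that budget is what fixes the required speed of movement.

```python
part_spacing_m = 0.5      # assumed distance between parts on the belt
belt_speed_m_s = 0.25     # assumed belt speed

cycle_time_s = part_spacing_m / belt_speed_m_s   # time between successive parts
print(f"Arm must complete pick, place and return within {cycle_time_s:.1f} s")  # 2.0 s
```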
If you look at an electric motor speed for example, you will be seeking in the levels of something like 1500 RPM. You do not want the arm to move at 1000 RPM and keep it there. You will probably hit something, impossible to do. You would be looking at a much slower speed. So now, the question arises is do you make the motor itself rotate at a slow speed, or do you make the motor rotate at higher speed and do something else to make your arm move slower? Is it feasible to make the motor rotate at higher speed and get the arm to moving slower? How will you do it? Student: Gears. Professor: Gears is one. So you are really looking at motor speed versus load speed. Whether you wanted to be same or you wanted to be different. So, this is one important aspect in deciding what kind of motors you wanted to be used. |
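A hedged sketch of the motor-speed-versus-load-speed trade-off the passage ends on: with an ideal gearbox of ratio N, the load speed drops by a factor of N and the available torque rises by the same factor (losses ignored). The 1500 rpm figure is from the lecture; the load speed and the 1 Nm motor torque are assumptions for illustration.

```python
def through_gearbox(motor_speed_rpm, motor_torque_Nm, ratio):
    """Ideal gearbox: speed divides by the ratio, torque multiplies by it (no losses)."""
    return motor_speed_rpm / ratio, motor_torque_Nm * ratio

# Motor at 1500 rpm (as in the lecture), load required at roughly 30 rpm -> ratio of 50.
ratio = 1500 / 30
load_speed, load_torque = through_gearbox(1500.0, 1.0, ratio)
print(ratio, load_speed, load_torque)   # 50.0, 30.0 rpm, 50.0 Nm available at the arm
```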
Introduction_to_Robotics | Lecture_93_Occupa_Grid_Mapping.txt | hello everyone and welcome to the third lecture in a week 11 in the intro to robotics course where we are going to continue looking at probabilistic robotics right so we have looked at various estimation problems and various kinds of models so far so in the last lecture we were talking about uh how the motion model has to be corrected in the presence of a map right so where does this map come from right so you can assume that the map is given to you a priori but quite often ah being able to you know estimate the map right of the environment in which you are operating in ah is one of the really an ability that we want an autonomous robot to have right so one way of thinking about it is it is a core competency for autonomous system being able to detect estimate the map of the region in which they are operating right so formally what we are going to assume is a map m is essentially just a list of objects in the environment along with their properties right you could say ok there is an obstacle here it's in location x y and and then the orientation of theta one right so that could be m one right so i'm going to look at this as a collection of objects along with your properties in some sense uh this whole map estimation problem is a lot more difficult than what we have looked at so far the all the problems in terms of you know state estimation and and even estimating the motion models and so on so are slightly easier because even if you are operating in a continuous space right so we are at any point of time keeping track of only one location when we are looking at ah state estimation right so all the bayesian filter problems and all the slightly simpler but in the case of mapping we are literally looking at so many different locations right in fact if the if you are operating in a continuous space then you are really looking at being able to represent this continuous workspace in some kind of an abstract form right so so just make the job easier right so think of this as a more of a discrete estimation problems we move it down into this kind of a list of objects and their properties right and so the mapping is done in two ways right so typically so this list of objects that we talked about right they could either be some kind of a feature based map right so in a feature based map n is some kind of a feature index right so features could be ah the different objects in the environment could be wall positions of walls and partitions and so on so forth right and so the value of m n remember m m is just m n is an nth object right contains other than the properties of the feature right it also contains the cartesian location of the features in the in the workspace right so you can just say ok here the xy location i have a feature it's like it could be a table so in which case i would have some kind of dimensions of the table and and so on so forth right so the feature based maps ah specify the shape of the environment at specific locations right namely the locations of these objects that you have on the map right it doesn't necessarily give you a complete description of uh the shape of the environment where there are no objects right so it sometimes it's actually useful to have these feature based maps when i'm trying to do a localization in in the world right so if i'm actually next to a specific feature right if i'm next to for example if i'm next to a particular building when i'm trying to do navigation right it's easier for me to 
know where i am oh i am next to that building so that way i get i get more information when i have this kind of feature based representation right and likewise another advantage with feature based representation is easy to add additional features and also it's easy to move move an object around right so i can adjust the position of an object on the map uh just by you know doing additional sensing or getting closer to the object and therefore being able to refine my position estimate of the object and so on so forth so so feature based maps are easier to update in some sense and and i can do this without having to worry about actually verifying any of the other locations or other objects that are there in my ah in my map right so i can just independently adjust the position of the new features so that way it's convenient in certain conditions but in other ways when i want to figure out if a particular location is free for me to move into right suppose do i have an obstacle or not like we were talking about in the motion model right in such a case i would really have to run through all my objects right all my features to figure out if that is a feature that intersects with that location that i want to move to right in such cases uh it's easier for us to maintain what are called location based maps like in feature based map each m correspond each mi okay corresponds to a specific object or a specific feature that i want to map right in the location based maps each mi corresponds to a specific location right so we call these location based maps as volumetric maps in the sense that they offer a label for any location in the world right so in the feature based maps you do not get that volumetric maps they not only contain information about the actual objects they also contain information about absence of objects for example spree free space right so if i want to know if there is a particular location to which i can move to having a location based map and having seeing that location as marked as free space is a useful thing to have right so in many cases when i want to do these kinds of planning right path planning and so on so forth i would like to have this kind of location based maps right when i wanted to localization and i want to figure out where i am in the map right with respect to uh you know landmarks and so on so forth feature based maps are easy and when i want to do planning and i want to do some kind of prediction in terms of where i will end up with right uh location based maps are more useful right and quite often in many practical scenarios we would like to go back and forth between the two right and so for the rest of the lecture i will look at a specific kind of location map right location based map which is very popular which are called occupancy maps right occupancy grids or occupancy maps right and basically the idea here is they assign to each x y coordinate right a binary occupancy value 0 0 or one zero means there is uh it's location is free one means the location is occupied right that's why it's called occupancy maps so one means a location is occupied so i can query any location in in a in in a world and then figure out whether that location has a object or does not have an object okay so if you actually you can even store it as a bitmap as zero and one and therefore sometimes even for large spaces you can have a compact occupancy grid or occupancy map so when you are operating in a continuous space right so we typically don't look at each continuous x y coordinate 
value for occupancy grid occupancy maps what we do is we actually make a grid space right around it right so we make a make like a grid line so we will see it in later later in one of the slides and for each cell in the grid we assign a occupancy value as opposed to assigning it for each xy coordinate because we are operating in a continuous space then this is not really a practical approach right so what we do is we discretize the state space and make it make it into a grid and that is when occupancy maps are also called occupancy grids right right so looking at the occupancy grid algorithm so in some sense the goal is to compute the following posterior right so given a sequence of observations and the sequence of states right in each state i am making some kind of an observation right i would like to estimate a map yeah what is the probability over the map m right so remember m is actually a collection of mis right for each m i denotes a particular index in the map right so it's a collection of these objects so it could so in the occupancy grid each mi is going to correspond to a location right so the controls really play no role in the occupancy grid maps right since we already know where we are right so for each location i make a measurement i really need to know whether it's occupied or not occupied right so i don't really need to know what is the control that will get me through that space that in some sense i can assume that the path that i am actually following for doing this mapping is already known to me right so we are going to say that each mi right denotes a grid cell with index i right so which is each so i'm going to say m one means index one right so and an occupancy grid map is essentially just a collection of many of these grid cells uh mi right so for all the grid cells mi and each mi is going to have either a one or a 0. 
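A minimal sketch of the discretization step just described: a continuous (x, y) position is mapped to a grid cell, and one occupancy value is stored per cell. The resolution, map extents and log-odds storage choice are my own illustrative assumptions:

```python
import numpy as np

class OccupancyGrid:
    """Continuous workspace carved into cells of fixed resolution."""
    def __init__(self, x_min, x_max, y_min, y_max, resolution):
        self.x_min, self.y_min = x_min, y_min
        self.resolution = resolution
        self.nx = int(np.ceil((x_max - x_min) / resolution))
        self.ny = int(np.ceil((y_max - y_min) / resolution))
        # store log odds per cell; 0.0 corresponds to p(occupied) = 0.5
        self.log_odds = np.zeros((self.nx, self.ny))

    def cell_index(self, x, y):
        """Map a continuous world coordinate to a (row, col) cell index."""
        i = int((x - self.x_min) / self.resolution)
        j = int((y - self.y_min) / self.resolution)
        return i, j

    def p_occupied(self, x, y):
        i, j = self.cell_index(x, y)
        return 1.0 - 1.0 / (1.0 + np.exp(self.log_odds[i, j]))

grid = OccupancyGrid(x_min=0.0, x_max=10.0, y_min=0.0, y_max=10.0, resolution=0.1)
print(grid.cell_index(3.27, 7.53), grid.p_occupied(3.27, 7.53))  # (32, 75) 0.5
```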
So m i equal to 1 means that the cell is occupied and 0 means it is free, and I am going to use p(m i) to refer to the probability that m i is equal to 1, that is, actually occupied. So whenever we write p of m i, it means the probability that the i-th cell is occupied. Then, instead of looking at the full posterior, the probability of m given z 1 to t, I am going to ask: what is the probability of m i given z 1 to t and x 1 to t, and look at this for each one of these cells independently. So I am going to assume that the probability that a cell is occupied does not influence, and is not influenced by, any of the other cells in the map; it is influenced only by the measurements I am making and the actual state values, only by z 1 to t and x 1 to t. This makes my problem much easier; otherwise I would have to solve a very large joint estimation problem. Instead of that, we assume that each cell is estimated independently. Now if you think about it, what do we have here? I have a single variable that I am trying to estimate, and this variable is static: I am assuming the cell is not changing, it is not going to get affected by my actions or my movement, because we are talking about the map. The cell is either clear or occupied, so it is a single variable taking on one binary value, zero or one, and it does not change. We have already looked at this: we looked at the binary Bayes filter with static state earlier, and we already looked at the algorithm. So all we really need to do now is apply that algorithm, where instead of estimating the state x given a sequence of observations, I estimate the probability of m i given the sequence of observations and the sequence of states I have seen. It is exactly the same algorithm: instead of using it for state estimation, I run the binary Bayes filter with static state to solve the occupancy grid mapping problem, using the same kind of log odds representation of occupancy. So this algorithm should look very familiar to you; you can go back and look at the binary Bayes filter with static state algorithm. And what do I have here? I have the likelihood of the occupancy seen so far, starting from the beginning; basically l at t minus 1 is like my belief state on the map: what is the likelihood that cell i is occupied based on the measurements I have made till time t minus 1.
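A minimal sketch of this per-cell update in log odds form. The two callables `in_perceptual_field` and `inverse_sensor_model` are placeholder names for the routines discussed in the rest of the lecture, not definitions taken from it:

```python
import math

# log odds representation: l = log(p / (1 - p)); l = 0 corresponds to p = 0.5
def prob_to_log_odds(p: float) -> float:
    return math.log(p / (1.0 - p))

def log_odds_to_prob(l: float) -> float:
    return 1.0 - 1.0 / (1.0 + math.exp(l))

def occupancy_grid_mapping(l_prev, x_t, z_t, in_perceptual_field,
                           inverse_sensor_model, l0=0.0):
    """One time step of the binary Bayes filter with static state, run
    independently for every cell. l_prev is a list of log odds values,
    one per grid cell."""
    l_new = list(l_prev)
    for i in range(len(l_prev)):
        if in_perceptual_field(i, x_t, z_t):
            # measurement carries information about cell i:
            # add the inverse sensor model (in log odds), subtract the prior l0
            l_new[i] = l_prev[i] + inverse_sensor_model(i, x_t, z_t) - l0
        # otherwise the belief for cell i is left unchanged
    return l_new

# toy usage: one cell, observed once as "occupied with probability 0.7"
l = occupancy_grid_mapping([0.0], None, None,
                           lambda i, x, z: True,
                           lambda i, x, z: prob_to_log_odds(0.7))
print(round(log_odds_to_prob(l[0]), 2))  # 0.7
```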
So that is what this set is: you will have one entry for each value of i, and then you have your current state and your current measurement. Now, for all cells m i: if m i is in the perceptual field of z t — what do I mean by that? When I am in x t and I make a measurement z t, how much of the world can I see? Depending on what kind of sensing I have, I am not only looking at the current state I am in; I could look at a whole region of states around it, as we will see in the next couple of slides. For example, just look at this figure for the measurement model: even though I am here in this particular state right now, I am actually able to see a lot more cells based on my field of vision, and I can make estimates about whether those cells are also occupied or not just by making a measurement at x t. So when I say I am at x t making a measurement z t, that does not mean z t returns information only about the cell currently occupied by the robot; it could give me information about larger parts of the state space, and that is basically what we are trying to take into account here. So, if m i is in the perceptual field of z t, meaning the measurement z t gives me some information about cell i, then I update the log odds using the inverse sensor model. If you remember this expression: this is l at t minus 1, i, which is my old belief in log odds form, then I add the inverse sensor model — the likelihood coming through the inverse sensor model — and subtract l naught, which is the initial log odds that I start off with. That is my update equation. And if m i is outside the perceptual field of z t, that means I cannot make any estimate about whether cell i is occupied or not, so I leave the log odds as they are; the new belief is the same as the old belief, because I have made no measurement on cell i that would change it. I do this for all cells m i, and at the end of it I get the new map, the new belief over the map. Remember, what we are looking for is the probability of m given x 1 to t and z 1 to t, and this is the new probability of m given z 1 to t, x 1 to t. I hope that is clear; I am going through this a little faster because we have already seen the binary Bayes filter with static state, and this is exactly the same kind of algorithm, except that instead of the belief over your current location you maintain a belief over each particular cell in the occupancy grid. So now the interesting question is: how do I come up with this inverse sensor model to use in the occupancy grid algorithm? What we do is as follows. Just think about this for a minute: if the robot is here and there is an obstacle either here or here, those are two different situations, but in both cases the robot has the same kind of field of view — say a 15 degree cone of vision. In one case, because there is an obstacle nearby, the robot is able to look at all the states marked in white and tell you these are clear, and it can look at all
the states marked in black and can tell you there is an obstacle there and all the gray cells it cannot give you any information above over and above what you already know right so this is not something that the robot the the current measurement can give you any information on right and here is a case where there's an obstacle that's a little farther away right the robot could potentially right so could potentially look at a lot more states here right the robot could potentially look at a lot more states here right and then it can give you information about all of these saying that they are clear because this is a the field of view there is nothing that stops it until it hits the obstacle right beyond the obstacle the robot doesn't know what is there right before the obstacle the robot knows what are the cells and these are all free and anything outside of this field of view or behind the obstacle it can't give you any additional information right so the inverse model is basically implemented in the log odds form directly so that i can just use the inverse model here instead of saying the log odds of the inverse model right so that's what we had earlier ah so this is this is the log odds of the inverse model so instead of saying that i could sorry you have to have to cancel that so instead of saying the log odds of the inverse model i can just say the inverse model here and learn the inverse model in terms of the log odds okay and so here you can see that everything that is white means it is free everything black means it's occupied everything gray means i don't know really okay so how do we go about doing this and right so here is a here is an algorithm that tells you what is being computed in the previous slide right so i am going to assume that the current state x t is given by x y and theta right so x y and theta and the cell i right the index i denotes a cell mi and which is centered at x i y i so if if you think of this as a cell right so the center of the cell the center of the cell is what i denote as x i like x i y i right now r is the distance of the cell from the robot right and phi is the angle the cell subtends with the with the robot the direction with the direction the robot is facing right so phi is the angle it subtends with the direction the robot is facing so now among all the possible sensors that could be looking in the direction remember zt is a collection of all sensor readings i pick the one that is pointed most in the direction of the object right so this basically this means that so the ob this is the sensor theta j sense is the sensor that is pointed as close to the direction of the object from the robot right so i picked that now what i do is if this object right is farther away than the maximum range of the sensor or where the sensor says there is an obstacle right if the object is farther away then the maximum range of the sensor or where the sensor says there's an obstacle whichever is the minimum of the two right then i say that hey i don't know anything so i am going to return l naught right or if the object is too far away from the direction the sensor is looking remember this is the sensor that is the closest to the object that is looking in the direction closest to the object but even then the object is too far away in both these cases i am going to say that i don't know where it is so this both these cases correspond to so the the first condition right the r condition corresponds to the cell being in one of these locations right and the second condition 
the one on phi, corresponds to the cell being in one of these locations. In both cases we say that we will return l naught. So what does it mean to return l naught? Remember, if you plug l naught into the update here, I will have plus l naught and this will become minus l naught, so they cancel out and I basically leave the belief as it is, without changing anything. That is what I mean by saying I will return l naught there. Okay, then: if z t k, which is the distance at which the obstacle was sensed, is less than z max, that means the robot has actually sensed an obstacle, and if the magnitude of r minus z max is less than alpha by two — that means the distance to the cell m i is within the obstacle thickness (alpha is the obstacle thickness) of the maximum range — then I return that the cell is occupied. On the other hand, if r, the distance to the cell, is actually less than z t k, where z t k is the distance at which I have sensed the obstacle, then I will say the cell m i is free, so I return l free; the cell m i is free if r is less than z t k. But if r is not less than z t k, and it is within alpha by 2 of it, then I am going to say that the cell is occupied. So what do we mean by the cell being occupied? Now, this algorithm is taken from the book, and there is an error in the book itself: in this condition you should read z max as z t k. That is, it means I have detected an obstacle within the maximum range of the sensor, and the distance of the cell is within the obstacle thickness of that sensed distance, in which case I say it is occupied — so this should be z t k. And if r is less than z t k, then I am going to say it is free. So that is basically the algorithm, and this is how I build up the inverse sensor model for a range sensor. So I talked about two kinds of maps: one is a feature based map, which is typically more useful when I want to do localization, as we will see a little later; the other is the location based map, of which the most popular is the occupancy grid map, and we just saw how we can use the binary Bayes filter with static state and an inverse sensor model in order to estimate the occupancy grid map. |
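A sketch of the inverse range sensor model described above, in log odds form, with the correction mentioned in the lecture applied (compare |r − z_t^k| against alpha/2, not |r − z_max|). The constants L_OCC, L_FREE, ALPHA, BETA, Z_MAX are illustrative tuning choices, and angle wrap-around is ignored for brevity:

```python
import math

L0, L_OCC, L_FREE = 0.0, 0.85, -0.4   # log odds values (assumed)
ALPHA = 0.2                 # assumed obstacle thickness [m]
BETA = math.radians(15.0)   # assumed beam opening angle
Z_MAX = 8.0                 # assumed maximum sensor range [m]

def inverse_range_sensor_model(cell_xy, x_t, z_t, beam_angles):
    """cell_xy: centre (x_i, y_i) of cell m_i; x_t = (x, y, theta);
    z_t: list of range readings; beam_angles: bearing of each beam
    relative to the robot heading."""
    x, y, theta = x_t
    xi, yi = cell_xy
    r = math.hypot(xi - x, yi - y)             # distance robot -> cell
    phi = math.atan2(yi - y, xi - x) - theta   # bearing of cell (wrap ignored)
    # pick the beam pointing closest to the direction of the cell
    k = min(range(len(z_t)), key=lambda j: abs(phi - beam_angles[j]))
    if r > min(Z_MAX, z_t[k] + ALPHA / 2) or abs(phi - beam_angles[k]) > BETA / 2:
        return L0                              # outside the perceptual field
    if z_t[k] < Z_MAX and abs(r - z_t[k]) < ALPHA / 2:
        return L_OCC                           # cell lies on the sensed obstacle
    if r <= z_t[k]:
        return L_FREE                          # cell lies in front of the obstacle
    return L0
```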
Introduction_to_Robotics | Tutorial_1_Probability_Basics.txt | hello and welcome to the first tutorial in the introduction to machine learning course my name is priyatash i am one of the teaching assistants for this course in this tutorial we'll be looking at some of the basics of probability theory before we start let's discuss the objectives of this tutorial the aim here is not to teach the concepts of probability if you are in any great detail instead we will just be providing a high level overview of the concepts that will be encountered later on in the course the idea here is that for those of you who have done a course in probability theory or are otherwise familiar with the content this tutorial should act as a refresher for others who may find some of the concepts unfamiliar we recommend that you go back and prepare those concepts from say an introductory textbook or any other resource so that when you encounter those concepts later on in the course you should be comfortable with them okay to start this tutorial we'll look at the definitions of some of the fundamental concepts the first one to consider is that of the sample space the set of all possible outcomes of an experiment is called the sample space and is denoted by capital omega individual elements are denoted by small omega and are termed elementary outcomes let's consider some examples in the first example the experiment consists of rolling an ordinary die the sample space here is the set of numbers between 1 and 6. each individual element here represents one of the six possible outcomes of rolling a die note that in this example the sample space is finite in the second example the experiment consists of tossing a coin repeatedly until the specified condition is observed here we are looking to observe five consecutive heads before terminating the experiment the sample space here is countably infinite we the individual elements are represented using the sequences of x's and d's where h and p stand for heads and tails respectively in the final example the experiment consists of measuring the speed of a vehicle with infinite position that the vehicle speeds can be negative the sample space is clearly the set of real numbers here we observe that the samples case can be uncountable the next concept we look at is that of an event an event is any collection of possible outcomes of an experiment that is any subset of the sample space the reason why events are important to us is because in general when we conduct an experiment we are not really that interested in the elementary outcomes rather we are more interested in some subsets of the elementary outcomes for example on rolling a die we might be interested in observing whether the outcome was even or odd so for example on a specific roll of a die we let's say we observed that the outcome was odd in the scenario whether the outcome was actually a one or a three or five is not as important to us as the fact that it was odd since we are considering sets in terms of sample spaces and events we will quickly go through the basic set theory notations as usual capital letters indicate sets and small letters indicate set elements we first look at the subset relation for all x if x element of a implies x element of b then we say that a is a subset of b or a is contained in b two sets a and b are said to be equal if both a subset of b and b subset of a hold the union of two sets a and b gives rise to a new set which contains elements of both a and b similarly the intersection of two sets 
gives rise to a new set which contains of only those elements which are common to both a and b finally the complement of set a is essentially the set which contains all elements in the universal set except for the elements present in a in our case the universal set is essentially the sample space this slide lists out the different properties of set operations such as commutativity associativity and distributivity which you should all be familiar with it also lists out the demarcan's laws which can be very useful according to the demarcan's laws the complement of the set a union b is equals to a complement intersection b complement similarly the complement of the set a intersection b is equals to a complement union b complement the de morgan's laws presented here are for two sets they can easily be extended for more than two sets coming back to events two events a and b are said to be disjoint or mutually exclusive if the intersection of the two sets is empty extending this concept to multiple sets we say that a sequence of events a1 a2 a3 and so on are pairwise disjoint if ai intersection aj is equals to null or all i not equals to j in the example below if each of the letters represents an event then the sequence of events a through e are pairwise disjoint since the intersection of any pair is empty if events a1 a2 a3 so on are pairwise disjoint and the union of the sequence of events gives rise to the sample space then the collection a1 a2 and so on is set to form a partition of the sample space omega this is illustrated in the figure below next we come to the concept of a sigma algebra given a sample space omega a sigma algebra is a collection f of subsets of the sample space with the following properties the null set is an element of f if a is an element of f then a complement is also an element of f if a is an element of f for every i belong to the natural numbers then union i equals to 1 to infinity a i is also an element of f a set a that belongs to f is called an f measurable set this is what we naturally understand as an event so going back to the third property what this essentially says is that if there are a number of events which belong in the sigma algebra then the countable union of these events also belongs in the sigma algebra let us consider an example consider omega equals to 1 2 3 this is our sample space with this sample space we can construct a number of different sigma algebras here the first sigma algebra f is essentially the power set of the sample space all possible events are present in the first sigma algebra however if we look at f2 in this case there are only two events the null set or the sample space itself you should verify that for both f1 and f2 all three properties listed above are satisfied now that we know what a sigma algebra is let us try and understand how this concept is useful first of all for any omega countable or uncountable the power set is always a sigma algebra for example for the sample space comprising of two elements h comma t a feasible sigma algebra is the power set this is not the only feasible sigma algebra as we have seen in the previous example but always the power set will give you a feasible sigma algebra however if omega is uncountable then probabilities cannot be assigned to every subset of the power set this is the crucial point which is why we need the concept of sigma algebras so just to recap if the sample space is finite or countable then we can kind of ignore the concept of sigma algebra because in such a scenario we can 
consider all possible events that is the power set of the sample space and meaningfully apply probabilities to each of these events however this cannot be done when the sample space is uncountable that is if omega is uncountable then probabilities cannot be assigned to every subset of 2 to the omega this is where the concept of sigma algebra shows its use when we have an experiment in which the sample space is uncountable for example let's say the sample space is the set of real numbers in such a scenario we have to identify the events which are of importance to us and use this along with the three properties listed in the previous slide to construct the sigma algebra and probabilities will then be assigned to the collection of sets in the single algebra next we look at the important concepts of probability measure and probability space a probability measure p on a specific sample space omega and sigma algebra f is a function from f to the closed interval 0 comma 1 which satisfies the following properties forward clear the null set equals to 0 probability of omega equals to 1 and if a1 a2 and so on is a collection of pairwise disjoint members of f then probability of the union of all such members is equals to the sum of their individual probabilities note that this holds because the sequence a1 a2 is pairwise disjoint the triple omega fp comprising us sample space omega a sigma algebra f which are subsets of omega and the probability measure p defined on omega comma f this is called a probability space for every probability problem that we come across there exists a probability space comprising of the triple omega fb even though we may not always explicitly take into consideration this probability space when we solve a problem it should always remain in the back of our heads let us now look at an example where we do consider the probability space involved in the problem consider a simple experiment of rolling an ordinary die in which we want to identify whether the outcome results in a prime number or not the first thing to consider is the sample space since there are only six possible outcomes in our experiment the sample space here is consists of the numbers between one to six next we look at the sigma algebra note that since the sample space is finite you might as well consider all possible events that is the power set of the sample space however note that the problem dictates that we are only interested in two possible events that is whether a number whether the outcome is prime or not thus restricting ourselves to these two events we can construct a simpler sigma algebra here we have two events which correspond to the events we are interested in and the remaining two events follow from the properties which the sigma algebra has to follow please verify that the sigma algebra listed here does actually satisfy the three properties that we have discussed above the final component is the probability measure the probability measure assigns a value between 0 and 1 that is the probability value to each of the components of the sigma algebra here the values listed assumes that the die is fair in the sense that the probability of each phase is equals to 1 by 6. 
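A small sketch, in Python, of the probability space (Omega, F, P) for the die-and-prime-number example just described; the frozenset encoding and the consistency checks are my own illustration:

```python
from fractions import Fraction

# We only care whether the outcome is prime, so F contains just four events.
Omega = frozenset({1, 2, 3, 4, 5, 6})
prime = frozenset({2, 3, 5})
not_prime = Omega - prime

F = {frozenset(), prime, not_prime, Omega}   # the sigma algebra
P = {frozenset(): Fraction(0),               # probability measure on F (fair die)
     prime: Fraction(1, 2),
     not_prime: Fraction(1, 2),
     Omega: Fraction(1)}

# sanity checks of the defining properties
assert P[frozenset()] == 0 and P[Omega] == 1
assert P[prime] + P[not_prime] == P[Omega]   # additivity on disjoint events
assert all(Omega - A in F for A in F)        # closed under complement
assert all((A | B) in F for A in F for B in F)  # closed under union
print("valid probability space for the prime / not-prime events")
```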
having covered some of the very basics of probability in the next few slides we look at some rules which allow us to estimate probability values the first thing we look at is known as the bonferronis inequality according to this inequality probability of a intersection b is greater than equals to probability of a plus probability of b minus one the general form of this inequality is also listed what this inequality allows us to do is to give a lower bound on the intersection probability this is useful when the intersection probability is hard to calculate however if you notice the right hand side of the inequality you should observe that this result is only meaningful when the individual probabilities are sufficiently large for example if the probability of a and the probability of b both these values are very small then this minus 1 term dominates and the result doesn't make much sense according to the bool's inequality for any sets a1 a2 and so on the probability of the union of these sets is always less than equals to the sum of their individual probabilities clearly this gives us a useful upper bound for the probability of the union elements notice that this equality will all only hold when these sets are pairwise disjoint next we look at conditional property given two events a and b if probability of b is greater than zero then the conditional probability that a occurs given that b occurs is defined to be probability of a given b is equal to probability of a intersection b by probability of b essentially since event b has occurred it becomes a new sample space and now the probability of a is accordingly modified conditional probabilities are very useful when reasoning in the sense that once we have observed some event or beliefs or predictions of related events can be updated on or improved let us try working out a problem in which conditional probabilities are used a fair coin is tossed twice what is the probability that both process result in heads given that at least one of the tosses resulted in the heads go ahead and pause the video here and try working out the problem yourself from the question it is clear that there are four elementary outcomes both tosses resulted in heads both came up tails the first came up heads while the second class came of tails and the other way around since we are assuming that the coin is fair each of the elementary outcomes has the same probability of occurrence equals to one by four now we are interested in the probability that both tosses come up heads given that at least one resulted in the heads applying the conditional probability formula we have probability of a given b equals to probability of a intersection b divided by probability of b simplifying the intersection in the numerator we get the next step now we can apply the probability values of the elementary outcomes to get the result of one by three note that in the denominator each of these events is mutually exclusive thus the probability of the union of these three events is equals to the summation of the pro individual probabilities as an exercise try and solve the same problem with the modification that we observe only the first toss coming up heads that is we want the probability that both tosses resulted in heads given that the first loss resulted in a heads does this change the problem next we come to a very important theorem called the bayes theorem or the bayes rule we start with the equation for the conditional probability probability of a given b is equal to probability of a 
intersection b divided by probability of b. Rearranging, we have probability of a intersection b equals probability of a given b into probability of b. Now, instead of starting with probability of a given b, if we had started with probability of b given a, we would have got probability of a intersection b equals probability of b given a into probability of a. These two right hand sides can be equated to get probability of a given b into probability of b equals probability of b given a into probability of a. Now, taking this probability of b to the right hand side, we get probability of a given b equals probability of b given a into probability of a divided by probability of b. This is what is known as the Bayes rule. Note that what it essentially says is: if I want to find the probability of a given that b happened, I can use the information of probability of b given a along with the knowledge of probability of a and probability of b to get this value. As you will see, this is a very important formula. Here we again present the Bayes rule in an expanded form, where a1, a2 and so on form a partition of the sample space. As mentioned, Bayes rule is important in that it allows us to compute the conditional probability, probability of a given b, from the inverse conditional probability, probability of b given a. Let us look at a problem in which the Bayes rule is applicable. To answer a multiple choice question, a student may either know the answer or may guess it. Assume that with probability p the student knows the answer to a question, and with probability q the student guesses the right answer to a question she does not know. What is the probability that, for a question the student answers correctly, she actually knew the answer to the question? Again, pause the video here and try solving the problem yourself. Okay, let us first assume that k is the event that the student knows the question and let c be the event that the student answers the question correctly. Now, from the question we can gather the following information: the probability that the student knows the question is p, hence the probability that the student does not know the question is equal to 1 minus p. The probability that the student answers the question correctly given that she knows the question is equal to 1, because if she knows the question she will definitely answer it correctly. Finally, the probability that the student answers the question correctly given that she guesses, that is, she does not know the question, is q. We are interested in the probability of the student knowing the question given that she has answered it correctly. Applying Bayes rule, we have probability of k given c equals probability of c given k into probability of k divided by probability of c. The probability of answering the question correctly can be expanded in the denominator to consider the two situations: the probability of answering correctly given that the student knows the question, and the probability of answering correctly given that the student does not know the question. Now, using the values which we have gathered from the question, we can arrive at the answer of p divided by p plus q into 1 minus p. Note here that the Bayes rule is essential to solve this problem because, while from the question itself we have a handle on the value probability of c given k, there is no direct way to arrive at the value of probability of k given c. Two events a and b are said to be independent if probability of a intersection b equals probability of a into probability of b. More
generally a family of events a where i is an element of the integers is called independent if probability of some subset of the events a i is equals to the product of the probabilities of those events essentially what what we're trying to say here is that if the you have a family of events ai then the independence condition holds only if for any subset of those events this condition holds from this it should be clear that pairwise independence does not imply independence that is pairwise independence is a weaker condition extending the notion of independence of events we can also consider conditional independence let a b and c be three events with probability of c strictly greater than zero the events a and b are called conditionally independent given c if probability of a intersection b given c equals to probability of a given c into probability of b given c this condition is very similar in form to the previous condition for independence of events equivalently the events a and b are conditionally independent given c if probability of a given b intersection c equals the probability of a given c this latter condition is quite informative what it says is that the probability of a calculated after after knowing the occurrence of event c is same as the probability of a calculated after having knowledge of occurrence of both events b and c thus observing the occurrence or non-occurrence of b does not provide any extra information and thus we can conclude that the events a and b are conditionally independent given c let us consider an example assume that admission into the mtech program at iit madras and iit bombay is based solely on candidates get score then probability of admission into iit madras given knowledge of the candidates admission status in iit bombay as well as the candidate's gear score is equivalent to the probability calculated simply knowing the candidate's grade score thus knowing the status of the candidate's admission into iit bombay does not provide any extra information hence since the condition is satisfied we can say that admission into the program at iit madras and admission into the program at iit bombay are independent events given knowledge of the candidate's gate score |
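A short sketch verifying the two worked examples from this tutorial numerically: the coin-toss conditional probability by enumerating the sample space, and the Bayes-rule answer p/(p + q(1 − p)); the particular values of p and q are arbitrary:

```python
from fractions import Fraction
from itertools import product

# 1) Fair coin tossed twice: P(both heads | at least one head) should be 1/3.
outcomes = list(product("HT", repeat=2))               # HH, HT, TH, TT, equally likely
at_least_one_head = [w for w in outcomes if "H" in w]
both_heads = [w for w in at_least_one_head if w == ("H", "H")]
print(Fraction(len(both_heads), len(at_least_one_head)))   # 1/3

# 2) Bayes rule for the multiple choice question:
# P(knows | correct) = p / (p + q*(1 - p)), since P(correct | knows) = 1.
def p_knows_given_correct(p: Fraction, q: Fraction) -> Fraction:
    return p / (p + q * (1 - p))

print(p_knows_given_correct(Fraction(1, 2), Fraction(1, 4)))  # 4/5 for p=1/2, q=1/4
```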
Introduction_to_Robotics | Lecture_24_Robot_Architectures.txt | okay, so in the last class we talked about various robot architectures. The robot architecture basically comes from the type of joints and the way the joints are arranged, and we saw that the first three degrees of freedom of the robot are used for positioning the wrist, and these positioning joints decide the architecture of the robot. So if we look at the way in which the first three joints can be arranged, or the type of joints and the way they can be arranged, you will be able to get different configurations, different architectures for the robots, and we found that there are five commonly used body and arm architectures: the polar coordinate, cylindrical coordinate, Cartesian coordinate, jointed arm, and selective compliance assembly robot arm. These are the five commonly used architectures, and we saw that the polar configuration can be obtained by having two rotary joints and one prismatic joint; then we can have the cylindrical body and arm by having R, P and P, that is, two prismatic joints and one rotary joint; and if you have all the three as prismatic joints you will be getting the Cartesian coordinate body and arm assembly. So this was what we saw in the last class. Again, the jointed arm architecture is RRR, so if you have all the joints as rotary joints then we will get the jointed arm body and arm assembly architecture, and another variation of this is basically SCARA, again with two rotary joints and one prismatic joint, but with the axes arranged in a specific way, so you get a different configuration known as the selective compliance assembly robot arm. So we will look at these configurations, then look at their workspace, and then a little bit on the kinematics: whether the kinematics is simple or difficult, why it is difficult, or why some configurations are preferred over the others, by looking at these architectures. So first let us look at the polar coordinate body and arm assembly. As we mentioned, it is R, R and P. Suppose this is the base and you have a coordinate frame like this; then you have one joint which is the prismatic one, which actually goes in this direction, and then we can have one axis which rotates with respect to this, so it can go up and down in this direction, and then in and out, so you can get R and theta, or R and phi, like this. Then, if you rotate this with respect to the vertical axis, you will be able to cover the three-dimensional space. So this motion and this rotation will give you the planar positioning within a plane, and out-of-plane motion is possible because of the rotation with respect to the base. This is the way this RRP arm is arranged, and the structure will look like this: you have this motion, the prismatic joint going out or in, so you can get a motion like this, and then the rotation about this axis, the horizontal axis. So you have a vertical axis rotation and a horizontal axis rotation; the horizontal axis rotation gives you a planar motion, and the vertical axis rotation gives you the out-of-plane motion also. This way you will be able to get a three-dimensional workspace in polar coordinates, so R and theta will be
the positioning that you can use and then we will actually dig you that Y dimension R so I mean XYZ coordinate can be obtained by these three joint so that if this is the risk point so assume this is whines and we can see that the risk point can actually be placed in the three-dimensional space using the R R and P configuration so that is basically the polar coordinates body an awesome assembly not a very popular configuration but there's a specific application you may find this useful we'll look at that okay know the workspace or we call this a work envelope of the robot because Bart space has got a little bit more specific meaning but the envelope is the the maximum envelope that the robot can actually reach that is basically known as the mark work and well app so the gross works envelope of a robot is defined as the locus of points in three tons of space that can be reached by the wrist so the points where the wrist can reach in three dimension space is generally known as the work envelope some time work and element workspace are used in the same context but in general work envelope is the maximum that you can reach overall in the [ __ ] in space work space may be a subset of the over on work envelope yes that depends on the join limits and other things so without considering those drawing limits you can say this is the gross work envelope we can think of a robot and this a bunt envelope is basically depending on the positioning axis only because we are talking about the wrist and the disposition is decided by the major axis and therefore work and Milaap is decided by the major axis I mean the type of joints in the axis I mean type of joints available and the orientation or the way in which the joints are organized okay so it shows the the polar workspace that you can actually achieve this kind of robots now if you look at the this is some of the commercial robot which actually uses a polar workspace so you can see that all the motors will be somewhere at the base and even for this linear motion also the motor will be somewhere here and this can actually reach the base also the and can actually reach the base and then pick up something and then do some tasks so that is why that is one of the advantages of this robot and since are the motors are summer at the base you can actually have large works large payload also because these motors need not carry the load of other joint so this is one example for a polar workspace robot not very popular not you don't see them many in the industry so these are the three parameters that actually defines the motion of the robots so you have a linear motion you have two rotations theta and V so the XY is see the position that the risk can reach is a function of these three parameters theta E and Rho so this is the the relationship that you can get from the with the quad Cartesian coordinates and the join parameter so we call this as the joint parameters because these are the variables that actually changes the position of the wrist and this relationship is not very straightforward compared to other robots that's why we call this a little bit of complex kinematics because you cannot directly tell what will be the XYZ position when you change many one of this you need to have a proper kinematic model for that one and other things like last which from the central supports went to reach objects on the floor and motor is one and two are very close to the base so it can actually be positioned in a proper way you don't get too much of mechanical design issues 
with this kind of so that's about the polar coordinate robots now if we so this was basically our RP now if you change this to R P P that is two prismatic joint and one rod rejoins then you will be getting this is a cylindrical coordinate or silty Bodie and um Robert or a cylindrical coordinate of the cylindrical body and a massive wave of motion so it consists of a vertical column and horizontal motion so you can actually have this kind of motion and this kind of emotion so you have the the arm which can be moved out in and out and vertical column relate you to which in a massively smooth up or down so you can have up-and-down motion and then in and out motion using this and then you can have a rotation with respect to this axis also so the these two these two P will be set the motion in this plane and the rotation will be said how it goes out of the plane Oz so that is the way how this cylindrical podium assemblies organized so you have one rotary joint and two prismatic joints so this prismatic joint will take you in this I suppose this is the X and z axis so the XZ plane motion is decided by these two and the y coordinate will be decided by the rotation with respect to this axis so this is the weekend cottage so as the name suggests the workspace is a earth is cylindrical but of course because of the mechanical requirements you won't be able to get the complete interests workspace the workspace will be having a constant dick to constant exchanges and in between the space will be the workspace of same taqueria so can you draw the work and we'll up for this what will be the work and will up for these robots so this is a typical cylindrical robots in industry what you can see so the workspace will be like a cylinder so you'll be having a bigger workspace I mean outer end my lab and then there will be a leaner and melih because of the mechanical construction you need to have joy and so you cannot reach the center so it'd be like this and this area will be the workspace that can or the envelop that you can have and because of this space which is known as a cylindrical robot okay so this will be the workspace that you can see so you will be having gifts here and then here because this distance is decided by the team on this motion so what is the race that you can have that decides this and this height is decided by the motion that you can have in this axis so these two are decided by these two prismatic joints and then assuming that it can go all the way 360 degrees you will assume that job but not all the robots will be able to go 360 degrees there will be mekin join limitation so the actual workspace will be a subset of this so what are the parameters that will decide the XYZ here function of two linear motions right so we call it as Rho and Z and then an angle respect to Z so these three parameters will decide the position of the wrist again not a straightforward relationship these two can be decided directly compared to the previous one this is simple because the x position and z position can be easily obtained by these two lines or depending on how much this has moved you will be able to get this and then the angle will be decided by this one so Y will be decided by the rotation and this xn that can be easier n that is the way how it is defined so the this is an example for a cylindrical workspace robot so you have three parameters theta is Theta n Rho and kinematic model is simple compared to the previous one and of course then you are using hydraulic actuators you will be able to get 
large forces because it's all vertical and the horizontal motions and per column is built with sweet tears requests guides for protection because the linear joints are always a troublesome always troublesome in especially in industry environment they get corroded or get damaged easily because you can see there will be protection on this surface so these are all provided with the bellows to protect the slides from getting jammed due to some Enron mental pollutants or something like that and there should be very smooth always otherwise you'll get lot of motion difficulties that is one of the difficulties with the this matic joint so you need to have proper protection for them otherwise it will get damaged okay the next one is basically the Cartesian coordinates called s P P and P so this is the Cartesian robots are the joints are prismatic so you can see there is a motion like this I a motion like this and emotion like this so all the coordinates X Y & Z directly related to the movement of these robots so it's very easy to find out the position of that wrist suppose this is the wrist we can easily find out the wrist by looking at the motion of each joint so it's a very simple kinematic model and exactly and actually there is not much of a modeling here the position of X is decided by one join Y is decided by another joint that is defined inside it by the other joint and therefore we can very easily find out the position of the tool so it's a very simple kinematic model so this is one example for a industrial robot with are the three prismatic joints up to the positioning to the up to the wrist you any questions sin may you have some questions okay yeah so so as you can see it is a very simple design because what started we use this kind of things very well all so far XYZ tables you use in many of the equipment where you want to position it in the three-dimensional space go for this kind of mechanical strength configuration so one so one advantage is that it is very easy to visualize and the control the control becomes very easy because you don't need to have any kinematic model just control individual motors forward or position you need the certainty is one disadvantage is that as in the previous case are these need to be protected are the slides need to protected otherwise it will actually get jammed or if they start malfunctioning I can't be able to move because of some kind of pollutant so are there other were depth is getting into this slide another disadvantage okay whatever it will be another disadvantage of this one can you tell me workspace is discontinuous why okay okay builds workspace later but any other thing lords okay yeah I know it will be an issue because you have if you have a large reach then you need to have bigger very load carrying then this water will be of observeit it has to be held by DS and it has to be held by that may be one reason yeah another major problem with this that the actual work space available over the works and envelope available will be something like this but the total robot says size will be much bigger than the size of the work space that you can get because each one has to move so you need to have a very large size of the robots to get a smaller workspace or if you increase the work space then the robot cells also keeps on increasing so it is not it's unlike other configurations this one has got the largest size of the robot for a given workspace so that is one disadvantage so it's not commonly accepted unless you have a large space available and 
if you have you can see in some cartesian robot robot will be very big and the work space will be somewhere in the center a small area if you can afford to that kind of thing and you don't yeah and that is what I mean even in that case the work space available will be very small compared to the size of the robot so that is the way how normally people make so they will be making a structure like this and then you will be having the slides on these one slide here and another slide here and then one up and down so the workspace will be somewhere in the center you will be able to get a workspace here in what the full workspace will be able to get so the overall size of the robot increases when you go for a prismatic joint so that is why this is not preferred normally most of the individual robots won't go for this kind of a configuration unless there is a specific requirement which warrants the use of this kind of constriction so that is the join that are sorry the prismatic joint prismatic robots so is a Cartesian workspace robots are represented robots so the workspace is Cartesian and you can actually find out the Part B word space will be something like this and of course there may be depending on the construction will be having map the workspace will be within this and you can actually get XYZ position depending on the the stroke length that is available for that so depending on where actually you place the workspace you'll be able to get a variation of this as the workspace so X Y Z well so directly so you don't need to have any other parameters here so the linear motions X Y depends that is actually the position of the wrist itself so you don't have any particular relationship here so simple kinematic model we did structure and then robust we carry large working volume the working volume is smaller than the robot volume requires free area between the robot and the object to manipulate gets protection so all those things are the disadvantages advantages are it is very easy to visualize and and program so that is the advantage all right so the next one is the jointed arm architecture which is the most commonly used architecture where we have our joints as rotary joints so this is the this point if this is the risk point this is the these points these are that is trying so easy that is point so you have one rotation here and the rotation here and then the rotation here and so you have three rotations three rotary joints so the first two rotary joint so these two rotary join these two rotary joints will decide about this plane motion within this plane decide and this one will be said out of the plane so if this is the X Z plane then exit is decided by these two joints r3 and r2 the positioning within this plane is decided by this and then out of this plane the y coordinate is decided by the this rotation so this is the way how the robot joints are placed so you will be able to get are the XYZ coordinates within the for cannula and most of the industrial robots follow this kind of a configuration the first three joints or depending on the robot this can actually the axis may be difference instead of this rotation can have a rotation here itself and then how this axis rotation of rotation axis can be different orientations but the first three will be rotary joints so that you will be able to position the wrist in X Y Z so that is the joint it am architecture so there are many names joined at arm anthropomorphic arm and etcetera etcetera but this is the configuration that we human also has 
We too have three such joints, though in our arm the axes are not separated out like this, and we use them to position the wrist anywhere in three-dimensional space: I can bring my wrist to this plane or to that plane using these three joints. The same configuration is used in industrial robots. So this is the general architecture: three rotary joints, three rotation axes. Now, what will the workspace be? Someone says the work envelope can even intersect the robot itself; what do you mean by that? Okay, you can see that the first two joints can sweep the arm up like this, but each joint has its own limits, so when one joint reaches its limit the next one takes over, and you end up with a fairly complex work envelope, not an exact circular arc but a composite outline. It need not be radically different from the previous configurations, but the exact shape depends on the joint limits and the mechanical structure, so in general you can only say that the work envelope lies in this kind of region. Not all of it is usable: most robots have the bulk of the workspace in front, and at the back there is only a very limited envelope. I will show you a typical work envelope of an industrial robot, but this is the general shape you can expect for an articulated robot, and we have a few robots with this architecture in our lab as well. This figure shows the work envelope for one particular robot with its dimensions: as the arm comes up it reaches the end of one joint's range, then the next range starts, and so on, and you get this composite envelope of reachable points. Most robots are designed so that the largest, most useful workspace is in the forward region, where the operations are actually carried out; not many operations happen behind the robot. Depending on the joint ranges there may also be pockets the arm cannot reach, so the reachable workspace may be smaller again, but every designer tries to make the forward region accessible without many unreachable areas. As long as you get a good workspace in that region, the design is considered good for the application, even though the full envelope is rarely used except for moving objects around. Now, how can we identify the workspace of a robot?
Suppose we know the relationship between the Cartesian coordinates X, Y, Z and the joint variables, say the three joint angles theta, phi and psi. Then for every value of theta, phi and psi we can find the position that the wrist reaches, and the set of all those points is the workspace. So once we have developed this relationship we can write a simple program that scans theta, phi and psi between their minimum and maximum limits and computes all the points in 3D space that the robot can reach; that is the reachable workspace (a small sketch of such a program is given below). A question was asked about this figure: although it is labelled end-effector, the envelope shown is actually for the wrist, and it is drawn in one vertical plane, say the X-Z plane; the limits shown are the limits of the shoulder and elbow joints in that plane. How much you get in Y depends on how much you can rotate about the vertical base axis: the whole planar workspace is swept around that axis, and the sweep gives you the volume. Again, it may not be a full 360 degrees; there may be a limit on the base rotation as well, where it simply stops. All right. So this is the most popular configuration, but it has a somewhat complex kinematic model: it is very difficult to predict the X, Y, Z position just by knowing the joint angles, because the relationship is not direct, and that is exactly the kinematic model we will develop, relating the position of the tip to the joint angles. For a given robot size, the largest working volume is obtained with this angular configuration and the smallest with the prismatic configuration, and it can reach over and under an object, as I will explain later. The drawbacks are the complex kinematic model and the fact that straight-line movements are difficult: if I want the tip to move from one point to another along a straight line, even within a plane, I need to control all three joints in a coordinated way, so compared with the other configurations it is a bit more involved to get linear motion; possible, but all three joints have to be controlled together.
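To make the "simple program" mentioned above concrete, here is a minimal sketch that sweeps the three joint angles over their limits and collects the reachable points. The forward-kinematics expression is the textbook one for an elbow-type (anthropomorphic) arm, and the link lengths and joint limits are assumptions for illustration; for a real robot they come from its own kinematic model.

```python
import numpy as np

# Assumed geometry (illustrative, not from the lecture's robot)
D1, A2, A3 = 0.4, 0.5, 0.4          # base height and link lengths [m]
LIMITS = [(-np.pi, np.pi),          # q1: base rotation  [rad]
          (-np.pi / 2, np.pi / 2),  # q2: shoulder       [rad]
          (-2.0, 2.0)]              # q3: elbow          [rad]

def fk_articulated(q1, q2, q3):
    """Wrist position of a standard elbow-type 3R arm (assumed model)."""
    r = A2 * np.cos(q2) + A3 * np.cos(q2 + q3)       # radial reach in the arm plane
    z = D1 + A2 * np.sin(q2) + A3 * np.sin(q2 + q3)  # height in the arm plane
    return np.array([r * np.cos(q1), r * np.sin(q1), z])

def reachable_workspace(n=25):
    """Scan each joint between its limits and collect reachable wrist points."""
    pts = []
    for q1 in np.linspace(*LIMITS[0], n):
        for q2 in np.linspace(*LIMITS[1], n):
            for q3 in np.linspace(*LIMITS[2], n):
                pts.append(fk_articulated(q1, q2, q3))
    return np.array(pts)             # point cloud approximating the workspace

points = reachable_workspace()
print(points.shape, points.min(axis=0), points.max(axis=0))
```

Plotting a cross-section of this point cloud in the X-Z plane reproduces the kind of composite work envelope sketched in the lecture.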
Okay, let me also finish this one. SCARA is basically an RRP robot: two rotary joints and one prismatic joint. SCARA stands for Selective Compliance Assembly Robot Arm, and it is a very commonly used robot, because many pick-and-place and assembly operations need a little compliance in one direction; the arm should be able to give slightly within a particular plane, so that you can, for example, insert a peg into a hole. The peculiar design feature is that all the joint axes are vertical: every axis of rotation points vertically, and the prismatic axis is vertical too, and because of that the arm is compliant in the horizontal plane, which is exactly what you want for a vertical insertion task. Vertical axes are used for the shoulder and elbow joints so that the arm is compliant in the horizontal direction while staying stiff vertically; that is the main advantage of this configuration, otherwise it is like a normal RRP robot. Another advantage of having all axes vertical is that the gravity load of the arm and the motors is not reflected onto the rotary actuators: those motors move the arm only in the horizontal plane, so they do not have to work against the weight, which is carried by the structure itself. The workspace cross-section is almost an annular, cylindrical region, except that because of the elbow joint you get a small curved boundary at the end of the work envelope. These are some typical SCARA robots; there are plenty of examples in the literature and on the market, and we also have one in our lab that we will demonstrate in one of the classes. (By the way, there is a small mistake on this slide: one of the joints is marked R where it should be P; please correct it, it is not a rotation.) And again, what I am saying is that the gravitational load does not act on the rotary motors; only the accelerations have to be provided by them, because the weight is taken by the structure, which is designed to carry it. Each motor therefore does not have to support the gravity load of the others, and that is why the arm can move very fast: very high-speed assembly robots are built with this architecture, with only one vertical linear axis. A small sketch of the SCARA position kinematics is given below.
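Since the SCARA arm is an RRP chain with all axes vertical, its position kinematics is simple. The sketch below uses the standard planar two-link expressions with assumed link lengths and column height, so the numbers are illustrative rather than taken from any robot mentioned in the lecture.

```python
import numpy as np

A1, A2, D0 = 0.35, 0.25, 0.40   # assumed link lengths and column height [m]

def scara_fk(q1, q2, d3):
    """Wrist position of a SCARA (RRP) arm.

    q1, q2: shoulder and elbow rotations about vertical axes [rad]
    d3:     downward travel of the vertical prismatic joint [m]
    The two rotations act only in the horizontal plane; the prismatic
    joint alone sets the height, which is why gravity never loads the
    rotary motors.
    """
    x = A1 * np.cos(q1) + A2 * np.cos(q1 + q2)
    y = A1 * np.sin(q1) + A2 * np.sin(q1 + q2)
    z = D0 - d3
    return np.array([x, y, z])

print(scara_fk(np.deg2rad(30), np.deg2rad(45), 0.10))
```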
Okay, we will stop here; next time we will talk briefly about the wrist configurations as well, how the wrist can be assembled, and then we will move on to the kinematic relationships. Thank you. |
Introduction_to_Robotics | Lecture_82_Extended_Kalman_Filter.txt | Let us recall the assumptions we made for the Kalman Filter. The Kalman Filter is a Bayes filter algorithm, and the crucial assumptions were that the state transition and measurement models are both linear: xt is a linear function of xt minus 1 and of ut, and zt is a linear function of xt. Those are the linearity assumptions. The second assumption was that on top of these linear models there is additive Gaussian noise, in both the transition model and the measurement model. That is what allowed us to keep the belief distribution Gaussian, and to localize to a point with some amount of noise around it. I have mentioned that we use Kalman Filters in practice because these assumptions are often valid enough to get away with, but sometimes we need to do something more to make the Kalman Filter work, and in this lecture we will look at one such extension. Here is a very simple example where the linearity assumption does not hold. Suppose a robot is moving on a circle; it could be moving at a constant speed, but the dynamics of that movement cannot be described by a linear function. However, at any point on the circle I might approximate the instantaneous motion by the tangent to the circle at that point, and assume the robot moves along that tangent linearly. Of course, at every instant I have to keep changing the direction of the tangent, but for that one instant I can treat the motion as linear for a very short duration. The whole movement is not linear, but at each instant it can be approximated by a linear function, and this is the idea behind what we call the Extended Kalman Filter. It overcomes the linearity assumption by first assuming that the next-state probability and the measurement probability are governed by arbitrary nonlinear functions. So we assume that xt = g(ut, xt minus 1) + epsilon t, where g is some arbitrary nonlinear function and epsilon t, as before, is additive Gaussian noise. The noise is still Gaussian and additive, but instead of the dynamics being given by matrices A and B acting linearly on x and u, they are given by an arbitrary nonlinear function g of xt minus 1 and ut. Similarly, for the measurement model we assume that zt = h(xt) + delta t, where h is some arbitrary nonlinear function and delta t is again additive Gaussian noise, as in the Kalman Filter. Is that clear? So we have g, an arbitrary function that models the next-state probability, and h, an arbitrary function that models the measurement probability.
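As a concrete and purely illustrative example of what such nonlinear g and h might look like (these particular forms are an assumption of mine, not something specified in the lecture), consider a planar robot driven by translational and rotational velocities, observing the range and bearing to a known landmark:

```python
import numpy as np

DT = 0.1                            # assumed time step [s]
LANDMARK = np.array([2.0, 1.0])     # assumed known landmark position [m]

def g(u, x_prev):
    """Nonlinear motion model: state x = (x, y, theta), control u = (v, omega)."""
    v, w = u
    x, y, th = x_prev
    return np.array([x + v * DT * np.cos(th),
                     y + v * DT * np.sin(th),
                     th + w * DT])

def h(x):
    """Nonlinear measurement model: range and bearing to the landmark."""
    dx = LANDMARK[0] - x[0]
    dy = LANDMARK[1] - x[1]
    return np.array([np.hypot(dx, dy),
                     np.arctan2(dy, dx) - x[2]])
```

The trigonometric terms make both models nonlinear in the state, which is exactly why the plain Kalman Filter cannot be applied to them directly.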
Now, what we are going to do next is, just like with the circular motion, try to make a linear approximation of these functions at a specific point. It might not be a great approximation of the whole function, but at that point it is a good one. The main reason we have to linearize is the following. With arbitrary functions g and h I can still write the update equations; they just become complicated. The bigger problem is that the belief distribution, which starts off as a Gaussian with parameters mu naught and Sigma naught, can no longer be assumed to stay Gaussian after an update: with an arbitrary nonlinear function it will almost certainly not be Gaussian, and we lose the convenience of only having to update mu and Sigma at every iteration. One way around this is exactly what we just described: we create a linear approximation of g and h, and once we use that approximation in the updates it becomes just like applying a Kalman Filter, except with a different linear function each time, obtained by approximating the nonlinear model at the current estimate. The actual dynamics of the robot are still governed by the nonlinear equations, but when I do the belief update I pretend, for a moment, that they are linear, so that the belief stays a Gaussian distribution. So the reason we linearize is to do the update, and the reason we went nonlinear in the first place was that the linear assumption was too restrictive for realistic dynamics; the Extended Kalman Filter tries to get the best of both worlds. There are numerous extensions of the Kalman Filter, hundreds in fact, each catering to specific assumptions about the robot, the motion model, the world it operates in, the kind of belief distributions needed, and so on, and almost all of them share the idea of additive Gaussian noise; that is what makes them a family of Kalman Filters. In this course we will only look at the Extended Kalman Filter, because it is a very useful and simple-to-understand extension, and you are free to look at other variants as you go along. So let us look at how the linearization is done. I assume some of you are familiar with the Taylor series expansion of a function: if I have a function g(ut, xt minus 1), I can expand it around a point, and in this case I am expanding it at mu t minus 1.
I evaluate the function at mu t minus 1 and could keep adding terms with higher powers of the difference; here I approximate it with just the first term. If you remember the Taylor series, the full expansion is g(ut, mu t minus 1) plus g prime(ut, mu t minus 1) times (xt minus 1 minus mu t minus 1) plus higher-order terms in (xt minus 1 minus mu t minus 1) squared and so on, but we keep only the first-order part. The picture shows how this works: the curve is sin theta, and the red line is the Taylor expansion of sin theta around 0 with just this one term. It is a straight line, but it is a pretty decent approximation of the function close to 0; it deviates as you move away, but exactly at 0 it is equal. That is all we want: a linear approximation that is good very close to the point at which we are approximating. The higher-order curves, the yellow one, the orange one and so on, show what happens as you add more terms to the Taylor series and get closer and closer to the full function, but we are not interested in that; we only need a good linear approximation at the point of interest. Coming back, I will use g prime, which is fairly standard notation: g prime(ut, xt minus 1) is the partial derivative of g with respect to x, not with respect to u, that is, the derivative of g(ut, xt minus 1) with respect to xt minus 1. Now, remember we approximate around a certain point; in the picture we approximated sin theta around theta equal to 0. So around which point should I approximate g? I need an approximation of the dynamics at time t minus 1 so that I can predict the state at time t: a linear approximation of g that I can apply to the state at t minus 1 to compute the state at t. And according to our belief, the most likely state at time t minus 1 is mu t minus 1; remember the belief is a Gaussian with mean mu and covariance Sigma, and the most likely state under a Gaussian belief is its mean. So I take the most likely state, which is mu t minus 1, and do the linear approximation at that point.
What that really means is that I evaluate the Taylor series at mu t minus 1 but keep only the first term. So the approximation is: g(ut, xt minus 1) is approximately g(ut, mu t minus 1) plus g prime(ut, mu t minus 1) times (xt minus 1 minus mu t minus 1), and this approximation is good near mu t minus 1. Notice that the first quantity, g(ut, mu t minus 1), is a constant: ut is no longer a variable, because it is already given to me when I do the belief update, I am not using g to pick ut, and mu t minus 1 is a specific value, the most likely state from my previous belief. So once I plug in specific values for u and x, I can actually compute g at that point; it is a specific point in the state space, no longer a function. The second part is the first-order Taylor term: the derivative of g with respect to x, evaluated at mu t minus 1, multiplied by the difference (xt minus 1 minus mu t minus 1); this is the part that gives a functional form in xt minus 1. Note the notation: g prime(ut, mu t minus 1) means I take the derivative with respect to xt minus 1 and then substitute mu t minus 1 into it, so the value of that gradient is also a constant. So I have converted my nonlinear function g(ut, xt minus 1) into a constant plus a constant matrix times (xt minus 1 minus mu t minus 1), an expression that is linear in xt minus 1. All the higher powers of xt minus 1 that might have been in g have vanished because of the Taylor approximation. Remember, it is an approximation, and it is good around mu t minus 1 for a very short interval, not throughout the function; I hope the linear approximation is clear. Now that we have it, we can start writing the update equations. I define capital Gt as g prime evaluated at (ut, mu t minus 1); this is a matrix of size n by n, where n is the dimension of the state, because I take the partial derivatives of g with respect to all components of x. Using this notation I rewrite the approximation from the previous slide as: g(ut, xt minus 1) is approximately g(ut, mu t minus 1), a specific evaluation of the function, plus Gt, which is the Jacobian, times (xt minus 1 minus mu t minus 1).
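In practice Gt can be derived analytically from g, but when that is tedious a finite-difference approximation is often good enough. Here is a minimal sketch of that idea; the motion model g is the illustrative one assumed earlier, and the step size eps is an arbitrary choice.

```python
import numpy as np

def numerical_jacobian(g, u, mu, eps=1e-6):
    """Approximate G_t = dg/dx evaluated at (u, mu) by finite differences.

    mu must be a numpy array so that it can be perturbed component-wise.
    """
    n = len(mu)
    g0 = g(u, mu)
    G = np.zeros((len(g0), n))
    for j in range(n):
        dx = np.zeros(n)
        dx[j] = eps
        G[:, j] = (g(u, mu + dx) - g0) / eps   # column j: sensitivity to x_j
    return G

# Example with the unicycle-style g sketched earlier:
# G_t = numerical_jacobian(g, u=(1.0, 0.2), mu=np.array([0.0, 0.0, 0.3]))
```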
I presume all of you are familiar with the Jacobian, you must have seen it multiple times already in the course; so this is the Jacobian of g with respect to the state, times (xt minus 1 minus mu t minus 1). The next-state probability, which is our motion model p(xt | ut, xt minus 1), can then be approximated as a Gaussian distribution. Remember that the next state was given by xt = g(ut, xt minus 1) plus Gaussian noise epsilon t, and from the previous lecture epsilon t is a Gaussian with zero mean and covariance matrix Rt. Since the noise has zero mean, the mean of the next-state distribution is g(ut, mu t minus 1) plus Gt times (xt minus 1 minus mu t minus 1), and the covariance is Rt, just as before. For comparison, in the Kalman Filter the corresponding mean was A xt minus 1 plus B ut, with covariance Rt. Similarly, we can approximate the observation function h(xt) as h(mu t bar) plus a correction term. Why mu t bar here and not mu t minus 1? Remember that we use the measurement model only with bel bar, not with bel: by the time we apply h, the prediction step has already updated the most likely location from mu t minus 1 to mu t bar, so I linearize h at mu t bar, because that is where I actually use it, when going from bel bar of xt to bel of xt. So h(xt) is approximately h(mu t bar) plus h prime(mu t bar), where h prime is the partial derivative of h with respect to x, times (xt minus mu t bar). This is exactly the Taylor expansion we used for g, just taken around mu t bar, and just as we wrote g prime as Gt, we write h prime as capital Ht, the Jacobian of the measurement function. So h(xt) is approximately h(mu t bar) plus Ht times (xt minus mu t bar). The measurement probability p(zt | xt) can then be approximated as a normal distribution with mean h(mu t bar) plus Ht times (xt minus mu t bar) and covariance Qt, the covariance of the noise term delta t; remember delta t is additive Gaussian noise with mean zero and covariance Qt. So the new mean, instead of being Ct times xt as we had earlier, is h(mu t bar) plus Ht times (xt minus mu t bar), with Qt as the covariance as before.
So this second part is almost like the Kalman Filter, except that instead of the matrices A, B and C we use Gt and Ht, which is what the linearization gives us. Once we have these linearized versions of the motion and measurement models, we can rewrite the Extended Kalman Filter algorithm to look very much like the original Kalman Filter algorithm. Again, lines 2 and 3 are the prediction step that goes from bel(xt minus 1) to bel bar(xt): as before, bel(xt minus 1) is given by mu t minus 1 and Sigma t minus 1, I have the action ut and the observation zt, and I use them to compute bel bar(xt), that is, mu t bar and Sigma t bar. The expressions look very similar to what we had earlier, except that instead of A and B we now use g and Gt; and note that in the mean update I use the exact nonlinear g, mu t bar = g(ut, mu t minus 1), plugging mu t minus 1 into the function. Likewise, the Kalman gain is computed in a similar fashion using Ht, and lines 5 and 6 compute bel(xt) from bel bar(xt), again with computations very similar to the Kalman Filter; a minimal sketch of one such update step is given below. That is the Extended Kalman Filter algorithm. It is more powerful than the Kalman Filter in the sense that it uses nonlinear models for both the motion and the measurement, but it retains the convenience of the Kalman Filter, because I can assume the belief is Gaussian and it stays Gaussian through the updates. So that is basically the Extended Kalman Filter. We can stop here. |
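Here is a minimal sketch of one EKF update step as described in the lecture above, written for clarity rather than efficiency. The functions g and h and their Jacobian routines are assumed to be supplied by the user (for example, the illustrative ones sketched earlier), and R and Q are the motion and measurement noise covariances.

```python
import numpy as np

def ekf_step(mu, Sigma, u, z, g, G_fn, h, H_fn, R, Q):
    """One Extended Kalman Filter iteration.

    mu, Sigma  : parameters of bel(x_{t-1})
    u, z       : control applied and measurement received at time t
    g, h       : nonlinear motion and measurement models
    G_fn, H_fn : return the Jacobians of g and h at the linearization points
    R, Q       : covariances of the additive Gaussian noises
    """
    # Prediction (lines 2-3): bel(x_{t-1}) -> belbar(x_t)
    mu_bar = g(u, mu)                       # exact nonlinear g at the mean
    G = G_fn(u, mu)                         # Jacobian of g at (u, mu_{t-1})
    Sigma_bar = G @ Sigma @ G.T + R

    # Correction (lines 4-6): belbar(x_t) -> bel(x_t)
    H = H_fn(mu_bar)                        # Jacobian of h at mu_bar
    K = Sigma_bar @ H.T @ np.linalg.inv(H @ Sigma_bar @ H.T + Q)
    mu_new = mu_bar + K @ (z - h(mu_bar))   # innovation uses exact h at mu_bar
    Sigma_new = (np.eye(len(mu)) - K @ H) @ Sigma_bar
    return mu_new, Sigma_new
```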
Introduction_to_Robotics | Lecture_91_Velocity_Motion_Model.txt | Hello everyone, and welcome to week 11 of the Introduction to Robotics course; we are continuing with the CS aspect of the course. So far we have been looking at recursive state estimation: we discussed the various assumptions we make on the belief model, the motion model and so on, and derived different classes of filters. This week we will look at another set of components needed to implement those algorithms: how to come up with motion models, how to come up with measurement models, and the notion of a map that specifies the environment in which we expect the robot to operate. The motion model gives us, if you remember, the probability of xt given xt minus 1 and ut, where xt minus 1 is the state at time t minus 1, ut is the action performed at time t, and xt is the resulting state; this is the model used in the prediction step of the filter algorithms. The measurement model tells us the probability of making a specific measurement zt given that the robot is in state xt. Remember that xt, zt and even ut can be vectors; they are not necessarily scalars, even though many of the illustrations we saw were scalar just to keep the presentation simple. A map, like the sample you can see here, is some representation of the robot's environment that we use in the filtering step and, as we will see later, in motion planning, when we try to figure out how to get from point A to point B in a particular environment. Most of the maps we will look at are 2D maps, two-dimensional projections of the workspace rather than full 3D maps, and these are usually sufficient for motion planning and for the forms of localization that we will discuss later, though you could have more complex maps depending on how complicated your workspace is. So the plan is: first I will talk about certain kinds of motion models, then about maps, and then about measurement models. As with the filtering case, I will present very specific examples, and depending on your application you might want more or less complicated models, which you will have to pick up as you go along. The first thing I am going to talk about are what are called velocity motion models. A velocity motion model assumes the following: the robot state is described by a position, the x and y coordinates of the robot, and an orientation theta; this is the standard 2D robot pose representation. And I am going to assume that the control is given through rotational and translational velocities. Control could be specified in a multitude of ways: torques to motors, final displacements, or, as we see here, the velocities that result from the commands we apply to the drives. So what velocity does the robot move with?
I am going to assume that you have a translational velocity vt and a rotational velocity omega t, and together these constitute your action at time t. The convention we will follow, though we will not dwell on it, is that positive rotational velocities induce a counter-clockwise rotation and negative rotational velocities a clockwise rotation, and a positive translational velocity vt corresponds, as usual, to forward motion: whatever orientation the robot is facing, it moves forward along that line. What we will do now is look for a closed-form algorithm for computing the probability p(xt | ut, xt minus 1). The algorithm accepts as input an initial pose given by x, y and theta, a control ut denoted by v and omega, and a potential successor state xt given by x prime, y prime and theta prime, and it returns the probability that (x prime, y prime, theta prime) is the resulting state when actions v and omega are applied in state (x, y, theta). In other words: how likely is it that if I start from (x, y, theta) and apply v and omega to the robot, I end up at (x prime, y prime, theta prime)? That is what the algorithm computes. The other thing we will assume is that the interval from t minus 1 to t is always a fixed duration delta t, regardless of the time step, whether from 1 to 2 or from 15 to 16, and this keeps the model a little simpler. Normally we would have seen this kind of forward kinematics model as deterministic dynamics: given xt minus 1 and ut you solve some differential equations and find xt exactly. What we are doing here instead is turning this into a probabilistic model, to account for all the nondeterminism present in a real system, which lets us plug it directly into the filters from the previous weeks. So here is the algorithm; it looks a little complicated, and I will walk you through it slowly. Before that, one observation: suppose I have an initial pose (x, y, theta) and I move with constant velocities v and omega throughout the duration delta t, starting from location (x, y) with heading theta and keeping the translational and rotational velocities constant. What does that mean? It means I will be moving along an arc of a circle centred at some point; say the centre has coordinates x star, y star, and I mark out the segment of the circle over which I move. So I start here with some orientation theta and I end there with some other orientation.
So the robot starts here and ends there, with orientation theta at the start and theta prime at the end. Because I move with constant velocities, I describe an arc of a circle with some centre, and that is what the algorithm captures. The first thing we have to find is the centre of that circle. But notice that this whole idea of moving along a circle holds only as long as there is no noise; if there is noise, the motion will not be exactly this, and we will have to work out what the actual motion was as well. For now, let x and x prime be the starting and ending x coordinates of the motion, y and y prime the starting and ending y coordinates, and theta the orientation I was originally facing. The centre of the circle I am following is then given by x star and y star through the expression on the slide; you can look at the book to see how it is derived, but the essential idea is this: if I start at (x, y) and end at (x prime, y prime), then the perpendicular drawn at the midpoint of those two points must pass through the centre of the circle. Solving that condition gives the intermediate quantity mu, and plugging mu in gives x star and y star. The distance from (x star, y star) to the starting point, or equally to the ending point, gives the radius r star of the circle I am traversing, and the angle subtended at (x star, y star) by (x, y) and (x prime, y prime) gives the total angle delta theta traversed from the beginning. Given that, I can compute v hat and omega hat. What are v hat and omega hat? They are the actual velocities I travelled with: the v and omega given as part of ut are the velocities we applied, but due to various sources of noise in the world the robot actually travels at v hat and omega hat. How do I get them? The translational speed is the rate of change of angle times the radius: r star times delta theta is the distance travelled along the arc and delta t is the time taken, so v hat = r star times delta theta divided by delta t is the actual translational velocity, and since delta theta is the angle traversed in time delta t, omega hat = delta theta divided by delta t is the actual angular velocity.
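Since the slide with "this equation" is not reproduced in the transcript, here are the closed-form expressions being referred to, as far as I recall them from the standard derivation in Probabilistic Robotics; treat the exact algebra as something to verify against the book.

```latex
\begin{aligned}
\mu &= \tfrac{1}{2}\,
   \frac{(x-x')\cos\theta + (y-y')\sin\theta}
        {(y-y')\cos\theta - (x-x')\sin\theta},\\[2pt]
x^* &= \tfrac{x+x'}{2} + \mu\,(y-y'), \qquad
y^* = \tfrac{y+y'}{2} + \mu\,(x'-x),\\[2pt]
r^* &= \sqrt{(x-x^*)^2 + (y-y^*)^2},\\[2pt]
\Delta\theta &= \operatorname{atan2}(y'-y^*,\,x'-x^*)
             - \operatorname{atan2}(y-y^*,\,x-x^*),\\[2pt]
\hat v &= \frac{\Delta\theta}{\Delta t}\, r^*, \qquad
\hat\omega = \frac{\Delta\theta}{\Delta t}.
\end{aligned}
```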
So intuitively, I hope it makes sense: x star, y star is the centre of the circle I would traverse when moving with constant velocities v and omega for the duration delta t, r star is its radius, delta theta is the total angle traversed, and v hat and omega hat are the effective translational and angular velocities. Now notice that this entire computation assumed I end up at x prime, y prime; it never looked at theta prime. So the model up to line 8 does not ensure that I end up with orientation theta prime; it only says that, assuming I end at x prime, y prime, my heading should have changed by delta theta. But because there is noise in the rotation as well, I actually end up at some theta prime, which need not equal theta plus delta theta; it is an independent value given to us as part of xt. To account for the noise in the angular rotation, we take the actual angular change in orientation, work out the angular velocity it would require, subtract the omega hat we already computed, and call the difference gamma hat, a correction to the effective angular velocity. So the real model is: the true translational velocity is v hat and the true angular velocity is omega hat plus gamma hat, while the action I applied was v and omega. The question we now have to ask is: if I apply v and omega, what is the probability that the true velocities experienced by the robot were v hat and omega hat plus gamma hat? That is the probabilistic question: given that v and omega were the actual actions applied at time t, what is the probability that the effective actions were v hat and omega hat plus gamma hat? Ideally I would like v hat to equal v, omega hat to equal omega, and gamma hat to be zero, so that no additional correction is needed; that is the ideal noise-free case. But we would not be here if everything were ideal and noise-free, so we look at the noisy case.
And we model the probability that, having applied v, the actual velocity was v hat: I look at the term v minus v hat and define a probability distribution for it. Remember, ideally it should be zero, so this distribution has zero mean, the most likely value, and some variance given by the term alpha 1 |v| plus alpha 2 |omega|; we will see the prob function itself in a moment. The prob(x, b) function gives the probability of observing a deviation x when the deviation is modelled as a zero-mean random variable with variance b. So looking at the terms we have: for v minus v hat, ideally v hat equals v and the difference is zero, so we want the distribution to peak at zero; if the difference is close to zero the probability should be high, and the further it is from zero the lower the probability. How quickly the probability decays is controlled by the width term alpha 1 |v| plus alpha 2 |omega|, where alpha 1 and alpha 2 are noise parameters describing how much the variation in the translational velocity depends on the magnitudes of the commanded translational and rotational velocities. Naturally, the faster you move, in either component of the velocity, the larger the error you would expect, so all the alphas are small positive quantities that model the influence of the velocity magnitudes on the errors. The first term therefore gives the probability of the deviation v minus v hat (v hat is already computed and v is given). The second term does the same for omega minus omega hat, with alpha 3 and alpha 4 controlling how the velocity magnitudes affect the angular error. The third term handles the second component of the angular error, gamma hat, again with a zero-mean distribution whose variance depends on both velocity magnitudes through alpha 5 and alpha 6. Since the mean of this prob distribution is always zero, I can model it as a Gaussian random variable, which gives the expression you see, with zero mean, so no mu appears in it, or I can think of it as a triangular distribution, whose profile looks like a symmetric triangle.
The triangular profile has its peak at zero, the zero-error value with the highest probability, decays linearly on both sides, and is exactly zero outside a certain range, as indicated here; this is called the triangular distribution. So you can use either the normal distribution, our old friend from the past weeks, or the triangular one. That was a lot packed into a few slides, so to summarize: we assume the commanded motion is the noise-free one, but the velocity components actually applied to the robot are v hat and omega hat plus gamma hat, and the algorithm answers the question: given that the commanded velocities were v and omega, what is the probability that the effective velocities were v hat and omega hat plus gamma hat? That probability is what the function returns (a sketch of the full computation is given below). Here is an example: three different predictions made by this motion model for different values of the alphas; remember, alpha 1 through alpha 6 control how much noise there is in the model. The first figure uses moderate values for alpha 1 through alpha 6; this is the starting pose (x, y, theta) and that is the ending pose (x prime, y prime, theta prime), and the resulting probability distribution is spread out both in the translational sense and in the angular sense. In the second figure the distribution is spread out mainly in the translational sense and the angular spread is small, meaning alpha 3 through alpha 6 are small while alpha 1 and alpha 2 are larger than in figure (a). In the third figure alpha 1 and alpha 2 are small, giving a small translational spread, but the angular error is larger than in (a), so alpha 3 through alpha 6 are larger there. These are the kinds of predictions the probability distribution gives for various values of alpha 1 to alpha 6. So that was the fully parameterized closed-form model. A slightly simpler alternative arises when you use a particle filter: with a particle filter, if you remember, nothing is computed in closed form; all I do is draw samples according to the motion model. So instead of a function that, for every queried point, returns a probability value, all I need is something that can return, say, ten samples from this forward model according to the right distribution.
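Putting the pieces together, here is a minimal sketch of the closed-form algorithm described above, following the structure of motion_model_velocity in Probabilistic Robotics. The zero-mean Gaussian is used for prob (the triangular density would be a drop-in replacement), the alphas and time step are assumed values, and the exact algebra should be checked against the book.

```python
import numpy as np

ALPHA = [0.05, 0.05, 0.02, 0.02, 0.01, 0.01]   # assumed noise parameters
DT = 1.0                                        # fixed time step

def prob(x, b):
    """Probability of deviation x under a zero-mean density with variance b."""
    return np.exp(-0.5 * x**2 / b) / np.sqrt(2 * np.pi * b)

def motion_model_velocity(x_t, u_t, x_prev):
    (xp, yp, thp), (v, w), (x, y, th) = x_t, u_t, x_prev
    # Centre of the circular arc through (x, y) and (xp, yp) with heading th.
    # Note: a pure straight-line move makes the denominator ~0 and would need
    # special handling in real code.
    mu = 0.5 * ((x - xp) * np.cos(th) + (y - yp) * np.sin(th)) / \
               ((y - yp) * np.cos(th) - (x - xp) * np.sin(th))
    x_star = 0.5 * (x + xp) + mu * (y - yp)
    y_star = 0.5 * (y + yp) + mu * (xp - x)
    r_star = np.hypot(x - x_star, y - y_star)
    dtheta = (np.arctan2(yp - y_star, xp - x_star)
              - np.arctan2(y - y_star, x - x_star))
    # Effective velocities that would explain the move, plus the heading correction
    w_hat = dtheta / DT
    v_hat = w_hat * r_star
    g_hat = (thp - th) / DT - w_hat
    a1, a2, a3, a4, a5, a6 = ALPHA
    return (prob(v - v_hat, a1 * abs(v) + a2 * abs(w)) *
            prob(w - w_hat, a3 * abs(v) + a4 * abs(w)) *
            prob(g_hat,     a5 * abs(v) + a6 * abs(w)))
```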
So basically I need to be able to draw samples according to p(xt | ut, xt minus 1); remember, when we discussed the particle filter we never required anything more from the motion model, all we really need is a way to draw samples from this distribution. The sampling version accepts an initial pose xt minus 1, given by x, y and theta, and the control v and omega, and it generates a random pose xt according to that distribution. Notice that we make the same assumption as in the previous model: the velocity actually experienced is v hat, which is v plus some noise whose magnitude is proportional to both the magnitude of the translational velocity and the magnitude of the angular velocity. So v may be the action we applied, but some other v hat and omega hat are the true velocities with which the robot moves, and it is that true-velocity distribution we sample from when generating particles for xt. Looking at the algorithm: I take the commanded v and draw a zero-mean noise sample; the sample function takes as input the variance alpha 1 |v| plus alpha 2 |omega| of a zero-mean distribution, and adding the drawn sample to v gives v hat. Likewise I draw a sample with variance alpha 3 |v| plus alpha 4 |omega| and add it to omega to get omega hat, and I sample a gamma hat as well. Then the final heading is theta prime = theta plus omega hat delta t plus gamma hat delta t, and x prime and y prime are given by the circular-arc expressions; this is just the forward model: I start from (x, y) facing theta, move with v hat and omega hat, and end up at x prime, y prime. You can think of lines 2 to 4 as the stochastic part, perturbing the commanded v and omega to get v hat, omega hat and gamma hat, and once I have those, the prediction of x prime, y prime and theta prime is deterministic, just the kinematic motion model giving the resulting pose. Calling this function repeatedly gives different samples, depending on the values drawn for v hat, omega hat and gamma hat. That is the particle-filter version of the velocity motion model; it is slightly easier to implement, and it goes together with the particle filter we used for motion tracking earlier. A minimal sketch is given below.
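Here is a minimal sketch of the sampling version just described, mirroring sample_motion_model_velocity from Probabilistic Robotics, again with assumed alphas; the near-zero omega-hat case is guarded separately because the arc equations divide by it.

```python
import numpy as np

ALPHA = [0.05, 0.05, 0.02, 0.02, 0.01, 0.01]   # assumed noise parameters
DT = 1.0

def sample(b):
    """Draw from a zero-mean distribution with variance b (Gaussian here)."""
    return np.random.normal(0.0, np.sqrt(b))

def sample_motion_model_velocity(u_t, x_prev):
    (v, w), (x, y, th) = u_t, x_prev
    a1, a2, a3, a4, a5, a6 = ALPHA
    # Lines 2-4: stochastic part - perturb the commanded velocities
    v_hat = v + sample(a1 * abs(v) + a2 * abs(w))
    w_hat = w + sample(a3 * abs(v) + a4 * abs(w))
    g_hat = sample(a5 * abs(v) + a6 * abs(w))
    # Deterministic part: move along the circular arc for one time step
    if abs(w_hat) > 1e-9:
        r = v_hat / w_hat
        xp = x - r * np.sin(th) + r * np.sin(th + w_hat * DT)
        yp = y + r * np.cos(th) - r * np.cos(th + w_hat * DT)
    else:                                      # nearly straight-line motion
        xp = x + v_hat * DT * np.cos(th)
        yp = y + v_hat * DT * np.sin(th)
    thp = th + w_hat * DT + g_hat * DT
    return np.array([xp, yp, thp])

# Drawing many samples spreads the particles out as in the lecture's figures:
# particles = [sample_motion_model_velocity((1.0, 0.3), (0, 0, 0)) for _ in range(500)]
```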
And just as we saw earlier, if I draw, say, 500 particles, the panels (a), (b) and (c) correspond to the same alpha settings as in the previous figure: for moderate alpha values you see a moderate spread of the particles in both the angle and the distance travelled; in the second case the translational error is higher and the angular error smaller; and in the third case the translational error is smaller and the angular error higher. You can see the particles spreading out exactly the way the closed-form distribution was shaped in the previous case. So this is essentially our discussion of the velocity motion model. The next one we are going to look at is the odometry motion model. |
Introduction_to_Robotics | Lecture_92_Odometry_Motion_Model.txt | In the last lecture we looked at the velocity motion model; today we are going to look at the odometry motion model. Odometry information is essentially information on how much the robot has moved, how much its pose has changed, and it is typically obtained from motion sensors on the robot; the most popular kind is the wheel encoder. By integrating this kind of encoder information you get an estimate of how much the robot actually moved. One way of thinking about odometry is that it is really a measurement, not a control; another way is to regard it as the effect of the control action that was applied to the robot, and to use it as a surrogate for the actual control that was given. So it is an alternative to using the commanded robot velocities, and the main reason we use it is that the velocities we ask the controller to produce often do not get translated exactly into physical movement, there is too much noise in that, so more accurate movement estimates are usually obtained by integrating the motion-sensor information than from the control information alone. Many commercial platforms also make this accumulated odometry information directly available for you to base decisions on. While the odometry can still be erroneous, there can be drift and slippage in the actual operating conditions, it is typically more accurate than the velocity model; moreover, velocity models suffer from an additional approximation error, because the mathematical model that maps velocity to movement can itself be mismatched, whereas the odometry reflects the motion that actually happened, which lets us sidestep those modelling errors. Technically, as I said, odometry readings are sensor measurements, not controls, and if I really wanted to treat them as measurements I would have to add more state variables, things like velocity, and expand the state beyond the x, y and theta we used earlier. To avoid that, we typically take the odometry as a control signal, and as we will see in the next couple of slides this lets us define a new kind of motion model based on the actual odometry measurements rather than on the real control that was given. This kind of odometry motion model is used quite often in present-day systems.
is that we are going to look at the odometry signal as the action right so the u t that we talk about would be the odometry measurements that we make as we will see in the next slide right so as before our goal when we are trying to build a motion model is to come up with the probability distribution of x t given the previous state x t minus 1 and the control action u t right so if you think of the actual dynamics of the system as taking the robot from a pose x t minus 1 to a pose x t i can think of the odometry as reporting an advance from x bar t minus 1 to x bar t where the bars that is x bar t minus 1 and x bar t are the measured values of the pose at time t minus 1 and at time t while x t minus 1 and x t are the actual poses at time t minus 1 and time t so x bar t minus 1 and x bar t are the poses as measured by the odometry readings at time t minus 1 and time t and we will denote these as we did before so x t minus 1 we denote as x y and theta and x t we denote as x prime y prime and theta prime likewise for x bar t minus 1 we denote the readings as x bar y bar and theta bar which is basically the x coordinate measured or estimated from the odometry readings at time t minus 1 the y coordinate estimated at time t minus 1 and the orientation estimated at time t minus 1 and likewise x bar prime is the x coordinate estimated at time t y bar prime the y coordinate estimated at time t and theta bar prime the orientation estimated at time t okay so this will be our notation so while the robot actually moved from x t minus 1 at time t minus 1 to x t at time t the odometry tells us it moved from x bar t minus 1 to x bar t in that time interval right so what is the use of this one of the main reasons why this odometry information is useful is that even if i am not able to establish a correct correspondence between x bar y bar theta bar and x y theta the change that i observe in the estimate from x bar t minus 1 to x bar t is similar or very close to the change that actually happened between x t minus 1 and x t so even if x bar t minus 1 and x bar t are erroneous estimates of the poses at t minus 1 and t i could still say that the change from t minus 1 to t is similar for both the estimated quantities and the actual quantities so if from the estimated quantities i can get the change then i can apply it to the actual x t minus 1 that i have and find out what x t is so this is essentially what allows us to build the probabilistic model for the actual state based on the estimated measurements from the odometry okay so that's basically what we're saying here the difference is what is useful right so now the control itself i am going to represent by both x bar t minus 1 and x bar t basically what i really need from the control u t is to figure out what the change was so instead of actually computing the change and giving that as the u t which is the motion information i am going to say u t basically consists of two successive estimated poses which are x bar t minus 1 and x bar t okay that is what my u t is right so what i do now is transform this relative odometry information into a sequence of three steps okay which we will call delta rotation one delta translation and delta
rotation two okay so what are these right so delta rotation 1 is essentially taking the original orientation of the robot taking the original orientation of the robot and moving it so that it faces in the direction of the translation right so i know this right so basically if this is this is essentially my x bar t minus 1 and that is my x bar t okay so what i do is this delta rotation 1 is rotating the robot suppose the the theta bar so that it now points in the direction of the translation right the direction i know from my x bar y bar and x bar prime y bar prime right so now it points in the direction of the translation right and then i do the delta translation so that i move the robot to the destination location and i still have a residual angle right that i need to model so delta rotation 2 basically rotates the robot to the final orientation which is theta prime right this is theta bar prime is the final orientation of the robot from the direction of motion so i will take the direction of motion and change it to the final orientation so i have decomposed my total motion into delta rotation 1 delta translation and then another delta rotation 2. so this is essentially how i do it so given any pair of positions s bar and s bar prime which are any any any two poses i can basically reduce it into a unique delta rotation one delta trans and delta rotation two ah and for every every move every pair of states i can basically recover this right and then what i do is just as i did in the velocity motion model i am going to assume that there are independent sources of noise for all the three and my overall probability is basically given by the product of this three ah noises okay so again while we are constructing this algorithm that will output the probability suppose i give you an initial pose x t minus 1 a control which is u t in which this case which is two sets of measurements x t minus one bar and x t bar and a final pose x t right given this this algorithm is going to return the probability of x t happening when i apply u t to x t minus 1. 
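To make this decomposition concrete, here is a minimal Python sketch; the function names and the convention that a pose is an (x, y, theta) tuple are assumptions of the sketch, only the three formulas themselves follow the description above.

```python
import math

def wrap_angle(a):
    """Wrap an angle to the interval (-pi, pi]."""
    return math.atan2(math.sin(a), math.cos(a))

def odometry_decomposition(pose_from, pose_to):
    """Split the relative motion between two (x, y, theta) poses into an initial
    rotation, a straight-line translation and a final rotation."""
    x, y, theta = pose_from
    xp, yp, thetap = pose_to
    delta_trans = math.hypot(xp - x, yp - y)
    # rotate from the initial heading onto the direction of the translation;
    # if the translation is (near) zero the direction is arbitrary, so use 0
    delta_rot1 = wrap_angle(math.atan2(yp - y, xp - x) - theta) if delta_trans > 1e-9 else 0.0
    # remaining rotation needed to reach the final heading
    delta_rot2 = wrap_angle(thetap - theta - delta_rot1)
    return delta_rot1, delta_trans, delta_rot2
```

Any pair of poses, measured or actual, can be passed through this same helper, which is exactly what the algorithm discussed next does.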
so what does applying u t mean here it means that i have applied to my state the motion given by these two vectors okay so i have my initial pose which is x y theta right and then i have the motion that is delta rot 1 delta trans and delta rot 2 as specified by this pair of vectors in u t that is my action and then i finally have the successor pose given by x prime y prime theta prime right what is the probability that this is the successor pose given that i started with that state and then applied the motion the action as specified by these two measurements so that is basically the problem here so this is how we are going to do it so lines 2 3 and 4 actually compute the rotation and the translation differences as per the measured poses which is u t so this is basically telling me how much the pose has changed from x bar t minus 1 to x bar t okay that is what 2 3 4 tells me and 5 6 7 tell me how much the pose has changed from x t minus 1 to x t right that is what the hat quantities are okay delta hat rot 1 is basically the same expression as delta rot 1 but computed on different arguments here in 2 3 4 it is computed on x bar y bar theta bar and x bar prime y bar prime while in 5 6 7 i actually use the states given in x t minus 1 and x t right now once i have done this what lines 8 9 and 10 do is apply exactly the same noise model that we had earlier so this is a 0 mean probability distribution with the variance shown so i really want my rotation 1 and my rotation 1 delta hat to be the same values right i basically want my odometry measurements to be accurate so if i am assuming the odometry measurements are accurate then delta rot 1 minus delta hat rot 1 should be zero so i model the noise in this first rotation difference by the factor p1 which basically gives me the probability of that difference under a 0 mean probability distribution with variance given by alpha 1 delta hat rot 1 plus alpha 2 delta hat trans right so this means how much i rotate and how much i translate both of these are going to contribute to the noise in the first rotation similarly both the rotations and the translation are going to contribute to the error in the translation and finally the translation and the second rotation the magnitudes of those are going to contribute to the error in the second rotation that we have so each of these we are assuming as independent sources of error so i have p1 times p2 times p3 okay very similar to what we had earlier so it is not very complicated except that the control is specified in a slightly different way which is the odometry information and we also get a slightly different noise model earlier it depended on the magnitude of the velocities the noise that we used was dependent on the magnitude of the translational and the rotational velocities but now it is going to be based on the magnitude of the actual rotations and the actual magnitude of the translation itself okay so this is basically explaining what is happening there in lines 2 to 4 i get the relative motion parameters from the odometry readings in lines 5 6 7 i get the relative motion parameters from the actual poses that were given and lines 8 9 and 10 compute the error probabilities for the individual motions and then finally i multiply all of them to give back the overall probability of seeing the actual transition okay
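A sketch of the closed-form model just walked through, reusing the odometry_decomposition and wrap_angle helpers from the previous sketch; the Gaussian density helper, the exact form of the variance terms (some presentations square the magnitudes) and passing the four alpha parameters as a tuple are assumptions of this sketch.

```python
import math

def prob_normal(a, var):
    """Density of the value a under a zero-mean Gaussian with variance var (floored to avoid /0)."""
    var = max(var, 1e-12)
    return math.exp(-0.5 * a * a / var) / math.sqrt(2.0 * math.pi * var)

def motion_model_odometry(x_t, u_t, x_prev, alphas):
    """p(x_t | u_t, x_{t-1}) for the odometry motion model.
    u_t = (odom_prev, odom_curr), two successive (x, y, theta) odometry estimates."""
    a1, a2, a3, a4 = alphas
    odom_prev, odom_curr = u_t
    # lines 2-4: relative motion according to the odometry readings
    rot1, trans, rot2 = odometry_decomposition(odom_prev, odom_curr)
    # lines 5-7: relative motion implied by the hypothesised poses (the "hat" quantities)
    rot1_h, trans_h, rot2_h = odometry_decomposition(x_prev, x_t)
    # lines 8-10: independent error terms whose spread grows with the motion
    p1 = prob_normal(wrap_angle(rot1 - rot1_h), a1 * abs(rot1_h) + a2 * trans_h)
    p2 = prob_normal(trans - trans_h, a3 * trans_h + a4 * (abs(rot1_h) + abs(rot2_h)))
    p3 = prob_normal(wrap_angle(rot2 - rot2_h), a1 * abs(rot2_h) + a2 * trans_h)
    return p1 * p2 * p3
```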
and you can see the same kind of setting here like we did with the different alpha settings so this is again for a typical alpha setting and you can see spread in both the angle and the translation for that choice of alphas here there is more translational error and lesser angular error and in c i have more angular error and less translational error exactly the same kind of parameter settings as we had earlier so the actual alpha values might be different but the settings are similar right notice that the final angular differences might lie between minus pi and plus pi and therefore we have to make sure that we are truncating everything appropriately right and just like we did earlier with the velocity motion model we can do a particle filter version of this motion model where we first compute delta rot 1 delta trans and delta rot 2 and then we sample the delta hat values remember when i am using the particle filter version i do not get an x t as an input so i cannot compute the delta hat quantities from it what i do instead is randomly sample values for all three delta hats and once i have a sample i do a deterministic computation of what x prime y prime and theta prime should be based on the hat values i have computed and that gives me the final sample of the state and if i repeatedly call this function i will get many many different samples and just like last time you can see that again for 500 samples this looks like a very similar figure to what we saw with the velocity motion model okay so the challenge with using the odometry model is that the odometry information is available to you only after the motion is complete not before the motion has happened right so in the filtering use case that we have seen so far it is fine because we typically use the motion model only after the actual motion has been completed but when we are trying to do planning as we will see later we really need to make predictions before the motion is completed and in such cases we have to fall back on using something like the velocity motion model right so the odometry motion model can be used in cases where the motion has been completed and we are just using the particle filter or the bayes filter for doing the state estimation in such cases we can use the odometry model because we only need the prediction after the motion is complete so typically this is what you would see here i have applied a particle filter based odometry model and you can see that as i keep moving my estimates become increasingly uncertain this is only based on the motion model i have not incorporated any observations here once i start incorporating observations these estimates will again start relocalizing but right now just to give you a feel of what happens when i use the motion model alone to make the predictions as i keep moving the probability distribution keeps spreading out so i basically started here and i have moved around like that and you can see how the noise keeps increasing until i make a measurement that can collapse this uncertainty right.
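The particle-filter (sampling) version described above can be sketched along the same lines, again reusing the earlier helpers; random.gauss and the use of the alpha terms as variances whose square roots give the standard deviations are assumptions of this sketch.

```python
import math
import random

def sample_motion_model_odometry(u_t, x_prev, alphas):
    """Draw one sample of x_t from p(x_t | u_t, x_{t-1}); no candidate x_t is needed."""
    a1, a2, a3, a4 = alphas
    odom_prev, odom_curr = u_t
    rot1, trans, rot2 = odometry_decomposition(odom_prev, odom_curr)
    # perturb the measured relative motion with noise that grows with the motion itself
    rot1_h = rot1 - random.gauss(0.0, math.sqrt(a1 * abs(rot1) + a2 * trans))
    trans_h = trans - random.gauss(0.0, math.sqrt(a3 * trans + a4 * (abs(rot1) + abs(rot2))))
    rot2_h = rot2 - random.gauss(0.0, math.sqrt(a1 * abs(rot2) + a2 * trans))
    # apply the noisy motion deterministically to the previous pose
    x, y, theta = x_prev
    return (x + trans_h * math.cos(theta + rot1_h),
            y + trans_h * math.sin(theta + rot1_h),
            wrap_angle(theta + rot1_h + rot2_h))
```

Calling this repeatedly for the same u_t and x_prev produces the kind of sample cloud shown in the 500-sample figure mentioned above.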
so just a small note here we will come back to maps in greater detail later on but just to tell you the importance of incorporating map information when i am building these kinds of motion models and the same thing holds for the measurement models right so we have described all the motion so far assuming that we have no knowledge about the environment and anything that needs to be captured is somehow captured in the motion model itself but the motion models that we have looked at are fairly straightforward and simple they basically look at a small linear motion a small rotation followed by a translation and they really do not talk about anything that is there in the environment itself right so in many cases we typically have some kind of a map the map tells us whatever information we have about the environment in which the robot is currently moving and as we will see later there is something called an occupancy map which for example tells us whether a particular location or a particular pose is free free meaning that the robot can actually move over that space or whether it is occupied occupied meaning there could be an obstacle a table a chair some kind of wall or whatever so that the robot is not able to go over that space so these kinds of occupancy maps will tell us whether the space is free or not and the robot's pose must always be in the free space right so knowing the map allows us to further refine our motion model so now our motion model should start looking like this what is the probability of x t given that i started in x t minus 1 i did action u t and i am operating in a map m so i will have to start conditioning my motion models on the map m and if m carries information relevant to the pose estimation that is if m is something that is going to actually affect what i am estimating then ignoring the map information is going to give me wrong estimates of the pose right and this conditional distribution can become arbitrarily complex so what we typically do is whenever the motion is very small whenever i am going to make a very small transition from x t minus 1 to x t and what do i mean by that it means that i should be making these predictions very frequently between every pair of poses i should not wait for a large motion to be completed before i make the prediction in such cases i can approximate this probability by the product of two things i can look at the original motion model like i had earlier which is the probability of x t given u t and x t minus 1 and some kind of a validity model okay that is what is the probability that x t is a valid pose given that i am operating in a map m so i have moved from x t minus 1 to x t is x t a valid pose given that i am operating in map m and there is usually some kind of a normalizing factor right so here i am only checking the validity of the final pose and as long as the changes are small this should be fine right so the second term you can also think of as the consistency of the pose with respect to the map m for example in the case of an occupancy map the probability of x t given m is 0 if the robot collides with an occupied grid cell otherwise we give it some small constant value depending on what the normalization factor is and then this can become your probability estimate so it becomes a very easy case now.
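A sketch of the approximation just described, combining the plain odometry model from the earlier sketch with a validity term from the map; the occupancy_map.is_free interface and the way the constant is folded into eta are assumptions of this sketch, not something fixed by the lecture.

```python
def motion_model_odometry_with_map(x_t, u_t, x_prev, alphas, occupancy_map, eta=1.0):
    """Approximate p(x_t | u_t, x_{t-1}, m) ~= eta * p(x_t | u_t, x_{t-1}) * p(x_t | m),
    valid when the step from x_{t-1} to x_t is small.
    occupancy_map.is_free(x, y) is an assumed interface returning True for free cells."""
    p_motion = motion_model_odometry(x_t, u_t, x_prev, alphas)
    # validity / consistency term: zero if the pose collides with an occupied cell,
    # otherwise a constant that can be absorbed into eta
    p_valid = 1.0 if occupancy_map.is_free(x_t[0], x_t[1]) else 0.0
    return eta * p_motion * p_valid
```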
if x t is actually an occupied grid cell in m then the probability is 0 and if x t is not occupied that is the map says that x t is free then it will have some constant value so it becomes easy the computation is not at all complex right so basically the original motion model will either be multiplied by a zero or by a constant and the constant could potentially be absorbed into eta so you can think of it as being multiplied by 0 or multiplied by 1 right so here is a simple example i take an x t which is the original sample from the motion model and then i look at the probability pi of x t being valid given m if the probability is greater than zero then this is a valid pose so i will return that x t along with pi to my motion model if my pi is zero that means that wherever i have moved the sample that i have generated for x t is not a valid pose and therefore i keep sampling until i get a valid final pose x t right so this technique is called rejection sampling i have a complicated distribution to sample from but the complication can be broken into two parts i have a simple distribution which is the simple motion model that i can sample from but then i have certain exclusions so whenever i hit one of those excluded samples under the model i just ignore it i reject it and i keep drawing more samples right so the effective sampling distribution is not the one given by the motion model itself that we had earlier but one where the probabilities corresponding to those x t which are invalid under the map have been set to zero so this is called rejection sampling it is a very easy way of accounting for these kinds of corner cases without making the base sampling distribution more complicated right so this will work something like this my original motion model let us say gives me a probability distribution as indicated here in this figure and let us say that there is actually an obstacle here a wall or a partition that i cannot occupy and there will be some small region around the partition because my robot has a volume so there is a small region around the partition which the robot cannot occupy so all those states have to get zero probability so now instead of sampling from this distribution for my next state i will be sampling from this distribution note that these parts have actually become denser this has become darker which means the probability mass here is higher than the probability mass here and everywhere else the probability would be 0 okay so still there is a problem look at this region which i have denoted by a star there is still a problem because even though according to the map this is a valid region for the robot that is standing here to reach this region it actually has to go through the obstacle right so that is the challenging part and this is what we are basically saying here in the real world that is not possible right so if our updates are small if basically i am updating very frequently then these kinds of situations are going to occur very rarely in practice because as soon as i move a little bit i can never find myself on the other side of the obstacle
i will find myself in a state where i am actually hitting the obstacle and therefore i will not move i will have to look at a different way of completing the motion right it is only because i am looking at a larger translation here that i am actually getting these kinds of infeasible cases and even in other cases sometimes we might have to actually check whether the entire path is collision free so sometimes you have to do a very expensive check to make sure that we are looking at valid translations valid motions right so we have to be very careful about choosing our update frequency for the filter as well okay |
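The rejection-sampling variant described above, for the particle-filter case, might look like the following sketch; occupancy_map.is_free is again an assumed interface, and the max_tries cutoff is an extra safeguard not mentioned in the lecture.

```python
def sample_motion_model_odometry_with_map(u_t, x_prev, alphas, occupancy_map, max_tries=1000):
    """Rejection sampling: draw from the plain odometry motion model until the
    sampled pose lands in free space according to the map."""
    for _ in range(max_tries):
        x_t = sample_motion_model_odometry(u_t, x_prev, alphas)
        if occupancy_map.is_free(x_t[0], x_t[1]):
            return x_t
    return None  # caller must handle the (unlikely) case where no valid sample was found
```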
Introduction_to_Robotics | Lecture_53_The_PM_Synchronous_Motor_PMSM_and_SPWM.txt | So, what we have seen is that during the interval when any two switches are on, the equivalent circuit can be ideally represented as a DC circuit. This is what we had seen here for the case of switches 1 and 6 being on: the input DC source is now connected to those two phases of the motor, where the induced EMF looks like DC and therefore the flow of current is DC. In that interval, and that is what we have drawn here, the induced EMF is flat and the flow of current is flat, which then means that the power consumed by the EMF is constant: E multiplied by I, this is the induced EMF E and you multiply this by I, E is constant, I is constant and therefore E into I is also a fixed number, and this is the quantity that is converted to mechanical power. Therefore, the mechanical power developed in the machine is also not going to vary within this interval, and if the speed is not going to change, this then implies that the electromagnetic torque is not going to change. This is valid in a 60 degree interval where two devices are going to conduct. In the next 60 degrees another two devices will conduct, again with the same value of induced EMF, and you have the same applied voltage. Therefore, in the next 60 degrees the same current will flow, just through some other pair of phases. So, after red and yellow what do we have? We have red and blue. So, in the next 60 degrees you have the same red that is going to continue and then here you have blue, and through that the same DC current is going to flow and therefore, again, the electromagnetic torque is not going to vary. What we said was that, when you go from one 60 degree interval to another 60 degree interval, there is an interruption; something happens in this region where the EMFs are not going to be the same because one is falling and another is going to increase and reach that level. So, in that small interval there is a non-ideal behaviour which leads to a ripple torque, as we had drawn in the graph that is shown there. So, this happens in the intervening small interval when one set of devices has finished conducting and the next set is going to take over for the next 60 degrees. Student: will current have a ripple? Professor: Current also will have a ripple yes, during that small interval. Student: does this arise because of the interval? Professor: Yes, this arises because of that interval. So, during that small interval you notice that one phase which was off now has to start conducting. So, here you have red and yellow conducting and here it is red and blue that are conducting; that means, if you plot Iy and Ib, Iy should have been DC until this time and it should go to 0 at this time because y is no longer conducting, whereas b should have been 0 here and b will begin to conduct here. But the fact that there is an armature inductance will not allow the current waveform to be like this; if it were like this, then there is no ripple that is going to be produced in the electromagnetic torque, because whatever was happening in one phase is simply being taken over by another phase.
But, this is not going to happen because of the inductance and therefore the situation looks different. What happens is, this current goes to 0 over some duration and this current rises to the desired level over some other duration; the durations may not be the same, but that is governed by the speed at which the motor is rotating, and because of this you have the electromagnetic torque not being flat; some variation will be there depending on what exactly happens. That is the origin of this ripple that we have drawn, and that is something one has to contend with. So, we need to understand one more thing in the operation, that is how to reverse the speed or reverse the direction of rotation, in contrast with the case of a DC machine. In the case of a DC machine, if whatever was on the side after this line had been a DC machine connected here, then how would you reverse the direction of rotation? You simply have to reverse the supply, that is all. But unfortunately you cannot do that here, why? If you reverse the supply, you have diodes that are connected here and the reversed supply will simply be short circuited through the diodes and your whole circuit will simply blow. So, it is not an option now to reverse the sign of the input voltage; one has to figure out other ways of handling it. So, this is an AC machine, it is not a DC machine; it is a brushless DC machine because, with the inverter fed from the DC source, it looks like a DC machine, that is all, but the motor itself is AC. So, to understand that, we need to remember one equation based on which we will develop what is going to happen. We should note that the induced EMF is given by the rate of change of flux linkage; that is a fundamental equation in physics so that is not too difficult, and flux linkage is a unique function of rotor angle. If the rotor is the one where you have the field that is being made, that is, you have the field generating system located on the rotor, then depending on what the rotor angle is, the flux that is generated is going to depend on that and therefore flux linkage is a function of rotor angle. So, what we will do is first draw the induced EMF waveform that you may be familiar with; that waveform is going to look like this. Now, what I will do is change this, so I will mark this as 0, 60 degrees, 120, 180, 240, 300 and 360; I am drawing 60 degree intervals, so we are going to have a waveform looking like this, then you have y, then we have b. Now, if this is going to be the case, given the fact that the induced EMF is d Psi by dt, what will be the waveform of flux linkage? So, if you draw the flux linkage for the R phase, the red one, in the interval from 60 to 180 degrees the induced EMF is flat; that means, since the induced EMF is given by the derivative of the flux linkage function, the flux linkage function should be of a nature that is increasing in a linear manner. Now, note that the induced EMF is given by the rate of change of flux linkage with respect to time, so what we can write is e equals d Psi by d theta multiplied by d theta by dt, and we said that flux linkage is a unique function of the rotor angle, which means that d Psi by d theta must be a unique function, and that gets multiplied by d theta by dt, which is then dependent on the direction of rotation: if you define that for anticlockwise rotation d theta by dt is greater than 0, then for the other direction of rotation d theta by dt will be less than 0.
So, let us assume that we are going to be looking at the d theta by dt greater than 0 and that is this induced EMF is drawn for d theta by dt greater than 0 that is assumption that have made then, how will we draw the flux linkage waveforms, how will you draw the flux linkage waveform during this interval flux linkage should be linearly increasing with respect to rotor angle so, now what I am going to do is plot this with respect to rotor angle because, we are just going to multiply that by d theta by dt. So, during this interval then, you are going to have flux linkage linearly increasing and during this interval we are going to have flux linkage linearly decreasing obviously, because d theta by dt is negative similarly, here it is decreasing and here it is increasing what happens in this 60 degree interval there is an instant way d theta by dt is equal to 0 which means that the flux linkage sorry d Psi by dt is equal to 0 which means that at this instance the flux linkage should have 0 slope and during this interval d Psi by dt is still negative d Psi by dt becomes greater than 0 only in this part and therefore, you will get a plot that is still negative here becomes 0 and then, starts becoming greater than 0 similarly, this plot will go like that and come over here where, this is the 0 instance this will then come like this and go where this is 0 instance and so on. So, this will go like that, like this what will happen to the waveform of white that will also be very similar to this except that it is phase shift so, why will then be so, this is the region where it is negative and then, it is positive here, negative here so, let us not throughout the waveforms too much so, I am going to stop at this point. So, now the question is that, we have drawn this induced emf for d theta by dt greater than 0 what will happen if d theta by dt is less than 0 now, d Psi by d theta is still going to be defined by this slope and they will now have d theta by dt less than 0 let us draw another plot so, the side by d theta for the red one in the interval 0 in 60 to 180 degrees d Psi by d theta as greater than 0 but, d theta by dt is less than 0 that means, this induced emf will now be negative and here, it would be positive this will be positive so, you are going to have a waveform that is looking like this, what happens to y? y will now be negative here y will be positive here and then, you have b that is going to come how will you mark the x-axis now this is still induced emf, what will be x-axis d theta by dt is negative that means, you are going to be moving along this direction see if this is the direction of increasing rotor angle but, if you are going to be rotating in the opposite direction you will have to be moving along this direction. So, the induced EMF will have to be looked at as moving along this direction and there is now therefore, a difference in the earlier case where we plotted this with respect to time, this is now with respect to time because, the rotor is rotating in the opposite direction and what we notice is that, if you are going to have the induced emf greater than 0 for r phase that is followed by the induced emf becoming greater than 0 for y and then, it is b whereas, now what you have is r phase is 5 followed by b followed by y. 
So, the sequence in which you encounter greater than 0 induced emf now has reversed because, you are rotating in the opposite direction so, what we now need to do so, this means that opposite direction or reversal of rotation means reversal of phase sequence here, it was r y b sequence whereas, here it is r b y sequence. Now, we had hall sensors, hall switches that were put in order to detect the magnetic field and thereby give an indication to the circuit as to where the rotor is and we had put hall sensors and hall switches such that we had H1, H2 and H3 and we had put the position of H1 in the stator remember, the hall switches are positioned on the stator the rotor is rotating therefore, the hall switches would encounter varying magnetic fields as the rotor rotates and that is why it is able to indicate when the magnetic field under its influence becomes north or becomes equal south. So, we had put the hall sensors in such a way that H1 was going high at the time when the induced emf or r phase was becoming flat so, this was the position of H1 and it remains high for 180 degrees and then becomes low this is how H1 would behave and H2 was at this position and H3 was further shifted by 120 degrees this was how it was and we said that, if you want to look at the region where device number 1 for example is on 1 would be on whenever r phase was high and so, this was the region when 1 is on as per the direction of rotation greater than 0. But now, if you reverse the direction of rotation, where do you want 1 to be on 1 should be on whenever the induced emf becomes greater than 0 and therefore, the reason when 1 should be on now shifts it should now be on during this time and not here for reverse direction of rotation. So, if you want to give the signals for switch number 1 for forward rotation, so, let us assume that, this is forward if it is forward then, we had written the function as H1 and H2 bar whereas, if you now want to reverse it what we need to do is H2 and H1 bar so, for reversing direction of rotation the signal to be generated using the hall switch outputs has to be redefined so, when we do the overall control structure, we said that there is going to be. We had a control structure that looks like this you had a digital logic and the digital logic accepts the hall switch inputs and then delivers the signals to be given to the devices. Now, this block requires one more input which is pertaining to the direction of rotation so, depending on what the direction of rotation is, you have to select an appropriate function which will determine when a particular switch has to be on so, for example, for switch number 1 you have defining functions which are either H1 and H2 bar or H1 bar and H2 which one will you select that depends on what is the direction of rotation you are going to have. So, based on that, then you have to select the switching instance so, reversal of rotation means reversal of phase sequence so, if you now write down the sequence in which the switches are going to operate, you will find that, the sequence is altered so, that is what will happen in this case so, please go through define that is formulate definitions for all the six devices formulate these explorations for both directions of rotation and then see what has happened. So, I will leave that, as an exercise to you do it. 
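As a toy illustration of the direction-dependent gating logic for device 1 described above, one could write something like the sketch below; the function name is made up, and the Boolean expressions for the remaining five devices are deliberately left out since the lecture assigns them as an exercise.

```python
def gate_switch_1(h1: bool, h2: bool, h3: bool, forward: bool) -> bool:
    """Gating signal for inverter device 1, derived from the hall switch outputs.
    Forward rotation: ON when H1 AND (NOT H2); reverse rotation: ON when H2 AND (NOT H1).
    h3 is unused for this particular device but kept to show the full hall interface."""
    if forward:
        return h1 and (not h2)
    return h2 and (not h1)
```

The digital logic block in the control structure would hold one such direction-dependent expression for each of the six devices, selected by the direction-of-rotation input.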
Now, as I mentioned in the last class, the brushless DC motor has a ripple torque. However, what is interesting about the brushless DC motor is that in order to do closed loop control, in order to define which switches need to be turned on, you only need rotor angle information once in 60 degrees; that is, you require the rotor angle information whenever a particular EMF waveform becomes flat. So, you need information regarding whether this specific rotor angle has been reached; if it has been reached then you need to know that the switch pertaining to this region, that is the r phase, will need to be turned on. Similarly, after this you need information when this angle is reached, then you need information when this angle is reached. So, it is enough if you know that the rotor has reached a position where an emf is becoming flat. So it is sufficient if you know the angle only once in 60 degrees, because within the interval of 60 degrees the circuit simply looks like a DC circuit, DC current is supposed to flow and there is nothing you need to specifically control. The only thing that you need to control is the magnitude of the DC voltage to be applied, and that you are already doing by having a high frequency switching waveform which you use to define the duty ratio during that time, as we discussed in the last class. But when we go to a PMSM, that is a synchronous machine as it is called, we said that the emf is sinusoidal. Now if it is going to be sinusoidal this poses some difficulty: if the emf is sinusoidal, then it means that you need to allow a sinusoidal flow of current into the motor. Why is a sinusoidal flow of current necessary? That is because, as we have been saying all along, you want a constant mechanical power output from the machine, and if you have a three-phase induced emf that means you have, let us say, three sources which are looking like this. Let us say they are connected at one end and these are emf sources; then you send three flows of current, so this is a current source and this, let us say, is an induced emf. So from the source to the induced emf there is active power flow. How do you find out how much active power flows in this electrical circuit? So, RMS is one way, but RMS means that you are looking at one cycle of the waveform and then you take the RMS of that; RMS does not say how the waveform is going to be with respect to time. RMS is a sort of lumped information about the entire thing, leaving out the details of what is happening. But what we want is that the active power flow with respect to time should be constant; this is our desire. And by circuit analysis what one can understand is that if the induced emfs are of the form er equals some E multiplied by cos omega t, ey equals E multiplied by cos of omega t minus 120 degrees and eb equals E cos of omega t minus 240 degrees, if this is the induced emf set, such induced emfs are called balanced induced emfs, where the amplitudes are all the same, they are all well defined sinusoids and they are phase shifted by 120 degrees. So, if the induced emfs form a balanced set, then if the flow of current is such that ir is I cos of omega t plus phi, iy is I cos of omega t plus phi minus 2 pi by 3, that is 120 degrees, and ib is I cos of omega t plus phi minus 240 degrees.
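As a quick numeric check of the claim developed next, that this balanced set of EMFs and currents delivers a constant instantaneous power, one can evaluate the three-phase sum at a few time instants; the amplitudes, frequency and phase angle below are arbitrary values chosen only for the sketch.

```python
import math

E, I, phi = 100.0, 5.0, math.radians(30)   # arbitrary EMF amplitude, current amplitude, phase
omega = 2 * math.pi * 50                   # arbitrary electrical frequency (rad/s)

def instantaneous_power(t):
    """e_r*i_r + e_y*i_y + e_b*i_b evaluated at time t for the balanced set."""
    p = 0.0
    for k in range(3):                      # the three phases, shifted by 120 degrees each
        shift = k * 2 * math.pi / 3
        e = E * math.cos(omega * t - shift)
        i = I * math.cos(omega * t + phi - shift)
        p += e * i
    return p

# every instant gives the same value, (3/2) * E * I * cos(phi)
for t in (0.0, 0.003, 0.007, 0.012):
    print(round(instantaneous_power(t), 6))
```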
So, if you have such a set, then if you draw the waveform of er into ir plus ey into iy plus eb into ib, note that I have used lower case for all these quantities, which means we are taking instantaneous values, instant to instant. At every instant you take the value of e, multiply by the corresponding value of i, over all the phases, and add all of them up. So, we are talking about instant to instant variation. If you plot this, this will be the nature of the waveform. Therefore, if you are going to have a motor with a sinusoidal emf, then it is necessary to send a sinusoidal flow of current into the motor so that the mechanical output power is flat with respect to time. If the mechanical output power is flat with respect to time, then it also means that if you draw the electromagnetic torque, that will also be constant with respect to time for constant speed operation. Because of this you want the flow of current to be sinusoidal, and if you want a sinusoidal flow of current into the induced emfs, what should be the nature of the output voltage generated by the inverter? That should also be sinusoidal. This therefore demands that the inverter generate a sinusoidal voltage. You know the inverter is this circuit; where is that circuit, this circuit, we have already said this is the inverter. Now the problem is that the inverter consists of switches and it cannot generate a sinusoidal voltage, it cannot generate a voltage waveform that varies smoothly over time, it is not possible. So again, like what we said before, when we say sinusoidal what we really mean is that if you take the switching waveform generated by the inverter, on an average it looks like a sinusoid. That is what we really have; therefore the inverter will generate a sinusoid plus a lot of switching ripple. And therefore you will not get a smooth electromagnetic torque waveform like this; what you will get is a waveform like that. And we already concluded that this waveform is acceptable, provided you are able to keep the amplitude of the ripple small and its frequency high. So, the question then is how does the inverter generate this. So take, for example, this particular leg. Let us take one leg of the inverter and consider the voltage at this point with respect to this potential. So, let us call this node R and let us designate this as ground. What will be the voltage VR at any instant of time? The switches have to be operated necessarily such that either this switch is on or that switch is on; you cannot have, at any given instant of time, both the switches being on because that will short circuit the source. You can of course have a situation where both switches are off, nothing will go wrong. But if you leave both switches off, then you do not know what this potential is, so nobody operates it like that. So, the switches are operated such that this potential is always well defined, which means that at any given instant of time either this switch is on or that switch is on, and if one of the switches is on the other one is necessarily off. So, that is the manner in which the switches are operated, which means what will be the potential of this node R with respect to ground? What do you think it will be? So, you have these two situations; call this switch one and let us say this is switch four, we have the situation one is on and four is off, or one is off and four is on. These are the only states that are allowed.
So, in this case, what is VR? Let us say this is equal to VDC; VR will be VDC. In this case what is VR? 0. So, VR will either be VDC or 0, that is all. So, you can never get a negative voltage at this point, it is not feasible. You cannot go to a voltage lower than 0 because the structure does not allow it. But if you want to generate a sinusoidal voltage you have to have negative voltage, the voltage has to go negative. So, what you do is you do not generate a sinusoidal voltage with 0 mean, you generate a sinusoidal voltage such that it is VDC by 2 plus some A cos omega t. So, you are generating a level shifted sinusoid, which means the output at this point will be a DC quantity plus a sinusoid of a certain amplitude at the low frequency. So that means, as we have seen in the earlier lectures, if you are going to generate a waveform, some V naught with respect to time, which looks like this and so on, this contains an average value which can be realized as: you take the average of this and that is going to be the average. But now instead of this, suppose you develop a waveform which is like this. What will happen to the average? The average will be a waveform that increases, goes to some maximum, and then what I do is generate something like this, so now the average goes down and goes towards 0. So, now what we are doing is we are generating a sinusoidal moving average. As you move the window along, you find that the average defined over one cycle of switching is varying along this x-axis and that variation can be made to define a sinusoid. But nevertheless, you cannot get a negative variation because the topology prevents you from generating negative voltages. And therefore, if you generate not just the sinusoid but the sinusoid shifted by a DC offset and you define these variations in accordance with that sinusoid shifted by a DC offset, then it is feasible to generate the sinusoid completely shifted by that offset. So, you will have the full cycle being defined like this. And therefore, what you get is VDC by 2 plus some A cos omega t generated at this node R. How do you now generate another output voltage which is phase shifted by 120 degrees? All you need to do is phase shift this sinusoid by 120 degrees for the next phase and by a further 120 degrees for the phase after that. Therefore, you are generating three sinusoidal voltages of the same amplitude, phase shifted by 120 degrees each. And you are also adding a DC component to each of them; how will that affect the rest of the circuit, will you see a DC current also flowing in addition to the sinusoidal currents? Let us say that you have a circuit and consider resistors here. All these DC voltages are equal, and let us say all these resistances are also equal. How much will be the flow of current here? Zero current will flow. And therefore, even though we are generating DC voltages here, since the DC is the same on all three phases it will not push a DC current into the circuit, into the motor; what will flow is only the current due to the sinusoidal excitation. And therefore, we achieve our goal of creating constant active power with this switching scheme. |
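A small sketch of the level-shifted sinusoidal references described above; the bus voltage, modulation amplitude and frequency are arbitrary values, and duty_ratio assumes the usual relationship that the leg's switching-cycle average output equals the duty ratio times VDC.

```python
import math

VDC = 300.0                      # arbitrary DC bus voltage
A = 120.0                        # modulation amplitude; must satisfy A <= VDC/2
omega = 2 * math.pi * 50         # arbitrary fundamental frequency (rad/s)

def leg_reference(t, phase_shift):
    """Average (low-frequency) voltage a leg is asked to produce: a sinusoid
    level-shifted by VDC/2 so that it never goes negative."""
    return VDC / 2 + A * math.cos(omega * t - phase_shift)

def duty_ratio(t, phase_shift):
    """Duty ratio for the top switch of the leg (stays within [0, 1] when A <= VDC/2)."""
    return leg_reference(t, phase_shift) / VDC

def v_line_ry(t):
    """Voltage between nodes R and Y: the common VDC/2 offsets cancel, leaving a pure
    sinusoid, which is why the added DC component drives no current into the motor."""
    return leg_reference(t, 0.0) - leg_reference(t, 2 * math.pi / 3)
```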
Introduction_to_Robotics | Lecture_11_Introduction.txt | Hello good morning to all of you and welcome to this course on introduction to Robotics. So, this in an introductory course offered in the department which actually covers the fundamentals of Robotics. As you know, Robotics is a very interesting field, so we thought that we should offer a course on Robotics but we want to keep all the students from various disciplines part of this course. That's why we made this as a bridge course which students from any branch can opt for and we will teach the fundamentals of Robotics which actually covers the disciplines of Mechanical, Electrical, Computer science, and ensure that this bridge course will help the students to learn advanced courses in robotics and become a expert in Robotics. So, as I mentioned, this is offered as a bridge course for students from various disciplines to learn the basics of Robotics and since it is a multi-disciplinary subject, we want to ensure that the students get the basic understanding of mechanical, electrical and Computer Science fundamentals related to the Robotics and that is irrespective of their branch of specialization. So, we expect students from Mechanical, Electrical, Civil, Computer Science, Chemical, Aerospace, Electronics - so irrespective of their branch of specialization, we feel that Robotics will be a core area where the students would like to specialize and therefore, this course is being offered. So, we have the course offered in four modules. As you can see here, the module 1, basically introduction to the field of Robotics, without giving too much of technical details, but we will look into the history, growth and the applications of Robotics. So, how the Robotics field started its growth and what is the current status and what are the various applications of Robotics Technology will be covered in the first module. And in... we cover some of the details about the laws of Robotics and other interesting aspects of Robotics. That module is basically to give you an idea of the wide applications of Robotic technology in various domains. And then we will go into the module 2. Module 2 is basically the mechanical aspects of robotics. We talk about the robot mechanisms, the Kinematics of Robotics - basically covering the coordinate transformations, DH parameters, forward Kinematics, inverse Kinematics, Jacobians and a bit on the static analysis of manipulators. And that actually is more towards the mechanical aspects of robots, basically the robot construction and how the Kinematics play a major role in the design and controls of robots will be discussed. And this mainly, for the benefit of students from other than mechanical streams so because some of the mechanical stream students will be aware of some aspects of this, but for others, especially those who are from Civil or Electrical or Computer Science background, will be having very limited knowledge in this aspect so we wanted to ensure that those students also will get a good understanding of the mechanical fundamentals of Robotics. Then the module 3 is on actuators. Basically the different kinds of actuators are used in Robotics, of course there are electrical, hydraulic and pneumatic actuators, but this module 3 covers electrical actuators and the selection of actuators for Robotics or robot applications. So, it will be covering the DC Motors, BLDC Servo Motors, then a bit on the sensors and the sensor integration and the control. 
So, we will talk about PWM control, Join Motion Control and a bit on the feedback control and computed torque control. This is purely on the electrical side of robotics and students from electrical background will be knowing most of these aspects. But still we wanted to ensure that how these are applicable in Robotics field is being discussed in this particular module. And the third module, sorry the fourth module is on the computer science aspects - basically the perception, localization and mapping. And we will talk about the probabilistic robotics where the path planning and other aspects of Robotics will be covered. This is purely from the computer science perspective - that will be a lot of algorithm development and how these algorithms are used in Robotics for path planning, localization and mapping will be discussed. So, this is the overall content of this course and as you can see, it is a, three distinct things are being covered in this course and therefore we have three faculty teaching this course. The module 1 and 2 mostly talk about the mechanical aspect of Robotics, will be covered by me. And then the electrical part will be covered by a professor from the Electrical department and this will be taught by a professor from Computer Science department. So, as you can see here, the focus is not exactly on the detailed robotic analysis or robot design, but the objective here is to make sure that the robotic students get an overview of these three distinct field which are playing a major role in the design and control of robots. And this will form as a bridge course or a base course for all of you to make sure that the advance courses in Robotics will be easy for you to follow and you will be able to take many courses in this field whether it is electrical control courses or advanced actuation and control sensing courses, or the courses on localization or perception or artificial intelligence, machine learning. So, all those things you will be able to take as advanced courses so that you can actually specialize in one area but you will be aware of the importance of other domains in the design and development of robots. So, that is going to be the content of this course. And as I mentioned the, of course the module 5 will be more on the application side, I mean how do you actually integrate these modules into a design project or design of a particular application robots. That will be the module 5, but more of integration of these four modules. So, the teachers as I mentioned - I am Asokan, I am from the department of Engineering Design, so I will be teaching the module 1 and module 2. And Professor Krishna Vasudevan, who is from the Electrical Engineering department will be teaching the electrical model and professor B Ravindran who is from the Computer Science department, he will be discussing the module 4 which discuss the computer science aspects of Robotics. So, that is the syllabus and the faculty. Now, I will start with module 1 today and then module 1 and, 1 will be very short, we will be having maximum two lectures and then we will start the module 2, then go ahead with module 3 and 4. So, the contents of module 1 will be more of a generic introduction of Robotics. So, I will be talking about some interesting applications of Robotics with robots everywhere and for everyone. And then we talked about the laws of Robotics and a little bit on the history and evolution of Robotics. 
And then a brief description of various applications of Robotics Technology, especially the two main areas of industrial robots and field and service robots. And then we will look into the various topics and other aspects of Robotics - what is Robotics all about and what are the major topics in Robotics which you should be aware of as well as which you can decide to specialize in one or more of these topics, so that you can be an expert in one area of robotic technology. And at the same time you will be able to apply the technology for the design and development of any new robot application. So, let us go to the first model and the details. So, if you can look at the Robotics Technology now or the robot application, you can see a robot almost every field whichever you can think of. For example, if you think of a robot for underwater, yes we have underwater robots. If you think of a robot for elderly care, yes there are elderly care robots. You talk about robotics for medical applications, yes that are a wide variety of applications for medical robotics. And you talk about defense applications, you talk about aerial application, you talk about entertainment you can see that Robotics application is almost everywhere in the world, in the field. So, if you look at this application - so what you are seeing here is a underwater robot. As you can see here - this is an underwater robot which can actually go into the water and it can go the depths of 500 to 1000 meters and carry out important tasks like either an observation kind of work or salvaging of an underwater, sunken object are carrying out small repair works under water. So, that is one major area of application of robots nowadays and lot of research and development work is going on in this area. And another application of this you can see here is a industrial robots where robots are being used for various applications in industry. So, probably one of the oldest application of robotic technology in the production field - you can see these two robots, the two robots are working in cooporation. They are actually holding a work piece and this work piece is being loaded to a machine and a robot is helping the machine to hold the object while the object is being machined or operated upon by the machine. And then, after completion of the work the robots are actually able to take it back, reorganize it, reorient the work piece and then feed it again to the machine. So, it is actually a very complex operation which may require three or four people to continuously tend to the machine. Here, these two robots are capable of doing it without any human intervention. And this is possible because of the mechanical system of the robot, the electrical sensors and controls and of course the algorithms which actually help the robot to carry out the task in cooperation. And this kind of applications are becoming more and more popular nowadays and there are a lot of industries which actually using industrial robots for their day-to-day production activities And here, you can see in this one - a mobile robot and this mobile robot - you might have seen this kind of robot in many applications, so they are actually wheeled robots and we can have wheeled robots or we can actually have tracked robots or we can have some other way of locomotion also. And you can see these are autonomous robots. 
So, they can actually, they can be planned to carry out some interesting task or they can be planned to go from point A to B and then to C and then come back to its original location. And as it moves, you can see that they can actually, it can avoid obstacles and it can identify the path where it is going. It can, it will be able to locate its path and localize itself, transfer the data back to the control station and carry out any task assigned to them and then come back to the original location. So, this kind of mobile robots are also becoming very popular nowadays and it is being actually extended many other applications like on-road vehicles, autonomous vehicles and many other applications. So, mobile robots, they actually moved out of the laboratories nowadays, they are actually going into the field and trying to see whether we can actually use it in industry space, workshop floors or in hospitals or in many other places where we want to have autonomous mobility for robots. And this one, this is very familiar with you - this one - which is known as the aerial robot. These aerial robots are nowadays, they call, we call just drones or sometimes we call just quadrotors when there are only four rotors. And here you can see, it has a got six rotors and they can actually vertically takeoff and land - that is one of the advantage of this kind, we call just VTOL, Vertical Takeoff and Landing, and they are widely used in nowadays for many surveillance applications and they are being used in the agricultural applications. And of course there are no limits of applications; we can actually find many applications for this kind of robots, they are becoming very popular. And this is only one kind of an aerial robot, there are different kinds of aerial robots. There are fixed wing aeriel robots and there are flapping wing aerial robots. So, you can actually see that the area robot itself has got multiple categories and applications. And what you are seeing here - this is kind of robotic systems which students will be developing as part of their academic activities because you can see this is like a kind of a spider, we can have legs and then we can have controllers, motors, sensors, actuators and we can actually program it to move in a particular direction or can have a circular motion or a straight line motion and we can plan and execute these task. This is more for students to learn the basics of Robotics because here there is a sensor module involved, there is an actuator module involved, there is a mechanical system involved and there are programming and controls involved. So, by designing this kind of systems you will be able to learn the basics of integrating various systems, sub-systems into a robotic system and then develop it as a application based robots. And this is the best way to learn the basics of Robotics because many students asked me I am interested in Robotics, how can I start working in Robotics? There is no direct answer to that, of course you need to learn a lot to do this. There are a lot of theory involved, a lot of practical exposure needed and there are lot of interdisciplinary knowledge involved. So, you can actually, you can learn the theory and parallelly you can work on practical implementation of hardware systems or integration of hardware and software and then start making a few simple robots and then slowly you can learn the technology. And probably the most complex and most fascinating type of robots are the humanoid robots - a lot of talk about humanoid robots. 
And people have been trying to develop humanoid robots for quite a long time. There has been a lot of interest in developing human-like robots. Everybody wanted to have robots which look like humans, behave like humans and, if possible, have emotions like a human. Unfortunately, we are still far away from having a robot which can actually mimic human behaviour; there are a lot of challenges. What you are seeing here is a humanoid robot known as Asimo. This robot is quite an old one, and the developers have since improved it further. This robot can actually walk, push something, talk to people and respond to queries from people - that much work can be done by this kind of robot. Unfortunately, we are far, far away from having a robot which can actually do tasks like a human. The challenges are mainly in the decision-making capability and of course in the mobility and control aspects. A human has a lot of capability to control motions and dynamically stabilize the body, as well as to make decisions based on a lot of information collected from the sensors, while at the same time using memory and learning in making those decisions. Robots are still far away from reaching that stage; hopefully artificial intelligence and many other developments taking place may lead to a lot of improvement, but it may still take another 20 or 30 years for us to have even a very basic system which can mimic the behaviour of a human at a very basic level. What we require in terms of a humanoid robot is the intelligence of a 6-year-old child, or the agility of an 8-year-old, or the strength of a 15-year-old, or the decision-making capability of a 10-year-old; if we can actually build that kind of capability into a robot, that itself is more than sufficient for our applications. Unfortunately, we have a long way to go in order to reach this stage. So, this gives you only a very few applications, and there are many more which you can think of. There are applications in space - there are a lot of space robots; you might have heard about the Mars rovers. Apart from this, the current trend is basically to look at how to use these technologies for developing autonomous vehicles, and a lot of hype has been created around autonomous vehicles. The DARPA challenge is one challenge initiated a few years ago in order to develop autonomous ground vehicles or autonomous road vehicles, and you can see that a lot of universities compete in the design and development of autonomous cars. Of course, in the first few years it was not very successful, but later on many universities could come up with designs of multi-terrain robots which can go across different terrains, carry out simple tasks and come back to their original location. And what you are seeing here is a robot application for the medical field. What you see in the video is a robotic arm attached to a person - a person who has lost her limb or hand can be fitted with a mechanical arm which has got all the controls associated with normal operation. And the signals for these controls can actually be taken from the human body itself.
These signals can be taken using EMG - surface EMG signals picked up from the muscles - and processing these signals gives the intention of the person. This intention can be converted into command signals to the robotic arm, and the robotic arm can carry out the task the person wants to do. So, this is a practical, available system from a company called DEKA - you can search for DEKA and you will see this kind of development happening in the robotics field. And then of course we have many other robots which are known as elderly assistant robots. These are robots which can be used by elderly people who are alone at home: such an assistive robot can interact with the person, talk to the person and help the person with a few basic day-to-day activities. So, these are actual field applications of robotics technology. Apart from that, there is a lot of interest in robotics among children, especially in the field of educational robots and entertainment robots, and a lot of companies have come up to capitalize on this interest of students and others in robotics technology - they come up with robotic kits. There are also a lot of movies based on robotics. Most of the time a robot is projected as a kind of human, or human-like robot, which can do a lot of things which are mostly fiction. But people believe that a robot can actually do all those things, and that only the robotics scientists have somehow not managed to develop it for practical application. So, there is a lot of hype created by these movies, the media and the commercial establishments about robotics. Sometimes that is a good thing, because we get a lot of publicity, interest and curiosity. But at the same time it creates a lot of confusion among people about the real capabilities of robots - what a robot actually is, how we can define a robot, what the difference is between a robot and an automated machine, and things like that. So, this is something on which we, as robotics engineers or as engineers interested in robotics, need to have a little more clarity. We cannot be like a member of the public, without much knowledge about robotics, thinking that robots can do everything or that robotics is the solution for every problem. As engineers, we need more clarity on all these things, and the purpose of introducing them to you is to make sure that you have a better understanding of robotics and robotic technology. Now, looking at all this, the question is: how do you actually define a robot? You saw that there are multiple applications and multiple fields where robotics technology can be applied, and a lot of hype has been created about robotics as well. So, we need some understanding of how we really define a robot. You can say that it can be a hobby for many people - especially students and those who are interested in electronics and controls take it up as a hobby. They will build some small machine which can walk around or do some predefined task.
Or sometimes it is science fiction, because there is a lot of fiction written about robots and robotic technologies; and it is also a scientific and engineering discipline, and of course by now it is an industrial technology. So, robotics can be a hobby for somebody, it can be fiction and entertainment for somebody else, it is a scientific discipline, and it is a real technology which can solve many problems in the field. It is also sometimes controversial, and often misrepresented in the media: whenever somebody says they have a robot which can do something, without really checking whether it can do it or not, people immediately go to the media and try to create hype about it. Because of all this, no single definition is going to satisfy such a variety of perspectives and interests; with so many different applications and so many ways robotics is looked upon, it is very difficult to have a single definition for a robot. Long ago, a robot was defined as a software-controlled mechanical device that uses sensors to guide one or more end effectors through programmed motions in a workspace in order to manipulate physical objects. That was the definition given for robots long ago, and at that time a robot effectively meant an industrial robot - that was the first kind of robot that actually came onto the market, and the definition was written around it. You can see that it is a mechanical device, it is controlled by software, it uses sensors, and it guides one or more end effectors. An end effector is nothing but the tool attached to the robot: if it is a welding robot, there is a welding tool; if it is an assembly or pick-and-place robot, there will be a gripper; if it is a painting robot, there will be a paint gun. The end effector has to move through programmed motions in a workspace in order to manipulate physical objects. When I say manipulate, I mean I can take an object, manipulate it and place it somewhere else, and this happens within a workspace - a fixed area or fixed space in which I can move the object. So, the robot was defined in that way: a software-controlled mechanical device which has got sensors and end effectors, which can be programmed, and using the program we can move objects around using the end effectors. That definition is really true for the industrial robot. But you can see that the definition is not really applicable nowadays, because most of the robots you see in the present scenario need not have an end effector, need not have a fixed workspace, and need not manipulate objects at all.
So, for many of these - an underwater robot, a surgical robot, an aerial robot - the robot need not manipulate objects and need not have a fixed workspace. Therefore, a new definition was given: Robotics is the intelligent connection of perception to action. It says that if I have a perception of something and I can make an action happen through some intelligent connection between the two, then I call it robotics. This is the current definition given for robotics, and with this definition we can bring a lot of things under the umbrella of robotics. We will see how this definition really helps us to have a clear understanding of what is a robot and what is not a robot. We will discuss this in the next class: I will explain what we actually mean by this intelligent connection of perception to action, and then we will look into the history and development of robotic technology, and then we will look at more detailed applications also. So, thank you very much for listening. I will meet you in the next class. Thank you. |
Introduction_to_Robotics | Lecture_41_DC_Motor_Control_Regions_and_Principles_of_Power_Electronics.txt | In the last class, we talked more about how DC motors actually work and, when a motor is combined with some sort of load, how you determine where the system will operate and at which speed it will run. It is determined by the intersection between the speed versus torque characteristic of the motor and the speed versus torque characteristic of the load: where they intersect is where the system together will operate. When I say load, it is everything that is on that side; in this case what we have drawn is a simple fan, but in general it may be a series of mechanical linkages connected to something else - it can be as sophisticated as you like. However the load looks, it is represented by its graph, and the motor itself is represented by its graph, and therefore where they intersect is where the system is going to operate. Now, as I mentioned, there is a certain maximum speed, maximum current and maximum voltage for the motor; those are the limits of operation. Now look at the construction of the motor we considered. On the stator we have a magnet on each side producing a magnetic field: if one face represents a north pole, the opposite face represents a south pole, and around them is the outer circumference of the stator. The field lines come out of the north face, go into the rotating member - the armature - come out on the other side, and close their path through the stator, so that field lines form a loop on each side. In this arrangement there is no way to adjust the strength of the magnetic field, because you have a fixed magnet: it will generate whatever field it generates, and there is no adjustment you can make. But there are machines where this is not the situation. There, in sectional view, you have the outer surface of the stator cylinder, a pole structure on one side and a similar pole structure on the other, the inner surface, and the rotating member - the rotor - sitting in the middle. Here, if you want to generate a magnetic field, you put several turns of a conductor around one pole and bring the ends out, put several turns around the other pole and bring those out, connect the two windings in series so that whatever current you send through one also flows through the other, and connect a DC source to them. Because there is a flow of current in this loop, it generates a magnetic field, and you can arrange the direction of the current and the winding direction such that the field flows out of one pole into the rotor and returns through the other pole, closing the loop. So, as far as the rotor is concerned, one pole still looks like north and the other like south. These sorts of machines are called wound field machines.
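To make the idea of the operating point concrete, here is a small numerical sketch. The motor and load curves below use assumed numbers purely for illustration (they are not values from the lecture): the motor torque falls linearly with speed, the fan-type load torque grows with the square of speed, and the steady-state operating point is where the two curves cross.

```python
# Hypothetical motor and load torque-speed curves (illustrative numbers only).
# Motor: torque falls linearly with speed; load (fan): torque grows with speed^2.
def motor_torque(omega, t_stall=10.0, omega_noload=200.0):
    """Linear motor characteristic: T = t_stall * (1 - omega / omega_noload)."""
    return t_stall * (1.0 - omega / omega_noload)

def load_torque(omega, k_fan=4e-4):
    """Fan-type load: T = k_fan * omega^2."""
    return k_fan * omega ** 2

def operating_point(lo=0.0, hi=200.0, tol=1e-6):
    """Find the speed where motor torque equals load torque, by bisection."""
    f = lambda w: motor_torque(w) - load_torque(w)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        # Keep the half-interval over which f changes sign (the crossing).
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

omega_op = operating_point()
print(f"operating speed ~ {omega_op:.1f} rad/s, torque ~ {load_torque(omega_op):.2f} N·m")
```

With these made-up curves the crossing comes out near 107 rad/s and about 4.6 N·m; changing either curve moves the intersection, which is exactly the point of the graphical argument in the lecture.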
Whereas the earlier kind of machine is known as a permanent magnet machine, sometimes called a PMDC machine, this one is simply called a DC machine, or a wound field machine. Obviously the mechanical arrangement is a little more involved in the second case, and this machine will therefore be bigger than the other one, which is quite evident from the figure. But what is the advantage you get? The advantage is that the strength of the field generated in the second case can now be adjusted, by adjusting the current sent through the field winding. The armature and whatever is in the rotating member is the same in both cases; the only difference is that in the second case you can adjust the level of the magnetic field, whereas in the first case you cannot. Why is this important? If the rotor rotates with a speed of omega - in either machine - then, as we said some classes ago, there is an induced EMF which depends on the speed and on the strength of the magnetic field. For a particular design of the armature, with a certain number of conductors connected in a certain fashion, as the speed increases the induced EMF obviously increases, and the machine is designed to withstand only a certain maximum voltage. The induced EMF is simply proportional to B multiplied by omega. So, as the speed of the machine increases, the induced EMF increases, and at a certain value of speed it will equal the maximum voltage the machine is designed to withstand - which means you cannot go beyond that speed, if the magnetic field is kept the same. In the first machine there is no way to adjust the magnetic field, and therefore that decides the maximum speed at which you can operate the motor. In the second case, however, you have a mechanism to adjust the magnetic field. Usually you operate the machine with the maximum magnetic field it allows. Why? Again, go back to the equations we wrote: the electromagnetic force, and hence the torque, produced inside the machine also depends on the level of the magnetic field, and in operation you would like to generate the required torque with the smallest number of amperes. That means you would like to operate the machine with the maximum magnetic field it is designed to handle. But once you reach the speed at which the induced EMF equals the maximum voltage the machine can withstand, you may still want to go beyond that speed for some reason - and machines are usually designed so that the integrity of the mechanical system is valid for much more than the speed at which the induced EMF equals the maximum voltage. If you do want to go to a higher speed, then, looking at this equation, the only way is to reduce the level of the magnetic field, because you cannot violate the condition that the machine can withstand only so much voltage. Student: (())(10:46) Professor: Yeah. See, let me rewrite those equations here.
The generated electromagnetic torque is simply proportional to B multiplied by i, and the generated EMF is proportional to B multiplied by omega. There is a certain maximum applied voltage the machine can withstand. Let us assume that the machine you have is designed for a maximum voltage of 100 volts, and B equal to 1 weber per metre square; for simplicity I am assuming that the EMF is equal to B into omega, and therefore the generated electromagnetic torque is equal to B into i. This means that the maximum speed to which the machine can go is 100 divided by 1, which is 100 radians per second. How much current it draws at that speed is a separate question - we are not bothered about that; you can go up to 100 radians per second. The machine is also designed to take only a certain amount of current, as we saw in the last class: the size of the conductors inside the machine, as the designer has chosen it, fixes how many amperes it can take. Let us say, arbitrarily, that the maximum flow of current is i equal to 5 amperes. So the maximum speed is 100 radians per second and the maximum torque is 5 newton metres. Now, this expression says that the developed electromagnetic torque is simply B into i, and when I need the machine to develop an electromagnetic torque, I need to send a flow of current. Suppose I want to develop 4 newton metres; how do I do it? I can do it in many different ways: I can say B equal to 2 and i equal to 2, or B equal to 0.5 and i equal to 8, or B equal to 0.25 and i equal to 16. Which combination will you use? In the first machine you have no option at all: there is a given B, there is no way of adjusting it, you have to live with whatever B it generates and send whatever current that requires. So in the first case this issue does not arise. The issue arises only in the second case, where you have the facility to adjust the magnetic field. If you can adjust the field and you want to generate 4 newton metres, which of these three options will you choose? Going by the current alone you would choose the first, because it results in less than 5 amperes - the least current of all. But the machine is also designed to withstand only a certain maximum strength of magnetic field, and you cannot go beyond that. We said B can only be 1 weber per metre square, so B equal to 2 tesla is ruled out; that is not possible. Given that, which one will you choose? You will not choose either of the remaining two: you do not want B equal to 0.5 or B equal to 0.25, because you can go up to B equal to 1, and at B equal to 1 the current required is only 4 amperes. Therefore it makes sense to operate the machine at the maximum flux density, the maximum magnetic field it is designed for. So, under most operating conditions you choose B equal to the maximum B of the machine.
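A small sketch of the arithmetic above, using the same illustrative numbers from the lecture (Vmax = 100 V, Bmax = 1 Wb/m², imax = 5 A) and the simplified relations E = B·ω and T = B·i:

```python
# Illustrative numbers from the lecture; relations simplified to E = B*w, T = B*i.
V_MAX = 100.0   # maximum voltage the machine can withstand [V]
B_MAX = 1.0     # maximum flux density the machine is designed for [Wb/m^2]
I_MAX = 5.0     # maximum armature current [A]

omega_max = V_MAX / B_MAX   # speed at which the induced EMF hits V_MAX -> 100 rad/s
torque_max = B_MAX * I_MAX  # maximum torque at full field -> 5 N*m
print(f"rated speed = {omega_max:.0f} rad/s, rated torque = {torque_max:.0f} N*m")

# Different (B, i) combinations that all give 4 N*m of torque.
T_REQ = 4.0
for B in (2.0, 1.0, 0.5, 0.25):
    i = T_REQ / B
    feasible = (B <= B_MAX) and (i <= I_MAX)
    print(f"B = {B:4.2f} -> i = {i:5.1f} A  {'ok' if feasible else 'not allowed'}")
# Only B = 1.0 (full field, 4 A) satisfies both limits, which is why the
# machine is normally run at maximum field.
```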
Which will then result in the minimum current for the required level of torque. Note that we are not talking about the load torque - the load torque is an external phenomenon, and you can apply whatever load you want; we are only talking about what the motor can generate. Student: (())(17:45) Professor: Yes. So, there are different restrictions on the level of the magnetic field. One is the maximum current that the field conductor can carry. The second is that the magnetic field has to flow through the whole stator arrangement, through all those regions of iron, and you do not want the iron to go deep into saturation; you need to look at whether the material is saturating and whether it can handle that flux density. If you demand an unnaturally high B, it may simply not be feasible to generate it in that iron geometry at all. So one has to look at the maximum field the geometry can support and the maximum flow of current that can generate that kind of field, and all of that together decides the value of B max. For a given machine that somebody has designed and handed to you, he will tell you the maximum flux density you can have - or rather, having designed the machine, he will not state the flux density explicitly; he will state the maximum field current you can allow, because that is what you can control from outside, and for a given field current the machine generates so much magnetic field by design. However, there are application requirements where you will need to go beyond the maximum speed of 100 radians per second in this example. Let us now say that you want to go to a speed of 300 radians per second. How can you do that? Looking at this equation, you can do 300 radians per second only if you reduce the magnetic field; there is no other way, because the machine cannot take a higher EMF - there are elements in the machine which decide the maximum voltage it can withstand. So higher speeds are possible if the magnetic field is reduced, and as I said, this option is available only in the second machine; in this particular permanent magnet machine it is not an option at all. Now look at the electrical power input to the machine: it is the applied voltage V multiplied by the flow of current i. You can apply a certain maximum voltage and the machine can take a certain maximum current, so the maximum electrical input is limited by the machine rating, V max into i max. The output power, which is mechanical, is torque multiplied by speed. Going to higher speeds therefore means that above the rated speed the machine cannot deliver the rated level of mechanical torque. We said the maximum torque the machine can generate is 5 newton metres; if you want the machine to still generate 5 newton metres at a speed of 300, you are looking at 1500 watts of output delivered from the machine.
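As a quick check of the field-weakening idea, with the same assumed machine numbers as before, the field needed to keep the EMF within 100 V at 300 rad/s, and the torque the 5 A current limit then allows, can be worked out directly:

```python
# Same illustrative machine as above: E = B*w must not exceed V_MAX.
V_MAX, I_MAX = 100.0, 5.0

omega_target = 300.0                 # desired speed [rad/s], above the 100 rad/s rated speed
B_required = V_MAX / omega_target    # field must be weakened to about 0.33 Wb/m^2
T_available = B_required * I_MAX     # torque capability at the weakened field

print(f"field at {omega_target:.0f} rad/s : {B_required:.2f} Wb/m^2")
print(f"max torque there     : {T_available:.2f} N*m (down from 5 N*m at rated field)")
print(f"power check          : {T_available * omega_target:.0f} W <= {V_MAX * I_MAX:.0f} W input limit")
```

The weakened field drops the torque capability to about 1.7 N·m, and the corresponding output power sits exactly at the 500 W input limit - which is the consistency the lecture argues for next.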
Whereas the input power is limited: the input can only be 100 volts multiplied by 5 amperes, which is only 500 watts. This is impossible - you cannot supply 500 watts of input and get 1500 watts of output. So the only way to manage it, if you want to go to a higher speed, is to derate the level of mechanical torque the machine generates. And since the torque generated by the machine must equal the load torque demanded by the load, it means that if you want to go to a higher speed you must ensure that the load torque applied on the machine reduces as the speed increases. This kind of operation is called field weakening operation. In the field weakening mode, the torque that can be allowed to be generated - or, more precisely, the maximum torque that can be allowed to be generated - decreases as the speed increases. You can of course operate the machine at a torque lower than that maximum, but you cannot go above it. So consider a graph between the torque capability of the machine and speed. Note that it is the torque capability - how much the machine can be allowed to generate. The earlier graph we drew tells you what torque will be generated at a particular speed when you apply a certain voltage; it describes a particular operating situation. What we are drawing now is a different graph, which describes the maximum ability of the machine. On this graph, if this is the rated speed - determined by the induced EMF consideration - then from zero up to the rated speed you operate with the rated magnetic field; beyond that you have to reduce the magnetic field. Throughout, you can supply the machine with the rated armature current, and since in the first portion you have the rated magnetic field, up to the rated speed the machine is able to generate the rated torque. Beyond that point the machine cannot generate the rated torque; you have to derate it in accordance with the fact that mechanical torque multiplied by speed must equal a fixed number. Therefore, if you draw the graph of the machine's ability to generate torque versus speed, beyond the rated speed the graph is a hyperbola determined by torque into omega being equal to a constant. You can operate the machine anywhere inside this graph; this is the feasible operating region of the machine. Where exactly inside this region the machine will operate is decided by the actual load, the voltage you apply and so on - that is determined by the earlier graph.
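The capability envelope described here - constant torque up to the rated speed, then a constant-power hyperbola - can be sketched numerically as below, again for the assumed 100 V / 5 A / 1 Wb/m² machine:

```python
# Torque-capability envelope of the illustrative machine:
# constant torque below rated speed, constant power (T*w = V_MAX*I_MAX) above it.
V_MAX, B_MAX, I_MAX = 100.0, 1.0, 5.0
OMEGA_RATED = V_MAX / B_MAX          # 100 rad/s
T_RATED = B_MAX * I_MAX              # 5 N*m
P_MAX = V_MAX * I_MAX                # 500 W

def torque_capability(omega):
    """Maximum torque the machine may be asked to produce at speed omega."""
    if omega <= OMEGA_RATED:
        return T_RATED               # constant-torque region (full field)
    return P_MAX / omega             # field-weakening region: hyperbola T = P_MAX / w

for w in (50, 100, 150, 200, 300):
    print(f"omega = {w:3d} rad/s -> torque limit = {torque_capability(w):.2f} N*m")
```

Any steady operating point must lie on or below this envelope; where the machine actually runs inside it is fixed by the motor-load intersection discussed earlier.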
Now, having said all this, and having understood in the last class that armature voltage control is the desirable mode for controlling the speed, and that in most cases you do not have an automatically variable voltage source - you have only a fixed voltage source, and from that fixed source you have to generate a variable voltage for running the motor under various conditions - the question is how to do it. We found that putting a resistance in series is not the best way. So what else can you do? This is where the field of power electronics enters the picture, and it is important to understand the philosophy of how it operates. Let us take a simple example: you have a DC source of 100 volts and you want to connect it to a load, say a resistor, and you want this resistor to be supplied with a voltage of 25 volts DC. That is the goal. As we said earlier, you can simply connect another resistor in series, and since you know the value of the load resistance, the value of the series resistor can easily be determined by a simple potential division. But it is a hopeless exercise, because the circuit is highly inefficient: in order to deliver a certain power to the load resistor you dissipate much more in the series resistor, which is a useless situation. Now we need to understand certain things. What do you mean by DC? Let us say you take the voltage across the load and draw a graph of it with respect to time, and the waveform is a flat, constant line. Is this DC? Yes, obviously, no question about it, because there is no change in the voltage. Now I draw another graph, a waveform that varies but never changes polarity - is this DC? Yes, because it does not change polarity. Now I draw yet another graph, with ripple on it - is this DC? It has a DC component, and how do you determine that DC component? The average value of the waveform represents the DC component contained in it. So, we define DC as a waveform with a non-zero DC component. The first waveform, on the other hand, we call pure DC; the last waveform is not pure DC - it contains a DC component, but it is polluted with many other things. Therefore, when I say I want to supply 25 volts DC to that resistor, the question is: what form of DC? Do you require pure DC, or can you live with some impurities that need not be eliminated? For example, assume that you want to supply this DC to some integrated circuit. If you look at the datasheet of a typical IC, it will say that the IC accepts an input voltage of, say, 15 volts, with an allowable disturbance limit of perhaps 14.9 to 15.1 volts - that is the kind of band they allow. For such an IC you cannot allow the kind of huge disturbance we drew in the last graph; you really need a voltage very close to pure DC, and that is the kind of high-quality DC supply you have to generate.
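To put a number on why the series-resistor approach is "hopeless", here is a quick sketch for the 100 V source and 25 V load of this example; the 10 Ω load resistance is an assumed value purely for illustration:

```python
# Series-resistor ("potential divider") way of getting 25 V from 100 V.
# The 10-ohm load resistance is an assumed value for illustration.
V_SRC, V_LOAD, R_LOAD = 100.0, 25.0, 10.0

i = V_LOAD / R_LOAD                      # load current: 2.5 A
R_SERIES = (V_SRC - V_LOAD) / i          # series resistor needed: 30 ohm
p_load = V_LOAD * i                      # 62.5 W delivered to the load
p_series = (V_SRC - V_LOAD) * i          # 187.5 W burnt in the series resistor
efficiency = p_load / (p_load + p_series)

print(f"series resistor = {R_SERIES:.0f} ohm")
print(f"load power = {p_load:.1f} W, wasted power = {p_series:.1f} W")
print(f"efficiency = {efficiency:.0%}")   # only 25 percent - hence "hopeless"
```

Whatever load resistance you assume, the efficiency of this divider is only V_LOAD / V_SRC, i.e. 25 percent here, which is why a different kind of circuit is needed.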
Now, there may be other systems where you want to supply DC but the load can accept all kinds of junk as long as there is a non-zero average value. If you are looking at a resistor, the resistor simply does not care: all it does is dissipate electrical power, and it behaves the same way whether the supply is AC or DC. So for a resistor it is really immaterial, and just for argument's sake we are saying that we want 25 volts DC across it - meaning that we want the DC component of the waveform applied across it to be 25 volts. Having understood this, the question arises: is there a circuit I can put in between that is 100 percent efficient? If you look at the elements you can have in an electrical circuit, you are definitely familiar with three of them: the resistor R, the inductor L and the capacitor C; you would have seen these in some electrical course, or nowadays even in high-school physics. Of these three elements, which ones have dissipation associated with them? The resistor is a dissipative element, whereas the inductor and the capacitor are energy storage elements. If you send a flow of current i through an inductor L, that current generates a magnetic field and energy is stored in that field; it is not lost, and if you reduce the current the field decreases and you can get the energy back. Similarly, if you apply a voltage V across a capacitor, half C V squared is the electric field energy stored in the capacitor, and if you reduce the supply voltage you can take that field energy back. Whereas in a resistor, if you send a flow of current i, i squared into R is lost. Therefore, if we are looking at a circuit with high efficiency, the resistor is not admissible; you cannot use resistors in the circuit. Now, one other element that is available is a switch. How do you decide the dissipation in a particular element? Voltage across the element multiplied by the current flowing through it. A switch may be either on or off; there are no other positions. If the switch is on, the voltage across it is zero, and some current flows through it, decided by the rest of the circuit - the switch itself does not decide how much current flows. If the switch is off, the flow of current is zero, and how much voltage appears across the switch is decided by the rest of the circuit. But in either case, V into i is zero: whether the switch is on or off, the dissipation in the switch is zero, because one of the two quantities is zero.
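A tiny sketch of this argument, with purely illustrative numbers: an ideal switch dissipates nothing in either state because one of the two factors in V·i is always zero, while a current-carrying resistor always dissipates i²R.

```python
# Dissipation in an element = voltage across it * current through it.
def dissipation(v_across, i_through):
    return v_across * i_through

# Ideal switch: one of the two quantities is always zero.
print("switch ON :", dissipation(v_across=0.0, i_through=12.0), "W")   # 0 W
print("switch OFF:", dissipation(v_across=100.0, i_through=0.0), "W")  # 0 W

# Resistor carrying current always dissipates i^2 * R.
R, i = 30.0, 2.5
print("resistor  :", dissipation(v_across=R * i, i_through=i), "W")    # 187.5 W
```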
Which means that for the high-efficiency circuit we want to put in between, the admissible elements are L, C and the switch. Now we come to the philosophy of how you can design such a circuit. We have said that what we want is a DC component of 25 volts - we are not demanding pure DC - and we can use these elements. The simplest thing one can do is just put a switch between the source and the load and operate it: keep it on for some duration and off for some duration - on, off, on, off. When the switch control signal is high we understand that the switch is turned on; when it is low, the switch is off. If that is what happens, what can we say about the voltage across the resistor? When the switch is on, the voltage across the resistor is 100 volts; when it is off, it is 0. So the voltage across the resistor will be 100, 0, 100, 0, and so on. What is the average value of this? If the on-time is one division and the period is four divisions, the average is one fourth of 100, which is 25. So we got what we wanted: a waveform whose average is 25 volts. A simple switch is able to give the kind of voltage that is required. But the problem arises when the load is not a resistor - we are talking about DC motors. Put a DC motor there and assume the motor is equivalent to an armature resistance in series with an induced EMF; whether this is a good model is a separate question, but take it as the model. At constant speed, K into omega is constant, so that induced EMF inside the circle is pure DC. You have 100 volts at the input, and you want 25 volts at the motor terminals, which is what we have designed. Let us say the induced EMF is 20 volts and the armature resistance is 0.5 ohm. If I now want to draw the waveform of the armature current i, how will it look? It has the same shape as the voltage waveform: pulses. And what will the amplitude be? 100 minus 20 is 80, divided by 0.5, which is 160 amperes. That will be the waveform of i. The question is, is this desirable? It is not desirable. Why? Student: (())(46:32) Professor: In this particular case we said earlier that the machine can take only 5 amperes, but let us say it is a different motor and it can take 160 amperes. Even if the motor can take 160 amperes, is this desirable? Student: we want constant current Professor: We want constant current. Why do we want constant current? The motor will generate an electromagnetic torque which depends on this waveform - a scaled version of this current waveform is the electromagnetic torque - and you do not want to apply that kind of torque to the load; it is a huge ripple torque, and you do not know what the load will do. Therefore this current waveform is not acceptable. The voltage may be acceptable, but if this voltage results in this current, then it is not acceptable.
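A short numerical sketch of this switched arrangement with the lecture's numbers - 100 V source, 25 V average wanted, and the motor modelled as a 20 V back-EMF behind 0.5 Ω (with no armature inductance assumed, which is what makes the current so ugly):

```python
# Ideal switch between a 100 V source and the load, switched with duty ratio D.
V_SRC = 100.0
V_AVG_WANTED = 25.0
D = V_AVG_WANTED / V_SRC          # duty ratio = on-time / period = 0.25

# Case 1: resistive load -> the average voltage is all that matters.
print(f"duty ratio D = {D:.2f}, average load voltage = {D * V_SRC:.0f} V")

# Case 2: DC motor modelled as back-EMF E behind armature resistance Ra.
E, RA = 20.0, 0.5
i_on = (V_SRC - E) / RA           # current while the switch is on: 160 A
i_off = 0.0                       # switch open (no inductance assumed): 0 A
i_avg = D * i_on + (1 - D) * i_off
print(f"current pulses between {i_off:.0f} A and {i_on:.0f} A (average {i_avg:.0f} A)")
# The torque follows this current, so a pulsating 0/160 A waveform means a huge
# torque ripple - which is exactly why this raw arrangement is unacceptable.
```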
If this voltage can somehow be made to result in a smoother current, then it is acceptable. So the question is really not about the voltage that is generated; the question is what we can do about this current waveform. That we will see in the next class. |
Introduction_to_Robotics | Lecture_2_1_Kinematics_Coordinate_transformations.txt | Very good morning. Welcome back to this course on Introduction to Robotics. We will start the topic of manipulator kinematics today. In the last few classes I talked about the development of robotics in general and the applications of robotic technology in various fields, and I mentioned that industrial robotics is one of the major areas of application. When we discussed the important topics in robotics, I mentioned that manipulator kinematics is one of the most important parts which any robotics engineer needs to understand; it forms the basis for learning robotics in many other applications as well, such as field and service robotics, where you still need a basic understanding of manipulator kinematics. We start this discussion today, and as part of kinematics we will first talk about object location and motion. Kinematics basically deals with the position and velocity relationships in robotics; to understand those, we first ask how we represent an object in 3D space and how we represent the motion of an object in 3D space. Then we will talk about transformation matrices: if an object moving in 3D space undergoes translation and rotation, how do we represent these transformations using matrices? Then we will talk about something called homogeneous transformations, which we need in order to represent object transformations. Once we have learned this, we will move on to the forward kinematics of industrial manipulators, and then to inverse kinematics and differential relationships. These are the topics that will be covered in this part on kinematics. So, first we start with the basic relationships of object location and motion. Before we talk about the motion of an object in 3D space, there are a few things we need to understand. Since you are from different backgrounds - some of you from civil engineering, some from electrical, some from mechanical, some from chemical - I want to go through some basic mathematical relationships which some of you may already be familiar with, but this is to make sure that all of you understand the basic concepts, so that when we go to forward and inverse kinematics you will find it easy to follow. The first thing we will discuss is the position of a point in space - how we represent the position of a point in space, a very basic thing which most of you already know - and then we will see how to represent the location of a rigid body in space. The position of a point can be represented by its coordinates, but for an object, position alone is not sufficient: since an object is three-dimensional, we also need its orientation. So we need the location of an object, where location means the position and orientation of the object, and then we talk about the homogeneous transformation matrix.
And for object motion, we need to see that there are two ways an object can move: one is translation - linear motion from one point to another - and the other is rotation, rotary motion. So an object can have a translation and a rotation, and the question is how to represent this translation and rotation using matrices: the translation matrix and the rotation matrix. We then talk about a general rotation principle, and finally we put them together into the homogeneous transformation matrix, and take a few examples to show how the homogeneous transformation matrix can be used to represent the motion of an object in 3D space; a preview sketch of this is given just below. That is what we are going to discuss in this first part on object location and motion, before we get into the robot kinematics proper. So let us see. Suppose you have an object in the physical world - a three-dimensional object, with its centre of gravity marked as a point. If I want to say what the position of this object is, or how to represent this object in the physical world, I need two things: I need its position, and I need its orientation, because the object can be placed in different ways; if the position is the same but the object is placed in a different orientation, that is a different situation. So how do we represent the position and orientation of an object - that is, its location - and how do we describe its change of location? Suppose the object has moved from its initial position to a new one: how do we represent this change of position and change of orientation mathematically? These are the two basic issues we need to address before we talk about a robot moving a physical object, because ultimately a robot is used to move physical objects: I have an object here, and I want it moved over there in a different orientation. How to represent this in the case of a robot can be understood only if we know how to represent a 3D object in space. So what we do is look at something called coordinate frames, starting with the coordinates of a point. Suppose you have a three-dimensional coordinate frame with axes X, Y and Z. We know that a point in space can be represented by its coordinates: the X coordinate, the Y coordinate and the Z coordinate. So we say that the point P can be represented as (X, Y, Z) - that is how you represent a point in 3D space.
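As a preview of where this is heading (the lecture formalizes translation, rotation and homogeneous transformations later), the sketch below shows the usual way a rotation and a translation are packed into a single 4x4 matrix acting on a point written in homogeneous coordinates; the specific angle and offset are made-up illustration values.

```python
import numpy as np

# Preview sketch only: a 4x4 homogeneous transform combining a rotation about Z
# by an assumed 90 degrees with an assumed translation of (1, 2, 0).
theta = np.deg2rad(90.0)
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])   # rotation about Z
t = np.array([1.0, 2.0, 0.0])                          # translation

T = np.eye(4)
T[:3, :3] = R          # upper-left 3x3 block: orientation
T[:3, 3] = t           # last column: position

p = np.array([1.0, 0.0, 0.0, 1.0])   # point (1, 0, 0) in homogeneous coordinates
print(np.round(T @ p, 6))            # -> [1. 3. 0. 1.]: rotated, then translated
```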
This is a vector P: using the coordinate frame you are able to represent the vector P. The same holds for an object in space: the position of the centre of the object can again be represented using its coordinates. But that is not sufficient in the case of a 3D object, because it gives you only the position; we also need to know how the object is oriented with respect to a frame. So we always need a reference frame: whether it is a point or an object, we use a reference frame to represent its position, and that is what we call the coordinate frame. This coordinate frame becomes the reference frame for representing the position of a point or an object in space - it can be 3D space or a space of any dimension. Now, if you want to represent the orientation of the object, you need another frame. We attach a frame to the object itself, say with axes x', y' and z'. So we have the reference frame, and we have a frame attached to the body, which I call the mobile frame or the object frame - whatever name you want to give it. The position of the object can then be represented as the position of this attached frame: if we know the origin of this frame, its coordinates give the position of the object. And the orientation of the object can be represented with reference to the frames: what is the orientation of the attached frame with respect to the reference frame? How the two coordinate frames are aligned represents the orientation of the object. So, in order to represent the position and orientation of an object in 3D space, we represent them in terms of coordinate frames: we identify a reference frame, with respect to which the position can be expressed, and we attach a frame to the object, whose orientation with respect to the reference frame gives the orientation of the object. The frame used as the reference is normally referred to as the fixed frame, and the frame attached to the body is known as the mobile frame. So we have a fixed coordinate frame with respect to which the position is defined, and a mobile frame whose orientation with respect to the fixed frame represents the orientation of the object.
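A small sketch of this idea, with all numbers assumed for illustration: an object's location is captured by where the origin of its attached frame sits in the fixed frame, plus the directions of the attached frame's axes expressed in the fixed frame.

```python
import numpy as np

# Fixed frame axes (the reference), written as unit vectors.
f1, f2, f3 = np.eye(3)

# Assumed location of an object: origin of its mobile frame in the fixed frame,
# and mobile-frame axes m1, m2, m3 (here the object is rotated 90 deg about f3).
origin_m = np.array([0.5, 0.2, 0.0])   # position of the object
m1 = np.array([0.0, 1.0, 0.0])         # m1 points along f2
m2 = np.array([-1.0, 0.0, 0.0])        # m2 points along -f1
m3 = np.array([0.0, 0.0, 1.0])         # m3 coincides with f3

# Position of the object = coordinates of the mobile-frame origin.
print("position :", origin_m)
# Orientation = how the mobile axes line up with the fixed axes (dot products).
for name, m in (("m1", m1), ("m2", m2), ("m3", m3)):
    print(f"{name} . (f1,f2,f3) =", np.array([m @ f1, m @ f2, m @ f3]))
```

Those dot products between the two sets of axes are exactly the quantities that later fill in the coordinate transformation matrix.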
So, that is how we can represent the position and orientation of objects in space. Now let us go to the basic definitions for coordinates and coordinate frames, and then see how we can represent the orientation using coordinate frames. Suppose p is a vector in n-dimensional space R^n; then the coordinates of p with respect to a frame X are denoted p^X, and the vector p can be written as p = sum from k = 1 to n of p^X_k x_k, where p^X_k is the coordinate of p along the kth axis and x1, x2, ..., xn are the axes of the coordinate frame. In three-dimensional space you get p1, p2, p3 as the coordinates, and from them the vector. Now, how do we actually obtain these coordinates? Suppose you have a coordinate frame and the vector p; the coordinate of p along the x axis, px, is obtained as the dot product of the vector p with the unit vector of the x axis. Similarly, py is the dot product with y, and pz is the dot product with z. That is how we define the coordinates of p in this frame: the kth coordinate of p with respect to X is defined as p dot x_k. If it is a three-dimensional frame, k goes up to 3, and p^X_k equals p dot x_k - you take the dot product of the vector p with the kth axis and you get the coordinate of p. This is what is shown in the figure: p^X_1 is p dot x1, p^X_2 is p dot x2, and so on - that is how we get the coordinates of p. Having seen this, we now need to look at how to represent this in different situations, that is, how to represent the transformation of an object. That was about the point p; now suppose we have a three-dimensional object, and on that object we define a coordinate frame m1, m2, m3. So I take this object, define a coordinate frame m1, m2, m3 on it, and, as you can see, I define a point P at one of its corners - a point P in this coordinate frame.
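A minimal sketch of the definition p_k = p · x_k, using numpy and a frame that is deliberately not aligned with the standard axes; the vectors are assumed values for illustration:

```python
import numpy as np

# An orthonormal frame X = {x1, x2, x3}, here rotated 45 degrees about the z axis.
c = np.cos(np.pi / 4)
x1 = np.array([ c,   c,  0.0])
x2 = np.array([-c,   c,  0.0])
x3 = np.array([0.0, 0.0, 1.0])

p = np.array([1.0, 2.0, 3.0])          # some vector, expressed in the standard basis

# kth coordinate of p with respect to X is the dot product p . x_k.
p_X = np.array([p @ x1, p @ x2, p @ x3])
print("coordinates of p in X:", np.round(p_X, 4))

# Reconstruct p as the sum p = sum_k p_X[k] * x_k to confirm the definition.
p_back = p_X[0] * x1 + p_X[1] * x2 + p_X[2] * x3
print("reconstructed p      :", np.round(p_back, 4))   # matches the original p
```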
So, as you can see, I am using this corner point as the origin of the coordinate frame, with m1, m2, m3 as the axes of the frame. The coordinates of the point P with respect to the frame m1, m2, m3 can be obtained by taking P dot m1, P dot m2 and P dot m3. I attach this frame to the object, and therefore I call it the mobile frame, because I assume it moves along with the object. Now I assume that the object rotates, say by 90 or 180 degrees, so the point P moves to a new place, and I want to know the position of P after the rotation with respect to the frame. The object has rotated, the frame has rotated along with it, and we want the position of P with respect to the frame now. Will it be the same as before, or will it change? It is not going to change, because P with respect to the mobile frame is again P dot m1, P dot m2, P dot m3, and since m1, m2, m3 rotated along with the object, there is no change in these coordinates. So any point P on this three-dimensional object keeps the same coordinates with respect to the mobile frame as the object moves - rotates or translates - because the coordinate frame moves along with it; there is no change in the position of P with respect to the mobile frame. Now, assume that we place this object with respect to a fixed frame. I have a fixed frame - fixed with respect to my body or to the room - which I call f1, f2, f3, and initially I place the object so that the axes of the mobile frame and the fixed frame are aligned, m1, m2, m3 aligned with f1, f2, f3. If I do this, the point P with respect to m remains the same as before. Now I want to find the position of P with respect to the fixed frame - I am interested in knowing Pf. This was Pm, the point in the mobile frame; now what is Pf, the position of P with respect to the fixed frame? I can get it using the same principle: I take P dot f1, P dot f2 and P dot f3 - the dot products of the vector P with f1, f2, f3 - and I get the position of P in the fixed frame, while the position of P with respect to m is P dot m1, P dot m2, P dot m3.
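The invariance argument can be checked numerically with a small sketch (assumed numbers): rotate both the point and the body-fixed axes by the same rotation, and the mobile-frame coordinates P·m_k stay the same while the fixed-frame coordinates P·f_k change.

```python
import numpy as np

def rot_z(angle_deg):
    """Rotation matrix about the z axis."""
    a = np.deg2rad(angle_deg)
    return np.array([[np.cos(a), -np.sin(a), 0.0],
                     [np.sin(a),  np.cos(a), 0.0],
                     [0.0,        0.0,       1.0]])

# Fixed frame f1, f2, f3 and, initially aligned with it, mobile frame m1, m2, m3.
f = np.eye(3)                    # rows are f1, f2, f3
m = np.eye(3)                    # rows are m1, m2, m3 (before the motion)
P = np.array([0.3, 0.1, 0.2])    # assumed corner point of the object

# Rotate the object (and hence both P and the mobile axes) by 90 degrees.
R = rot_z(90.0)
P_rot = R @ P
m_rot = (R @ m.T).T              # each mobile axis rotates with the object

P_m_before = m @ P               # P . m_k before the motion
P_m_after = m_rot @ P_rot        # P . m_k after the motion  -> unchanged
P_f_after = f @ P_rot            # P . f_k after the motion  -> changed

print("P in mobile frame (before):", np.round(P_m_before, 4))
print("P in mobile frame (after) :", np.round(P_m_after, 4))
print("P in fixed frame  (after) :", np.round(P_f_after, 4))
```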
Now, if I rotate this object by 90 degrees at this location: initially we have the fixed frame f1, f2, f3, we have the object with its frame m1, m2, m3, and this point P. If I rotate the object by 90 degrees, P moves to a new position and m1, m2, m3 rotate along with it, so the mobile axes now point in new directions. So the object was initially in this position, it rotated by 90 degrees or 180 degrees, whatever it is, and the point P has reached here. Now, if I look at the position of P after the rotation, P with respect to the mobile frame will again be P dot m1, P dot m2, P dot m3, so there is no change in the position of P with respect to the mobile frame. Its position with respect to the fixed frame can still be written as P dot f1, P dot f2, P dot f3, but since the object and its mobile frame have rotated, the values of P dot f1, P dot f2, P dot f3 will not remain the same. So what this says is that when the object rotates, the position of P remains the same with respect to the mobile frame, but it changes with respect to the fixed frame. Therefore, if we need to represent the position of P with respect to the fixed frame while the object is moving, we need a method to find Pf when the object moves with respect to the fixed frame. That is basically the transformation of coordinates: how do we represent the position of a point of an object in 3D space, with respect to a reference frame, when the object is moving? That is the question of coordinate transformation. So, let us look at this in a slightly more detailed way. Assume we have the two coordinate frames: Pm is P dot m1, P dot m2, P dot m3, as you can see, and Pf is P dot f1, P dot f2, P dot f3, that is, the point P with respect to the fixed frame. Suppose there is a rotation. The coordinate transformation problem is to find the coordinates of P with respect to F, given the coordinates of P with respect to M. So we have Pm, and we want to know Pf when the object is moving. How do we represent Pf if we know Pm? Because Pm remains the same with respect to the mobile frame, but with respect to the fixed frame the coordinates change because of the motion of the object. So, finding Pf given Pm is what is known as the coordinate transformation problem: we know the coordinates of P with respect to M, and we want to know the coordinates of P with respect to F.
So, we want to know this PF, when this P is moving it is moving in different ways still we want to know, what is the P with respect to F. So that we can represent the position and orientation of the object with respect to a reference frame whatever happens to the object whether the object is moving rotating translating, we still want to know what is the position and orientation of the object. So if there is a car or a mobile robot in the room and the mobile robot is moving around. So, we want to know its position, so we need to have a reference frame and we want to know it is orientation also, so if you want to know the position orientation of the robot in the room, we represent the position with respect to a reference frame and the orientation will find out the mobile frame and then see how much it has rotated and we try to find out the orientation also. So the question here is how do we get this PF? That is the fixed frame coordinates when there is a movement for the object and for the point P which is when it is moving how do we represent is basically the coordinate transformation problem. Now if we look at this in detail. We can see that so the F is the frame with f1, f2, f3 and M is the frame with the m1, m2, m3 coordinate frames and F being an orthonormal frame. Then for each point P in Rn that is the point P in Rn as you can see, this is the point P in Rn we can say that PF is equal to A PM. So we can represent PF as matrix A multiplied by PM and this A is an n by n matrix defined by fk dot mj. So, this A is known as the coordinate transformation matrix. So, basically we are telling that we can write PF as a matrix A multiplied by PM that is if we know PM. If we know this PM if P represent to mobile frame then the point P with respect to the fixed frame can always be represented using this relationship where A multiplied by PM where A is known as the coordinate transformation matrix, and the elements of this transformation matrix can be obtained as fk dot mj. So, F is the fixed frame M is the mobile frame F is fixed frame, M is the mobile frame. So fk dot mj gives you the elements of Ak or Akj is fk dot mj. So if the three dimensions frame, so if have f1, f2, f3 and m1, m2, m3. So, we can see this initially they are aligned. So fk F and M are aligned and then when it is rotating you will be having a different point to be represented and that point P can be represented with respect to F by using this A. So, now if you write A for a three dimensional, for a three-dimensional space f1, f2, f3 and m1, m2, m3 then we can say that A11 is f1 dot m1 then f1 dot m2 and f1 dot m3, three elements. Similarly, f2 dot m1, f2 dot f2 dot m2 and f2 dot m3. Similarly, here f3 dot m1, f3 dot m2, f 3 dot m3. So this is the way how you get the transformation matrix. Basically, it says that if you have f1, m1, f2, m2, f3, m3 and if there is a rotation, now there aligned so you can see in this case f1 dot m1. So this, so here not aligned. So f1 dot m1 and this is m1 this is f1, so f1 dot m1 will be 0. So there will be they are not aligned so it will be getting it as 0. So whenever there is an alignment you will see f3 dot m3 you will see f3 dot m3 is 1. And same way you can get f2 dot m2, f1 dot m2 etc and we will be able to get this matrix A there is a transformation matrix A. So, the transformation between two coordinate frames. So these are two coordinate frame one is mobile and other one is fixed. 
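As a concrete illustration of the matrix A with entries Akj = fk dot mj, here is a small NumPy sketch that builds A from the two sets of axes and uses it to map mobile-frame coordinates to fixed-frame coordinates. The axes chosen below, a mobile frame rotated 90 degrees about f3, are hypothetical example values.

```python
import numpy as np

# Fixed frame axes f1, f2, f3 (stored as rows for convenience).
f = np.eye(3)

# Mobile frame axes after a 90 degree rotation about f3 (example values).
m1 = np.array([0.0, 1.0, 0.0])
m2 = np.array([-1.0, 0.0, 0.0])
m3 = np.array([0.0, 0.0, 1.0])
m = np.stack([m1, m2, m3])

# Coordinate transformation matrix: A[k, j] = f_k . m_j
A = np.array([[f[k] @ m[j] for j in range(3)] for k in range(3)])

P_m = np.array([1.0, 2.0, 3.0])   # coordinates of a point in the mobile frame
P_f = A @ P_m                     # same point expressed in the fixed frame
print(A)
print(P_f)                        # -> [-2.  1.  3.]
```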
So, the coordinate transformation between these two frames can always represented using a matrix A and the elements of A can be obtained by taking the dot product of fk dot, fk and mj. So, we will be getting all the elements of the rotation matrix by taking the dot product. So that is basically the coordinate transformation matrix between two coordinate frames one is a fixed frame another one is a mobile frame in this case. So, the matrix A is known as the coordinate transformation matrix and A is given as f1 dot m1, f1 dot m2, f1 dot m3 and early this one also. So this is known as the coordinate transformation matrix. So now if can if you know the point P in mobile frame. And you want to find out PF. So what you need to know is what is A and this is A depends on how it is actually rotating with respect to the fixed frame how the coordinate frames are aligned. The coordinate frames are aligned both f1 m1, f2 m2 and f3 m3 they are aligned then this will be 1, 0, 0; 0, 1, 0; 0, 0, 1. If F and M are aligned, for example, if this is f1, f2 and f3 and this is m1 and this is m2 and this is m3 then you will see that if f1 is aligned with the m1 then f1 dot m1 will be 1, f1 dot m2 will be 0, f1 dot m3 will be 0. Similarly, f2 dot m2 will be 1, f3 dot m3 will be 1. If they are aligned, then the dot the matrix will be identity matrix and that will says that PF and PM are the same. So, the point P in fixed frame and mobile frame will be having the same coordinate. If they are not aligned, then there will be a different coordinate in the fixed frame and that is obtained by using this relationship. So, that is what actually the coordinate transformation matrix. Hope you have understood the coordinate transformation matrix. As I mentioned some of you must be knowing all these things. But for those who are not familiar, I just wanted to ensure that you understand the basic principle of coordinate transformation matrix. This transformation matrix is used in many other field also. Now, if we know this coordinate transformation matrix, so that is basically we have PF is equal to A PM but we can do the inverse also, if we know PF then we can actually find out PM from PF. So what we need to do is to use the A inverse. So we can actually take the inverse of the coordinate transformation provided the origin are the same that is the only condition here. So, we can actually get the inverse I mean inverse can be obtained as a transpose in this case. So if F and M the orthogonal coordinates Rn having the same origin and let A be the coordinate transformation matrix that maps M coordinates to F coordinates. Then the transformation matrix, which maps F coordinates to M coordinates is given by A inverse, where A inverse is equal to A transpose. So, if you know A, you can use the A transposed to get the inverse coordinate transformation provided both the frames are at the same origin. So both having the same origin, then we will be able to use A inverse equal to A transpose. So that is the inverse coordinate transformation matrix. So, that actually talks about the basic principle of coordinate transformation that is you have two coordinate frames and you want to represent a point with respect to one coordinate frame from the other frame or you know, one the point with respect to one coordinate frame the coordinates are known you want to know the coordinates with respect to the other frame. So, one we called a fixed and one we called as mobile. If the frames are moving related to each other. 
Then we will be able to get the coordinates by using the coordinate transformation matrix. And what we need to do is to take the dot product of the axis and get the matrix, you will get the coordinate points. Now, let us consider some of the transformations because we know there is a movement of the frame mobile frame. So let us consider some of the movements. So for example if you have a frame like this a fix frame f1, f2 and f3. Now I will define m1, m2 and m3. So these two frames initially they are aligned, they are having the same origin. So there can be two ways it can actually transform or can move two ways; one is that you can actually rotate the mobile frame can actually rotate m1, m2 it can actually rotate assume that it is rotating with respect to m3. So, you will be getting this as m1 the new m1 will be this and this will be the m2. So, that is one. So basically we are saying that there are coordinate frame is rotated by an angle theta with respect to f3 or m3. So that is one way of rotation one way of moving movement. And another one is that the transformed one it can actually move somewhere here. It can actually translate. It can translate and then assume that it is initially translated like this m1, m2, and then it can actually rotate also I mean you can have a translation rotation or rotation translation anything like that is possible. So, there are multiple ways in which the coordinate frame can move or an object can move. So when you say coordinate frame is moving you are saying an object is moving and the coordinate frame attached to the object is basically the mobile frame. So, the coordinate frame can actually have a rotation or it can have a translation of P, I can say this coordinate frame has moved from here to here and then rotated also. So you can have a translation rotation or you can have a rotation and translation. So there are different ways of moving. So we first consider the rotation only. We consider that the coordinate frame is rotating and when it is rotating we are interested to know, what is the transformation matrix when there is a rotation or there is a pure rotation. So we want to specify the position and orientation of mobile tool in terms of coordinate frame attached to this. So normally the transformations involve both rotation and translation. So first we consider the rotation alone that is this mobile frame has rotated by an angle theta with respect to the fix frame and we want to find out what is the transformation matrix. So, we know that PF is, we know PF is A PM and A is the transformation matrix. Now we want to find out what is this transformation matrix when there is a rotation. That is, you want to find out. What is this A when there is a rotation? And we know that this rotation matrix, sorry, the transformation matrix is defined as f1 dot m1, f1 dot m2, f1 dot m3, similarly f2 dot m1, f2 dot m2, f2 dot m3, similarly, f3 also. So this is the way how the rotation matrix is this matrix is defined. So first if it is only rotation then we call this as a rotation matrix. So we called this as a rotation matrix. So, we will look into this as a rotation fundamental rotation. So we will say that there is only one rotation if the mobile coordinate frame is obtained from fixed coordinate frame by rotating M about one of the unit vectors of F then the resulting coordinate transformation matrix is known as fundamental rotation matrix. 
So, if you have a rotation of the mobile frame about one of the axes of the fixed frame, then we call this transformation a fundamental rotation and the transformation matrix is known as the fundamental rotation matrix. As I showed you, the frames start out aligned: f1 with m1, f2 with m2, f3 with m3, where f1, f2, f3 represent the fixed frame and m1, m2, m3 the mobile frame. Now assume the mobile frame rotates by an angle phi about f1, so the new axes are m1 dash, m2 dash and m3 dash. We can get the rotation matrix R1(phi) exactly as before, by taking the dot products of the rotated axes with the fixed axes: the first row is f1 dot m1 dash, f1 dot m2 dash, f1 dot m3 dash, the second row is f2 dot m1 dash, f2 dot m2 dash, f2 dot m3 dash, and similarly for the third row. Now, since the rotation was about the fixed axis f1, the axis m1 did not move, so f1 and m1 dash stay aligned and f1 dot m1 dash is 1, while f1 dot m2 dash and f1 dot m3 dash are 0. The remaining entries involve the angle phi: f2 dot m2 dash is cos phi, f2 dot m3 dash is minus sin phi, f3 dot m2 dash is sin phi and f3 dot m3 dash is cos phi. So R1(phi) has first row 1, 0, 0; second row 0, cos phi, minus sin phi; third row 0, sin phi, cos phi. That is the fundamental rotation matrix for a rotation about f1: whenever the mobile frame rotates about the first axis of the fixed frame, the transformation matrix will look like this. In the same way we can find the rotation matrix when the rotation is about f2 or about f3; the only difference is that if it rotates about f2 then f2 dot m2 dash will be 1, and if it rotates about f3 then f3 dot m3 dash will be 1, otherwise the formulation is the same. So the fundamental rotations are: R1 for a rotation about the first axis, as above; R2 for a rotation about f2, where the second row and second column come from the identity matrix because f2 and m2 stay aligned; and R3 for a rotation about the third axis, which is cos phi, minus sin phi, 0; sin phi, cos phi, 0; 0, 0, 1. These are known as the fundamental rotation matrices: whenever the mobile frame rotates about a particular axis of the fixed frame, you can write down the corresponding rotation matrix directly using this rule.
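For reference, here is a small sketch that returns the three fundamental rotation matrices just described, following the convention used in the lecture (rotation of the mobile frame about the kth axis of the fixed frame). This is an illustrative helper, not code from the course.

```python
import numpy as np

def fundamental_rotation(k, phi):
    """Rotation of the mobile frame by phi about axis k (1, 2 or 3) of the fixed frame."""
    c, s = np.cos(phi), np.sin(phi)
    if k == 1:
        return np.array([[1, 0, 0],
                         [0, c, -s],
                         [0, s,  c]])
    if k == 2:
        return np.array([[ c, 0, s],
                         [ 0, 1, 0],
                         [-s, 0, c]])
    if k == 3:
        return np.array([[c, -s, 0],
                         [s,  c, 0],
                         [0,  0, 1]])
    raise ValueError("k must be 1, 2 or 3")

print(fundamental_rotation(1, np.pi / 2))   # 90 degree rotation about f1
```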
The transformation matrix which represents only a rotation is what we call the fundamental rotation matrix. We will be using these many times in the kinematic analysis, but you do not need to remember them by heart, because there is a simple pattern. If the rotation is about the first axis, the first row and first column are those of the identity matrix, 1, 0, 0. If the rotation is about the second axis, the second row and second column are those of the identity matrix, 0, 1, 0. Similarly, if the rotation is about the third axis, the third row and third column will be 0, 0, 1. That is the pattern, and then in the remaining 2 by 2 block the diagonal elements are always cos phi and the off-diagonal elements are always plus or minus sin phi. The sign of the off-diagonal term above the diagonal is minus 1 to the power of k: for k equal to 1 it is minus, for k equal to 2 it is plus, for k equal to 3 it is minus again. So, to summarise, the kth row and kth column of Rk(phi) are identical to the kth row and column of the identity matrix, the remaining diagonal terms are cos phi, the off-diagonal terms are plus or minus sin phi, and the sign of the term above the diagonal is minus 1 to the power of k. If you can remember this, you will always be able to write down the fundamental rotation matrix without any difficulty. So that is the fundamental rotation matrix when the mobile frame is rotating about an axis of the fixed frame. Now, suppose you want a composite rotation, that is, multiple rotations taking place one after the other. Suppose this is an object with a point P on it; the black frame is the fixed frame and the red one is the mobile frame attached to the object. You can rotate the object about one axis, which moves P somewhere, and then rotate again about, say, the vertical axis, which moves P again. So when multiple rotations take place, you want to know where the point P ends up with respect to the fixed frame, and this can be done using a simple algorithm for building the composite rotation matrix. First, initialise the rotation matrix R to I, the identity matrix, which corresponds to F and M being coincident. Then look at each rotation in turn: if the mobile frame M is rotated by an amount phi about the kth unit vector of F, then premultiply R by Rk(phi). So, initially R is I.
Now you look at what rotation is happening: if the mobile frame is rotated about the kth unit vector of the fixed frame, then you do a premultiplication, that is, the new R is Rk(phi) multiplied by the old R, with k equal to 1, 2 or 3. Since initially R is I, after the first rotation R is simply Rk(phi). But the rotation can also be with respect to the mobile frame, its own frame, because the object can rotate about its own axes as well; it is not necessary that it always rotates with respect to the fixed frame. If that is the case, if the rotation is about the kth unit vector of its own frame, then you do a postmultiplication, that is, the new R is the old R multiplied by Rk(phi). So premultiplication and postmultiplication make a difference, and the rotation is represented using either a pre- or a postmultiplication depending on which frame the rotation is about. This is the algorithm we need to use: if there are more rotations, keep applying this rule until all the rotations are completed. That is how you get the composite rotation matrix, by looking at whether each rotation is with respect to the fixed frame or the mobile frame, and the resulting matrix maps M to F, that is, you get the M to F mapping as the composite rotation matrix. So, I will stop here. We will talk about the Yaw-Pitch-Roll transformation matrix as a composite rotation, and then we will see how to get the transformation matrix using the algorithm that we discussed. We will discuss this in the next class. Thank you. |
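A compact sketch of the composite-rotation algorithm described in this lecture: premultiply for rotations about fixed-frame axes, postmultiply for rotations about the mobile frame's own axes. The sequence of rotations used in the example is made up for illustration.

```python
import numpy as np

def fundamental_rotation(k, phi):
    """Fundamental rotation by phi about axis k (1, 2 or 3)."""
    c, s = np.cos(phi), np.sin(phi)
    mats = {
        1: [[1, 0, 0], [0, c, -s], [0, s, c]],
        2: [[c, 0, s], [0, 1, 0], [-s, 0, c]],
        3: [[c, -s, 0], [s, c, 0], [0, 0, 1]],
    }
    return np.array(mats[k])

def composite_rotation(steps):
    """steps: list of (frame, k, phi), where frame is 'fixed' or 'mobile'."""
    R = np.eye(3)                                    # F and M initially coincident
    for frame, k, phi in steps:
        Rk = fundamental_rotation(k, phi)
        R = Rk @ R if frame == "fixed" else R @ Rk   # pre- vs postmultiplication
    return R                                         # maps M coordinates to F coordinates

# Hypothetical sequence: rotate about the fixed f3, then about the mobile m1.
R = composite_rotation([("fixed", 3, np.pi / 2), ("mobile", 1, np.pi / 4)])
print(R)
```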
Introduction_to_Robotics | Lecture_25_Kinematic_Parameters.txt | Hello, welcome back. In the last class we discussed about the robot architectures, the body and arm assembly configurations. And we found that there are two types of joints; the rotary and prismatic, and by arranging these rotary and prismatic joints, we will be able to get different body arm configurations. And we found that there are five types of architectures that is possible for industrial robots, it is known as the jointed arm architecture. Then we have this Cartesian robot, and then we have the SCARA robot. So, I will show you a small video here to highlight these three important type of robots. So, you can see this is the most commonly used robots which is the 3 R type RRR type, or jointed Arm architecture, okay. So, you can see here the different joint axis, you have the first joint which is the rotary joint, then you have the second joint which is moving now, and then you will be having that third joint also that will be used as the joint which would be used for positioning. This is the one, this is the third joint, that is the most commonly used rotary kind of robots. So, let me go forward so, it has various applications of this robot. Now, it's the second one is the SCARA configuration, you can see that there are 2 R joints and the 1 P joint. So, you have the last 1 is the prismatic joint and then 2 rotary joints and all the joint axis are in vertical direction. So, this can be used for positioning and assembly applications and it actually provides you a compliance in the horizontal plane because all the joints are in the vertical axis and the workspace of this is shown there. And the third one is the 3 prismatic joints, which we call it as the Cartesian robot. So, this Cartesian robots are again used for pick and place applications. The only difficulty with this kind of robot is there the space required for them is large compared to their workspace otherwise, they are very easy to control because all the three motions in x, y, z axis so it will be easy to control. So that is the advantage of having this robot. Okay, so you will be seeing this kind of robot in many other applications also in industries and as well as in many other places, you will be able to see these kind of robots in use. So that actually gives the three important robots that are being used in the industry and the body and arm configurations of the jointed arm, Cartesian and SCARA they are used for positioning the wrist in the 3D space. Now, if you look at the wrist to configurations, there can be different types of wrist assemblies, to be attached to the end of arm and their end effector is attached to the wrist assembly. So, we will be having the end effector which is attached to the wrist assembly and the function of wrist assembly is to orient the end effector. So, you can actually orient the end effector using the wrist assembly because wrist assembly has more 3 degrees of freedom and therefore, you will be able to orientate the end effector using these 3 degrees of freedom. So as I mentioned the body and arm determines the global position of end effector and the 2 or 3 degrees of freedom wrist actually allows you to get roll, pitch or yaw. So these are the three orientations that is possible using the wrist assembly, so body and arm give you the position and the wrist gives you the orientation roll, pitch and yaw. This is a typical assembly of wrist configuration. 
So, we can see that you need to have 3 joints so 1 you see is basically, this is the pitch axis and this is the yaw axis and this is the roll axis. So, the roll is along the z axis, so this is the wrist so you can actually here this is the roll motion and this is the pitch motion and this is the yaw motion. So, in a mechanical wrist assembly you cannot have all the 3 joints at the same point I mean, difficult to have. Unlike our wrist somehow we are able to get all this though they are not exactly at 1 point. So, here you can see, this is 1 axis, pitch axis, the vertical yaw axis and the roll axis. And all are RRR so all the joints need to be rotary for orienting, positioning you can have prismatic joints, but for orientation you cannot use a prismatic joint so all the joints will be rotary joint and this part will be attached to the positioning part. So, you have the body and arm part, so the body and arm part will be attached to this, the wrist assembly and then this point you will be having the tool connected to it. So, the body and arm assembly and tool are connected through the wrist assembly so that is the use of wrist in the industrial robots. So, we discussed about the body and arm configuration, we discussed briefly about the wrist assembly and the wrist actually can have different ways of assembling these joints, it can have a spherical wrist also, where all the 3 joint axis intersect, we will get a spherical wrist. And at the end of the wrist you will be connecting the end effectors, the end effectors are the tools which are attached to the wrist assembly in order to get the necessary work done. So, you can have 2 types of things; 1 is the gripper, and 1 is the other tools. So, gripper is basically for grabbing an object so you can have two finger gripper or three finger or multi finger gripper can be used for manipulating objects. So, we can have pick and place or manipulation of objects can be done with the help of grippers. And other tools that can be attached are the welding tools or painting guns or any other tools you want to attach you can actually have it as attachment to the wrist assembly, that is the way how we will be using the whole robots for practical applications. So this is a example for 2 finger mechanism for gripper. So normally gripper is not continuous part of the robots, and therefore the motion or this degree of freedom is not added to the robots degrees of freedom. So we will be having six degrees of freedom for the robot, but then the gripper will be having additional degrees of freedom. So in this case, it's a 1 degree of freedom gripper can actually have an open or close motion, and this is a 2 finger type. So you can have a mechanical linkage to control this, you can have an addition motor here, it can be mechanical or pneumatic or hydraulic any powering mechanism can be used and then we will be able to get a motion here. And depending on the application, you can have a type of fingers designed in such a way that you will be able to get a proper grip of the object. So thats all about the basic structure of robots or the mechanical construction details of a robot. So this is important to know when we have to discuss about the kinematics because about the degrees of freedom and how they are assembled, what is the body and arm configuration these are important in order to understand the kinematics. That is why I discussed that part before we talk about the forward kinematics of the robot or sometimes we call it as the arm equation also. 
So, in the previous classes, we discussed about the basic coordinate transformation, and how the coordinate transformation can be used to represent the position and orientation of 1 coordinate frame with respect to the other coordinate frame as well as when the coordinate frames are moving, how can we represent those transformation using these transformation matrices. Now, we need to apply those principles into the robots and then see how it can actually be represented or how the relationship for kinematics can be developed using those transformation matrices. So, here we will be discussing few points; 1 is the kinematic parameters of the robot, where we define two things that the joint parameters and link parameters. And then we will see how we can actually use this information to get the transformation for developing the kinematic relationship. So to do that, we need to do something to assign the coordinate frames. So in 1 of the classes I mentioned that there are methods by which we need to assign coordinate frames to the joints to get that transformation. So, we will see how the coordinate frames can be assigned and then we will talk about something about normal sliding and approach vectors how we can actually get this approach vectors using the coordinate transformation. And finally, we will have something called Denavit-Hartenberg representation for robots. And then we will talk about the arm equation and then take few examples to discuss. So, I explained what is the need for forward kinematics in one of the classes. So, most of the time what we will be doing, we are interested to know the position of the tooltip, and if you move these joints how these tooltip gets affected is the relationship we are interested in, that is where we want to know the kinematics. So, if we supply these joint values theta I put just these joints parameters as theta, if I substitute these values of the theta, can I get the position x, y, z of the tool is my problem. So, how do I get the position of the tool, if I know all the joint values of the theta that is basically the kinematics problem here. And in order to learn this, or in order to develop this relationship, we need to define few coordinate frames in the beginning. So this also explained in one of the classes, so we define a base coordinate frame like this. So we will have a base coordinate frame attached to the base of the robot, all the measurements, all the position and other things can actually be referred to this frame and we call it as the base coordinate frame. And then we will define a tool coordinate frame at the end of the tool that actually represents the position of the end effector. Whatever the end effector we are using, we refer that as the tool coordinate frame so that is the frame which is of interest to us, we want to know where the tool is going with respect to the reference frame. And in addition to this, we define something called a object frame or the workspace frame, work object or coordinate system or the user coordinate system, which is to represent the position of an object. If I want to manipulate this object, then I need to know what is the coordinate of this object. So, we define an object coordinate frame or a user coordinate frame. So, these are the basic frames we define in order to talk about the kinematics of the robot. And once we define this frame, we can actually represent all the other measurements or the position of the joint with respect to this coordinate frame. 
So, that is the importance of having a coordinate system for the robots. So, I already mentioned this world coordinate system, the basic coordinate system is known as the world coordinate system and this is known as that tool coordinate system. So, origin and axis of robot manipulator are defined relative to the robot base. So, we have the robot base and then we define all the coordinates with respect to the robot base. And that tool coordinate system, so we define this as the tool and then we define a tool coordinate system, and we define these coordinates based on some criteria. So I will explain that how do we assign the coordinate frame later, but we will be having 1 coordinate system at the tooltip. So, the alignment of axes system is defined relative to the orientation of the wrist faceplate to which the end effector is attached. So, this is the wrist faceplate, where you will be attaching the end effector and the coordinate system will be defined with respect to that, so there is a tool coordinate system. So, we have a base coordinate system and a tool coordinate system. Now, coming to the forward kinematics relationship, so as I told you, you have a robot so you have 1 joint here, and then you have another joint, a joint here. It can be a rotary or prismatic joint and assume that this is the wrist and you have the tool attached to it. And you have this base coordinate system and you have this tool coordinate system so this is what we have. Now, we know that this can actually rotate so, there is a theta, there is another theta with respect to this theta 2, theta 3, theta 4, etc to theta 6 and we want to know the position of this. So, this is the tool and this is the base and what is the position of this tool with respect to this reference frame when theta 1, theta 2, theta 3, etc are changing. So, that problem is known as the forward kinematics problem or direct kinematics problem that is this theta are known are the joint variables, we call this as joint variables because this can actually vary, at every joint you can have a theta varied and therefore, these are known as joint variables. So, if I know these joint variables, can I see what is the X position and orientation of the tool? Can I get this X is the forward kinematics problem or the direct kinematics problem? So, a relationship between the joint variables and the position and orientation of the tool is to be formulated. So, how they are related, how can I develop a relationship between these two coordinate frame is the forward problem. And we found that whenever this is moving, there is a way to represent the relationship between this 1 and this 1 using a coordinate transformation matrix. But, since there are multiple joints, and each joint can actually affect the position, we cannot directly write the relationship because it's not a single transformation, there are multiple transformations involved and therefore, we need to have a formulation, a systematic formulation in order to represent x as a function of Theta. So, x need to be represented as a function of theta and that is basically known as the forward kinematics problem. So, let us see how to develop this relationship or how to develop this formulation using the coordinate transformation matrix. 
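To make the idea of x as a function of theta concrete, here is a minimal forward-kinematics sketch for a hypothetical planar 2R arm with two revolute joints and link lengths L1 and L2; the systematic formulation for a general 6-axis robot is what the following sections develop.

```python
import numpy as np

def planar_2r_forward_kinematics(theta1, theta2, L1=0.5, L2=0.3):
    """Tool position (x, y) and orientation phi for a planar 2R arm (example link lengths)."""
    x = L1 * np.cos(theta1) + L2 * np.cos(theta1 + theta2)
    y = L1 * np.sin(theta1) + L2 * np.sin(theta1 + theta2)
    phi = theta1 + theta2            # tool orientation in the plane
    return x, y, phi

# Given the joint variables, the tool pose follows directly:
print(planar_2r_forward_kinematics(np.deg2rad(30), np.deg2rad(45)))
```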
So, we can define the direct kinematics problem as: given the vector of joint variables of a robot manipulator (it can be theta for a rotary joint or d for a prismatic joint, so generally we call it the joint variable), determine the position and orientation of the tool with respect to a coordinate frame attached to the robot base. That is the direct kinematics, or forward kinematics, of the manipulator. And we need a concise formulation of a general solution to the direct kinematics problem. Since there are multiple joints involved and these joints can be assembled in different configurations, as we saw in the robot architectures, we need a formulation that can be applied to any kind of robot, whatever its configuration may be; that is the problem of direct kinematics. We already saw links and joints: a link is a solid mechanical structure and a joint provides relative motion between links. In order to develop the forward kinematics relationship, we first need to look at the parameters of the robot, because there are several mechanical parameters involved. For example, a link has some length, and there can be some displacement between the links in three-dimensional space, so we need to look at those parameters and see how they contribute to the kinematics of the robot. For example, I have one joint here and one joint here; this is the length of one link, this is the length of the other, and suppose this joint axis is like this while the other axis is vertical, so the two axes are 90 degrees apart, and the two link lengths are also different. How these link lengths and joint axes affect the kinematics is important, and therefore we study the links and joints through their design parameters: the length of the link, the twist angle, the joint angle and the joint distance. The joint variable is the parameter that is variable. Now we will look at how these parameters are defined in the case of a manipulator. First, the links and joints are numbered: we start with link 0, then joint 1, link 1, joint 2, link 2, joint 3, link 3, and so on; the fixed base is link 0 and the last link carries the end effector. For an n-axis robot there are n plus 1 links interconnected by n joints, and joint k connects link k minus 1 to link k, as we discussed earlier. Now we need to define the parameters: as I told you, there are two sets, the link parameters and the joint parameters. So let us see what these parameters are for the links and joints of the manipulator. You can see that link 0 and link 1 are connected through joint 1, and similarly joint 1 and joint 2 are connected through link 1.
So, positioning of these joints and the way they are arranged in the manipulator leads to these parameters link parameters and joint parameters. So, let us define these parameters or try to find out what are these parameters. So, for example, we take an example for a simple model of a manipulator. So, we can actually see here, I consider this as a manipulator and there is a joint like this. So, we can see it can rotate like this, then there is a joint like this which can rotate and there is a joint which can rotate like this. So there are joints and links here so now I can consider this as link 0, then link 1, link 2, link 3, etc, then I have joint 1, this is the joint 1, joint 2 and joint 3. And you will see here the joint axis is like this first one, second one is like this and third one is like this. So, we can see the links and joints can be arranged in different ways, though all are rotary joints, it can have different joint axis and that actually gives you a different configuration also. Now, we need to define something called a joint parameter. So, the joint parameters basically talks about the relative position and orientation of 2 successive links, which can be used for using parameters called joint angle and joint distance. So, these are the two parameters that we can define as the joint parameters, which are the joint angle and joint distance. Now, if you look at here, so this is joint K and this is link K and link K minus 1. So, you have link K and link k minus 1, for example, I take this as the link K and this is K minus 1. So, you have link K and link K minus 1 connected by a joint k. So, this is the way how we can say, so link K and link K minus 1. So, joint angle and joint distance are the two parameters, so you have joint angles and joint distance. Now, joint angle so joint k connects link k minus 1 to K, the parameters associated with the joint k are defined with respect to z k minus 1. So, assume that this is the z k minus 1 that is a rotation joint axis and this is link k minus 1 and this is link k. So, you have these two joint link K and link k minus 1 connected by a joint k and this is measured with respect to z k minus 1. So, the z k minus 1 is aligned with the axis of joint k, this is aligned with the axis of joint k. So, you have k minus 1, link k minus 1 link K and joint k and this axis is known as z k minus 1 axis that is the joint axis. Now, the joint angle theta K, the joint angle theta K is the rotation about z k minus 1 needed to make axis x k minus 1 parallel with axis x k that is, you have this link k and this link k minus 1. So, how much just to be rotated in order to make these two parallel, link K and link k minus 1 parallel is the joint angle theta K. And that is measured with respect to this z k minus 1 axis, so this is basically the joint angle. So, now you can see suppose, this is the way this initially, so this is link k and this is k minus 1 link, this is Kth link and this is joint axis how much I have to rotate this one to make it parallel is the Theta K, that is that joint angle. How much this theta is a joint angle is the rotation about z k minus needed to make axis x k minus 1 parallel with axis x K. So, this is the x k minus 1 axis and this is the x k axis. So, how can I make these two axis parallel and what is the rotation needed is basically the joint angle theta k. Then, the d k is the joint distance d k is the translation along z k minus 1 needed to make axis x k minus 1 intersect with axis x k. 
So now, about the distances: suppose the joints were not in line, suppose there is an offset between them. So this is link k minus 1 and this is link k; the axis of one link is here, x k minus 1, and the axis of the other is here, x k. There is a distance between these two links along the joint axis, and that distance is d k. If the two axes already intersect then d k is 0, but if there is a distance between them, that d k is the joint distance; that is the way the joint distance is defined. So you have two joint parameters. One is the joint angle: the joint angle is the angle required to make x k minus 1 parallel with x k, measured about z k minus 1. The other is the joint distance d k: the translation along z k minus 1 needed to make the axis x k minus 1 intersect the axis x k, so that this axis and that axis intersect. So these are the two joint parameters: theta k, the joint angle measured with respect to z k minus 1, and d k, the joint distance along z k minus 1. And for each joint, it will always be the case that one of these two parameters is constant and the other is the variable. For a rotary joint, theta is the variable and d k is constant; for a prismatic joint, d k is the variable and theta is constant. That is a property of the joint parameters: out of the two, one is always the variable and the other is a constant. So, let us stop here, we will discuss the link parameters in the next class. Thank you. |
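One way to capture the "one parameter fixed, one parameter variable" property in code is a small record per joint. This is just an illustrative data structure, an assumption for the sketch, not a standard library or the notation used later in the course.

```python
from dataclasses import dataclass

@dataclass
class JointParameters:
    """Joint k, relating link k-1 to link k (angles in radians, distances in metres)."""
    joint_type: str      # "revolute" or "prismatic"
    theta: float         # joint angle about z_{k-1}
    d: float             # joint distance along z_{k-1}

    def set_variable(self, q):
        # For a revolute joint theta is the variable (d stays fixed);
        # for a prismatic joint d is the variable (theta stays fixed).
        if self.joint_type == "revolute":
            self.theta = q
        else:
            self.d = q

# Hypothetical two-joint chain: one revolute joint, one prismatic joint.
joints = [JointParameters("revolute", theta=0.0, d=0.10),
          JointParameters("prismatic", theta=0.0, d=0.05)]
joints[0].set_variable(0.5)    # rotate joint 1 to 0.5 rad
joints[1].set_variable(0.08)   # extend joint 2 to 0.08 m
```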
Introduction_to_Robotics | Lecture_43_The_HBridge_and_DC_Motor_Control_Structure.txt | So, in the last class we looked at the circuit as we have drawn and we found that one can determine what is the magnitude of this increase that is, if you call this as delta i, then delta i is the magnitude of the ripple. And one can determine the magnitude of the ripple if you know what is the duration for which the switch is on and the value of the inductance and the difference between the supply voltage and the induced EMF. So, this only determines the magnitude of the ripple, but you have no idea whether it is going to be a waveform like this or is it going to be a way form like this. In either case, the magnitude of the ripple in all these 3 cases is the same, the only difference is the average value. These equations that we wrote in the last class do not say about what average ripple value is going to be. How do you find out what will be then be the average value? No, because if you know the current waveform, then you can find out the average value, area of the triangle divided by the duration. But these equations do not tell you anything about what the area of the triangle is also going to be, this equation only says that the change between the minimum and maximum value of current delta i is determined in this fashion. So this is, this information is incomplete. So, in order to determine what the average value is going to be, you need to understand that this input voltage or applied voltage is being given to a DC motor. And we need to understand what the DC motor will do? The DC motor or any other motor will always attempt to meet its load requirement. So, if it so happens that the motor has to rotate a particular load, demanding a certain amount of mechanical torque to be developed at this particular RPM it will attempt to generate them and it is from that you can determine what the average value should be. And we know of that, the electromagnetic torque which the motor generates at the specific speed of operation must be equal to the load torque, this we have already seen from the graphs that we drew and this is equal to K times I. This I is the average value of flow of current Armature current into the DC motor and therefore the average value of flow of current is determined by what is the load torque required. So, if the load torque is, that the machine needs to supply to the load for that particular speed of operation is equal to 6 Newton meter and the value of K is equal to 1 Newton meter per ampere. Then, average current will then be equal to 6 divided by 1 which is 6 ampere. Now on top of the 6 ampere you are now going to have this ripple. So, obviously as you can see from this shape if this value is going to be I min and this value is I max then the average value is simply I min plus I max divided by 2, that is the average value because that is seen from the wave shape of this graph. So, we are now saying that this average value is equal to 6 ampere and then we also know what is I max minus I min which is delta I. That we know from this equation, that we have written. Therefore making use of these two equations one can determine this waveform fully. That means you can determine what is I min? What is I max? What is average? All that can be determined. So, the average value needs to be found out from what is the requirement from the motor. Now, one must then understand if we are going to have a circuit like this that we have drawn yesterday. 
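Before moving on to the H-bridge, here is a small numeric sketch of the calculation just described: the ripple expression fixes only I max minus I min, the load torque fixes the average, and together they pin down the waveform. The ripple form delta i = (Vg - E) * t_on / L follows the relation referred to from the previous class; the values of Vg, E, t_on and L below are assumed purely for illustration.

```python
# Assumed example values (not from the lecture): supply, back-EMF, on-time, inductance.
Vg, E = 24.0, 12.0        # volts
t_on  = 50e-6             # seconds
L     = 2e-3              # henries

# Ripple magnitude: only the peak-to-peak change in current.
delta_i = (Vg - E) * t_on / L          # = 0.3 A

# Average current is set by the load: Te = K * I  ->  I_avg = T_load / K.
K, T_load = 1.0, 6.0                   # Nm/A and Nm (the values used in the lecture)
I_avg = T_load / K                     # = 6 A

# For a triangular ripple the average is (I_min + I_max) / 2, so:
I_min = I_avg - delta_i / 2            # 5.85 A
I_max = I_avg + delta_i / 2            # 6.15 A
print(delta_i, I_avg, I_min, I_max)
```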
What is the average value of the applied voltage that is going to come across the motor? In the earlier case, we saw that the average value is simply equal to Vg multiplied by the duty ratio; what happens in this situation? So, let me copy this. This is the circuit that we have, with switches 1, 2, 3 and 4. Now let us say that I am going to operate the switches in this manner: control signals for switches 1 and 2, and control signals for switches 3 and 4, drawn with respect to time, and what I want to draw is the waveform of the voltage across the motor. Switches 1 and 2 are gated together and switches 3 and 4 are gated together, such that when 1 and 2 are on the other two switches are off, and when 1 and 2 are off the other two switches are on. That is the manner in which I am going to operate; one may operate it in many different ways, but let us take this as an example. If this is the case, what will be the wave shape of Va? If 1 and 2 are on, the circuit reduces to one where the motor is connected straight across Vg, so Va is equal to Vg; the voltage during this interval is Vg. In the next interval, 1 and 2 are switched off and the other two switches come on, so the motor is now connected across Vg with the opposite polarity, and Va is minus Vg. After this the switching pattern repeats, so the waveform alternates between plus Vg and minus Vg. Let us call the first duration t on and the period capital T. What is now the average voltage coming across the motor? The average value Va of this waveform is obtained by taking the area of the graph: Va = (Vg t_on - Vg (T - t_on)) / T = (2 Vg t_on - Vg T) / T = Vg (2 D - 1), where D is the duty ratio t_on / T. So now you have a different expression for the average voltage that is applied; the expression for the average voltage is not something fixed, it depends on what sort of converter is being used and how it is operated. In this case, this is the expression that you get. Why is this converter interesting? With this expression, if you want the average voltage to be greater than 0, the duty ratio must be greater than 0.5. For duty ratio equal to 0.5 you get the average value equal to 0, and for duty ratio less than 0.5, Va is negative. Therefore you get a variation in the average value all the way from negative to positive by simply varying the duty ratio, smoothly, without any discontinuity, and therefore you can now get this motor to rotate in either direction, or you can bring it to zero speed as well. So, that is why this topology is very advantageous.
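A minimal sketch of this average-voltage relation, Va = Vg (2D - 1), showing how the sign flips around a duty ratio of 0.5. The supply voltage used here is an assumed value for illustration.

```python
def h_bridge_average_voltage(Vg, duty_ratio):
    """Average motor voltage for the complementary switching pattern described above."""
    return Vg * (2.0 * duty_ratio - 1.0)

Vg = 24.0                                  # assumed DC supply voltage
for D in (0.25, 0.5, 0.75):
    print(D, h_bridge_average_voltage(Vg, D))
# 0.25 -> -12.0 V (reverse), 0.5 -> 0.0 V, 0.75 -> +12.0 V (forward)
```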
Usually, whenever you talk of electrical motor drives one refers to whether if you draw a graph like this, where the X axis is P, Y axis is the generated electromagnetic torque. Then if you denote this operation here. Now let us say you are going to have a robotic arm. That is a nice example. So, you have a robotic arm that is to pick up an object from here and go and put it there that means the motor has to rotate in a certain direction it will have to go like this and then come down. Now obviously the motor you may want it to be increasing speed all along and as you reach the place where it is going to be put down, the speed has to be decreased and it has to come to 0 right at the point where the object has to be released. You cannot have a non-zero value of speed when the object is going to be located there, because if you are going to go with the non-zero value of speed and keep it there you are creating a mechanical impact and it is likely that the object will break or if it is going to be strong enough, it may not break, it may withstand. But in general, we would like the speed to go to 0 just at the point where you are going to put it down. That means you are going to have during the, during the operation if you are going to plot the acceleration, that is there in the electric motor the acceleration would perhaps start at a non-zero value, at the beginning and then this acceleration may increase. After sometime maybe if you went to pick up the object from here, you have accelerator then maybe you move with a fixed velocity. So, then acceleration could perhaps become 0 and then you go with 0 acceleration and when you are going to put the object down you have to decelerate and then go to 0 velocity. So, after some time you may have to have a negative acceleration and maybe maintain the negative acceleration and then go to 0 acceleration, when you place the object at the desired location. So, this is acceleration. As far as looking at speed is concerned, you know how to derive speed from acceleration. You can do some integration and get it. But the speed will then have to start from 0 in some manner and then the speed remains fixed and then the speed goes down and bring with 0. So, in this duration you are having a deceleration. In this region, you are having acceleration. So, you need to have a circuit here. This for example this circuit. Maybe it could be as an alternative it could be a circuit like this. You must see whether the circuit allow these kind of modes of operation. For example, this circuit will not allow you and ability to decelerate in a well determined manner. So, what you want is in the first case, that is the in this region, you want to operate such that electromagnetic torque is greater than 0, omega is also greater than 0. So, you are really operating here Q1 both omega greater than 0 and electromagnetic torque greater than 0, you can call this as forward motoring region. Now in this zone, speed is still a value greater than 0. But your acceleration is negative. How does the acceleration become negative? Acceleration is negative only if the generated electromagnetic torque becomes negative. And therefore, in this region of operation, you have a situation where Te is less than 0 and omega is still greater than 0, therefore you are operating in Q4 Te less than 0 and omega greater than 0, this is then called as forward breaking region. An electric circuit that is able to achieve these two regions of operation is then called as a two-quadrant drive. 
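The quadrant terminology can be summarised as a classification of an operating point by the signs of speed and electromagnetic torque. The snippet below just restates those definitions; the Q2 and Q3 labels for the reverse-direction cases, which are discussed next, follow the usual speed-torque plane convention.

```python
def operating_quadrant(omega, Te):
    """Classify a (speed, torque) operating point into the four quadrants."""
    if omega > 0 and Te > 0:
        return "Q1: forward motoring"
    if omega > 0 and Te < 0:
        return "Q4: forward braking"
    if omega < 0 and Te < 0:
        return "Q3: reverse motoring"
    if omega < 0 and Te > 0:
        return "Q2: reverse braking"
    return "on an axis (zero speed or zero torque)"

print(operating_quadrant(100.0, 5.0))    # Q1: forward motoring
print(operating_quadrant(100.0, -5.0))   # Q4: forward braking
```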
Now after you do this, this arm has to, having left the object there it now has to go back to the earlier place to pick up the next object and how does it go back to the next place, it now has to accelerate in the reverse direction speed increases in the negative direction. So, you still have now an acceleration in the reverse direction and then maybe go with the fixed velocity decrease acceleration and then after some time you may want to break that and then go something like that speed will then increase in the negative direction. So, go to 0 and therefore after coming here the speed remains fixed and then at this point the speed now starts to decrease and then go to 0. So in this region, then Te is less than 0 speed is also less than 0. This means you are operating here Te less than 0 and speed less than 0. But this is really making the motor rotate in the other direction. So, this is reverse, it is reverse but acting as a motor and then here is where you have the Te greater than 0, but omega less than 0 this is reverse breaking. So, a circuit that allows you all these 4 reasons of operation, is then if you have everything available that is called as a four-quadrant. So, this circuit which we discussed last this circuit is really a four-quadrant drive capable of rotating the motor and breaking the motor in either direction. So, having seen that, what we now need to understand is what is the overall motor drive closed loop system going to look like. So you have the motor, this is motor you have to input, that is one input port that is electrical input port and you have the shaft which is the mechanical output port and to this electrical input port you connect a drive circuit. This is your H-Bridge and this is H-bridge gives gets a DC Supply, DC input power source and this H-bridge requires, in order for the switch to operate you require signals to control devices. That means it needs to be given explicit signal stating when the switches should turn on, when it should turn off, that is what you required. How do you get that? Now, this break this all is going to apply a DC voltage to the motor and when you apply a DC voltage you note that you are applying an electrical input. The DC machine has an electrical subsystem and then a mechanical subsystem systems, the electrical subsystem is the part that contains all the armature windings and all that and as a result of flow of current intake, there is an electromagnetic torque which acts on the mechanical subsystem which contains the rotor inertia and whatever else is connected to. Therefore when you apply an electrical signal that is an applied DC voltage, what responds first, the first respond will be there from the electrical circuit. So, the fastest responding element in the DC motor is the electrical circuit, the electrical subsystem let us say, that is the one that responds first and therefore unless you have a good hold on the behaviour of electrical subsystem, your overall response will not be very good and therefore and this electrical subsystem is the one that is responsible to generate electromagnetic, electromagnetic torque. If you look at the behaviour of the motor itself in order to make it move, the first thing that it has to generate is an acceleration. 
If it cannot generate an acceleration, there is no way the rotor is going to rotate and therefore if you want to control the movement of the motor the first variable that you need to control is the acceleration or the electromagnetic forces that are happening in the machine and therefore the first thing that one does in this motor control system is that you require in some sense a reference, reference value for torque and in the case of DC machine since you know that Te is equal to simply K multiplied by I this is also reference Armature flow of current IA, same as that. And this reference torque or reference IA is then compared with the actual Armature current that is flowing from this and this determines at any point of time, what is the acceleration required in order that you go where you want probably at the fastest time or whatever and this difference therefore determines, this difference determine, how much Armature voltage needs to be applied. Because if you want more torque, it means that you need more Armature current that your somehow the reference is more than what is actually flowing that means you require more torque and therefore, it means that you need to apply more voltage to the motor and this is therefore, this input needs to be given to the bridge. This is then a current controller. We need to put something else here. We will come to that a little late. But now the question is, how do you know, how much reference torque has to be generated? That will usually come from something else. In the operation, you will not be determining the acceleration of the motor as it is, you may be determining the speed. For example, if you are going to move something from, if you need you have a mobile application, let us say and you want this to move from there to there. You will say move with this much speed. So, speed is most probably the variable that as a user you will be likely to give and acceleration is something internal to the system. So, how does you how do you know then, what acceleration must be given or what reference for must be given, that comes from the difference between the speed reference and then you have a feedback of the actual speed at which the motor is running. This is speed feedback and once you know, what is the difference between what you want as speed and what is the speed now. Obviously if the speed difference is large, what you want and what the motor is now rotating you would like to accelerate more and reach that value of speed. So, depending on this difference you then have a controller that determines how much should the acceleration be. So, this is then a speed controller. There are many applications, where speed is the reference value that is given by you directly, especially in application where you want something to move along a horizontal axis or so you will say move with this speed, but there are also many applications, where you cannot determine the speed. For example, if are going to have a robotic arm, that is a very nice example to say that, lift this object from here and place it there you will not be giving the reference in terms of speed. Because you would like this object to move in a particular manner specified locations with respect to time and then place it exactly there, that means the reference that you give will not be in terms of speed but actually in terms of the actual location of the object. 
How that is varying with respect to time that means with respect to time, you will see at this instant I want the object to be here, at the next instant I want the object to be there that means what you will give is actually a position reference. When I say reference, it does not mean it needs to be a fixed value, position reference may vary depending on where you want it to be and therefore what you do is, you take the position reference and then you get a feedback of the rotor position, position feedback. Note that even if you are looking at a robotic arm, which is going to take an object move it in space and then put it down somewhere, the entire motion of determining where the object is at a particular location in space at a particular instant can be equivalent map back to the position of the motor shaft. Because the motor shaft is what is going to cause rotation the motor shaft link with certain mechanical linkages is going to result in the motion of this object and therefore you can map that back to the shaft angle and that is what this motor control system will see. And depending on this position error, then a controller determines this is then a position controller, this controller then determine with what speed it has to operate. If the error is large then to go to that position you need to go the greater speed, reach that position which in turn means a certain acceleration and a certain acceleration means a certain flow of current which then means a certain voltage. So, all that is then subsequentl0y addressed. So, this is the sort of control structure that you would need for a motor drive for these kind of actuation application. So, this is called as a cascaded control structure. In this fastest way, fastest system variable is attempted to be controlled first and the fastest system variable is the flow of, flow of input current into the armature, that is an electrical system and electrical systems are usually much faster than mechanical systems, mechanical system have a moment of inertia, which is a big mass and it will take time for it to start rotating and move. So, fastest variable is controlled first, then you control the next faster variable, which is speed, which is an intermediate is neither too fast not too slow. And then finally you have the outer most loop which is the slowest variable, this is speed is something that varies much slower than armature current. Now comes this system which is there in between here that is this one, what does this block do? Now the output variable that is given at this position. This is, this is what this block let us call it GC the output of GC is going to determine, how much voltage needs to be applied to the armature in order to get so much current flowing. So, this variable at the output, if it is a analogue circuit implementation it is a simple analogue voltage, if you are going to implement this whole thing inside a digital system, a digital system, a digital controller, like a micro controller or a digital signal processors, so this is usually represented as new mu C and this is usually called at DSP. Now in this case this output variable will simply be a number. You are going to do some arithmetic inside, implement some equations etc finally you get a number. That number represents how much voltage should be applied to the machine. 
Now, there is obviously a disconnect between, what the output given by GC is and what needs to be given to the bridge, what needs to be given to the bridge, is a signal saying switch on, remain on for sometimes switch off. It means the switching control signal with whereas, what GC gives you is a number or an analogue output voltage. Therefore, you need something to convert this into the signals that the bridge really require and that job is done by a block known as modulator. So, if you look at this and let us say you want to vary the output voltage given to the armature,` what you will vary? This is the average voltage, you are going to vary therefore duty ratio and duty ratio variation means you are going to vary t on you are not going to vary T, though that can also be done if t on fixed and varied t that will also result in duty ratio creation. But that is a nonlinear variation, because we have one over T siting there. Whereas, if you keep t fixed and vary t on, then it is a simple scalar relationship. Therefore, what you would like to do is that you would like to vary the occurrence of this edge, such that maybe you would want to at some point control it to switch off here or at some point control it switch off here or you may want to switch off here, depending on that. Which essentially means that you are varying pulse width is being varied or in other words this width is is now being modulated and therefore, broadly this scheme is known as PWM, Pulse Width and then Modulation. So, that is why this block is given this way. How do you do that? That is again very simple. What one does is let us say that you have a triangular voltage waveform, this is triangle and I got a Y axis here and this let us say if 0 of the triangle and this goes up to an amplitude of A and here it is minus A and then what I do is, we said that the output from GC, let us say it is an analogue output voltage, then let the voltage be somewhere here. Then what we do is you take this to a simple comparator. The comparator is a way of comparing these two signal. So, here you give this value and then you give the other one here. The job of the comparator is to implement a very simple function if V plus is greater than V minus, output is high, if V plus is less than V minus, the output is low. So, if this is implemented what will be the output waveform from this comparator, this comparator will look at these points, places of intersection and then in these region, it will give a high signal and then it gives a low signal, then it gives a high signal, low here, high and low. And then what we can do is, feed this variation as inputs to switches 1 and 2 and then give the inverse of this as inputs to switches 3 and 4. This is to switch 3 and 4 and therefore we have the switch pattern that we had drawn here, something very similar to this and the system will then work well. Suppose the value of this analogue voltage is equal to 0. How much will the duty ratio of these signals be? Half, which means the armature voltage applied is 0. So, the reference that is the orange line, the level of that orange line is directly indicative of what is the magnitude and the sign of the Armature voltage that is then applied. So, this is a mechanism that allows you to convert the analogue output voltage is given by DC into signals, that are given to the bridge for it to operate. So, if this value is Vc, then t on is given by this interval and p is given by, this duration is capital T. How do we now find out what is the duty ratio t on by T? 
So, that can be found out by looking, we take this waveform of the triangle. So, this varies from A to minus A and then we have the DC level somewhere. This is the DC and therefore what we have is the triangle here. So, let us call this as O M N and XY, O M N and O X Y are? What are they? Similar triangles and we need to find out what is this duration? So, it is evident that XY divided by MN is equal to this high, whatever that is call it a P and let us extend this and this is Q. That is then OP by OQ. Which then means XY is nothing but t on, MN is nothing but T and OP is A plus VC and OQ is 2 times A. So, this is nothing but half plus BC by 2 A. Now this term VC by A is then given a called modulation index. So, if you call this as ma then duty ratio is half of half plus ma by 2 or ma plus 1 by 2. So, what can be done is this expression, what we wrote as Va is Vg into 2 times D minus 1 instead you can cast it in terms of modulation index. Va is therefore Vg into 2 times ma plus 1 by 2. Minus 1, which is Vg into ma plus 1 minus 1. Which is modulation index into Vg. So, one can therefore see that the output voltage is a linear variation of this. So, in this manner one can then attempt to control the entire system. This is how the closed loop control system of the drive looks like. One then has to look at how to design these controllers, which is a subject matter, we will not discuss in this course, but that is what it is. Now in this whole thing, there are certain important aspects that you need a mechanism to measure the speed. You need a mechanism to measure the actual position of the shaft and you need a mechanism to measure the flow of armature input current, those are additional stuff that one might need. So, this is the way a DC motor control structure looks like. Now the DC motor, as we have seen consists of a stator and then a rotor which is sitting inside and drawing the cylinder as seen from the side and then you have here another cylinder with brushes, that are going to be sitting there and then you take these two wires out and this whole thing is enclosed in an outer shell, then you have the shaft that is sitting off. The field is generated in the stator. So, stator generates the field and field lines go through the rotor and come back to the state, that is the system. Now as I mentioned right at the beginning DC motor is a very good motor for the purpose of implementing this kind of control structure, because whatever Armature current flows into the system flows into the machine does not affect the field. You do not change the field by sending an Armature current this happens because of the geometry of the system inside. That way it is very good. But the difficulty with the DC machine is that it has a system here. Which is the brush and commutator. Now this brush is something that has to slide on top of the cylinder which is going to be rotating always and you very well know that if you are going to have a sliding arrangement there is bound to be friction and there is not only friction, you need to have the friction very small, which therefore means that there needs to be good amount of lubrication and if there needs to be good lubrication, you cannot put oil or grease there, why? It would not conduct, you need to have good electrical conduction there, because you are looking at flow of current into the armature and coming out. So, you cannot put, afford to put oil there and say I will get good lubrication. So, that is impossible. 
So, you need to have a material where that will allow conduction and be a good lubricate as well, do you know such a material? Carbon is a very good material and therefore this brush is made of graphite. But what is the difficulty with that graphite is a very soft material and as you go on rotating the shaft it will produce less friction never the less, but it will also erode the material away. So, if you install the DC motor in the system, allow it to operate and you had an initial length of this brush as this much, after about a month you will see that the brush is only that much. That is not good system, because we will have to keep on opening the machine replace the brush. So, it Is a nuisance. It is not only that, the eroded material which is removed from this brush. How will it, I mean it would not just go away into are air. It has to deposit somewhere and in which place will it deposit? It will deposit on that cylinder and that is a very dangerous thing, if you allow it to deposit on the cylinder and do not remove it, then it will, it will simply make it a, it will make a dead shot between the two brushes. You want armature current to enter through the brush, through one of the brushes go into the armature conductors and come out through the other brush. But instead what will happen if it will enter through the first brush short-circuit the whole thing and go out through the second brush it means your motor will stop working after sometime. It will cause an explosion inside the machine, therefore DC machines though they are very good for the purpose of understanding control systems and so on they are not used in robotics application especially. Not to say that these machines are never used, they are used in industry for a wide variety of applications. But more because, until the last few decades the row until about 1990s to 2000s, there was no other very good alternative available. So, DC machine were the machines that are where used for all high precision, high performance control applications. But today if you look at it, nobody will use DC machines for this kind of control. Why did we then discuss DC machines at length? It is because they still provide the best way of understanding, what is required out of a motor control system and what are the basics of motor control operation. There are other machines which now substitute DC machines, which we will see in a little bit of detail as we go along. So, the next best machine is called as a brushless DC machine or a BLDC machine, which obviously means the brushes are no longer there. Which is the main drawback with decades of the DC machines. So, what happens if you have a brushless DC machine and how does one operate that? We will see in the next class. |
Introduction_to_Robotics | Lecture_72_Recursive_State_Estimation_Bayes_Filter.txt | last lecture we were looking at state estimation we started talking about recursive state estimation and so we were talking about what constitutes a state right the state state at each time t could be a very complicated vector of various entities that you could record like the robot post the location in the world right this could be the xyz location and also the orientation theta and the velocity with which if it is a mobile robot the velocity with which it's moving and so on so forth right and then the configuration of the actuators right so what angles are the arms in the gripper is in and whether it is holding an object and so on so forth and the locations of surrounding objects etc etc so we said that this could be potentially a very complex state right and then we started talking about measurement data this is essentially what the sensors give you and likewise we had z1 to zt so this measurement data could be camera images uh could be uh ultrasound sensor data right and could be other kinds of touch sensors and internal indicators like battery levels and so on and so forth right and then finally you have a set of control actions which could be movement actions it could be manipulation actions and sometimes it could just be sensory actions like turn on a camera or or rotate a camera in a certain direction and so that you get a different kind of input right and so we also adopted this uh notation that you start in ah state x0 you perform action u1 and you end up at x1 at where you take measurement z 1 right and so likewise throughout you you are at x t minus 1 you do u t you end up at x t where you make a measurement zt so that is the way we are going to be looking at it and the notations are as follows so we when i want to denote an entire sequence x 0 takes t i'll use x 0 colon t just remember that okay and likewise you one colon t and z one colon t to denote the entire sequence it could start from anywhere and could go till anywhere right it could start from t minus one and go to t plus one also so that doesn't matter so in the last lecture i mentioned that we will be using this textbook i just wanted to make sure that everyone has gotten down so we'll be using the textbook on probabilistic robotics by sebastian throne from bergaard and dieter fox it's from mit press and there are draft versions of the textbooks also available freely online this is not a complete version of the book but as a reference you could use these draft versions and the book is very extensive and like i said we will not be covering all parts of the book right only some of the highlights from the book okay and what we then started talking about is the system model right and i said the system model consists of uh two quantities one is the state transition probability right the first one is the state transition probability first one is so the consists of two quantities the first one is the state transition probability where you look at the probability that you land up in a certain state x t for example probability that i'll land up in front of open door given that i have been in the past sequence of states given by x 0 to x t minus 1 and that i have taken the actions given by u 1 to u t and i have made observations z 1 to t minus 1 right and then we assume that the markov property holds and therefore we can write this as p of x t given x t minus 1 and ut right and the next quantity we look at is what we call the measurement 
probability which is okay assuming that under the markov property the measurement property the measurement probability is just assuming that i have i am in state xt okay what is the probability that i will make a measurement zt so we are looking at probability that i will be making a certain measurement zt at time t given the history of states that i have visited up till time t right and the object the actions i have taken up till time t and the observations i have made until time t minus 1 right and all of this put together or the factors that my current observation could depend on and if you are assuming the markov property the observation depends only on the current state x t right and we also talked about how zt does not figure in the state transition probability because zt does not cause x t right zt is caused by x t that is actually captured in the measurement probability okay now assuming that you didn't have this noise right assume that the world is you know clearly observable assume that all you need to know right let's say you have a wheel robot that is moving around in a 2d workspace all you need to know is the exact x and y coordinate of the robot and that's that's all the information that you need to make all the decisions you need in the world right in such a scenario let's say if i make a measurement i know exactly where i am right so i know my state right because the measurement is going to be like something like a gps signal and that gives me the xy lat long very accurately let's assume that right so at every point i will know what state i am in so if i tell you that you you have made a measurement that tells you your lat long is x and y then i know that i am in location x y then i make a movement i say i move north and then i make another measurement now this measurement again gives me my lat long x and y right and therefore i know exactly where i am but the entire complication right the whole reason that we are looking at recursive state estimation right now is that i do not have such noise free measurement and i also have a problem with my modeling right motion modeling but right now we are only looking at uh the fact that my my measurements are not noise free and they are noisy okay and therefore i cannot uh exactly tell you what state i am in right the robot is not able to decide what state the robot is in and therefore has to so the robot is not able to exactly measure the state it is in and therefore has to look at a probabilistic estimate right and so we call this estimate of the robot right the the the robot is essentially try trying to ah you know decide what state it is in at any point of time this internal confusion about the actual state the robot is in is captured in what we call as a belief right the belief is essentially uh reflects the robot's internal knowledge or internal confusion if you would be at the flip side of it is about the state in which it is in right so for example if the robot could be in one of two places it could be in room one or it could be out of room one let's say these are the two things the robot could be right and then when it makes a measurement right and the measurement just tells it you are near a door right that doesn't really let you know whether you are inside room 1 or whether you are outside room 1 you could be near the door in any way right and then the only way that you could be sure about where you are is actually take some actions and see what happens with regard to the effects of actions and then continuously refine 
your belief right so when when i start off with no knowledge of where i am i could say that there is a half chance that you are inside the room and a half chance you are outside the room right so this kind of a probabilistic distribution over possible states right is called as a belief right and sometimes you use belief sometimes you use belief distribution so formally a belief distribution assigns a probability to each possible state variable right and conditioned on everything that you know so far right so formally i would say that belief x t right so belief x t would be um belief x t would be the the the belief that you have that what is the state that you are in at time t right right so x t is remember x t is a variable that tells you what state you are in time t so it could be in multiple states right so it is actually given by a probability of you occupying a specific state x t given the entire history of observations that you have made and the entire history of actions that you have taken so remember that you you never get access to your actual state right sometimes you can assume that you know x naught that is the state that you start in but even that is not available to you often right so you basically only have a sequence of actions that you took and the sequence of observations you made that is all that the robot knows for sure right and so given the sequence of actions and observations uh what is the probability that i am in a particular state at time t so that is essentially the belief at t so for every possible value that x t can take right you will have one probability and that basically is your probability distribution at time x at time t right so this essentially is a quantity that we call as a belief and we will denote it by bell x t right i think i know that it it's a little confusing right now uh it'll become clearer when we look at an example uh later and just keep in mind that uh when i say belief xt doesn't really mean that the robot is actually in two different states with some probability right the robot is always in one state the robot is not a quantum robot that it can be in multiple states at the same time the robot is always in one state it's just that it doesn't know what the state is so the belief does not encode the actual position of the robot the belief represents the robots estimate on the position right so belief tells you what the robot thinks is the position not what the actual position of the robot is the actual position of the robot is some x in the world that you don't know yet right so the belief tells you what is that robots estimate of where it is in the state space okay so while we are trying to compute the belief right it might sometimes be useful for us to compute a quantity which we'll denote as bell bar which is a prediction of where it expects to be right after it does an action okay so the bell bar x t is essentially the probability of x t given z 1 to t minus 1 and u 1 to ut so the difference between bell and bell bar is that bell is conditioned on z 1 to t and u 1 to t where bell bar is conditioned on z 1 to t minus 1 and u 1 to t i mean it needs to know the last action that you performed also okay so now you might actually start thinking a little bit about hey what happened to all these markov properties we were talking about why are we talking about the belief right now conditioning it on the entire history of observations entire history of states in the entire history of actions and so on so forth right the reason that we have to 
condition on the history of observations and history of actions is because we don't know the state if i know x t minus 1 then i can make this more markov if i know x t minus 1 then it depends only on x t minus 1 and u t right since i don't know x t minus 1 i have to look at the entire history of observations and the entire history of actions i have performed in order to define my belief right and note again the dynamics the underlying dynamics of the robot problem is still markup we are not changing that right underlying dynamics is still markov but because i do not have access to the true state right my belief estimations would be dependent on the complete history so you should remember that so just just because i put the whole history here doesn't mean that the problem has become non-markov okay is it clear so now so we have this quantity bell bar which i say is the prediction so why i say it is a prediction right i have not yet made a measurement of where i actually landed up so i i have data for all the past measurements i have made right z1 to zt minus 1 and i have data of all the actions i have taken including the current action right so after i have taken the current action i am going to say hey where should i go i should be you know maybe i'm trying to leave the room so with some probability you know 80 percent i'll leave the room right so i would say that hey if i was in the room earlier now i have left the room and that is what i should be looking at right so i am no longer inside the room right and this prediction right as you say on the slide does not incorporate the current measurement set t okay so if you notice the difference between bell bar and bell was that measurement zt right so going from bell bar to bell right so the the calculating bell from bell bar is actually called the measurement update or sometimes called the correction update right so when i whenever i go from whenever i compute bell bar i am saying that i make a prediction so we will make this more clear in the next few slides right whenever i compute bell bar i am making a correction i am sorry i am making a prediction and when i compute bell from bell bar i'm making a measurement update or a correction update right so sometimes a bell bar is called the prediction or a movement update right or a motion update right or the transition update and going from a bell bar to bell is the measurement update right so we saw these two quantities earlier right so let us let us move on so let us move on right so we will now look at the base filter algorithm right so we will now look at the base filter algorithm right and the the idea behind the base filter algorithm is to recursively update your belief right so the idea here is that i'm going to compute belief of x t that is that is a belief at time t so what is my distribution over the value that the state can take at time t right i am going to compute that from not only the dynamics i know the motion model and i know the measurement model right we assume that i have the motion model i have the measurement model and i also have access to the observations and the actions right remember that so what do i have i have the motion model that we talked about the transition model that we talked about i have the measurement model i have access to all the observations i have made so far and i have access to all the actions i have taken so far okay given that i have all of these right how do i take belief x t minus 1. 
so what is belief x t minus 1 the distribution of states at time t minus 1 right how can i take that and use that to compute belief x t right so and as you could get i guess there are two steps to this algorithm so we have the prediction step where we compute bar of x t from bell x t minus 1 and a measurement step that we compute bell x t from bell bar x t okay the clear so let us move on so let's look at a little bit more detail at the prediction problem right so what is the prediction problem remember the prediction problem is compute bell bar x t as a probability of x t given z 1 to t minus 1 and u 1 to t okay now uh we know that it's from the identities of probability we know that p of x is equal to ah integral p of x given y p y d y right now i am going to use this in order to simplify this expression and somehow introduce bell x t minus 1 okay so to ah simplify this expressions i am going to do this integral over all possible values that x t minus 1 would take just just to recall so what this integration does is essentially runs over the entire gamut of values that y can take and look at for every value that y can take what is the distribution over x right and then multiply that with the probability that y can take that value right and do this for the entire range of values that y can take right so this integral uh would give allow me to ah condition x on y and give me what i mean so what is p of x okay so likewise now i'm going to take this expression so i want p of x given z and u right so i'm going to condition it on x t minus 1. so now we have a probability of x is given by this integral right probability of x even y interpreted y but suppose i want to do probability of x given z right now i can write that as integral of probability of x given y comma z times probability of y given z d y right and so of course i have to take the integral over here right i can take the integral of that and that gives me the the the expression that i want okay so that's exactly what we are doing here now and once uh now that i want probability of x given z comma u i am going to do probability of x t given x t minus 1 z 1 to t minus 1 u 1 to t times probability of x t minus 1 given z 1 to t minus 1 u 1 to t minus 1 t x t right so if i assume that we actually have the markov property right so some of these things can get simplified so how do i simplify with the markov property notice that i said the markov property applies whenever i know x t minus 1 right if i do not know x t minus 1 i have to condition on the entire history but if i know x t minus 1 right i can get away with ah throwing away the history so since i have x t minus 1 here right i can throw away the history and then i can simplify that to probability of x t given x t minus 1 i still need u t because u t comes after x t minus 1 so i need x t minus 1 and u t that gives me this the first expression using the markov property right the second expression stays the same right its probability of x t minus 1 given z 1 to t minus 1 u 1 to t minus 1 d x t minus 1 right but then if you think about it what is this expression probability of x t minus 1 given z 1 to t minus 1 and u 1 to t minus 1 right if you think about it that's exactly what our belief expression was right belief x t is probability of x t given z 1 to t u 1 to t now belief x t minus 1 would be probability of x t minus 1 given z 1 to t minus 1 u 1 to t minus 1 right so that's so that is essentially our belief expression so i can i can replace this right with belief x t minus 1 right 
so now my bell bar expression has now become something very simple my bell bar x t equal to integral probability of x t given x t minus 1 comma u t so this is essentially my motion model right that is essentially my um right so that is my motion model right so that is my motion model and then i have my belief x t minus 1 d x t minus 1 right so my bell bar is essentially integral of the motion model times the previous belief taken over all values that for the previous state right so that's that's basically it right so now once i have bell bar now next thing would be to do the measurement update right so here the measurement update i go from bell bar go from bell bar to bell right so let us look at the bell definition again so bell x t is p of x t given z 1 to t u 1 to t right so that's essentially what del x t is now i am going to use the bayes rule all of you are familiar with the base rule probability of a given b equal probability of b given a times probability of a divided by probability of b i'm pretty sure all of you are familiar with this rule right now using that i am going to rewrite this right so if you think of z 1 to t z 1 to t is actually z 1 to t minus 1 comma z t right so i am going to bring that out here right so i am going to look at probability of z t given x t comma z 1 to t minus 1 u 1 to t times probability of x t given z 1 to t minus 1 this is essentially my p of a part right so i have this is my p of b given a this is my p of a which is probability of x t given all the previous measurements and all the actions i have taken so far and divided by probability of b part which is probability of z t given z 1 to t minus 1 u 1 to t so this is essentially applying bayes rule here assuming that assuming that i am looking at probability of x t given z 1 2 t minus 1 comma that t comma u 1 to t so this is assuming that that is my expression as opposed to x t z 1 to t right i am just splitting up z 1 to t as z 1 to t minus 1 z t and u 1 to t so i am assuming that my z t is my b right and my x t is a and the rest of the variables are all conditioning variables right so therefore i use the i apply base theorem and write it as probability of z t given x t z 1 to t minus 1 u 1 to t times probability of x t given z 1 to t minus 1 u 1 to t divided by probability of b okay now if you think about it right i can again apply the markov property because my x t has now gone to the conditioning part right and assuming that i am what is the i'm asking the question what is the probability of z t given x t as soon as i ask that question i can assume the markov property right and i can simplify the first term in the expression as probability of z t given x t i do not have to worry about the history after that right so this is what we said in the measurement model right so this guy is essentially my measurement model okay and what about the rest of it is the probability of x t given z 1 to t minus 1 and u 1 to t and that is exactly my bell bar right so this is my bell bar update right that is my bell bar function so i have it i have my measurement model into my bell bar of x t correct and the denominator actually does not depend on x itself right and i can just replace it with some kind of a normalizing factor eta which essentially i could just sum the numerator for all possible values and then and then normalize it so that i get a probability distribution i compute this p of bell of x t right you can think of some kind of a temporary variable right for every possible value that x t can take i 
compute the numerator and then i divide all of these by the sum of the all the numerators i computed and that gives me the probability so that is where the eta is it's some kind of a normalizing factor and i mean technically it is supposed to be probability of z t given z 1 to t minus 1 u 1 to t but it is little hard to compute and therefore we we just use this normalization trick right so instead of actually computing the the transition on the observations alone right without worrying about the states i use known quantities what are the known quantities i am using i am using the measurement model right let us go back and look at them all right so i am using the right so i'm using the measurement model and i'm using bell bar which we just computed on the previous slide using known quantities again so using both the measurement model and bell bar we are able to compute what is belief without ever computing the ah the denominator right so we just compute the numerators and then normalize across all the possible values that x t could take right now if x t could take continuous values right then the normalization becomes a little tricky and so what we will see in the next few lectures are techniques for making this computation tractable by making assumptions about the form of the uh transition function the the motion model the form of the measurement model and also the representation for the belief right right now i have made no assumptions right i'm making i'm just telling you these are very general probability distributions right as we go along so we'll start making specific assumptions about the transition model we'll start making specific assumptions about the measurement model and about the belief so that we come up with different kinds of tractable computations right now eta is easy for you to compute if you think that x t can take only a small number of finite values right find them small number of values right but if x t can is like a continuous value output right so like orientation it could just take any value or it could be velocity or even x y coordinates right it could be anywhere in the in the in the workspace then computing this eta basically having to compute over all possible values for the belief right it's going to be a little tricky and we need to have some assumptions about how the belief looks like so that i can do the computation tractably |
Introduction_to_Robotics | Lecture_29_Forward_Kinematics_Examples.txt | so yesterday we briefly talked about this problem the forward kinematics of 6x is circulated robot into latex and the first part as I mentioned we were to assign the coordinate frame so we saw that there are I mean if you see success robots so you will be able to identify l 0 to l 6 as the coordinate axis so we first assign the coordinate frame zi l 0 and then we will have L 1 so L 1 is a shoulder roll so we will have this as the z axis that is your Z 1 and then you can have Z 2 in this direction or in this direction whatever the direction you choose so if I choose this direction it will be Z 2 then you have three four and set Y is the tool roll so it will be like this same origin and finally you have zzs so you can identify all those joint axes so this way you can identify all the joint axis and once you have the joint axis then you next you assign the x axis x axis should be normal - I mean identify the origin first the origin will be the intersection of both the z axis set Z 1 and Z 0 z 2 z 1 etc so we will get L 1 then you will get L 2 at the same point so you will actually mark it at some other point somewhere here then you have l3 l4 l5 and finally Elsie's so this is the way how you can get the coordinate axis coordinate origins okay so once you have this origin identified then you identify the assign x axis x else it should be normal to 0 at Z 1 and Z 0 and away from z 0 so you can assign the x axis accordingly and you will get all the coordinate frames as shown here okay so you can identify the coordinate axis okay so once you have this the next part is to get the pH parameters so we assume that theta is variable so for the time being we don't try to find out the actual value of theta first we'll try to find out all de fixed parameters ad and alpha so we can actually have ETA 1 and then what will be a 1 a 1 will be zero okay so this is a B and alpha so a will be 0 B will be this B 1 which is 373 0.4 and alpha is at 0 to z 1 so there is a 90 degree rotation you have to look at with respect to x1 what is the rotation and then you will be able to get the Alpha so this way you will be able to get all the parameters please get alpha 1 alpha 2 alpha 3 that is more tricky others are more straightforward so what will be alpha 1 alpha 1 value 90 or minus 90 plus 90 okay so alpha 1 is basically measured from 0 to Z 1 I shared with respect to which axis X 0 or X 1 X 1 so which is X 1 here so this is X 1 this is X 1 axis you have to look at from this axis this axis and then see how much it has rotated so it has rotated clockwise or anti-clockwise clockwise so it should be minus 90 right so if you look at from look at from x1 so you cut from this so this is the original position and it is rotated like this okay so you look at from here it has actually rotated and that clockwise direction right now I did you get your person once again X 0 to X 1 we are talking about X that 0 to Z 1 Z 0 to Z 1 with respect to X 1 that is what do you need to measure okay alpha 2 you can cross check your answers later so find out what is alpha 2 alpha 2 is Z 1 to Z 2 Z 1 to Z 2 so said one was like this it has actually moved like this and with respect to X 2 so X 2 is downwards but n plus 90 okay find out why it is plus 90 because if you are looking from here it is this was the okay this is your X this is your X this was your Z 0 there's more like this it has more anti-clockwise okay so that is the way how you look at this was Z 0 axis this is 
your X 1 X 1 has mode C 0 as become like this in the anti-clockwise direction therefore it is plus 9 okay yeah now you have to look at Excel to to Z one so that one was actually it has moved like this from from this position there's more like this the diocese and you have to look from here so it is more like this if you looking from here these taxes just move like this okay so it'll be the angle plus 90 right again it is anti-clockwise moment so you'll be have plus 9 so this is the where each one you need to check and then see what is the value of alpha so you have this 1990 0 0 90 0 so that is the way how you get the Alpha because that 2 set 3 set for our parallel that's why you get these three two zeroes and then set for and z5i was rotated by 90 degree set by ends at 6 are same points I mean same direction so it is 0 again and you apply the rules then we will get the d1 then finally you get d6 and you will get a 3 and a 4 a 2 is 0 so this is actually a 3 a 4 and this is B 1 and this is B D 6 so you have all the parameters identified all the D H parameters identifies and once you have this you need to get the D H matrix or the transformation matrix between each coordinate frame so you can actually come find out what is t10 by substituting the values of theta 1 D 1 a 1 alpha 1 so you'll be getting all the transformation matrices only thing you cannot down here is that we assume the home position here we assume the home position here and for this home position you will be able to find some values for theta that is for the this one position you can find a theta value what is the value so you will be able to find out theta 1 to theta 6-4 the given home position yes it's a variable when you change the position you may get a different data but at this for the given position you will be able to find this theta so theta 1 is the angle between X 0 and X 1 you can see there is an angle between X 0 and X 1 that is 90 degree plus 1 minus you have to check based on the Z 0 axis so with respect to Z 0 so with respect to Z 0 you have to look at that this is actually more so from here it has mode inside so it has moved like this so you need to find out what is the angle of this one is named theta 1 so that way you will be able to get the home position so if I call this as the home position of the robot you will be able to find out the home position theta as 90 - 90 90 0 nineteen's so this is the home position theta values theta is a variable but for the given configuration that we are used for hitting the coordinate frames there is an angle between x0 and x1 similarly X 1 2 X 2 etc so that is in US 90 - 90 90 0 90 see you will be able to where you can verify this later how you are getting these values 90 - 90 etc why we need this information is that we can actually cross-check our forward kinematics by looking at these values substituting these values we can cross-check our forward formulation where you are doing it correctly or not that's why we need to have this information okay alright so now let us do the forward relationship let us make the forward relationship so what I will do I look at this elbow as the point where you are bifurcating it's for their convenience so that you can act it 3 & 3 joints so elbow I will take it as the elbow to base I will first consider and then tool to elbow then finally tool to base I can consider so let us find out the so if you want to find out elbert the base transformation so this is p10 he sees he 2 1 and this is P 3 2 we simply substituted the values of theta 
alpha a and B into this and we got this information so I do not substitute that I mean value here but this put it has a 3 and B 1 substitute the value of alpha in order to get the matrix so now if you multiply these three matrices you will get the L board obeys relationship as this you can cross check later I'm going to do the multiplication here but this is somewhere where you normally make mistakes there's either multiplication or substitution you may make some small mistakes but then before you proceed further you need to cross check whether your formulation is correct or not if your formula is wrong here then the next part also you'll make the mistake you may end up with a completely wrong forward relationship this any question you okay you okay I mean I said mentioned it's not unique but then you need to make sure that you follow some unmentioned so that did not be okay so how do you cross check whether you have correct formulation is correct so what we need to do is to look at the theta values theta 1 theta 2 theta 3 for the home position so we took this as the home position so this is a elbow point we had taken and now we are actually finding out the base to elbow as the relationship here substitute the value of theta 1 theta 2 and theta 3 into this relationship pxpypz edge so this represents the position of elbow with respect to the base frame so this is the x position of the elbow this is the Y position of the elbow and this is the Z position of the elbow now substitute the value of theta 1 theta 2 that is 90 minus 90 and 90 I think so substitute the values of theta 1 90 data to 90 and theta theta 2 minus 90 and theta 3 90 into this equation and find out what will be the value of P X when you substitute value of theta 1 theta 2 theta 3 into this what will be the value of P X what do you get as px yes theta1 theta2 theta3 90 so does it become 0 this will become 1 right so you'll get px is equal to a3 right and then you will get P y equal to 0 and PZ will be because this will become 0 this will become 0 cos theta 3 therefore this will be V 1 that actually tells you that the position of elbow with respect to the base frame in the home position is a 3 away from the origin of base which says that this distance is a 3 so this plea tells that ok how do you got the elbow from the base is a 3 which is correct as per the diagram it is correct py is 0 because it isn't that exit axis y-coordinate is 0 and you have Z coordinate is basically be one elbow Z coordinate is b1 which is correct because as per the home position we assume this position and elbow is at a 3 and B 1 xn z then I extended coordinates so shows that the formulation what you already got this corrects so your angles whatever the next parameters you assumed and ordinate coordinate frames you assign everything is perfect and that is why you are actually able to cross-check your formulation so this way you can make sure that your formulation is correct so whenever you do a forward kinematic relationship at some point basically in the first three joint you do first and cross check whether you are getting it correct and in C as per your composition then you are able to verify it and that you tell you that you are parents I [ __ ] won't be available no no I say you cannot bury because and say consecration is given for a robots also normally unless it is a reconfigurable robots okay we are not talking about the reconfigurable robot we're talking about an industrial robot which is already designed you already have a design for it and 
you cannot then you cannot change their joint axis arbitrarily no no no no you cannot do that that zero is always along the joint axis okay and that joint X is already fixed for a given configuration because you have a motor attached to join like this and you are rotating in this direction only you cannot say that I'll change the motor to this direction water is not going to be change in the wheel configuration so the joint axis is always fixed for a given configuration oh you mean why you are saying that why don't you take joint axis along x axis oh then the whole formulation will be then you cannot follow these formulation you need to have a different formulation for doing lines no no then you don't call this new H parameter because the H parameter says that you are saying that joined axis is Z then you get the parameter then only we call just D H parameters right so D H parameter is has a business of that formulation only you are telling this is the BH parameters so if you say that again I'll assign joint axis by X Y u RZ then you cannot say that okay DZ DX where I mean then it is a different we need to have a different formulation for doing that but this is the standard formulation that you can use to get the D H parameters if you say that on my joint axis is not along Z then no more this B's formulation is values okay then you need to have a different way of approaching it that is complex that's why a standard form - has been made no no it will be more complex you not have that kind of fact because then it we are becoming arbitrarily assigning joined axis right then you cannot really formulate a standard no why X or Z why not why no see all this assignment comes after the first assumption of it exists then if you are saying that is not the taxes then your alpha is not defined with respect to x axis or that axis X 0 y 0 ZT raised again you can decide what you want to have each way you want to have for origin 3 no no non si no you should understand one thing see we are at theta is a variable alpha we are defining as the rotational axis or the rotational axis are oriented okay so you are saying that your you have one rotational axis Z and then how these rotational axis is related to the next rotational axis is basically the Alpha that is how they are defining alpha the angle between two joined axis and now you are saying this is that and the next one is X then you have to design define the Alpha is between Z and X with respect to some other axis and that'll actually makes more upon a complication of the system instead of having a standard formulation so this gives you a more standard formulation then the other one is more arbitrary so you don't have any control over the angle because each one you have to look at which axis you are defining it as drawing then you to defend that esta alpha so this is much more easier I don't know why you would say that that is more easy yeah so you're sailing between 3 & 1 instead of begin to know then why three why not for that question we'll come now so why three it can be six and one right directly prolly we can discuss this later I think you have a different view so probably we will discuss it separately can beat me and they will get displaced okay yeah okay so this is how you cross check then you can make sure that whatever the sub substituted they're actually getting it as de 1 so L bought the base composition again see though this is your elbow and so this is a 3 and this is B 1 and now we know that this position of the elbow is the coordinate of 
this is a 3 0 D 1 so this way you can cross check your position information and you can check your orientation also you will see that the x axis of the third frame or the elbow frame is aligned with the origin frame origin of origin of X 0 of the origin and therefore you will see this is 1 0 0 the vector is aligned along the same direction both x axis so it will be 1 0 0 now you will see that Y the y axis of this this is the y axis of elbow and that is along the z 0 axis so you have this J 0 0 1 here yes why is aligned along the z axis so I take it this is 0 wrong one sliding vector and then you will see that the z3 that is the z axis of the elbow is opposite to the Y of the origin therefore you will get it n 0 minus 1 0 so this way you are sure that your formation is correct what were the joint angles you assumed the alpha you assumed a 3d G all the parameters are correct up to this point there is no problem so you can go ahead so that is the confirmation that you can have a cross-check you can have with your formulation okay all right so now we can do the the next three that is the tool to elbow you can find out this transformation again using the same three transformation matrices I mean the similar transformation matrices we can get this s elbow to elbow s this so you find the individual transformation matrix and then multiply the three doodlenet transformation matrices then you get the tool to elbow transformation s please so as I mentioned 4 5 represents C 4 5 represents theta 4 plus theta 5 cost cos theta 4 plus theta phi now again if you are interested you can actually cross check this once again but with respect to this you have to cross check you will see that the x coordinate as per this is this one BF 4 this is air 4 and this is V 6 so you will see F 4 plus DC CH will be its x coordinates so if you substitute the value of theta for n theta phi you will find this as f 4 plus B 6 and others will be 0 0 because Y & Z will be 0 here you will be able to cross check it again substituting the value of theta for theta by n fetuses or you can do it as a final one you want to find out finally you want to find out what is heat tool to base so multiply these two matrices that is the elbow dope base and tool to elbow multiply you will be getting it as a and their four by four matrix which gives the position of the tool with respect to base frame and when you do this again it's a multiplication of matrices you will get it as this way so this is the position vector just separating the position and orientation so the position vector will be like this px py and PZ this is what you get in the last column of the four by four matrix and the other three will be the orientation will be this one our our matrix will be this separately I given so this is your normal vector this is the sliding vector and this is the approach vector and this is the PX of the tool with respect to base frame py of the tool is respect to base name and pz again you can cross check your formulation because you have this ETA 1 2 theta 6 identified for the home position substitute the values of theta 1 to theta 6 theta in this case only theta 5 substitute here in this matrix to get the px py PZ and check whether you are getting it correctly so if you do this if you are done it properly your PX should be equal to a 3 plus a 4 plus B 6 because that is the x-direction length of the manipulator and your P Y should be equal to 0 and these it should be equal to d1 by substituting the value of theta 1 to theta 6 composition values of 
theta 1 to theta 6 in this matrix if you are getting this as the relationship that means you have done it properly and you are formulation is correct you are not done inch I mean you are not getting it that means something is either the join gang look taken for the home position or the Alpha I will taken or the origin that you assumed that may be something wrong that is a cross-check that you can do to make make sure that it's okay so if you want to cross-check you can cross check it here and you will get this as a 3 plus a 4 + B 6 0 and D Y so you can see that this is 3 plus here 4 + B 6 the x coordinate with respect to base frame of the tool and the Z coordinate with respect to this is v6 so your formulation is correct and therefore what are the parameters you assume that the parameters you calculated 13 is correct you want to have one more cross-check you can check the orientation of the tool frame so this is the tool frame how this tool frame is oriented with respect to the base frame can be cross kept and you will see that this z axis is aligned with the x-axis and therefore you will see this is 0 0 1 and x axis is aligned with the z axis and therefore you will see this 1 0 0 and then this is y axis is along the Y of this negative Y tilde 0 minus 1 0 so your formulation is correct you have got the forward kinematics correctly so that is the shaking that you can do when you have a and you are doing in forward in a magic analysis of element whatever may be the manipulator consideration assume a home position and then for that home position you are saying the coordinate frame get the D H parameters find out the home position theta and then substitute this in the forward formulation up to the wrists or up to the tool and then cross check and finally make sure that what you are getting is correct if you are not getting it correctly something s gone wrong that is the final then you can go check and then find out what has gone wrong okay any questions on the forward kinematics so we will not be doing any more example for forward kinematics we'll be moving to the inverse problem now okay fine so two things one is about the tutorial tomorrow so there are two questions from the previous tutorial which you could not complete along with that you will be doing this also in the next class that is tomorrow 9 o clock 8 o clock you will be doing the forward kinematics of this and then trying to verify your answers with the soft form position values so it's again a simple manipulator which you already we have done it in different ways so not no complications we can get it straight forwards so try to finish try to do this and then submit the and says in the next class so this is your position what you are interested good position you to find out the tool to base matrix as a four by four matrix and get the P X py and PZ as a function of D H parameters so this will be done in tomorrow and the class so please come prepared don't say I did not get enough time to do it tutorial will help you to solve it but you should come prepared with basic things that we already discussed in the class okay so there will be an assignment given to you okay so the assignment will be the questions will be released by today or tomorrow you have to submit by next Wednesday next Wednesday 18th or so basically there is a one problem on the forward kinematics of a industrial manipulator and that the second one will be a code so you had to write the code for the forward kinematics of any manipulator a general codes basically how do 
you get the da's parameters I mean once the BH parameters are given to you how do you actually find out the how do you write a code for the forward kinematics and then getting the position yeah pardon you can use don't use MATLAB any other software can be done this again MATLAB will be the easiest thing to do I don't want you to write it in MATLAB you can write it I mean Python or any other language so finally you do use that one for a standard manipulator and then show the results also the using the code how do you get the resets so basically that is one which will help you to find out the workspace of a manipulator once you have so once you do substitute the value of theta for any value of theta you should be able to get the position and orientation of the tooltip so by sleeping the joint angles you will get all the points that the robot can reach then you will get that as their what space of the so questions will be released by tomorrow you have to submit the result I mean insist in Moodle by 80s okay so just to test how much you understood whatever the DB discussed in the last few classes there are few questions for you you have to answer these questions when the arm of a robot s1 travel you tend to prismatic joins its workspace will be first RTP cylindrical right yeah should not think this much if the axis used to intersect in used to orient the tool intersect at a points then the robot is said to have kind of reached what kind of a wrist they're intersecting at one point all the joint axes are intersecting at one point then you call it yes spiracle wrist right so it's known as s very good wrist what will be the y-axis yeah on red all right okay these are homogeneous transformation matrix okay so T is basically R if this is P what is T inverse R transpose minus R transpose P right so that is the inverse of these magnets what is alpha or how is alpha defined alpha is defined as the angle between that can set K minus 1 measured with respect oh yes okay okay so that is the Alpha so now you know theta what is theta what is the ended what are the join parameters theta end deterrent be linked parameter Y n alpha okay okay so you know all the answers so we'll have an assignment so question will be available so this is 19 much I am not sorry 19 much is Thursday right so it make it on 18 months yep 19 your exam starts oh I don't want to do it probably 18 we can finish it in March you to submit last signs okay so I thought of starting inverse problem today but I think we don't have much time so what we are doing trying to do is so far what we did was you see if you know the joint parameters theta what is the P that you can reach is the forward problem that is if you have a manipulator and if I know these values of theta 1 to theta 6 I know where this tool will reach that is basically the forward problem so I can actually comment the joint angle and move it to a particular position so that is what we and we know when we comment from joint angles we know where this will tweets so we can find out the position but most of the problems in industrial robots is not that one I have a workpiece here I want the robot to go and pick up this workpiece and I know where the workpiece is at present so what I know is that there is a workpiece here and I know the position and orientation of this workpiece I want to know what should be these joined angles to reach this position so what should be this what should be the joint angles to reach this position so that the robot can pick up the object so the object 
position orientation is not I want to calculate what should be my joint angles to reach here that is basically the important problem in industrial robots or most of the industrial problems are like that where the robot reaches is not that critical but how can I reach an object and pick up the object or if I want to place this object here I do I know where to place the object with respect to the base of the robot I know where to place the object but then to place this object here what should be the joint angle to reach there that is what I don't know so this problem of getting theta if the P is known is known as the inverse problem so the forward problem was theta is known what is P is the forward problem but then P is known theta is not known is an inverse form and this is much more complex compared to the forward so forward now appears to be more straightforward if you have the you have standard formulation we just apply the standard formation to get an inverse solution sorry forward problem third but for inverse there is no standard solutions each robot has to have its own inverse inverse solution there is no standard formulation like a forward problem you have to look at the forward of the particular manipulator and then see how to solve for the inverse so it becomes much more complex and another problem is that suppose there is an object here and I want to reach the object I can calculate joint angles to reach here but I can reach like this also so I have two ways to access this or sometimes multiple ways to access this then it becomes more complex because I will be have multiple solutions so how do I actually tackle it the multiple solutions as well as the manipulated specific solutions for the inverse is then challenged and therefore the inverse problem is complex are much more complex than the forward problem and there is no standard formulation like divs in this case you will be using the edge to solve it but there is no standard way to solve all the robots with the same kind of a solution forward we have the same solution for all kinds of robots where it is Phi axis x axis or any configuration you can use the same DX formulation and get the solution but inverse that is not possible you need to have manipulator specific solutions to solve the inverse so we will discuss this in the next class and then we will talk about velocity relationship also me okay so we will stop here so come prepared for the tutorial tomorrow and check your Moodle for the questions on same inputs thank you |
Introduction_to_Robotics | Lecture_102_Markov_Localization.txt | So, in last lecture we looked at the taxonomy of the localization problems. And, this lecture we will look at a very simple, the Markov Localization algorithm. So, if you look at this, this is exactly the Bayes filter algorithm except that instead of looking at, just the state, I also have to look at the map here. So, now we already looked at, looking, the, the motion model with respect to the map and we also looked at the measurement model with respect to the map. So, it is nothing more than, there is running the original Bayes filter algorithm where both motion model and the measurement model are going to use the knowledge of the map. So, if you look at their, the, the Markov localization algorithm, it takes as input your previous belief state, your current action, your current measurement and the map. And, it could be potentially the current map. If, if you have, if you have a time-varying map. And, then basically we do the same thing, for all X we obtain bel bar which is taking into account the motion models. So, this is the prediction update. And, then we also get bel by accommodating the measurement. So, this is the correction or the measurement update. And, this gives me the position of X taking into account the knowledge of the map, M. And, then after I have done this update for all the states, it return the new belief state. Just the Bayes filter algorithm. And, this is known as the Markov localization problem. And, in fact, if you roughly think about it, the Markov localization problem can address any of the three local and global problems we talked about. So, the first one is the Global localization problem. We just say that the belief distribution is unique. So, or if it is the position tracking problem and I will set the belief distribution to be something very very focused. If initial pose is completely known, then I will set my bel X naught to be 1, if X naught equal X naught bar, which is where the, which is the right initial position. So, so I know that the robot is in X naught bar, therefore I will set bel X naught, bel X naught bar to 1. And, bel X naught if it is other than X naught bar, I will set it to 0. So, this is the easy way of handling the position tracking problem. And, if you do not know the exact initial position, I only know the initial position around the small window and then I can just treat it like a narrow Gaussian, where I have my sigma naught, which is a very very narrow initial belief. And, my X naught is, X naught bar is the mean. So, in the first case I assumed it was exactly at X naught bar. In this case I am assuming that okay, it is somewhere around X naught bar, not too far away. Then how far away is determined by sigma. And, then again, this is assuming that there is just one correct location and then I do the position tracking after that just the normal belief update gives us the position tracking equation. So, let us suppose I want to do global localization. I just say, it is just the uniform distribution. My bel X naught is a uniform distribution. So, I just say that it is the, one over size of X and starts from there. And, the regular updates will essentially give me the global localization equation. And, if I have a belief distribution like the particle filter case which can take care of multiple hypotheses, then I can do proper global localization. 
And if I have a Gaussian filter, then it will be a little tricky because it going to very quickly narrow down on a single hypothesis, which might be wrong. I mean, you still might have some noise around the hypothesis but it will, find it hard to actually fit the right distribution quickly. And, then what about the kidnapped robot problem. So, it is some kind of a partial knowledge. So, like bel X naught can actually be something arbitrarily. So, so, in fact I can, if you remember this figure I was showing you, I was in a world with 3 doors. And, if I say that my first sensor reading is that I am next to one door. So, I can always say that I will start of near a door, in this case I can say, the density has some value near the door except in other places it is all 0. So, that could be a way of accommodating partial knowledge. And, so if I want to accommodate the kidnapped robot question, I have to make sure that my belief distribution is such that my updates, updates never make the belief anywhere about to 0. So, here is the example of the Markov localization algorithm. This is running in a global localization setting. Therefore, when it starts, the belief is anywhere in the world. So, that is the uniform distribution over the span of the world. Next, what happens, what happens, it senses that it is next to a door and therefore the belief distribution becomes one of these 3 locations because only those 3 places are likely to activate the door sensor. This is the probability of the door sensor getting activated in these locations and therefore, I basically put into these 3 locations. And, I am able to do this mainly because I have the map. Because I know where the doors are in the map and therefore as soon as I sense the door, I can put myself in these three places where the doors are in the map. And, then what I do is I move. So, I move to the right. So, that is the action that has happened and I move to the right. If you remember, we saw this already earlier, in the, in the case of the filter problem. But, now I am just telling you that that it is a localization problem as well. There we did not think about the map. Here we have to have the map. So, the bel bar tells me that I move. So, basically what happens is these 3 peaks, just move to the right by some amount. And, then they also get, spread out because my motion model has some noise. And, then finally I sense a door again and the door model has not changed. The door model still is the same thing. And, so, given that these were the three places where my motion model put me. And remember, we started off with the uniform distribution. This has actually not gone to 0. There is still some, some probability that I could be anywhere in the world. Because I want to make sure that I can allow for the kidnapped robot problem. And, therefore now I make a measurement. These are the 3 places where there is a door. And, then what happens is I combine the bel bar and the measurement. And, now I am more or less sure that this is where I am right. And, these places, you know, kind of get dampened down. So, now what happens is the robot start moving further and further. And, none of these measurements are enough for me to make any refinement. These measurements essentially just tell me a wall. I mean, and wall could be anywhere. There are many, many places where I could sense the wall and so it is not really telling me much. And, essentially it is just that my movement model noise getting added. 
Therefore, from a very sharp distribution here, I basically go down to a distribution that is kind of more spread out and not as peak. Because my motion model keeps diffusing that track distribution a little by little. So, this is essentially the localization problem. It is slightly different from the Markov, from the Bayesian filter problem because localization here is done with respect to the map. And, it does not look very different for you because the sensor model accommodates the map already. And, the movement model and sensor model are going to accommodate the map. And, here the movement model really does not depend on the map. So, because I have not tried to open the door. So, that is, that is basically the Markov filter algorithm in operation here. So, if you think about it, we talked about multiple kinds of maps. So, we talked about feature based maps and we also talked about location based maps. And, where we looked at occupancy grid maps as a location based map. If you remember, feature based maps; they were collection of features or objects and their properties; where the properties could include a position as well, right other than various other features of the object. And, so the localization problem, the algorithms as we have been seeing them so far; the Bayesian filter algorithm, as we have seen so far are amenable for working with location based map, especially grid based map. They are great for working with occupancy grid maps. When we start moving to feature based maps, we have to do a little bit more work in order to do localization. Sometimes, actually the feature based maps are more powerful for it to the localization because you can localize yourself with respect to these features. In such cases what we do is instead of looking at the raw sensor measurements, we try to extract features from the measurements. So, we, we take the raw sensor measurements and we try to extract certain features from the measurements. So, the features could be something like, what is the location of the door? Or what is the location of the table in the environment? So, I know that there is, the map consists of 5 tables, 3 chairs, 4 doors and 2 windows or something like that. Now, instead of saying that, I have these in in in like a grid model and I am going to localize, I could potentially just use these in terms of the landmark or or or feature based model, where each object is like a landmark. And, I have these features corresponding to these landmarks. Now, if I know for sure, what sensor feature corresponds to what landmark, it is much easier for me. So, if I know that, the reading coming from sensor 3 is actually sensing door 2. How could this happen? Let us say that doors have numbers on them and my sensors are cameras. So, I could look at the door and say, that is door 2. So, I have this correspondence. So, I know exactly or or or my features could be a meeting beac , like, like a radio beacons, or Bluetooth beacons. They are emitting signals in the, in the environment and as soon as they receive a signal, I know which Bluetooth beacon is a meeting that signal. So, I am going have multiple features. If you remember, I was going to have multiple features, and then what I do is, I get the sensor reading, which is Zt. From the Zt, I am going to compute these feature values. And, to make sure that I am assigning the right feature value or the right post to the right landmark, I am going to maintain what is called the correspondence. So, the correspondence here is something like this. 
So, cit means that the the ith sensor feature I have computed at time t. I am computing f1t, f2t, bla bla bla. So, the ith feature I compute at time T with the value it takes tells me what is the landmark. We remember, we could have one to n landmarks on the map, right. Each landmark has this feature corresponds it. Let us say that I computed distance to a door is 5 meters. Let us say, so, may be our distance to a door is 1 meter. Let us say, I have computed distance to a door is 1 meter. Which door is this? Is it door 1, door 2 or door 3? So, the first distance measurement I computed is distance to the door is 1 meter. And, suppose this is door 3, my c1t will be 3. So, what is, what are we doing here? So, my f1t, my f1t is 1 meter. My c1t is 3. What this mean is, my distance to door number 3 is 1 meter. Is it clear? So, it is not always the case that my first sensor reading gives me distance 2 to door 3. It could be some other time I come in opposite direction, say, my fifth sensor reading might be giving the distance to door 3. In which case my c5t will be 3 and c5t would be, say 1 meter, 2 meter, or whatever is the distance to the door. So, capital N here is the number of landmarks in the map and the N plus 1, the N plus 1, is essentially to say, for whatever reason I do not know what ,what this feature corresponds to. There is some landmark that I am not able to map this feature to. And, therefore I will put it at the N plus 1th value. Suppose, there are 10 landmarks and if my c variable says that its value is 11, that means, that for that particular feature say, c5t is 11. That means the fifth feature; I do not know what it has computing. So, that is what I mean by assigning 11 to it. So, this is exactly what this line is saying. If, if the value that cit takes, say some j, is actually less than or equal to N, then ith feature at time T corresponds to the jth landmark in the map. Suppose I have 3 doors. Let us say that third feature, of first feature c and c1t is 3. Then the first feature corresponds to the third door. And, when cit is equal to N plus 1, I do not know what it is. So, this is essentially the problem of the correspondence. Now, if I know the correspondence, right, there is a hardly straight forward adaptation of the Bayesian filter algorithm. And, if we look at the book, they have given you all the, the worked out the full example with extended Kalman filter on how to accommodate these correspondence values into your localization model. So, basically the, the, the way you compute the belief, updates changes. So, you do this with respect to the, the observation, the features that you have computed and the distance to those features and so on. So, some kind of a triangulations is what you do to accommodate the measurements. The motion model, more or less stays the same. But then what happens if I cannot give you the correspondence? Here I am assuming somebody has given me the correspondence, in the, the first part. If I cannot give you the correspondence, it is challenging. But, this is usually the case, right. I mean correspondence can rarely be determined with certainty. So, (corres), sometimes, sometimes, I will think, I will be thinking this is door 3. If the doors do not have numbers on them then I am doomed. So, I do not know whether it is door 1, door 2, or door 3. I do not know what it is. Or Bluetooth beacons without, I mean, not Bluetooth beacons; just some bouncing bombs of my ultrasound, then I really do not know what is actual identity of the landmark. 
In such cases, you also have to estimate the values of the cit s. So, you not only have to look at the value of X, but you also have to have some kind of an estimate for the values of cit. And, so, there are the multiple ways in which you can do it. You can also, you can have a distribution over cit and use that for updating your belief, bel Xt. Or you could estimate a point value, say, something like a maximum value estimate. So, it is the most likely value of the correspondent and then you say that, this is the value; I am not going to look at the noise. And, then use that for making your updates. Now, you can see why kidnapped robot problem can become a reality. Suppose, I say, I think it is door 2, now correspondence assignment tells me it is door 2. But it is actually door 3. Or maybe I have made a mistake in my estimating the correspondence variable. I might have actually think I am in a very different part of the world for a few updates. And, I might suddenly for sure know that it is door 2 then I know as far as the robot is concerned, 'hey, what? I was getting readings from door 3 all this while; suddenly I am getting a reading from door 2. I, do not know where I am?". So, you might want to account for these kinds of egregious errors. So, so, that is a reason why you look at the kidnapped robot problem as a special case of global localization. So, this gets, it gets slightly more involved. Because you need to have a separate estimation procedure that is running for the correspondence problem as well. And, so once the, once you estimate, once have some kind of, either a point estimate or a distributional estimate for the correspondence, you can then use that in the previous algorithm that we spoke about in order to estimate the location, in order to update the belief actually. So, notice that once I know the correspondence, it is going to affect how I go from my bel bar to bel. So, bel bar is the motion model. So, that gives me some noise in terms of the prediction that I am making. And, to correct the prediction, I am going to use these correspondence values and then map my location with respect to the known features of these landmarks. These landmarks, remember in the map, if I know that I am so far away from landmark X, and so I am 1 meter away from the door, then there is only certain part of the state I could be in. Because I know exactly where the door is. So, the noise is in that 1 meter path, how far away I am from the door. So, these are essentially some small modifications that you make to the Kalman filter of the extended Kalman filter algorithm in order to accommodate this feature based map. But the bigger challenge is when we have to look at the unknown correspondence problem. |
MIT_6S191_Introduction_to_Deep_Learning | MIT_6S191_2020_Introduction_to_Deep_Learning.txt | hi everyone, let's get started. Good afternoon and welcome to MIT 6.S191! TThis is really incredible to see the turnout this year. This is the fourth year now we're teaching this course and every single year it just seems to be getting bigger and bigger. 6.S191 is a one-week intensive boot camp on everything deep learning. In the past, at this point I usually try to give you a synopsis about the course and tell you all of the amazing things that you're going to be learning. You'll be gaining fundamentals into deep learning and learning some practical knowledge about how you can implement some of the algorithms of deep learning in your own research and on some cool lab related software projects. But this year I figured we could do something a little bit different and instead of me telling you how great this class is I figured we could invite someone else from outside the class to do that instead. So let's check this out first. Hi everybody and welcome MIT 6.S191 the official introductory course on deep learning to taught here at MIT. Deep learning is revolutionising so many fields from robotics to medicine and everything in between. You'll the learn the fundamentals of this field and how you can build some of these incredible algorithms. In fact, this entire speech and video are not real and were created using deep learning and artificial intelligence. And in this class you'll learn how. It has been an honor to speak with you today and I hope you enjoy the course! Alright. so as you can tell deep learning is an incredibly powerful tool. This was just an example of how we use deep learning to perform voice synthesis and actually emulate someone else's voice, in this case Barack Obama, and also using video dialogue replacement to actually create that video with the help of Canny AI. And of course you might as you're watching this video you might raise some ethical concerns which we're also very concerned about and we'll actually talk about some of those later on in the class as well. But let's start by taking a step back and actually introducing some of these terms that we've been we've talked about so far now. Let's start with the word intelligence. I like to define intelligence as the ability to process information to inform future decisions. Now the field of artificial intelligence is simply the the field which focuses on building algorithms, in this case artificial algorithms that can do this as well: process information to inform future decisions. Now machine learning is just a subset of artificial intelligence specifically that focuses on actually teaching an algorithm how to do this without being explicitly programmed to do the task at hand. Now deep learning is just a subset of machine learning which takes this idea even a step further and says how can we automatically extract the useful pieces of information needed to inform those future predictions or make a decision And that's what this class is all about teaching algorithms how to learn a task directly from raw data. We want to provide you with a solid foundation of how you can understand or how to understand these algorithms under the hood but also provide you with the practical knowledge and practical skills to implement state-of-the-art deep learning algorithms in Tensorflow which is a very popular deep learning toolbox. 
Now we have an amazing set of lectures lined up for you this year including Today which will cover neural networks and deep sequential modeling. Tomorrow we'll talk about computer vision and also a little bit about generative modeling which is how we can generate new data and finally I will talk about deep reinforcement learning and touch on some of the limitations and new frontiers of where this field might be going and how research might be heading in the next couple of years. We'll spend the final two days hearing about some of the guest lectures from top industry researchers on some really cool and exciting projects. Every year these happen to be really really exciting talks so we really encourage you to come especially for those talks. The class will conclude with some final project presentations which we'll talk about in a little a little bit and also some awards and a quick award ceremony to celebrate all of your hard work. Also I should mention that after each day of lectures so after today we have two lectures and after each day of lectures we'll have a software lab which tries to focus and build upon all of the things that you've learned in that day so you'll get the foundation's during the lectures and you'll get the practical knowledge during the software lab so the two are kind of jointly coupled in that sense. For those of you taking this class for credit you have a couple different options to fulfill your credit requirement first is a project proposal I'm sorry first yeah first you can propose a project in optionally groups of two three or four people and in these groups you'll work to develop a cool new deep learning idea and we realized that one week which is the span of this course is an extremely short amount of time to really not only think of an idea but move that idea past the planning stage and try to implement something so we're not going to be judging you on your results towards this idea but rather just the novelty of the idea itself on Friday each of these three teams will give a three-minute presentation on that idea and the awards will be announced for the top winners judged by a panel of judges the second option in my opinion is a bit more boring but we like to give this option for people that don't like to give presentations so in this option if you don't want to work in a group or you don't want to give a presentation you can write a one-page paper review of the deep learning of a recent deepening of paper or any paper of your choice and this will be due on the last day of class as well also I should mention that and for the project presentations we give out all of these cool prizes especially these three nvidia gpus which are really crucial for doing any sort of deep learning on your own so we definitely encourage everyone to enter this competition and have a chance to win these GPUs and these other cool prizes like Google home and SSD cards as well also for each of the labs the three labs will have corresponding prizes so it instructions to actually enter those respective competitions will be within the labs themselves and you can enter to enter to win these different prices depending on the different lab please post a Piazza if you have questions check out the course website for slides today's slides are already up there is a bug in the website we fixed that now so today's slides are up now digital recordings of each of these lectures will be up a few days after each class this course has an incredible team of TAS that you can reach out to if you have any 
questions especially during the software labs they can help you answer any questions that you might have and finally we really want to give a huge thank to all of our sponsors who without their help and support this class would have not been possible ok so now with all of that administrative stuff out of the way let's start with the the fun stuff that we're all here for let's start actually by asking ourselves a question why do we care about deep learning well why do you all care about deep learning and all of you came to this classroom today and why specifically do care about deep learning now well to answer that question we actually have to go back and understand traditional machine learning at its core first now traditional machine learning algorithms typically try to define as set of rules or features in the data and these are usually hand engineered and because their hand engineered they often tend to be brittle in practice so let's take a concrete example if you want to perform facial detection how might you go about doing that well first you might say to classify a face the first thing I'm gonna do is I'm gonna try and classify or recognize if I see a mouth in the image the eyes ears and nose if I see all of those things then maybe I can say that there's a face in that image but then the question is okay but how do I recognize each of those sub things like how do I recognize an eye how do I recognize a mouth and then you have to decompose that into okay to recognize a mouth I maybe have to recognize these pairs of lines oriented lines in a certain direction certain orientation and then it keeps getting more complicated and each of these steps you kind of have to define a set of features that you're looking for in the image now the key idea of deep learning is that you will need to learn these features just from raw data so what you're going to do is you're going to just take a bunch of images of faces and then the deep learning algorithm is going to develop some hierarchical representation of first detecting lines and edges in the image using these lines and edges to detect corners and eyes and mid-level features like eyes noses mouths ears then composing these together to detect higher-level features like maybe jaw lines side of the face etc which then can be used to detect the final face structure and actually the fundamental building blocks of deep learning have existed for decades and they're under underlying algorithms for training these models have also existed for many years so why are we studying this now well for one data has become much more pervasive we're living in a the age of big data and these these algorithms are hungry for a huge amounts of data to succeed secondly these algorithms are massively parallel izybelle which means that they can benefit tremendously from modern GPU architectures and hardware acceleration that simply did not exist when these algorithms were developed and finally due to open-source tool boxes like tensor flow which are which you'll get experience with in this class building and deploying these models has become extremely streamlined so much so that we can condense all this material down into one week so let's start with the fundamental building block of a neural network which is a single neuron or what's also called a perceptron the idea of a perceptron or a single neuron is very basic and I'll try and keep it as simple as possible and then we'll try and work our way up from there let's start by talking about the forward propagation of 
information through a neuron we define a set of inputs to that neuron as x1 through XM and each of these inputs have a corresponding weight w1 through WN now what we can do is with each of these inputs and each of these ways we can multiply them correspondingly together and take a sum of all of them then we take this single number that's summation and we pass it through what's called a nonlinear activation function and that produces our final output Y now this is actually not entirely correct we also have what's called a bias term in this neuron which you can see here in green so the bias term the purpose of the bias term is really to allow you to shift your activation function to the left and to the right regardless of your inputs right so you can notice that the bias term doesn't is not affected by the X's it's just a bias associate to that input now on the right side you can see this diagram illustrated mathematically as a single equation and we can actually rewrite this as a linear using linear algebra in terms of vectors and dot products so instead of having a summation over all of the X's I'm going to collapse my X into a vector capital X which is now just a list or a vector of numbers a vector of inputs I should say and you also have a vector of weights capital W to compute the output of a single perceptron all you have to do is take the dot product of X and W which represents that element wise multiplication and summation and then apply that non-linearity which here is denoted as G so now you might be wondering what is this nonlinear activation function I've mentioned it a couple times but I haven't really told you precisely what it is now one common example of this activation function is what's called a sigmoid function and you can see an example of a sigmoid function here on the bottom right one thing to note is that this function takes any real number as input on the x-axis and it transforms that real number into a scalar output between 0 & 1 it's a bounded output between 0 & 1 so one very common use case of the sigmoid function is to when you're dealing with probabilities because probabilities have to also be bounded between 0 & 1 so sigmoids are really useful when you want to output a single number and represent that number as a probability distribution in fact there are many common types of nonlinear activation functions not just the sigmoid but many others that you can use in neural networks and here are some common ones and throughout this presentation you'll find these tensorflow icons like you can see on the bottom right or sorry all across the bottom here and these are just to illustrate how one could use each of these topics in a practical setting you'll see these kind of scattered in throughout the slides no need to really take furious notes at these codeblocks like I said all of the slides are published online so especially during your labs if you want to refer back to any of the slides you can you can always do that from the online lecture notes now why do we care about activation functions the point of an activation function is to introduce nonlinearities into the data and this is actually really important in real life because in real life almost all of our data is nonlinear and here's a concrete example if I told you to separate the green points from the red points using a linear function could you do that I don't think so right so you'd get something like this oh you could do it you wouldn't do very good job at it and no matter how deep or how large your network 
is if you're using a linear activation function you're just composing lines on top of lines and you're going to get another line right so this is the best you'll be able to do with the linear activation function on the other hand nonlinearities allow you to approximate arbitrarily complex functions by kind of introducing these nonlinearities into your decision boundary and this is what makes neural networks extremely powerful let's understand this with a simple example and let's go back to this picture that we had before imagine I give you a train network with weights W on the top right so W here is 3 and minus 2 and the network only has 2 inputs x1 and x2 if we want to get the output it's simply the same story as we had before we multiply our inputs by those weights we take the sum and pass it through a non-linearity but let's take a look at what's inside of that non-linearity before we apply it so we get is when we take this dot product of x1 times 3 X 2 times minus 2 we mul - 1 that's simply a 2d line so we can plot that if we set that equal to 0 for example that's a 2d line and it looks like this so on the x axis is X 1 on the y axis is X 2 and we're setting that we're just illustrating when this line equals 0 so anywhere on this line is where X 1 and X 2 correspond to a value of 0 now if I feed in a new input either a test example a training example or whatever and that input is with this coordinates it's has these coordinates minus 1 and 2 so it has the value of x1 of minus 1 value of x2 of 2 I can see visually where this lies with respect to that line and in fact this this idea can be generalized a little bit more if we compute that line we get minus 6 right so inside that before we apply the non-linearity we get minus 6 when we apply a sigmoid non-linearity because sigmoid collapses everything between 0 and 1 anything greater than 0 is going to be above 0.5 anything below zero is going to be less than 0.5 so in is because minus 6 is less than zero we're going to have a very low output this point Oh 200 to we can actually generalize this idea for the entire feature space let's call it for any point on this plot I can tell you if it lies on the left side of the line that means that before we apply the non-linearity the Z or the state of that neuron will be negative less than zero after applying that non-linearity the sigmoid will give it a probability of less than 0.5 and on the right side if it falls on the right side of the line it's the opposite story if it falls right on the line it means that Z equals zero exactly and the probability equals 0.5 now actually before I move on this is a great example of actually visualizing and understanding what's going on inside of a neural network the reason why it's hard to do this with deep neural networks is because you usually don't have only two inputs and usually don't have only two weights as well so as you scale up your problem this is a simple two dimensional problem but as you scale up the size of your network you could be dealing with hundreds or thousands or millions of parameters and million dimensional spaces and then visualizing these type of plots becomes extremely difficult and it's not practical and pause in practice so this is one of the challenges that we face when we're training with neural networks and really understanding their internals but we'll talk about how we can actually tackle some of those challenges in later lectures as well okay so now that we have that idea of a perceptron a single neuron let's start by 
building up neural networks now how we can use that perceptron to create full neural networks and seeing how all of this story comes together let's revisit this previous diagram of the perceptron if there are only a few things you remember from this class try to take away this so how a perceptron works just keep remembering this I'm going to keep drilling it in you take your inputs you apply a dot product with your weights and you apply a non-linearity it's that simple oh sorry I missed the step you have dot product with your weights add a bias and apply your non-linearity so three steps now let's simplify this type of diagram a little bit I'm gonna remove the bias just for simplicity I'm gonna remove all of the weight labels so now you can assume that every line the weight associated to it and let's say so I'm going to note Z that Z is the output of that dot product so that's the element wise multiplication of our inputs with our weights and that's what gets fed into our activation function so our final output Y is just there our activation function applied on Z if we want to define a multi output neural network we simply can just add another one of these perceptrons to this picture now we have two outputs one is a normal perceptron which is y1 and y2 is just another normal perceptron the same ideas before they all connect to the previous layer with a different set of weights and because all inputs are densely connected to all of the outputs these type of layers are often called dense layers and let's take an example of how one might actually go from this nice illustration which is very conceptual and and nice and simple to how you could actually implement one of these dense layers from scratch by yourselves using tensor flow so what we can do is start off by first defining our two weights so we have our actual weight vector which is W and we also have our bias vector right both of both of these parameters are governed by the output space so depending on how many neurons you have in that output layer that will govern the size of each of those weight and bias vectors what we can do then is simply define that forward propagation of information so here I'm showing you this to the call function in tensor flow don't get too caught up on the details of the code again you'll get really a walk through of this code inside of the labs today but I want to just show you some some high level understanding of how you could actually take what you're learning and apply the tensor flow implementations to it inside the call function it's the same idea again you can compute Z which is the state it's that multiplication of your inputs with the weights you add the bias right so that's right there and once you have Z you just pass it through your sigmoid and that's your output for that now tension flow is great because it's already implemented a lot of these layers for us so we don't have to do what I just showed you from scratch in fact to implement a layer like this with two two outputs or a percept a multi layer a multi output perceptron layer with two outputs we can simply call this TF Harris layers dense with units equal to two to indicate that we have two outputs on this layer and there is a whole bunch of other parameters that you could input here such as the activation function as well as many other things to customize how this layer behaves in practice so now let's take a look at a single layered neural network so this is taking it one step beyond what we've just seen this is where we have now a 
single hidden layer that feeds into a single output layer and I'm calling this a hidden layer because unlike our inputs and our outputs these states of the hidden layer are not directly enforced or they're not directly observable we can probe inside the network and see them but we don't actually enforce what they are these are learned as opposed to the inputs which are provided by us now since we have a transformation between the inputs and the hidden layer and the hidden layer and the output layer each of those two transformations will have their own weight matrices which here I call W 1 and W 2 so its corresponds to the first layer and the second layer if we look at a single unit inside of that hidden layer take for example Z 2 I'm showing here that's just a single perceptron like we talked about before it's taking a weighted sum of all of those inputs that feed into it and it applies the non-linearity and feeds it on to the next layer same story as before this picture actually looks a little bit messy so what I want to do is actually clean things up a little bit for you and I'm gonna replace all of those lines with just this symbolic representation and we'll just use this from now on in the future to denote dense layers or fully connected layers between two between an input and an output or between an input and hidden layer and again if we wanted to implement this intensive flow the idea is pretty simple we can just define two of these dense layers the first one our hidden layer with n outputs and the second one our output layer with two outputs we can cut week and like join them together aggregate them together into this wrapper which is called a TF sequential model and sequential models are just this idea of composing neural networks using a sequence of layers so whenever you have a sequential message passing system or sequentially processing information throughout the network you can use sequential models and just define your layers as a sequence and it's very nice to allow information to propagate through that model now if we want to create a deep neural network the idea is basically the same thing except you just keep stacking on more of these layers and to create more of an more of a hierarchical model ones where the final output is computed by going deeper and deeper into this representation and the code looks pretty similar again so again we have this TF sequential model and inside that model we just have a list of all of the layers that we want to use and they're just stacked on top of each other okay so this is awesome so hopefully now you have an understanding of not only what a single neuron is but how you can compose neurons together and actually build complex hierarchical models with deep with neural networks now let's take a look at how you can apply these neural networks into a very real and applied setting to solve some problem and actually train them to accomplish some task here's a problem that I believe any AI system should be able to solve for all of you and probably one that you care a lot about will I pass this class to do this let's start with a very simple two input model one feature or one input we're gonna define is how many let's see how many lectures you attend during this class and the second one is the number of hours that you spend on your final projects I should say that the minimum number of hours you can spend your final project is 50 hours now I'm just joking okay so let's take all of the data from previous years and plot it on this feature space like 
we looked at before green points are students that have passed the class in the past and red points are people that have failed we can plot all of this data onto this two-dimensional grid like this and we can also plot you so here you are you have attended four lectures and you've only spent five hours on your final exam you're on you're on your final project and the question is are you going to pass the class given everyone around you and how they've done in the past how are you going to do so let's do it we have two inputs we have a single layered set single hidden layer neural network we have three hidden units in that hidden layer and we'll see that the final output probability when we feed in those two inputs of four and five is predicted to be 0.1 or 10% the probability of you passing this class is 10% that's not great news the actual prediction was one so you did pass the class now does anyone have an idea of why the network was so wrong in this case exactly so we never told this network anything the weights are wrong we've just initialized the weights in fact it has no idea what it means to pass a class it has no idea of what each of these inputs mean how many lectures you've attended and the hours you've spent on your final project it's just seeing some random numbers it has no concept of how other people in the class have done so far so what we have to do to this network first is train it and we have to teach it how to perform this task until we teach it it's just like a baby that doesn't know anything so it just entered the world it has no concepts or no idea of how to solve this task and we have to teach at that now how do we do that the idea here is that first we have to tell the network when it's wrong so we have to quantify what's called its loss or its error and to do that we actually just take our prediction or what the network predicts and we compare it to what the true answer was if there's a big discrepancy between the prediction and the true answer we can tell the network hey you made a big mistake right so this is a big error it's a big loss and you should try and fix your answer to move closer towards the true answer which it should be okay now you can imagine if you don't have just one student but now you have many students the total loss let's call it here the empirical risk or the objective function it has many different names it's just the the average of all of those individual losses so the individual loss is a loss that takes as input your prediction and your actual that's telling you how wrong that single example is and then the final the total loss is just the average of all of those individual student losses so if we look at the problem of binary classification which is the case that we're actually caring about in this example so we're asking a question will I pass the class yes or no binary classification we can use what is called as the softmax cross-entropy loss and for those of you who aren't familiar with cross-entropy this was actually a a formulation introduced by Claude Shannon here at MIT during his master's thesis as well and this was about 50 years ago it's still being used very prevalently today and the idea is it just again compares how different these two distributions are so you have a distribution of how how likely you think the student is going to pass and you have the true distribution of if the student passed or not you can compare the difference between those two distributions and that tells you the loss that the network incurs on that 
example now let's assume that instead of a classification problem we have a regression problem where instead of predicting if you're going to pass or fail to class you want to predict the final grade that you're going to get so now it's not a yes/no answer problem anymore but instead it's a what's the grade I'm going to get what's the number what so it's it's a full range of numbers that are possible now and now we might want to use a different type of loss for this different type of problem and in this case we can do what's called a mean squared error loss so we take the actual prediction we take the the sorry excuse me we take the prediction of the network we take the actual true final grade that the student got we subtract them we take their squared error and we say that that's the mean squared error that's the loss that the network should should try to optimize and try to minimize so ok so now that we have all this information with the loss function and how to actually quantify the error of the neural network let's take this and understand how to train train our model to actually find those weights that it needs to to use for its prediction so W is what we want to find out W is the set of weights and we want to find the optimal set of weights that tries to minimize this total loss over our entire test set so our test set is this example data set that we want to evaluate our model on so in the class example the test set is you so you want to understand how likely you are to pass this class you're the test set now what this means is that we want to find the W's that minimize that total loss function which we call as the objective function J of W now remember that W is just a aggregation or a collection of all of the individual w's from all of your weights so here this is just a way for me to express this in a clean notation but W is a whole set of numbers it's not just a single number and you want to find this all of the W's you want to find the value of each of those weights such that you can minimize this entire loss function it's a very complicated problem and remember that our loss function is just a simple function in terms of those weights so if we plot in the case again of a two-dimensional weight problem so one of the weights is on the x-axis one of the weights is on this axis and on the z axis we have the loss so for any value of w we can see what the loss would be at that point now what do we want to do we want to find the place on this landscape what are the values of W that we get the minimum loss okay so what we can do is we can just pick a random W pick a random place on this this landscape to start with and from this random place let's try to understand how the landscape is changing what's the slope of the landscape we can take the gradient of the loss with respect to each of these weights to understand the direction of maximum ascent okay that's what the gradient tells us now that we know which way is up we can take a step in the direction that's down so we know which way is up we reverse the sign so now we start heading downhill and we can move towards that lowest point now we just keep repeating this process over and over again until we've converged to a local minimum now we can summarize this algorithm which is known as gradient descent because you're taking a gradient and you're descending down down that landscape by starting to initialize our rates wait randomly we compute the gradient DJ with respect to all of our weights then we update our weights in the opposite 
direction of that gradient and take a small step which we call here ADA of that gradient and this is referred to as the learning rate and we'll talk a little bit more about that later but ADA is just a scalar number that determines how much of a step you want to take at each iteration how strongly or aggressively do you want to step towards that gradient in code the picture looks very similar so to implement gradient descent is just a few lines of code just like the pseudocode you can initialize your weights randomly in the first line you can compute your loss with respect to those gradients and with respect to those predictions and your data given that gradient you just update your weights in the opposite direction of that event of that vector right now the magic line here is actually how do you compute that gradient and that's something I haven't told you and that's something it's not easy at all so the question is given a loss and given all of our weights in our network how do we know which way is good which way is a good place to move given all of this information and I never told you about that but that's a process called back propagation and let's talk about a very simple example of how we can actually derive back propagation using elementary calculus so we'll start with a very simple network with only one hidden neuron and one output this is probably the simplest neural network that you can create you can't really get smaller than this computing the gradient of our loss with respect to W to here which is that second way between the hidden state and our output can tell us how much a small change in W 2 will impact our loss so that's what the gradient tells us right if we change W 2 in the differential different like a very minor manner how does our loss change does it go up or down how does it change and by how much really so that's the gradient that we care about the gradient of our loss with respect to W 2 now to evaluate this we can just apply the chain rule in calculus so we can split this up into the gradient of our loss with respect to our output Y multiplied by the gradient of our walk or output Y with respect to W 2 now if we want to repeat this process for a different way in the neural network let's say now W 1 not W 2 now we replace W 1 on both sides we also apply the chain rule but now you're going to notice that the gradient of Y with respect to W 1 is also not directly computable we have to apply the chain rule again to evaluate this so let's apply the chain rule again we can break that second term up into with respect to now the the state Z ok and using that we can kind of back propagate all of these gradients from the output all the way back to the input that allows our error signal to really propagate from output to input and allows these gradients to be computed in practice now a lot of this is not really important or excuse me it's not as crucial that you understand the nitty-gritty math here because in a lot of popular deep learning frameworks we have what's called automatic differentiation which does all of this back propagation for you under the hood and you never even see it which is incredible it made training neural networks so much easier you don't have to implement back propagation anymore but it's still important to understand how these work at the foundation which is why we're going through it now ok obviously then you repeat this for every single way in the network here we showed it for just W 1 and W 2 which is every single way in this network but if you 
have more you can just repeat it again keep applying the chain rule from output to input to compute this ok and that's the back prop algorithm in theory very simple it's just an application of the chain rule in essence but now let's touch on some of the insights from training and how you can use the back prop algorithm to train these networks in practice optimization of neural networks is incredibly tough in practice so it's not as simple as the picture I showed you on the colorful one on the previous slide here's an illustration from a paper that came out about two or three years ago now where the authors tried to visualize the landscape of a of a neural network with millions of parameters but they collapsed that down onto just two-dimensional space so that we can visualize it and you can see that the landscape is incredibly complex it's not easy there are many local minima where the gradient descent algorithm could get stuck into and applying gradient descent in practice in these type of environments which is very standard in neural networks can be a huge challenge now we're called the update equation that we defined previously with gradient descent this is that same equation we're going to update our weights in the direction in the opposite direction of our gradient I didn't talk too much about this parameter ADA I pointed it out this is the learning rate it determines how much of a step we should take in the direction of that gradient and in practice setting this learning rate can have a huge impact in performance so if you set that learning rate to small that means that you're not really trusting your gradient on each step so if ADA is super tiny that means on each time each step you're only going to move a little bit towards in the opposite direction of your gradient just in little small increments and what can happen then is you can get stuck in these local minima because you're not being as aggressive as you should be to escape them now if you set the learning rate to large you can actually overshoot completely and diverge which is even more undesirable so setting the learning rate can be very challenging in practice you want to pick a learning rate that's large enough such that you avoid the local minima but small offs such that you still converge in practice now the question that you're all probably asking is how do we set the learning rate then well one option is that you can just try a bunch of learning rates and see what works best another option is to do something a little bit more clever and see if we can try to have an adaptive learning rate that changes with respect to our lost landscape maybe it changes with respect to how fast the learning is happening or a range of other ideas within the network optimization scheme itself this means that the learning rate is no longer fixed but it can now increase or decrease throughout training so as training progressive your learning rate may speed up you may take more aggressive steps you may take smaller steps as you get closer to the local minima so that you really converge on that point and there are many options here of how you might want to design this adaptive algorithm and this has been a huge or a widely studied field in optimization theory for machine learning and deep learning and there have been many published papers and implementations within tensor flow on these different types of adaptive learning rate algorithms so SGD is just that vanilla gradient descent that I showed you before that's the first one all of the others 
are all adaptive learning rates which means that they change their learning rate during training itself so they can increase or decrease depending on how the optimization is going and during your labs we really encourage you again to try out some of these different optimization schemes see what works what doesn't work a lot of it is problem dependent there are some heuristics that you can you can get but we want you to really gain those heuristics yourselves through the course of the labs it's part of building character okay so let's put this all together from the beginning we can define our model which is defined as this sequential wrapper inside of this sequential wrapper we have all of our layers all of these layers are composed of perceptrons or single neurons which we saw earlier the second line defines our optimizer which we saw in the previous slide this can be SGD it can also be any of those adaptive learning rates that we saw before now what we want to do is during our training loop it's very it's the same stories again as before nothing's changing here we forward pass all of our inputs through that model we get our predictions using those predictions we can evaluate them and compute our loss our loss tells us how wrong our network was on that iteration it also tells us how we can compute the gradients and how we can change all of the weights in the network to improve in the future and then the final line there takes those gradients and actually allows our optimizer to update the weights and the trainable variables such that on the next iteration they do a little bit better and over time if you keep looping this will converge and hopefully you should fit your data no now I want to continue to talk about some tips for training these networks in practice and focus on a very powerful idea of batching your data into mini batches so to do this let's revisit the gradient descent algorithm this gradient is actually very computationally expensive to compute in practice so using the backprop algorithm is a very expensive idea and practice so what we want to do is actually not compute this over all of the data points but actually computed over just a single data point in the data set and most real-life applications it's not actually feasible to compute on your entire data set at every iteration it's just too much data so instead we pick a single point randomly we compute our gradient with respect to that point and then on the next iteration we pick a different point and we can get a rough estimate of our gradient at each step right so instead of using all of our data now we just pick a single point I we compute our gradient with respect to that single point I and what's a middle ground here so the downside of using a single point is that it's going to be very noisy the downside of using all of the points is that it's too computationally expensive if there's some middle ground that we can have in between so that middle ground is actually just very simple you instead of taking one point and instead taking all of the points let take a mini batch of points so maybe something on the order of 10 20 30 100 maybe depending on how rough or accurate you want that approximation of your gradient to be and how much you want to trade off speed and computational efficiency now the true gradient is just obtained by averaging the gradient from each of those B points so B is the size of your batch in this case now since B is normally not that large like I said maybe on the order of tens to a hundreds this is 
much faster to compute than full gradient descent and much more accurate than stochastic gradient descent because it's using more than one point more than one estimate now this increase in gradient accuracy estimation actually allows us to converge to our target much quicker because it means that our gradients are more accurate in practice it also means that we can increase our learning rate and trust each update more so if we're very noisy in our gradient estimation we probably want to lower our learning rate a little more so we don't fully step in the wrong direction if we're not totally confident with that gradient if we have a larger batch of gradient of data to they are gradients with we can trust that learning great a little more increase it so that it steps it more aggressively in that direction what this means also is that we can now massively paralyze this computation because we can split up batches on multiple GPUs or multiple computers even to achieve even more significant speed ups with this training process now the last topic I want to address is that of overfitting and this is also known as the problem of generalization in machine learning and it's actually not unique to just deep learning but it's a fundamental problem of all of machine learning now ideally in machine learning we want a model that will approximate or estimate our data or accurately describes our data let's say like that said differently we want to build models that can learn representations from our training data that's still generalize to unseen test data now assume that you want to build a line that best describes these points you can see on the on the screen under fitting describes if we if our model does not describe the state of complexity of this problem or if we can't really capture the true complexity of this problem while overfitting on the right starts to memorize certain aspects of our training data and this is also not desirable we want the middle ground which ideally we end up with a model in the middle that is not too complex to memorize all of our training data but also one that will continue to generalize when it sees new data so to address this problem of regularization in neural network specifically let's talk about a technique of regularization which is another way that we can deal with this and what this is doing is it's trying to discourage complex information from being learned so we want to eliminate the model from actually learning to memorize the training data we don't want to learn like very specific pinpoints of the training data that don't generalize well to test data now as we've seen before this is actually crucial for our models to be able to generalize to our test data so this is very important the most popular regularization technique deep learning is this very basic idea of drop out now the idea of drop out is well actually let's start with by revisiting this picture of a neural network that we had introduced previously and drop out during training we randomly set some of these activations of the hidden neurons to zero with some probability so I'd say our probability is 0.5 we're randomly going to set the activations to 0.5 with probability of 0.5 to some of our hidden neurons to 0 the idea is extremely powerful because it allows the network to lower its capacity it also makes it such that the network can't build these memorization channels through the network where it tries to just remember the data because on every iteration 50% of that data is going to be or 50% of that 
memorization or memory is going to be wiped out so it's going to be forced to to not only generalize better but it's going to be forced to have multiple channels through the network and build a more robust representation of its prediction now we just repeat this on every iteration so on the first iteration we dropped out one 50% of the nodes on the next iteration we can drop out a different randomly sampled 50% which may include some of the previously sampled nodes as well and this will allow the network to generalize better to new test data the second regularization technique that we'll talk about is the notion of early stopping so what I want to do here is just talk about two lines so during training which is the x-axis here we have two lines the y-axis is our loss curve the first line is our training loss so that's the green line the green line tells us how our training data how well our model is fitting to our training data we expect this to be lower than the second line which is our testing data so usually we expect to be doing better on our training data than our testing data as we train and as this line moves forward into the future both of these lines should kind of decrease go down because we're optimizing the network we're improving its performance eventually though there becomes a point where the training data starts to diverge from the testing data now what happens is that the training day should always continue to fit or the model should always continue to fit the training data because it's still seeing all of the training data it's not being penalized from that except for maybe if you drop out or other means but the testing data it's not seeing so at some point the network is going to start to do better on its training data than its testing data and what this means is basically that the network is starting to memorize some of the training data and that's what you don't want so what we can do is well we can perform early stopping or we can identify this point this inflection point where the test data starts to increase and diverge from the training data so we can stop the network early and make sure that our test accuracy is as minimum as possible and of course if we actually look at on the side of this line if we look at on the left side that's where a model is under fit so we haven't reached the true capacity of our model yet so we'd want to keep training if we didn't stop yet if we did stop already and on the right side is where we've over fit where we've passed that early stopping point and we need to like basically we've started to memorize some of our training did and that's when we've gone too far I'll conclude this lecture by just summarizing three main points that we've covered so far first we've learned about the fundamentals of neural networks which is a single neuron or a perceptron we've learned about stacking and composing these perceptrons together to form complex hierarchical representations and how we can mathematically optimize these networks using a technique called back propagation using their loss and finally we address the practical side of training these models such as mini batching regularization and adaptive learning rates as well with that I'll finish up I can take a couple questions and then we'll move on to office lecture on deep sequential modeling I'll take any like maybe a couple questions if there are any now thank you |
MIT_6S191_Introduction_to_Deep_Learning | MIT_6S191_2021_Convolutional_Neural_Networks.txt | Hi Everyone and welcome back to MIT 6.S191! Today we're going to be talking about one of my favorite topics in this course and that's how we can give machines a sense of vision now vision is one of the most important human senses I believe sighted people rely on vision quite a lot from everything from navigating in the world to recognizing and manipulating objects to interpreting facial expressions and understanding very complex human emotions i think it's safe to say that vision is a huge part of everyday human life and today we're going to learn about how we can use deep learning to build very powerful computer vision systems and actually predict what is where by only looking and specifically looking at only raw visual inputs i like to think that this is a very super simple definition of what vision at its core really means but actually vision is so much more than simply understanding what an image is of it means not just what the image is of but also understanding where the objects in the scene are and really predicting and anticipating forward in the future what's going to happen next take this scene for example we can build computer vision algorithms that can identify objects in the scene such as this yellow taxi or maybe even this white truck on the side of the road but what we need to understand on a different level is what is actually going to be required to achieve true vision where are all of these objects going uh for that we should actually focus probably more on the yellow taxi than on the white truck because there are some subtle cues in this image that you can probably pick up on that lead us to believe that probably this white truck is parked on the side of the road it's stationary and probably won't be moving in the future at least for the time that we're observing the scene the yellow taxi on the other hand is even though it's also not moving it is much more likely to be stationary as a result of the pedestrians that are crossing in front of it and that's something that is very subtle but can actually be reasoned about very effectively by our brains and humans take this for granted but this is an extraordinarily challenging problem in the real world and since in the real world building true vision algorithms can require reasoning about all of these different components not just in the foreground but also there are some very important cues that we can pick up in the background like this light this uh road light as well as some obstacles in the far distance as well and building these vision algorithms really does require an understanding of all of these very subtle details now deep learning is bringing forward an incredible revolution or evolution as well of computer vision algorithms and applications ranging from allowing robots to use visual cues to perform things like navigation and these algorithms that you're going to learn about today in this class have become so mainstreamed and so compressed that they are all fitting and running in each of our pockets in our telephones for processing photos and videos and detecting faces for greater convenience we're also seeing some extraordinarily exciting applications of vision in biology and medicine for picking up on extremely subtle cues and detecting things like cancer as well as in the field of autonomous driving and finally in a few slides i'll share a very inspiring story of how the algorithms that you're going to learn about today 
are also being used for accessibility to aid the visually impaired now deep learning has taken computer vision especially computer vision by storm because of its ability to learn directly from the raw image inputs and learn to do feature extraction only through observation of a ton of data now one example of that that is really prevalent in the computer vision field is of facial detection and facial recognition on the top left or on the left hand side you can actually see an icon of a human eye which pictorially i'm using to represent images that we perceive and we can also pass through a neural network for predicting these facial features now deep learning has transformed this field because it allows the creator of the machine learning or the deep learning algorithm to easily swap out the end task given enough data to learn this neural network in the middle between the vision and the task and try and solve it so here we're performing an end task of facial detection but just equivalently that end task could be in the context of autonomous driving here where we take an image as an input which you can see actually in the bottom right hand corner and we try to directly learn the steering control for the output and actually learn directly from this one observation of the scene where the car should control so what is the steering wheel that should execute and this is done completely end to end the entire control system here of this vehicle is a single neural network learned entirely from data now this is very very different than the majority of other self-driving car companies like you'll see with waymo and tesla et cetera and we'll talk more about this later but i actually wanted to share this one clip with you because this is one of the autonomous vehicles that we've been building in our lab and here in csail that i'm part of and we'll see more about that later in the lecture as well we're seeing like i mentioned a lot of applications in medicine and healthcare where we can take these raw images and scans of patients and learn to detect things like breast cancer skin cancer and now most recently taking scans of patients lungs to detect covid19 finally i want to share this inspiring story of how computer vision is being used to help the visually impaired so in this project actually researchers built a deep learning enabled device that can detect a trail for running and provide audible feedback to the visually impaired user such that they can run and now to demonstrate this let me just share this very brief video the machine learning algorithm that we have detects the line and can tell whether the line is to the runners left right or center we can then send signals to the runner that guides them left and right based on their positioning the first time we went out we didn't even know if sound would be enough to guide me so it's a sort of that beta testing process that you go through from human eyes it's very obvious it's very obvious to recognize the line teaching a machine learning model to do that is not that easy you step left and right as you're running so there's like a shake to the line left and right as soon as you start going outdoors now the light is a lot more variable tree shadows falling leaves and also the line on the ground can be very narrow and there may be only a few pixels for the computer vision model to recognize there was no tether there was no stick there was no furry dog it was just being with yourself ah that's the first time i run loading in decades so these are often 
tasks that we as humans take for granted but for a computer it's really remarkable to see how deep learning is being applied uh to some of these problems focused on really doing good and just helping people here in this case the visually impaired a man who has never run without his uh guide dog before is now able to run independently through the through the trails with the aid of this computer vision system and like i said we often take these tasks for granted but because it's so easy for each sighted individual for us to do them routinely but we can actually train computers to do them as well and in order to do that though we need to ask ourselves some very foundational questions specifically stemming from how we can build a computer that can quote unquote c and specifically how does a computer process an image let's use an image as our base example of site to a computer so far so to a computer images are just numbers there are two dimensional lists of numbers suppose we have a picture here this is of abraham lincoln it's just made up of what are called pixels each of those numbers can be represented by what's called a pixel now a pixel is simply a number like i said here represented by a range of either zero to one or in 0 to 255 and since this is a grayscale image each of these pixels is just one number if you have a color image you would represent it by three numbers a red a green and a blue channel rgb now what does the computer see so we can represent this image as a two-dimensional matrix of these numbers one number for each pixel in the image and this is it this is how a computer sees an image like i said if we have a rgb image not a a grayscale image we can represent this by a three-dimensional array now we have three two-dimensional arrays stacked on top of each other one of those two dimensional arrays corresponds to the red channel one for the green one for the blue representing this rgb image and now we have a way to represent images to computers and we can start to think about what types of computer vision algorithms we can perform with this so there are very there are two very common types of learning tasks and those are like we saw in the first and the second classes those are one regression and those are also classification tasks in regression tasks our output takes the form of a continuous value and in classification it takes a single class label so let's consider first the problem of classification we want to predict a label for each image so for example let's say we have a database of all u.s precedents and we want to build a classification pipeline to tell us which precedent this image is of so we feed this image that we can see on the left hand side to our model and we wanted to output the probability that this image is of any of these particular precedents that this database consists of in order to classify these images correctly though our pipeline needs to be able to tell what is actually unique about a picture of abraham lincoln versus a picture of any other president like george washington or jefferson or obama another way i think about this uh these differences between these images and the image classification pipeline is at a high level in terms of the features that are really characteristics of that particular class so for example what are the features that define abraham lincoln now classification is simply done by detecting the features in that given image so if the features for a particular class are present in the image then we can predict with pretty high 
confidence that that class is occurring with a high probability so if we're building an image classic classification pipeline our model needs to know what are the features are what they are and two it needs to be able to detect those features in a brand new image so for example if we want to detect human faces some features that we might want to be able to identify would be noses eyes and mouths whereas like if we want to detect cars we might be looking at certain things in the image like wheels license plates and headlights and the same for houses and doors and windows and steps these are all examples of features for the larger object categories now one way to do this and solve this problem is actually to leverage knowledge about a particular field say let's say human faces so if we want to detect human faces we could manually define in images what we believe those features are and actually use the results of our detection algorithm for classification but there's actually a huge problem to this type of approach and that is that images are just 3d arrays of numbers of brightness values and that each image can have a ton of variation and this includes things like occlusions in the scene there could also be variations in illumination the lighting conditions as well as you could even think of intra class variation variation within the same class of images our classification pipeline whatever we're building really needs to be invariant to all of these types of variations but it still needs to be sensitive to picking out the different inter-class variations so being able to distinguish a feature that is unique to this class in comparison to features or variations of that feature that are present within the class now even though our pipeline could be used could use features that we as humans define that is if a human was to come into this problem knowing something about the problem a priori they could define or manually extract and break down what features they want to detect for this specific task even if we could do that due to the incredible variability of the scene of image data in general the detection of these features is still an extremely challenging problem in practice because your detection algorithm needs to be invariant to all of these different variations so instead of actually manually defining these how can we do better and what we actually want to do is be able to extract features and detect their presence in images automatically in a hierarchical fashion and this should remind you back to the first lecture when we talked about hierarchy being a core component of deep learning and we can use neural network-based approaches to learn these visual features directly from data and to learn a hierarchy of features to construct a representation of the image internal to our network so again like we saw in the first lecture we can detect these low-level features and composing them together to build these mid-level features and then in later layers these higher level features to really perform the task of interest so neural networks will allow us to learn these hierarchies of visual features from data if we construct them cleverly so this will require us to use some different architectures than what we have seen so far in the class namely architectures from the first lecture with feedforward dense layers and in the second lecture recurrent layers for handling sequential data this lecture will focus on yet another type of way that we can extract features specifically focusing on the visual 
domain so let's recap what we learned in lecture one so in lecture one we learned about these fully connected neural networks also called dense neural networks where you can have multiple hidden layers stacked on top of each other and each neuron in each hidden layer is connected to every neuron in the previous layer now let's say we want to use a fully connected network to perform image classification and we're going to try and motivate the the use of something better than this by first starting with what we already know and we'll see the limitations of this so in this case remember our input is this two-dimensional image it's a vector a two-dimensional vector but it can be collapsed into a one-dimensional vector if you just stack all of those dimensions on top of each other of pixel values and what we're going to do is feed in that vector of pixel values to our hidden layer connected to all neurons in the next layer now here you should already appreciate something and that is that all spatial information that we had in this image is automatically gone it's lost because now since we have flattened this two-dimensional image into one dimension we have now basically removed any spatial information that we previously had by the next layer and our network now has to relearn all of that uh very important spatial information for example that one pixel is closer to the its neighboring pixel that's something very important in our input but it's lost immediately in a fully connected layer so the question is how can we build some structure into our model so that in order so that we can actually inform the learning process and provide some prior information to the model and help it learn this very complicated and large input image so to do this let's keep our representation of our image our 2d image as an array a two-dimensional array of pixel values let's not collapse it down into one dimension now one way that we can use the spatial structure would be to actually connect patches of our input not the whole input but just patches of the input two neurons in the hidden layer so before everything was connected from the input layer to the hidden layer but now we're just gonna connect only things that are within a single patch to the next neuron in the next layer now that is really to say that each neuron only sees so if we look at this output neuron this neuron is only going to see the values coming from the patch that precedes it this will not only reduce the number of weights in our model but it's also going to allow us to leverage the fact that in an image spatially close pixels are likely to be somewhat related and correlated to each other and that's a fact that we should really take into account so notice how the only that only a small region of the input layer influences this output neuron and that's because of this spatially connected idea that we want to preserve as part of this architecture so to define connections across the whole input now we can apply the same principle of connecting patches in our input layer to single neurons in the subsequent layer and we can basically do this by sliding that patch across the input image and for each time we slide it we're going to have a new output neuron in the subsequent layer now this way we can actually take into account some of the spatial structure that i'm talking about inherent to our input but remember that our ultimate task is not only to preserve spatial structure but to actually learn the visual features and we do this by weighting the 
connections between the patches and the neurons so we can detect particular features so that each patch is going to try to perform that detection of the feature so now we ask ourselves how can we rate this patch such that we can detect those features well in practice there's an operation called a convolution and we'll first think about this at a high level suppose we have a 4x4 patch or a filter which will consist of 16 weights we're going to apply this same filter to by four patches in the input and use the result of that operation to define the state of the neuron in the next layer so the neuron in the next layer the output that single neuron is going to be defined by applying this patch with a filter with of equal size and learned weights we're then going to shift that patch over let's say in this case by two pixels we have here to grab the next patch and thereby compute the next output neuron now this is how we can think about convolutions at a very high level but you're probably wondering here well how does the convolution operator actually allow us to extract features and i want to make this really concrete by walking through a very simple example so suppose we want to classify the letter x in a set of black and white images of letters where black is equal to negative one and white is equal to positive one now to classify it's clearly not possible to simply compare the two images the two matrices on top of each other and say are they equal because we also want to be classifying this x uh no matter if it has some slight deformations if it's shifted or if it's uh enlarged rotated or deformed we need we want to build a classifier that's a little bit robust to all of these changes so how can we do that we want to detect the features that define an x so instead we want our model to basically compare images of a piece of an x piece by piece and the really important pieces that it should look for are exactly what we've been calling the features if our model can find those important features those rough features that define the x in the same positions roughly the same positions then it can get a lot better at understanding the similarity between different examples of x even in the presence of these types of deformities so let's suppose each feature is like a mini image it's a patch right it's also a small array a small two-dimensional array of values and we'll use these filters to pick up on the features common to the x's in the case of this x for example the filters we might want to pay attention to might represent things like the diagonal lines on the edge as well as the crossing points you can see in the second patch here so we'll probably want to capture these features in the arms and the center of the x in order to detect all of these different variations so note that these smaller matrices of filters like we can see on the the top row here these represent the filters of weights that we're going to use as part of our convolution operation in order to detect the corresponding features in the input image so all that's left for us to define is actually how this convolution operation actually looks like and how it's able to pick up on these features given each of these in this case three filters so how can it detect given a filter where this filter is occurring or where this feature is occurring rather in this image and that is exactly what the operation of convolution is all about convolution the idea of convolution is to preserve the spatial relationship between pixels by learning image 
features in small little patches of image data now to do this we need to perform an element-wise multiplication between the filter matrix and the patch of the input image of same dimension so if we have a patch of 3x3 we're going to compare that to an input filter or our filter which is also of size 3x3 with learned weights so in this case our filter which you can see on the top left all of its entries are of either positive one or one or negative one and when we multiply this filter by the corresponding green input image patch and we element wise multiply we can actually see the result in this matrix so multiplying all of the positive ones by positive ones we'll get a positive one multiplying a negative one by a negative one will also get a positive one so the result of all of our element-wise multiplications is going to be a three by three matrix of all ones now the next step in as part of the convolution operation is to add all of those element-wise multiplications together so the result here after we add those outputs is going to be 9. so what this means now actually so actually before we get to that let me start with another very brief example suppose we want to compute the convolution now not of a very large image but this is just of a five by five image our filter here is three by three so we can slide this three by three filter over the entirety of our input image and performing this element-wise multiplication and then adding the outputs let's see what this looks like so let's start by sliding this filter over the top left hand side of our input we can element wise multiply the entries of this patch of this filter with this patch and then add them together and for this part this three by three filter is placed on the top left corner of this image element-wise multiply add and we get this resulting output of this neuron to be four and we can slide this filter over one one spot by one spot to the next patch and repeat the results in the second entry now would be corresponding to the activation of this filter applied to this part of the image in this case three and we can continue this over the entirety of our image until the end when we have completely filled up this activation or feature map and this feature map really tells us where in the input image was activated by this filter so for example wherever we see this pattern conveyed in the original input image that's where this feature map is going to have the highest value and that's where we need to actually activate maximally now that we've gone through the mechanism of the convolution operation let's see how different filters can be used to produce feature maps so picture a woman of a woman a picture this picture of a woman's face this woman's name is lena and the output of applying these three convolutional filters so you can see the three filters that we're considering on the bottom right hand corner of each image by simply changing the weights of these filters each filter here has a different weight we can learn to detect very different features in that image so we can learn to sharpen the image by applying this very specific type of sharpening filter we can learn to detect edges or we can learn to detect very strong edges in this image simply by modifying these filters so these filters are not learned filters these are constructed filters and there's been a ton of research historically about developing hand engineering these filters but what convolutional neural networks learn to want to do is actually to learn the weights 
defining these filters so the network will learn what kind of features it needs to detect in the image doesn't need to do edge detection or strong edge detection or does it need to detect certain types of edges curves certain types of geometric objects etc what are the features that it needs to extract from this image and by learning the convolutional filters it's able to do that so i hope now you can actually appreciate how convolution allows us to capitalize on very important spatial structure and to use sets of weights to extract very local features in the image and to very easily detect different features by simply using different sets of weights and different filters now these concepts of preserving spatial structure and local feature extraction using the convolutional operation are actually core to the convolutional neural networks that are used for computer vision tasks and that's exactly what i want to dive into next now that we've gotten the operation the mathematical foundation of convolutions under our belts we can start to think about how we can utilize this operation this operation of convolutions to actually build neural networks for computer vision tasks and tie this whole thing in to this paradigm of learning that we've been exposed to in the first couple lectures now these networks aptly are named convolutional neural networks very appropriately and first we'll take a look at a cnn or convolutional neural network designed specifically for the task of image classification so how can you use cnns for classification let's consider a simple cnn designed for the goal here to learn features directly from the image data and we can use these learned features to map these onto a classification task for these images now there are three main components and operations that are core to a cnn the first part is what we've already gotten some exposure to in the first part of this lecture and that is the convolution operation and that allows us like we saw earlier to generate these feature maps and detect features in our image the second part is applying a non-linearity and we saw the importance of nonlinearities in the first and the second lecture in order to help us deal with these features that we extract being highly non-linear thirdly we need to apply some sort of pooling operation this is another word for a down sampling operation and this allows us to scale down the size of each feature map now the computation of a class of scores which is what we're doing when we define an image classification task is actually performed using these features that we obtain through convolution non-linearity and pooling and then passing those learned features into a fully connected network or a dense layer like we learned about in the first part of the class in the first lecture and we can train this model end to end from image input to class prediction output using fully connected layers and convolutional layers end to end where we learn as part of the convolutional layers the sets of weights of the filters for each convolutional layer and as well as the weights that define these fully connected layers that actually perform our classification task in the end and we'll go through each one of these operations in a bit more detail to really break down the basics and the architecture of these convolutional neural networks so first we'll consider the convolution operation of a cnn and as before each neuron in the hidden layer will compute a weighted sum of each of its inputs like we saw in the dense 
layers we'll also need to add on a bias to allow us to shift the activation function and apply and activate it with some non-linearity so that we can handle non-linear data relationships now what's really special here is that the local connectivity is preserved each neuron in the hidden layer you can see in the middle only sees a very specific patch of its inputs it does not see the entire input neurons like it would have if it was a fully connected layer but no in this case each neuron output observes only a very local connected patch as input we take a weighted sum of those patches we compute that weighted sum we apply a bias and we apply and activate it with a non-linear activation function and that's the feature map that we're left with at the end of a convolutional layer we can now define this actual operation more concretely using a mathematical equation here we're left with a 4x4 filter matrix and for each neuron in the hidden layer its inputs are those neurons in the patch from the previous layer we apply this set of weights wi j in this case like i said it's a four by four filter and we do this element-wise multiplication of every element in w multiplied by the corresponding elements in the input x we add the bias and we activate it with this non-linearity remember our element-wise multiplication and addition is exactly that convolutional operation that we talked about earlier so if you look up the definition of what convolution means it is actually that exactly it's element-wise multiplication and then a summation of all of the results and this actually defines also how convolutional layers are connected to these ideas but with this single convolutional layer we can how can we have multiple filters so all we saw in the previous slide is how we can take this input image and learn a single feature map but in reality there are many types of features in our image how can we use convolutional layers to learn a stack or many different types of features that could be useful for performing our type of task how can we use this to do multiple feature extraction now the output layer is still convolution but now it has a volume dimension where the height and the width are spatial dimensions dependent upon the dimensions of the input layer the dimensions of the filter the stride how how much we're skipping on each each time that we apply the filter but we also need to think about the the connections of the neurons in these layers in terms of their what's called receptive field the locations of their input in the in the in the model in in the path of the model that they're connected to now these parameters actually define the spatial arrangement of how the neurons are connected in the convolutional layers and how those connections are really defined so the output of a convolutional layer in this case will have this volume dimension so instead of having one filter map that we slide along our image now we're going to have a volume of filters each filter is going to be slid across the image and compute this convolution operation piece by piece for each filter the result of each convolution operation defines the feature map that that convolution that that filter will activate maximally so now we're well on our way to actually defining what a cnn is and the next step would actually be to apply that non-linearity after each convolution operation we need to actually apply this non-linear activation function to the output volume of that layer and this is very very similar like i said in the first and 
we saw also in the second lecture and we do this because image data is highly nonlinear a common example in the image domain is to use an activation function of relu which is the rectified linear unit this is a pixel-wise operation that replaces all negative values with zero and keeps all positive values with whatever their value was we can think of this really as a thresholding operation so anything less than zero gets thresholded to zero negative values indicate negative detection of a convolution but this nonlinearity actually kind of uh clamps that to some sense and that is a nonlinear operation so it does satisfy our ability to learn non-linear dynamics as part of our neural network model so the next operation in convolutional neural networks is that of pooling pooling is an operation that is commonly used to reduce the dimensionality of our inputs and of our feature maps while still preserving spatial invariants now a common technique and a common type of pooling that is commonly used in practice is called max pooling as shown in this example max pooling is actually super simple and intuitive uh it's simply taking the maximum over these two by two filters in our patches and sliding that patch over our input very similar to convolutions but now instead of applying a element-wise multiplication and summation we're just simply going to take the maximum of that patch so in this case as we feed over this two by two patch of filters and striding that patch by a factor of two across the image we can actually take the maximum of those two by two pixels in our input and that gets propagated and activated to the next neuron now i encourage all of you to really think about some other ways that we can perform this type of pooling while still making sure that we downsample and preserve spatial invariants taking the maximum over that patch is one idea a very common alternative is also taking the average that's called mean pooling taking the average you can think of actually represents a very smooth way to perform the pooling operation because you're not just taking a maximum which can be subject to maybe outliers but you're averaging it or also so you will get a smoother result in your output layer but they both have their advantages and disadvantages so these are three operations three key operations of a convolutional neural network and i think now we're actually ready to really put all of these together and start to construct our first convolutional neural network end to end and with cnns just to remind you once again we can layer these operations the whole point of this is that we want to learn this hierarchy of features present in the image data starting from the low-level features composing those together to mid-level features and then again to high-level features that can be used to accomplish our task now a cnn built for image classification can be broken down into two parts first the feature learning part where we actually try to learn the features in our input image that can be used to perform our specific task that feature learning part is actually done through those pieces that we've been seeing so far in this lecture the convolution the non-linearity and the pooling to preserve the spatial invariance now the second part the convolutional layers and pooling provide output those the output excuse me of the first part is those high-level features of the input now the second part is actually using those features to perform our classification or whatever our task is in this case the task 
is to output the class probabilities that are present in the input image so we feed those outputted features into a fully connected or dense neural network to perform the classification we can do this now and we don't mind about losing spatial invariance because we've already down sampled our image so much that it's not really even an image anymore it's actually closer to a vector of numbers and we can directly apply our dense neural network to that vector of numbers it's also much lower dimensional now and we can output a class of probabilities using a function called the softmax whose output actually represents a categorical probability distribution it's summed uh equal to one so it does make it a proper categorical distribution and it is each element in this is strictly between zero and one so it's all positive and it does sum to one so it makes it very well suited for the second part if your task is image classification so now let's put this all together what does a end-to-end convolutional neural network look like we start by defining our feature extraction head which starts with a convolutional layer with 32 feature maps a filter size of 3x3 pixels and we downsample this using a max pooling operation with a pooling size of 2 and a stride of 2. this is very exactly the same as what we saw when we were first introducing the convolution operation next we feed these 32 feature maps into the next set of the convolutional convolutional and pooling layers now we're increasing this from 32 feature maps to 64 feature maps and still down scaling our image as a result so we're down scaling the image but we're increasing the amount of features that we're detecting and that allows us to actually expand ourselves in this dimensional space while down sampling the spatial information the irrelevant spatial information now finally now that we've done this feature extraction through only two convolutional layers in this case we can flatten all of this information down into a single vector and feed it into our dense layers and predict these final 10 outputs and note here that we're using the activation function of softmax to make sure that these outputs are a categorical distribution okay awesome so so far we've talked about how we can use cnns for image classification tasks this architecture is actually so powerful because it extends to a number of different tasks not just image classification and the reason for that is that you can really take this feature extraction head this feature learning part and you can put onto the second part so many different end networks whatever and network you'd like to use you can really think of this first part as a feature learning part and the second part as your task learning part now what that task is is entirely up to you and what you desire so and that's that's really what makes these networks incredibly powerful so for example we may want to look at different image classification domains we can introduce new architectures for specifically things like image and object detection semantic segmentation and even things like image captioning you can use this as an input to some of the sequential networks that we saw in lecture two even so let's look at and dive a bit deeper into each of these different types of tasks that we could use are convolutional neural networks for in the case of classification for example there is a significant impact in medicine and healthcare when deep learning models are actually being applied to the analysis of entire inputs of medical 
image scans now this is an example of a paper that was published in nature for actually demonstrating that a cnn can outperform expert radiologists at detecting breast cancer directly from mammogram images instead of giving a binary prediction of what an output is though cancer or not cancer or what type of objects for example in this image we may say that this image is an image of a taxi we may want to ask our neural network to do something a bit more fine resolution and tell us for this image can you predict what the objects are and actually draw a bounding box localize this image or localize this object within our image this is a much harder problem since there may be many objects in our scene and they may be overlapping with each other partially occluded etc so not only do we want to localize the object we want to also perform classification on that object so it's actually harder than simply the classification task because we still have to do classification but we also have to detect where all of these objects are in addition to classifying each of those objects now our network needs to also be flexible and actually and be able to infer not just potentially one object but a dynamic number of objects in the scene now if we if we have a scene that only has one taxi it should output a bounding box over just that single taxi and the bounding box should tell us the xy position of one of the corners and maybe the height and the width of that bounding box as well that defines our bounding box on the other hand if our scene contains many different types of objects potentially even of different types of classes we want our network to be able to output many different outputs as well and be flexible to that type of differences in our input even with one single network so our network should not be constrained to only outputting a single output or a certain number of outputs it needs to have a flexible range of how we can dynamically infer the objects in the scene so what is one maybe naive solution to tackle this very complicated problem and how can cnns be used to do that so what we can do is start with this image and let's consider the simplest way possible to do this problem we can start by placing a random box over this image somewhere in the image it has some random location it also has a random size and we can take that box and feed it through our normal image classification network like we saw earlier in the lecture this is just taking a single image or it's now a sub image but it's still a single image and it feeds that through our network now that network is tasked to predict what is the what is the class of this image it's not doing object detection and it predicts that it has some class if there is no class of this box then it simply can ignore it and we repeat this process then we pick another box in the scene and we pass that through the network to predict its class and we can keep doing this with different boxes in the scene and keep doing it and over time we can basically have many different class predictions of all of these boxes as they're passed through our classification network in some sense if each of these boxes give us a prediction class we can pick the boxes that do have a class in them and use those as a box where an object is found if no object is found we can simply discard it and move on to the next box so what's the problem with this well one is that there are way too many inputs the this basically results in boxes and considering a number of boxes that have way too 
many scales way too many positions too many sizes we can't possibly iterate over our image in all of these dimensions and and and have this as a naive solute and have this as a solution to our object detection problem so we need to do better than that so instead of picking random boxes or iterating over all of the boxes in our image let's use a simple heuristic method to identify some places in the image that might contain meaningful objects and use these to feed through our model but still even with this uh extraction of region proposals the the rest of the store is the exact same we extract the region of proposal and we feed it through the rest of our network we warp it to be the correct size and then we feed it to our classification network if there's nothing in that box we discard it if there is then we keep it and say that that box actually contained this image but still this has two very important problems that we have to consider one is that it's still super super slow we have to feed in each region independently to the model so if we extract in this case 2000 regions we have here we have to feed this we have to run this network 2 000 times to get the answer just for the single image it also tends to be very brittle because in practice how are we doing this region proposal well it's entirely heuristic based it's not being learned with a neural network and it's also even more importantly perhaps perhaps it's detached from the feature extraction part so our feature extraction is learning one piece but our region proposal piece of the network or of this architecture is completely detached so the model cannot learn to predict regions that may be specific to a given task that makes it very brittle for some applications now many variants have been proposed to actually tackle and tackle some of these issues and advance this forward to accomplish object detection but i'd like to touch on one extremely quickly just to point you on in this direction for those of you who are interested and that's the faster rcnn method to actually learn these region proposals the idea here is instead of feeding in this image to a heuristic based feedback or region proposal network or method we can have a part of our network that is trained to identify the proposal regions of our model of our image and that allows us to directly understand or identify these regions in our original image where there are candidate patches that we should explore for our classification and our for our object detection now each of these regions then are processed with their own feature extractor as part of our neural network and individuals or in their cnn heads then after these features for each of these proposals are extracted we can do a normal classification over each of these individual regions very similar as before but now the huge advantage of this is that it only requires a single forward pass through the model we only feed in this image once we have a region proposal network that extracts the regions and all of these regions are fed on to perform classification on the rest of the image so it's super super fast compared to the previous method so in classification we predict one class for an entire image of the model in object detection we predict bounding boxes over all of the objects in order to localize them and identify them we can go even further than this and in this idea we're still using cnns to predict this predict this output as well but instead of predicting bounding boxes which are rather coarse we can task our 
network to also here predict an entire image as well now one example of this would be for semantic segmentation where the input is an rgb an image just a normal rgb image and the output would be pixel-wise probabilities for every single pixel what is the probability that it belongs to a given class so here you can see an example of this image of some two cows on the on some grass being fed into the neural network and the neural network actually predicts a brand new image but now this image is not an rgb image it's a semantic segmentation image it has a probability for every single pixel it's doing a classification problem and it's learning to classify every single pixel depending on what class it thinks it is and here we can actually see how the cow pixels are being classified separately from the grass pixels and sky pixels and this output is actually created using an up sampling operation not a down sampling operation but up sampling to allow the convolutional decoder to actually increase its spatial dimension now these layers are the analog you could say of the normal convolutional layers that we learned about earlier in the lecture they're also already implemented in tensorflow so it's very easy to just drop these into your model and allow your model to learn how to actually predict full images in addition or instead of single class probabilities this semantic segmentation idea is extremely powerful because it can be also applied to many different applications in healthcare as well especially for segmenting for example cancerous regions on medical scans or even identifying parts of the blood that are infected with diseases like in this case malaria let's see one final example here of how we can use convolutional feature extraction to perform yet another task this task is different from the first three that we saw with classification object detection and semantic segmentation now we're going to consider the task of continuous robotic control here for self-driving cars and navigating directly from raw vision data specifically this model is going to take as input as you can see on the top left hand side the raw perception from the vehicle this is coming for example from a camera on the car and it's also going to see a noisy representation of street view maps something that you might see for example from google maps on your smartphone and it will be tasked not to predict the classification problem or object detection but rather learn a full probability distribution over the space of all possible control commands that this vehicle could take in this given situation now how does it do that actually this entire model is actually using everything that we learned about in this lecture today it can be trained end to end by passing each of these cameras through their dedicated convolutional feature extractors and then basically extracting all of those features and then concatenating them flattening them down and then concatenating them into a single feature extraction vector so once we have this entire representation of all of the features extracted from all of our cameras and our maps we can actually use this representation to predict the full control parameters on top of a deterministic control given to the desired destination of the vehicle this probabilistic control is very powerful because here we're actually learning to just optimize a probability distribution over where the vehicle should steer at any given time you can actually see this probability distribution visualized on this map and 
it's optimized simply by the negative log likelihood which is the negative log likelihood of this distribution which is a normal a mixture of normal distributions and this is nearly identical to how you operate in classification as well in that domain you try to minimize the cross-entropy loss which is also a negative log likelihood optim or probability function so keep in mind here that this is composed of the convolutional layers to actually perform this feature extraction these are exactly the same as what we learned about in this lecture today as well as these flattening pooling layers and concatenation layers to really produce this single representation and feature vector of our inputs and finally it predicts these outputs in this case a continuous representation of control that this vehicle should take so this is really powerful because a human can actually enter the car input a desired destination and the end to end cnn will output the control commands to actuate the vehicle towards that destination note here that the vehicle is able to successfully recognize when it approaches the intersections and take the correct control commands to actually navigate that vehicle through these brand new environments that it has never seen before and never driven before in its training data set and the impact of cnns has been very wide reaching beyond these examples as well that i've explained here today it has touched so many different fields in computer vision especially and i'd like to really conclude this lecture today by taking a look at what we've covered we've really covered a ton of material today we covered the foundations of computer vision how images are represented as an array of brightness values and how we can use convolutions and how they work we saw that we can build up these convolutions into the basic architecture defining convolutional neural networks and discussed how cnns can be used for classification finally we talked about a lot of the extensions and applications of how you can use these basic convolutional neural network architectures as a feature extraction module and then use this to perform your task at hand and a bit about how we can actually visualize the behavior of our neural network and actually understand a bit about what it's doing under the hood through ways of some of these semantic segmentation maps and really getting a more fine-grained perspective of the the very high resolution classification of these input images that it's seeing and with that i would like to conclude this lecture and point everyone to the next lab that will be upcoming today this will be a lab specifically focused on computer vision you'll get very familiar with a lot of the algorithms that we've been talking about today starting with building your first convolutional neural networks and then building this up to build some facial detection systems and learn how we can use unsupervised generative models like we're going to see in the next lecture to actually make sure that these computer vision facial classification algorithms are fair and unbiased so stay tuned for the next lecture as well on unsupervised generative modeling to get more details on how to do a second part thank you |
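As a follow-up to the classification architecture walked through in the lecture above, here is a minimal sketch in tf.keras of that layout: two convolutional layers with 32 and then 64 feature maps using 3x3 filters, each followed by 2x2 max pooling with stride 2, then a flatten, a dense layer, and a 10-way softmax output. The 32x32x3 input shape, the hidden dense width of 64, and the compile settings are illustrative assumptions, not the course's exact lab code.

```python
import tensorflow as tf

# A minimal sketch of the CNN described above: conv(32, 3x3) -> maxpool(2x2, stride 2)
# -> conv(64, 3x3) -> maxpool -> flatten -> dense -> 10-way softmax.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, (3, 3), activation="relu", input_shape=(32, 32, 3)),
    tf.keras.layers.MaxPool2D(pool_size=(2, 2), strides=2),
    tf.keras.layers.Conv2D(64, (3, 3), activation="relu"),
    tf.keras.layers.MaxPool2D(pool_size=(2, 2), strides=2),
    tf.keras.layers.Flatten(),                       # spatial feature maps -> a single vector
    tf.keras.layers.Dense(64, activation="relu"),    # task-learning ("classification") head
    tf.keras.layers.Dense(10, activation="softmax")  # categorical distribution over 10 classes
])

# With integer labels 0..9 and a softmax output, sparse categorical cross-entropy
# gives the standard image-classification training setup.
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```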
MIT_6S191_Introduction_to_Deep_Learning | MIT_6S191_2019_Visualization_for_Machine_Learning_Google_Brain.txt | all right so thank you for the invitation it's really exciting to take part of my afternoon to come over here and talk to you all about some of the work that we're doing as Ava said I co-lead a couple of things at Google one is this team called the big picture group and a lot of our work centers around data visualization in the last three years or so it's been very much focused on data visualization for machine learning so you'll get to hear a lot about what we do today I also co-lead the PAIR initiative which stands for people plus AI research and there we have a bunch of different kinds of things that we do as part of this initiative we produce open source software hopefully some of which you are familiar with tensorflow is anyone tensorflow yes yes yes I'm seeing some noddings here we also put out educational materials so some of this actually starts to overlap with the data visualization for machine learning we've noticed that even as we at Google try to retrain our own engineering workforce to use machine learning it turns out that if you have simulations and data visualization front ends to play with some of these systems you just speed up learning by like a factor of 10 instead of going back to the command line to learn some of the basics so we can talk a little bit about that today as well all right so the first thing kind of let's start at the beginning with the data right why might you want to visualize your training data and what's special about your training data one of the things that we started saying at Google is that whenever you start working with machine learning and you look at your results and maybe your results are not coming out as strongly as you would wish there is this innate intuition or this you're like itching to debug your machine learning model right one of the things we always say is before you even debug your model debug your data and that becomes incredibly important I'm sure you're talking about a lot of that here in the class as well problem is we don't have good tools to do that today so we're starting to build these at Google and other places everybody's starting to think about what might be interesting tools for you to start to debug your training data the data that your system is going to rely on so we started to approach this from a data visualization perspective so what I have here is a screenshot of the CIFAR-10 data set everybody familiar with that data set very simple data set of images ten classes that's all and so what I'm going to demo for you right now is a visualization we created it's called facets and facets is a visualization right now I'm visualizing CIFAR-10 and all I'm doing as you can tell is I'm just looking at all the pictures these are not all it's a sample of CIFAR-10 of the pictures in each class and I can very easily zoom in and I can see what the pictures look like I can zoom out and even as simple as this is it already starts to give me a window into what's happening with my data one of the things I can see right away is that the hues are different right something you may expect so the top row here is airplane and you can see that it has a ton more blue than say bird or a cat or deer and so that may be expected right because I take pictures of airplanes against the sky and I take pictures of ships against the water okay but then I can start to play some interesting games I
can say show me the same data set within each category but now I want you to arrange this in terms of hue distribution okay so now I can start to see okay these are little histograms of my pictures per category per hue and voila what are the bluest of the blue airplanes or there might be some interesting blue birds here okay these are also against the sky that's interesting let me see you I have some Reds on airplanes okay interesting just a very easy organic way to start looking at my data I can also look for things like okay what are the classes of in my data set that have the the most if you will kind of equal distribution of hues you get to things like ships and trucks and and so forth and you have these bulges of kind of earthy tones for the animals right which makes sense I can also start playing other games so I can come here and say okay now give me a confusion matrix of my data and again the confusion matrix is going to pair each one of these rows is what the system thinks my image is versus each one of the columns is what the humans have hand labeled my image to be so the good news to me is that right away I can see that the diagonal is the most populated set of cells that's good because that's where my system and the humans agree right and another interesting thing I can see here right away is that I have a couple of cells that are highly populated I have this cell here ton of pictures and I have this other cell here ton of pictures and sure enough these are my system is getting confused between dogs and cats okay so that already might start to give me some hints as to what I might want to debug about my system maybe I need to give it more images of cats and dogs so it does better on those classes now keep this in mind this is C far 10 this is kind of a hello world data set of machine learning everybody looks at this data set everybody benchmarks against this data set it's been looked at by thousands of people right so we started visualizing this and we're like oh okay so my system is very sure that these are cat these are not cats so let's see what the cats are that my system is very short are not cats interesting okay so I can see how like this guy here has a lot big ears maybe it's getting confused about that all right but then we started seeing some interesting things I challenge you to find something in the bottom rows that is indeed not a cat anyone a frog yes look at this this was hand labelled cat but my system is a hundred percent sure it's a frog and I have to give it to it I think it's a frog I don't think it's a cat at all but why does that matter it matters because just by giving you the ability to look at your data you can start to see actual mistakes in your data right and again this is a data set that thousands of researchers and practitioners use and benchmark again benchmark against and nobody knew that there were a couple of mistakes there interesting things right so again just creating a the ability to look at your data very easily can buy you can get rid of a lot of headache for you so that's that's facets and we open sourced facets so facets is available for anyone to use and it's been downloaded a bunch of times you don't have to use it for machine learning it's a visualization tool for any kind of faceted data set so you can use it for anything you want and here at MIT people have started using it so joy for instance at the Media Lab was using facets to look at racial bias racial and gender bias in facial recognition facial recognition systems from 
industry and she was using the visualization to make the point that the category of women of color was the category that these systems would consistently do worse at alright I'm gonna I'm if I have time I'm gonna come back to the What-If Tool the only thing I'll say about this tool is that it's another open-source tool that we created that allows you to probe a machine learning model as a black box and to probe for things like machine learning fairness different concepts of ML fairness if we have time we'll come back to this alright so we just talked about the importance of looking at your data now let's talk a little bit about what can you learn when you're looking at your model sort of how is your model understanding the world how is it making sense of very very high dimensional data so I'm gonna walk you through a very quick exercise in high dimensionality so a warm-up exercise MNIST yes MNIST everybody familiar with the MNIST data set of machine learning and I'm going to start pulling us over to the world of visualization again so think about each one of these images as a very high dimensional vector how might I think about that all I'm going to do is I'm going to count the pixels and depending on the color of the pixel I'm gonna give it a value from zero to one one if it's all white zero if it's black something in the middle if it's a gray I end up with a vector boom I transformed an image into math that's great now I have a great way to compare these images alright so then we created a visualization to actually look at this so this is another tool that's open sourced by PAIR and this is called the embedding projector and now what we're doing here we're looking at MNIST in high dimensional space okay so this is a 3d projection of the MNIST numbers okay all I did here since I have ground truth I colored the numbers by the ground truth okay and the projection I'm using here is t-SNE it's a nonlinear projection it does really well in keeping track of local clusters it doesn't do very well at all in understanding the global structure of things but locally it's very good so what do we have here the good news is that we have clusters that's a good starting point and we have space between clusters so that's really good because it tells me that somehow these categories are being pulled apart from each other we also have so let's start playing with this tool it lets me rotate in many different ways I can zoom in and out to look more carefully at things I can also click on things so basically just so you know what's going on here I'm setting up a demo using the same tool that shows you how a visualization like this can work in a real system in a large system the system I was showing you before the data I was showing you before it was MNIST right very easy toy example what I'm showing you now and this is it's taking its time to get to the t-SNE projection it's calculating t-SNE it's calculating all the clusters in this data set and I want to be able to interact with it that's why I'm taking my time and I'll talk to you while this is working in the background okay so we're gonna let this do its thing it's gonna run in the background I'm gonna try to get it not to distract us completely meanwhile let's talk about interpretability in a real world scenario so a couple years ago Google came out with something called a multilingual neural net Translate system what what does that mean it means that up until that moment whenever you wanted to use neural nets to translate
between two languages you had to train a model on a pair of languages so I trained a model between English and French French English I trained a different model for English and Japanese Japanese English okay so it was all based on pairs for the first time Google came up with an architecture they'll let you ingest multiple languages and translate high-quality translations into multiple languages okay and that was that was a revelation it was it was quite a feat and so that's called the multilingual translation models there's another thing that was interesting in what these systems were doing somehow they were able to do what we call zero shot translation okay and so what that means is that imagine I have a system that is that is ingesting a bunch of sentences that translate between English and Japanese Japanese in English and it's also ingesting a bunch of sentences that translate between glish and Korea and back and forth okay somehow this system was able to also do Japanese to Korean and Korean to Japanese without ever having seen a single example of a sentence that goes straight from one to the next okay from Japanese to Korean so one of the challenges one of the big unknowns is that the people building these systems they weren't sure how is the system being able to do this with high quality translation high quality translation is extremely hard to get is extremely announced and so how is it doing that oh this was the data that we decided to visualize to actually start answering that question okay it turns out that between the encoder and the decoder there is an attention vector in the middle and this is the data that we were that we were going to visualize all right the question in these researchers Minds these are the people who were building the Translate system was imagine I have space with multiple languages an embedding space a very high dimensional space of language what were these multilingual systems doing were they doing something like what's here on the Left where they would resolve the space by keeping all the English in one corner keeping all the Japanese on the other corner and keeping all the Korean in another corner and then just mapping between these spaces for translations or where's the system doing something more like what you see on the right where it's way more messy but they're bringing together these multiple languages okay and so this why does this matter why does it matter if it's one way or another way it matters because if what the system was doing was mixing up the different languages in the same high dimensional space it had a very specific implication which meant it the system didn't care as much that one string said for instance home and another string said cazza it knew that these two strings had the same semantic meaning so if it was co-locating these very different strings from very different languages in the same high dimensional space it meant that it was paying attention to actual the semantics the actual meaning of the words and that matters because that then tells us that these are the first indications of a universal language okay and so but we had no way of knowing so we decided to visualize and how might we visualize this imagine I have a sentence that says something like the stratosphere extends from 10 kilometers to 50 kilometers in altitude what does that look like in high dimensional space it looks like a bunch of points where some points are going to be closer together like the 10 and the 15 maybe because they're both numbers okay so again 
location has a specific meaning here in high dimensional space and then what we do with the visualization is we connect these dots and so that is my sentence in high dimensional space it's the path of my sentence okay so now I have my sentence in English what does this look like when I have to set to when I have a translation does it look like this where I have my English sentence in one corner and I have the Portuguese version of that sentence on another corner or does it look more like something like this okay and so this is what we're trying to find out so the visualization for this let me see is actually this here so what I'm showing here again we're back in the embedding projector and I'm visualizing a multi-dimensional tea lingual system that takes in English Japanese and Korean I am coloring each one of those languages by a different color so English is blue Korean is red Japanese is yellow okay my question to you is as messy as this looks do you see a big neighborhood of red and a big neighborhood of blue and a big neighborhood of yellow no right so that right there was our very first indication that something really interesting was happening because we started to see clusters that had the three colors together okay and so the thing that I was hoping to show you but I don't know that I'm gonna be able to show you let me see if I can oh maybe maybe I'll show you like this so if I do a search for stratosphere well it it just did a bunch of stuff here that is not helpful ah I'm gonna try to drag it all the way here do you see this little cluster here the stratosphere with the altitude between 10 and 50 kilometers okay ignore all of this junk here this is not supposed to be there but this is what I care about if I look at this cluster okay and I look at the nearest neighbors these are the nearest neighbors here that I'm mousing over the three languages are here okay and my nearest neighbors for that ain't for that sentence in English both in Japanese and in Korea are all in that cluster alright so that in fact did you see I just clicked out and you see the cluster has three colors here so that was our indication that this was actually the beginning of a universal language this system was actually being able to coalesce the three spaces of three different languages in terms of semantic meaning now keep this image in mind and I want to show you a sister image of the image a visualization of a sister system this system here it's the same kind of visualization but this is a system that takes in English Portuguese and Spanish okay what do you see that's different here separated right I see a big neighborhood of red there and so we saw this and we're like wait wait wait a second we thought there were indications of a universal language going on what is that doing there by itself so we did a statistical analysis for quality of translation and we found out that the sentences in this cluster the was being pulled apart they were the translation translations were all bad low-quality translations okay so what does that tell us it tells us that the geometry of the space matters that if you have a multilingual system kind of in the case Google had if your system looks like this that's bad news that means that your system is not being able to cluster in the right way and it's having difficulties and it's going to do a bad job with translations okay so again visualization here not only to give you a sense of the landscape of what's happening these massively high dimensional spaces but also giving you 
a clue of how to start debugging your system maybe you need more training data in Portuguese maybe it's something else but you have a problem okay so this was really exciting because it was the first time anyone could see kind of like an MRI version of this neural net and the fact that it was functioning at the super-high level it was functioning for language a couple of other things I wanted to show the same visualization can let us understand things about our own language that I think are very interesting so this is again the embedding projector and one of the things I didn't demo today is not only do you have these projections like t-SNE that you just saw or PCA you also can create custom projections you can say let's look at the English language so imagine each one of those dots is a word in the English language okay and now what I want to understand is the notion of biases in our language okay you may be familiar with papers like this one about word embeddings and the fact that you can do calculations in the direction of certain vectors so you can say oh look you know in word embedding space if I take the direction between China and Beijing if I take that direction and I follow roughly that direction from Russia I'm going to end up in Moscow okay if I follow that direction for Japan I'm going to end up in Tokyo and so on and so forth so there are these meaningful directions in word embedding spaces that are incredibly powerful so now let's use that to start visualizing language what I did here is I went to the embedding projector that you just saw and I looked for the word book okay and then I filtered all the words by the hundred top nearest neighbors to book so I have words like biography and publishing and manuscript all the nearest neighbors to book but then I said okay now I want to see these words along the following projection I want to give an axis that goes from old to new so the more to the left a word is the closer to old it is so let's see some of the words I have poem manuscript author story okay closest to new here on the right I have company publications theory mind creation volume okay okay interesting now let's look at a different set of words my axis now is man to woman and my focal word my anchor word is engineer and all of its nearest neighbors so I can tell already that engineer is closer to man than it is to woman by the way the y-axis doesn't mean anything it's random just pay attention to the X location okay so closer to man I have engineers engineering electrical electronics I have a really interesting word here at the bottom but I can't look at the bottom I can't point Michael Michael is one of the top nearest neighbors to engineer that's crazy okay and then close to woman I have journalist lawyer officer diplomat writer poet okay let's keep going my new anchor word is math between man and woman next to man computational arithmetic flash computation physics astronomical so forth next to woman psychology instance procedural art educational library and so forth I changed my axes now it goes from Christian on the left to Jew on the right and my anchor word is moral so close to Christian I have Christian beliefs religion faith religious closest to Jew intellectual psychological serious educational sexual liberty ethical so forth okay why does this matter it matters because to me this is kind of like the canary in the mine this is just language the data set here remember we were talking
about training data being incredibly important word2vec which is this data set it's from hundreds of thousands of publications it's just how we use language in the real world okay so when you're training your machine learning systems based on language one thing to keep in mind is the fact that you're training on a biased data set and so being aware of these biases can actually be incredibly important and hopefully being able to use tools like visualization tools starts to give you a way of even becoming aware acknowledging understanding and hopefully trying to mitigate some of these effects I think I'm gonna stop here and open up for questions thank you any questions ah that's an awesome question if you had a word that had multiple meanings in say a projection like this it would probably show up in the middle which would not be very illuminating another thing that happens on the embedding projector is that you may have a word that is very very strongly so let's take let's talk about a concrete example the word bank okay the word bank can be a financial institution it can also be a river bank right if my corpus of data that I trained on is on financial news there is going to be a very high pull towards financial institution vocabulary right and I will see that bias in my data set hopefully what also shows up if my data set is broad enough is that when I highlight that word in the visualization here I was filtering but imagine I have the entire language okay let's say I highlight the word bank it definitely lights up all of the financial institutional vocabulary that's highly connected to that but it also lights up stuff like river elsewhere in the visualization and that is one indication where you're like oh interesting there are these disparate very separate clusters here maybe there's something there maybe that's one indication that you have these multiple words with multiple meanings so part of what we were trying to understand was what mode it was in at all we had no clue we were literally like if it separates that's fine if it doesn't separate that's interesting literally we were like just give me a telescope so I can look into this world and see what it looks like the system was working really well so it was more of a curiosity of trying to understand how is it working so well what is it doing that it can get to these very high quality translations from no training data on specific pairs right it was more that kind of question than the oh we think it should do this or we think it should do that it was literally like what is it even doing that it works this way once we got to that set of questions another set of questions that came up was like well here's a system that's not working so well why is it not working so well and so that was the point that I was trying to make when I showed this system between English Japanese and Korean that was working really well that looked like this okay now let's look at a system that was not working so well and it's a sister system okay here's a clue about why maybe it was not working as well does that make sense yeah sure in terms of biases oh yeah I think one thing that would be wonderful that I don't know that anyone has done yet is exactly what I let me know if I'm understanding your question correctly I think it would be wonderful to compare embedding spaces of different languages right it could be that in English we have a set of biases that maybe don't exist in Turkish I don't know but even trying to understand
for instance what are the nearest neighbors of a given word in English versus nearest neighbors of a given word in French how much does that set of how do those embedding spaces differ I think that would be incredibly interesting one specific if we're talking about translation one very complex issue that touches translation word embeddings and and biases is what happens what used to happen now Google started addressing this what used to happen when you went to Google Translate and you said something about a doctor and a nurse the doctor is coming and the nurse and the nurse will see you okay so when you were to translate between languages that have gender like English two languages that don't have gender like Turkish it turns out that in so imagine it going the other way around imagine I have that sentence the doctor is coming to see you in Turkish and I am translating that into English on Google Translate Google Translate would translate it into he's coming the doctor is coming to see you he will look at your exams there was an assumption that it was a he if the word been used was nurse there would be an assumption in English that the pronoun following nurse would be she okay even though the original the source language had no notion of gender whatsoever attached to any of those words right and so Google had to actively proactively work out a solution where it realizes it's coming from a non gendered language to one that has biases in terms of professions and gender and give users a solution like oh the doctor is coming to see you he /she will talk to you next or something like that so you yeah these are the kinds of biases you want to be aware of and try to mitigate work around a very interesting point because part of what we're trying to do is we're trying to understand what is the intent on the user side so as a user I want a translation to my problem which is I see a sentence in Turkish and I have no idea what it means in English if all you're giving me is the top hit from a statistical distribution you're giving me a very specific answer you're not giving me all the possible answers to my translation problem as a user as a user it is important for me to know that the translation to that sentence could be he but it could be she right or do I want a distribution do I want you to decide for me what the translation is again like if I am if I don't know the language I'm translating from right I want to understand what are the possibilities in my own language I can understand so that therein lies the dilemma it really depends on what do you think what do you think the user the users intent is and how do you work with that but yeah to your point absolutely again we're back to the fact that the data is skewed absolutely it is skewed right and to your point it reflects a certain reality thank you |
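Here is a hedged sketch of the embedding-projection idea demonstrated in the talk above: flatten each image into a vector, run t-SNE, and color the 2-D projection by the ground-truth label to look for class clusters. It uses scikit-learn's TSNE and its small built-in digits dataset as stand-ins for the embedding projector and MNIST; the init and random_state settings are illustrative assumptions.

```python
import matplotlib.pyplot as plt
from sklearn.datasets import load_digits
from sklearn.manifold import TSNE

digits = load_digits()            # small 8x8 digit images, a stand-in for MNIST
X = digits.data / 16.0            # each image flattened to a 64-dim vector scaled to [0, 1]

# Nonlinear projection to 2-D; t-SNE preserves local clusters well,
# but not the global structure of the space.
embedding = TSNE(n_components=2, init="pca", random_state=0).fit_transform(X)

plt.scatter(embedding[:, 0], embedding[:, 1], c=digits.target, cmap="tab10", s=5)
plt.colorbar(label="ground-truth digit")
plt.title("t-SNE of flattened digit images, colored by class")
plt.show()
```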
MIT_6S191_Introduction_to_Deep_Learning | MIT_6S191_2019_Introduction_to_Deep_Learning.txt | good afternoon everyone thank you all for joining us my name is Alexander Amini and I'm one of the course organizers for 6.S191 this is MIT's official course on introduction to deep learning and this is actually the third year that we're offering this course and we've got a really good one in store for you this year with a lot of awesome updates so I really hope that you enjoy it so what is this course all about this is a one-week intensive boot camp on everything deep learning you'll get up close and personal with some of the foundations of the algorithms driving this remarkable field and you'll actually learn how to build some intelligent algorithms capable of solving incredibly complex problems so over the past couple years deep learning has revolutionized many aspects of research and industry including things like autonomous vehicles medicine and healthcare reinforcement learning generative modeling robotics and a whole host of other applications like natural language processing finance and security but before we talk about that I think we should start by taking a step back and talking about something at the core of this class which is intelligence what is intelligence well I like to define intelligence as the ability to process information to inform future decisions the field of artificial intelligence is actually building algorithms artificial algorithms to do exactly that process information to inform future predictions now machine learning is simply a subset of artificial intelligence or AI that actually focuses on teaching an algorithm how to take information and do this without explicitly being told the sequence of rules but instead learn the sequence of patterns from the data itself deep learning is simply a subset of machine learning which takes this idea one step further and actually tries to extract these patterns automatically from raw data without the need for the human to actually come in and annotate these rules that the system needs to learn and that's what this class is all about teaching algorithms how to learn a task from raw data we want to provide you with a solid foundation so that you can learn how these algorithms work under the hood and with the practical skills so that you can actually implement these algorithms from scratch using deep learning frameworks like tensorflow which is the current most popular deep learning framework with which you can code neural networks and other deep learning models we have an amazing set of lectures lined up for you this week including today which will kick off an introduction on neural networks and sequence based modeling which you'll hear about in the second part of the class tomorrow we'll cover some stuff about computer vision and deep generative modeling and the day after that we'll even talk about reinforcement learning and end on some of the challenges and limitations of the current deep learning approaches and kind of touch on how we can move forward as a field past these challenges we'll also spend the final two days hearing from some guest lecturers from top AI researchers these are bound to be extremely interesting we have speakers from Nvidia IBM and Google coming to give talks so I highly recommend attending these as well and finally the class will conclude with some final project presentations from students like you in the audience where
you'll present some final projects for this class and then we'll end on an award ceremony to celebrate so as you might have seen or heard already this class is offered for credit you can take this class for grade and if you're taking this class for grade you have two options to fulfill your grade requirement first option is that you can actually do a project proposal where you will present your project on the final day of class that's what I was saying before on Friday you can present your project and this is just a three minute presentation we'll be very strict on the time here and we realized that one week is a super short time to actually come up with a deep learning project so we're not going to actually be judging you on the results you create during this week instead what we're looking for is the novelty of the ideas and how well you can present it given such a short amount of time in three minutes and we kind of think it's like an art to being able to present something in just three minutes so we kind of want to hold you to that tight time schedule and kind of enforce it very tightly just so that you're forced to really think about what is the core idea that you want to present to us on Friday your projects your presentations will be judged by a panel of judges and will be awarding GPUs and some home Google home AI assistants this year we're offering three NVIDIA GPUs each one worth over $1,000 some of you know these GPUs are the backbone of doing cutting-edge deep learning research and it's really foundational or essential if you want to be doing this kind of research so we're really happy that we can offer you these types this type of hardware the second option if you don't want to do the project presentation but you still want to receive credit for this class you can do the second option which is a little more boring in my opinion but you can write a one-page review of a deep learning paper and this will be doing the last day of class and this is for people I don't want to do the project presentation but you still want to get credit for this class please post to Piazza if you have questions about the labs that we'll be doing today or any of the future days if you have questions about the course in general there's course information on the website enter deep learning com along with announcements digital recordings as well as slides for these classes today's slides are already released so you can find everything online and of course if you have any questions you can email us at intro to deep learning - staff at MIT edu this course has an incredible team that you can reach out to in case you have any questions or issues about anything so please don't hesitate to reach out and finally we want to give a huge thanks to all of the sponsors that made this course possible so now let's start with the fun stuff and actually let's start by asking ourselves a question why do we even care about this class why did you all come here today what is why do we care about deep learning well traditional machine learning algorithms typically define sets of rules or features that you want to extract from the data usually these are hand engineered features and they tend to be extremely brittle in practice now the key idea is a key insight of deep learning is that let's not hand engineer these features instead let's learn them directly from raw data that is can we learn in order to detect the face we can first detect the edges in the picture compose these edges together to start detecting things like eyes 
mouth and nose and then composing these features together to detect higher-level structures in the face and this is all performed in a hierarchical manner so the question of deep learning is how can we go from raw image pixels or raw data in general to a more and more complex representation as the data flows through the model and actually the fundamental building blocks of deep learning have existed for decades and their underlying algorithms have been studied for many years even before that so why are we studying this now well for one data has become so prevalent in today's society we're living in the age of big data where we have more access to data than ever before and these models are hungry for data so we need to feed them with all the data and a lot of these datasets that we have available like computer vision datasets natural language processing datasets this raw amount of data was just not available when these algorithms were created second these algorithms are massively parallelizable at their core at their most fundamental building blocks that you'll learn today they're massively parallelizable and this means that they can benefit tremendously from very specialized hardware such as GPUs and again technology like these GPUs simply did not exist in the decades that deep learning or the foundations of deep learning were developed and finally due to open-source tool boxes like tensorflow which you will learn to use in this class building and deploying these models has become more streamlined than ever before it is becoming increasingly easy to abstract away all of the details and build a neural network and train a neural network and then deploy that neural network in practice to solve a very complex problem in just tens of lines of code you can create a facial classifier that's capable of recognizing very complex faces from the environment so let's start with the most fundamental building block of deep learning the fundamental building block that makes up a neural network and that is a neuron so what is the neuron in deep learning we call it a perceptron and how does it work so the idea of a perceptron or a single neuron is very simple let's start by describing the feed-forward propagation of information through that model we define a set of inputs x1 through xm which you can see on the left hand side and each of these inputs are actually multiplied by a corresponding weight w1 through wm so you can imagine if you have x1 times w1 you have x2 times w2 and so on you take all of those multiplications and you add them up so these come together in a summation and then you pass this weighted sum through a nonlinear activation function to produce a final output which we'll call y so that's really simple but I actually left out one detail in that previous slide so I'll add it here now we also have this other term this green term which is a bias term which allows you to shift your activation function left and right and now on the right side you can kind of see this diagram illustrated as a mathematical formula as a single equation we can actually rewrite this now using linear algebra using vectors dot products and matrices so let's do that so now X is a vector of our inputs x1 through xm so instead of now a single number x capital X is a vector of all of the inputs capital W is a vector of all of the weights 1 through m and we can simply take their weighted sum by taking the dot product
between these two vectors then we add our bias like I said before our bias now is a single number w0 and applying that nonlinear term so the nonlinear term transforms that scalar input to another scalar output y so you might now be wondering what is this thing that I've been referring to as an activation function I've mentioned it a couple times I called it by a couple different names first was a nonlinear function then was an activation function what is it so one common example of a nonlinear activation function is called the sigmoid function and you can see one here defined on the bottom right this is a function that takes as input any real number and outputs a new number between 0 and 1 so you can see it's essentially collapsing your input between this range of 0 and 1 this is just one example of an activation function but there are many many many activation functions used in neural networks here are some common ones and throughout this presentation you'll see these tensorflow code blocks on the bottom like this for example and I'll just be using these as a way to kind of bridge the gap between the theory that you'll learn in this class with some of the tensorflow that you'll be practicing in the labs later today and through the week so the sigmoid function like I mentioned before which you can see on the left-hand side is useful for modeling probabilities because like I said it collapses your input to between 0 and 1 since probabilities are modeled between 0 and 1 this is actually the perfect activation function for the end of your neural network if you want to predict probability distributions at the end another popular option is the ReLU function which you can see on the far right-hand side this function is an extremely simple one to compute it's piecewise linear and it's very popular because it's so easy to compute but has this non-linearity at z equals zero so at z less than 0 this function equals 0 and at z greater than 0 it just equals the input and because of this non-linearity it's still able to capture all of the great properties of activation functions while still being extremely simple to compute and now I want to talk a little bit about why do we use activation functions at all I think a great part of this class is to actually ask questions and not take anything for granted so if I tell you we need an activation function the first thing that should come to your mind is well why do we need that activation function so the purpose of activation functions is to introduce nonlinearities into the network this is extremely important in deep learning or in machine learning in general because in real life data is almost always very nonlinear imagine I told you to separate here the green from the red points you might think that's easy but then what if I told you you had to only use a single line to do it well now it's impossible that actually makes the problem not only really hard like I said it makes it impossible in fact if you use linear activation functions in a neural network no matter how deep or wide your neural network is no matter how many neurons it has this is the best that it will be able to do produce a linear decision boundary between the red and the green points and that's because it's using linear activation functions when we introduce a nonlinear activation function that allows us to approximate arbitrarily complex functions and draw arbitrarily complex decision boundaries in this feature space and that's exactly
what makes neural networks so powerful in practice so let's understand this with a simple example imagine I give you a trains Network with weights W on the top here so W 0 is 1 and let's say W 0 is 1 the W vector is 3 negative 2 so this is a trained neural network and I want to feed in a new input to this network well how do we compute the output remember from before it's the dot product we add our bias and we compute a non-linearity there's three steps so let's take a look at what's going on here what's inside of this nonlinear function the input to the nonlinear function well this is just a 2d line in fact we can actually plot this 2d line in what we call the feature space so on the x axis you can see X 1 which is the first input and on the y axis you can see X 2 which is the second input this neural network has two inputs we can plot the line when it is equal to zero and you can actually see it in the feature space here if I give you a new point a new input to this neural network you can also plot this new point in the same feature space so here's the point negative 1 2 you can plot it like this and actually you can compute the output by plugging it into this equation that we created before this line if we plug it in we get 1 minus 3 minus 4 right which equals minus 6 that's the input to our activation function and then when we feed it through our activation function here I'm using sigmoid again for example our final output is zero point zero zero two ok what does that number mean let's go back to this illustration of the feature space again what this feature space is doing is essentially dividing the space into two hyperplanes remember that the sigmoid function outputs values between 0 and 1 and at z equals 0 when the input to the sigmoid is 0 the output of the sigmoid is 0.5 so essentially you're splitting your space into two planes one where Z is greater than zero and one more Z is less than zero and one where Y is greater than 0.5 and one where Y is less than 0.5 the two are synonymous but when we're dealing with small dimensional input data like here we're dealing with only two dimensions we can make these beautiful plots and these are very valuable and actually visualizing the learning algorithm visualizing how our output is relating to our input we're gonna find very soon that we can't really do this for all problems because while here we're dealing with only two inputs in practical applications and deep neural networks we're gonna be dealing with hundreds thousands or even millions of inputs to the network at any given time and then drawing one of these plots in thousand dimensional space is going to become pretty tough so now that we have an idea of the perceptron a single neuron let's start by building neural networks from the ground up using one neuron and seeing how this all comes together let's revisit our diagram of the perceptron if there's a few things that you remember from this class I want to remember this so there's three steps to computing the output of a perceptron dot product add a bias taking non-linearity three steps let's simplify the diagram a little bit I just got rid of the bias I removed the weights just for simplicity to keep things simple and just note here that I'm writing Z as the input to the to the activation function so this is the weighted combination essentially of your inputs Y is then taking the activation function with input Z so the final output like I said Y is is on the right-hand side here and it's the activation function applied to this 
weighted sum if we want to define a multi output neural network now all we have to do is add another perceptron to this picture now we have two outputs each one is a normal perceptron like we defined before no nothing extra and each one is taking all the inputs from the left-hand side computing this weighted sum adding a bias and passing it through an activation function let's keep going now let's take a look at a single layered neural network this is one where we have a single hidden layer between our inputs and our outputs we call it a hidden layer because unlike the input and the output which are strictly observable or hidden layers learned so we don't explicitly enforce any behavior on the hidden layer and that's why we call it hidden in that sense since we now have a transformation from the inputs to the hidden layer and hidden layer to the outputs we're going to need two weight matrices so we're going to call it W one to go from input to hidden layer and W two to go from hidden layer to output but again the story here's the same dot product add a bias for each of the neurons and then compute an activation function let's zoom in now to a single hidden hidden unit in this hidden layer if we look at the single unit take z2 for example it is just the same perceptron that we saw before I'm going to keep repeating myself we took a dot product with the inputs we applied a bias and then actually so since it's Z we had not applied our activation function yet so it's just a dot product plus a bias so far if we took it and took a look at a different neuron let's say z3 or z4 the idea here is gonna be the same but we're probably going to end up with a different value for Z 3 and C 4 just because the weights leading from Z 3 to the inputs are going to be different for each of those neurons so this picture looks a little bit messy so let's clean things up a little bit more and just replace all of these hidden layers all these lines between the hidden layers with these symbols these symbols denote fully connected layers where each input to the layer is connected to each output of the layer another common name for these is called dense layers and you can actually write this in tensor flow using just four lines of code so this neural network which is a single layered multi output neural network can be called by instantiating your inputs feeding those inputs into a hidden layer like I'm doing here which is just defined as a single dense layer and then taking those hidden outputs feeding that into another dense layer to produce your outputs the final model is to find it end to end with that single line at the end model of inputs and outputs and that just essentially connects the graph and to end so now let's keep building on this idea now we want to build a deep neural network what is the deep neural network well it's just one where we keep stacking these hidden layers back to back to back to back to create increasingly deeper and deeper models one where the output is computed by going deeper into the network and computing these weighted sums over and over and over again with these activation functions repeatedly applied so this is awesome now we have an idea on how to actually build a neural network from scratch going all the way from a single perceptron and we know how to compose them to create very complex deep neural networks as well let's take a look at how we can apply this to a very real problem that I know a lot of you probably care about so I was thinking of a problem potential that some of 
you might care about it took me a while but I think this might be one so at MIT we care a lot about passing our classes so I think a very good example is let's train a neural network to determine if you're gonna pass your class so to do this let's start with a simple two input feature model one feature is the number of lectures that you attend the other feature is the number of hours that you spend on the final project again since we have two inputs we can plot this data on a feature map like we did before green points here represent previous students from previous years that pass the class red points represent students that failed the class now if you want to find out if you're gonna pass or fail to class you can also apply yourself on this map you spent you came to four lectures spend five hours on your final project and you want to know if you're going to pass or fail and you want to actually build a neural networks that's going to learn this look at the old the the previous people that took the scores and determine if you all pass or fail as well so let's do it we have two inputs one is four one is five these are fed into a single layered neural network with three hidden units and we see that the final output probability that you will pass this class is 0.1 or 10% not very good that's actually really bad news can anyone guess why this person who actually was in the part of the feature space it looked like they were actually in a good part of this feature space looked like they were gonna pass the class why did this neural network give me such a bad prediction here yeah exactly so the network was not trained essentially this network is like a baby that was just born it has no idea of what lectures are it doesn't know where final labs are it doesn't know anything about this world it's these are just numbers to it it's been randomly initialized it has no idea about the problem so we have to actually train it we have to teach it how to get the right answer so the first thing that we have to do is tell the network when it makes a mistake so that we can correct it in the future now how do we do this in neural networks the loss of a network is actually what defines when the network makes the wrong prediction it takes the input and the predicted output sorry it takes as input the predicted output and the ground truth actual output if your predicted output and your ground truth output are very close to each other then that essentially means that your loss is going to be very low you didn't make a mistake but if your ground truth output is very far away from your predicted output that means that you should have a very high loss you just have a lot of error and your network should correct that so let's assume that we have data not just from one student now but we have data from many many different students passing and failing the class we now care about how this model does not just on that one student but across the entire population of students and we call this the empirical loss and that's just the mean of all of the losses for the individual students we can do it by literally just computing the mean sorry just computing the loss for each of these students and taking their mean when training a network what we really want to do is not minimize the loss for any particular student but we want to minimize the loss across the entire training set so if we go back to our problem on path predicting if you'll pass or fail to class this is a problem of binary classification your output is 0 or 1 we 
already learned that when outputs are 0 or 1 you're probably going to want to use a soft max output for those of you who aren't familiar with cross entropy this was an idea introduced actually at MIT and a master's thesis here over 50 years ago it's widely used in different areas like thermodynamics and we use it here in machine learning as well it's used all over information theory and what this is doing here is essentially computing the loss between this zero one output and the true output that the student either passed or failed to class let's suppose instead of computing a zero one output now we want to compute the actual grade that you will get on the class so now it's not 0-1 but it's actually a grade it could be any number actually right now we want to use a different loss because the output of our net of our neural network is different and defining losses is actually kind of one of the arts in deep learning so you have to define the questions that you're asking so you can define the loss that you need to optimize over so here in this example since we're not optimizing over zero one loss we're optimizing over any real number we're gonna use a mean squared error loss and that's just computing the squared error so you take the difference between what you expect the output to be and what you're actually output was you take that difference you square it and you compute the mean over your entire population okay great so now let's put some of this information together we've learned how to build neural networks we've learned how to quantify their loss now we can learn how to actually use that loss to iteratively update and train the neural network over time given some data and essentially what this amounts to what this boils down to is that we want to find the weights of the neural network W that minimize this empirical loss so remember again the empirical loss is the loss over the entire training set it's the mean loss of all of the popular of all of the individuals in the training set and we want to minimize that loss and that essentially means we want to find the weights the parameterization of the network that results in the minimum loss remember again that W here is just a collection it's just a set of all of the weights in the network so before I define W as W 0 W 1 which is the weights for the first layer second layer third layer etc and you keep stacking all of these weights together you combine them and you want to compute this optimization problem over all of these weights so again remember our loss function what does our loss function look like it's just a simple function that takes as inputs our weights and if we have two weights we can actually visualize it again we can see on the x-axis one way so this is one scaler that we can change and another way on the y axis and on the z axis this is our actual loss if we want to find the lowest point in this landscape that corresponds to the minimum loss and we want to find that point so that we can find the corresponding weights that were set to achieve that minimum loss so how do we do it we use this technique called loss optimization through gradient descent we start by picking an initial point on this landscape an initial w0 w1 so here's this point this black cross we start at this point we compute the gradient at this local point and in this landscape we can see that the gradient tells us the direction of maximal ascent now that we know the direction of the maximal ascent we can reverse that gradient and actually take a small step 
in the opposite direction that moves us closer towards the lowest point because we're taking a greedy approach to move in the opposite direction of the gradient we can iteratively repeat this process over and over and over again we computing the gradient at each time and keep moving moving closer towards that lowest minimum we can summarize this algorithm known as gradient descent in pseudocode by this the pseudocode on the left-hand side we start by initializing our weights randomly computing this gradient DJ DW then updating our weights in the opposite direction of that gradient we used this small amount ADA which you can see here and this is essentially what we call the learning rate this is determining how much of a step we take and how much we trust that with that gradient update that we computed we'll talk more about this later but for now let's take a look at this term here this gradient DJ DW is actually explaining how the lost changes with respect to each of the weights but I never actually told you how to compute this term this is actually a crucial part of deep learning and neural networks in general computing this term is essentially all that matters when you try and optimize your network is the most computational part of training as well and it's known as back propagation we'll start with a very simple network with only one hidden input sorry with one input one hidden layer one handed and unit and one output computing the gradient of our loss with respect to W to corresponds to telling us how much a small change in our and W two affects our output or loss so if we write this as a derivative we can start by computing this by simply expanding this derivative into a chain by using the chain rule backwards from the loss through the output and that looks like this so DJ DW 2 becomes DJ dy dy DW 2 ok and that's just a simple application of the chain rule now let's suppose instead of computing DJ DW 2 we want to compute DJ DW 1 so I've changed the W 1 the W 2 to a W 1 on the left hand side and now we want to compute this well we can simply apply the chain rule again we can take that middle term now expand it out again using the same chain rule and back propagate those gradients even further back in in the network and essentially we keep repeating this for every weight in the network using the gradients for later layers to back propagate those errors back into the original input we do this for all of the weights and and that gives us our gradient for each weight yeah you're completely right so the question is how do you ensure that this gives you a global minimum instead of a local minimum right so you don't we have no guarantees on that this is not a global minimum the whole training of stochastic gradient sent is a greedy optimization algorithm so you're only taking this greedy approach and optimizing only a local minimum there are different ways extensions of stochastic gradient descent that don't take a greedy approach they take an adaptive approach they look around a little bit these are typically more expensive to compute stochastic gradient side is extremely cheap to compute in practice and that's one of the reasons it's used the second reason is that in practice local minimum tend to be sufficient so that's the back propagation algorithm in theory it sounds very simple it's just an application of the chain rule but now let's touch on some insights on training these neural networks in practice that makes it incredibly complex and this gets back to that that previous point that 
previous question that was raised in practice training neural networks is incredibly difficult this is a visualization of the lost landscape of a neural network in practice this is a paper from about a year ago and the authors visualize what a deep neural network lost landscape really looks like you can see many many many local minimum here lot minimizing this loss and finally the optimal true minimum is extremely difficult now recall the update equation that we fought defined for a gradient descent previously we take our weights and we subtract we move towards the negative gradient and we update our weights in that direction I didn't talk too much about this parameter heydo this is what we called the learning rate I briefly touched on it and this is essentially determining how large of a step we take at each iteration in practice setting the learning rate can be extremely difficult and actually very important for making sure that you avoid local minima again so if we set the learning rate to slow then the model may get stuck in local minimum like this it could also converge very slowly even in the case that it gets to a global minimum if we set the learning rate too large the gradients essentially explodes and we diverge from the loss itself and it's also been setting the learning rate to the correct amount can be extremely tedious in practice such that we overshoot some of the local minima get ourselves into a reasonable local global minima and then converge in within that global minima how can we do this in a clever way so one option is that we can try a lot of different possible learning rates see what works best in practice and in practice this is actually a very common technique so a lot of people just try a lot of learning rates and see what works best let's see if we can do something a bit smarter than that as well how about we design an adaptive algorithm that learnt that you that adapts its learning rate according to the lost landscape so this can take into account the gradient at other locations and loss it can take into account how fast we're learning how how large the gradient is at that location or many other options but now since our learning rate is not fixed for all of the iterations of gradient descent we have a bit more flexibility now in learning in fact this has been widely studied as well there are many many different options for optimization schemes that are present in tensorflow and here are examples of some of them during your labs I encourage you to try out different of these different ones of these optimizers and see how they're different which works best which doesn't work so well for your particular problem and they're all adaptive in nature so now I want to continue talking about tips for training these networks in practice and focus on the very powerful idea of batching gradient descent and batching your data in general so to do this let's revisit this idea of gradient descent very quickly so the gradient is actually very computational to compute this back propagation algorithm if you want to compute it for all of the data samples in your training data set which may be massive in modern data sets it's essentially amounting to a summation over all of these data points in most real life problems this is extremely computational and not feasible to compute on every iteration so instead people have come up with this idea of stochastic gradient descent and that involves picking a single point in your data set computing the gradient with respect to that point and 
then using that to update your grade to update your your weights so this is great because now computing a gradient of a single point is much easier than computing the gradient over many points but at the same time since we're only looking at one point this can be extremely noisy sure we take a different point each time but still when we move and we take a step in that direction of that point we may be going in in a step that's not necessarily representative of the entire data set so is there a middle ground such that we don't have to have a stochastic a stochastic gradient but we can still be kind of computationally efficient in the sense so instead of computing a noisy gradient of a single point let's get a better estimate by batching our data into mini batches of B data points capital B data points so now this gives us an estimate of the true gradient by just averaging the gradient from each of these points this is great because now it's much easier to compute than full gradient descent it's a lot less points typically B is on the order of less than 100 or approximately in that range and it's a lot more accurate than stochastic gradient descent because you're considering a larger population as well this increase in gradient accuracy estimation actually allows us to converge much quicker as well because it means that we can increase our learning rate and trust our gradient more with each step which ultimately means that we can train faster this allows for massively parallel lyza become potations because we can split up batches across the GPU send batches all over the GPU compute their gradients simultaneously and then aggregate them back to even speed up even further now the last topic I want to address before ending is this idea of overfitting this is one of the most fundamental problems in machine learning as a whole not just deep learning and at its core it involves understanding the complexity of your model so you want to build a model that performs well and generalized as well not just to your training set but to your test set as well assume that you want to build a model that describes these points you can go on the left-hand side which is just a line fitting a line through these points this is under fitting the complexity of your model is not large enough to really learn the full complexity of the data or you can go on the right-hand side which is overfitting where you're essentially building a very complex model to essentially memorize the data and this is not useful either because when you show a new data it's not going to sense it's not going to perfectly match on the training data and it means that you're going to have high generalization error ideally we want to end up with a model in the middle that is not too complex to memorize all of our training data but still able to generalize and perform well even we have when we have brand new training and testing inputs so to address this problem let's talk about regularization for deep neural networks deep neural regularization is a technique that you can introduce to your networks that will discourage complex models from being learned and as before we've seen that it's crucial for our models to be able to generalize to data beyond our training set but also to generate generalize to data in our testing set as well the most popular regularization technique in deep learning is a very simple idea called dropout let's revisit this and a picture of a deep neural network again and drop out during training we randomly set some of our 
activations of the hidden neurons to 0 with some probability that's why we call it dropping out because we're essentially killing off those neurons so let's do that so we kill off these random sample of neurons and now we've created a different pathway through the network let's say that you dropped 50 percent of the neurons this means that those activations are set to zero and the network is not going to rely too heavily on any particular path through the network but it's instead going to find a whole ensemble of different paths because it doesn't know which path is going to be dropped out at any given time we repeat this process on every training iteration now dropping out a new set of 50 50 % of the neurons and the result of this is essentially a model that like I said creates an ensemble of multiple models through the paths of the network and is able to generalize better to unseen test data so the second technique for a regularization is this notion that we'll talk about which is early stopping and the idea here is also extremely simple let's train our neural network like before no dropout but let's just stop training before we have a chance to overfit so we start training and the definition of overfitting is just when our model starts to perform worse on the test set then on the training set so we can start off and we can plot how our loss is going for both the training and test set we can see that both are decreasing so we keep training now we can see that the training the validation both losses are kind of starting to plateau here we can keep going the training loss is always going to decay it's always going to keep decreasing because especially if you have a network that is having such a large capacity to essentially memorize your data you can always perfectly get a training accuracy of 0 that's not always the case but in a lot of times with deep neural networks since they're so expressive and have so many weights they're able to actually memorize the data if you let them train for too long if we keep training like you see the training set continues to decrease now the validation set starts to increase and if we keep doing this the trend continues the idea of early stopping is essentially that we want to focus on this point here and stop training when we reach this point so we can key basically records of the model during training and once we start to detect overfitting we can just stop and take that last model that was still occurring before the overfitting happened right so on the left hand side you can see the under fitting you don't want to stop too early you want to let the model get the minimum validation set accuracy but also you don't want to keep training such that the validation accuracy starts the increase on the other end as well so I'll conclude this first lecture by summarizing three key points that we have covered so far first we learned about the fundamental building blocks of deep learning which is just a single neuron or called the perceptron we learned about back propagation how to stack these neurons into complex deep neural networks how to back propagate and errors through them and learn complex loss functions and finally we discussed some of the practical details and tricks to training neural networks that are really crucial today if you want to work in this field such as batching regularization and and others so now I'll take any questions or if there are no questions and I'm gonna hand the mic over to ovah who will talk about sequence modeling thank you 
[Applause] |
MIT_6S191_Introduction_to_Deep_Learning | MIT_6S191_2021_Deep_Generative_Modeling.txt | Hi everyone and welcome to lecture 4 of MIT 6.S191! In today's lecture we're going to be talking about how we can use deep learning and neural networks to build systems that not only look for patterns in data but actually can go a step beyond this to generate brand new synthetic examples based on those learned patterns and this i think is an incredibly powerful idea and it's a particular subfield of deep learning that has enjoyed a lot of success and and gotten a lot of interest in the past couple of years but i think there's still tremendous tremendous potential of this field of degenerative modeling in the future and in the years to come particularly as we see these types of models and the types of problems that they tackle becoming more and more relevant in a variety of application areas all right so to get started i'd like to consider a quick question for each of you here we have three photos of faces and i want you all to take a moment look at these faces study them and think about which of these faces you think is real is it the face on the left is it the face in the center is it the face on the right which of these is real well in truth each of these faces are not real they're all fake these are all images that were synthetically generated by a deep neural network none of these people actually exist in the real world and hopefully i think you all have appreciated the realism of each of these synthetic images and this to me highlights the incredible power of deep generative modeling and not only does it highlight the power of these types of algorithms and these types of models but it raises a lot of questions about how we can consider the fair use and the ethical use of such algorithms as they are being deployed in the real world so by setting this up and motivating in this way i first i now like to take a step back and consider fundamentally what is the type of learning that can occur when we are training neural networks to perform tasks such as these so so far in this course we've been considering what we call supervised learning problems instances in which we are given a set of data and a set of labels associated with that data and we our goal is to learn a functional mapping that moves from data to labels and those labels can be class labels or continuous values and in this course we've been concerned primarily with developing these functional mappings that can be described by deep neural networks but at their core these mappings could be anything you know any sort of statistical function the topic of today's lecture is going to focus on what we call unsupervised learning which is a new class of learning problems and in contrast to supervised settings where we're given data and labels in unsupervised learning we're given only data no labels and our goal is to train a machine learning or deep learning model to understand or build up a representation of the hidden and underlying structure in that data and what this can do is it can allow sort of an insight into the foundational structure of the data and then in turn we can use this understanding to actually generate synthetic examples and unsupervised learning beyond this domain of deep generative modeling also extends to other types of problems and example applications which you may be familiar with such as clustering algorithms or dimensionality reduction algorithms generative modeling is one example of unsupervised learning and our goal in 
this case is to take as input examples from a training set and learn a model that represents the distribution of the data that is input to that model and this can be achieved in two principle ways the first is through what is called density estimation where let's say we are given a set of data samples and they fall according to some density the task for building a deep generative model applied to these samples is to learn the underlying probability density function that describes how and where these data fall along this distribution and we can not only just estimate the density of such a probability density function but actually use this information to generate new synthetic samples where again we are considering some input examples that fall and are drawn from some training data distribution and after building up a model using that data our goal is now to generate synthetic examples that can be described as falling within the data distribution modeled by our model so the key the key idea in both these instances is this question of how can we learn a probability distribution using our model which we call p model of x that is so similar to the true data distribution which we call p data of x this will not only enable us to effectively estimate these probability density functions but also generate new synthetic samples that are realistic and match the distribution of the data we're considering so this this i think summarizes concretely what are the key principles behind generative modeling but to understand how generative modeling may be informative and also impactful let's take this this idea a step further and consider what could be potential impactful applications and real world use cases of generative modeling what generative models enable us as the users to do is to automatically uncover the underlying structure and features in a data set the reason this can be really important and really powerful is often we do not know how those features are distributed within a particular data set of interest so let's say we're trying to build up a facial detection classifier and we're given a data set of faces which for which we may not know the exact distribution of these faces with respect to key features like skin tone or pose or clothing items and without going through our data set and manually ex inspecting each of these instances our training data may actually be very biased with respect to some of these features without us even knowing it and as you'll see in this lecture and in today's lab what we can actually do is train generative models that can automatically learn the landscape of the features in a data set like these like that of faces and by doing so actually uncover the regions of the training distribution that are underrepresented and over represented with respect to particular features such as skin tone and the reason why this is so powerful is we can actually now use this information to actually adjust how the data is sampled during training to ultimately build up a more fair and more representative data set that then will lead to a more fair and unbiased model and you'll get practice doing exactly this and implementing this idea in today's lab exercise another great example in use case where generative models are exceptionally powerful is this broad class of problems that can be considered outlier or anomaly detection one example is in the case of self-driving cars where it's going to be really critical to ensure that an autonomous vehicle governed and operated by a deep neural 
network is able to handle all all of the cases that it may encounter on the road not just you know the straight freeway driving that is going to be the majority of the training data and the majority of the time the car experiences on the road so generative models can actually be used to detect outliers within training distributions and use this to again improve the training process so that the resulting model can be better equipped to handle these edge cases and rare events all right so hopefully that motivates why and how generative models can be exceptionally powerful and useful for a variety of real world applications to dive into the bulk of the technical content for today's lecture we're going to discuss two classes of what we call latent variable models specifically we'll look at autoencoders and generative adversarial networks organs but before we get into that i'd like to first begin by discussing why these are called latent variable models and what we actually mean when we use this word latent and to do so i think really the best example that i've personally come across for understanding what a latent variable is is this story that is from plato's work the republic and this story is called the myth of the cave or the parable of the cave and the story is as follows in this myth there are a group of prisoners and these prisoners are constrained as part of their prison punishment to face a wall and the only things that they can see on this wall are the shadows of particular objects that are being passed in front of a fire that's behind them so behind their heads and out of their line of sight and the prisoners the only thing they're really observing are these shadows on the wall and so to them that's what they can see that's what they can measure and that's what they can give names to that's really their reality these are their observed variables but they can't actually directly observe or measure the physical objects themselves that are actually casting these shadows so those objects are effectively what we can analyze like latent variables they're the variables that are not directly observable but they're the true explanatory factors that are creating the observable variables which in this case the prisoners are seeing like the shadows cast on the wall and so our question in in generative modeling broadly is to find ways of actually learning these underlying and hidden latent variables in the data even when we're only given the observations that are made and it's this is an extremely extremely complex problem that is very well suited to learning by neural networks because of their power to handle multi-dimensional data sets and to learn combinations of non-linear functions that can approximate really complex data distributions all right so we'll first begin by discussing a simple and foundational generative model which tries to build up this latent variable representation by actually self-encoding the input and these models are known as auto encoders what an auto encoder is is it's an approach for learning a lower dimensional latent space from raw data to understand how it works what we do is we feed in as input raw data for example this image of a two that's going to be passed through many successive deep neural network layers and at the output of that succession of neural network layers what we're going to generate is a low dimensional latent space a feature representation and that's really the goal that we're we're trying to predict and so we can call this portion of the network 
an encoder since it's mapping the data x into a encoded vector of latent variables z so let's consider this this latent space z if you've noticed i've represented z as having a smaller size a smaller dimensionality as the input x why would it be important to ensure the low dimensionality of this latent space z having a low dimensional latent space means that we are able to compress the data which in the case of image data can be you know on the order of many many many dimensions we can compress the data into a small latent vector where we can learn a very compact and rich feature representation so how can we actually train this model are we going to have are we going to be able to supervise for the particular latent variables that we're interested in well remember that this is an unsupervised problem where we have training data but no labels for the latent space z so in order to actually train such a model what we can do is learn a decoder network and build up a decoder network that is used to actually reconstruct the original image starting from this lower dimensional latent space and again this decoder portion of our auto encoder network is going to be a series of layers neural network layers like convolutional layers that's going to then take this hidden latent vector and map it back up to the input space and we call our reconstructed output x hat because it's our prediction and it's an imperfect reconstruction of our input x and the way that we can actually train this network is by looking at the original input x and our reconstructed output x hat and simply comparing the two and minimizing the distance between these two images so for example we could consider the mean squared error which in the case of images means effectively subtracting one image from another and squaring the difference right which is effectively the pixel wise difference between the input and reconstruction measuring how faithful our reconstruction is to the original input and again notice that by using this reconstruction loss this difference between the reconstructed output and our original input we do not require any labels for our data beyond the data itself right so we can simplify this diagram just a little bit by abstracting away these individual layers in the encoder and decoder components and again note once again that this loss function does not require any labels it is just using the raw data to supervise itself on the output and this is a truly powerful idea and a transformative idea because it enables the model to learn a quantity the latent variables z that we're fundamentally interested in but we cannot simply observe or cannot readily model and when we constrain this this latent space to a lower dimensionality that affects the degree to which and the faithfulness to which we can actually reconstruct the input and what this you the way you can think of this is as imposing a sort of information bottleneck during the model's training and learning process and effectively what this bottleneck does is it's a form of compression right we're taking the input data compressing it down to a much smaller latent space and then building back up a reconstruction and in practice what this results in is that the lower the dimensionality of your latent space the poorer and worse quality reconstruction you're going to get out all right so in summary these autoencoder structures use this sort of bottlenecking hidden layer to learn a compressed latent representation of the data and we can self-supervise the training of 
this network by using what we call a reconstruction loss that forces the forces the autoencoder network to encode as much information about the data as possible into a lower dimensional latent space while still being able to build up faithful reconstructions so the way i like to think of this is automatically encoding information from the data into a lower dimensional latent space let's now expand upon this idea a bit more and introduce this concept and architecture of variational auto encoders or vaes so as we just saw traditional auto encoders go from input to reconstructed output and if we pay closer attention to this latent layer denoted here in orange what you can hopefully realize is that this is just a normal layer in a neural network just like any other layer it's deterministic if you're going to feed in a particular input to this network you're going to get the same output so long as the weights are the same so effectively a traditional auto encoder learns this deterministic encoding which allows for reconstruction and reproduction of the input in contrast variational auto encoders impose a stochastic or variational twist on this architecture and the idea behind doing so is to generate smoother representations of the input data and improve the quality of the of not only of reconstructions but also to actually generate new images that are similar to the input data set but not direct reconstructions of the input data and the way this is achieved is that variational autoencoders replace that deterministic layer z with a stochastic sampling operation what this means is that instead of learning the latent variables z directly for each variable the variational autoencoder learns a mean and a variance associated with that latent variable and what those means and variances do is that they parameterize a probability distribution for that latent variable so what we've done in going from an autoencoder to a variational autoencoder is going from a vector of latent variable z to learning a vector of means mu and a vector of variances sigma sigma squared that parametrize these variables and define probability distributions for each of our latent variables and the way we can actually generate new data instances is by sampling from the distribution defined by these muse and sigmas to to generate a latent sample and get probabilistic representations of the latent space and what i'd like you to appreciate about this network architecture is that it's very similar to the autoencoder i previously introduced just that we have this probabilistic twist where we're now performing the sampling operation to compute samples from each of the latent variables all right so now because we've introduced this sampling operation this stochasticity into our model what this means for the actual computation and learning process of the network the encoder and decoder is that they're now probabilistic in their nature and the way you can think of this is that our encoder is going to be trying to learn a probability distribution of the latent space z given the input data x while the decoder is going to take that learned latent representation and compute a new probability distribution of the input x given that latent distribution z and these networks the encoder the decoder are going to be defined by separate sets of weights phi and theta and the way that we can train this variational autoencoder is by defining a loss function that's going to be a function of the data x as well as these sets of weights phi and theta and 
what's key to how vaes can be optimized is that this loss function is now comprised of two terms instead of just one we have the reconstruction loss just as before which again is going to capture the this difference between the input and the reconstructed output and also a new term to our loss which we call the regularization loss also called the vae loss and to take a look in more detail at what each of these loss terms represents let's first emphasize again that our overall loss function is going to be defined and uh taken with respect to the sets of weights of the encoder and decoder and the input x the reconstruction loss is very similar to before right and you can think of it as being driven by log likelihood a log likelihood function for example for image data the mean squared error between the input and the output and we can self-supervise the reconstruction loss just as before to force the latent space to learn and represent faithful representations of the input data ultimately resulting in faithful reconstructions the new term here the regularization term is a bit more interesting and completely new at this stage so we're going to dive in and discuss it further in a bit more detail so our probability distribution that's going to be computed by our encoder q phi of z of x is a distribution on the latent space z given the data x and what regularization enforces is that as a part of this learning process we're going to place a prior on the latent space z which is effectively some initial hypothesis about what we expect the distributions of z to actually look like and by imposing this regularization term what we can achieve is that the model will try to enforce the zs that it learns to follow this prior distribution and we're going to denote this prior as p of z this term here d is the regularization term and what it's going to do is it's going to try to enforce a minimization of the divergence or the difference between what the encoder is trying to infer the probability distribution of z given x and that prior that we're going to place on the latent variables p of z and the idea here is that by imposing this regularization factor we can try to keep the network from overfitting on certain parts of the latent space by enforcing the fact that we want to encourage the latent variables to adopt a distribution that's similar to our prior so we're going to go through now you know both the mathematical basis for this regularization term as well as a really intuitive walk through of what regularization achieves to help give you a concrete understanding and an intuitive understanding about why regularization is important and why placing a prior is important so let's first consider um yeah so to re-emphasize once again this this regularization term is going to consider the divergence between our inferred latent distribution and the fixed prior we're going to place so before we to to get into this let's consider what could be a good choice of prior for each of these latent uh variables how do we select p of z i'll first tell you what's commonly done the common choice that's used very extensively in the community is to enforce the latent variables to roughly follow normal gaussian distributions which means that they're going to be a normal distribution centered around mean 0 and have a standard deviation and variance of 1. 
by placing these normal gaussian priors on each of the latent variables and therefore on our latent distribution overall what this encourages is that the learned encodings learned by the encoder portion of our vae are going to be sort of distributed evenly around the center of each of the latent variables and if you can imagine in picture when you have sort of a roughly even distribution around the center of a particular region of the latent space what this means is that outside of this region far away there's going to be a greater penalty and this can result in instances from instances where the network is trying to cheat and try to cluster particular points outside the center these centers in the latent space like if it was trying to memorize particular outliers or edge cases in the data after we place a normal gaussian prior on our latent variables we can now begin to concretely define the regularization term component of our loss function this loss this term to the loss is very similar in principle to a cross-entropy loss that we saw before where the key is that we're going to be defining the distance function that describes the difference or the or the divergence between the inferred latent distribution q phi of z given x and the prior that we're going to be placing p of z and this term is called the kublac libor or kl divergence and when we choose a normal gaussian prior we res this results in the kl divergence taking this particular form of this equation here where we're using the means and sigmas as input and computing this distance metric that captures the divergence of that learned latent variable distribution from the normal gaussian all right so now i really want to spend a bit of time to get some build up some intuition about how this regularization and works and why we actually want to regularize our vae and then also why we select a normal prior all right so to do this let's let's consider the following question what properties do we want this to achieve from regularization why are we actually regularizing our our network in the first place the first key property that we want for a generative model like a vae is what i can what i like to think of as continuity which means that if there are points that are represented closely in the latent space they should also result in similar reconstructions similar outputs similar content after they are decoded you would expect intuitively that regions in the latent space have some notion of distance or similarity to each other and this indeed is a really key property that we want to achieve with our generative model the second property is completeness and it's very related to continuity and what this means is that when we sample from the latent space to decode the latent space into an output that should result in a meaningful reconstruction and meaningful sampled content that is you know resembling the original data distribution you can imagine that if we're sampling from the latent space and just getting garbage out that has no relationship to our input this could be a huge huge problem for our model all right so with these two properties in mind continuity and completeness let's consider the consequences of what can occur if we do not regularize our model well without regularization what could end up happening with respect to these two properties is that there could be instances of points that are close in latent space but not similarly decoded so i'm using this really intuitive in illustration where these dots represent abstracted 
away sort of regions in the latent space and the shapes that they relate to you can think of as what is going to be decoded after those uh instances in the latent space are passed through the decoder so in this example we have these two dots the greenish dot and the reddish dot that are physically close in latent space but result in completely different shapes when they're decoded we also have an instance of this purple point which when it's decoded it doesn't result in a meaningful content it's just a scribble so by not regularizing and i'm abstracting a lot away here and that's on purpose we could have these instances where we don't have continuity and we don't have completeness therefore our goal with regularization is to be able to realize a model where points that are close in the latent space are not only similarly decoded but also meaningfully decoded so for example here we have the red dot and the orange dot which result in both triangle like shapes but with slight variations on the on the triangle itself so this is the intuition about what regularization can enable us to achieve and what are desired properties for these generative models okay how can we actually achieve this regularization and how does the normal prior fit in as i mentioned right vaes they don't just learn the latent variable z directly they're trying to encode the inputs as distributions that are defined by mean and variance so my first question to you is is it going to be sufficient to just learn mean and variance learn these distributions can that guarantee continuity and completeness no and let's understand why all right without any sort of regularization what could the model try to resort to remember that the vae or or that the vae the loss function is defined by both a reconstruction term and a regularization term if there is no regularization you can bet that the model is going to just try to optimize that reconstruction term so it's effectively going to learn to minimize the reconstruction loss even though we're encoding the latent variables via mean and variance and two instances two consequences of that is that you can have instances where these learned variances for the latent variable end up being very very very small effectively resulting in pointed distributions and you can also have means that are totally divergent from each other which result in discontinuities in the latent space and this can occur while still trying to optimize that reconstruction loss direct consequence of not regularizing by in order to overcome these pro these problems we need to regularize the variance and the mean of these distributions that are being returned by the encoder and the normal prior placing that normal gaussian distribution as our prior helps us achieve this and to understand why exactly this occurs is that effectively the normal prior is going to encourage these learned latent variable distributions to overlap in latent space recall right mean zero variance of one that means all the all the latent variables are going to be enforced to try to have the same mean a centered mean and all their variances are going to be regularized for each and every of the latent variable distributions and so this will ensure a smoothness and a regularity and an overlap in the lane space which will be very effective in helping us achieve these properties of continuity and completeness centering the means regularizing the variances so the regularization via this normal prior by centering each of these latent variables regularizing 
their their variances is that it helps enforce this continuous and complete gradient of information represented in the latent space where again points and distances in the latent space have some relationship to the reconstructions and the content of the reconstructions that result note though that there's going to be a trade-off between regularizing and reconstructing the more we regularize there's also a risk of suffering the quality of the reconstruction and the generation process itself so in optimizing gaze there's going to be this trade-off that's going to try to be tuned to fit the problem of interest all right so hopefully by walking through this this example and considering these points you've built up more intuition about why regularization is important and how specifically the normal prior can help us regularize great so now we've defined our loss function we know that we can reconstruct the inputs we've understood how we can regularize learning and achieve continuity and completeness by this normal prior these are all the components that define a forward pass through the network going from input to encoding to decoded reconstruction but we're still missing a critical step in putting the whole picture together and that's of back propagation and the key here is that because of this fact that we've introduced this stochastic sampling layer we now have a problem where we can't back propagate gradients through a sampling layer that has this element of stochasticity backpropagation requires deterministic nodes deterministic layers for which we can iteratively apply the chain rule to optimize gradients optimize the loss via gradient descent all right vaes introduced sort of a breakthrough idea that solved this issue of not being able to back propagate through a sampling layer and the key idea was to actually subtly re-parametrize the sampling operation such that the network could be trained completely end-to-end so as we as we already learned right we're trying to build up this latent distribution defined by these variables z uh defining placing a normal prior defined by a mean and a variance and we can't simply back propagate gradients through the sampling layer because we can't compute gradients through this stochastic sample the key idea instead is to try to consider the sampled latent vector z as a sum defined by a fixed mu a fixed sigma vector and scale that sigma vector by random constants that are going to be drawn from a prior distribution such as a normal gaussian and by reparameterizing the sampling operation as as so we still have this element of stochasticity but that stochasticity is introduced by this random constant epsilon which is not occurring within the bottleneck latent layer itself we've reparametrized and distributed it elsewhere to visualize how this looks let's consider the following where originally in the original form of the vae we had this deterministic nodes which are the weights of the network as well as an input vector and we are trying to back propagate through the stochastic sampling node z but we can't do that so now by re-parametrization what we've we've achieved is the following form where our latent variable z are defined with respect to uh mu sigma squared as well as these noise factor epsilon such that when we want to do back propagation through the network to update we can directly back propagate through z defined by mu and sigma squared because this epsilon value is just taken as a constant it's re-parametrized elsewhere and this is a very very 
powerful trick the re-parametrization trick because it enables us to train variational auto encoders and to end by back propagating with respect to z and with respect to the actual gradient the actual weights of the encoder network all right one side effect and one consequence of imposing these distributional priors on the latent variable is that we can actually sample from these latent variables and individually tune them while keeping all of the other variables fixed and what you can do is you can tune the value of a particular latent variable and run the decoder each time that variable is changed each time that variable is perturbed to generate a new reconstructed output so an example of that result is is in the following where this perturbation of the latent variables results in a representation that has some semantic meaning about what the network is maybe learning so in this example these images show variation in head pose and the different dimensions of z the latent space the different latent variables are in this way encoding different latent features that can be interpreted by keeping all other variables fixed and perturbing the value of one individual lane variable ideally in order to optimize vas and try to maximize the information that they encode we want these latent variables to be uncorrelated with each other effectively disentangled and what that could enable us to achieve is to learn the richest and most compact latent representation possible so in this case we have head pose on the x-axis and smile on the y-axis and we want these to be as uncorrelated with each other as possible one way we can achieve this that's been shown to achieve this disentanglement is rather a quite straightforward approach called beta vaes so if we consider the loss of a standard vae again we have this reconstruction term defined by a log likelihood and a regularization term defined by the kl divergence beta vaes introduce a new hyperparameter beta which controls the strength of this regularization term and it's been shown mathematically that by increasing beta the effect is to place constraints on the latent encoding such as to encourage disentanglement and there have been extensive proofs and discussions as to how exactly this is achieved but to consider the results let's again consider the problem of face reconstruction where using a standard vae if we consider the latent variable of head pose or rotation in this case where beta equals one what you can hopefully appreciate is that as the face pose is changing the smile of some of these faces is also changing in contrast by en enforcing a beta much larger than one what is able to be achieved is that the smile remains relatively constant while we can perturb the single latent variable of the head of rotation and achieve perturbations with respect to head rotation alone all right so as i motivated and introduced in the beginning in the introduction of this lecture one powerful application of generative models and latent variable models is in model d biasing and in today's lab you're actually going to get real hands-on experience in building a variational auto encoder that can be used to achieve automatic de-biasing of facial classification systems facial detection systems and the power and the idea of this approach is to build up a representation a learned latent distribution of face data and use this to identify regions of that latent space that are going to be over-represented or under-represented and that's going to all be taken with respect to 
particular learned features such as skin tone pose objects clothing and then from these learned distributions we can actually adjust the training process such that we can place greater weight and greater sampling on those images and on those faces that fall in the regions of the latent space that are under represented automatically and what's really really cool about deploying a vae or a latent variable model for an application like model d biasing is that there's no need for us to annotate and prescribe the features that are important to actually devise against the model learns them automatically and this is going to be the topic of today's lab and it's also raises the opens the door to a much broader space that's going to be explored further in a later spotlight lecture that's going to focus on algorithmic bias and machine learning fairness all right so to summarize the key points on vaes they compress representation of data into an encoded representation reconstruction of the data input allows for unsupervised learning without labels we can use the reparameterization trick to train vas end to end we can take hidden latent variables perturb them to interpret their content and their meaning and finally we can sample from the latent space to generate new examples but what if we wanted to focus on generating samples and synthetic samples that were as faithful to a data distribution generally as possible to understand how we can achieve this we're going to transition to discuss a new type of generative model called a generative adversarial network or gam for short the idea here is that we don't want to explicitly model the density or the or the distribution underlying some data but instead just learn a representation that can be successful in generating new instances that are similar to the data which means that we want to optimize to sample from a very very complex distribution which cannot be learned and modeled directly instead we're going to have to build up some approximation of this distribution and the really cool and and breakthrough idea of gans is to start from something extremely extremely simple just random noise and try to build a neural network a generative neural network that can learn a functional transformation that goes from noise to the data distribution and by learning this functional generative mapping we can then sample in order to generate fake instances synthetic instances that are going to be as close to the real data distribution as possible the breakthrough to achieving this was this structure called gans where the key component is to have two neural networks a generator network and a discriminator network that are effectively competing against each other they're adverse areas specifically we have a generator network which i'm going to denote here on out by g that's going to be trained to go from random noise to produce an imitation of the data and then the discriminator is going to take that synthetic fake data as well as real data and be trained to actually distinguish between fake and real and in training these two networks are going to be competing each other competing against each other and so in doing so overall the effect is that the discriminator is going to get better and better at learning how to classify real and fake and the better it becomes at doing that it's going to force the generator to try to produce better and better synthetic data to try to fool the discriminator back and forth back and forth so let's now break this down and go from a very 
simple toy example to get more intuition about how these gans work the generator is going to start again from some completely random noise and produce fake data and i'm going to show that here by representing these data as points on a one-dimensional line the discriminator is then going to see these points as well as real data and then it's going to be trained to output a probability that the data it sees are real or if they are fake and in the beginning it's not going to be trained very well right so its predictions are not going to be very good but then you're going to train it and you're going to train it and it's going to start increasing the predicted probabilities of real versus not real appropriately such that you get this perfect separation where the discriminator is able to perfectly distinguish what is real and what is fake now it's back to the generator and the generator is going to come back it's going to take instances of where the real data lie as inputs to train and then it's going to try to improve its imitation of the data trying to move the fake data the synthetic data that is generated closer and closer to the real data and once again the discriminator is now going to receive these new points and it's going to estimate a probability that each of these points is real and again learn to decrease the probability of the fake points being real further and further and now we're going to repeat again and one last time the generator is going to start moving these fake points closer and closer to the real data such that the fake data are almost following the distribution of the real data at this point it's going to be really really hard for the discriminator to effectively distinguish between what is real and what is fake while the generator is going to continue to try to create fake data instances to fool the discriminator and this is really the key intuition behind how these two components of gans are essentially competing with each other all right so to summarize how we train gans the generator is going to try to synthesize fake instances to fool the discriminator which is going to be trained to identify the synthesized instances and discriminate these as fake to actually train we're going to see that we are going to define a loss function that defines competing and adversarial objectives for each of the discriminator and the generator and at a global optimum the best we could possibly do would mean that the generator could perfectly reproduce the true data distribution such that the discriminator absolutely cannot tell what's synthetic versus what's real so let's go through how the loss function for a gan breaks down the loss term for a gan is based on that familiar cross-entropy loss and it's going to now be defined between the true and generated distributions so we're first going to consider the loss from the perspective of the discriminator we want to try to maximize the probability that the fake data is identified as fake and so to break this down here g of z defines the generator's output and so d of g of z is the discriminator's estimate of the probability that a fake instance is actually fake d of x is the discriminator's estimate of the probability that a real instance is fake so one minus d of x is its probability estimate that a real instance is real so together from the point of view of the discriminator we want to maximize this probability maximize the probability that fake is fake and maximize the estimate of the probability that real is real now let's turn our attention to the generator
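to make this adversarial objective concrete, here is a minimal sketch of a single gan training step in tensorflow; the network architectures, latent dimension, optimizer settings, and the label convention (discriminator trained toward 1 for real inputs and 0 for fakes, which is the common practical form and may flip signs relative to the convention described above) are illustrative assumptions rather than anything specified in the lecture, and the generator update included here is the part unpacked in the discussion that follows

```python
import tensorflow as tf

latent_dim = 100

# hypothetical toy architectures; any generator/discriminator with compatible
# shapes would do (real_images are assumed to be flattened 28x28 images in [0, 1])
generator = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(784, activation="sigmoid"),
])
discriminator = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(1),   # logit; higher means "more likely real"
])

bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)
g_opt = tf.keras.optimizers.Adam(1e-4)
d_opt = tf.keras.optimizers.Adam(1e-4)

@tf.function
def train_step(real_images):
    z = tf.random.normal((tf.shape(real_images)[0], latent_dim))
    with tf.GradientTape() as d_tape, tf.GradientTape() as g_tape:
        fake_images = generator(z, training=True)
        real_logits = discriminator(real_images, training=True)
        fake_logits = discriminator(fake_images, training=True)
        # discriminator: push real toward label 1 and fake toward label 0
        d_loss = bce(tf.ones_like(real_logits), real_logits) + \
                 bce(tf.zeros_like(fake_logits), fake_logits)
        # generator: try to get its fakes classified as real (label 1)
        g_loss = bce(tf.ones_like(fake_logits), fake_logits)
    d_grads = d_tape.gradient(d_loss, discriminator.trainable_variables)
    g_grads = g_tape.gradient(g_loss, generator.trainable_variables)
    d_opt.apply_gradients(zip(d_grads, discriminator.trainable_variables))
    g_opt.apply_gradients(zip(g_grads, generator.trainable_variables))
    return d_loss, g_loss
```

written this way the two cross-entropy terms in d_loss are the discriminator's side of the objective and g_loss is the generator's side, which is exactly the competition walked through next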
remember that the generator is taking random noise and generating an instance it cannot directly affect the term d of x which shows up in the loss right because d of x is solely based on the discriminator's operation on the real data so for the generator the generator is going to have the adversarial objective to the discriminator which means is going to try to minimize this term effectively minimizing the probability that the discriminator can distinguish its generated data as as uh fake d of g of z and the goal for the generator is to minimize this term of the objective so the objective of the generator is to try to synthesize fake instances that fool the discriminator and eventually over the course of training the discriminator the discriminator is going to be as best as it possibly can be discriminating real versus fake therefore the ultimate goal of the generator is to synthesize fake instances that fool the best discriminator and this is all put together in this min max objective function which has these two components optimized adversarially and then after training we can actually use the generator network which is now fully trained to produce new data instances that have never been seen before so we're going to focus on that now and what is really cool is that when the train generator of a gam synthesizes new instances it's effectively learning a transformation from a distribution of noise to a target data distribution and that transformation that mapping is going to be what's learned over the course of training so if we consider one point from a latent noise distribution it's going to result in a particular output in the target data space and if we consider another point of random noise feed it through the generator it's going to result in a new instance that and that new instance is going to fall somewhere else on the data manifold and indeed what we can actually do is interpolate and trans and traverse in the space of gaussian noise to result in interpolation in the target space and you can see an example of this result here where a transformation in series reflects a traversal across the alert the target data manifold and that's produced in the synthetic examples that are outputted by the generator all right so in the final few minutes of this lecture i'm going to highlight some of the recent advances in gans and hopefully motivate even further why this approach is so powerful so one idea that's been extremely extremely powerful is this idea of progressive gans progressive growing which means that we can iteratively build more detail into the generated instances that are produced and this is done by progressively adding layers of increasing spatial resolution in the case of image data and by incrementally building up both the generator and discriminator networks in this way as training progresses it results in very well resolved synthetic images that are output ultimately by the generator so some results of this idea of progressive a progressive gan are displayed here another idea that has also led to tremendous improvement in the quality of synthetic examples generated by gans is a architecture improvement called stylegan which combines this idea of progressive growing that i introduced earlier with a principles of style transfer which means trying to compose an image in the style of another image so for example what we can now achieve is to map input images source a using application of coarse grained styles from secondary sources onto those targets to generate new instances 
that mimic the style of of source b and that's that result is shown here and hopefully you can appreciate that these coarse-grained features these coarse-grained styles like age facial structure things like that can be reflected in these synthetic examples this same style gan system has led to tremendously realistic synthetic images in the areas of both face synthesis as well as for animals other objects as well another extension to the gan architecture that's has enabled particularly powerful applications for select problems and tasks is this idea of conditioning which imposes a bit of additional further structure on the types of outputs that can be synthesized by again so the idea here is to condition on a particular label by supplying what is called a conditioning factor denoted here as c and what this allows us to achieve is instances like that of paired translation in the case of image synthesis where now instead of a single input as training data for our generator we have pairs of inputs so for example here we consider both a driving scene and a corresponding segmentation map to that driving scene and the discriminator can in turn be trained to classify fake and real pairs of data and again the generator is going to be learned to going to be trained to try to fool the discriminator example applications of this idea are seen as follows where we can now go from a input of a semantic segmentation map to generate a synthetic street scene mapping that mapping um according to that segmentation or we can go from an aerial view from a satellite image to a street map view or from particular labels of an architectural building to a synthetic architectural facade or day to night black and white to color edges to photos different instances of paired translation that are achieved by conditioning on particular labels so another example which i think is really cool and interesting is translating from google street view to a satellite view and vice versa and we can also achieve this dynamically so for example in coloring given an edge input the network can be trained to actually synthetically color in the artwork that is resulting from this particular edge sketch another idea instead of pair translation is that of unpaired image to image translation and this is can be achieved by a network architecture called cyclegan where the model is taking as input images from one domain and is able to learn a mapping that translates to another domain without having a paired corresponding image in that other domain so the idea here is to transfer the style and the the distribution from one domain to another and this is achieved by introducing the cyclic relationship in a cyclic loss function where we can go back and forth between a domain x and a domain y and in this system there are actually two generators and two discriminators that are going to be trained on their respective generation and discrimination tasks in this example the cyclogan has been trained to try to translate from the domain of horses to the domain of zebras and hopefully you can appreciate that in this example there's a transformation of the skin of the horse from brown to a zebra-like skin in stripes and beyond this there's also a transformation of the surrounding area from green grass to something that's more brown in the case of the zebra i think to get an intuition about how this cyclogan transformation is going is working let's go back to the idea that conventional gans are moving from a distribution of gaussian noise to some target data 
manifold with cyclegans the goal is to go from a particular data manifold x to another data manifold y and in both cases i think the underlying concept that makes gans so powerful is that they function as very very effective distribution transformers and they can achieve these distribution transformations finally i'd like to consider one additional application that you may be familiar with of using cyclegans and that's to transform speech and to actually use this cyclegan technique to synthesize speech in someone else's voice and the way this is done is by taking a bunch of audio recordings in one voice and audio recordings in another voice and converting those audio waveforms into an image representation which is called a spectrogram we can then train the cyclegan to operate on these spectrogram images to transform representations from voice a to make them appear as if they are from another voice voice b and this is exactly how we did the speech transformation for the synthesis of obama's voice in the demonstration that alexander gave in the first lecture so to inspect this further let's compare side by side the original audio from alexander as well as the synthesized version in obama's voice that was generated using a cyclegan hi everybody and welcome to mit 6s191 the official introductory course on deep learning taught here at mit so notice that the spectrogram that results for obama's voice is actually generated by an operation on alexander's voice effectively learning a domain transformation from alexander's voice domain onto obama's voice domain and the end result is that we create and synthesize something that's more obama-like all right so to summarize hopefully over the course of this lecture you built up understanding of generative modeling and classes of generative models that are particularly powerful in enabling probabilistic density estimation as well as sample generation and with that i'd like to close the lecture and introduce you to the remainder of today's course which is going to focus on our second lab on computer vision specifically exploring this question of de-biasing in facial detection systems and using variational auto encoders to actually achieve an approach for automatic de-biasing of classification systems so i encourage you to come to the class gather town to have your questions on the labs answered and to discuss further with any of us thank you
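as a concrete, hypothetical illustration of the waveform-to-spectrogram step used in that voice conversion pipeline, here is a minimal sketch with scipy; the frame length, hop size, and log-magnitude scaling are arbitrary choices for illustration, not the settings used in the actual demonstration

```python
import numpy as np
from scipy.signal import stft

def log_magnitude_spectrogram(waveform, sample_rate, frame_len=1024, hop=256):
    """Turn a 1-D audio waveform into a log-magnitude spectrogram 'image'."""
    _, _, Z = stft(waveform, fs=sample_rate, nperseg=frame_len,
                   noverlap=frame_len - hop)
    return np.log1p(np.abs(Z))   # shape: (freq_bins, time_frames)

# usage sketch: two unpaired collections of such spectrograms, one per speaker,
# could then serve as the two image domains for a cyclegan-style voice model
# spec_a = log_magnitude_spectrogram(audio_a, 22050)
# spec_b = log_magnitude_spectrogram(audio_b, 22050)
```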
MIT_6S191_Introduction_to_Deep_Learning | MIT_6S191_2023_The_Modern_Era_of_Statistics.txt | thank you hi everyone it's Alexander for the introduction all right very excited to um talk about these modern era of Statistics so you you've heard throughout the lectures probably a lot about the you know deep learning and the technologies that enables this but what I want to talk about I want to say why it works okay so and give you some intuitions about um where we are standing in terms of the theoretical analysis of this system because a lot of people think that machine learning is an ad hoc field and uh deep learning is extremely ad hoc just change a couple of hyperparameters and everything starts working but that's not actually the case so we have some cues like what's what's the whole thing is about okay all right so to motivate I want to tell you about um this uh I think it's a very common photo these days that you can see that uh the mors doubling law got broken after 2012 and we got into kind of a new type of models that as we grow in size on the Y AIS here what you see is a u energy consumption of these models it could be even accuracy you know the accuracy of this models or generalization ability of these models goes higher and higher as uh we scale them okay and and that's the kind of observation that we had so far and that's what I mean by modern era because we are exhaustively uh um increasing their size as this um photo can show you like some scale of these large language models that are out there you can see that we have models up to 540 billion parameters where uh you know it's beyond our understanding how these models perform representation learning and um it's not only in the realm of language model modeling but it's also across like different fields for example in Time series modeling uh in medical diagnosis in financial time series we have new technologies that are getting better and better with size and um this seems to be the case across different data modalities and perhaps the one that Ava was presented before generative modeling we went from in 2014 generative models like that to generative models like this right so the quality drastically improved not only because of the size but also the underlying structures that enabled uh us to scale these neural networks because we already knew that if you have two stack two layers of neural networks next to each other you have a universal approximator right so then why couldn't we scale it now we had to find the right structure in order to do that for example diffusion was one of those structures that ala actually presented so it seems like bigger seems better but why okay let's find out the reason so you all um I assume that you would all know how to solve these two equations so n equations requires n unknown right so how many of you you know still how to solve this kind of thing I yeah so I think it's pretty but then the crazy thing about deep learning is that it comes and says that yeah make the number of these X and Y larger and larger excessively for these two equations and things things would start working even better and what does it mean even better when you have like two equations and you have like multip a lot of unknowns how does that even make sense you know let's do an analysis a numerical analysis of the M data set you you've all seen this data set before it's a hand handwritten digit and as you see like it has 60k points it has theion out of 28 by 28 eight grayscale images and then today's models they have millions 
of parameters to model 60k images so the performance on this data set actually keeps improving as we uh scale the neural networks and how does this make sense like where is the information basically that we learning from 60 image 60k images with millions of data uh uh parameters what are we learning now uh we know that generalization error or test error has this kind of proportion from the theory that uh it's proportional to the number of parameters of the system over the square root of number of parameters of the system over the data set size that means if I increase the number of parameters of my system then I would expect at some point that generalization or the error goes high so then why is it that we make them larger and larger but they work in case of image net another larger scale data set a very common in machine learning we have like 1.4 million uh images of size 256 2563 and then you have uh models with hundreds of millions of parameters that you fit this 1.4 million images in NLP as we showed before we have data points of few billions and then models with 100 billions and then perhaps like the best and my favorite kind of illustration of this size improving per performance in this generative AI when you have a prompt this is like a generative AI that receives text as input and generates output images so the prompt or the input to this system was a portrait photo of a kangaroo wearing an orange hoodie and blue uh sunglasses standing on the grass in front of the Sydney Opera House holding a sign on the chest that says welcome friends right and as we see that on the top of each image we have the size of the model and as we improve the quality of the uh the size SI of the models the quality improves and we're getting closer and closer to that description that we provided um as an input to the system right like the first image actually 350 it still like has all the components of that input but it's not as good as the last one right it misses on a couple of things all right so this happens and let's now figure out a way to explain this okay how many of you have heard about an a phenomena called double descent one two okay good all right so all right so uh this is not the learning curve okay first x-axis shows the model size you have an image classification problem CFR 10 you know and these are 10 classes you want to classify these images and now on the xaxis you are increasing the size of the networks and on the y- AIS we see the test error okay let's just concentrate on the um on the purple uh side of the basically uh process because that's at the end of the training process okay you're looking at the test error at the end of the process so classical statistics was telling us that as you improve as you increase the size of neural networks up to a certain point you find a nominal point where the performance of model the generalization that you have is optimal and from that moment on if you increase the size of the network you're overfitting what that means that means like the the kind of accuracy basically that Bell the first Bell shape that starts going up as you see that part was going and and they project they projected that that thing is going to go up as we actually scale the models but then the phenomena that we observed is that there is a second descent as we scale the size so this is called a double descent phenomenon so the first descent we knew already in the modern era we figured out that as we scale this neural networks the size of the models and this is this has been 
observed across many different architectures okay so as we scale it could be a resonet architecture it could be a convolution on neural network it could be an lsdm or multi percepton or whatever that you you know but as we increase the size that's the kind of phenomenal that we see now let me uh make it a little bit easy easier to understand this was classical statistics in terms of accuracy now in the previous one we were seeing errors now we see accuracy we go high up to a certain point and then as we increase the side in the models we start over fitting now up to a certain point now if you have um the new uh kind of experimental results showed that that's actually the case where up to a certain point we go up and then again there is an improve Improvement in performance this regime is called an overparameterization regime I'll tell you more uh and I'll give you more substance why we call them overparameterized over parameterization regime I mean it should be pretty obvious but I'll put it in the theoretical and the technical kind of language of uh machine learning all right so one of the things that we can observe from this image is that the accuracy at the very end of this overparameterization regime you see that it's slightly better than the first accuracy that you get from a model that is reasonably sized okay it's not that much better the accuracy but then the discovery of deep learning was that when we are in the overparameterization regime we get a new type of behavior starts emerging what kind of behavior the characteristics that emerge in that overparameterization regime is something that I'm going to talk about next so one of the things is um as we go larger and larger in networks sizes they learn Concepts that they can pass on to different tasks that means they can generalize across different tasks that means those Concepts that they learn they can be transferred to actually perform more tasks not even a single task that they have been trained on okay so it seems like you're giving them more ability to generalize and getting closer to those kind of uh a kind of General representation of AI I'm afraid to say AGI yet okay but yeah but that's the kind of thing that we observe the kind of Concepts that they learn they are being able to um perform across different tasks and across different domains they get better and better another observation that we had is that the scale improves robustness what does it mean for u a deep learning model to be robust that means if I change the input a little bit by per terbit with noise on the output it doesn't completely go crash so the change that the input is proportional to the output changes so robust means that you are being able to control the error on the output if you have a little bit of error or deviation at the input so this curve actually shows that as we scale these noral networks we see that the size of uh the the robustness actually improves okay I'm going to talk about this in much more detail in uh because that's the part where we have the theory for from a deep learning perspective now not everything improves by uh by um grow growing uh in size for example the problems of bias and accountability and and accuracy on minority samples as we scale they get worsen so that's we have evidence for those uh uh uh kind of cases as well so you need to be careful when you're deploying this in our society now another very important part of intelligence which is reasoning the ability to log logically talk about kind of different phenomenon is 
basically reasoning and um reasoning basically stays unchanged if you just scale a model unless you provide the system with a simulation engine let's say you want to do physical reasoning let's say there are two cubes they're both moving on the x-axis one of them is faster than the other one so one thing that you have as a human being in your head is a simulation engine that you can use to basically simulate that kind of reality so if you provide that kind of reality the results of that simulation to a language model you would see again increasing reasoning so that part is also extremely important now all these results that I'm showing here are very experimental so there have been large corporations involved to actually perform this kind of analysis to see the behavior of these large models but do we have an actual theory that fundamentally is true about these models or this behavior that we see let me focus on the robustness okay this kind of graph is from Alexander Madry's group from here at MIT where they're studying robustness in neural networks what they do I mean it doesn't matter what the three lines are what they do is they try to attack the input images and then compute the accuracy of models as we scale again on the x-axis we see the scale and accuracy as we start increasing the capacity of the networks in image classification mnist classification here we see that there's a jump in robustness that means the attacks that they were doing were perturbing the input images with an attack called projected gradient descent and it was basically like you got to a really good accuracy after you actually increased the size of the network up to a certain point and there was this transition there was this shift in performance and that has been confirmed as I said by experiment and we said all right so the conclusion of those kinds of results was that scale improves robustness but then the best paper award at the conference on neural information processing systems neurips 2021 came and said scale is a law of robustness okay what does that mean that means let's formulate how scale is basically contributing to robustness okay so I'm going to talk a little bit more technically about this thing hopefully you can all follow but you can ask questions even now if you have questions let me just finish this one and then we'll take some questions so let's say fix any reasonable function what is a reasonable function it's basically a function that has a smooth behavior okay like a sigmoid function okay that's a very reasonable function okay a crazy function would be like if you have jumps across different modes you know then you don't have a reasonable function okay and it's basically parameterized okay like a sigmoid function that you can parameterize and then it's poly-size like it also has a reasonable size in parameters and it's not a Kolmogorov-Arnold type network what does that mean that means I'm showing you a representation that was discovered by Arnold and Kolmogorov that shows that this composition that we see can approximate any multivariate continuous function okay so these are basically two nonlinearities and the sum operator is basically joining the inner processes but one problem with these kind of functions is that they're non-smooth and as Tomaso Poggio actually mentioned they show wild behavior like they could be anything the inner function could be anything so let's rule out those kinds of functions let's talk about only neural networks okay like very reasonable functions and real functions as well now let's sample n data points for the case of the mnist data set you have 60k data points truly high dimensional I mean mnist is like 28 by 28 times 1 so it's like 784 right so it's not that truly high dimensional but let's say imagenet is 256 by 256 by 3 so it's truly high dimensional and then add label noise so why do we want to add noise to the labels what does that even mean that means we want to make the problem harder for the neural network that means the mapping between the high dimensional input and the output classes should not be trivial okay so we add a little bit of label noise to basically create a complex problem okay so that means if all your labels are random the problem is really complex right so you want to add a little bit of random noise as well and then to memorize this data set what does memorization mean that means if I have a function and I'm fitting a parametric model like a neural network to it I want to completely fit every single data point during training that means I want to achieve a zero loss on my training set okay that means memorizing so that's the definition of memorization during learning now optimize the training error below the noise level and what does below the noise level mean again that's another definition that you have to know as I said we want to complicate the process so that it's not a trivial mapping so you add that noise and then your accuracy is a little bit higher than the amount of noise that you injected for example for mnist basically the hard part of the training of the mnist data set let's say all kinds of machine learning models can get up to 92% 91% accuracy on the mnist data set but then what's the hard part the hard part of it is the last 2% to 5% to reach 100% so that's the hard part that's what we call below the noise level okay that means if you can learn this process you would really get good accuracy now and to do this robustly in the sense of Lipschitz who knows what is a Lipschitz function good five six okay all right so Lipschitz so good that I put this in here so as I said look you're moving your inputs with epsilon okay a little bit of perturbation at the input of a function then if the output is also moving with epsilon or linearly proportional to that epsilon that means this function is Lipschitz so you have a controlled kind of process changes in the input do not dramatically change the output of a function that's called a Lipschitz function now so you memorize the data and you want to do this memorization robustly so if you want to do that it is absolutely necessary to parameterize your neural network with at least n times d parameters n is basically the size of your data set which is 60k and d is the dimensionality of every input sample let's say your input is the mnist data set I mean I'm going to give you an example later on but let's say the input is 784 on the order of 10 to the power 3 and the number of data points is 60k right then the minimum number of parameters has to be on the order of 10 to the power 8 that would be the size of a neural network model 10 to the power 8 to 10 to the power 9 to robustly learn the mnist data set that's a huge neural network but this is one of the fundamental explanations very recent that we have about the theory of deep learning and why is it called dramatic overparameterization because intuitively as we showed before memorizing n data points would require n parameters and we had Eric Baum and David Haussler they were showing for a two layer neural network with threshold activation functions they showed it in 1988 that you would need p the number of parameters to be almost on the order of the number of data points that you have and that would actually be enough theoretically and then they showed it recently for relu networks and we even have that in neural tangent kernels so how many of you are familiar with neural tangent kernels good three out of three good one two yeah so neural tangent kernels is that so imagine the process of training a neural network with gradient descent the whole process from the beginning of training to the end of training is a dynamical process that means you're updating the weights of the system as you go further given a data set given a neural network and given the parameters of your optimizer this learning dynamics can actually be modeled by a dynamical process by a differential equation that explains how the updates of gradient descent of a neural network would work now if I increase the size of the neural network to infinite width then this dynamic of learning has a closed form solution that means it actually has a solution which is a kernel method so basically you will end up having a kernel and that kernel explains the entire behavior and dynamics of your learning process in the infinite width limit this is called the neural tangent kernel theory and we have a PhD student actually sitting over there who is focusing on that and this is actually his PhD topic that he's working on in Daniela's lab okay so now let's move from this theory and let's give an example so we have the mnist data set it has around 10 to the power 5 data points the dimensionality of each data point is 10 to the power 3 it's 784 so that's just the scale and then we saw the transition in robustness that means like at least you need to have 10 to the power 6 parameters to robustly fit the mnist data set but then there are a couple of notes that we want to make the notion of robustness in the theory paper was a little bit different than the notion of robustness and Lipschitzness that we showed in the theory of the law of robustness and then another point is that the law seems to be contradicted because if you multiply we just said the law is like you know n times d right like that's the minimum number of parameters that you need to have the transition so but here if you do that 10 to the power 5 times 10 to the power 3 is 10 to the power 8 and it's much larger than that 10 to the power 6 that was observed in reality but one of the things that you have to also notice is that there's something about data sets it's called effective dimensionality and effective dimensionality means that when I show you an image there are principal components of an image so finding the principal components of an image how small that image what's the information the information is not the same size as the pixels themselves the information is much smaller so the effective dimensionality
of it's hard to actually determine what's the effective dimensionality of a given data set but still for the am data set it's 10 the^ 1 and now we have 10 the^ 5n and if the effective dimensionality is basically at 10^ one then you would have basically the same you can confirm with experiments the theoretical results that was observed uh with the law of robustness now the noisy labels as I said it's basically learning the difficult part of the training and then uh in regard to image net is basically shows the law of robustness predicts because we haven't yet trained super large neural networks it seems like the networks that we have trained so far they are actually small you know we have trained like really large neural networks still but they have to be on the order of uh uh 10^ 12 or 10^ 10 you know and this is like the prediction of the law of robustness okay so now let's get back to this image so all these networks that I was talking about and law of robustness is basically in that regime in that regime I showed you that we have great generalization we get more robust provably more robustness uh and then reasoning still we have questions about how to achieve it and then bias and fairness which is very important energy consumption and account accountability of the models there is a way and that has been the focus of the research that we have been doing at Daniel's lab with Alexander and a couple of other uh graduate students of ours is to find out how can we get out of that overparameterization regime while addressing all these um social technical challenges and how we did that we went back to Source okay basically we looked into brains okay we looked into how we can get inspiration from brains to improve over architectural biases that we have in uh neural networks in order to break the law of robustness now and then we we invented something called liquid neural networks with the direct uh inspiration from how uh neurons and synapses interact with each other in a brain okay that's how uh uh we we started our focus and the second half of my talk I'm just going to talk about liquid neural networks and their implications in this realm of model error of Statistics so as I said we started with nervous systems then in order to understand nervous systems you want to go down and you can look into neural circuits so neural circuits are circuits that are composed of neurons then they can get sensory inputs and generate uh some some sort of uh um outputs motor outputs and then we went even deeper than that and we looked into how neurons individual neurons communicate with each other two cells receiving information between each other and then we arrived at an equation that or a formulation for the interaction of neurons and synopses that is abstract enough so that we can perform computation efficient computation but at the same time has more details of how the actual competition happens in in in in brains and not just the threshold or activ very U kind of the sigmoid function that we always see so let's get more deeper into that and we show that if you really get into more uh biologically plausible representation of neural networks you can gain more expressivity you can handle memory in a much uh uh more uh natural way you gain some properties such as causality and which was basically something that we haven't discussed and what's the cause and effect of a task can we actually learn that you can get to robustness and you don't need to be crazy like large be out of the that Reg you can perform 
generative modeling which was uh what I described uh before and you can even do extrapolation that means you can you can even go beyond the the kind of data that you have been trained on that means you can you can go out of distribution that's something that we couldn't do and the like the the area that we were focused on was robotics real word you know like we called it mixed Horizon decision making that means if you have a robot interacting in an environment in a real setting now how to bring this kind of network Works inside uh uh real world and that's the focus of our research um yeah so what are the building blocks of these type of neural networks I said the building blocks are basically interaction of two neurons or two cells with each other one of some observation about this process the first observation is that there kind of interaction between two two neurons in the brains is a continuous time process so that's something into account for so we we're not talking about discrete processes another thing is that synaptic release right now in neural networks when you connect to nodes with each other you connect them with a scalar weight right now what if I tell you that when you when two neurons interact with each other it's not only a scalar rate but there is actually a probability distribution that how many kind of neurotransmitters generated from one cell is going to bind to the channels of the other cell and how it can actually activate uh uh the other cell so the communication between the nodes is very sophisticated and it's one of the fundamental building blocks of intelligence which uh we have actually abstracted away in artificial n network to a weight to a scale a weight now another uh point is that we have massive uh parallelization and massive recurrence and feedback and memory and sparsity kind of uh uh uh structures in brains and that's something that is missing from um uh not entirely but but some somehow it's missing from the artificial Neal Network so we deviated a little bit now what I want to argue is that if we incorporate all these building blocks we might get into better representation learning more flexible models and robust and at the same time be able to interpret those results and this and for because of the first reason that the whole process was buil built on top of continuous time processes let's get get into these continuous time processes and where we are in terms of neural networks with continuous time processes so here I'm showing you an equation so f is a neural network it could be a five layer neural network six layer neural network fully connected but this F that uh has n layers and with K and then it has certain activation function it has it's a function that means it receives input it receives recurrent connections from the other cells it receives exogenous input as well like this I and it is parameterized by Theta this neural network is parameterizing the derivatives of the Hidden State and not hidden State itself that builds a continuous time neural network now if you have a continuous time process that means like the updates or the outputs of a neural network is actually generating the updates of your uh the derivative of the Hidden State and not hidden State itself what kind of you can give rise to continuous time Dynamics with neural networks that we haven't explored before now what does this mean in terms of like uh neural networks let's look at this um this image um how many of you have heard about residual networks okay residual networks are 
deep neural networks that have a kind of a skip connections okay like from one layer to the other one you have basic the skip connections and that skip connection is modeled by this equation HT + 1 equal to HT plus F of whatever that HPT is basically uh resembling basically a a skip connection and now here on the Y AIS what we see we see the depths of neural network and each dot each um black dot here shows a computation that happened in the system so when you have a neural network that is layer wise like let's call it layer wise because the photo is actually showing the layers in the vertical axis so if you look at the vertical axis you see that computation happen at each layer okay because as the input is computed from the first layer to the second one and then the next one next one but if you have a continuous process continuous process equivalent to this process you have the ability to compute at an arbitrary point in a vector field so if you if you know what are differential equations and um how how do they basically change the entire space into a vector field how can you basically go uh you know you can do adaptive computation okay and that's one of the massive benefits of uh continuous time processes now in terms of things that you have seen before standard recurrent neural networks you had the lecture here so a neural network f is basically computes the next step of the Hidden state so it's a discretized kind of process of updating the next step and neural OD up States the stat with a neural network like that uh in a continuous time fashion and then there is a better uh more stable version of this differential equation that is called continuous time or ctrnn recurrent r networks there are continuous time recurr Ral networks where they have a damping Factor so that differential this a linear OD orary differential equation that describes basically the Dynamics of a uh of a neural network hidden state of a neural network now in terms of what kind of what kind of uh benefits will you get let's look at the top one these are reconstruction plots from uh data data corresponding on to a spiral Dynamics if you have a discretized neural network work this spiral Dynamics is being able to you know it's the the the kind of interpolation that you have is kind of edgy but if you have a continuous time process it actually can compute very smoothly and also it can even generalize to the Unseen area of this which is the red part of this as we see here the normal recurrent Ro Network misses out on on this kind of spiral Dynamics so with A continuous time process it is believe that you can get better and better uh representation learning but then uh the the problem here is that um if you U if if you actually bring these models in practice a simple lsdm network works better than this type of network so basically you you you can actually outperform everything that we showed here within LSM Network so far so what's the point of basically creating such complex architectures and stuff so this was the place where we thought that uh The Continuous time processes that we bring from nature can actually help us build better inductive biases so we introduced a neural network called liquid time constant networks or in short ltcs ltcs are constructed by neurons and synapsis so a Neuron model is a continuous process like that it's a differential equation a linear differential equation that receives the synaptic input s it's a linear differential equation neurons and synapses now we have a synapse Model S oft is a 
function that has a neural network multiplied by a term called a difference of potential that nonlinearity resembles the nonlinear Behavior between the synapses in real kind of neurons if you have synaptic connections and they are basically uh uh designed by nonlinearity because that's actually the reality of the thing and then if you if you plug in this s inside this linear equation you will end up having a differential equation which is looking very sophisticated but you will see the implications of this differential equation it has um a neural network as a coefficient of x of T which is this x ofd here that neural network is input dependent that means the inputs to the neural network defines how the behavior of this differential equation is going to be so it sets the behavior of the differential equation in a way that when you're deploying this system on board it can be uh uh adaptable to inputs so this neural network this system if you have it in practice it's input dependent and as you change the input as the result of inputs the behavior of this differential equation changes now in terms of connectivity structures it has um if you look at the range of possible connectivity structures that you have in a standard normal Network you would have um sigmoid basically activation functions you might have reciprocal connections between two nodes you might have a external inputs and you might have recurrent connections for a standard neural network now for a liquid neural network instead each node in the system is a differential equation x uh XJ and x i they have Dynamics and then they're connected by synapses and then there are process nonlinear processes that controls the interaction of synapsis so now in some sense you can think about liquid neural networks as being um being processed that have um nonlinearity of the system on synapsis instead of the neurons so in neural networks we have the activation functions which are the nonlinearity of the system now you can think about nonlinearity of the system being on the synapsis or the weights of the system now in terms of practice let's look at this application here we trained an artificial neural network that can control a car that can drive the car in this environment and what we are showing on the sides of this middle image is the activity of two neurons that are inside a standard neural network and a liquid neural network on the xaxis we see the time constant or sensitivity of the behavior of the output Behavior basically controlling the vehicle's a steering angle the sensitivity of that control on the xaxis and on the uh y AIS we see the steering angle the color also represents the output which is mapping between the uh steering angle and the the outputs of the neural network okay so now as we see we added like with liquid neural network we have an additional degree of Freedom that we that these networks can set its uh sensitivity a liquid neural network depending on whether we are turning whether we are going straight if you're going straight the uh the time constant you want to be more cautious when you're taking turns right so your neural network you want to you want to to be faster so to be able to actually control during those kind of uh uh kind of events where you are actually turning so that's the kind of degree that you can add even at the neuronal level for interpreting the systems now let's look at a case study of uh liquid neural networks so usually you saw a deep neural network for an autonomous driving application we have um a 
conv a stack of convolution on Ral networks they receive camera inputs and they can output the a kind of a steering angle at its output these kind of um neural network one of the things that we can do first of all they have like a lot of parameters in this system what we want to do we want to take out the fully connected layers of this neural network and replace it with recurrent neural network processes one of those recurrent processes that we replace it with its uh liquid neural network and the LT and lstms and the normal continuous time neural networks now if I replace this I would end up having like four different variants these four variant one of them is called an NCP which is a neural circuit policy that has a four layer architecture each node of this system is defined by an LTC neuron that equations that I was showing you and it is sparsely connected to the other parts of the network so we have a sparse neural network architecture here we have then one neural network that has lsdm as the control signal it receives the perception with a convolutional networks and then we have ctrnn and we have convolution networks now I want to show you the driving performance of these different types of networks and what kind of uh characteristics is added to these uh uh systems so first let's look at this dashboard where where I'm showing you a convolution neural network followed by fully connected layers the normal uh very standard deep Learning System that receives those camera inputs and has learned to control this car okay so those camera inputs comes in and the output decisions are basically driving decisions now on the bottom left what we see we see the the decision making process of the system the inputs if you realize the inputs has a little bit of noise on top of them so I added some noise at the input so that so that we can see the robustness in decision making of this process now as we see actually there is the the the kind of decision making the brighter regions here is where the neural network is paying attention when it's taking a a driving decision and we see that the attention is basically scattered with a little bit of noise on top of the system now if you do the same thing and add noise we then replace that fully connected layer of neural network with 19 liquid neurons you can actually get to a performance of Lane keeping on the same environment while having the attention of the uh a system being basically focused on the roads Horizon like the way that we actually drive right so and the fact that the parameters that are needed for performing this TS basically 19 neurons with a very small convolution or neural network as perception module that's the fascinating part that you get from that inductive bias that we put inside neural networks from brains okay so if you model that so you you have like an a set of neurons that you can really go through their Dynamics you can analyze them you can understand that process of that system and you get benefits like real world benefits for example if on the x- axis I increase the noise I increase the noise the amount of noise that I add at the input and on the y axis is I compute the output crashes number of crashes that actually happens when the drive when the network wants to drive the car outside we see that uh these are the other networks and here we see um a liquid neural network where it actually keeps the level extremely low ltcs if you look into the attention map of different those four different network variants how do they make 
driving decisions when they are deployed in the environment we see that the consistency of attention is different across different uh networks while where we have a liquid neural network actually maintains its focus and uh that's one of the nice thing about this but then we ask why why is it that the liquid neural network can focus on the actual task which is Lane keeping here there is no this is driving and it's just a simple driving example where you have just you know like a road and you want to uh stay in the road right then um now what as I said like the represent presentation that learned by liquid neural network is much more causal so that means they can they can really focus and find out the essence of the task and then if you look into the machine learning modeling statistical modeling is like the least form of causal modeling it's basically you just extract uh these are like these are the taxonomies of the causal models you have on the bottom of this axis you have the statistical models that they can learn from data but they cannot actually establish the causal relationship between the input and outputs the best type of models that we could have is physical models that describes the exact dynamics of a process then you have basically a causal model and in the middle you have a kind of a spectrum of different types of causal models so one thing that we realize is um basically liquid neural networks are something that is called Dynamic causal models Dynamic causal models are models that can adapt their Dynamics to um so that they can extract the the the of a task and and really find out the the the the relationship of an input output relationship of a task based on a mechanism that uh is ruled out by this differential equation here so it has parameters a b and c that they account for internal interventions to the system that means if I change something in the middle of the system there are mechanism that can control that processes if if something that comes from outside an intervention inside the process of a system then you would actually get into uh uh you can you can control those kind of processes with the dynamic causal model now uh a little bit I uh wanted to go through um the causal modeling um uh uh process of a neural network basically a differential equation can form a causal structure was what does that mean that means it can predict from the current situation we can predict the future uh one one step in the future of a process that's temporal causation and another thing is that if I change something or intervene in a system how can I uh uh can I actually control that intervention so or or can I account for that intervention that I had in the system this would construct these two points being able to account for future uh evolution of a system and interventions and being able to account for interventions in a system so if you have these two models then you have a causal structure so so for that I'm just uh uh uh skipping over over this part I just want to tell you that I mean I wanted to show you like more about uh uh how your ding this uh connection between liquid neural networks and causal models but I share the slide so you can see and also Professor Rose tomorrow is going to give a lecture on the topic that hopefully uh she can cover uh U these parts and um just a couple of uh uh remarks on on on the performance of the network and what's the implication of having a causal model now let's look at this environment we trained some neural networks that these neural 
networks are basically um Learn to Fly towards a Target in an unstructured environment so we collected some data we trained neural networks instances and then we tested this then then we take this neural networks that we train by moving towards like basically these are scenes that are a drone is inside the forest and the Drone is navigating towards a Target okay now we collect this data from a couple of traces okay that we simulate then we take uh this um data we train neural networks we bring the neural networks back on the Drone and we test them on the Drone to see to see whether they can learn to navigate this task without even uh programming uh uh uh what is the objective of the task basically the objective of the task has to be distilled from the data itself and as we saw here on the right we see the decision- making process of these networks that um liquid networks learned to pay attention to the Target as the target becomes visible that means it actually figured out the most important part of this process was really focusing on the target even if there is an unstructured kind of uh kind of input data to the system now if we compare the attention map of these neural circuit policies which is the first uh the second uh column compared to the other attention Maps we see that the only network type of network that learn from this data to pay attention to this target is liquid neural networks and that's the kind of implication that you have so you learn the cause and effect of a task using a liquid r netor that has dramatically less number of parameters compared to other types of neural networks now as I told you the the relation between two neurons can be defined by a differential equation by a neuron equation and a synapse equation we recently have solved this differential equation also in closed form and then this gave rise to a new type of neural network which is again a close form uh uh um continuous time neural networks you call them CFC these are the close for from liquid networks they have the same behavior but they are uh uh basically defined in close form now what does that mean that means if I have a differential equation you see the OD on the top if I simulate this OD okay the close form solution would actually give you the very same behavior of the differential equation itself but it's it does not require any numerical solvers so it does not require any kind of you know complex numerical solvers so as a result you could scale liquid neural networks much larger to to much larger instances if you want to scale the networks in terms of performance in modeling Dynamics liquid neural networks we see here a series of different types of advanced recurrent neural networks and we see variants of uh uh closed form liquid neural networks and ltcs themselves that are performing remarkably better than the other systems in modeling physical Dynamics and the last thing I want to show is the difference between this understanding causality and understanding the cause and effect of a task and not being able to understand that now here you see a drone which is an agent that wants to navigate towards a Target okay there's a Target in the environment now we collected traces of a drone navigating towards this Target in this Forest then what we did we collected this data then we trained neural networks and now we brought back these neural networks on the Drone to see whether they learned to navigate towards that Target or not now here I'm showing the performance of an lstm neural network so as we 
see the lstm is basically completely like looking around and it does cannot really control their drone is actually if you look at the attention map of the system actually looks uh all around the the place it did not really realize what's the objective of the task from data so it could not really associate anything or learn something meaningful from the data and then here is an liquid neural network on the same task look at the attention map here as the Drone is going towards that Target and that's the kind of uh uh flexible flexibility that you can learn the actual uh uh cause and effect of a task from neural networks now okay so I'm going to conclude now I showed you this plot I showed you that there is overp parameterization regime and we have an idea about what kind of um uh uh benefits you would get in that regime and what kind of uh intuitive understand theoretical understanding do we have for neural networks and then I showed you that there are neural networks like liquid neural networks that have inductive biases from brains that can actually Sol resolve a couple of uh problems for example the robustness while being significantly smaller than overparameterized networks so it is not that you always have to dramatically overp parameterize neural networks but there you can have inductive biases to actually um uh gain good performance to sum up the law of robust is real so that's something that is came out of theory is um number of parameters it's it's necessarily has to be large if you want to have with effective dimensionality so one of the ideas that we have I'll talk about effective dimensionality here the overparameterization definitely improves generalization and robustness but it has some socio technical challenges so inductive biases or architectural motive why do we have different types of architectures and why studying the brain is actually a good thing is because we can actually get smarter in designing neural networks and not just blindly overp parameter over parameterizing them so we can really put incorporating some of some ideas to really get what's going on and liquid neural networks they can enable robust representation learning outside of overp paration regime uh because of their causal mechanisms they reduce um networks Perce so that's the speculation that I have I have to uh we have to actually think about the theoretical implication here but I what I think is that the reason why liquid neural networks can pass or break the the the universal law of robustness is because they can extract a better or more minimum uh uh kind of effective dimensionality out of data so you have data sets so if the if the effective dimensionality get reduced by a parameterized neural network then the law of robustness still holds okay given the number of data and that's the end of my presentation I'm just putting here at the end a couple of resources if you want to get handson with liquid neural networks we have put together like really good documentations and I think you would enjoy to play around with these systems for your own applications thank you for your attention [Applause] |
MIT_6S191_Introduction_to_Deep_Learning | MIT_6S191_Towards_AI_for_3D_Content_Creation.txt | great yeah thanks for a nice introduction um i'm gonna talk to you about 3d content creation and particularly deep learning techniques to facilitate 3d content creation most of the work i'm going to talk about is the work um i've been doing with my group at nvidia and the collaborators but it's going to be a little bit of my work at ufc as well all right so you know you guys i think this is a deep learning class right so you heard all about how ai has made you know so much progress in the last maybe a decade almost but computer graphics actually was revolutionized as well with you know many new rendering techniques or faster rendering techniques but also by working together with ai so this is a latest video that johnson introduced a couple of months ago [Music] quietly so this is all done all this rendering that you're seeing is done uh real time it's basically rendered in front of your eyes and you know compared to the traditional game you're used to maybe real time in gaming but here there's no baked lights there's no brake light everything is computed online physics real time retracing lighting everything is done online what you're seeing here is rendered in something called omniverse is this visualization a collaboration software that nvidia has just recently released so you guys should check it out it's really awesome all right [Music] oops these lights always get stuck yeah so when i joined nvidia this was two years and a half ago it was actually the orc that i'm in uh was creating the software called omniverse the one that i just showed and i got so excited about it and you know i wanted to somehow contribute in this space so somehow introduce ai in in into this content creation and graphics pipeline and and 3d content is really everywhere and graphics is really in a lot of domains right so in architecture you know designers would create office spaces apartments whatever everything would be done uh you know you know in some some modeling software with computer graphics right such that you can judge whether you like some space before you go out and build it all modern games are all like heavy 3d um in film there's a lot of computer graphics in fact because directors just want too much out of characters or humans so you just need to have them all done with computer graphics and animate in realistic ways now that we are all home you know vr is super popular right everyone wants a tiger in the room or have a 3d character version 3d avatar of yourself and so on there's also robotics so healthcare and robotics there's actually also a lot of computer graphics and and in these areas and these are the areas that i'm particularly excited about and and why is that um it's actually for simulation so before you can deploy any kind of robotic system in the real world you need to test it in a simulated environment all right you need to test it against all sorts of challenging scenarios on healthcare for you know robotic surgery robotics are driving cars you know warehouse robots and and stuff like that i'm going to show you this uh simulator called drive sim that nvidia has been developing um and this is uh this video is a couple of years old now it's not a lot better than this um but basically simulation is kind of like a game it's really a game engine for robots where now you expose a lot more out of the game engine you want to have the creator the roboticist some control over the environment right you 
want to decide how many cars you're going to put in there what's going to be the weather night or day and so on so this gives you some control over the scenarios you're going to test against but the nice thing about you know having this computer graphics pipeline is um everything is kind of labeled in 3d you already have created a 3d model of car you know it's a car and you know the parts of the car is you know something is a lane and so on and instead of just rendering the picture you can also render you know grand truth for ai to both train on and be tested against right so you can get ground truth lanes crunch with weather ground truth segmentation all that stuff that's super hard to collect in the real world okay um my kind of goal would be you know if we if we want to think about all these applications and particular robotics you know can we simulate the world in some way can we just load up a model like this which looks maybe good from far but we want to create really good content at street level and you know both assets as well as behaviors and just make these virtual cities alive so that we can you know test our robot inside this all right so it turns out that actually is super slow let me play this require significant human effort here we see a person creating a scene aligned with a given real world image the artist places scene elements edits their poses textures as well as scene or global properties such as weather lighting camera position this process ended up taking four hours for this particular scene so here the artist already had the assets you know bottom online or whatever and the only goal was to kind of recreate the scene above and it already took four hours right so this is really really slow and i don't know whether you guys are familiar with you know games like grand theft auto that was an effort by a thousand engineers a thousand people working for three years um basically recreating la los angeles um going around around the city and taking tons of photographs you know 250 000 photographs many hours of footage anything that would give them you know an idea of what they need to replicate in the real world all right so this is where ai can help you know we know computer vision we know deep learning can we actually just take some footage and recreate these cities both in terms of the construction the the assets as well as behavior so that we can simulate all this content or this live content all right so this is kind of my idea what we need to create and i really hope that some guys you know some of you guys are are going to be equally excited about these topics and i'm going to work on this so i believe that we we need ai in this particular area so we need to be able to synthesize worlds which means both you know scene layouts uh you know where am i placing these different objects maybe map of the world um assets so we need some way of creating assets like you know cars people and so on in some scalable way so we don't need artists to create this content very slowly as well as you know dynamic dynamic parts of the world so scenarios you know which means i need to be able to have really good behavior for everyone right how am i going to drive as well as you know animation which means that the human or any articulated objective animate needs to look realistic okay a lot of this stuff you know it's already done there for any game the artists and engineers need to do that what i'm saying is can we have ai to do this much much better much faster all right so you know what 
i'm going to talk about today is kind of like our humble beginning so this was this is the main topic of my um you know toronto nvidia lab and and i'm gonna tell you a little bit about all these different topics that we have been slowly addressing but there's just so much more to do okay so the first thing we want to tackle is can we synthesize worlds by just maybe looking at real footage that we can collect let's say from a self-driving platform so can we take those videos and and you know train some sort of a generative model is going to generate scenes that look like the real city that you know we want to drive in so if i'm in toronto i might need brick walls if i'm in la i just need many more streets like i need to somehow personalize this content based on the part of the world that i'm gonna be in okay if you guys have any questions just write them up i i like if the uh lecture is interactive all right so how can we compose scenes and our thinking was really kind of looking into how games are built right in games you know people need to create very diverse levels so they need to create in a very scalable way very large walls and one way to do that is using some procedural models right or probabilistic grammar which basically tells you you know rules about how the scene uh is created such that it looks like a valid scene so in this particular case and i would i would sample a road right with some number of lanes and then on each lane you know sample some number of cars and maybe there's a sidewalk next to a lane with maybe people on walking there and there's trees or something like that right so this this this probabilistic models can be fairly complicated you can quickly imagine how this can become complicated but at the same time it's not so hard to actually write this anyone could would be able to write a bunch of rules about how to create this content okay so it's not it's not too tough but the tough is to really the the tough part is you know setting all these distributions here and you know such that the render scenes are really going to look like your target content right meaning that if i'm in toronto maybe i want to have more cars if i'm in a small village somewhere i'm going to have less cars so for all that i need to go and you know kind of personalize these models set the distributions correctly so this is just some one example of you know sampling from a probabilistic model here the uh the probabilities for the orientations of the cars become randomly set but there's so much the scene already looks kind of kind of okay right because it already incorporates all the rules that we know about the world and the model will be needing to to training all right so you can think of this as some sort of a graph right where each node defines the type of asset we want to place and then we also have attributes meaning we need to have location height pose anything that is necessary to actually place this car in the scene and render it okay and and this this these things are typically said by an artist right they they look at the real data and then they decide you know how many pickup trucks i'm going to have in the city or so on all right so basically they said this distribution by hand what we're saying is can we actually learn this distribution by just looking at data okay and we had this paper column metasim a couple of years ago where the the idea was let's assume that the structure of the scenes that i'm sampling so in this particular case apps you know how many lanes i have how 
many cars i have that comes from some distribution that artist has already designed so the the graphs are going to be correct but the attributes um should be modified so if i sample this original scene graph from that i can render like you saw that example before the cars were kind of randomly rotated and so on the idea is can a neural network now modify the attributes of these nodes modify the rotations the colors maybe even type of object such that when i render those those scene graphs i get images that look like real images that i have recorded in distribution so we don't want to go after exact replica of each scene we want to be able to train a generative model it's going to synthesize images that are going to look like images we have recorded that's the target okay so basically we have some sort of a graph neural network that's operating on scene graphs and it's trying to repredict attributes for each node i don't know whether you guys talked about graph neural nets and then the loss that's coming out is through this renderer here and we're using something called maximum indiscreptency so i'm not going to go into details but basically the idea is you could you need to compare two different distributions you could compare them by you know comparing the means of the two distributions or maybe higher order moments and mmd was designed to to compare higher order moments okay now this last can be back prop through this non-differentiable renderer back to graph neural net okay and we just use numerical gradients to do this step and the cool part about this is we haven't really needed any sort of annotation on the image we're comparing images directly because we're assuming that the image the synthesized images already look pretty good all right so we actually don't need data we just need to drive around and record these things okay you can do something even cooler you can actually try to personalize this data to the task you're trying to solve later which means that you can train this network to generate data that if you train some other neural net on top of this data it's an object detector it's going to really do well on you know whatever task you have in the end collected in the real world okay which might not mean that the object need to look really good in the scene you just might it just means that you need to generate scenes that are going to be useful for some network that you want to train on that data okay and that you you again back prop this and you can do this with reinforcement learning okay so this was now training the distribution for the attributes we were kind of the easy part and we were sidestepping the issue of well what about the structure of these graphs meaning if i had always generated you know five or eight or ten cars in a scene but now i'm in a village i will just not train anything very useful right so the idea would be can we learn the structure the number of lanes the number of cars and so on as well okay and and it turns out that actually you can do this as well where here we had a probabilistic context free grammar which basically means you have a you have a root note you have some symbols and which can be non-terminal terminal symbols and rules that they that basically expand non-terminal symbols into new symbols so an example would be here right so you have a road which you know generates lanes lanes can go into lane or more lanes right and so on so these are the the rules okay and basically what we want to do is we want to train a network that's going to 
learn to sample from this probably the context-free grammar okay so we're going to have some sort of a latent vector here we know where we are in the tree that or the graph we've already generated before so imagine we are in in we have sampled some lane or whatever so we now we know the the corresponding symbols that we can actually sample from here we can use that to mask the probabilities for everything else out all right and our network is basically gonna learn how to produce the correct probabilities for the next symbol we should be sampling okay so basically at each step i'm going to sample a new rule until i hit all the terminal symbols okay that basically gives me something like that these are the sample the rules in this case which can be converted to a graph and then using the previous method we can you know augment this graph with attributes and then we can render the scene okay so basically now we are also learning how to generate um the the the actual scenario the actual structure of the same graph and the attributes and and this is super hard to train so there's a lot of bells and whistles to make this to work but essentially because this is all non-differentiable steps you need something like reinforcement learning and and there's a lot of tricks to actually release to work but i was super surprised how well you this can actually turn out so on the right side you see samples from the real data set uh kitty is like a real driving data set on the left side is samples from probabilistic grammar here we've set this first probabilities manually and we purposely made it really bad which means that this probably is the grammar when you sample you got really few cars almost no buildings and you can see this is like almost not not populated scenes after training you the generative model learned how to sample this kind of scenes because they were much closer to the real target data so these were the final trained things okay and now how can you actually evaluate that we have done something reasonable here you can look at for example the distribution of cars in the real real data set this is kitty over here so here you will have a histogram of how many cars you have in each scene um you have this orange guy here which is the prior meaning this badly initialized prolistic grammar where we only were sampling most of the time very few cars and then the learned model which is the green the line here so you can see that the generated scenes really really closely follow this distribution of the real data without any single annotation at hand right now you guys could argue well it's super easy to write you know these distributions by hand and and we're done with it i think there's just this just shows that this can work and the next step would just be make this really large scale make this you know really huge probabilistic models where it's hard to tune all these parameters by hand and the cool part is that because everything can be trained now automatically from real data no any end user can just take this and it's going to train on their end they know they don't need to go and set all this stuff by hand okay now the next question is you know how can i evaluate that my model is actually doing something reasonable and one one way to do that is by actually sampling from this model synthesizing these images along with gran truth and then train some some you know end model like a detector on top of this data and testing it on the real data and and just seeing whether the performance has somehow 
improved um compared to you know let's say on that badly initialized um probabilistic grammar and it turns out that that's that's the case okay now this was the example shown on driving but oh sorry so so this model is is just here i'm just showing basically what's happening during training let me just go quickly so the first snapshot is the first sample from the model and then what you're seeing is how this model is actually training so how is modifying the scene during training let me show you one one more time so you can see the first frame was really kind of badly placed cars and then it's slowly trying to figure out where to place them and to be correct and of course this generative model right so you can sample tons of scenes and everything comes labeled cool right um this model here was shown on on on driving but you can also apply it everywhere else like in other domains and here you know medical or healthcare now is very um you know important in particular these days when everyone is stuck at home um so you know can you use something like this to also synthesize medical data and what do i mean by that right so doctors need to take you know city or mr mri volumes and go and label every single slice of that with you know let's say a segmentation mask such as they can then train like a you know cancer segmentation or card segmentation or lung segmentation kobe detection whatever right so first of all data is very hard to come by right because in some diseases you just don't have a lot of this data the second part is that it's actually super time consuming and you need experts to label that data so in the medical domain it's really important if we can actually somehow learn how to synthesize this data label data so that we can kind of augment the real data sets with that okay and the model here is going to be very simple again you know we have some generative model let's go from a latent codes to some parameters of a of a mesh in this case this is our asset within a material map and then uh we synthesize this with a physically based um ct simulator uh which you know looks a little bit blurry and then we train a enhancement model with something like again and then you get simulated data out obviously again there is a lot of belts and whistles but you know you can get really nice looking synthesized volumes so here the users can actually play with the shape of the heart and then they can click synthesize data and you get some some labeled volumes out where the label is basically the stuff on the left and this is the simulated sensor in this case okay all right so now we talked about using procedural models to generate to generate worlds and of course the question is well do we need to write all those rules can we just learn how to recover all those rules and here was our first take on this um and here we wanted to generate or learn how to generate city road layouts okay which means you know we want to be able to generate something like that where you know the lines over here representing roads okay this is the base of any city and we want to again have some control over these worlds we're going to have something like interactive generation i want this part to look like cambridge it's parked to look like new york inspired to look like uh toronto whatever and we want to be able to generate or synthesize everything else you know according to these styles okay you can interpret road layout as a graph okay so what does that mean i have some control points and two control points being 
connected means i have a road line segment between them so really the problem that we're trying to solve here is can we have a neural net generate graphs graphs with attributes where each attribute might be an x y location of a control point okay and again giant graph because this is an entire city we want to generate um so we had actually a very simple model where you're kind of iteratively generating this graph and imagine that we have already you know generated some part of the graph what we're going to do is take an a node from from like an unfinished set what we call we encode every path that we have already synthesized and leads to this node which basically means we wanna we wanna kind of encode how this node already looks like what are the roads that it's connecting to and we want to generate the remaining nodes basically how these roads continue in this case okay and this was super simple you just have like r and n's encoding each of these paths and one rnand that's decoding these neighbors okay and you stop where basically you hit some predefined size of the city okay let me show you some some results so here you can condition on the style of the city so you can generate barcelona or berkeley you can have this control or you can condition on part of the city being certain style and you can use the same model the generative model to also parse real maps or real aerial images and create and create variations of those maps for something like simulation because for simulation we need to be robust uh to the actual layouts so now you can turn that graph into an actual small city where you can maybe procedurally generate the rest of the content like we were discussing before where the houses are where the traffic signs are and so on cool right so now we can generate you know the map of the city um we can place some objects somewhere in the city so we're kind of close to our goal of synthesizing worlds but we're still missing objects objects are still a pain that the artists need to create right so all this content needs to be manually designed and that just takes a lot of time to do right and maybe it's already available you guys are going to argue that you know for cars you can just go online and pay for this stuff i first of all it's expensive and second of all it's not really so widely available for certain classes like if i want a raccoon because i'm in toronto there's just tons of them there's just a couple of them and they don't really look like real raccoons right so the question is can we actually do this all these tasks by taking just pictures and synthesizing this content from pictures right so ideally we would have um something like an image and we want to produce out you know a 3d model 3d texture model right there can i then insert in my real scenes and ideally we want to do this on just images that are widely available on the web right i think the new iphones all have lidar so maybe this world is going to change because everyone is going to be taking 3d pictures right with some 3d sensor but right now the majority of pictures that are available of objects on flickr let's say it's all single images people just snapshotting a scene or snapshotting on a particular object so the question is you know how can we learn from all the data and go from an image on the left to a 3d model and in our case we're going to want to produce as an output from the image and mesh which basically has you know location of vertices xyz and you know some color material properties on each vertex 
right and 3d vertices along with faces which means which vertices are connected that's basically defining this 3d object okay and now we're going to turn to graphics to help us with our goal to do this from you know the kind of without supervision learning from the web okay and in graphics we know that images are formed by geometry interacting with light right that's just principle of rendering okay so we know that you can you you if you have a mesh if you have some light source or sources and you have a texture and also materials and so on which i'm not writing out here and some graphics renderer you know there's many issues choose from you get out a rendered image okay now if we make this part differentiable if you make the graphics renderer differentiable then maybe there is hope of going the other way right you can think of computer vision being inverse graphics graphics is going for 3d to images computer vision wants to go from images into 3d and if this model is differentiable maybe there's hope of doing that okay so there's been quite a lot of work lately on basically this kind of a pipeline with different modifications um but basically this summarizes the the ongoing work where you have an image you have some sort of a neural net that you want to train and you're making this kind of button like predictions here which smash light texture maybe material okay now instead of having the loss over here because you don't have it you don't have the ground truth mesh for this car because you otherwise you need to annotate it what we're going to do instead is we're going to send these predictions over to this renderer which is going to render an image and we're going to have the loss defined on the rendered image and the input image we're basically going to try to make these images to match okay and of course there's a lot of other losses that people use here like multi-view alloys you're assuming that in training you have multiple pictures multiple views of the same objects you have masks and so on so there's a lot of bells and whistles how to really make this pipeline work but in principle it's a very clean idea right where we want to predict these properties i have this graphics renderer and now i'm just comparing input and output and because this is this render is differentiable i can propagate these slots back to all my desired you know neural light weights so i can predict this these properties okay now we in particularly had a very simple like opengl type render which we made differentiable there's also versions where you can make rich racing differentiable and so on but basically the idea that we employed was super simple right a mesh is basically projected onto an image and you get out triangles and each pixel is basically just a butter centric interpolation of the vertices of this projected triangle and now if you have any properties defined of those vertices like color or you know texture and so on then you can compute this value here through your you know renderer that assumes some lighting or so on um in a differentiable manner using this percentage coordinates this is a differential function and you can just go back through whatever lighting or whatever um shader model you're using okay um so very simple and there's you know much much richer models that are available richer differentiable renders available these days but here we try to be a little bit clever as well with respect to data because most of the related work was taking synthetic data to train their model why because 
most of the work needed multi-view data during training which means i have to have multiple pictures from multiple different views of the same object and that is hard to get from just web data right it's hard to get so people will just basically taking synthetic cars from synthetic data sets and rendering in different views and then training the model which really just maybe makes a problem not so interesting because now we are actually relying on synthetic data to solve this and the question is how can we get data and and and we try to be a little bit clever here and we turn to generative models of images i don't know whether you guys cover in class uh you know image gans but if you take something like stylogen which is uh you know generative adversarial network designed to really produce high quality images by by sampling from some some prior you get really amazing pictures out like all these images have been synthesized none of this is real this is all synthetic okay you know these guys basically what they do is you have some latent code and then there's a you know some nice progressive architecture that slowly transforms that latent code into an actual image okay what happens is that if you start analyzing this this latent code or i guess i'm going to talk about this one if you take certain dimensions of that code and you try and you freeze them okay and you just manipulate the rest of the code it turns out that you can find really interesting controllers inside this latent code basically the gun has has learned about the 3d world and it's just hidden in that latent code okay what do i mean by that so you can find some latent dimensions that basically control the viewpoint and the rest of the code is kind of controlling the content meaning the type of car and the viewpoint means the viewpoint of that car okay so if i look at it here we basically varied the viewpoint code and kept the this content called the rest of the code frozen and and this is just basically synthesized and the cool part is that it actually looks like you know multiple views of the same object it's not perfect like this guy the third the third object in the top row doesn't look exactly matched but most of them look like the same car in different views and the other side also also holds so if i keep a content like a viewpoint code fixed in each of these columns but they vary the the content code meaning different rows here i can actually get different cars in each viewpoint okay so this is basically again synthesized and that's precisely the data we need so we didn't do anything super special to our technique the only thing we were smart about was how we got the data and and no now now you can use these data to train our you know differentiable rendering pipeline and you got you know predictions like this you have an input image and a bunch of 3d predictions but also now we can do cars so the input image on the left and then the 3d prediction rendered in that same viewpoint here in this column and that's that prediction rendered in multiple different viewpoints just to showcase the 3d nature of the predictions and now we basically have this tool that can take any image and produce a 3d asset so we can have tons and tons of cars by just basically taking pictures okay here is a little demo in that omniverse tool where the user can now take a picture of the car and get out the 3d model notice that we also estimate materials because you can see the windshields are a little bit transparent and the car body looks like it's 
shiny so it's metal because we're also predicting 3d parts and you know it's not perfect but they're pretty good and now just uh you know a month ago we have a new version that can also animate this prediction so you can take an image predict this guard this guy and we can just put you know tires instead of the predicted tires you can estimate physics and you can drive these cars around so they actually become useful assets this is only on cars now but of course the system is general so we're gonna we're in the process of applying it to sorts of different content cool i think i don't know how much more time i have so maybe i'm just gonna skip today and i have always too much slides um so i have all these behaviors and whatever and i wanted to show you just the last project that we did because i think you guys gave me only 40 minutes um so you know i we also have done some work on animation using reinforcement learning and behavior that you know maybe i skipped here but we basically are building modular deep learning blocks for all the different aspects and the question is can we can we even sidestep all that can we just learn how to simulate data everything with one neural net and we and we're going to call it neural simulation so can we have one ai model that can just look at our interaction with the world and then be able to simulate that okay so you know in computer games we know that you know they accept some user action left right keyboard control or whatever and then the computer engine is basically synthesizing the next frame which is going to tell us you know how how the world has changed according to your action okay so what we're trying to attempt here is to replace the game engine with a neural net which means that we still want to have the interactive part of the of the game where the user is going to be inputting actions gonna be playing but the screens are going to be synthesized by a neural net which basically means that you know this neural net needs to learn how the world works right if i hit into a car it needs to you know produce a frame that's gonna look like that okay now in the beginning our first project was can we just learn how to emulate a game engine right can we take a pac-man and try to mimic it try to see if the neural net can can learn how to mimic pac-man but of course the interesting part is going to start where we don't have access to the game engine like the world right you can think of the world as being the matrix where we don't have access to the matrix but we still want to learn how to simulate and emulate the matrix um and that's really exciting future work but basically we have you know now we're just kind of trying to mimic how what what a game engine does where you're inputting some you know action and maybe the previous frame and then you'll have something called dynamics engine which is basically just an ls stand that's trying to learn how the dynamics in the world looks like how how frames change uh we have a rendering engine that takes that latent code is going to actually produce a nice looking image and we also have some memory which allows us to push any information that we want to be able to consistently produce you know the consistent gameplay uh in some some additional block here okay and and here's here was like our first result on pac-man and release this on the 40th birthday of batman [Music] what you see over here is all synthesized and the to me is even if it's such a simple simple game it's actually not so easy because you know the 
neural net needs to learn that uh pac-man if it eats the food the food needs to disappear if the ghost can become blue and then if you eat a blue ghost you survive otherwise you die so there's already a lot of different rules that you need to recover along with just like synthesizing images right and of course our next step is can we can we scale this up can we go to 3d games and can we eventually go to the real world okay so again here the control is going to be the steering control so like speed and the steering wheel this is done by the user by a human and what you see on the right side is you know the frames painted by by game gun by this model so here we're driving this car around and you can see what what the model is painting is is a pretty consistent world in fact and there's no 3d there's no nothing we're basically just synthesizing frames and here is a little bit more complicated version where um we try to synthesize other cars as well and this is on a carla simulator that was the game engine we're trying to emulate it's not perfect like you can see that the cars actually change color and resume it's quite amazing that it's able to do that entirely um and right now we have a version actually training on the real driving videos like a thousand hours of real driving and it's actually doing an amazing job already and you know so i think this could be a really good alternative on to the rest of the pipeline all right you know one thing to realize when you're doing something that's so broad and such a big problem is that you're never gonna solve it alone you're never gonna solve it alone so one one mission that i have is also to provide tools to community such that you know you guys can take it and build your own ideas and build your own 3d content generation methods okay so we just recently released 3d deep learning is an exciting new frontier but it hasn't been easy adapting neural networks to this domain cowlin is a suite of tools for 3d deep learning including a pi torch library and an omniverse application cowlin's gpu optimized operations and interactive capabilities bring much needed tools to help accelerate research in this field for example you can visualize your model's predictions as its training in addition to textured meshes you can view predicted point clouds and voxel grids with only two lines of code you can also sample and inspect your favorite data set easily convert between meshes point clouds and voxel grids render 3d data sets with ground truth labels to train your models and build powerful new applications that bridge the gap between images and 3d using a flexible and modular differentiable renderer and there's more to come including the ability to visualize remote training checkpoints in a web browser don't miss these exciting advancements in 3d deep learning research and how cowlin will soon expand to even more applications yeah so a lot of the stuff i talked about all the basic tooling is available so you know please take it and do something amazing with it i'm really excited about that just to conclude you know my goal is to really become democratized 3d content creation you know i want my mom to be able to create really good 3d models and she has no idea even how to use microsoft word or whatever so it needs to be super simple um have ai tools that are going to be able to also assist maybe more advanced users like artists game developers but just you know reduce the load of the boring stuff just enable their creativity to just come to play much faster than 
it can right now um and all that is also connecting to learning to simulate for robotics simulation is just a fancy game engine that needs to be real as opposed to being from fan fantasy but it can be really really useful for robotics applications right and what we have here is really just like two years and a half of our lab but you know there's so much more to do and i'm really hoping that you guys are gonna do this i just wanted to finish with one slide because you guys are students um my advice for for research um you know just learn learn learn this deep learning course is one don't stop here continue um one in very important aspect is just be passionate about your work and never lose that passion because that's where you're really going to be productive and you're really going to do good stuff if you're not excited about what the research you're doing though you know choose something else through something else don't rush for papers focus on getting really good papers as opposed to the number of papers that's not a good metric right hunting citations maybe also not the best metrics right some some not so good papers have a lot of citations some good papers don't have a lot of citations you're going to be known for the good work that you do um find collaborations find collaborators and that's particularly kind of in my style of research i want to solve real problems i want to solve problems which means that how to solve it is not clear and sometimes we need to go to physics sometimes we need to go to graphics sometimes we need to go to nlp whatever and i have no idea about some of those domains and you just want to learn from experts so it's really good to find collaborators and the last point which you know i have always used as guidance it's very easy to get frustrated because 99 of the time things won't work but just remember to have fun um because research is really fun and that's all from me whether you guys have some questions |
MIT_6S191_Introduction_to_Deep_Learning | Barack_Obama_Intro_to_Deep_Learning_MIT_6S191.txt | This year I figured we could do something a little bit different and instead of me telling you how great this class is I figured we could invite someone else from outside the class to do that instead. So let's check this out first. Hi everybody and welcome MIT 6.S191, the official introductory course on deep learning taught here at MIT. Deep learning is revolutionizing so many fields from robotics to medicine and everything in between. You'll learn the fundamentals of this field and how you can build some of these incredible algorithms. In fact, this entire speech and video are not real and were created using deep learning and artificial intelligence. And in this class, you'll learn how. It has been an honor to speak with you today and I hope you enjoy the course! Hi everybody and welcome to MIT 6.S191. Hi everybody and welcome to MIT 6.S191. The official introductory course on deep learning taught here at MIT! |
MIT_6S191_Introduction_to_Deep_Learning | MIT_6S191_2018_Deep_Learning_Limitations_and_New_Frontiers.txt | I want to bring this part of the class to an end so this is our last lecture but for our series of guest lectures and in this talk I hope to address some of the state of deep learning today and kind of bring up some of the limitations of the algorithms that you've been seeing in this class so far so we got a really good taste of some of the limitations specifically in reinforcement learning algorithms that Lex gave in the last lecture and that's really going to build on or I'm gonna use that to build on top of during this lecture and just to end on I'm gonna bring you I'm gonna introduce you to some new frontiers in deep learning that are really really inspiring it and at the cutting edge of research today before we do that I'd like to just make some administrative announcements so t-shirts have arrived and we'll be distributing them today and we'd like to distribute first to the registered for credit students after that we will be happy to distribute to registered listeners and then after that if there's any remaining we'll give out to listeners if they'd want if they're interested so for those of you who are taking this class for credit I need to reiterate what kind of options you have to actually fulfill your credit requirement so the first option is a group project proposal presentation so for this option you'll be given the opportunity to pitch a novel deep learning idea to a panel of judges on Friday you'll have exactly one minute to make your pitch as clear error as clear and as concisely as possible so this is really difficult to do in one minute and this kind of one of the challenges that were we're putting on you in addition to actually coming up with the deep learning idea itself if you want to go down this route for your final project then you'll need to submit your teams which have to be of size three or four by the end of today so at 9:00 p.m. today we'd like those in you'll have to do teams of three or four so if you want a group working in groups of one or two then you'll have to you're you're welcome to do that but you won't be able to actually submit your final project as part of a presentation on Friday you can submit it to us and we'll we'll give you the grade for the class like that so groups are due 9:00 p.m. today and you have to submit your slides by 9:00 p.m. 
tomorrow presentations are at class on Friday in this room if you don't want to do a project for presentation you have a second option which is to write a one-page paper review of a deep learning idea so any idea or any paper that you find interesting is is welcome here so we really accept anything and we're really free in this in this option as well I want to highlight some of the exciting new talks that we have coming up after today so tomorrow will have two sets of guest lectures first we'll hear from aura's Muller who is the chief architect of nvidia x' self-driving car team so hers and his team were actually known for some really exciting work that abu is showing yesterday during her lecture and they're known for this development of an end-to-end platform for autonomous driving that takes directly image data and produces a steering control command at the car for the car at the output then we'll hear about we'll hear from two Google brain researchers on recent advancements on image classification at Google and also we'll hear about some super recent advancements and additions to the tensorflow pipeline that were actually just released a couple days ago so this is really really new stuff tomorrow afternoon we'll get together for one of the most exciting parts of this class so what will happen is we'll have each of the sponsors actually come up to the front of the class here we have four sponsors that will present on each of these four boards and you'll be given the opportunity to basically connect with each of them through the through the ways of a recruitment booth and basically they're going to be looking at students that might be interested in deep learning internships or employment opportunities so this is really an incredible opportunity for you guys to connect with these companies in a very very very direct manner so we highly recommend that you take advantage of that there will be info sessions with pizza provided on Thursday with one of these guest lectures with one of these industry companies and we'll be sending out more details with that today as well so on Friday we'll continue with the guest lectures and hear from Lisa and meanie who is the head of IBM Research in Cambridge she's actually also the director of the MIT IBM lab and this is a lab that was just founded a couple or actually about a month ago or two months ago we'll be hearing about how IBM is creating AI systems that are capable of not only deep learning but going a step past deep learning they're capable of or trying to be capable of learning and recently on a higher-order sense and then finally we'll hear from a principal researcher at $0.10 AI lab about combining computer vision and social networks it's a very interesting topic that we haven't really touched upon in this class this topic of social networks and using massive big data collected from from humans themselves and then as I mentioned before in the afternoon we'll go through and hear about the final project presentations we'll celebrate with some pizza and the awards that will be given out to the top projects during those during that session as well so now let's start with the technical content for this class I'd like to start by just kind of over viewing the type of architectures that we've talked about so far for the most part these architectures can be thought of almost pattern-recognition architectures so they take as input data and the whole point of their pipeline their internals are performing feature extraction and and what they're really 
doing is taking all the sensory data trying to figure out what are the important pieces what are the patterns to be learned within the data such that they can produce a decision at the output we've seen this take many forms so the decision could be a prediction it could be a detection or even an action like an agree enforcement learning setting we've even learned how these models can be viewed in a generative sense to go in the opposite direction it actually generate new synthetic data but in general we've been dealing with algorithms that are really optimized to do well and only a single task but they really fail to think like humans do especially when we consider a higher order level of an telogen Slyke I defined on the first day to understand this in a lot more detail we have to go back to this very famous theorem that was dating back almost 30 years from today this theorem which is known as the universal approximation theorem was one of the most impactful theorems and neural networks when it first came out because it had such a profound it proved such a profound claim what it states is that a neural network with a single hidden layer is sufficient to approximate any function to any arbitrary level of accuracy now in this class we deal with networks that are deep they're not single layered so they're actually more than a single layer so actually they contain even more complexity than the network down referring to here but this theorem proves that we actually only need one layer to accomplish or to approximate any function in the world and if you believe that any problem can actually be reduced to a sets of inputs and outputs in this form of a function then this theorem shows you that it's that a neural network with just a single layer is able to solve any problem in the world now this is an incredibly powerful result but if you look closely there are a few very important caveats I'm not actually telling you how large that hidden layer has to be to accomplish this task now with the size of your problem the hidden layer and the number of units in that hidden layer may be exponentially large and they'll grow exponentially with the difficulty of your problem this makes training that network very difficult so I never actually told you anything about how to obtain that Network I just told you that it existed and there is a possible network in the realm of all neural networks that could solve that problem but as we know in practice actually training neural networks because of their non convex structure is extremely difficult so this theorem is really a perfect example of the possible effects of overhyping in AI so over the history of AI we've had two AI winters and this theorem was one of the resurgence after the first day I winter but it also caused a huge false hype in the power of these neural networks which ultimately led to yet another AI winter and I feel like as a class it's very important to bring this up because right now we're very much in the state of a huge amount of overhyping in deep learning algorithms so these algorithms are especially in the media being portrayed that they can accomplish human level intelligence and human level reasoning and simply this is not true so I think such over hype is extremely dangerous and resulted well we know it resulted in in both of the two past AI winters and I think as a class it's very important for us to focus on some of the limitations of these algorithms so that we don't overhyped them but we provide realistic guarantees or realistic 
expectations rather on what these algorithms can accomplish and finally going past these limitations the last part of this talk will actually focus on some of the exciting research like I mentioned before that tries to take a couple of these limitations and really focus on possible solutions and possible ways that we can move past them okay so let's start and I think one of the best examples of a potential danger of neural networks comes from this paper from google deepmind named understanding deep neural networks requires rethinking generalization and generalization was this topic that we discussed in the first lecture so this is the notion of a gap or a difference between your training accuracy and your test accuracy if you're able to achieve equal training and test accuracy that means you have essentially no generalization gap you're able to generalize perfectly to to your test dataset but if there's a huge disparity between these two datasets and your model is performing much better on your training data set than your test dataset this means that you're not able to actually generalize to brand new images you're only just memorizing the training examples and what this paper did was they performed the following experiment so they took images from imagenet so you can here see four examples of these images here and what they did was they rolled a case I did die where K is the number of all possible labels in that data set and this allowed them to randomly assign brand new labels to each of these images so what used to be a dog they call now a banana and what they used to be that banana is now called the dog and what it used to be called that second dog is now a tree so note that the two dogs have actually been transformed into two separate things so things that used to be in the same class are now in completely disjoint classes and things that were in disjoint classes maybe now in the same class so basically we're completely randomizing our labels entirely and what they did was they tried to see if a neural network could still learn random labels and here's what they found so as you'd expect when they tested this neural network with random labels as they increase the randomness on the x axis so going from left to right this is the original labels before randomizing anything and then they started randomizing their test accuracy gradually decreased and this is as expected because we're trying to learn something that has absolutely no pattern in it but then what's really interesting is that then they looked at the training accuracy and what they found was that the neural network was able to with 100% accuracy get the training set correct every single time no matter how many random labels they introduced the training set would always be shattered or in other words every single example in the training set could be perfectly classified so this means that modern deep neural networks actually have the capacity to brute-force memorize massive data sets even on the size of imagenet with completely random labels they're able to memorize every single example in that data set and this is a very powerful result is it drives home this point that neural networks are you really excellent function approximator x' so this also connects back to the universal approximation theorem that I talked about before but they're really good approximator is for just a single function like I said which means that we can always create this maximum likelihood estimate of our data using a neural network such that if we were 
such that if we were given a new data point like this purple one on the bottom it's easy for us to compute its estimated probability or its estimated output just by intersecting it with that maximum likelihood estimate but that's only if I'm looking at a place where we have sufficient training data already what if I extend this x axis and look at what the neural network predicts beyond that in these locations these are actually the locations that we care about most right these are the edge cases in driving these are the cases where we don't have much data that was collected and these are usually the cases that matter most for safety critical applications right so we need to be able to make sure that when we sample the neural network from these locations we are able to get feedback from the neural network that it actually doesn't know what it's talking about so this notion leads nicely into the idea of what is known as adversarial attacks where I can give a neural network two images like the one on the left and on the right an adversarial image that to a human look exactly the same but to the network the adversarial one is incorrectly classified 100% of the time so the image on the right shows an example of a temple which when I feed to a neural network gives me back a label of a temple but when I apply some adversarial noise it classifies this image incorrectly as an ostrich so for this I'd like to focus on this piece specifically so to understand the limitations of neural networks the first thing we have to do is actually understand how we can break them and this perturbation noise is actually very intelligently designed so this is not just random noise but we're actually modifying pixels in specific locations to maximally change or mess up our output prediction so we want to modify the pixels in such a way that we're decreasing our accuracy as much as possible and if you remember back to how we actually trained our neural networks this might sound very similar so if you remember training a neural network is simply optimizing over our weights theta so to do this we simply compute the gradient of our loss function with respect to theta and we simply perturb our weights in the direction that will minimize our loss now also remember that when we do this we're perturbing theta but we're fixing our X and our Y our training data and our training labels now for adversarial examples we're just shuffling the variables a little bit so now we want to optimize over the image itself not the weights so we fix the weights and the target label itself and we optimize over the image X we want to make small changes to that image X such that we increase our loss as much as possible and we want to go in the opposite direction of training now
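a minimal sketch of the gradient-based perturbation just described, in the spirit of the fast gradient sign method; the model, the image tensor, the epsilon step size and the [0, 1] value range are assumed for illustration:

```python
# Sketch of the perturbation described above: fix the weights and the true label,
# take the gradient of the loss with respect to the *input image*, and nudge the
# image in the direction that increases the loss (the opposite of training).
import tensorflow as tf

def adversarial_perturbation(model, image, true_label, epsilon=0.01):
    # image: float tensor of shape (1, H, W, C) with values in [0, 1]
    # true_label: integer class index
    loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()
    image = tf.convert_to_tensor(image)
    with tf.GradientTape() as tape:
        tape.watch(image)              # gradients w.r.t. the input, not the weights
        prediction = model(image)
        loss = loss_fn([true_label], prediction)
    gradient = tape.gradient(loss, image)
    # small change per pixel, chosen to maximally hurt the prediction
    adversarial_image = image + epsilon * tf.sign(gradient)
    return tf.clip_by_value(adversarial_image, 0.0, 1.0)
```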
and these are just some of the limitations of neural networks and for the remainder of this class I want to focus on some of the really exciting new frontiers of deep learning that focus on just two of these specifically I want to focus on the notion of understanding uncertainty in deep neural networks and understanding when our model doesn't know what it was trained to know maybe because it didn't receive enough training data to support that hypothesis and furthermore I want to focus on this notion of learning how to learn models because optimization of neural networks is extremely difficult and it's extremely limited in its current nature because they're optimized just to do a single task so what we really want to do is create neural networks that are capable of performing not one task but a set or sequence of tasks that are maybe dependent in some fashion so let's start with this notion of uncertainty in deep neural networks and to do that I'd like to introduce this field called Bayesian deep learning now to understand Bayesian deep learning let's first understand why we even care about uncertainty so this should be pretty obvious but suppose we were given a network that was trained to distinguish between cats and dogs at the input we're given a lot of test images of cats and dogs and at the output we're simply producing an output probability of being a cat or a dog now this model is trained only on cats and dogs so if I showed it another cat it should be very confident in its output well let's suppose I give it a horse and I force that network because it's the same network to produce an output of being a probability of a cat or a probability of a dog now we know that these probabilities have to add up to 1 because that's actually the definition that we constrain our network to follow so that means by definition the network has to produce one of these categories so the notion of probability and the notion of uncertainty are actually very different but a lot of deep learning practitioners often mix these two ideas so uncertainty is not probability neural networks are trained to produce probabilities at their output but they're not trained to produce uncertainty values so if we put this horse into the same network we'll get a set of probability values that add up to 1 but what we really want to see is a very low certainty in that prediction and one possible way to accomplish this in deep learning is through the eyes of Bayesian deep learning and to understand this let's briefly start by formulating our problem again so first let's go through the variables right so we want to approximate this variable Y or output Y given some raw data X and really what we mean by training is we want to find this functional mapping F parameterized by our weights theta such that we minimize the loss between our predicted outputs and our true outputs y so Bayesian neural networks take a different approach to solve this problem they aim to learn a posterior over our weights given the data so they attempt to say what is the probability that I see this model with these weights given the data in my training set now it's called Bayesian deep learning because we can simply rewrite this posterior using Bayes rule however in practice it's rarely possible to actually compute this Bayes rule update and it just turns out to be intractable so instead we have to find ways to actually approximate it through sampling
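written out, the posterior and the predictive integral being referred to here look roughly like this, with the assumed notation theta for the weights and X, Y for the training data:

```latex
% Posterior over the network weights via Bayes rule (notation assumed:
% \theta = weights, X, Y = training inputs and labels)
P(\theta \mid X, Y) \;=\; \frac{P(Y \mid X, \theta)\, P(\theta)}{P(Y \mid X)}
% Predictions then require integrating over this posterior, which is what makes
% exact Bayesian deep learning intractable and motivates sampling-based approximations:
P(y^{*} \mid x^{*}, X, Y) \;=\; \int P(y^{*} \mid x^{*}, \theta)\, P(\theta \mid X, Y)\, d\theta
```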
so one way that I'll talk about today is a very simple notion that we've actually already seen in the first lecture and it goes back to this idea of using dropout so if you remember what dropout was dropout is this notion of randomly killing off a certain percentage of neurons in each of the hidden layers now I'm going to tell you not how to use it as a regularizer but how to use dropout as a way to produce reliable uncertainty measures for your neural network so to do this we have to think of capital T stochastic passes through our network where each stochastic pass performs one iteration of dropout each time you iterate dropout you're basically just applying a Bernoulli mask of ones and zeros over each of your weights so going from the left to the right you can see our weights which is like this matrix here different colors represent the intensity of that weight and we element-wise multiply those weights by our Bernoulli mask which is just either a 1 or a 0 in every location the output is a new set of weights with certain aspects of those dropped out now all we have to do is compute this capital T times so we get T different sets of weights theta t and we use those T different models to actually produce an empirical average of our output class given the data so that's this guy but the reason I brought this topic up was the notion of uncertainty and that's the variance of our predictions right there so this is a very powerful idea all it means is that we can obtain reliable model uncertainty estimates simply by training our network with dropout and then at test time instead of classifying with just a single pass through this network we classify with capital T stochastic iterations of this network and then use them to compute a variance over these outputs and that variance gives us an estimate of our uncertainty
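a minimal sketch of those T stochastic passes, assuming a Keras model that already contains dropout layers; passing training=True at inference time keeps the dropout masks active:

```python
# Sketch of Monte Carlo dropout: keep dropout active at test time, run T stochastic
# forward passes, and use the spread of the predictions as an uncertainty estimate.
import numpy as np
import tensorflow as tf

def mc_dropout_predict(model, x, num_samples=50):
    # model: a tf.keras model that contains Dropout layers
    # x: a batch of inputs, shape (batch, ...)
    # training=True keeps the Bernoulli dropout masks "on" during inference
    samples = np.stack([model(x, training=True).numpy() for _ in range(num_samples)])
    mean_prediction = samples.mean(axis=0)   # empirical average over the T passes
    uncertainty = samples.var(axis=0)        # variance over the passes = uncertainty estimate
    return mean_prediction, uncertainty
```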
now to give you an example of how this looks in practice let's look at this network that was trained to take as input images of the real world and output predicted depth maps oh it looks like my text was a little off but that's okay so at the output we have a predicted depth map where at each pixel the network is predicting the depth in the real world of that pixel now when we run Bayesian model uncertainty using the exact same dropout method that I just described we can see that the model is most uncertain in some very interesting locations so first of all pay attention to that location right there if you look at where that location is exactly it's just the window of this car and in computer vision windows and specular objects are very difficult to model because we can't actually tell their surface reliably right so we're seeing the light from the sky we're not actually seeing the surface of the window in that location so it can be very difficult for us to model the depth in that place additionally we see that the model is very uncertain on the edges of the cars because these are places where the depth is changing very rapidly so the prediction may be least accurate in these locations so having reliable uncertainty estimates can be an extremely powerful way to actually interpret deep learning models and also provide human practitioners especially in the realm of safe AI with a way to interpret the results and also trust our results with a certain grain of salt so for the next and final part of this talk I'd like to address this notion of learning to learn so this is a really cool sounding topic it aims to basically learn not just a single model that's optimized to perform a single task like we've learned in basically all of our lectures previous to this one but to learn how to learn which model to use to train that task so first let's understand why we might want to do something like that I hope this is pretty obvious to you by now but humans are not built in a way where we're executing just a single task at a time we're executing many many different tasks and all of these tasks are constantly interacting with each other in ways that learning one task can actually aid speed up or deter the learning of another task at any given time modern deep neural network architectures are not like this they're optimized for a single task and this goes back to the very beginning of this talk where we talked about the universal approximator and as these models become more and more complex what ends up happening is that you have to have more and more expert knowledge to actually build and deploy these models in practice and that's exactly why all of you are here you're here to basically get that experience such that you yourselves can build these deep learning models so what we want is actually an automated machine learning framework where we can actually learn to learn and this basically means we want to build a model that learns which model to use given a problem definition one example that I'd like to use as an illustration of this idea so there are many ways that automl can be accomplished and this is just one example of those ways so I'd like to focus on this illustration here and walk through it it's just one way that we can learn to learn so this system focuses on two parts the first part is the controller RNN in red on the left and this controller RNN is basically just sampling different architectures of neural networks so if you remember in your first lab you created an RNN that could sample different music notes this is no different except now we're not sampling music notes we're sampling an entire neural network itself so we're sampling the parameters that define that neural network so let's call that the architecture or the child network so that's the network that will actually be used to solve our task in the end so that network is passed on to the second part and in that piece we actually use that network that was generated by the RNN to train a model depending on how well that model did we can provide feedback to the RNN such that it can produce an even better model on the next time step so let's go into this piece by piece so let's look at just the RNN part in more detail so this is the RNN or the architecture generator so like I said this is very similar to the way that you were generating songs in your first lab except now we're not generating songs the time steps are going over layers on the x-axis and we're just generating parameters or hyperparameters rather for each of those layers so this is a generator for a convolutional neural network because we're producing parameters like the filter height the filter width the stride height etc so what we can do is at each time step produce a probability distribution over each of these parameters and we can essentially just sample an architecture or sample a child network once we have that child network which I'm showing right here in blue we can train it using the data set that we ultimately want to solve so we put our training data in and we get our predicted labels out this is basically the realm that we've been dealing with so far right this is just a single network and we have our training data that we're using to train it we see how well this does and depending on the accuracy of this model that accuracy is used to provide feedback back to the RNN and update how it produces or how it generates these models
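a heavily simplified stand-in for the controller loop just described: instead of an RNN controller updated with reinforcement learning, this sketch just samples hyperparameters at random, trains each child network briefly, and keeps the best one; all names, ranges and settings here are illustrative assumptions:

```python
# Simplified stand-in for the sample-architecture -> train-child -> reward loop.
import random
import tensorflow as tf

def sample_architecture():
    # the "controller" here is just random sampling of a few hyperparameters
    return {
        "filters": random.choice([16, 32, 64]),
        "kernel_size": random.choice([3, 5]),
        "num_conv_layers": random.choice([1, 2, 3]),
        "dense_units": random.choice([64, 128, 256]),
    }

def build_child(arch, input_shape=(32, 32, 3), num_classes=10):
    layers = [tf.keras.layers.InputLayer(input_shape=input_shape)]
    for _ in range(arch["num_conv_layers"]):
        layers.append(tf.keras.layers.Conv2D(arch["filters"], arch["kernel_size"],
                                             activation="relu", padding="same"))
        layers.append(tf.keras.layers.MaxPool2D())
    layers += [tf.keras.layers.Flatten(),
               tf.keras.layers.Dense(arch["dense_units"], activation="relu"),
               tf.keras.layers.Dense(num_classes, activation="softmax")]
    model = tf.keras.Sequential(layers)
    model.compile("adam", "sparse_categorical_crossentropy", metrics=["accuracy"])
    return model

def architecture_search(x_train, y_train, x_val, y_val, trials=10):
    best_arch, best_acc = None, 0.0
    for _ in range(trials):
        arch = sample_architecture()
        child = build_child(arch)
        child.fit(x_train, y_train, epochs=2, verbose=0)    # quick proxy training
        _, acc = child.evaluate(x_val, y_val, verbose=0)    # validation accuracy = "reward"
        if acc > best_acc:                                  # a real controller would use this
            best_arch, best_acc = arch, acc                 # reward to update its own weights
    return best_arch, best_acc
```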
so let's look at this one more time to summarize this is an extremely powerful idea it's really exciting because it shows that an RNN can actually be combined in a reinforcement learning paradigm where the RNN itself is almost like the agent in reinforcement learning it's learning to make changes to the child network architecture depending on how that child network performs on a training set this means that we're able to create an AI system capable of generating brand-new neural networks specialized to solve specific tasks rather than us having to create a single neural network by hand just to solve the task that we want to solve thus this significantly reduces the difficulty in optimizing these neural network architectures for different tasks and this also reduces the need for expert engineers to design these architectures so this really gets at the heart of artificial intelligence so when I began this course we spoke about what it actually means to be intelligent and loosely I defined this as the ability to take information process that information and use it to inform future decisions so the human learning pipeline is not restricted to solving just one task at a time like I mentioned before how we learn one task can greatly impact speed up or even slow down our learning of other tasks and the artificial models that we create today simply do not capture this phenomenon to reach artificial general intelligence we need to actually build AI that can not only learn a single task but also be able to improve its own learning and reasoning such that it can generalize to sets of related dependent tasks I'll leave this with you as a thought-provoking point and encourage you all to talk to each other on some ways that we can reach this higher-order level of intelligence that's not just pattern recognition but rather a higher-order form of reasoning and actually thinking about the problems that we're trying to solve thank you [Applause]
MIT_6S191_Introduction_to_Deep_Learning | MIT_6S191_Convolutional_Neural_Networks.txt | [Music] hi everyone and welcome back to day two of 6s one91 introduction to deep learning So today we're going to be talking about a subject which is actually uh one of my favorite topics in this entire course and that's how we can give machines the sense of sight just like we all have so vision is one of the most important human senses right cited people rely on Vision quite a lot to navigate in their daily lives to express emotions and receive emotions and really just to understand everything about their world as we know it right so I think it's safe to say that sight and vision is a massive part of of our everyday lives and today we're going to learn about how we can build deep learning models that have that same ability right that are able to see and process information and what I like to think of in terms of what we want to do in this lecture in particular is to give machines this ability to know what is where by looking at a visual input right and I like to think of this actually as a as a simple definition really of of what Vision means at its core as well but in reality vision is much more than simply understanding what is where it also involves understanding so much about the the semantics and the Dynamics of a scene as well so take this scene for example we can build computer vision algorithms that can try to identify all of the different objects in this scene so for example here highlighting a couple of the yellow taxis and cars on the side of the road but when we want to really achieve Vision what we really are talking about is not just picking out these objects and localizing them but also paying attention to all of their surroundings as well paying attention to the pedestrians accounting for the details in the scene and we take all of this for granted because it's much more than just understanding what is where and localizing all of these elements but it's about anticipating the future as well so when you see the scene you see the pedestrians that are not only you know on the side of the road but you see some that are mid walking across the road you can even deduce that you know probably this Yellow Taxi is dynamic whereas this white van is stationary right all of this goes far beyond localization of these objects within the scene but actually goes into the understanding of all of these visual elements and how they dynamically interact with each other and deep learning is really bringing forward a revolution towards computer vision algorithms and their application so for example allowing robots to pick up on these key visual properties uh for navigation abilities manipulation abilities to change their world that navigate through it these algorithms like you'll learn about today become so mainstream that we have all of them in our pockets right they're fitting in our smartphones and we're using them actually every time that you take a picture it's being passed through a neural network on today's smartphones we're seeing some very exciting applications of computer vision now for biology and medical applications as well to pick up on subtle cues for cancer as well as autonomous driving we've been hearing about and accessibility for many many years so deep learning I think it's safe to say that deep learning has taken this field of computer vision by storm and it's because of this ability that we talked about in yesterday's lecture to learn from raw data massive amounts of raw data that it 
allows it to really accomplish this amazing feat so one example one of the first examples actually of computer vision really revolutionizing the ability for computers to uh complete this automated task is for facial detection and recognition another common example is now in the context of self-driving vehicles or autonomous vehicles where we want to basically take a simplified version of this task which is to take a single image as input pass that into an AI system and ask the AI system based on this one image how would you steer the car into the future right and then you keep doing that over and over again as you see more and more images you see a video you can steer the car over time so the key idea here is that this entire approach this entire self-driving car is being operated end to end by a single neural network and it's learned entirely from data by watching humans drive and and learning those operations those relationships between the images the vision and the actions that the humans actually steer the vehicles with deep learning is being used to diagnose cancer and other diseases directly from raw medical scans that uh human doctors would see and like I said we often take of these tasks for granted because we as humans do them every day maybe they require more specialization some require more specialization than others but we take them very for granted because every day we're relying on Vision as one of our primary senses to accomplish our our daily tasks right and in order to see that we need to ask ourselves first of all in order to build you know computers that can do the same thing we need to first by start by asking ourselves you know what are the properties similar to how we saw yesterday when we were talking about vision and we wanted to convert Vision to be amendable for computer uh sorry when we wanted to convert language excuse me in yesterday's lecture to be amenable for computers to process we had to ask ourselves how can computers process language and in today's class we're going to ask ourselves how can computers see right what are the form of images that images need to take in order for computers to properly ingest them process them and then make actions based on those uh pixels on the screen so in order to do that a computer you know what does a computer see when it looks at an image right it's just a a bunch of numbers in specific it's a two-dimensional array of numbers right suppose we have this picture here of Abraham Lincoln right this is a two-dimensional Matrix of numbers it's made up of pixels every pixel is just a single number because this is a grayscale image so it's a single number uh capturing the Luminosity or kind of the brightness of that pixel right so it's one number per pixel and you can think of this as just a two-dimensional array it's not even you can think of this as a two-dimensional array it is a two-dimensional array of numbers and we can represent that as a matrix right for each pixel in the image there's one element in this Matrix and that's exactly how we will pass in this information to a neural network right so we pass in images in the form of these matrices to a neural network now if we want to make this slightly more complex and now we have RGB images color images now every one number becomes three numbers because you need to represent how much red green and blue is in this in this pixel right so instead of a two-dimensional Matrix now you have a three-dimensional Matrix height by width by three right one of for each of these red 
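a small sketch of what those arrays look like in practice; the pixel values here are made up for illustration:

```python
# A grayscale image is just a 2-D array of numbers, and a color image adds a third
# dimension for the red/green/blue channels (values shown as 0-255 intensities).
import numpy as np

grayscale = np.array([[  0,  64, 128],
                      [ 32, 255,  16],
                      [200,  90,   5]], dtype=np.uint8)   # shape (H, W) = (3, 3)

rgb = np.zeros((3, 3, 3), dtype=np.uint8)                  # shape (H, W, 3)
rgb[0, 0] = [255, 0, 0]                                    # top-left pixel is pure red

print(grayscale.shape)   # (3, 3)    -> one number per pixel
print(rgb.shape)         # (3, 3, 3) -> three numbers (R, G, B) per pixel
```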
green blue channels so now we have a way to represent images to computers we now need to think about you know what are the types of computer vision tasks that we can perform with this ability right so there are two common types that it's very important for all of you to be very familiar with as part of this topic the first task is that of regression right so regression and then the other task is the task of classification so in regression our output takes a continuous value okay and in classification it takes a single class label think of regression as something continuous think of classification as something discrete so let's consider first the class of classification because that's a more intuitive problem in my opinion so in this task we want to predict a single class per image right so for example let's say we have a bunch of US presidents in a set and we want to predict a label for this image so we input this image of Abe Lincoln and we have a bunch of possibilities in terms of what this image could represent in terms of which president it could be of so in order to correctly classify this image that the neural network sees what would we have to do our pipeline would have to be able to tell what is unique about each one of these presidents right so what is unique about Abe Lincoln that makes him different than Washington or Obama right what are the unique patterns in each of these images that would be the distinguishing factor between them so another way to think about this task of image classification at a high level again is to think of the patterns or the features that are necessary in order to distinguish or disentangle between the different classes so classification is really just done by detecting these patterns or these features in the data right so if the features of a particular class are present then you would deduce that this data is of that class right so for example on the left hand side right if you see an image with eyes and noses and mouths we talked about this yesterday then you could deduce that these are probably images of faces right on the right hand side if you see images of you know doors windows and steps then these are patterns that are all common themes to houses and you would deduce that this is an image of a house right so the key thing here is to really think about solving this problem first of pattern recognition right so what are the pieces or what are the patterns that are available that we can pick up on and extract from our image set and we as humans can even start to say that okay if we want to detect faces what are these patterns can we write them down and define them such that you know we can try to manually detect all of those different patterns ourselves so if we go out and we say I want to go out and detect you know faces so let me first start by detecting eyes noses and ears and I'll look for those in the images first and then if I find them I'll call it a face right what are the problems with this first of all right it requires us as humans to manually define what are those patterns right eyes noses and ears and then it also requires us to have some detection system for those features as well right maybe those are much easier features to detect because they're much smaller they're less complex than a full face but still I think the first problem is very important because we need to actually detect and know what to detect right now the fundamental problem that's really
critical to this lecture in particular is that images remember they're just three-dimensional arrays of numbers and that images can take a lot of different types of variation they can take occlusions they can take variations in illumination they can take rotations even they can take intraclass variations so variations within a single class so in our classification pipeline if we wanted to build one manually what we would need to do is to be able to handle all of these different types of variations in a very robust way while still being sensitive to the interclass variation so we want to be robust against the intraclass so variations within classes but still maintain all of the variations across classes so even though our pipeline could use features that we as humans manually Define this manual extraction step would basically break down still at the detection task just due to the inevitable invariability or variability rather of all of these different types of settings that we could find eyes noses and ears in all of the different types of of uh faces that exist so now the natural question that you should all be asking is you know how can we do better than this right we want to build a way that we can both extract as step one and as step two detect which features are critical in the presence of different types of classes different types of images automatically and we want to be able to do this in a hierarchical fashion right because we want to start from very simple features and then slowly build up to things like eyes noses and ears and then build up from there to faces and this is exactly what neural networks do right so we learned in the first lecture neural networks are based on this approach that the whole concept is this hierarchical learning of features right directly from data to learn this hierarchy to construct a representation of an image internal to that Network so neural networks will allow us to directly learn those visual features think of those as the visual patterns that Define our our classes directly from the data itself right if we construct them cleverly right so now the next part of the class is all going to be about how can we construct neural networks cleverly such that we can give them this ability to learn these visual features for this special type of data right so visual data is very different than the other data that we saw yesterday both the time series data as well as you know in the first lecture we were just looking at just feature data right just inputting uh you know non-ordered sets of features to a neural network now we have a lot of structure pixels have order to them and beyond that they even have geometric structure right so how can we enable neural networks to capture all of that cleverly right so in lecture one we learned about this type of neural network right this is a fully connected Network where every model can have multiple layers hidden layers between the input and the output and where each neuron is given in a given layer is connected to every neuron in the prior layer and every neuron in the future layer so let's say we wanted to use just for argument sake right let's say we wanted to use this type of neural network for the image classification task so in our dimensional input right how would we pass this into a fully connected layer our only option would be something like we would have to flatten our two-dimensional image into a one-dimensional array and then every pixel we would just feed into an input in this neural network so right off 
the bat right so what are some problems with this approach right we have completely destroyed all of the geometric structure available in our image because we've collapsed this two-dimensional image down into one dimension we've basically thrown out all of the beautiful structure that exists in images and the fact that one pixel is actually very close to another pixel we've lost that automatically and additionally even worse than that we've now introduced a ton of parameters because our image if you think of even a relatively small image let's say you know 100 by 100 pixels that's really nothing in terms of images that's 10,000 inputs to your neural network right off the bat and that those 10,000 inputs will be you know connected to let's say another 10 ,000 hidden neurons on the next layer that's 10,000 squared neuron parameters and you've only got one layer of this network right 10,000 squared parameters with one layer extremely inefficient type of processing and completely impractical in in in in reality so instead we need to ask how we how we can not lose all of this spatial geometric beautiful structure that comes with images how can we keep all of that and instead we need to ask how we can build that spatial structure into our neural network instead right so instead of losing it on the input let's change our neural network to make it compatible with accepting spatial inputs so to do this let's now represent our two-dimensional image back as an array of of pixels right so it's a it's a matrix it's a two-dimensional Matrix just like we started as and one way that we can use the spatial structure inherent in our input is to connect let's think of like patches right in our original input to the next hidden layer so instead of connecting every single neuron to every single neuron in the next layer we're now just going to consider a single patch in our input image and connect those to a single neuron in the next layer so for example here's this red patch in our input image and we're going to connect those to the next layer right we'll talk about how that connection is done in a second but that is to say that this one neuron that I'm showing right here on the right hand side this one hidden neuron is only seeing a small particular region of our input image it's not seeing the entire image right that one neuron only sees a small image and that's going to allow us to leverage a couple of very nice properties number one is that we're going to leverage this property that inside that patch there's probably a lot of uh correlation between pixels it's unlikely that those pixels are going to be completely different there's likely going to be a lot of you know relationship because those are pixels cl to each other and that's we know that's the way images work so notice how again the only region of the input layer that influences this one neuron is only inside that red box now to Define connections across the entire input we can simply apply that same principle of connecting patches in the input layer to single neurons in the hidden layer right so we're going to do this now by just sliding that patch across our input to produce a analogous uh you know New Image you can think of it of IM of of neurons on the hidden layer and in this way we're going to take into account we're going to retain all of that beautiful spatial structure that came in our input but also remember that our ultimate task here is not just to retain structure which is all great but really what we want to do is we want to detect the 
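a quick sketch of the parameter blow-up being described, using a 100 by 100 grayscale input and a 10,000-neuron hidden layer as in the example:

```python
# The parameter blow-up made concrete: flatten a 100x100 grayscale image and
# connect it to a fully connected hidden layer of 10,000 neurons.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(100, 100)),     # 100 * 100 = 10,000 inputs
    tf.keras.layers.Dense(10_000, activation="relu"),    # fully connected hidden layer
])
model.summary()
# The Dense layer alone has 10,000 * 10,000 weights (+ 10,000 biases),
# i.e. roughly 10^8 parameters for a single layer on a tiny 100x100 image.
```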
features we want to detect the visual patterns that are present in this in this inp and by doing this and by cleverly waiting these connections from the input to this hidden layer right we can actually allow our neurons in that hidden layer to detect particular features right because every neuron is observing a small patch so its responsibility could actually be to detect the features in that patch so in practice this operation that I'm describing right I'm describing in kind of a high level intu fashion but the mathematical description of this exact operation is something called a convolution right so we'll think of this at first again still staying at the high level suppose we have this 4x4 patch let me start using more formal language and call it a filter right so this 4x4 patch let's call it a filter it has 16 different elements those are the weights of this patch okay so they have 16 different weights in this convolutional filter and we're going to apply the same 4x4 convolutional filter across all of our input and the result of that operation is to define the state of the neurons in the next layer right so then we're just going to shift our patch our convolutional filter two pixels to the right to get the next patch and we're going to get the next patch and next and we're going to keep sliding it across the image until we fill out our entire next hidden layer so that's how we should start to think ofu at a very high level and make sure that that first of all that's clear to everyone and then we'll keep building up the intuition from there okay so now you're probably wondering at this point you know we've introduced this convolution operation but how does the convolution operation actually allow us to extract these features right I think that's the missing piece that we haven't yet talked too much about so let me just make this more concrete by walking through a very um a very concrete example of the application of a convolutional filter so let's suppose here we we want to classify X's from a set of black and white images okay so here we're putting black as negative one and white is going to be positive one so the entire pixels There's No Gray it's either plus one or minus one and we're trying to detect if this image has an X in it or not so to classify it's clearly not possible to just compare these two matrices right so we have a matrix of a ground truth X we cannot simply compare this one to that ground truth X because you know it's slightly different and there are some variations between x's and not everyone writes the X the exact same way so simply comparing the two images it's not going to work and we want to be able to classify an X even if it's you know shifted shrunk grown deformed right all of these different variations we want to be robust to all of these different things so how can we do this so instead of comparing the entire image of an X to our entire new image let's again revisit the same idea of patching right we want to compare the images of our X piece by piece right and the important pieces that we are going to look for are the features right these are the features that we're going to be trying to detect so if our model can find rough features uh you know in our desired image that reflect the X then we should be able to confidently say that yes this is indeed an X Okay so this would also be a lot better than simply you know seeing the similarity between these different types of of x's right because now we're patch based so let's look at each of these features so 
each feature is indeed its own miniature image right each patch you can see three different of them three different versions of those patches as Min miniature images right there are small two-dimensional arrays of values and we'll use those features those filters themselves to be able to pick up on the features common to an X so what are the the patterns that truly Define an X defined by these small miniature images so all of the important characteristics of an X should really be represented by these fil filter sets and note that these smaller matrices that I'm showing on the top uh the top row here these are actually filters of the weight matrices that we will use to detect the corresponding features in the input image and and all that's left to Define really as part of all of this is to Define an operation that picks up where these features pop up in our images and then kind of return something when they do pop up and that operation again is the convolution right it's exactly how we defined it before but now let's just walk through it with this particular filter right so here we have a filter with this diagonal set of ones right on the top left you can see it right here so it has ones on the diagonal and it's back everywhere else so this is trying to detect a diagonal line present in our image right convolution preserves that spatial relationship between the pixels just by The Virtue that it's learning image features into these small little miniature uh squares of the input data so to do this what we're going to do is just perform this elementwise multiplication of our patch our miniature image with a patch on the input image right element wise multiply these two things together add a bias and pass it through a nonlinearity right so hopefully this should sound very similar to the lecture yesterday right but let's tie it let's step a little bit slower right so the result here is a 3X3 image when we element wise multiply our weights with our input patch we're going to get another 3x3 image if there is perfect alignment between our filter our features and our input this image is going to be all one right because everything matches with everything and when I said we add a bias and then we we sum up everything that's again following from yesterday's lecture but it's the same thing here we add up all of those answers and we're going to get one number which is the number nine in this case right and finally uh you know that's that's the output of the neuron at this patch location right not over the whole image right we now have to slide that patch across the entire image and let's let's see an example doing that as well so suppose now we want to compute the convolution of this 5 to by five input image right so that's the green Matrix on the left and we want to compute its convolution with this orange Matrix on the right so to do this we need to entirely cover our entire input image by sliding the filter over the image and Performing this element wise multiplication and then adding up the outputs over every single patch so let's see what this looks like so we're going to start first on the the upper left hand corner we element wise multiply this 3x3 patch with the entries of the filter we add them up and there's results in the first entry of our output or our hidden neuron state right on the right hand side the first output of our neuron is going to be four okay that's so this output I'm going to start using more formal terminology now that output is going to be our feature map right it's the 
hidden state of our next layer of our operation and then we start by sliding this filter over our input image so now we've slid one element over to the right and we repeat this process over and over and over again we now repeat it until we keep filling out this entire feature map on the right hand side and until it's all filled out and that's it the feature map now reflects you know exactly where in the input image is being activated is being most similar to this desired filter right this desired orange filter so we can actually see exactly and pick out based on which numbers are the highest in this feature map where there is the most alignment where there's the most match and is the activation of this filter itself and that's the mechanism of the entire convolution operation we can now see how different types of yep we can now see how different types of these filters can be used to produce different types of uh feature Maps as well so for example picture this uh image of a woman's face the output of I'm showing here three or let's see four different comu no sorry three the left side is the original the right the right three are three different types of filters convolutional filters that could be slid over this input image and simply by changing the weights in these 3x3 filters we get three very different types of feature maps that come out right this is just by applying a different 3x3 filter to that original input image of the woman's face so just by changing again this is a really important concept I'll say it again just by changing those those nine numbers the 3X3 Matrix in the filter we can achieve detection of enormously different types of featur Fe in our input image so I hope that now you can actually appreciate how the convolution operation is allowing us to do this detection ability and Leverage The spatial structure of our original input image in a way that's not detrimental we maintain all of that structure but we still are enabled to detect and extract these types of features and these concepts of preserving spatial structure and and local feature extraction using this convolutional operation are is really at the core of today's lecture and that's why we're spending so much time on it is because it's the core of neural networks that are being used for computer vision tasks so those neural networks unsurprisingly are called convolutional neural networks because the convolution is the backbone of those types of models and now you can start to think of you know how we can build these types of convolutional neural networks that can actually do and are designed for this original test that we started the lecture on right so that original test that we were really fixated on in the beginning of class today was image classification right so now let's see how we can take this convolution operation that we've just learned about and apply it into Network form such that we can learn models that can do this image classification task so let's start by thinking about a simple convolutional neural network or CNN for short right and let's think of it in the context of image classification so now the goal is to learn those features those filters instead of me giving the filters right like we did in the previous slides those 3x3 filters we're now going to ask you know our neural network to say those are your weights instead of before we were looking at those weight matrices now those 3x3 filters those are your weights can you learn those weights such that the filters that you learn are valuable 
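the sliding-filter computation from this walk-through, written out directly in NumPy; the 5x5 input and 3x3 filter values below are illustrative ones chosen so that the first output entry comes out to 4, as in the example:

```python
# A 3x3 filter slides over a 5x5 input (stride 1, no padding) and each output
# entry is the element-wise product of the patch and the filter, summed up.
import numpy as np

def convolve2d(image, kernel):
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    feature_map = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            patch = image[i:i + kh, j:j + kw]
            feature_map[i, j] = np.sum(patch * kernel)   # element-wise multiply, then add up
    return feature_map

image = np.array([[1, 1, 1, 0, 0],     # illustrative 5x5 binary input
                  [0, 1, 1, 1, 0],
                  [0, 0, 1, 1, 1],
                  [0, 0, 1, 1, 0],
                  [0, 1, 1, 0, 0]])
kernel = np.array([[1, 0, 1],          # illustrative 3x3 filter
                   [0, 1, 0],
                   [1, 0, 1]])

print(convolve2d(image, kernel))        # a 3x3 feature map; the top-left entry is 4
```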
pieces of information to extract from your input image right so so there are three main operations that are really important to to get across as part of the CNN number one we've talked about already this is the convolution this is the how we saw is the ability it's the operation that allows us to generate those feature Maps right so given a weight Matrix given a weight filter right the convolution is the operation that allows us to compute the feature map the next one is the application of the nonlinearity right so this is Concepts taken now from the prev lectures we're going to also use them in today's lecture after you apply the convolution you're going to apply nonlinearity so that your model is still nonlinear and can handle complex nonlinearities in the data itself the next one is one that we haven't talked about yet it's called pooling so pooling is a down sampling operation it allows us to progressively grow the relative size of your filters right so the 3X3 filter is very small but you can think of after every operation if you're downscaling the size of your image 3x3 is covering a relatively larger part of your original image because your image is shrinking right and then finally the last part of this model is that you need to now take all of those features that you've learned that have been extracted by your convolutional neural network and you need to now run a computation somehow you need to get a computation of class scores and you need to ultimately output a probability for your classes right because your ultimate task here is image classification that means that ultimately you need to be left with not features but you need to be left with probabilities of different classes so we can train this model on a set of images and we're training it to actually learn the weights for the filters of these convolutional features right of these convolutional layers um and of course at the very end we also want to train it to learn the weights of the fully connected layer which is the types of layers that we learn from lecture one right so right between these first so the input is on the left we have two convolutional layers that I'm showing for illustration here or sorry one convolutional operation followed by a pooling operation which we talked about in we'll talk more about in a second and then you need to compute the probabilities right so then at that point you're out of image space and you're talking about probabilities so you can go back to lecture one and actually use the fully connected layers and then we'll go now more in detail about each of these operations in specific so we'll go deeper into the architecture of a CNN so let's first consider the operation of the convolution right and see how the convolution operation can be expanded to create a convolutional layer right now of course we stack those together to make convolutional neural networks so just as before each layer or sorry each neuron in our hidden layer will be computed as a weighted sum between the patch and our input weighted by element wise weighted by our filter patch right that's the patch of Weights that we want to learn and what's special here is the local connectivity right each neuron in the hidden layer only sees a single patch in the input layer and that's really critical and you're reusing those weights as you slide across the uh the the input right so now let's actually Define the real computation right so what is the computation that's running as part of this layerwise oper so for a neuron in a hidden 
layer its inputs are those neurons in the patch from the previous layer we're going to apply this Matrix of weights right those are the filters again in this case we're showing this 4x4 red patch of filters and we do an element wise multiplication right with the the following layer we add the outputs we apply a bias then we activate it with a nonlinearity right remember our element wise multiplication and addition is that convolution operation that we had defined before but the layer the convolutional layer is a bit more than just a convolutional operation so this defines how neurons and convolutional layers can actually be processed right how they're connected and how they're processed in mathematical form but within a single convolutional layer we can also have multiple filters well why would you want to do that right so you would want to do that if you want your convolutional layer to not just learn one type of feature to detect but maybe you want to learn a whole set of different 3x3 features that you want to detect in your image so think back to the X example right to detect an X you need to not only detect let's say a crossing of two edges in the center of the X but you also need to detect a a diagonal line like this and you would also need to detect a diagonal line like that so there's three features that roughly make up an X and if you detect those three features and some connectivity with each other then you've found an X right and just like that the the convolutional layers may need to learn a set of different features not just one feature so the output layer after a convolution is not actually just one one two-dimensional image it's actually a volume where the height and the width are the spatial dimensions of your output but then the depth is the dimensions of how many filters or how many features we have learned right so we're sliding those features over our input and the stride here there's some comment about stride as well so the stride is just think of this as kind of how quickly you slide across right so the bigger your slide is you're going to slide and Skip more and that ultimately means that your feature map will be smaller right because you're not going to go very slowly over the the input image and we can also think of these connections of convolutions in the layer itself in terms of their receptive field right and that's a really helpful intuition that you can get about what these models are actually doing is to think about the locations in their input that every node uh in that path is connected to right so if you think about kind of like one neuron in in this part of your feature map which parts of the input image was it shown in order to produce that feature understanding so these parameters actually Define exactly that space IAL arrangement of the output right it defines the spatial arrangement of detecting all of those features in our output of a convolutional layer and to summarize I mean we've seen how the the connections Within These convolutional layers are actually capable of uh being defined right from a mathematical point of view and we've also seen how the outputs of this convolutional layer are a volume right they're not just one image but rather they're a volume of different features that have been successfully detected so really we're well on our way now with that Foundation we're well on our way to understanding the complete convolutional neural network or CNN architecture the next step is to apply that nonlinearity right so we apply that convolution 
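a small sketch of a convolutional layer with multiple filters producing a volume, and of how stride changes the output size; the input size and filter counts are assumed for illustration:

```python
# A convolutional layer with many filters produces a *volume* of feature maps:
# height x width x (number of filters). Stride controls how far the filter jumps.
import tensorflow as tf

inputs = tf.random.uniform((1, 28, 28, 1))       # one 28x28 single-channel image

conv = tf.keras.layers.Conv2D(filters=32, kernel_size=3,
                              strides=1, padding="valid",
                              activation="relu")
features = conv(inputs)
print(features.shape)    # (1, 26, 26, 32): 32 feature maps, one per learned 3x3 filter

strided = tf.keras.layers.Conv2D(filters=32, kernel_size=3, strides=2)(inputs)
print(strided.shape)     # (1, 13, 13, 32): a larger stride shrinks the feature maps
```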
operation now not with one feature but now with a volume of different features and as introduced in the first lecture we apply this nonlinearity because data is highly nonlinear right it's the exact same story that we've been seeing so far in this entire lecture in this entire class a common activation for for convolutional neural networks are rectified linear units or relu right um think of this as really just deactivating pixels in your feature map that are negative right so anything that's positive gets passed on and anything that's NE negative gets set to zero so think of it almost like a thresholding function right so anything above zero it just passes below zero it gets set to zero so the next key value after your nonlinearity we talked about earlier is pooling pooling is an operation that's used to reduce the dimensionality of your features such that you have relatively increased you have effectively increased the dimensionality of your filters right you don't want to just increase the dimensionality of your filters because that would blow up the number of Weights that you'd have in your model so instead you keep the number of filter weights the same but you reduce the dimensionality of your image progressively over the depth of the network a common technique for pooling is called Max pooling basically it means that you take the maximum of every small patch in here and you pass it on to a single number in the output unit right so here this is going to reduce the size of your feature maps by a factor of two right because you're pooling over 2 by two grids in your input there are many other ways that we can perform this down sampling or pooling operation you don't only have to take the maximum right there are many different ways there's pros and cons to all of them you can think about a few on your own another good example is average pooling right so instead of taking the maximum of these fours we could take the average and put it into the next element on the right okay so now we're we've covered basically a set of operations that Define not only the convolution but the full convolutional layer of a convolution neural network and now we're really ready to take that convolutional layer and start to think about how we can compose a full neural network out of those layers and perform full image classification or full image processing so with CNN's we can Define these operations to learn those hierarchical sets of features right so not just learning now one feature right one 3x3 patch but as we Cascade the different layers on top of each other and Progressive pull down the information we're learning this hierarchy of features where every feature in a future layer is building off of features that it has already seen in the prior layers so a CNN built for image classification can roughly be broken down into two parts and I like to think of it always like this as the first part being a feature extraction head right this is where you take as input an image and try to extract all of the relevant features from this image and then the second part is going to be uh trying to actually do your classification based on those features right so the second part is not going to be reliant on uh on convolution right you've already extracted your image features and now you're going to use the concepts in lecture one which will take those features and learn a classification or a regression problem from those features so for classification we're going to feed those outputed features into a fully connected 
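a small sketch of the ReLU and max pooling operations just described, on a made-up 4x4 feature map:

```python
# ReLU thresholds the feature map at zero; max pooling downsamples it by keeping
# only the largest value in each 2x2 patch.
import numpy as np
import tensorflow as tf

feature_map = np.array([[[-1.0,  2.0,  0.5, -3.0],
                         [ 4.0, -0.5,  1.0,  2.0],
                         [-2.0,  3.0, -1.0,  0.0],
                         [ 1.0,  0.0,  2.5, -4.0]]], dtype=np.float32)
feature_map = feature_map[..., np.newaxis]                  # shape (1, 4, 4, 1): batch, H, W, channels

activated = tf.nn.relu(feature_map)                         # negatives -> 0, positives pass through
pooled = tf.keras.layers.MaxPool2D(pool_size=2)(activated)  # each 2x2 patch -> its maximum

print(pooled.shape)               # (1, 2, 2, 1): spatial size cut in half
print(pooled[0, ..., 0].numpy())  # [[4.  2. ], [3.  2.5]]
```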
layer and perform the classification right how do we do that we can do that using this function that we saw from yesterday called a softmax right the output of a softmax is is guaranteed to be summing to one right and it's guaranteeing the values are between zero and one and all of the values will sum to one so it's a perfect probability distribution it's a perfect uh categorical probability distribution so let's see how now from a code perspective we've talked about this from the intuitive perspective we talked about it from the mathematics perspective now let's talk about it from the code perspective to make it even more concrete we can start by defining our feature extraction layers right so this is a set of convolutional layers we're going to start by defining in the first layer 32 different feature Maps right so that means we're going to try and detect 32 different types of sets of features in this very first layer you can see by this 32 represented right here and each feature is going to be 3x3 pixels large okay so next we're going to feed that those sets of features into our next convolutional uh layer right the next convolutional layer is going to be yet again another set of convolutional layers with pooling layers right now we have not 32 feature Maps anymore but we have 64 feature Maps so we're progressively growing the number of feature maps that we're building and we're progressively shrinking our inputs such that we can grow again the size and the receptive field that our weights are covering finally now we've extracted all of our image features we can flatten them down into a one-dimensional vector now which is totally fine because now those features are no longer spatial right we've we've gone hierarchically enough deep into the network such that spatial structure can be ignored and now flatten them and pass them onto a fully connected layer and the fully connected layer's job is to ultimately transform it into this ultimate 10 class labels let's say so let's suppose we're trying to classify into 10 10 different classes so we'll have 10 different outputs each output will be a probability of that image being in that class or not and we'll pick the class that has the highest probability output so so far we've talked about you know using cnns for image classification tasks but in reality the architecture that I just showed on the previous slide if you ignored that part about the softmax at the end it's actually an extremely generalizable architecture right you can use that architecture for a variety of different applications so I want to spend a few minutes now talking about what those different applications could be and the importance of them and and how that all fits into this framework that we've been describing so far as well so when considering CNN's for classification what we saw was that we had these two parts right we had a feature extraction part or a feature learning part and we had a classification part and that was the pipeline right what makes the convolutional neural network really powerful is the first part right it's the feature learning part and after that we can really change and we can remove that second part and replace it with a bunch of different heads that come after that because now we''ve kind of gone past the image portion we've already processed all of our spatial structure out of our inputs now we can actually use a variety of different types of neural network architectures that follow that for different types of tasks for so for example that portion 
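a minimal Keras sketch of the kind of architecture being described, with the input size assumed to be 32x32 RGB and the dense layer width chosen arbitrarily; the feature extraction part is the two convolution plus pooling blocks, and the classification head ends in a 10-way softmax:

```python
# Feature extraction (convolution + pooling) followed by a fully connected
# classification head producing probabilities over 10 classes.
import tensorflow as tf

model = tf.keras.Sequential([
    # feature extraction
    tf.keras.layers.Conv2D(32, kernel_size=3, activation="relu",
                           input_shape=(32, 32, 3)),          # 32 different 3x3 feature maps
    tf.keras.layers.MaxPool2D(pool_size=2),
    tf.keras.layers.Conv2D(64, kernel_size=3, activation="relu"),  # grow to 64 feature maps
    tf.keras.layers.MaxPool2D(pool_size=2),

    # classification head
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),          # probabilities over 10 classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```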
So, for example, for a different type of image classification domain we may want that later half of the network to be different, or maybe we're not doing image classification at all — maybe we're doing regression over different forecasting values of this image — and again we would want that head to look very different. We can introduce new architectures for tasks such as segmentation, or another task like image captioning, which we'll talk about today as well. So, building up from the classification task, let's look at an example. There has been significant impact in the medical and healthcare domains from using these types of convolutional neural networks to diagnose different diseases based on raw imagery coming from different types of medical scans. Deep learning models have been applied all across the board in these domains, and this paper demonstrated how a CNN exactly like the ones you've learned about today can extract the features relevant to performing this type of diagnosis and detection task, and can actually outperform radiologists at detecting breast cancer from mammogram images. Now, instead of detecting something, which is the classification problem, let's see what else we can do if we replace that last head. Classification gives us a binary prediction — yes or no, is this cancer or is it not — just by looking at what the image contains. But we can go much deeper into this problem and determine not just what an image is — for example, that this image has a taxi in it — but a much harder problem: where these things are in the image. So let's say we feed this image into another CNN. We may want to detect not only that there is a taxi, but to localize it within a box. That box is no longer a classification — it's a continuous-valued problem; the box could be anywhere in the image, so it's not a one-zero yes-or-no, it's where is the box, localize the box. And it's also a classification problem, because even after I've detected the box I need to say what's inside the box. So there's an implicit classification problem that exists for every image detection problem as well: you have these two tasks, the localization and then the classification within it. Our network needs to be much more flexible to handle these kinds of domains, because, number one, you have a much more complex problem — you have to do localization now as well — but on top of that you have a dynamic number of possible objects present in your image. You could have an example like this where your output is a single box — taxi at this x-y coordinate, with this height and width as your box coordinates. On the other hand, if your image has many objects, then potentially you have many classes as well. You could imagine situations where your output needs to handle extraordinarily flexible scenarios: not just one box coming out, but many different boxes and many different types of objects detected in those boxes, and so on. This is very complicated, because the boxes can be anywhere in the image and they can also be of different sizes.
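Before looking at detection pipelines, here is a minimal sketch of the "keep the feature extractor, swap the head" idea described above: the same convolutional backbone can feed either a softmax classifier or a regression head. The layer sizes and input shape are illustrative assumptions.

```python
import tensorflow as tf

def conv_backbone(inputs):
    """The shared feature-learning part: convolution + pooling, then flatten."""
    x = tf.keras.layers.Conv2D(32, 3, activation="relu")(inputs)
    x = tf.keras.layers.MaxPool2D(2)(x)
    x = tf.keras.layers.Conv2D(64, 3, activation="relu")(x)
    x = tf.keras.layers.MaxPool2D(2)(x)
    return tf.keras.layers.Flatten()(x)

inputs = tf.keras.Input(shape=(96, 96, 3))
features = conv_backbone(inputs)

# Head 1: a 10-way softmax classifier on top of the shared features
classifier = tf.keras.Model(
    inputs, tf.keras.layers.Dense(10, activation="softmax")(features))

# Head 2: a single continuous output, e.g. regressing some forecasted value
regressor = tf.keras.Model(
    inputs, tf.keras.layers.Dense(1)(features))
```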
So how can we do this? Let's consider a very naive way of doing it first, and we'll tie it back into the convolutional neural network architecture that we saw earlier. What would the naive way be? Let's take our image and draw a single box on it — a random box, this white square, with a random location, random height, and random width — and I'm now going to pass that single box through a convolutional neural network that I had previously trained for classification. So, taking only the first part of today's lecture, only doing classification, no localization of boxes, I'll pass that little white box into this CNN and ask it: is there anything there, and if there is something there, tell me what it is. If the CNN detects something, I will save the box and return it as part of my process, and then we basically just keep repeating this process for another box, and another box, and we keep doing this for all possible boxes that could exist in our input image. We pass every box individually through our CNN; if something is detected we return it, we store it, and we repeat. Now the problem with this is that it's completely intractable — this was a naive solution to the problem. In reality, even for small images it would be totally infeasible to go through all of these different boxes, and then for each box you'd have to pass it through your CNN to do the detection task. So how can we do better than this? Instead of picking random boxes, let's focus on that part first: could we use a simple heuristic to identify some places in the image which are more interesting than other places? So instead of just picking the boxes randomly, what if we pick the boxes using some heuristic? This is still very slow, but the good news is that now we only have to feed through a much smaller set of boxes, because the simple heuristic might generate, say, a thousand boxes where it says there's something interesting going on. There are ways to do this just by detecting blobs in an image — you don't care what's inside the blob, you just say, okay, I can see something's there, so I'll draw a box right there. It helps speed up the process because now you have fewer options to choose from, but it's still incredibly slow, because you still have to feed in each of these boxes: even if there are, let's say, 2,000 of them, which is still far better than before, that's still a lot of boxes to feed in. And on top of that, it's extremely brittle, because the network is completely detached from the rest of the scene around the box — you're cutting out this box and passing it to the network, and it can't really see the rest of the environment around the box; it has to go just on what you show it.
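A minimal sketch of the naive approach just described: crop each candidate box, run the crop through a classification CNN, and keep the boxes where something is detected with high confidence. Here `classifier` is assumed to be a pretrained classification model like the one sketched earlier, the box proposals (random or heuristic) come from outside this function, and input normalization is omitted — the point is to show why looping over thousands of crops is slow, not to be an efficient detector.

```python
import numpy as np
import tensorflow as tf

def naive_detect(image, classifier, boxes, score_threshold=0.9,
                 crop_size=(96, 96)):
    """Run every candidate box through a classification CNN (very slow)."""
    detections = []
    for (x, y, w, h) in boxes:                     # proposals: random or heuristic
        crop = image[y:y + h, x:x + w]
        # The classifier expects a fixed input size, so resize each crop
        crop = tf.image.resize(crop[None, ...], crop_size)
        probs = classifier.predict(crop, verbose=0)[0]
        cls = int(np.argmax(probs))
        if probs[cls] >= score_threshold:
            detections.append({"box": (x, y, w, h),
                               "class": cls,
                               "score": float(probs[cls])})
    return detections
```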
There are many variations that have been proposed to tackle the issues I just described, but one I'd like to very briefly touch on for you is called the Faster R-CNN method. It actually attempts to learn the regions instead of relying on those box heuristics. Think of the previous boxes I told you about, the heuristic ones, as a proposal system that's not based on a neural network; now we're going to see if we can learn some of those proposals directly using a neural network, and then use those proposals to pass on to our CNN. This means we can feed the image into this — let's call it a region proposal neural network — only once. We learn features that define the regions to be proposed, we directly get those regions, and we process them through the later parts of the model. Each of those regions is then processed by its own feature extractor individually, and each of them also has its own CNN head that accompanies it on the downstream task. The classifier finally comes at the very end to detect and classify each of those region proposals. And of course, because this is all neural-network based, you can backpropagate, so the weights of the region proposal network at the very bottom — the ones it uses to detect those regions — are also being learned; it should learn to detect better and better region proposals as you train the model. Those gradients actually flow backwards, so in order to detect and classify images better, the region proposal network will have to propose better regions as well, and you're training all of those objectives simultaneously. [Audience question] The number of features that are produced — will it rely more on the pooling method or the dimensions of...? Great question. The question is whether there are restrictions on the size of the feature sets being learned. It's a similar question to asking whether there are any rules or guidance on how large to make fully connected layers, how many neurons to put in a single fully connected layer — how many features we should learn for a convolutional neural network is a comparable question. The answer is that it greatly depends on the complexity of the task. If your task is very simple, like detecting this X — if you only want to detect X's — you probably need a very small number of features to detect those types of images. It also depends on the resolution of the images, but even if you make the X's much bigger, they're still fundamentally straightforward images to detect. So it matters a little on the image size and the scale of the images, but it matters mostly on the complexity of the task, and on how disentangled those features are from other classes of images. So, to recap this point once more: in classification, we output just a single class probability based on an input image. In object detection, we brought that to the next level: we don't want just a single class probability, we want to detect bounding boxes around different classes — localize the classes and classify them on top of that. Now we're going to go even further into this idea. We're still using CNNs here, but now we're going to predict based on our entire image as well: not only bounding boxes, but for every single pixel in our image, we want to classify that pixel. This is the task called semantic segmentation. The idea of semantic segmentation is to take as input a colored RGB image — you can see one on the left-hand side — pass it through a neural network, and learn a classification problem for every single pixel. So not just detecting one class for the entire image, but classifying pixel-wise — much more complicated than our original classification task. You can see this example of cow pixels being predicted as cows, sky pixels as a different class, grass pixels as a different class, and so on. The output is created — we still have the feature extraction part of the network — and the output part is now created using an upsampling operation that allows our convolutional features to be decoded back into an image. We use our encoder to learn the features from our image, and then we use an upsampling operation to reconstruct a new image that uses those learned features to produce the per-pixel probabilities. The output here is the same dimensionality as our input, and that's done through this kind of mirroring of the convolutional operations. Are these convolutional operations any different from the ones we saw earlier in the class? No, they're not — they're also learning features from the previous layer and computing feature maps as the next layer — but now, instead of pooling down, we're going to pool up on the pooling step. That's the only difference.
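A minimal sketch of the encoder-decoder pattern just described: convolution and pooling on the way down, then transposed convolutions (the "pool up" step) to bring the feature maps back to the input resolution, with a per-pixel softmax at the end. The depth, filter counts, and input size are illustrative assumptions.

```python
import tensorflow as tf

def build_segmentation_model(input_shape=(128, 128, 3), n_classes=3):
    inputs = tf.keras.Input(shape=input_shape)

    # Encoder: learn features while shrinking spatially
    x = tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu")(inputs)
    x = tf.keras.layers.MaxPool2D(2)(x)                                   # 128 -> 64
    x = tf.keras.layers.Conv2D(64, 3, padding="same", activation="relu")(x)
    x = tf.keras.layers.MaxPool2D(2)(x)                                   # 64 -> 32

    # Decoder: mirror the encoder, upsampling back to the input resolution
    x = tf.keras.layers.Conv2DTranspose(64, 3, strides=2,
                                        padding="same", activation="relu")(x)  # 32 -> 64
    x = tf.keras.layers.Conv2DTranspose(32, 3, strides=2,
                                        padding="same", activation="relu")(x)  # 64 -> 128

    # One softmax per pixel: output shape is (128, 128, n_classes)
    outputs = tf.keras.layers.Conv2D(n_classes, 1, activation="softmax")(x)
    return tf.keras.Model(inputs, outputs)
```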
Now, of course, this can be applied to even more types of applications in healthcare as well — segmenting not just cows and fields, but different types of cancerous regions, or identifying the different parts of the blood that are infected with malaria. Let's see one final example of a different type of application for convolutional neural networks, and now we'll turn to self-driving cars. Let's say we want to learn a neural network for autonomous navigation. Specifically, we want a neural network that will take an image, extract features from that image, and pass those features on to another neural network, which will then try to learn how to steer the car. In addition to the image of the street, of the road, of the surroundings, we also want to pass in some GPS maps — similar to what we would have as humans: if you navigate in a new scenario, you don't only have your eyes, you also have Google Maps on your phone. So we're going to pass in two kinds of images now: one from the street camera, but also basically a screenshot from Google Maps — you can think of it as a bird's-eye-view map of where the car currently is, coming from GPS. The goal here is now to directly infer not only the steering wheel angle the car should take at this time, but a full probability distribution over all possible actions this car could take, so that the car has a complete understanding of how to navigate through the world. This entire model that I described can be built using the techniques we've learned about in today's lecture — there's nothing new here. We're going to train this end to end by passing each of the cameras — here we were showing three cameras — through their own convolutional feature extractor, a hierarchical progression of convolutional layers just as we saw before. We take all of the learned convolutional features and concatenate them together into a single feature vector — one big vector of all of the features that have been detected by our convolutional layers — and then pass those into a fully connected network, which is just going to learn that probability distribution over our steering wheel angles. Again, this is all end to end.
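A minimal sketch of that end-to-end setup, assuming three camera images plus a bird's-eye-view map crop as inputs: each input gets its own convolutional feature extractor, the features are concatenated, and fully connected layers output a distribution over possible steering commands. The input sizes, layer widths, and the choice to discretize the steering angle into bins are assumptions for illustration — the system described in the lecture may parameterize a continuous distribution instead.

```python
import tensorflow as tf

def camera_encoder(name, input_shape=(66, 200, 3)):
    """One convolutional feature extractor per input image."""
    inp = tf.keras.Input(shape=input_shape, name=name)
    x = tf.keras.layers.Conv2D(24, 5, strides=2, activation="relu")(inp)
    x = tf.keras.layers.Conv2D(36, 5, strides=2, activation="relu")(x)
    x = tf.keras.layers.Conv2D(48, 3, strides=2, activation="relu")(x)
    return inp, tf.keras.layers.Flatten()(x)

inputs, features = [], []
for name in ["camera_left", "camera_center", "camera_right", "map_view"]:
    inp, feat = camera_encoder(name)
    inputs.append(inp)
    features.append(feat)

# Concatenate all learned features into one big feature vector
merged = tf.keras.layers.Concatenate()(features)
x = tf.keras.layers.Dense(256, activation="relu")(merged)

# A probability distribution over, say, 64 discretized steering-angle bins
steering_dist = tf.keras.layers.Dense(64, activation="softmax",
                                      name="steering")(x)

driving_model = tf.keras.Model(inputs=inputs, outputs=steering_dist)
```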
We never told the car to look for lane markers in order to drive, or to look for stop signs in order to stop — it learns all of this through backpropagation and by observing humans stop. It detects the features in the image that correlate to those stopping behaviors: it says, okay, humans are stopping, pressing on the gas or pressing on the brake when I'm detecting this red octagon, so let me build some features to detect that, and when I detect it I'll also initiate a stopping action. So all of this is really learned from scratch, rather than a human instructing or trying to train the autonomous vehicle on how to drive. [Audience question] Could you elaborate on how this CNN is different from the previous, earlier techniques of labeling images before one can analyze them? And the second question is, how do we apply CNNs to video — do we treat them as different arrays or vectors and do the same thing? So question number one is how these CNNs differ from the kind of manual feature extraction approach that used to be done in older machine learning techniques. It's very different, because the weights that we're learning as part of these CNNs are being trained with backpropagation, whereas manual definition of those features would require a human to define those weights. The key insight here is that now, instead of a human defining the weights, we show the network a bunch of data and those weights have to be optimized using gradient descent and all the same techniques we learned yesterday, to learn those weights and to learn those features. And the second question — yes, here we're showing just images. You can do it a couple of different ways: you could run a CNN for every step of your video, so on the next time step of your video you just ignore what you did previously and pass it through a new CNN — that's not optimal, of course. The better way to do it is to take the convolutional features that you learn on this step and use them as inputs to a recurrent model, like we saw in yesterday's lecture. It's a sequence-based model, but now you're taking the features extracted by your convolutional model — those represent the image features — and those are your inputs to sequence-based models, which could be RNN-based or Transformer-based. Those sequence-based models are the ones that give you the temporal processing abilities; these models give you the single-time-step abilities, and then you combine them with other models that expand the abilities to time series. [Audience] Like deepfake videos, like the President Obama video? Yes, exactly, so there has to be some temporal consistency. One big problem if you weren't to do that is that one time step is not necessarily going to be consistent with the previous time steps — every time step could be a good answer on its own, but really you want something that is consistent temporally as well. A big reason why these sequence-based models are extremely important in practice is that there are some applications that are truly image-based — like looking at a medical scan, there's no time series involved in that — but there are other situations, like driving a car, where you need consistency: you don't want your car jittering on the road because it's not seeing a temporally consistent input stream of video either. [Music] [Audience question, partly inaudible — yesterday's was detection, is it the same thing?] Same thing, same thing. The key is that the output — so the question is about
yesterday's video was generated whereas this video is uh is input right it's like an input versus an output question but the way to think of this is the output here is also a Time series right the output here is a Time series of steering wheel angles so it's a generated signal of steering wheel angles yesterday's video is a generated signal of video right both are generated signals of Time series right so you would need temporal consistency over both cases they're just not videoos so here the input's a video the output is a Time series in yesterday's example the output was was a video itself but both in all cases the output is video or sorry in all cases the output is time series yes how would the relative speed of the car be considered in in's well know steering wheel angle is part of it but obviously of course yeah so when you pass these so here we're just showing a CNN so this is considered to be instantaneous right and actually the video that you saw in the beginning of lecture of the self-driving car that was actually not a temporarily consistent model that was just CNN based so there's no RNN or Transformers to give temporal consistency just imagine steering wheel angle and pedal out right so there there actually you can you can do a lot even without the temporal consistency even though it is very important awesome okay so actually let me show you a video of of how this looks in practice right so this is a human actually entering a car it's being trained with no temporal consistency see this is purely CNN based it's seeing just one image at a time and it predicts a steering wheel angle from that note here that the vehicle is actually able to successfully navigate this environment and recognize when it's approaching intersections taking those intersections it's never driven through this environment before this is a big problem for self-driving cars today even still is that they need to be trained in the environments where they are tested this is one of the reasons why we have self-driving cars hubbed around several cities like Phoenix like San Francisco because these are where the self-driving cars have collected all of their data and they can't drive outside of those places so here you can see an example of it driving through a brand new environment because this is an end to-end model just learning the patterns in the environment it has picked up on just how humans drive and just like humans we can be dropped in a brand new city and with some basic navigation information right GPS and Google Maps we can actually navigate we can drive through those environments pretty reliably right and in a similar way we show something here so the impact of cnns has been very wide reaching as we've talked about in today's lecture but really beyond all of these examples that I've explained here the concepts that are really important to take to take home right are this idea that cnns perform feature extraction and the second part of the network can be kind of tuned and morphed for a variety of different applications right you can do steering wheel steering wheel prediction right that's a continuous value it's not classification you can do classification which is discreet you can do uh localization of of uh boxes and images right all of these are drastically different types of tasks but the modular architecture makes it possible for all of these things to be captured within one framework so I'll just conclude Now by taking a look at what we've covered in today's lecture so we started with the origins 
of computer vision we discussed how images are actually constructed in these models right they're constructed and represented as an array of brightness values and convolutions what are they how do they work and how they can be built up into this convolutional layer architecture building them up all the way from convolutional layers to convolutional networks and finally we talked about all the extensions of how we can apply these different types of uh architectures and feature extraction abilities to do different types of tasks range them from classification to regression to many other things in order to peer into what's actually going on under the hood with all of these different types of models okay so I'll take a pause there and in the next lecture we'll we'll talk about kind of extensions of these ideas for generative modeling as well and which is going to talk about not just how to classify what's inside the image but kind of go in the reverse direction of how we can generate brand new types of content brand new types of images that's a really uh very exciting lecture as well and then in the lab later today we'll kind of tie in all of these different pieces together seeing how that we can actually uh build models that can not only understand the underlying features within faces to detect them but also use that for uh fixing underlying problems that they have in those systems as well so we'll take a brief pause right now we'll switch speakers and we'll continue the lecture in a couple minutes thank you |
MIT_6S191_Introduction_to_Deep_Learning | MIT_6S191_Building_AI_Models_in_the_Wild.txt | [Music] all right so let's give a Short Round of Applause to welcome Doug and Nico and let them take it [Applause] away awesome thanks so much and just a quick sanity check people can hear in the back it's okay not too loud not too quiet two3 all works okay fantastic all right thank you so much a bit of background on us so uh as as AA mentioned my name is NCO um I'm on the the the founding team of comet started as a research scientist at elel um maybe five six years ago uh got into ml working on weather forecasting models started a company uh and then started building Comet and then maybe Doug do you want to say a quick intro sure um I guess uh if you're watching us on video uh go watch the previous talk from Dr Doug E uh that was talking about generative media it's funny so uh my name is Dr Doug blank and uh Doug and I were both uh at uh as he mentioned in grad school together at Indiana University uh Bloomington in the 9s and uh there weren't that many of us that were like committed to doing neural networks and Doug was one and I was another and uh Doug was focused on sequential uh problem and I was focused on making analogies and so it's funny to see where we are now I didn't know that Doug was going to be talking earlier uh so it's very funny to see how our lives have uh been very similar uh and we'll talk a little bit uh about that as we go so um yeah I'll hand this over to Nico uh we'll talk about some case studies from um the perspective of our company comet.com and then I'll talk a little bit about uh large language models U both uh fails and how you might deal with those in production at a company awesome thanks Doug um yeah so just a bit of background of a comment I think some of you may be uh uh may have been exposed to us this week and the um you know notebooks associated with this intro course uh we build mlops uh software uh to hopefully make your lives a bit easier building models um all of the founding team Doug myself and the other co-founders all came to this uh basically building for the pain points that we wanted to have solved the other two co-founders Gideon who was at Google at the time uh uh built for you so I'm hoping that over the course of this week as you maybe trained your first neural networks uh that was a bit easier um and we support some of the best research teams in the world and best universities in the world as as well um uh Mila which Doug EK mentioned uh MIT and and many many more and what we want to focus on today uh Doug and I uh my Doug not Doug at we're talking about as we prepared this this talk is um uh you've just spent a week uh diving really really really deep into machine learning into artificial intelligence um you've learned the basics you understand how these system works you understand how to build them how to code them uh we're not going to give you any any more information on those topics than you already received what we spend all of our time doing and what we think might be interesting for all of you now is we spend our time working with businesses companies Enterprises who are trying to take these models and do something useful with them right make money save money make things more efficient in the real world and as beautiful and elegant as all of this uh you know math and code looks in the real world things can get a little messy for reasons that you might not expect um so we're going to do a couple case studies um some interesting case studies 
from some of our customers you know models they've built in most cases frankly very very simple models where nevertheless things pop up that you wouldn't expect I'm going to see if this first week of learning about deep deep deep learning uh will provide any of you ideas as to how to diagnose these issues uh and then as Doug mentioned we'll talk about large language models uh again with this uh with a specific focus on some of the interesting things that emerge around large language models when you try to take them into production and do things with them so that's our plan uh just a quick primer these are very very simple this is like pomp to forehead material for anyone who's built these things but as we go through these uh case studies think about if it might be a data problem a model problem or maybe a system problem right and we'll go from there awesome okay so our first uh our first example is a customer of ours uh who we can't name but you can probably pick one of the two or three companies you expect this to be it's one of them um uh a team uh building a user verification model right so basic problem given a photo uh uploaded by any user detect whether the photo matches the actual user's profile picture uh the marketed purpose of this model is to minimize fraudulent accounts between you and me there's a couple other types of photos that commonly appear on dating apps that this was also being used to detect and minimize but we won't talk about those um and so think about this from a system perspective it's actually quite simple right what's our data set here images from the app right and these are manually labeled so they have a huge team of manual label labelers excuse me uh going through um and labeling true photos from user accounts and false photos from known fraudulent accounts very very simple you know neural neural classifier uh send it into production you have an app uh excuse me you have a model now uh you know running inference in real time in the application minimizing fraudulant accounts uh and they deployed this model and all goes well fraudulent accounts you know are are like by a large margin going down everyone's happy the business is happy they're spending way less money on manual tasks for identifying these things happy happy happy until all of a sudden it stops working right like after a couple of months we start to see this deterioration in model performance down 10% down 15% and to give you a little bit of a sense of like the economic impact of this degradation in model performance like a 10% decrease in model accuracy is an additional 10,000 manual moderations per day so from their support team it was like in the hundreds of thousands of dollars a month so it's it's you know it's an expensive problem um anyone have a first guess as to what what you would do first if you're working on this team of Engineers your model starts to get worse and worse you can just shadow that and I'll repeat no ideas okay we'll speed through the first attempt was just a retraining right you know you know say the answer is out there you just have to yell it uh so the first idea is look we probably just have some data drift right we had a we had some we there was some Corpus of data which we used to train this model offline we achieved like a high enough performance to deploy this model and now the data that our model is seeing out there in the wild is is is different right it's seeing new data let's gather the new data and let's build a new data set and we'll just re we'll retrain our 
model uh and it should work right little to no impact makes makes no impact on the performance of this model they're completely stuck um so retraining doesn't work uh think about the full system that's going into this model in production any other any ideas at all as to what might be happening here people gaming the mod better it's a good idea and it might have been happening anything else any other possibilities okay so what actually what they found out and this was a bit of a pomp to forehead moment for them was just a new iPhone and the new iPhone camera was taking photos with a significantly higher resolution than the ones that the initial model had been trained on that the that the existing model didn't have enough layers to actually C captur that information so it was doing horribly so the solution here was literally to add like two layers to the same architecture and re train and everything worked um so uh very very funny story I think the the only lesson here for people by the way just as a show of fans like who here is interested in going to work in ml and Industry after MIT as a data scientist machine learning engineer um yeah so there's no like Silver Bullet to the story there's no sort of like Al Al gor rithmic way to predict this just weird things are going to happen think about the full system end to end from even potentially the machine you know assembling the data that you may be using to build your model upon um okay second case study leading e-commerce company uh and our task here is listing and add recommendations so this is a very similar use case to one you might experience every day on Netflix or max.com right you log in you have a curated set of columns and rows of you know titles in that context their goal is to keep you in the app as long as possible keep you happy watching movies have that monthly $199 continue to deposit into their bank account from yours in this case they're trying to get you to buy clothes right um so how do you build a good system that's going to effectively retrieve and list sort of top candidate uh advertisements or just listings from the application in a way that fits uh a given user profile um so the specific problem uh as formulated by this one of our customers for a given user maximize the likelihood of a click on a served advertisement or listing uh and our goal is like maximizing Revenue right so you can look at every single user session um and you can build sort of like a label data set based on ones that led to a sale um so in this case our data set is embeddings of historic engagement data so this is a little bit interesting and there's like a there are some there are some there are a handful of good papers on this you if you want to learn more but um this data set is actually uh of much more than actual purchases so other activities that a user might engage in during a session on a website that are directionally uh suggestive of leading to a buy so search queries item favorites views at to carts and purchases are all sort of you know labeled as true samples in this case um build an embedding model uh that basically retrieves a top end you know uh listing candidates whether it's advertisements or actual you know items uh based on a given user profile right so again build this model train this model you have a service running in production you have a shopper um and this model works great there's actually no issue with the performance of this model in production The Peculiar thing here oh and let me just say I think I I think I 
actually already touched on this a second ago but just a bit more of a deep dive into uh the specific system here uh for retrieval and Rex ranking um we can come back if people have questions later um so there's actually no issue with the performance of this model there's no degradation there's no issue uh it's just really hard to beat like the team wants to build a better model uh they have a big team of researchers great researchers they're you know testing new architectures um you know uh you know Gathering a ton more data um it's not working like the production model is just better and they can't make it they can't build something better it's very hard to sort of get um to so to put it a different way they think they have breakthroughs they think they see these offline performance metrics so we have this this new model it's like 8% better they ship it doesn't do any better um so there's sort of two it's like a bifurcated issue here you have uh amazing results offline leading to no effect online um and then also even with lots more data and sophisticated models the online model keeps outperforming uh any sort of uh new candidate um so any thoughts as to why it's so hard to beat the production model is it just seasonal Trends it's a really good idea I guess in training their different local Minima so maybe it just happened to be like a really one possible yeah local Minima it's a good idea and the first idea sorry I'm supposed to repeat this because we're on video was it's a seasonal model another good idea any other ideas as to what might be happening here yeah the back consumer data like consumer segmentation consumer segmentation is a very good idea um anything else yeah distribution shifts of data over time and then um as you move all on in time then you have like distributions that are more accurate to that point in time yeah it's a very good idea as well these are all completely plausible as well yes are they just not deploying the the models quickly enough if someone searches for something they might want it then right now not a week from now yeah another good idea um all these are totally feasible uh uh explanations for what was happening uh to prevent this team from from from making a better model um what was in fact happening and this is again a bit of a palm to forehead moment there's a theme here whenever you go work for a big team of data scientists often times uh the solution is going to be a bit of a palm to forehead but a researcher realized after a while that the way that they were generating the training data set to train a new model uh the historical data that was being uh was retrieved using embeddings from the prod model itself so basically the data that you're using to build a better model is coming from embeddings that are being produced by the model in production so the odds are pretty good that the prod model will do well compared to that data um so it was very very hard to do so um so be it was basically like the the new model was being trained uh like with a mask on its face a little bit and it just wasn't going to it was not going to do better um and I want to make sure that I speed up here a little bit to leave the other Doug time to talk about his stuff so the interesting system solution that they that they devised here to prevent this was uh they actually built multiple retrieval agents in production and so and so this is also um sometimes talked about as Shadow models where you might deploy like four or five models in a production environment at once but 
only actually allow one of those models to serve predictions to an end user that's a great way to also build like four or five uh different candidates at the same time using real-time data from your user base without necessarily um over complicating a system and still having you know a fantastic user experience so by having multiple retrieval engines in production what it allowed them to do is they had multiple models that were each generating different sets of embeddings uh based on the user activity in the application and they would take the embeddings from one of those retrieval engines to train a new model and then just don't compare it to the model that made those embeddings right ultimately like somewhat simple so um those are two case studies uh that Doug and I find very very funny and often times uh again there's no Silver Lining or silver bullet here for you all but uh you should completely go work in industry and just be prepared for you know some uh non-standard or nonexpected things to start happening once you do um I'm going to let Doug hop up here and talk about llms next all right thank you Nico yeah of course so uh comet's been a company uh helping um other companies do machine learning for about five or six years now and so those two case studies are ones that we actually know about you know we have a lot of customers that don't actually give us insight into how they have something that doesn't work so those are are two that uh we were able to discover uh what I want to talk about now is a different kind of animal and uh although this is probably um we have more customers asking us about large language models than we have them ask about anything so we don't have uh those kind of case studies from existing customers right now however uh we can uh probably guess what uh some of these issues are uh that they're having and so what I want to do is first of all ask uh how many of you be honest now have used chat GPT or some other model raise your hand okay I see a few people that didn't raise their hand I'm not going to point you out but almost everybody has used one of these models uh so have you used it for anything uh useful and important and this is not an implication of guilt I know this is uh you know being students where you have perhaps something doe uh all right so people have actually getting used these that and that's great and uh for those of you have who have tried it have you seen anything funny or weird or wrong all right so okay almost as many people that have views chat GPT have seen something wrong or funny uh not Not Unusual uh so I what I want to do is uh think about how we could actually uh work with large language models in production um but in order to do that and also for you just to think about those uh llms s i I want to take a quick look at how they work and this is really 30,000 ft Bird's eyee view um so uh as you may know that uh systems like chat GPT are based on Transformers and they're not exactly the same there's been a lot of tweaks um with Transformers over the years and uh it's also true that we don't know exactly what goes in to some of the later models they they started out being pretty open and having papers that described all the Gory details not So Much Anymore um why might that be money yeah so uh indeed uh this is big business so if you have a system that's able to you know beat the competing chat GPT model or chatbot uh then uh you you're ahead of the game so um the uh gpts are often just decoder only systems but uh looking at this uh 
Transformer model uh I think the the one thing that I wanted to uh talk about is that uh and stress is the inputs are largely a block of things uh that get encoded all at once and then these decoders uh as we saw uh in the last uh talk uh you have things that get generated and you've seen this too in the web page if you've used chat GPT you get a little bit of and then it generates more and it generates more so it's iterative it builds and just uh a little bit of a this is an old Transformer um problem of uh translation but you can imagine this in the same kind of system used for uh chat GPT so you have the inputs that come in all as one block and every token gets uh converted into a set of numbers either through uh an embedding system and maybe those were learned previously or maybe they're being learned as the entire model uh is being trained uh and it outputs something uh and then that output comes back in as an input to the decoder so you've got that one big block of input plus now you have the output of whatever you just outputed and so this uh the previous output from your own model comes back in as input and so then you end up with a a complete translation or in chat GPT you get an entire paragraph uh of output now this has been called uh by some stochastic parrots and it is uh very statistical uh so the words the tokens that are being output are based on the statistics and you can ramp up that variability the the noise uh in the system so you get more creativity or you can uh make it more deterministic um and also uh starting with uh chat gpt3 uh people realize that you can add a little bit more of a context before the actual prompt so uh and you've probably seen these kinds of things that you can add to a prompt like get think about this problem step by step or you are a useful assistant and it turns out who would have ever guessed I wouldn't have that adding things like that can really affect the kinds of output and the quality of the output that uh these systems give uh and this is amazing and it's um it really I think it has big implications about how these systems work of course this is called prompt engineering now and you can probably get a job uh being a prompt engineer making very good money uh so what I want to do now is now that you uh have a a glimpse of how those systems work we can take a look at some actual fails and we can think about them reason about them why they fail why do why don't they work so the the first of these is uh this is uh the input is given a coded sentence and then uh the system is supposed to Output what the uncoded system is and uh you you may have if you've played with simple codes uh there's this uh rot 133 rotation 13 so you just take the character and you rotate at 13 more and so uh they tried this um task you know give it the input the coded input tell it it's rot 13 or rot two uh and let it the system see if it can come up with the decoded answer um so what do you think uh is this something that uh chat GPT can do or not yeah tokens aren't they can grou of it's not necessarily recognizing it as a ABC okay so this uh if I can paraphrase you it it's not a a task that they typically see uh it's not a um you know given it's not the kind of sequences that they were trained on well wait a minute they actually are trained on that you'll see if you uh train on the entire internet you'll probably find examples in Wikipedia of rote 13 and probably rote two and in fact they uh the people that did this study they looked at that in the Corpus how often 
rote 13 and rote two examples and it turns out that uh rote 13 was 60 times more more common so there actually are examples but will that uh varying disparity in probabilities play a role in how it does yeah absolutely so if you take a look at what the actual output is it's sort of amazing crazy that you can take a system that wasn't trained to do decoding and you can ask it to decode something and it does pretty good job um makes mistakes though and it makes mistakes on those where it doesn't see as many examples okay so that's a that's a minor fail uh here's one that is uh this one makes me a little sad I must admit so uh here's an example uh and the problem is uh so they start out with a very nice prompt you're a critical and unbiased writing Professor you know you're going to set the stage for this chat GP to uh be in the right mode when it grades this essay output the following format essay score score out of 10 and then here's the essay and it's followed by uh a an an essay this is an example from Chip Hinn and uh so is this going to work or this not going to work so this this is a tricky question uh it's going to work in that you're going to get output but but what does it mean so let's take a look at these uh and we don't have to read exactly what they are but uh the first uh thing is you know the one on the left essay score 7 out of 10 this essay has some Merit okay that's good if you're getting feedback for your professor and they start out with that you're you're on the right track okay let's look at the other one essay score four out of 10 okay not so good all right while the essay captures oh this is not a good start if you're getting feedback uh while the essay captures you know some it lacks depth and Analysis okay so thinking about the output and knowing a little bit about how these systems work what why might I be sad about this yeah was it the exact same input for both um yes it was so the question is was it the same exact input for both of these and yes so let's see if I can get some [Applause] yes inste kind of stereotyp what yeah so the question is was this trained to do this and I think the answer is no uh this is just regular chat GPT working so uh let me give you a hint um how what's the output how does the output come out of chat GPT iteratively right and so what's the first thing that these two things have to do they have to give you a score oh this one is a four out of 10 so the rest of the text that gets generated has to be driven by that and so it's going to whatever I didn't really read these to see I doubt that there's really anything useful in this uh feedback but it the the rest of the text is going to make sure it matches that score so if you got a 4 out of 10 then the text is going to validate that four out of 10 that makes me a little bit sad uh to know that that's the way these systems are working it's consistent it's giving you what you want so it's sort of works because it looks sort of like that essay was graded would would you say that that essay was graded I see some skeptical NOS um would you want your essay graded this way no okay all right here's uh another one and uh the question is who is Evelyn Hartwell can you answer this question so a good answer might be I don't know Evelyn Hartwell does not exist that's a madeup name so what do you think will happen if you ask this question to a chat bot it's going to give you an answer it's going to give you a great answer uh here are uh four different samples Evelyn Hartwell is an American Author I'm 
sure it's seen that phrase before. Evelyn Hartwell is a Canadian ballerina — that sounds very realistic. Evelyn Hartwell is an American actress known for roles in the television series Girlfriends and The Parkers — I don't even know if those exist. Evelyn Hartwell is an American television producer and philanthropist — she's a nice person. Okay, so you may have seen these before, and if you read the source down below, you see these are called hallucinations. I must admit that I hate that term, because it implies that sometimes ChatGPT is hallucinating — you know, maybe it got some bad input and it started hallucinating — but no, that's not true: it's just producing output the way it normally does. So I thought, what is hallucination, what is the agreed-upon definition? I didn't want to cite somebody, so I asked ChatGPT, and I can just put that text up there, and this is pretty good: when we say that an AI model hallucinates, we're referring to situations where the model generates outputs that seem unrealistic, nonsensical, or diverge significantly. But who knows that? ChatGPT doesn't know that — you have to use your own knowledge, or maybe you have to go searching for Evelyn Hartwell to realize that she doesn't actually exist, and you can only do that through your own investigation. So how could you possibly put this into production when you're getting output that is all over the map? You could ask the question: what's different about the model when it's hallucinating versus when it's not? There's actually some idea you might come up with — what might be a possibility of what's happening inside? The model doesn't know it from the outside, but if you looked inside, what might be happening? Any ideas? There's probably something: if you look at the output, it's very low probabilities. "Evelyn Hartwell is" — low probabilities, we'll pick one, and then it's "an" — okay, if you say "an," you'd better go with a next word that starts with a vowel — okay, "an American" — an American what? Low probabilities, let's pick one: "ballerina." So if you looked at the internal system, you might find that there are some ways to measure that internally. And then what should it do? There's actually some work showing that you can use the model itself to give you a pretty good estimation of how good an answer it is, and that's pretty fascinating. There are some measurements you can do, but they're very expensive and very time-consuming to compute. But here, I'll show you the code that's actually doing this: who is Evelyn Hartwell? I'm sorry, I don't have that specific information. Who's Nicolas Cage? Nicolas Cage is an American actor — okay, that one is good. The code that actually does that is using the chatbot itself, and I've got a link to this code. It's some Python code: basically it's got a little bit of a self-similarity score, it asks ChatGPT what it thinks about this, comes back with something very quickly — within half a second — and then it's hardcoded just to say, if the score is too low, "sorry, I don't know."
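The code linked in the talk isn't reproduced here; what follows is only a minimal sketch of the general idea it describes — sample the answer a few times, score how consistent the samples are with each other, and fall back to "I don't know" when the score is low. `ask_llm` is a placeholder for whichever chat API you use, and the crude string-overlap similarity stands in for whatever scoring the real implementation uses (made-up facts tend to vary between samples; real facts tend to be stable).

```python
from difflib import SequenceMatcher

def ask_llm(prompt: str, temperature: float = 1.0) -> str:
    """Placeholder: call whatever chat model/API you are using and return its text."""
    raise NotImplementedError

def consistency_score(question: str, n_samples: int = 3) -> float:
    """Sample several answers and measure how much they agree with each other."""
    answers = [ask_llm(question, temperature=1.0) for _ in range(n_samples)]
    pairs = [(a, b) for i, a in enumerate(answers) for b in answers[i + 1:]]
    sims = [SequenceMatcher(None, a, b).ratio() for a, b in pairs]
    return sum(sims) / len(sims)

def guarded_answer(question: str, threshold: float = 0.5) -> str:
    """Refuse to answer when the model's samples disagree too much."""
    if consistency_score(question) < threshold:
        return "I'm sorry, I don't have that specific information."
    return ask_llm(question, temperature=0.0)
```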
So I want to move to some epic fails now. You may have heard about the lawyer that submitted a ChatGPT-written motion to the court — did you hear about this? We'll make these slides available, or you can Google it and you'll probably find it, but this actually happened in New York, in Manhattan. The lawyer — overworked, I'm sure — used, or one of his assistants created, a 10-page brief citing more than half a dozen court decisions with names like Martinez versus Delta Airlines and Zicherman versus Korean Airlines. These cases don't exist: completely fabricated, made up out of whole cloth, hallucinated. And the lawyer was in court explaining what happened, and showed some output from ChatGPT, and ChatGPT had ended it with "I hope that helps." It did not. I don't know if he got disbarred or not, but it's a serious problem. Here's another one — I wouldn't say it's an epic fail, but you can see where this person is going with the prompt: if one woman can make one baby in nine months — oh, I recognize this pattern, this is math — how many months does it take nine women to make one baby? And of course ChatGPT doesn't know about babies and women and how you make one — you could say it does in one way, but not in the way that would let it answer this question — so of course ChatGPT did the math. And of course, no: nine women cannot make one baby in one month, it doesn't work like that. So, epic fail on that one. So in summary — and this is a little bit opinionated, if I can go with the first speaker, the other Dr. Doug — LLM text generation is a bit of a parlor trick, I would say. It's definitely doing something — it's learning patterns, and it's amazing that it's able to create this text — but we're using it in a way that's really simplified and a little bit stupid: we're using it to flip a coin and generate a word, flip another coin and generate another word, and it's all consistent. If it's music, it can actually sound really good, but when it's text, that means something very, very different. I actually heard somebody describe ChatGPT this way: "so I went to ChatGPT and I gave it a question, and it went off and did some research, and then it came back with..." No — that is not what ChatGPT does. It does not do research, it doesn't do logic, it doesn't do encodings; it detects patterns and statistics. LLM results may be racist, unethical, demeaning, and just plain weird. LLMs don't understand much of anything — one of my colleagues thought ChatGPT would be good at the New York Times Connections. Do you know this game? It's very fun: you get a bunch of words and you have to pick the way that they're connected in four categories. ChatGPT is not good at that; it's not the kind of thing it was trained to create. This is one that I've seen often: LLMs don't do well on non-textual inputs — and I mean, you can make them text, but a chess diagram, where you lay out a chessboard, or even tic-tac-toe, where you've got some X's and O's and you ask it what's your next move or who won — that's not a very good task for ChatGPT, because it's not a textual sequence problem; they don't deal very well with those. They can answer you — "yeah, you'll win" — no, that's not right. LLMs don't know what they don't know: there's no idea of truth, or of punching down, or of what's inappropriate — you know, don't talk about Nazis — it just doesn't know that. LLMs do give you back output, the kind of output that you expect, that you want, but what the output actually says is anybody's guess. So my opinion is that LLMs are amazing, they're stunning — Dr. Eck showed examples that he did back in 2002 of music generation, same kind of problem: it wasn't that good of music, but it's
amazing that these systems can do that so what are they good for today what what do you use chat GPT or or other being for yeah coding write pretty good code I I've looked at code I've never used it I write a lot of code for my work but uh sometimes I like let's see what chat GPT would do just to get a hint and but I I thought oh that's not bad that's there's a bug there uh what else yeah inspiration inspiration yes I use chat GPT a lot for uh just getting ideas uh I created a Blog on genealogy and I was I need a good name and I it didn't give me good names but it inspired me then so it part of the creative process I do know one uh group I lived in San Francisco and I went to a meet up there and there was a a group there that named their internal group based on a name that chat GPT had given him so definitely inspired yeah helping you rate Christmas cards I should say ah helping you write Christmas cards yeah my my wife is a professional in uh education and she has to write a lot of cover letters and personal letters and sometimes she does fundraising she uses chat GPT for you know generating the structure and then she'll go through and tweak it a little bit so definitely yes what else is useful in Long acts which are passed by Congress like the inflation reduction act oh summarizing large blocks of text I I would be a little bit uh worried about that I I I'm not sure I would trust that 100% custom GPT okay so that that's a different uh kettle of fish uh EX in indeed so there's definitely and I see lots of qu U ideas of ways that you can use chat GPT usefully and successfully and we can talk about that uh perhaps when the the talk is over but uh I guess I just wanted to uh to end with that uh that there are some good uses for chat GPT ones that are ethical uh and that are uh you know you can trust and you can be inspired by but uh not every task you know is appropriate to give to chat GPT grading assignments I would argue do not do that uh if you know anybody that's like grading with by giving it to chat GPT uh I want to talk to them uh you know one-on-one over coffee but uh I I don't think that's a good idea we we should talk about that uh more so uh to wrap up everything that uh Nico and I talked about today uh takeaways from uh this talk um our suggestion would be use a modular framework perhaps by a company that uh gives you the ability to manage your experiments keep track of them log all of your metrics your hyperparameters keep track of your model storage your deployment get a good hand on that model your production monitors have alerts uh and be willing to be flexible with what happens look at the data look at the training dig down and maybe add additional uh parts to that pipeline uh nothing is static in this world uh Dr Doug E and Dr Doug blank have been doing this for 30 years and I'm sure he would agree with me we would never imagine that these systems are able to do what they do today it's fast it's interesting it's exciting there is a job for you uh no doubt um but you have to be on stay on top of the processes and the practices and uh make sure that you use these in appropriate ethical ways Nico that I think that's the [Applause] end uh thanks for the great talk uh I have a quick question on the grading problem so uh you mentioned that okay if the uh if the chpt gives the grade first then they need to use all the paragraph to solidify their uh grades if you make the reverse like as I say to give the grades at last then will it just take what they uh what it use and then give 
Thanks for the great talk. I have a quick question on the grading problem. You mentioned that if ChatGPT gives the grade first, then it has to use the rest of the paragraph to justify that grade. If you reverse it, as you suggested, and have it give the grade last, will it just take what it has already written and then give the grade? I would love to read that paper at NeurIPS next year. Or, I hear you have some projects due tomorrow, is that true? That would be a great experiment to try: does it matter if you force the output to come one way first versus the other, does that have a large impact? I would think it does, but you never know with these models; you really have to test them out.

Hello, great talk. I just had a question: could you use corrected user output to train models to prevent hallucinations? With those very low probabilities, if you could tell ChatGPT "you were wrong," could that keep the hallucination from happening again? Yeah, there are definitely systems, custom models, that take ChatGPT and fine-tune it to give you specific answers. One of the big problems, though, is that if you're trying to monitor this in production, there are too many cases to review manually. You have to come up with automated measurements so that these cases pop out and you can go in and look at them. But once you do find them, you can definitely fine-tune models; that's a thing.

With the same prompt, why do you end up getting different outputs? Is the entropy of the system somehow disturbed by a previous run? What causes that variance? I could let you answer one of these. Okay, so this is built into a lot of these models. There's typically a value between, say, 0 and 1 or 0 and 10, and you can set it: set it high and it gives you a lot of variability, set it low and the output is more deterministic. I don't know exactly what that number does inside ChatGPT, for example, but I suspect that when the probabilities come out and the model is deciding which token to output next, a highly deterministic setting makes it go with the highest-probability token, just the max, and that's the output, so you get the same output every time. Move the setting up and it will sometimes go with a lower-probability word to give you more creativity and craziness. It's fun to play with; I believe it's a setting you can change on the ChatGPT dashboard.
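The "creativity" setting described above is commonly implemented as temperature scaling of the next-token distribution. The sketch below is a toy version under that assumption (the speaker himself says he is only guessing at ChatGPT's internals): the logits are divided by the temperature before a softmax, a near-zero temperature collapses to the arg-max token, and the chosen token's probability is also returned, since that is the kind of low-probability signal mentioned earlier for flagging possible hallucinations. The four-token vocabulary, scores, and the 0.05 warning threshold are invented.

```python
import math
import random


def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]


def sample_next_token(logits, temperature=1.0, low_prob_warn=0.05):
    """Pick a next-token index from raw model scores (logits).

    temperature -> 0 : effectively arg-max, same output every run
    temperature high : flatter distribution, more 'creative' picks

    Also returns the chosen token's probability, the kind of value you
    could threshold to flag low-confidence (possibly hallucinated) tokens.
    """
    if temperature <= 1e-6:
        # deterministic: always take the highest-probability token
        idx = max(range(len(logits)), key=lambda i: logits[i])
        return idx, softmax(logits)[idx]

    scaled = [l / temperature for l in logits]
    probs = softmax(scaled)
    idx = random.choices(range(len(probs)), weights=probs, k=1)[0]
    if probs[idx] < low_prob_warn:
        print(f"low-confidence token {idx} (p={probs[idx]:.3f}) - worth a second look")
    return idx, probs[idx]


# Toy vocabulary of four tokens with raw scores from an imaginary model
logits = [2.0, 1.5, 0.3, -1.0]
print(sample_next_token(logits, temperature=0.0))   # same token every time
print(sample_next_token(logits, temperature=1.5))   # varies from run to run
```

Running the second call repeatedly shows exactly the behavior described in the answer: a low temperature keeps picking the max, a higher one occasionally wanders to lower-probability words.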
Hi, in the first part we saw that some models didn't perform well when higher-quality images came in. How would you suggest working on that problem? Because it's a problem you would least expect to happen in production. Take the example we saw in the first part, the dating app: it recognized images fine, but when a higher-quality camera came along the models didn't perform well. So in any scenario, when you're working on your models, how would you go about solving that kind of problem, given that it comes from the area you least expect? Yeah, so, can you hear me by the way, is this on? Okay. We spent a lot of time with this team when they were dealing with this. You have to monitor the entire system and be aware that at some cadence, maybe every year or two, something extraneous to the system will happen, like a new iPhone launching. Unless you literally have that coded in, "every 18 months send myself an email alert that we should probably add a layer to our architecture because the iPhone 16 is probably out by now," it's hard to do deterministically. The lesson they took from it was to do an annual architecture review, almost from a philosophical perspective, where they do their best to think through all the possible external perturbations to a system that was totally bulletproof a year ago but might not be anymore.

I have a question about LLM business use cases right now. I'm curious what you're seeing from the business point of view of the market for LLMs, with all of these problems that you mentioned. I assume the marketplace for LLMs is very much evolving, and I'm curious what you're seeing from the customer side of things: what is the market for LLMs today? Yeah, I can take that. Without sharing specifics, there are some generalizable trends we're seeing, and you can probably infer this from the talk Doug just gave. As a general rule, teams are more wary of deploying LLM services in applications where the model might be making an important decision with a customer. If it's going to impact a buying motion or something else that hits top-line financial performance, it's less likely you'll see the model shipped. Teams are very eager to ship LLM models for internal use cases: coding, where we see a lot of teams building internal agents to speed up the productivity of their devs, and a lot of teams shipping models to do things like note annotation from internal meetings or summarizing videos, so if someone misses a meeting you just take the Zoom recording and email out a synopsis. So a lot of it is aimed at improving the efficacy and efficiency of internal teams. That's a rough generalization, but we're seeing it as a general rule. It's very similar to where we were five or six years ago when people were doing regular neural-network experimentation. Back then we would ask a team, how are you keeping track of and logging your results? "Well, I have an Excel spreadsheet, and I run this, and I copy and paste it." There were so many teams doing that, and I think there are probably a lot of teams doing that with LLMs now. So the first low-hanging fruit we're working on is just making it easy to log, and then to visualize, sort through, and find things: you take a model, it gives an output; you fine-tune the model, it gives a slightly different output; you want to be able to track that lineage of the model. All of that is what we now call MLOps, by analogy with DevOps, the software-engineering kind of systems, but for machine learning.

Thank you for your talk. I can imagine how in production it would be very useful if your LLM could ask you a question. Personally, I've been trying really, really hard to force ChatGPT to ask me a question. Is that due to it being trained on text, and is it possible to change the neural net so that it does ask questions? I don't know the details of how ChatGPT works internally, but as mentioned before, you can fine-tune it to do anything that you want. You can take a large language model and, if you wanted it to do something inappropriate, you could train it to do that; if you want to train it to ask you questions, you can do that. So this fine-tuning is really how you take a model and steer it in one direction or another.
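The point that you can fine-tune a model to go "one direction or another", for example to ask clarifying questions, usually comes down to supervised fine-tuning on examples that demonstrate the behavior. The snippet below only sketches what such a training file might look like, using a generic chat-style JSONL layout; the exact schema, file name, and upload step depend on whichever provider or open-source stack you fine-tune with, so treat all of it as illustrative.

```python
import json

# A handful of hand-written examples where the desired behavior is that the
# assistant asks a clarifying question instead of answering immediately.
examples = [
    {"messages": [
        {"role": "user", "content": "Plan my trip."},
        {"role": "assistant",
         "content": "Happy to help - which city are you starting from, and how many days do you have?"},
    ]},
    {"messages": [
        {"role": "user", "content": "Fix my code."},
        {"role": "assistant",
         "content": "Can you paste the error message and tell me which language and version you're using?"},
    ]},
]

# Write a JSONL training file: one example per line.
with open("ask_questions_finetune.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

print(f"wrote {len(examples)} training examples")
```

A real fine-tune would need far more examples plus an evaluation set to confirm the model now asks questions when information is missing but still answers directly when it already has enough to go on.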
I found the hallucination part very interesting, using the low probability to detect when it's making something up. I wonder if that also works for the opposite problem, when it spits out something verbatim from the training set. For example, the New York Times is suing over ChatGPT because some of the articles are very similar if not the same. So how do you tackle that problem? Is the probability then close to 100%, or does it need something different? What would the measurement be, how do you detect it outputting exactly the same thing as the training data? So it's taking text from the training set and outputting it exactly as it was. That's a hard problem, because the training set is the entire internet. You mean when you ask a question and it outputs exactly the same thing, and you want it to be a little mixed up, a little different, you want different outputs? Yeah, not textually exactly what I have seen before. I think you could pump up the randomness on that number, and if you change that number you start getting wilder and wilder deviations: change it a little bit, you get some variability; change it a lot, you get a lot of variability. So that is built into these models.

Okay, maybe one more question. Okay, come over. Sorry, she raised her hand first, an enthusiastic hand up, so that's the last question of the night. Okay, thank you. Hi, thanks for the presentation. I'm curious about your company: could you share what your company's differentiator is compared with your competitors? Sure, yeah, happy to talk about it. We were hoping not to make this a Comet sales pitch, so I'll honestly just say we'd love you to go test it out. Go use it for your own ML use cases, if you're trying to build models, if you're trying to go work in industry. In general, whether you use us or not is less important in this context. What we're arguing is that these are useful ideas to incorporate into the way that you build and manage models as a general rule. So whether there are open-source tools that you love, whether you want to use Comet or something else, use them all, see what makes the most sense for your use case and your model, and go with that. I'm not going to pretend I know what you're working on and tell you that ours is the best tool, but in industry, for all of the reasons we mentioned, weird things are going to happen. Keeping track of everything matters, building meaningful ML systems is hard, and it gets a lot easier when you're managing everything in this way. So I'm equivocating and avoiding the answer, but I would say: use what you want, test them all out, see what works best, but you should use something.

Thank you so much, Nico and Doug, again. This was an awesome talk and I think everyone really enjoyed it. Let's thank them once more. [Applause]