MIT_6S191_Introduction_to_Deep_Learning
MIT_6S191_Reinforcement_Learning.txt
okay, hi everyone, and welcome back to day three of intro to deep learning. today is a really exciting day because we're going to learn how we can really try to fuse two really important topics in this field: we're going to look at how we can marry these two topics of reinforcement learning with deep learning, which is the main topic of this class. so this field, this marriage between these two topics, is really exciting because it moves away completely from the paradigm that we've seen so far up until this day in this class, to date, right? so instead of the models that we train in deep learning being shown fixed data sets that don't change over time, and supervising them, teaching them based on those data sets, we're now going to look into problems that involve the deep learning model exploring and interacting with its data in a dynamic way, so that it can learn how to improve based on these scenarios and these environments that it's being placed in dynamically, and evolve over time. and the goal of course is to do this entirely without any human supervision. so previously our data sets were largely being created through some supervision by humans, but now we're going to look into scenarios where these models are interacting in an unsupervised manner with their environment and, because they're not being constrained by human supervision, can actually learn to go to superhuman performance. so this has huge, obvious implications and impacts in many fields, and just to name a few there's robotics and self-driving cars, but also, on the other side, there are strategic scenarios with game playing, and also problem-solving scenarios as well. so we're going to get a flavor of all of these different types of topics and how we can deploy these new types of algorithms in these new settings, in these different types of environments. so to start I just want to take a step back and see how — yes, go ahead — yes, okay, they'll be up soon, okay. okay, so I want to start and just take a step back first of all and think about how this topic of reinforcement learning is different, at a high level, from all of the other topics that we've seen so far in this class. so in day one we focused primarily on this class of learning problems called supervised learning. supervised learning, which was actually day one and also the first lecture in day two, covers this domain where you have input data X and you also have output labels Y. now the goal of supervised learning, as we saw in those first three lectures, is really to try and learn a functional mapping that can go from X to Y: you show a bunch of examples of input data, you show a bunch of labels Y, and you want to build some function in the middle that can transform X into a reasonable Y. so for example, this is like if I showed you this picture of an apple on the bottom and you were to classify this picture as "this is an apple," right? so that's supervised learning: you show a bunch of these pictures, you train it to say okay, these are apples, these are oranges, these are bananas, and when you see a new picture of an apple you have to correctly classify it. in the last lecture of day two we discussed a completely new paradigm of problems called unsupervised learning problems, and this is where you only have access to the data, only the X's — you don't have any Y's, you don't have any labels — and you want to try now to learn a model not to transform X's to Y's, X's to labels, but
now you're just trying to learn a model that can capture the underlying structure of X of your original data so for example going back to the Apple example on the bottom using unsupervised learning we can now show our model a lot of data of uh these types of different images and the model should try to learn how to essentially cluster them into different categories right so they should put images that look similar to each other close to each other in some space in some feature space or latent space as we saw yesterday and things that are dissimilar to each other should be very far away right so they the model here doesn't know anything about that this thing is called an apple but it recognizes that this thing shares similar features to this other thing right and they they should be close to each other so now today's lecture in in reinforcement learning we're going to talk about yet another completely new paradigm of learning problems for deep learning and this is called now where you have data not in the form of inputs and labels but you're going to be shown data in the form of states and actions and these are going to be paired pieces of data now states are the observations of your agent we'll get back to what those terms mean in a second and the actions are the decisions that that agent takes when it's sees itself in a certain state so the goal of reinforcement learning said very simply is to create an agent to create a model that can learn how to maximize the future rewards that it obtains right over many time steps into the future so this is again a completely new paradigm of learning problems that we've seen before there are no ground truth labels in reinforcement learning right you only have these State action Pairs and the objective the goal is to maximize some reward function so now in this apple example one more time we might see this picture of an apple and the agent would have to learn you know to eat this thing because it has learned that it gives it some nutrition and it's good for it if it eats this thing again it knows nothing that this thing is an apple it knows nothing about kind of like the structure of the world but it just has learned some actions to do with these objects by interacting with the world in a certain way so in today's lecture that's going to be the focus of what we talk about and we'll explore this in Greater detail but before we go any further I think it's really important for us to dissect and really understand all of the terminology because because this is a new type of learning problem there's a lot of new uh pieces of terminology that are associated to reinforcement learning that we don't really have in supervised learning and unsupervised learning so I want to start by firstly building up some vocabulary with everyone such that it will be necessary for the rest of this lecture and we'll keep referring back to all of these different pieces so I'll start with an agent an agent is uh something that can take actions right so for example in uh in let's say autonomous delivery of packages a drone would be an agent okay in Super Mario games Super Mario would be the agent right it's the thing taking the actions uh the agent or the algorithm itself is the agent right it's the thing that takes the action so in life for example we are all agents the next piece of vocabulary is the environment you should think of this as simply the world in which the agent lives and it can take actions in and the agent can sem send commands to its environment in the form of 
these quote-unquote actions. and we can also denote, just for formality — let's call A the set of all possible actions that this agent could possibly take. so in a very simplified world we could say that this agent, let's say it's in a two-dimensional world, can move forwards, it can move backwards, it can move left, it can move right. so the set of all possible actions is these four different actions, okay? now, coming back in the opposite direction, observations are simply how the environment interacts back with the agent. so observations are states, essentially, that the environment sends and shows to the agent, and how the agent observes the world. now a single state is just a concrete and immediate situation in which the agent finds itself, right? that's the immediate observation that the agent is present in. and finally, here's another new part specific to reinforcement learning, which is the reward. so in addition to providing a state from the environment to the agent, the environment will also provide a reward. a reward is simply the feedback which we can measure, or which the environment provides, to measure the successes or the failures or the penalties of the agent at that time step. okay, so what are some examples? so in a video game, when Mario touches a gold coin, you know, the points go up, so he gets a positive reward. let's see, what are some other examples? so if he were to jump off the cliff and fall to the bottom, he gets a very negative reward — a penalty — and the game is over. okay, so rewards can be both immediate, in the sense of the gold coin — touch the gold coin, you get an immediate reward — but they can also be delayed, and that's a very important concept. so you may take some actions that result in a reward much later on, but they were critical actions that you took at this time step and the reward was delayed, and you don't recognize that reward until much later — it's still a reward, and that's a very important concept. so we can now look at not only the reward at one time step, which is r of t, but we can also look at the total reward, which you can simply think of as the sum of all rewards up until that time. okay, so we'll call this capital R of t — that's just going to be the sum of all of the rewards up until time t, so from time zero to time t. if we expand it, it would look like this: it's the reward at time t plus the reward at time t minus one, and so on. so often it's useful to not only consider the reward at time t, or the sum of all rewards up until that time, but also to consider what's called the discounted reward over time. okay, so what does that mean? so the discount factor, which here is denoted as this gamma term — it's typically a fixed term, so you have one discounting factor in your environment, and your environment provides that, typically. you can think of the discounting factor as a factor that will dampen the effects of a reward over time. okay, so why would you want to do this? essentially, a discounting factor is designed such that it will make future rewards worth much less than immediate rewards. so what's an example of a scenario where you have this enforcing of short-term learning on an agent? so for example, if I was to offer you a reward of $5 today or a reward of $5 in one year, it's still a reward of $5, but you have an implicit discounting factor which allows you to prioritize that reward of $5 today over the reward of $5 in one year's time, right?
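Just to make that concrete, here is a minimal sketch of computing that discounted return — the reward values and the discount factor gamma below are made-up numbers purely for illustration:

```python
# Discounted return: R_t = r_t + gamma * r_{t+1} + gamma^2 * r_{t+2} + ...
gamma = 0.9                     # assumed discount factor, just for illustration
rewards = [1.0, 0.0, 0.0, 5.0]  # made-up per-step rewards r_t, r_{t+1}, ...

discounted_return = sum(gamma**k * r for k, r in enumerate(rewards))
print(discounted_return)  # 1.0 + 0 + 0 + 0.9**3 * 5.0 = 4.645
```

With gamma less than one, the reward of 5 that arrives three steps in the future counts for less than it would if it arrived immediately — exactly the "$5 today versus $5 in a year" intuition.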
and that's what the discounting factor does. the way you apply it in this framework, though, is that you multiply it by the future rewards as discovered by the agent, in order to dampen all of the future rewards that the agent sees. again, this is just meant to make it such that the future rewards are worth less than any of the immediate rewards that it sees. okay, now finally there's one more really important concept that's critical to understand as part of all of this, and that's called the Q function. and yeah, I think it's also critical that we think about the Q function in the context of all of the previous terminology that we've covered so far, so let's look at how the Q function is defined in relation to all of those previous variables that we had just explored. so remember that the total reward — it's also called the return of your agent — the return is capital R of t, so that's the discounted sum of rewards from that time point on. now, the Q function is a function that can take as input the state, the current state that you're in, and a possible action that you take from this state, and it will try to return the expected total future reward, or the return of that agent, that can be received from that time point up until the future. okay, so let's think about that a little bit more, just digest it. so given a state that you're in and an action that you take from that state, the Q function will tell you what is the expected amount of reward that you will get from that point on if you take that action. now, if you change the action a of t in the current state, your Q function will say, okay, this is actually going to return you even more reward, or maybe even less reward, than the previous one. so this Q function is actually a critical function that allows you to do a lot in reinforcement learning. so now the question is, I guess — let's for now assume that we're given this magical Q function, right? I give it to you and you can query it, you can call it at your desire. how can we, as an agent — so put yourself in the agent's point of view, and I give you the Q function — how would you take actions to solve an environment? what actions would you take in order to properly solve this environment? so, any ideas? yes — just keep maximizing the Q function, right. so what actions would you take? you would take the ones that would maximize the Q function. so at every time point, ultimately, what you have to recognize from the agent's point of view is that at every time step you want to create some policy. so what we're thinking of here is now this policy function, which is slightly different than the Q function. the policy function — we'll denote it with a pi — only takes as input s, and it's going to say what is the optimal action that I should take in this state. so we can actually try to evaluate the policy function using the Q function, and as was stated, this can be done just by trying to choose an action which maximizes the future return of that agent. so the optimal policy function pi — how to act in a current state s — is simply going to be defined by taking the action which gives you the highest Q value. so you can evaluate all possible actions from your Q function and pick the action that gives you the highest return or the highest future reward.
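As a tiny illustration of that idea, here is a minimal sketch of extracting a greedy policy from a given Q function — the Q function here is a hypothetical hand-written stand-in, not a learned one, purely so the argmax step is concrete:

```python
# Hypothetical Q function for a single state with four possible actions.
# In practice this would come from a learned model; the numbers are made up.
def q_function(state, action):
    made_up_q_values = {"left": 1.2, "right": 3.4, "forward": 0.5, "backward": -1.0}
    return made_up_q_values[action]

def policy(state, actions=("left", "right", "forward", "backward")):
    # pi(s) = argmax over actions of Q(s, a): evaluate every action, keep the best.
    return max(actions, key=lambda a: q_function(state, a))

print(policy(state=None))  # -> "right", the action with the highest Q value
```

That max over actions is all a deterministic policy needs once the Q function is known — the hard part, of course, is learning the Q function in the first place. Now, in this lecture, what we're going to focus on at a high level is really looking at two different ways that we can think about solving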
this reinforcement learning problem and thinking about this Q function in specific right so there are two broad categories which we're going to think about in terms of learning this Q function right because so far I've only said if I give you the Q function I haven't told you actually how we create that Q function and basically these two categories can be split up such that on one side we have value learning algorithms and on the second side we have policy learning algorithms so the second class of algorithms these policy learning algorithms in particular are directly trying to learn not the Q function as we previously said but they're directly trying to learn that final policy function right which if you think about it are is is a much more direct way of accomplishing your goal ultimately what the agent wants to do is see a state and know the action to take in that state right and that's exactly what the policy function takes policy function gives you but first let's focus on the value learning side of things and we'll cover that and then we'll build up our our foundations to go to policy learning so in order to cover value learning we have to first start by digging a bit deeper into this Q function right this is a really critical function especially when thinking about the value learning side of the problem so first I'll introduce this Atari Breakout game probably many of you have seen it before but in case not I'll just quickly describe it so the the concept of the game is that the agent is this paddle This horizontal line on the bottom it can take two actions roughly it can move left or it can move right or can can also stay you know not move at all but for Simplicity let's just say two actions left or right and the environment is composed of this you know two-dimensional World it has this ball that is uh being a projectile it's coming towards the paddle and the objective of the paddle is to basically move left and right to reflect and bounce the ball off because you want to hit as many of the blocks as possible on the top every time you hit a block on the top those colored blocks you will break off a block and you get a point so if the objective of the game is actually to remove all of the colored blocks on the top you can do that by keep moving around and keep hitting the ball and breaking them off one at a time so the Q function here is going to tell us the expected total return of our agent such that we can understand how uh or we can expect you know which given a certain state right and a certain action that this paddle could take as that state right and which of the two which of the Rewards or which of the actions would be the optimal action to take right which of the actions would be the one that Returns the most in expectation a reward by breaking off as many of those uh blocks on the top as possible right so let me show just a very quick example right so we have these two states and actions okay so for every state State you can also see the desired action by the paddle so here's State a which is where the the ball is basically coming straight onto the paddle and the paddle chooses the action of not moving at all right just reflecting that ball back up and we have state B which is the ball coming in at an angle almost missing the paddle and the paddle is moving towards the the thing it might miss it it might get it but it's trying to catch up and and grab that padal between these two State action pairs right so they they each have or they can each be fed into the Q function and 
if we were to evaluate the Q value of each of these State action pairs a question for all of you is which state action pair do you think will return a higher expected reward for this agent which one is a better State action pair to be in any ideas A or B so let's see between a raise sense okay B okay so someone that said a can can we explain the answer H okay yep seem pretty safe you will not lose your ball just back and obviously there will be some breaks there and do get point exactly so yeah the answer is it's it's a very safe action it's very conservative you know the um yeah you're you're you're definitely going to get some reward from this state action pair okay now what about B anyone answering for B yep uh if you choose b um there's a possibility that it could Ricochet and uh like you can hit more blocks with a um it just bounces up and down like and you get a minimum amount of points but you can probably like exploit it go around uh if you can get it above yeah exactly so yeah the answer is basically B is a bit more erratic of an an of a state action pair and there's a possibility that you know some some crazy things could happen uh if if you have this like very extreme fast moving thing that comes off to the side so this is just a great example of you know why uh determining the Q function and learning a q function is not always so obvious right so it's it's a very intuitive answer would be a but in reality we can actually look at how policy behaves in two of these different cases so first let's look at a policy that was trained to maximize situations like a and we'll we'll play this video forward a little bit right so this is a very conservative policy it's usually just hitting the balls straight back up to the top and learning to you know solve the game it does break off the balls or break off the colored things on the top but it's doing it in a very conservative manner right so it takes some time now let's switch over and let's look at B right so B is having this more erratic uh behavior and it's actually moving away from the ball just so that it can come back towards it and hit it with that extreme value just so it can you know break off something on the side get the ball stuck on the top and then break off and get a lot of free points basically so it's learned kind of this this hack to the system in some ways right to to uncover this right and it's just a good example of you know the intuition behind Q values for us as humans is not always aligned with with you know what the AI algorithms can uh can discover during reinforcement learning so they can actually create and find some very interesting Solutions maybe they are not always exactly as we expect so now let's see if if we know that Q function right then we can directly use it to determine the best action that the the agent should take in that scenario right we saw how that was possible before the way that we would do that is we would take you know all of our possible actions we would feed them through the Q function one at a time evaluate the Q value for each of those actions and then be able to determine which action resulted in the greatest Q value right so now how can we train a network to determine the Q function right so there are two different ways that we could even think about you know structuring and architecting such a network right we could have the state and the action be inputs to our system and then we output a q function Cube value for that state action pair that's going to be basically as you see on the 
left-hand side: it's a single number for a state and an action input to the network. or you could imagine something like the right-hand side, which probably is a bit more efficient, because you only feed in a state and then the model has to learn the Q value for each of the possible actions. so this would work if you have a fixed and small set of actions: if you're only going left or right, your output would just be two numbers, the Q value for left and the Q value for right. and often it's much more convenient to do it like this, where you just output the Q value for all of your actions, just so you don't have to run your network multiple times with multiple different action inputs. so how can we actually train this so-called deep Q network — a network that learns the Q value? we should think first about the best-case scenario for our agent: how would our agent perform in the ideal scenario? what would happen if the agent were to take all of the very best actions — at each step it's taking the best actions that it could? this would mean that the target return — this Q value, I should say — would be maximized, because it's taking all of the optimal actions, and this would essentially serve as the ground truth to train our agent. so our agent is going to be trained using a target Q value, which is going to be assumed to be the optimal one, the one that maximizes all of our agent's Q values. now the only thing left is that we have to use that target to train our predicted Q value. so to do this, all we have to do is formulate our expected return, our expected Q value, where we take all of the best actions: that's the initial reward that we start with, we select an action that maximizes the expected Q value at that point, and then we apply that discounting factor and compute the total sum of returns from that point on, and then we use that as our target. and now all we have to ask is, you know, what would the network predict in this case, so that we can learn to optimize this. and of course, for our network prediction, that's a lot easier, because now, at least in the second example, we have formulated the network to directly tell us the predicted Q value for each of these actions. so we have a ground truth and we have a prediction, as desired, and now all we have to do is formulate, you know, a mean squared error: subtract these two pieces, the target and the prediction, compute a distance from them — so maybe we square them, compute the norm — and then that's the piece that we want to minimize. we want to minimize the deviation between the predicted Q value coming from our model and that target Q value which is obtained on the left-hand side. so this is known as the Q loss, and this is exactly how deep Q networks are trained. so let's take a second now and just summarize that whole process end to end, because that was a lot in there. our deep neural network is now going to see only states as inputs, and it's going to output the Q value for each of the possible actions. so here we're showing three actions — going left, staying stationary, and going right — so our network will have three outputs: it will be the Q value for each of those actions taken with this input state. so here the actions, again, are left, right, or stay stationary.
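To make that loss concrete, here is a minimal sketch of a deep Q network and its mean-squared-error Q loss, written with TensorFlow/Keras as an assumed framework — the architecture sizes, discount factor, and batch of transitions are all made up for illustration, not the exact setup from the lecture:

```python
import tensorflow as tf

num_actions = 3           # left, stay, right
gamma = 0.99              # assumed discount factor
state_dim = 4             # made-up size of the state vector

# Q network: state in -> one Q value per action out (the "right-hand side" design).
q_net = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(state_dim,)),
    tf.keras.layers.Dense(num_actions),
])
optimizer = tf.keras.optimizers.Adam(1e-3)

def q_learning_step(states, actions, rewards, next_states):
    """One gradient step on the Q loss for a batch of transitions.
    states/next_states: float32 [batch, state_dim]; actions: int32 [batch]; rewards: float32 [batch]."""
    # Target: r + gamma * max_a' Q(s', a'), treated as fixed "ground truth".
    next_q = q_net(next_states)
    targets = rewards + gamma * tf.reduce_max(next_q, axis=1)

    with tf.GradientTape() as tape:
        q_values = q_net(states)                               # Q(s, a) for every action
        batch = tf.range(tf.shape(actions)[0])
        chosen_q = tf.gather_nd(q_values, tf.stack([batch, actions], axis=1))
        loss = tf.reduce_mean(tf.square(tf.stop_gradient(targets) - chosen_q))

    grads = tape.gradient(loss, q_net.trainable_variables)
    optimizer.apply_gradients(zip(grads, q_net.trainable_variables))
    return loss
```

In a full implementation the target is usually computed with a separate, slowly updated copy of the network and transitions are drawn from a replay buffer, but the loss itself is exactly the target-minus-prediction squared error just described.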
now, the policy — how an agent would act in this state: if you now see a new state, you have to figure out what action to take. you take all of your outputs, all of the Q values, you pick the maximum one, because that's the one that has the highest expected return, at least as predicted by your network, and you pick the action that corresponds to it. so for example, here we have a Q value of 20 for moving left, a Q value of three for staying stationary, and a Q value of zero for moving right, so here we would pick the action of left. and finally, we would send this action back to the environment — we would actually take that left action, the state would update through our game engine, so we will now see a new state, and then this process will repeat again: the network will receive this new state, process all of the Q values for those possible actions, and then pick the best one again. yes — what about when the action is not one specific value? — yep, we'll get to that in a second. yes — so do we compute, let's say, Q of s and a one step at a time, or do we have to play the game until it ends? great question. so you actually have to compute it at every step, because you can't play the game until you know what to do on the next step. so you see a state, you need to do something, and the thing that you do requires running this Q network so that you can compute the action. so that's why, to train the network — in the beginning these networks don't know anything, they're going to output gibberish as their Q values, and that's why you actually have to train it using the Q loss, so over time those values will improve and over time you're going to predict better and better actions. great. yeah — how do you know when to update the next state? so for example, if you go to the left, the ball might just bounce around really wacky for a while, and then at some point you would need to read the state, but you might end up reading it prematurely and doing something suboptimal because the ball is still up in the air, or whatever. right, so the question is about, you know, when to update the state, as I understand it. so the state update — this is happening by the environment, actually, so you don't have control of that. you send an action, the game always updates the screen, so that's your new state, and you have to make a new action — you don't have a choice, right? even doing nothing is an action; being stationary is an action. so that is kind of not up to the agent — life goes on even if we don't take actions, and the states are constantly changing regardless of what we choose to do. okay, so with this framework in mind, we can now see a few cool examples of how this can be applied. so a few years ago, DeepMind showed actually the first really scalable example of training these types of deep Q networks and how they could be applied to solve not just one type of game — they actually solved a whole variety of different types of Atari games, by providing the state as an input on the left-hand side, and then on the right-hand side it's computing the Q value output for every possible action. you can see here it's no longer three possible actions — this is from like a game controller, so you have a bunch of different actions — still not a huge number of actions; we're going to think about, if the action space is really large, how can you build these networks to handle that. but for now, I mean, this is already a really impressive result.
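Pulling the pieces of that loop together, a minimal sketch of the act-observe-repeat cycle might look like the following — the `env` object is a hypothetical stand-in with a gym-style reset/step interface, and `q_net` is assumed to be a network like the one sketched above:

```python
import numpy as np

def select_action(q_net, state):
    # Run the network once on the current state and take the argmax over Q values.
    q_values = q_net(np.asarray([state], dtype=np.float32))[0]
    return int(np.argmax(q_values))

def play_episode(env, q_net):
    """Act greedily with the current Q network until the environment says we're done."""
    state, done, total_reward = env.reset(), False, 0.0
    while not done:
        action = select_action(q_net, state)     # the network decides left / stay / right
        state, reward, done = env.step(action)   # the environment updates on its own
        total_reward += reward                   # running score for this episode
    return total_reward
```

In practice you would interleave this acting loop with the training step from the previous sketch (and mix in some random exploration early on), but the agent-environment cycle itself is just this: see a state, run the network, act, repeat.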
because what they showed was that if you test this network on many different games — in fact, all of the Atari games that existed, they tested on — for over 50% of the games, these deep Q networks were able to surpass human performance, just with the very basic algorithm that we talked about today, using a basic CNN that takes as input just the pixels on the screen of an Atari game, with no human supervision, no ground truth labels. the agents just learn how to play these games: they play a game, they update, they reinforce their learning, and they evolve over time, and just using that algorithm that we talked about for deep Q learning, on over 50% of the games they surpassed human performance. the other games were more challenging — you can see those on the right-hand side — but given how simple and clean this algorithm is that we've just talked about in this lecture, it's still remarkably impressive, to me at least, how this thing even works and beats humans on even 50% of Atari games. okay, so now let's talk very briefly about the downsides of Q learning. a couple were already mentioned just organically by some of you, but let's talk about it now more formally. number one, the complexity: the scenarios where Q learning can work in this framework that we've just described really involve scenarios where the action space is very small, and, even more important, the action space has to be discrete — it has to be a fixed number of action categories. so the actions would have to be like left versus right; it's not how fast to move to the left or how fast to move to the right — that's a continuous number, it's a speed, not a discrete category like left or right. okay, the other one is the flexibility of the policies: in the Q value case, the Q values are needed to compute your policy — your policy, remember, is the function that takes as input a state and computes an action — so that function requires your Q function, and it's done by just maximizing that Q function over all of the possible different actions. that means that inherently you cannot learn a stochastic policy with this type of framework that we've discussed so far; the policies have to be deterministic, because you're always picking that maximum. so to address these kinds of challenges, let's now move on to the next part of the lecture, which is going to be this second class of reinforcement learning algorithms, called policy learning, or policy gradient, algorithms. so again, in this class of algorithms, we're not going to try and learn the Q function and use the Q function to compute a policy function; we are directly going to try and find the policy function. so pi of s: it's a function that takes as input the state, and you compute, or you sample, an action by sampling from that distribution of the policy. okay, so to cover this, let's again very briefly reiterate the deep Q networks: here, a state comes in, all of the Q values for all of the actions come out, and you pick the action which maximizes your Q value. now, instead of outputting your Q values, what we're going to do is directly optimize a policy function.
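As a quick sketch of what that output head looks like for a small discrete action space — the probabilities below are made-up placeholders for whatever a trained policy network would actually produce:

```python
import numpy as np

actions = ["left", "stay", "right"]

# Hypothetical output of a policy network for one state: a probability
# distribution over the actions (e.g. the result of a softmax layer).
action_probs = np.array([0.9, 0.1, 0.0])
assert np.isclose(action_probs.sum(), 1.0)   # a valid distribution sums to one

# Instead of taking the argmax, we *sample* an action from the distribution,
# so roughly 10% of the time the agent will stay still even though "left"
# has the highest probability.
rng = np.random.default_rng(0)
sampled = rng.choice(actions, p=action_probs)
print(sampled)
```

That stochastic sampling is what gives the agent its extra exploration compared to the always-take-the-max Q-learning policy. okay, so our policy function is also going to take as input our state — the input here, the architecture so far, is the same that we saw before — and pi of s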
which is going to be our policy function that's our that's our output that's going to be the policy distribution right let's now think of it as a probability distribution which governs the likelihood that we should take any action given this state okay so here the outputs are going to give us a desired action or I should even say the likelihood that this action is the best action given the state that we are in so this is actually really nice because the outputs now are probabilities right which has a few nice probabil properties because those probabilities that an action is the one that we should take tells us very important things about our state so if we were to look and predict these probabilities for this given State let's imagine we see something like this right so if we were to see 90% the left uh action is the best possible action we could take 10% that doing nothing is the best thing and 0% that do going right is the best thing we can aggregate these into a uh into a prediction function which is essentially now we want to compute our action how would we do that we would not take the maximum anymore now what we're going to do is sample from this probability distribution right so even with this same probability distribution if you ran at a 100 times on expectation 10% of them should not be the maximum right about 10% of them should have the agent stay still so note that again this is a probability distribution so all of those action or all of those uh outputs have to sum to one and I want to spend a moment here and maybe this is a question for all of you what are some some advantages or especially from the flexibility side right now of formulating the architecture in this kind of way does anyone see any any concrete advantages yes we could potentially learn some amazing new technique that maybe we couldn't think of ourselves right so uh the idea is it can it can learn some things that we couldn't think of ourselves this is true and the reason is because the sampling of these actions is now going to be stochastic right you're not always picking what the network thinks is the best action you're going to have some more exploration of the environment right because you're going to constantly sample even if your answer is like this 90% 10% 0% you know you'll still pick the the non- maximum answer 10% of the time right that allows you to explore the environment a lot more yes so for nonzero sum games policy uh gradient you so you could actually use both types of algorithms in non-zero sum games but what you would want to make sure is you know from a flexibility point of view right so the differences that we're seeing so far are much more focused on the actions and the sampling of those actions right than the environment itself at least so far yes there's a fundamental thing here that when you say Q orig is an expectation of you know s give s today and then how can expectation onless you some distribution or some assume form of the future so I'm missing how you can have a well defined CU level on a p without specifying what do mean by expectation value over the future there so many different features right so yeah this is where the the Q learning and policy learning differ greatly right so in the Q learning side you do have an expectation because you do actually roll out the whole game until the end of the game right in the policy learning side there is no expectation except from the point of view that you have this now distribution over the different actions that you can take and that gives 
you an expectation as well, but that, you know, is separate from the learning side. okay, so getting back to the advantages of policy learning over Q learning, and digging into this a bit deeper. one advantage here is just in the context of discrete versus continuous, which was mentioned earlier in the lecture. so discrete action spaces — what does that mean? it means that we have a finite set of possible actions that could be taken at any possible time. so for example, we've been seeing this case of pong, or breakout, and the action space is indeed discrete: you can go left, right, or stay stationary. but let's assume I reformulate the actions of this game a little bit and now I make them continuous: instead of left, right, or nothing, I'm going to make it a speed at which I should move on that horizontal axis. so now if we were to plot the probability of any possible speed being the best speed, the one that will maximize the reward of this agent, it might look something like this — it's now going to be a continuous probability distribution, as opposed to the discrete categorical distribution that we saw previously. so let's dig into that key idea and think about how we could define these types of continuous policy learning networks as well — how can you have your model output a distribution? and we can actually do this using the policy gradient method. so instead of predicting a probability for every possible action you could take in the state that you're in, let's instead learn a parameter, or the parameters, of a distribution which defines that probability function instead. okay, so let's step through that a bit more. in yesterday's lecture we learned how our latent spaces could actually be predicted by a neural network, but those are continuous latent spaces over Gaussian random variables: we're not learning the PDF over every possible Gaussian random variable, but instead we're just learning the parameters, the mus and the sigmas, that define those Gaussians. so now let's do something similar: instead of outputting the probability for every possible action — and in a continuous action space there are an infinite number of possible actions — let's learn a mu and a sigma that define the probability distribution for us. so for this image, for example, that we see on the left-hand side, we can see that the paddle needs to move to the left, so if we plot that distribution predicted by this neural network, we can see that the density lies on the left-hand side of the number line. and it's telling us not only that it should move left, but it also tells us some information about, you know, how quickly, with what speed, with what urgency it needs to move to the left. so we can not only inspect this distribution, but we can now sample from that distribution to get a concrete action that the paddle should execute in this state. so if we sample from this Gaussian with mean minus one and standard deviation 0.5, we might get a value, for example, like this, which is minus 0.8. it's just a random sample — every time you sample from this distribution you'll get something different. and again, even though this is a continuous extension of the same ideas that we saw previously, it still follows that this is a proper probability distribution: if we were to take the integral over this probability output of the neural network, its integral will sum to one. this is a valid probability distribution.
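Here is a tiny numerical sketch of that continuous policy head — the mean and standard deviation below are the example values from the slide (mu = -1, sigma = 0.5), standing in for what the network would actually predict:

```python
import numpy as np

rng = np.random.default_rng(42)

# Pretend these two numbers came out of the policy network for the current state.
mu, sigma = -1.0, 0.5

# Sampling the Gaussian gives a concrete continuous action: a signed speed
# along the horizontal axis (negative = move left).
action = rng.normal(loc=mu, scale=sigma)
print(f"move with speed {action:.2f}")    # e.g. something near -0.8

# The density integrates to one, so this is still a proper probability distribution.
xs = np.linspace(-5, 5, 20001)
pdf = np.exp(-0.5 * ((xs - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
print(np.trapz(pdf, xs))                   # ~= 1.0
```

The network's two outputs, mu and sigma, are all that's needed both to act (by sampling) and, later, to score how likely the chosen action was under the policy.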
okay, so let's now turn to look at how policy gradients could be applied in a concrete example. so let's revisit this RL terminology learning loop that we had looked at earlier in the lecture, and let's think of how we could now train this use case. so let's imagine we want to train a self-driving car, and we want to use the policy gradient algorithm. the agent here is the car, the vehicle itself; the environment is the world in which the vehicle drives; the states are all of the sensory information that the vehicle sees, so it's coming from the cameras, the lidars, the radars, etc. the actions — these are the steering wheel angles; we'll think of it simply just in terms of steering wheel angle, let's not think about speed for now, let's only think about steering wheel angle. and the reward — let's think of a very simple one, which is just: drive as far as you can without crashing. so we're not optimizing for obvious things like comfort or safety; let's just say drive as far as you can without crashing — implicitly you will care about safety, because you can't drive far without being safe to some degree. okay, so let's start with this example. how would we train a policy gradient model in this context of self-driving cars? let's start by initializing the agent on the road: we'll put it in the middle of the road and we'll start this system off. we'll now run a policy — the policy here is a neural network — so we'll run the policy forward; this is called a rollout. so we'll execute a rollout of this agent through its environment, and we'll see the trajectory of the vehicle over the course of training. so this policy has not been trained, so it didn't do very well: it kind of veered off and crashed on the side of the road. what we're going to do here is now take that rollout, I should say, and record all of the state-action pairs: at every state, we're going to record what that state was and what action our policy took at that time step. now, something very simple — the optimization process in policy gradients is just going to assume the following: I'm going to decrease the probability of everything that I did in the second half of my rollout, because I came close to that crash, and I'm going to increase the probability of doing everything that I did in the first half. okay, so is this really a good thing to do? it's probably not the optimal thing to do, because there could definitely be some very bad actions that you took in the beginning of your rollout that caused you to get into a bad state and that caused you to crash, so you shouldn't necessarily increase those beginning actions. but in reality, remember, you don't have any ground truth labels for any of this, so this is a reasonable heuristic that, in expectation, actually works out pretty well. and we repeat this process again: you decrease the probability of all of those things that came towards the crash, and you increase the probability of everything that came in the beginning, and now the policy, or the agent, is able to go a bit farther. and you do this again, and you keep repeating this process over and over again, and you'll eventually see that the agent performs better and better actions that allow it to accumulate more and more rewards over time, until eventually it starts to follow the lanes without crashing.
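A minimal sketch of that record-and-update loop is below — the `ToyDrivingEnv` is a made-up stand-in environment (not the lecture's simulator) just so the rollout bookkeeping is concrete, and the policy update itself is not shown because the exact loss term is described right after this:

```python
import random

class ToyDrivingEnv:
    """Made-up stand-in for the driving environment: every episode eventually ends in a 'crash'."""
    def reset(self):
        self.t = 0
        return 0.0                                  # a fake sensor state
    def step(self, action):
        self.t += 1
        crashed = self.t >= 10 or random.random() < 0.05
        reward = 0.0 if crashed else 1.0            # reward: one unit of distance driven
        return float(self.t), reward, crashed       # next state, reward, done

def collect_rollout(env, policy):
    states, actions, rewards = [], [], []
    state, done = env.reset(), False
    while not done:
        action = policy(state)                      # e.g. sample a steering angle
        next_state, reward, done = env.step(action)
        states.append(state); actions.append(action); rewards.append(reward)
        state = next_state
    return states, actions, rewards

def returns_from(rewards, gamma=0.95):
    # Discounted return from each step onward: small near the crash, large early on.
    out, running = [], 0.0
    for r in reversed(rewards):
        running = r + gamma * running
        out.append(running)
    return list(reversed(out))

env = ToyDrivingEnv()
states, actions, rewards = collect_rollout(env, policy=lambda s: random.uniform(-1, 1))
weights = returns_from(rewards)   # each recorded action gets weighted by these returns
```

Each recorded action then gets its probability pushed up or down in proportion to its weight — the early, far-from-the-crash actions get the large weights, the actions right before the crash get the small ones — and that's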
exactly the policy gradient algorithm I've left out kind of the exact loss function but we'll talk about that in one second and the remaining question here is you know uh you know how can you actually update the policy based on those rollouts that you're observing So based on those State action pairs that are being observed by the model as a course of the rollout we're going to now increase the probability of everything that came in the beginning and decrease the probability of everything that came at the end so this is really the the the last critical question of how do you do this in practice right so let's look at the loss function for you know doing that optimization decreasing the probability of everything that came close to the crash increasing the probability of everything that came in the beginning so the loss consists here of of two terms right so the first is this log likelihood term right this is going to be selecting an action that was chosen for this particular State and then the second is going to be that we multiply it by the total discounted reward R of T okay now let's assume that we get a lot of reward for an action that had a very high log likelihood so it had a very high probability and we got a lot of return a lot of reward so that's great because when you multiply those two numbers together you're going to try and optimize that action even more to happen on the next time but now let's assume you took an action that resulted in a very negative return that would mean that you're now going to try those will be multiplied against each other and now you'll try to decrease the probability of those actions from happening again in the future so when you plug you know that loss into the gradient descent algorithm right that we've been using as part of all of the trainings for all of the neural networks that we've seen in this course we're going to see now the policy gradient algorithm or sorry the policy gradient itself right so it's the gradient of that policy term uh highlighted in blue and that's exactly how this this algorithm got got its name so I want to talk a little bit now on how we can extend this approach to perform reinforcement learning in real life right and I want to think specifically in the context of this use case that we had gone through with self-driving cars for a second right so if we wanted to deploy this exact algorithm as we've described it in the real world which of these steps I guess on the the left hand side here this training algorithm which of these steps do you think are the ones that would kind of break down in real life yep I heard y go ahead recording all states yeah I think the so the recording the states is okay right so why why do you think this um because I feel like when we're looking at the car example yeah um the path is continuing and so like the path bu that we choose to look at we can constantly decrease decreasing get so like the size of the the magnitude of the state that can keep increasing yes yeah so as you basically the the point is that if the if the car gets really good over time then recording the states will be harder and harder because you have to store them all in memory that's true but it would also be a problem in in simulation as well right you you'd have that same problem so there are tricks to get around that maybe you don't store all of the states you only optimize a subset of them yes we run a simulation with the original states in SE cont like driving yes yes so reproducibility is a problem if you're in 
simulation; if you're in the real world, it's even harder to reproduce the rollouts, that's true. exactly. so yeah, number two is the real problem in the real world: if you wanted to do number two, it involves crashing the car, by definition, and that's only for one step of the algorithm. so it's a lot of bad things — you don't want to train your car to learn how to drive just by collecting a lot of data of crashes. so even though in theory all of this sounds really great, in practice, you know, it doesn't work so well for real-world, let's say, autonomy situations. one really cool result that we had created actually here at MIT was this simulator, which is hyper-photorealistic: it looks like this — these are simulated scenes of an autonomous vehicle driving through these environments, and these are basically environments that you can train in, in a way where it is safe to crash. these are built from real data of the real world, so you may even recognize Mass Ave in some of these — it's right here at MIT. and the beautiful thing is that if you can have a simulation environment which is safe for doing step number two, for doing this crashing behavior essentially, then this type of problem is really well suited. the only problem then becomes the photorealism: you need your simulation to be accurately faithful to reality. but in these types of hyper-photorealistic simulators, like this one that we had created here at MIT, the agent can indeed be placed inside the simulator, and it can be trained using the exact same model and the exact same loss function that we have just seen as part of this class, and then when you take those policies trained in simulation, you can put them on a full-size car in the real world. so now this is running in the real world, and they can drive through roads that they've never seen before — a very powerful idea, going straight from sim to real — and this was actually the first time that an end-to-end neural network was trained in simulation and could drive in the real world on a brand-new road, and that came right here from MIT. okay, so now we've covered the fundamentals behind value learning and policy gradient optimization for reinforcement learning approaches — what are some more exciting applications and advances? for that, I want to turn very quickly to the game of Go, which has gotten a lot of interest over the past few years. the game of Go is basically where our RL agents want to learn how to execute actions in this board game — it's a strategic board game — to test their performance against human champions, and what was achieved several years ago was a really exciting result. because, you know, this game of Go has a massive number of possible states — it's a 19-by-19 board game. the number of possible — or, let me start first with the objective: the objective of the game is to basically collect more area on the board than your opponent. so you have a two-player game, white and black pieces, and the objective is to basically occupy more territory than your opponent. even though the environment looks a lot simpler than our real-world environment — it's just a two-dimensional grid, and it has 19-by-19 squares — Go is extremely complex even despite all of that, and that's because of the huge number of different possible board positions itself. so in fact, on a full-size board there's a greater number of
legal positions than there are atoms in the universe right so the objective here is to see if you can train an AI that can Master this game of Go and put it up against human human agents and see if you could uh beat even the gold standard of humans so how can we how can we do this so a couple years ago this was an approach that was presented and the way it works is that you can develop a reinforcement learning policy that uh that is actually at its core it's not at all that much different from exactly the techniques that you've learned about today so first you'll start by training a neural network that will watch games of humans playing the game of go right so you'll train a network that is now supervised this is not a reinforcement learning model you'll just record a bunch of humans playing the game of Go you'll record the states that they were in and the actions that they took and you'll train a supervised model that will mimic those actions based on those States so this is so far no reinforcement learning you're just building an imitation system essentially right so this will never surpass humans but you can use it in the beginning just to learn some of the human strategies and techniques just at the beginning of training the next step is to use those pre-trained models that were learned by watching humans and actually now play them in a reinforcement learning fashion against each other right so it's this idea of self-play that you can now take two models that have some very basic idea of how to play the game of Go and you put them against each other and they play the game of go against each other and the one that gets the reward is going to be the one that wins the game and you're going to increase the probability of all of the actions that the one the the agent that won you will increase all of their actions and you will decrease all of the actions of the agent that lost the game right regardless of where you know in the the game they won or lost all of the actions of the loser will be decreased all of the actions of the winner will be increased very simple algorithm and in practice you run it for you know millions and billions of steps and eventually you achieve superhuman performance such that you can actually get some intuition about how to solve the game such that you're not only looking at you know state in action out but also you want to achieve this state in value out like how good of a board state are you in based on the state that you currently see yourself so a recent extension of this actually Tred to explore what would happen if you abandoned kind of the imitation learning part in the very beginning right so the imitation part is where you watched a bunch of these human players of the game and see if you could bootstrap them essentially the beginning the beginnings of the learning of this model from the human experts so the question here is can you start from a completely random network still keep all of the self-play and the reinforcement learning but you now start from a randomly initialized model and what was shown is that these types of scenarios are also capable of you know achieving superhuman performance they take a bit obviously longer to train but it's possible to train them entirely from scratch with absolutely no knowledge of the games such that they can overtake Human Performance and in fact they even overtake the original models that were bootstrapped by human performance so the human kind of imitation in the beginning of Learning helps in the beginning just 
to help accelerate learning but it's actually quite limiting right because and often times these models can figure out new strategies to these comp complex games that we as humans have not created yet right and you can see it now that same idea is being you know deployed on a variety of different types of games right from go to chess and many others as well it's a very generalizable strategy but the remarkable thing is that you know the foundations the way those models are trained is nothing more than policy gradient optimization exactly in the way that we saw today no labels just increase the probability of everything that came with a win and decrease the prob of everything that comes with a loss so with that I'll just Briefly summarize the the learnings from today so we started just by you know laying the lay laying the the foundations a little bit defining all of these terminologies and thinking about reinforcement learning is a completely new paradigm compared to supervised and unsupervised learning and then we basically cover two different ways that we can learn these topics and learn these policies right we covered both re uh sorry Q learning which covers the model and how the model can output Q values for each of the possible actions we talked about disadvantages and how policy learning could overtake some of those problems that comes with Q learning and how you can achieve continuous value uh reinforcement learning and also stochastic action spaces and so on so a lot of exciting advances from policy gradients that can come in that field as well and I'll pause there the next lecture will be Ava who's going to to share New Frontiers of deep learning and kind of all of the new advances so it's the the the more recent I guess uh couple years of what has been going on in this field and yeah we'll just as always take a couple minute break switch speakers and thank you
MIT_6S191_Introduction_to_Deep_Learning
MIT_6S191_2019_Deep_Generative_Modeling.txt
hi everyone, so nice to see so many familiar faces, and glad we haven't scared you off yet. so today I'm going to be talking about an extension of a lot of the computer vision techniques that Ava was discussing in the previous lecture, and specifically focusing on a class of learning problems for generating new data, and we refer to these problems, or these models, as generative models. so these are actually systems that don't only look to extract patterns in data — they go a step beyond that and actually use those patterns to learn the underlying distribution of that data and use it to generate brand new data. this is an incredibly complex idea, and it's something that we really haven't seen in this course previously, and it's a particular subset of deep learning and machine learning that's enjoying a lot of success, especially in the past couple of years. so first, to start off, I want to show you an example of two images. so these are two faces, and I want to take a poll now about which face you guys think is fake — which face is not real. so if you guys think that the left face, this one, is not fake — sorry, fake — how about you raise your hands? okay, I think like 20, 30 percent. okay, how about this one? okay, that's a lot higher — interesting. okay, so the answer is actually both are fake. so this is incredibly amazing to me, because both of these faces, when I first looked at them — first of all, they're incredibly high dimensional, they're very high-resolution, and to me I see a lot of details in the faces: I see wrinkle structure, I see shadows, I mean, I even see coloring in the teeth and special, like, defects in the teeth. it's incredibly detailed what's going on in these images, and they're both completely artificial — these people don't exist in real life. so now let's take a step back and actually learn: what is generative modeling? so generative modeling is a subset of unsupervised learning. so far in this class we've been dealing with models which are focused on supervised learning. so supervised learning takes as input the data X, as we've been calling it, and the labels Y, and it attempts to learn this functional mapping from X to Y. so you give it a lot of data and you want to learn the classification, or you want to learn the regression problem, for example — so in one example that Ava gave, you give it pictures of the road from a self-driving car and you want to learn the steering wheel angle; that's a single number, that's a supervised learning problem. now, in unsupervised learning, we're actually tackling a completely different problem: we're just giving it data now, and we're trying to learn some underlying patterns or features from this data. so there are no labels involved, but we still want to learn something meaningful — we want to actually learn some underlying structure that's capable of generating brand-new data from this set of inputs that we receive. so like I said, the goal here is to take training examples from one input distribution and actually learn a model that represents that distribution; once we have that model, we can use that model to then generate brand new data. so let me give you an example. we can do this in one of two ways. the first way is through density estimation. so let's suppose I give you a lot of points on a one-dimensional grid, drawn at random from this distribution that I'm not showing you — but I'm giving you the points here. can you create this density function just by observing those points?
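As a tiny illustration of that density-estimation idea — here fitting a single Gaussian to some made-up 1-D points and then sampling new points from the fitted density (real generative models learn far richer distributions; this is only the simplest possible case):

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up observed points, drawn from some distribution we pretend not to know.
observed = rng.normal(loc=2.0, scale=0.7, size=500)

# "Learn" the density: here just estimate the parameters of a Gaussian.
mu_hat, sigma_hat = observed.mean(), observed.std()

# Once we have a model of the density, we can generate brand new points from it.
new_samples = rng.normal(loc=mu_hat, scale=sigma_hat, size=5)
print(mu_hat, sigma_hat)   # close to the true 2.0 and 0.7
print(new_samples)         # new data that looks like it came from the same distribution
```

The fitted parameters play the role of the learned model of the distribution, and good samples are exactly the sign that it matches the distribution that produced the training points. so again, can you observe the underlying distribution that generated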
If you can create that probability distribution, then you can generate brand-new points according to it. A second, more complex example is sample generation. Now you're not focused only on estimating the density — you want to generate brand-new samples, to go straight to the generation part. Here I give you a lot of training examples, the images on the left-hand side, which were drawn from some distribution we'll call P_data, the probability distribution of the data. We want to generate brand-new examples drawn from another distribution, the probability distribution of the model, and our goal in this whole task is to learn a model distribution that is as similar as possible to the data distribution, so that when we draw samples from the model they look almost as if they were drawn from the original distribution that generated the real data.

So why do we care about learning the underlying distribution? One really good example is something you'll get exposure to in the lab that Ava mentioned, where we do facial classification and de-biasing. The key to de-biasing is being able to learn the underlying latent variables intrinsic to the dataset. A lot of the time, when we're presented with a machine learning problem, we're given data that isn't properly vetted or labeled, and we want to make sure there are no biases hidden inside it. For example, suppose you're building a facial classifier: you're given a lot of faces and a lot of non-faces, and you want to learn a classifier to predict faces. Maybe all of the faces you're given are drawn from one distribution that isn't representative of the real distribution you'll be testing on — mostly one ethnicity or one gender, say. Can you detect this in an unsupervised manner, by learning the underlying distribution of the data without being told to look for something like gender or ethnicity, and then use that distribution during training to de-bias your model? This is something you'll work with in today's lab, where you'll actually de-bias a facial classifier that you create.

Another great example, closely related to de-biasing, is outlier detection, and a great illustration of the problem is self-driving cars. When we collect self-driving data, most of it looks extremely similar: probably 95% of it is simple, straight roads or highway driving — not the complex, interesting edge cases we really care about. One strategy is to use generative modeling to detect those outliers: we learn the underlying distribution of the data, and when we see unusual events we can recognize that they come from the tail of the distribution and deserve more attention than the simple examples coming from the main mode.
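To make the density-estimation and sampling ideas concrete, here is a minimal sketch — my own illustration, not the lab's code — using a kernel density estimate in Python: fit a density to observed 1-D points, use it to score how typical a new point is (low density suggests a tail-end outlier), and draw brand-new samples from the fitted model.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Observed 1-D points drawn from some unknown data distribution P_data
rng = np.random.default_rng(0)
points = np.concatenate([rng.normal(-2, 0.5, 500), rng.normal(1, 1.0, 500)])

# Density estimation: fit a kernel density estimate to the observed points
p_model = gaussian_kde(points)

# Outlier detection: low density under the model means the point sits in the tail
print(p_model(np.array([0.5])))   # density of a typical point
print(p_model(np.array([8.0])))   # density of an unusual point (much smaller)

# Sample generation: draw brand-new points from the fitted distribution
new_samples = p_model.resample(10)
```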
So let's start discussing the two classes of models we'll cover in this lecture: first autoencoders and variational autoencoders, and second a class of models I'm sure you've all heard of, generative adversarial networks — which is what created the faces I showed you on the first slide. Both of these are classes of latent variable models, so let's start by talking about what a latent variable is. I saw some giggles — does anyone know what this image is? Yes, nice. This is the allegory of the cave, from Plato's Republic, and I'll quickly describe what's going on in it, because I think it's a great way to describe what a latent variable is and why we care about latent variables.

In this story, a group of prisoners is chained with their backs to a wall and forced to look only forward. They can see shadows projected onto the wall in front of them — only the shadows, never the actual objects behind them that cast the shadows in the light of the fire. To those prisoners, the shadows are the observed variables: they can measure them and give them names, even though they aren't the real objects underlying what's projected onto the wall. Because that's all they can see, that is their reality. They cannot directly observe or measure the true objects behind them, and they don't even know that these things are shadows. The latent variables are the objects they cannot directly observe but which are generating the shadows. So, to reiterate: the observable variables are the shadows the prisoners see; the hidden variables — the latent variables — are the underlying objects generating those shadows. The question, going back to a deep learning frame of mind, is: can we learn the true latent variables from only observed data? If I give you a lot of data, can you learn the underlying latent variables from its structure?

To start, we'll talk about the first class of models, called autoencoders — a very simple generative model that tries to do this by encoding its own input. Just to reiterate, this is an unsupervised approach: we're not given any labels, so we don't know that this image is a 2; we're simply given the image. We feed the raw input pixels through a series of layers — in this case convolutional layers, shown in green, since the input is an image — going from the input X to an output latent variable, which we'll call z. You might notice that the latent variable is much smaller than the observed variable, and that's a very common setup. We call this model an encoder, because it's encoding the input X into a lower-dimensional latent space z. Now I want to raise a question to see how many of you are following: why do we care about having a low-
dimensional z? Does it really matter, and why is it usually the case that we only want low-dimensional z's? Yes, exactly — the suggestion was to get rid of noise, and that's a great idea. Along the same lines, what you're getting at is that you want to find the most meaningful features in your input, the ones that really capture what's underlying the data distribution. Pixels are an incredibly sparse representation of images, and often we don't need all of that information to describe an image accurately. In this case, this 2 — these images are drawn from the MNIST dataset of handwritten digits — it turns out you can usually model MNIST digits with about five latent variables, even though the images contain hundreds of pixels. You can compress them down and describe them very accurately with just five numbers. Doing this lets us find the richest features, the ones that accurately describe the image and abstract away the unnecessary detail.

So how can we learn this latent space? As I mentioned, it's not a supervised problem — we don't actually know z, so we can't directly apply backpropagation by supervising on z. Instead, we apply a trick: we try to reconstruct the original data. On the right-hand side you can see a reconstruction, produced by applying upsampling convolutions to get back a reconstructed version of the original input. You might notice that the right-hand side is a blurry version of the left-hand side; that's because we lose some information by squeezing through such a small latent space, and that's to be expected — the whole point is to keep only the most important features and, as was suggested, get rid of noise. We denote the reconstruction as x-hat, the estimated or reconstructed version of the input. We can supervise this by comparing the output — the reconstruction — with the original input: for example, subtracting the two images pixel-wise, squaring the differences, and adding them up. That gives a loss term we can use in backpropagation just like before. So we've shifted from a completely unsupervised problem, where we had no way to create the latent variable z, to something we can supervise end to end. By training end to end, the reconstruction loss forces the network to learn the richest latent variables possible, because if the latent space weren't descriptive of the image, the decoder could never produce such an accurate reconstruction of its input. And to reiterate: this loss function does not require any labels — we're not telling it that this is a 2; we're simply feeding in images and learning the underlying latent variables associated with the data.
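As a concrete sketch of this encoder–decoder training loop — my own simplified, fully-connected PyTorch version rather than the convolutional model on the slides — the only training signal is the reconstruction loss between the input and its reconstruction:

```python
import torch
import torch.nn as nn

latent_dim = 5  # e.g. ~5 latent variables are enough to describe an MNIST digit

# Encoder: compress the 28x28 image down to a small latent code z
encoder = nn.Sequential(nn.Flatten(), nn.Linear(784, 256), nn.ReLU(),
                        nn.Linear(256, latent_dim))
# Decoder: reconstruct the image from z
decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                        nn.Linear(256, 784), nn.Sigmoid())

opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

def train_step(x):            # x: batch of images in [0, 1], shape (B, 1, 28, 28)
    z = encoder(x)            # latent code (no labels used anywhere)
    x_hat = decoder(z).view_as(x)
    loss = ((x - x_hat) ** 2).mean()   # pixel-wise reconstruction loss
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```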
Now let's abstract this picture a little so we can add more complexity. I'm removing the convolutional layers and just drawing the encoder as a single trapezoid, denoting that we go from X down to a low-dimensional z, and then from z back up to a higher-dimensional x-hat, the reconstruction. This idea of bottlenecking the network — forcing all of the activations and all of the information through this latent space — is essentially compression: autoencoding is a form of compression, and smaller latent spaces result in blurrier and blurrier outputs. The more you bottleneck the network, the smaller the latent space becomes. Applied to MNIST you can see the effect: with only two variables in the latent space you can still generate the digits and capture the high-level structure, but you lose a lot of the crisp detail. Increasing that to five dimensions, you already recover much of the detail that was previously lost, and compared to ground truth, at least for these low-dimensional inputs, it's very close at a human level.

So, in summary, autoencoders use this bottleneck layer to learn a rich, hidden representation of the data in the latent space. It's a self-supervised approach, because the reconstruction loss forces the network to encode as much information as possible so it can reconstruct the data later in the network — and that's where the name comes from: it's called autoencoding because the network learns to automatically encode the data so that it can reconstruct it.

Now let's build on this idea with a more advanced concept called variational autoencoders, which are an extension of autoencoders with a probabilistic spin. Let's revisit the standard autoencoder picture I just showed you. There, the latent space is a deterministic function: all of the layers are completely deterministic, so if I feed in the same input multiple times, I always get the same output on the other side — that's the definition of determinism. What's different in a variational autoencoder? Now we replace that intermediate latent space, which was deterministic, with a probabilistic or stochastic operation. Instead of predicting a single deterministic z, we predict a mean mu and a standard deviation sigma, which we then sample from to create a stochastic z. So now if we feed the same image of a 2 through the network multiple times, we'll actually get different 2s on the other side, because of the sampling operation from the mus and sigmas that we predict. Just to reiterate: the mus and sigmas up to that point are computed deterministically; they are then used to compute the stochastic sample z, which is what the decoder uses. As I said, variational autoencoders are essentially a probabilistic spin on vanilla autoencoders, where we sample from the mean and standard deviation of the distributions they predict in order to compute the latent sample. Let's break this up a bit more. Like a normal autoencoder, it splits into two parts: an encoder, which takes the image and now produces
a probability distribution over the latent space — as opposed to before, when it was deterministic. This distribution is parameterized by weights phi, the parameters of the encoder. And we have a decoder, which does the opposite: it gives a probability distribution over the data given the latent variable, and it has its own parameters theta, the weights of the decoder. You can think of theta as the weights of the convolutional network that defines the decoding operation, and phi as the weights of the layers that define the encoding operation.

Our loss function is very similar to the vanilla autoencoder's. We still have the reconstruction loss, which forces the bottleneck layer to learn a rich representation of the latent space, but now we also have a second term, called the regularization term — sometimes called the VAE loss, because it's specific to VAEs. Let's look at it in a bit more detail. The loss function takes inputs from both the encoder and the decoder, along with the input data X, and it's a weighted sum of the reconstruction term and this latent regularization term. The reconstruction term is the same as before — think of it as a pixel-wise difference between input and output that self-supervises the latent space. The regularization term is more interesting. q_phi(z given x) is the distribution computed by the encoder: the distribution over the latent space given the data. As part of regularizing, we place a fixed prior on this distribution, to make sure the z's we compute follow some prior that we define. The D in this term stands for distance: the regularization term minimizes the distance between the inferred distribution — the mus and sigmas we learn — and a prior distribution that we choose. I haven't told you what that prior is yet, but a very common choice for VAEs is a normal Gaussian prior: mean zero and standard deviation one. What this does is encourage the encoder to project all of the inputs into a latent space centered around zero with roughly unit variance. If the network tries to deviate from that — for example, by trying to cheat and memorize special inputs in far-away parts of the latent space — it gets penalized, so it's constrained to work in this probabilistic setting and follow the normal Gaussian prior. And, similar in spirit to the cross-entropy loss we saw earlier, we can write this distance function down explicitly and use it to regularize the network when we place such a prior.
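As a sketch of how this two-term objective is typically written in code — assuming a diagonal Gaussian encoder and the standard normal prior, for which the distance term has the well-known closed-form KL divergence — it might look like this:

```python
import torch

def vae_loss(x, x_hat, mu, logvar, beta=1.0):
    """x_hat: reconstruction; mu, logvar parameterize q_phi(z|x) = N(mu, exp(logvar))."""
    # Reconstruction term: pixel-wise difference between input and output
    recon = ((x - x_hat) ** 2).sum(dim=tuple(range(1, x.dim()))).mean()
    # Regularization term: KL( N(mu, sigma^2) || N(0, 1) ), closed form per latent dimension
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=1).mean()
    return recon + beta * kl
```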
For the standard normal Gaussian prior, the distance takes a simple closed form in terms of the predicted mus and sigmas, and it constrains those mus and sigmas to be as close as possible to a Gaussian with mean zero and standard deviation one. You'll often see this term referred to as the KL divergence, and that's where it comes from.

Great — so now we've defined the loss function that lets us train this network with backpropagation and reconstruct the inputs, and we've seen how it differs from a vanilla autoencoder. The regularization term is handled with a normal Gaussian prior, which we'll assume for the rest of this lecture. But we still have a big problem. I've defined the forward pass pretty clearly, including the sampling operation that turns the mus and sigmas into z's, but I've skipped a crucial step — and it's one of the reasons VAEs weren't really used until a few years ago: you cannot backpropagate gradients through a sampling layer. These sampling layers take deterministic mus and sigmas as input and produce a stochastic sample as output. The forward pass is simple, but there's no notion of backpropagating through a stochastic node, so we can't compute a gradient through it. One of the key ideas that emerged a few years ago was reparameterizing the sampling layer so that you can backpropagate through it and train the network end to end. I'll give you a brief introduction to how the reparameterization trick works.

We've established that we can't simply sample from this distribution and still compute the backward pass. So instead, consider a different formulation: sample the latent vector as a sum. We start with mu, the mean of the distribution, and add a weighted version of sigma, the standard deviation, where the weight is a random number epsilon drawn from a normal distribution with mean zero and variance one. We still have a stochastic element, because we're sampling epsilon, but the difference is that the sampling no longer happens inside the layer itself — we've reparameterized where the sampling occurs. Instead of sampling directly inside z, you can imagine the sampling happening off to the side and being fed in. Let's look at what that looks like. In the original setup, we have deterministic nodes — the weights of the network and the input data — and a sampling layer that uses them to draw a sample from the distribution defined by the encoder; we can't backpropagate through that layer, because it isn't deterministic. When we reparameterize, we get a graph that looks much nicer: we still have the same encoder weights and the same data, but the sampling node has moved off to the side — epsilon is drawn from a standard normal distribution — and z is now deterministic with respect to that sampling node.
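In code, the reparameterized sampling is only a couple of lines; a minimal sketch:

```python
import torch

def sample_latent(mu, logvar):
    sigma = torch.exp(0.5 * logvar)
    eps = torch.randn_like(sigma)   # the stochasticity lives here, off to the side
    return mu + sigma * eps         # z is deterministic given mu, sigma and eps,
                                    # so gradients can flow back into the encoder
```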
So now, when we want to do backpropagation and push gradients back to update the encoder — which is ultimately what we want, since those are the weights of the network we're training — we can backpropagate through z, because the epsilons are just treated as constants. We don't need to backpropagate in the direction of the epsilon node; if we did, that would be a problem, since it's a sampling node, but we only need gradients along the deterministic path. This is a really powerful idea: it lets us train variational autoencoders end to end.

And since we impose these distributional priors, we can do something really nice: slowly increase or decrease individual latent variables and observe the effect on the output. You take a latent vector, fix every variable except one, sweep that one variable up and down, and run the decoder at each step. The output turns out to have semantic meaning. What does this particular variable look like it's capturing? Yes, exactly — it's the tilt, or pose, of the face. On the left-hand side the face points to the right, on the right-hand side it points to the left, and as you move from left to right you can see the face smoothly turning. You can do this because the latent variables are continuous — they follow that normal distribution — so you can just walk along them and decode the output at each step. That's one latent variable, but the network learns many: the whole vector z we showed before, where each element encodes a different, interpretable latent feature. Ideally we want these features to be independent and uncorrelated, so that walking along each dimension corresponds to a different semantic meaning. Here's the same example: walking left and right changes the head pose, and walking up and down changes something like the smile. These are all fake images — this person doesn't exist — but you can add or remove a smile and change the head pose just by perturbing these two numbers. This is the idea of disentanglement: when two variables are uncorrelated, and changing one affects one semantic attribute while changing the other affects a different one, those factors are disentangled.

That was an example with images; here's another cool example with music. The four quadrants here represent different positions in the latent space — this quadrant is one particular song, this one is another — and you can use the autoencoder to interpolate and walk anywhere in this space. Currently we're at this location, and we can move where we're sampling from and generate brand-new music that's essentially an interpolation between these songs. As we move down, the song changes to the other style.
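The perturbation experiment from a moment ago — fix every latent variable except one, sweep it, and decode at each step — might look like this in code, assuming a trained `decoder` and a reference latent vector `z` (both hypothetical names here):

```python
import torch

def traverse_latent(decoder, z, dim, values=torch.linspace(-3, 3, 9)):
    """Decode copies of z where only latent dimension `dim` is swept over `values`."""
    frames = []
    for v in values:
        z_mod = z.clone()
        z_mod[dim] = v                              # change a single latent variable
        frames.append(decoder(z_mod.unsqueeze(0)))  # e.g. head pose or smile changes smoothly
    return torch.cat(frames, dim=0)
```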
Going back to images, we can do the same thing with MNIST: walk along the dimensions of a two-dimensional latent space and see an interpolation between the different digits — really cool visualizations where all of the digits emerge from just two latent variables. Okay, so to summarize: with VAEs we learn to compress the world down into a low-dimensional latent space; we saw that reconstruction allows for unsupervised learning without labels; we use the reparameterization trick to train these networks end to end; and we can interpret the hidden latent variables by perturbing them — increasing and decreasing them — to generate brand-new examples.

Now, in VAEs the core focus was really density estimation. Next we'll transition to a new type of model, generative adversarial networks, which focus on a slightly different problem: sample generation. Here you're not as concerned with estimating the density of the distribution; you care mostly about generating samples at the output. The key idea is that you still have a z, an encoding, but in generative adversarial networks it doesn't carry as much semantic meaning as in VAEs — it's just a random noise vector. You feed in that noise and train a generator to produce fake images at the output, and you want those fake images to be as close as possible to the real images from your training distribution. The way we do this is with two networks: a generator on the bottom producing fake images, and a discriminator that takes in both the fake images and the real images and learns to distinguish fake from real. By having these two neural networks — the generator and the discriminator — compete against each other, you force the discriminator to get better at telling real from fake, and the better the discriminator gets, the better and more realistic the fake examples the generator has to produce to keep fooling it.

Let's walk through a quick toy example to build intuition for GANs. The generator starts from noise and tries to create some imitation of the data — here, points on a one-dimensional line. At first it just produces random fake data, because it hasn't been trained. The discriminator sees these points, but it also sees some real data, and we train it to output the probability that a given point is real. In the beginning it isn't trained either, but as we train it, it starts increasing the probability on the real points and decreasing it on the fake points, until it reaches a point where it can cleanly separate real from fake. Then the generator comes back: it sees how well the discriminator is doing and moves its generated points closer to the real data to start fooling the discriminator — moving those points closer and closer to the green points.
Now back to the discriminator. It sees these new points, and its previous predictions are a bit off — it was used to the red points being farther away — so we retrain it: it again decreases the probability on the red points and increases the probability on the green, real points. And we repeat: the generator, one last time, moves its points even closer to the real distribution. What you can see now is that, in this toy example, the generated points follow almost the same distribution as the real data, and it becomes very hard for the discriminator to distinguish real from fake. That's the idea behind GANs.

To summarize: in GANs we have a discriminator that tries to identify real data from fakes, while the generator tries to imitate the real data and fool the discriminator. This can be formalized with a minimax objective. The discriminator, with parameters theta_d, tries to maximize the objective: it wants the probability it assigns to real data to be as close as possible to one, and the probability it assigns to fake data to be as close as possible to zero. The generator, with parameters theta_g, tries to minimize that same objective: it changes its weights to generate new fake data that fools the discriminator.

Briefly, what are the benefits of GANs compared to variational autoencoders? Both are latent variable models, but there are some advantages. Imagine fitting a model to the data manifold cartoonified here. With a traditional maximum likelihood estimate, shown on the left, you get a fuzzy, smoothed-out fit that doesn't capture much of the detail of the manifold. A GAN doesn't use a maximum likelihood estimate; it uses this minimax formulation between two neural networks, and it's able to capture the nooks and crannies of the manifold — it can model a lot more of the detail in the real data and produce much crisper samples.

So let's explore what we can generate with this, which leads nicely into a lot of the recent advances GANs have enjoyed in the past couple of years — I'll even mention results from the past couple of months that are really astonishing. It starts with an idea from early 2018: the progressive growing of GANs. The idea is to iteratively build more and more detailed image generators. You start the generator by predicting very coarse 4x4 images, and it learns a coarse-grained representation of how to generate them. Once the generator has a good handle on that, you progressively grow it — adding new layers and increasing the spatial resolution of the generator.
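Going back to the minimax objective for a moment, a minimal sketch of that alternating training loop might look like this — my own simplified version, not the progressive-growing implementation — where G maps noise to images and D outputs the probability (via a sigmoid) that its input is real:

```python
import torch
import torch.nn.functional as F

def gan_step(G, D, real, opt_g, opt_d, noise_dim=100):
    z = torch.randn(real.size(0), noise_dim)

    # Discriminator step: push D(real) toward 1 and D(G(z)) toward 0
    d_real = D(real)
    d_fake = D(G(z).detach())          # detach so this step doesn't update G
    d_loss = F.binary_cross_entropy(d_real, torch.ones_like(d_real)) + \
             F.binary_cross_entropy(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: fool D, i.e. push D(G(z)) toward 1
    d_fake = D(G(z))
    g_loss = F.binary_cross_entropy(d_fake, torch.ones_like(d_fake))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```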
Growing the generator progressively like this is good because it stabilizes the synthesis of the output, and it actually ends up speeding up training as well, producing high-resolution outputs at the end — 1000-by-1000 images that are very realistic. These weren't the images I showed you at the start, I'll get to those in a second, but they're still incredibly realistic, and here are more examples from the same network. The same team released another paper just a month ago, extending this progressive growing to a more complex, style-based architecture: they use the underlying style, which you can think of as a latent variable of the faces, as an intermediate mapping, and automatically learn an unsupervised separation of high-level attributes such as pose, hair, and skin color. As a result, the model generates highly varied outputs — these are not real, they're generated examples — that represent very realistic human faces.

The idea behind the style transfer is to have the generator transfer styles from a source image, shown in the top row, to a destination image: it takes the features from the source and applies them to the destination. In certain cases it even realizes something is inconsistent — if you apply a male face's style to a female face, it starts adding male features, for example a beard, even though there was no beard in the destination image. Things like that show it's picking up some intuition about the underlying distribution of male and female faces. There are other examples here as well — transferring skin color, hair color, even patterns on the face — down to very fine-grained details in the output image.

Finally, one last application I'll touch on is CycleGAN, which does unpaired image-to-image translation, closely related to what we just discussed. Here we want to take a bunch of images in one domain and, without having the corresponding image in a different domain, learn a generator that takes an image in X and generates a new image following Y's distribution, and likewise takes an image in Y and generates an image in X. The really cool advance in this paper is the cycle-consistency loss the networks have to obey: after translating from X to Y, they take that generated output and translate it back from Y to X, and check how close they end up to the original input, enforcing another loss on that round trip. That's why the approach is called CycleGAN — you create this cycle loss between the two generators — and you also have two discriminators trying to distinguish real from fake in each of the two domains.
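A sketch of just the cycle-consistency piece, with two hypothetical generators G_xy: X→Y and G_yx: Y→X (the full CycleGAN objective also adds the two adversarial losses from those discriminators):

```python
import torch

def cycle_consistency_loss(G_xy, G_yx, x, y):
    # X -> Y -> back to X should reproduce x; Y -> X -> back to Y should reproduce y
    loss_x = (G_yx(G_xy(x)) - x).abs().mean()   # L1 reconstruction of the round trip
    loss_y = (G_xy(G_yx(y)) - y).abs().mean()
    return loss_x + loss_y
```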
What this lets you do is transfer between domains from unpaired images — horses on the left-hand side, for example, now turned into zebras. This is trained on just a lot of images of horses and a lot of images of zebras, without any supervision on how to go from one to the other; it learns the underlying distributions of what a horse is and what a zebra is, so you can take a new image of a horse and make it look like a zebra. What's really interesting here is not just the obvious change — adding stripes — but something subtler: it also changes the color of the grass. If you think about why, zebras are typically found in drier climates, for example in Africa, where the grass probably isn't as green as where this horse is standing. So the model isn't just mapping the horse's details onto zebra stripes; it's picking up on the surroundings of the horse and transferring those details as well.

Finally, to summarize and conclude: we covered two main techniques for generative modeling. First, variational autoencoders, which we introduced as latent variable models; there we try to learn a low-dimensional latent space of our input so that we can get some intuition and interpretation of the underlying data distribution. Then we extended this idea to GANs, another form of latent variable model, now trained as a minimax game between a generator and a discriminator that compete to produce even more complex and realistic generated samples. And that's it — thank you. [Applause]
MIT_6S191_Introduction_to_Deep_Learning
MIT_6S191_Taming_Dataset_Bias_via_Domain_Adaptation.txt
I'm really happy to be here today to talk to you about something I'm very excited about, because it's my research area — it's always fun to give a talk about your own research. The topic of my talk is taming dataset bias via domain adaptation, and I believe you've already had some material in this course about bias and fairness, so this should dovetail with that pretty well.

I probably don't need to spend a lot of time on how successful deep learning has been for various applications. I'll be focusing mostly on computer vision, because that's my primary research area. In computer vision, deep learning has gotten to the point where we can detect different objects pretty accurately in a variety of scenes — even objects that aren't real people, like cartoon characters, as long as we have training data — and we can do things like face recognition and emotion recognition. So there are a lot of applications where deep learning has been super successful, but there have also been some problems, and the one I want to talk about is dataset bias.

Here's what happens. Say you're training a computer vision model to detect pedestrians and you want to put it on a self-driving car. You collect some data, you label the pedestrians with bounding boxes, you train your deep neural network, and it works really well on a test set held out from that same dataset. But now you put that model on your car and ask it to recognize pedestrians in a different environment — say New England, which is where I am right now. In fact, if I look out my window, that's exactly what it looks like: there's snow, people wearing heavy coats, and it looks different from my training data, which, let's say, I collected in California, where we don't see a lot of snow. Visually, this new data the model is supposed to label looks quite different from the training data. This is what we refer to as dataset shift, and it leads to missed detections and generally lower accuracy from the trained model. It's called dataset bias, and it's also referred to as domain shift. The primary issue, again, is that the training data looks different — I'll define what that means more precisely later — from the target test data.

When does dataset bias happen? It happens in a few different scenarios — I'll show a few here, and I'll argue it actually happens with every dataset you collect. One example is collecting a dataset in one city and testing in a different city, or collecting a dataset from the web and then putting your model on a robot that gets images from its environment, where the angle, the background, and the lighting are all different. Another very common case is simulated training that we then want to transfer to the real world — sim-to-real dataset shift, very common in robotics. And another way this can happen is when your training data is dominated by a particular demographic — if we're dealing with people, it could be mostly light-skinned faces — and then at test time you see darker-skinned faces that you didn't train on, so again you have a dataset bias.
Or perhaps you're classifying weddings, but your training data comes from images of weddings in Western culture, and at test time you see other cultures — again, a dataset bias. My point is that it can happen in many different ways, and I believe that no matter what dataset you collect, it will have some bias, simply because — especially in the visual domain — our visual world is so complex that it's very hard to collect enough variety to cover all possible situations.

Let's talk more specifically about why this is a problem, and I'll give an example I'll use throughout the talk to put some real numbers on it. We all know the MNIST dataset by now — handwritten digits, very common for benchmarking neural networks. If we train a network on MNIST and test on the same domain, MNIST, we get very good performance, upwards of 99% accuracy; this is more or less a solved task. But if we train our network on the Street View House Numbers dataset — also the same ten digits, but visually a different domain, taken from Street View imagery — and then test that model on MNIST, performance drops considerably; it's really bad performance for a ten-digit classification task. In fact, even with a much smaller shift, from USPS to MNIST — two datasets that look very similar to the human eye, with only small differences — performance still drops by about the same amount, and if we swap them, training on MNIST and testing on USPS, we still get poor performance. That's just to put numbers on it: even for a very simple task that we solved a long time ago in deep learning, if we have this dataset shift and test the model on a new domain, it pretty much breaks.

Okay, but that's an academic dataset — what about the real world? Have we seen actual implications of dataset bias in the real world? I would argue that yes, we have. One example: there have been several studies of commercial face recognition and gender recognition software deployed in the real world, and these studies show that facial recognition algorithms are far less accurate at identifying African-American and Asian faces compared to Caucasian faces. A big part of the reason is dataset shift, because the training datasets used for these models are biased toward Caucasian faces. Another real-world example — a very sad one — is a self-driving car accident a while back, actually the first time a robot killed a person. There was a fatal accident involving an Uber self-driving car, and according to some reports the car failed to stop because its algorithm was not designed to detect pedestrians outside of a crosswalk. You can think of this as a dataset bias problem too: if your training data contains only pedestrians on crosswalks — which is reasonable to expect, since the majority of the time pedestrians follow the rules and cross at the crosswalk, and only occasionally do you see people jaywalking —
you will probably not see many examples like that in your dataset. You might be wondering at this point: wait a minute, can't we just fix this by collecting and labeling more data? Yes, in theory, but it gets very expensive very quickly. To illustrate, take this example from the self-driving domain: images from the Berkeley BDD dataset, which already has quite a variety of domains — nighttime images, daytime images — with semantic segmentation labels, where each pixel is labeled as road, pedestrian, and so on. If we wanted to label 1,000 pedestrians with these polygons, that would cost around a thousand dollars, roughly the standard market price. But now multiply that by how many variations we want in pose, times variations in gender, times variations in age, race, clothing style, and so on — and somewhere in there we also want people riding bicycles — and you quickly see how much data we would have to collect. It blows up very fast and becomes very expensive. So instead, maybe what we want to do is design models that can use unlabeled data rather than labeled data, and that's what I'm talking about today.

Now let's think about what causes this poor performance. There are basically two main reasons. The first is that the training and test data distributions are different, and you can see that in this picture. The blue points are feature vectors extracted from the MNIST digit domain using a network trained on that domain — we take the second-to-last layer activations and plot them with t-SNE embeddings so we can view them in 2D. Then we take that same network and extract features from our target domain, which is basically MNIST but with different colors instead of black and white — those are the red points. You can see very clearly that the distribution over the inputs is very different between training and test, so a classifier trained on the blue points will not generalize to the red points because of this distribution shift. The second problem is a bit more subtle: notice how the blue points are nicely clustered into categories with spaces between the clusters, while the red points are much more spread out and not well clustered by class. That's because the model learned features that are discriminative for the source domain, and those features are not very discriminative for the target domain — the kinds of features the model would need just weren't learned from the source data.

So what can we do? Quite a lot, actually. Here is a list of fairly simple, standard things you can try to deal with dataset shift. If you just use a better backbone for your CNN — ResNet-18 instead of AlexNet, say — you'll have a smaller performance gap due to domain shift. Batch normalization done per domain is a very good trick, and you can combine it with instance normalization.
Of course you can also do data augmentation, or use semi-supervised methods like pseudo-labeling. And then what I'll talk about today is domain adaptation techniques.

So let's define domain adaptation. We have a source domain with a lot of labeled data — inputs x_i and labels y_i — and a target domain with unlabeled data, just the inputs and no labels. Our goal is to learn a classifier f that achieves a low expected loss under the target distribution D_T. So we learn from the source labels, but we want good performance on the target. A key assumption in domain adaptation, important to keep in mind, is that we do get to see the unlabeled target data — we have access to it. We don't get the labels, because we assume labeling is too expensive or impossible for some reason, but we do get the unlabeled data.

Here's the outline of the rest of my talk. I'm sure I'll go pretty quickly, and I'll try to leave time at the end for questions, so please note your questions down. I'll first talk about the standard, by now fairly conventional, technique called adversarial domain alignment, then a few more recent techniques that have been applied to this problem, and then we'll wrap up.

So let's start with adversarial domain alignment. Say we have our labeled source domain and we're training a neural network, which I've split into the encoder — a CNN, since we're dealing with images — and the classifier, which is just the last layer of the network. We train it in the normal way with a standard classification loss. We can extract features from the encoder and plot them, visualizing the two categories (just two for illustration), along with the decision boundary the classifier is learning between the classes. We also have unlabeled target data from our target domain. We have no labels, but we can still push it through the encoder and generate features, and as we've already seen, there's a distribution shift between the blue source features and the orange target features. The goal of adversarial domain alignment is to align these two distributions: update the encoder so that the target features are distributed in the same way as the source features.

How can we do this? It involves adding a domain discriminator — think of it as just another neural network — which takes features from the source and the target and tries to predict the domain label: a binary label, source or target. The domain discriminator is trying to distinguish the blue points from the orange points, and we train it with a classification loss on the domain labels; that's our first step. Our second step is to fix the domain discriminator and instead update the encoder so that the discriminator's accuracy becomes poor — the encoder tries to fool the discriminator by generating features that are essentially indistinguishable between the source and target domains. So it's an adversarial approach, because of this adversarial back and forth.
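A rough sketch of that two-step game — a simplified feature-alignment loop, not the exact ADDA implementation:

```python
import torch
import torch.nn.functional as F

def adaptation_step(encoder, classifier, domain_disc, x_src, y_src, x_tgt,
                    opt_enc, opt_cls, opt_disc):
    f_src, f_tgt = encoder(x_src), encoder(x_tgt)

    # Step 1: train the domain discriminator to separate source (label 1) from target (label 0)
    d_src, d_tgt = domain_disc(f_src.detach()), domain_disc(f_tgt.detach())
    d_loss = F.binary_cross_entropy(d_src, torch.ones_like(d_src)) + \
             F.binary_cross_entropy(d_tgt, torch.zeros_like(d_tgt))
    opt_disc.zero_grad(); d_loss.backward(); opt_disc.step()

    # Step 2: update the encoder to fool the (now fixed) discriminator,
    # plus the usual classification loss on the labeled source data
    d_tgt = domain_disc(encoder(x_tgt))
    fool_loss = F.binary_cross_entropy(d_tgt, torch.ones_like(d_tgt))
    cls_loss = F.cross_entropy(classifier(encoder(x_src)), y_src)
    opt_enc.zero_grad(); opt_cls.zero_grad()
    (fool_loss + cls_loss).backward()
    opt_enc.step(); opt_cls.step()
```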
So, to recap the back and forth: first we train the domain discriminator to do a good job of telling the domains apart; then we fix it and train the encoder to fool the discriminator so it can no longer tell the domains apart. If everything goes well, we end up with aligned distributions.

Does this actually work? Let's look at our digits example. Here we again have two digit domains, and before adaptation the distributions of the red and blue points were very different. After applying this adversarial alignment to the features, the feature distributions are very well aligned — you more or less cannot tell the difference between the red and blue distributions. And not only does it align the features, it also improves the accuracy of the classifier, because we're still training that classifier with the source-domain labels — which, by the way, is what prevents the alignment from collapsing into something silly, like mapping everything to a single point, since the classification loss still has to be satisfied. How much does it improve? Here are results from our CVPR 2017 paper on ADDA, Adversarial Discriminative Domain Adaptation: with this technique we improve accuracy significantly when training on these source domains and testing on the target domains. It's not as good on the SVHN-to-MNIST shift, which is the hardest of these. So the takeaway so far is that domain adaptation can improve the classifier's accuracy on target data without requiring any target labels at all — we just train with unlabeled data. You can think of it as a form of unsupervised fine-tuning: fine-tuning is something we often do to improve a model on a target task, but it requires labels; this is what we can do when we have only unlabeled data.

So far I've talked about domain alignment in feature space, since we were updating features. Next I want to talk about pixel-space alignment. The idea is: what if we could take our source data — the images themselves, the pixels — and make them look like they came from the target domain? We can actually do that thanks to generative adversarial networks, or GANs, which work in a very similar way to what I just described, except the discriminator looks at the whole generated image rather than the features. We can apply this idea here and train a GAN that takes source data and translates it, in image space, to look like it comes from the target domain — which we can do because we have unlabeled target data. There are a few approaches for this image-to-image translation; a famous one is CycleGAN, but essentially these are conditional GANs that use some kind of loss to align the two domains in pixel space. What's the point? If we have this translated source data, we still have its labels, but it now looks like target data, so we can train on this fake target-like data with the source labels, and hopefully that improves our classifier's error on the target.
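A sketch of that training recipe, assuming an image-to-image generator `pix2pix_G` (a hypothetical name) that has already been trained, e.g. with a CycleGAN-style objective:

```python
import torch
import torch.nn.functional as F

def pixel_adaptation_step(pix2pix_G, encoder, classifier, x_src, y_src, opt):
    with torch.no_grad():
        x_src_as_tgt = pix2pix_G(x_src)      # source pixels restyled to look like the target domain
    logits = classifier(encoder(x_src_as_tgt))
    loss = F.cross_entropy(logits, y_src)    # the labels carry over from the source images
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```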
By the way, we can still apply our previous feature alignment on top of this — we can add a domain discriminator on the features just like before and do both things in tandem, and on many problems doing both does improve performance further.

Let me show you an example. Here the task is semantic pixel labeling: the network should label each pixel as one of several categories, like road, car, pedestrian, or sky. We want to train on the GTA domain, rendered from the Grand Theft Auto game — a nice source of data because the labels are essentially free, we just get them from the game — and then test on the Cityscapes dataset, a real-world dataset collected in multiple cities in Germany. I'll show you the result of doing pixel-to-pixel domain alignment between these two domains: here we're actually taking the real dataset and translating it to the game — the original video is from Cityscapes and we're translating it into the GTA style. So what happens if we apply this idea to domain adaptation? Our source domain is GTA; here's an example of an adapted source image, translated from GTA into the real domain. When we train the classifier on these translated images, per-pixel accuracy goes up from 54% to 83.6% on this task — a really good improvement, again without any additional labels on the target domain. And going back to our digit problem — remember that really difficult shift from Street View house numbers to MNIST — with pixel-space adaptation we can take the SVHN source images and make them look like MNIST images; the middle column shows the images on the left, originally from SVHN, translated to look like MNIST. If we train on these, accuracy on MNIST improves to 90.4%, compared to around 76% with feature-space alignment alone, so we improve on that quite a bit. The takeaway here is that unsupervised image-to-image translation can discover and align the corresponding structures in the two domains — there is corresponding structure, since we have digits in both domains, and the method aligns the domains by discovering those structures and making them correspond to each other.

Next I want to move on to few-shot pixel alignment. So far — I didn't say this explicitly — we've assumed we have quite a lot of unlabeled target data. In the game-to-real case, we took many thousands of real-world images; they weren't labeled, but we had a lot of them. What happens if we only have a few images from the target domain? It turns out the methods I've described can't really handle that case; they need a lot more images. So, with my graduate student and my collaborator at NVIDIA, we came up with a method that can do translation with just one or a few — maybe two, three, up to five — target-domain images. Suppose we have a source domain with a lot of labeled images; here we'll look at an example with
great so next i want to move on to talk about few-shot pixel alignment okay so so far i didn't tell you this explicitly but we actually assumed that we have quite a lot of unlabeled target data right so in the case of adapting from the game to the real data set we took a lot of images from the real world they weren't labeled but we had a lot of them many thousands of images what happens if we only have a few images from our target domain well it turns out the methods that i talked about can't really handle that case they need a lot more images so what we did with my graduate student and my collaborator at nvidia was come up with a method that can do translation with just one or a few maybe two or three or up to five images in the target domain so suppose we have our source domain where we have a lot of images that are labeled here we're going to look at an example of different animal species so the domain will be the species of the animal so here we have a particular breed of dogs and now we want to translate this image into a different domain which is this other breed of dog but we only have one example of this breed of dog so our target domain is only given to us in one image and then our goal is to output a translated version of our source image that preserves the content of that source image but adds the style of the target domain image so here the content is the pose of the animal and the style is the species of the animal so in this case it's the breed of the dog and you can see that we're actually able to do that fairly successfully because as you can see we've preserved the pose of the dog but we've changed the breed of the dog to the one from the target image okay so this is a pretty cool result and the way we've achieved this is by modifying an existing model called FUNIT which you see on the left here basically by updating the style encoder part of this model so we call it COCO or content-conditioned style encoder and so the way our model works is it takes the content image and the style image it encodes the content using an encoder this is just a convolutional network and then it also takes both the style and the content and encodes them as a style vector and then this image decoder takes the content vector and the style vector and combines them together to generate the final output image and there's a GAN loss on that image to make sure that we're generating images that look like our target so the main difference between the previous work FUNIT and ours which we call COCO-FUNIT is that this style encoder is structured differently it's conditioned on both the content image and the style image okay so if we look under the hood of this model in some more detail the main difference again is in the style encoder right so it takes the style image encodes it with features and it also learns a separate style bias vector which is concatenated with the image encoding and these are parameters that are learned for the entire data set so they're constant they don't depend on the image essentially what that does is it helps the model learn how to account for pose variation because in different images we'll have sometimes very drastic change in pose in one image we see the whole body of the animal and then we could have a very occluded animal with just the head visible like in this example and then the content encoding is combined with these style encodings to produce the final style code which is used in the adaptive instance normalization framework if you're familiar with that if not don't worry about it it's just some way to combine these two vectors to generate an image
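The talk only describes the style encoder at a high level, so the following is a very rough sketch of the general idea of a content-conditioned style encoder: encode the style image, concatenate a learned dataset-level style bias, condition on a summary of the content features, and emit one style code for an AdaIN-style decoder. The layer sizes, the fusion scheme, and every name below are assumptions for illustration, not the published COCO-FUNIT architecture.

```python
import tensorflow as tf

def conv_block(filters):
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(filters, 4, strides=2, padding="same"),
        tf.keras.layers.LeakyReLU(0.2),
    ])

content_encoder = tf.keras.Sequential([conv_block(64), conv_block(128)])
style_backbone = tf.keras.Sequential([conv_block(64), conv_block(128),
                                      tf.keras.layers.GlobalAveragePooling2D()])
# Learned, image-independent style bias (constant across the whole dataset).
style_bias = tf.Variable(tf.zeros([1, 128]), name="style_bias")
style_head = tf.keras.layers.Dense(256)     # produces the final style code

def encode(content_img, style_img):
    c = content_encoder(content_img)                  # spatial content code
    s_img = style_backbone(style_img)                 # style-image features
    c_pooled = tf.reduce_mean(c, axis=[1, 2])         # content summary vector
    # Content-conditioned style: fuse style features, style bias, and content.
    batch = tf.shape(s_img)[0]
    s = style_head(tf.concat(
        [s_img, tf.tile(style_bias, [batch, 1]), c_pooled], axis=-1))
    return c, s   # a decoder (e.g., using AdaIN) would turn these into an image
```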
so here are some example outputs from our model on top we have a style image so it's an image from our target species that we want our animal to look like and below that is the content which is the source essentially the pose that we want to preserve and then in the bottom row you see the generated result which our model produced and so you can see that we're actually able to preserve the pose of the content image pretty well but combine it with the style or the species of the target style image and sometimes we even make something that is a cat look more like a dog because the target domain is a dog breed but the pose is the same as the original cat image or here in the last one it's actually a bear that's generated to look like a dog so if we compare this to the previous method FUNIT that i mentioned before we see that our model is getting significantly better generations than FUNIT which in this case a lot of the time fails to produce realistic images it's just not generating images that are convincing or photorealistic and so here i'm going to play a video just to show you a few more results here we're actually taking a whole video and translating it into some target domain so you see various domains on top here so the same input video is going to be translated into each of these target domains where we have two images for each target domain right so the first one is actually a fox and now there's another example here with different bird species so you can see that the pose of the bird is preserved from the original video but its species is changed to the target so there are varying levels of success there but overall it's doing better than the previous approach and here's another final example here we're again taking the content image combining it with the style and generating the output not really sure what species this would be some kind of strange new species okay so the takeaway is that by conditioning on the content and style image together we're able to improve the encoding of style and improve the domain translation in this few-shot case all right so i have a little bit of time left i think how much time do i have actually about 10 minutes yes okay so in just the last few minutes i want to talk about more recent work that we've done that goes beyond these alignment techniques that i talked about and actually improves on them and the first one is self-supervised learning so one assumption that all of these methods i talked about make is that the categories are the same in the source and target and they actually break if that assumption is violated so why would we violate this assumption so suppose we have a source domain of objects and we want to transfer to a target domain from this real source to say a drawings domain but in the drawings domain we have some images of which some are the same categories that we have in the source but some of the source categories are missing in our target domain suppose we don't have cup or cello and we might even have new categories in the target domain that are not present in the source okay so here what we have is a case of category shift not just feature shift not just visual domain shift but actually the categories are shifting and so this is a difficult case for domain alignment because in domain alignment we always assume that the whole domain should be aligned together and if we try to do that in this case we'll have catastrophic adaptation results so we actually could do worse than just doing nothing doing no adaptation so in our recent paper from NeurIPS 2020 we propose a solution to this that doesn't use domain alignment but uses self-supervised learning okay so the idea is let's say we have some source data which is labeled that's the blue points here we have some target which is unlabeled and some of those points could be unknown classes right so the first thing we do is we find pairs of points that are close together and we train a feature extractor in such a way that these points are embedded close together so we're basically trying to cluster neighboring points even closer together while pushing far away points even further apart and we can do that because we're starting with a pre-trained model already so let's say it's pre-trained on imagenet so it already gives us a pretty good initialization so after this neighborhood clustering which is an unsupervised loss or we can call it self-supervised we get a better clustering of our features that already clusters the unlabeled target points from the known classes closer to the source points from the known classes and clusters the yellow unknown classes in the target away from those known classes now what we want to do is add an entropy separation loss which further pushes points that have high entropy away from the known classes so this is essentially an outlier rejection mechanism right so if we look at a point and we see that it has very high entropy it's probably an outlier so we want to reject it and push it even further away and so finally what we obtain is an encoder that gives us a feature distribution where points of the same class are clustered close to the source but points of novel classes are clustered away from the source
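Roughly, the two losses just described could be written down as follows. This is a hedged sketch in the spirit of that approach: a neighborhood-clustering loss that minimizes the entropy of each target point's similarity distribution over a feature memory bank, and an entropy-separation loss that pushes each example's prediction entropy away from a threshold so that likely-unknown points get rejected. The temperature, threshold, and margin values are placeholder assumptions, not the paper's settings.

```python
import tensorflow as tf

def neighborhood_clustering_loss(target_feats, feature_bank, temperature=0.05):
    f = tf.math.l2_normalize(target_feats, axis=1)
    bank = tf.math.l2_normalize(feature_bank, axis=1)
    sims = tf.matmul(f, bank, transpose_b=True) / temperature
    p = tf.nn.softmax(sims, axis=1)
    # Minimizing the entropy of this similarity distribution encourages each
    # target point to commit to a few near neighbors, clustering them together.
    return tf.reduce_mean(-tf.reduce_sum(p * tf.math.log(p + 1e-8), axis=1))

def entropy_separation_loss(class_logits, threshold=0.5, margin=0.1):
    p = tf.nn.softmax(class_logits, axis=1)
    entropy = -tf.reduce_sum(p * tf.math.log(p + 1e-8), axis=1)
    gap = entropy - threshold
    # Points whose entropy is clearly above the threshold (likely unknown
    # classes) are pushed to even higher entropy, confident points to lower;
    # points inside the margin are left alone.
    return tf.reduce_mean(tf.where(tf.abs(gap) > margin, -tf.abs(gap),
                                   tf.zeros_like(gap)))
```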
okay so if we apply this on the data set called the VisDA challenge which is training on synthetic images and adapting to a target domain which is real images but some of those categories are missing in the target and again we don't know which ones in real life because the target is unlabeled right so if we apply this DANCE approach that i just described we can improve performance compared to a lot of the recent domain adaptation methods and also compared to just training on the source so we get pretty low performance if we train only on the source data and then if we do domain alignment on the entire domain that's this DANN method we actually see that we have worse accuracy than doing nothing than just training on the source again because this same-category assumption is violated in this problem okay but with our method we're actually able to do much better than just training on source and improve accuracy okay and then finally i want to mention another cool idea that has become more prevalent recently in the semi-supervised literature and we can actually apply it here as well so here we start again with self-supervised pre-training on the source and target domains but in this case we're doing a different self-supervised task instead of clustering points here we are predicting the rotation of images so we can rotate an image and we know exactly what orientation it's in but then we train our feature extractor to predict that orientation for example is it rotated 90 degrees or zero degrees okay so but again that's just another self-supervised task it helps us pre-train a better feature encoder which is more discriminative for our source and target domain and then we apply this consistency loss so what is the consistency loss so here we're going to do some data augmentation on our unlabeled images okay so we're going to take our pre-trained model and then use that model to generate probability distributions over the classes on the original image and also on the augmented unlabeled image where the augmentation is you know cropping color transformation adding noise adding small rotations and things like that so it's designed to preserve the category of the object but it changes the image and then we take these two probability outputs and we add a loss which ensures that they're consistent right so we're telling our model look if you see an augmented version of this image you should still predict the same category for that image we don't know what it is because the image is unlabeled but it should be the same as the original image
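As a hedged illustration of those two ingredients, here is a small TensorFlow sketch: a rotation-prediction loss for self-supervised pre-training, and a consistency loss that asks the model to give the same class distribution for an unlabeled image and an augmented copy of it. The specific augmentations, the KL formulation, and all names are assumptions, not the exact recipe from the paper.

```python
import tensorflow as tf

rotation_head = tf.keras.layers.Dense(4)   # predict 0 / 90 / 180 / 270 degrees

def rotation_pretrain_loss(encoder, images):
    # Rotate the whole batch by one random multiple of 90 degrees (a
    # simplification; per-image rotations would also work).
    k = tf.random.uniform([], 0, 4, dtype=tf.int32)
    rotated = tf.image.rot90(images, k)
    logits = rotation_head(encoder(rotated))
    labels = tf.fill([tf.shape(images)[0]], k)
    return tf.reduce_mean(tf.keras.losses.sparse_categorical_crossentropy(
        labels, logits, from_logits=True))

kl = tf.keras.losses.KLDivergence()

def consistency_loss(model, unlabeled_images):
    # Prediction on the original image (treated as a fixed target) should
    # match the prediction on an augmented copy of that same image.
    augmented = tf.image.random_brightness(
        tf.image.random_flip_left_right(unlabeled_images), max_delta=0.4)
    p_orig = tf.stop_gradient(tf.nn.softmax(model(unlabeled_images), axis=1))
    p_aug = tf.nn.softmax(model(augmented), axis=1)
    return kl(p_orig, p_aug)
```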
so with this idea we call the combination of this rotation prediction pre-training and consistency training PAC and this is just one small taste of the results we got just because i don't have much time but essentially here we are again adapting from the synthetic data set in the VisDA challenge to real images but now we are assuming a few examples are labeled in our target domain and we are actually able just with this PAC method to improve a lot on the domain alignment method which is called MME that's our previous work in this case so basically the point that i want you to take away from this is that domain alignment is not the only approach and we can use other approaches like self-supervised training and consistency training to improve performance on target data all right so i'll stop here just to summarize what i talked about i hope i've convinced you that data set bias is a major problem and i've talked about how we can solve it using domain adaptation techniques which try to transfer knowledge using unlabeled data and we can think of this as a form of unsupervised fine-tuning and the techniques i talked about include adversarial alignment and also some other techniques that rely on self-supervision and consistency training and so i hope you enjoyed this talk and if you have any questions i'll be very happy to answer them
MIT_6S191_Introduction_to_Deep_Learning
MIT_6S191_2018_Introduction_to_Deep_Learning.txt
good morning everyone thank you for thank you all for joining us this is MIT six s-191 and we'd like to welcome to welcome you to this course on introduction to deep learning so in this course you'll learn how to build remarkable algorithms intelligent algorithms capable of solving very complex problems that just a decade ago were not even feasible to solve and let's just start with this notion of intelligence so at a very high level intelligence is the ability to process information so that it can be used to inform future predictions and decisions now when this intelligence is not engineered but rather a biological inspiration such as in humans it's called human intelligence but when it's engineered we refer to it as artificial intelligence so this course is a course on deep learning which is just a subset of artificial intelligence and really it's just a subset of machine learning which involves more traditional methods where we tried to learn representations directly from data and we'll talk about this more in detail later today but let me first just start by talking about some of the amazing successes that deep learning has had in the past so in 2012 this competition called imagenet came out which tasked AI researchers to build an AI system capable of recognizing images objects in images and there was millions of examples in this data set and the winner in 2012 for the first time ever was a deep learning based system and a when it came out it absolutely shattered all other competitors and crushed the competition across the country crush the challenge and today these deep learning based systems have actually surpassed human level accuracy on the image net challenge and can actually recognize images even better than humans can now in this class you'll actually learn how to build complex vision systems building a computer that how to see and just tomorrow you'll learn how to build an algorithm that will take as input x-ray images and as output it will detect if that person has a pneumothorax just from that single input image you'll even make the network explain to you why it decided to diagnose the way it diagnosed by looking inside the network and understanding exactly why I made that decision deep neural networks can also be used to model sequences where your data points are not just single images but rather temporally dependent so for this you can think of things like predicting the stock price translating sentences from English to Spanish or even generating new music so actually today you'll learn how to create and actually you'll create yourselves an algorithm that learns that first listens to hours of music learns the underlying representation of the notes that are being played in those songs and then learns to build brand new songs that have never been heard before and there are really so many other incredible success stories of deep learning that I could talk for many hours about and will try to cover as many of these as possible as part of this course but I just wanted to give you an overview of some of the amazing ones that we'll be covering as part of the labs that you'll be implementing and that's really the goal of what we want you to accomplish as part of this class firstly we want to provide you with the foundation to do deep learn to understand what these algorithms are doing underneath the hood how they work and why they work we will provide you some of the practical skills to implement these algorithms and deploy them on your own machines and will talk to you about some 
of the stating state of art and cutting edge research that's happening in deep learning industries and deep learning academia institutions finally the main purpose of this course is we want to build a community here at MIT that is devoted to advancing the state of artificial intelligence advancing a state of deep learning as part of this course we'll cover some of the limitations of these algorithms there are many we need to be mindful of these limitations so that we as a community can move forward and create more intelligent systems but before we do that let's just start with some administrative details in this course so this course is a one-week course today is the first lecture we meet every day this week 10:30 a.m. to 1:30 p.m. and this during this three hour time slot were broken down into one and a half hour time slots around 50% of the course see each and each of those have half sections of this course will consist of lectures which is what you're in right now and the second part is the labs where you'll actually get practice implementing what you learn in lectures we have an amazing set of lectures lined up for you so today we're going to be talking about some of the introduction to neural networks which is really the backbone of deep learning we're also talking about modeling sequence data so this is what I was mentioning about the temporally dependent data tomorrow we'll talk about computer vision and deep generative models we have one of the inventors of generative adversarial networks coming to give that lecture for us so that's going to be a great lecture and the day after that we'll touch on deep reinforcement learning and some of the open challenges in AI and how we can move forward past this course we'll spend the final two days of this course talking or hearing from some of the leading industry representatives doing deep learning in their respective companies and these are bound to be extremely interesting or extremely exciting so I highly recommend attending these as well for those of you who are taking this course for credit you have two options to fulfill your graders assignment the first option is a project proposal it's a one-minute project pitch that will take place during Friday and for this you have to work in groups of three or four and what you'll be tasked to do is just come up with interesting deep learning idea and try to show some sort of results if possible we understand that one week is extremely short to create any type of results or even come up with a interesting idea for that matter but we're going to be giving out some amazing prizes so including some nvidia gpus and google homes on friday you'll like I said give a one-minute pitch there's somewhat of an arts to your idea in just one minute even though it's extremely short so we will be holding you to a strict deadline of that one minute the second option is a little more boring but you'll be able to write a one-page paper about any deep learning paper that you find interesting and really that's if you can't do the project proposal you can do that this class has a lot of online resources you can find support on Piazza please post if you have any questions about the lectures the labs installing any of the software etc also try to keep up to date with the course website we'll be posting all of the lectures labs and video recordings online as well we have an amazing team that you can reach out to at any time in case you have any problems with anything feel free to reach out to any of us and we wanted to 
give a huge thanks to all of our sponsors who without this without their support this class would simply not happened the way the way it's happening this year so now let's start with the fun stuff and let's start by actually asking ourselves a question why do we even care about deep learning so why now and why do we why do we even sit in this class today so traditional machine learning algorithms typically define sets of pre-programmed features and the data and they work to extract these features as part of their pipeline now the key differentiating point of deep learning is that it recognizes that in many practical situations these features can be extremely brittle so what deep learning tries to do is learn these features directly from data as opposed to being hand engineered by the human that is can we learn if we want to learn to detect faces can we first learn automatically from data that to detect faces we first need to detect edges in the image compose these edges together to detect eyes and ears then compose these eyes and ears together to form higher-level facial structure and in this way deep learning represents a form of a hierarchical model capable of representing different levels of abstraction in the data so actually the fundamental building blocks of deep learning which are neural networks have actually been existing have actually existed for decades so why are we studying this now well there's three key points here the first is that data has become much more pervasive we're living in a big data environment these algorithms are hungry for more and more data and accessing that data has become easier than ever before second these algorithms are massively parallel Liza below and can benefit tremendously from modern GPU architectures that simply just did not exist just less more than a decade ago and finally due to open-source tool boxes like tensor flow building and deploying these algorithms has become so streamlined so simple that we can teach it in a one-week course like this and it's become extremely deployable for the massive public so let's start with now looking at the fundamental building block of deep learning and that's the perceptron this is really just a single neuron in a neural network so the idea of a perceptron or a single neuron is extremely simple let's start by talking about the forward propagation of information through this data unit we define a set of inputs x1 through XM on the left and all we do is we multiply each of these inputs by their corresponding weight theta1 through theta m which are those arrows we take this weighted we take this weighted combination of all of our inputs sum them up and pass them through a nonlinear activation function and that produces our output why it's that simple so we have M inputs one output number and you can see it summarized on the right-hand side as a mathematic single mathematical equation but actually I left that one important detail that makes the previous slide not exactly correct so I left that this notion of a bias a bias is a that green term you see on the left and this just represents some way that we can allow our model to learn or we can allow our activation function to shift to the left or right so it allows if we provide allows us to when we have no input features to still provide a positive output so on this equation on the right we can actually rewrite this using linear algebra and dot products to make this a lot cleaner so let's do that let's say X capital X is a vector containing all of our inputs x1 
through XM capital theta is now just a vector containing all of our Thetas theta 1 to theta M we can rewrite that equation that we had before is just applying a dot product between X and theta adding our bias theta 0 and apply our non-linearity G now you might be wondering since I've mentioned this a couple times now what is this nonlinear function G well I said it's the activation function but let's see an example of what in practice G actually could be so one very popular activation function is the sigmoid function you can see a plot of it here on the bottom right and this is a function that takes its input any real number on the x-axis and transforms it to an output between 0 and 1 because all outputs of this function are between 0 & 1 it makes it a very popular choice in deep learning to represent probabilities in fact there are many types of nonlinear activation functions in Durrell networks and here are some of the common ones throughout this presentation you'll also see tensorflow code snippets like the ones you see on the bottom here since we'll be using tensorflow for our labs and well this is some way that I can provide to you to kind of link the material in our lectures with what you'll be implementing in labs so the sigmoid activation function which I talked about in the previous slide now on the left is it's just a function like I said it's commonly used to produce probability outputs each of these activation functions has their own advantages and disadvantages on the right a very common activation function is the rectified linear unit or Lu this function is very popular because it's extremely simple to compute it's piecewise linear it's zero before with inputs less than zero it's X with any input greater than zero and the gradients are just zero or one with a single non-linearity at the origin and you might be wondering why we even need activation functions why can't we just take our dot product at our bias and that's our output why do we need the activation function activation functions introduce nonlinearities into the network that's the whole point of why activations themselves are nonlinear we want to model nonlinear data in the world because the world is extremely nonlinear but suppose I gave you this this plot green and red points and I asked you to draw a single line not a curve just a line between the green and red points to separate them perfectly you'd find this really difficult and probably you could get as best as something like this now if your activation function in your deep neural network was linear since you're just composing linear functions with linear functions your output will always be linear so the most complicated deep neural network no matter how big or how deep if the activation function is linear your output can only look like this but once we introduce nonlinearities our network is extremely more as the capacity of our network has extremely increased we're now able to model much more complex functions we're able to draw decision boundaries that were not possible with only linear activation options let's understand this with a very simple example imagine I gave you a train to network like the one we saw before sorry a trained perceptron not in network yet just a single node and the weights are on the top right so theta 0 is 1 and the theta vector is 3 and negative 2 the network has two inputs X 1 and X 2 and if we want to get the output all we have to do is apply the same story as before so we apply the dot product of X and theta we add the bias and 
apply our non-linearity but let's take a look at what's actually inside before we apply that non-linearity this looks a lot like just a 2d line because we have two inputs and it is we can actually plot this line when it equals zero in feature space so this is space where I'm plotting x1 one of our features on the x-axis and x2 the other feature on the y-axis we plot that line it's just the decision boundary separating our entire space into two subspaces now if I give you a new point negative 1/2 and plot it on the sub in this feature space depending on which side of the line it falls on I can automatically determine whether our output is less than 0 or greater than 0 since our line represents a decision boundary equal to 0 now we can follow the math on the bottom and see that computing the inside of this activation function we get 1 minus 3 minus 2 sorry minus 4 and we get minus 6 at the output before we apply the activation function once we apply the activation function we get zero point zero zero two so negative what was applied to the activation function is negative because we fell on the negative piece of this subspace well if we remember with the sigmoid function it actually divides our space into two parts greater than 0.5 and less than 0.5 since we're modeling probabilities and everything is between 0 & 1 so actually our decision boundary where the input to our network equals 0 sorry the side the input to our activation function equals 0 corresponds to the output of our activation function being greater than or less than 0.5 so now that we have an idea of what a perceptron is let's just start now by understanding how we can compose these perceptrons together to actually build neural networks and let's see how this all comes together so let's revisit our previous diagram of the perceptron now if there's a few things that you learned from this class let this be one of them and we'll keep repeating it over and over in deep learning you do a dot product you apply a bias and you add your non-linearity you keep repeating that many many times three each node each neuron and your neural network and that's a neural network so it's simplify this diagram a little I remove the bias since we were going to always have that and we just take you for granted from now on I'll remove all of the weight labels for simplicity and note that Z is just the input to our activation function so that's just the dot product plus our bias if we want the output of the network Y we simply take Z and we apply our non-linearity like before if we want to define a multi output perceptron it's very simple we just add another perceptron now we have two outputs y1 and y2 each one has weight matrices it has weight vector theta corresponding to the weight of each of the inputs now let's suppose we want to go the next step deeper we want to create now a single layered neural network single layered neural networks are actually not deep networks yet they're only there's still shallow networks they're only one layer deep but let's look at the singledom layered neural network where now all we do is we have one hidden layer between our inputs and outputs we call this a hidden layer because it's states are not directly observable they're not directly enforced by by the AI designer we only enforce the inputs and outputs typically the states in the middle are hidden and since we now have a transformer to go from our input space to our hidden hidden lair space and from our hidden lair space to our output layer space we actually need 
two weight matrices theta 1 and theta 2 corresponding to the weight matrices of each layer now if we look at just a single unit in that hidden layer it's the exact same story as before it's one perceptron we take its top product with all of the X's that came before it and we apply I'm sorry we take the dot product of the X's that came before with the weight matrices theta is theta one in this case we apply a bias to get Z 2 and if we look we're to look at a different hidden unit let's say Z 3 instead we would just take different weight matrices different our dot product to change our bias would change but and this means that Z would change which means this activation would also be different so from now on I'm going to use this symbol to denote what is called as a fully connected layer and that's what we've been talking about so far so that's every node and one layer is connected to every node and another layer by these weight matrices and this is really just for simplicity so I don't have to keep redrawing those lines now if we want to create a deep neural network all we do is keep stacking these layers and fully connected weights between the layers it's that simple but the underlying building block is that single perceptron set single dot product non-linearity and bias that's it so this is really incredible because something so simple at the foundation is still able to create such incredible algorithms and now let's see an example of how we can actually apply neural networks to a very important question that I know you are all extremely worried about you care a lot about here's the question you want to build an AI system that answers the following question will I pass this class yes or no one or zero is the output to do this let's start by defining a simple two feature model one feature is the number of lectures that you attend the second feature is the number of hours that you spend on your final project let's plot this data in our feature space reply Greenpoint's are people who pass red points are people I fail we want to know given a new person this guy he spent ersity they spent five hours on their final project and we went to four lectures we want to know did that person pass or failed a class and we want to build a neural network that will determine this so let's do it we have two inputs one is for the others five we have one hidden layer with three units and we want to see the final output probability of passing this class and we computed as 0.1 or 10% well that's really bad news because actually this person did pass the class they passed it with probability one now can anyone tell me why the neural network got this such so wrong why I do this yeah it is exactly so this network has never been trained it's never seen any data it's basically like a baby it's never learned anything so we can't expect it to solve a problem and knows nothing about so to do this to tackle this problem of training a neural network we have to first define a couple of things so first we'll talk about the loss the loss of a network basically tells our algorithm or our model how wrong our predictions are from the ground truth so you can think of this as a distance between our predicted output and our actual output if the two are very close if we predict something that is very close to the true output our loss is very low if we predict something that is very far in a high-level sense far like in distance then our loss is very high and we want to minimize this from happening as much as possible now let's assume 
we're not given just one data point one student but we're given a whole class of students so as previous data I used this entire class from last year and if we want to quantify what's called the empirical loss now we care about how the model did on average over the entire data set not for just a single student but across the entire data set and how we do that is very simple we just take the average of the loss of each data point if we have n students it's the average over end data points this has other names besides empirical law sometimes people call it the objective function the cost function etc all of these terms are completely the same thing now if we look at the problem of binary classification predicting if you pass or fail this class yes or no 1 or 0 we can actually use something that's called the softmax cross entropy loss now for those of you who aren't familiar with cross entropy or entropy this is a extremely powerful notion that was actually developed or first introduced here at MIT over 50 years ago by Claude Shannon and his master's thesis like I said this was 50 years ago it's huge in the field of signal processing thermodynamics really all over computer science that seen in information theory now instead of predicting a single one or zero output yes or no let's suppose we want to predict a continuous valued function not will I pass this class but what's the grade that I will get and then as a percentage let's say 0 to 100 now we're no longer limited to 0 to 1 but can't actually output any real number on the number line now instead of using cross entropy we might want to use a different loss and for this let's think of something like a mean squared error loss whereas your predicted and your true output diverged from each other the loss increases as a quadratic function ok great so now let's put this new loss information to the test and actually learn how we can train a neural network by quantifying its loss and really if we go back to what the loss is at the very high level the loss tells us how the network is performing right that loss tells us the accuracy of the network on a set of examples and what we want to do is basically minimize the loss over our entire training set really we want to find the set of parameters theta such that that loss J of theta that's our empirical loss is minimum so remember J of theta takes as input theta and theta is just our weight so these are the things that actually define our network remember that the loss is just a function of these weights if we want to think about the process of training we can imagine this landscape so if we only have two weights we can plot this nice diagram like this theta zero and theta one are our two weights they're on the four they're on the planar axis on the bottom J of theta zero and theta one are plotted on the z axis what we want to do is basically find the minimum of this loss of this landscape if we can find the minimum then this tells us where our loss is the smallest and this tells us where theta want with where or what values of theta zero and theta one we can use to attain that minimum loss so how do we do this well we start with a random guess we pick a point theta zero theta one and we start there we compute the gradient of this point on the lost landscape that's DJ D theta it's how the loss is changing with respect to each of the weights now this gradient tells us the direction of highest ascent not descent so this is telling us the direction going towards the top of the mountain so let's take a 
small step in the opposite direction so we negate our gradient and we adjust our weight such that we step in the opposite direction of that gradient such that we move continuously towards the lowest point in this landscape until we finally converge at a local minima and then we just stop so let's summarize this with some pseudocode so we randomly initialize our weights we loop until convergence the following we compute the gradient at that point and simply we apply this update rule where the update takes as input the negative gradient now let's look at this term here this is the gradient like I said it explains how the lost changes with respect to each weight in the network but I never actually told you how to compute this this is actually a big big issue in neural networks I just kind of took it for granted so now let's talk about this process of actually computing this gradient because it's not that gradient you kind of helpless you have no idea which way down is you don't know where to go in your landscape so let's consider a very simple neural network probably the simplest neural network in the world it contains one hidden unit one hidden layer and one output unit and we want to compute the gradient of our loss J of theta with respect to theta to just data to for now so this tells us how a small change in theta 2 will impact our final loss at the output so let's write this out as a derivative we can start by just applying a chain rule because J of theta is dependent on Y right so first we want to back propagate through Y our output all the way back to theta 2 we can do this because Y our output Y is only dependent on the input and theta 2 that's it so we're able to just from that perceptron equation that we wrote on the previous slide compute a closed-form gradient or closed form derivative of that function now let's suppose I change theta to 2 theta 1 and I want to compute the same thing but now for the previous layer and the previous weight all we need to do is apply the chain rule one more time back propagate those gradients that we previously computed one layer further it's the same story again we can do this for the same reason this is because z1 our hidden state is only dependent on our previous input X and that single weight theta one now the process of back propagation is basically you repeat this process over and over again for every way in your network until you compute that gradient DJ D theta and you can use that as part of your optimization process to find your local minimum now in theory that sounds pretty simple I hope I mean we just talked about some basic chain rules but let's actually touch on some insights on training these networks and computing back propagation in practice now the picture I showed you before is not really accurate for modern deep neural network architectures modern deep neural network architectures are extremely non convex this is an illustration or a visualization of the landscape like I've plotted before but of a real deep neural network of ResNet 50 to be precise this was actually taken from a paper published about a month ago where the authors attempt to visualize the lost landscape to show how difficult gradient descent can actually be so there's a possibility that you can get lost in any one of these local minima there's no guarantee that you'll actually find a true global minimum so let's recall that update equation that we defined during gradient descent let's take a look at this term here this is the learning rate I didn't talk too much 
about it but this basically determines how large of a step we take in the direction of our gradient and in practice setting this learning rate it's just a number but setting it can be very difficult if we set the learning rate too low then the model may get stuck in a local minima and may never actually find its way out of that local minima because at the bottom a local minima obviously your gradient is 0 so it's just going to stop moving if I set the learning rate to large it could overshoot and actually diverge our model could blow up ok ideally we want to use learning rates that are large enough to avoid local minima but also still converge to our global minima so they can overshoot just enough to avoid some local local minima but then converge to our global minima now how can we actually set the learning rate well one idea is let's just try a lot of different things and see what works best but I don't really like the solution let's try and see if we can be a little smarter than that how about we tried to build an adaptive algorithm that changes its learning rate as training happens so this is a learning rate that actually adapts to the landscape that it's in so the learning rate is no longer a fixed number it can change it can go up and down and this will change depending on the location that that the update is currently at the gradient in that location may be how fast were learning and many other many other possible situations in fact this process of optimization in in deep neural networks and non convex situation has been extremely explored there's many many many algorithms for computing adaptive learning rates and here are some examples that we encourage you to try out during your labs to see what works best and for your problems especially real-world problems things can change a lot depending on what you learn in lecture and what really works in lab and we encourage you to just experiment get some intuition about each of these learning rates and really understand them at a higher level so I want to continue this talk and really talk about more of the practice of deep neural networks this incredibly powerful notion of mini batching and I'll focus for now if we go back to this gradient descent algorithm this is the same one that we saw before and let's look at this term again so we found out how to compute this term using back propagation but actually what I didn't tell you is that the computation here is extremely calm is extremely expensive we have a lot of data points potentially in our data set and this takes as input a summation over all of those data points so if our data set is millions of examples large which is not that large and the realm of today's deep neural networks but this can be extremely expensive just for one iteration so we can compute this on every iteration instead let's create a variant of this algorithm called stochastic gradient descent where we compute the gradient just using a single training example now this is nice because it's really easy to compute the gradient for a single training example it's not nearly as intense as over the entire training set but as the name might suggest this is a more stochastic estimate it's much more noisy it can make us jump around the landscape in ways that we didn't anticipate doesn't actually represent the true gradient of our data set because it's only a single point so what's the middle ground how about we define a mini batch of B data points compute the average gradient across those B data points and actually use that 
as an estimate of our true gradient now this is much faster than computing the estimate over the entire batch because B is usually something like 10 to 100 and it's much more accurate than SGD because we're not taking a single example but we're learning over a smaller batch a larger batch sorry now the more accurate our gradient estimation is that means the more or the easier it will be for us to converge to the solution faster means will converge smoother because we'll actually follow the true landscape that exists it also means that we can increase our learning rate to trust each update more this also allows for massively parallel Liza become petition if we split up batches on different workers on different GPUs or different threads we can achieve even higher speed ups because each thread can handle its own batch then they can come back together and aggregate together to basically create that single learning rate or completely complete that single training iteration now finally the last topic I want to talk about is that of overfitting and regularization really this is a problem of generalization which is one of the most fundamental problems in all of artificial intelligence not just deep learning but all of artificial intelligence and for those of you who aren't familiar let me just go over in a high level what overfitting is what it means to generalize ideally in machine learning we want a model that accurately describes our test data not our training data but our test data said differently we want to build models that can learn representations from our training data still generalized well on unseen test data assume we want to build a line to describe these points under fitting describes the process on the left where the complexity of our model is simply not high enough to capture the nuances of our data if we go to overfitting on the right we're actually having to complex of a model and actually just memorizing our training data which means that if we introduce a new test data point it's not going to generalize well ideally what we want to something in the middle which is not too complex to memorize all the training data but still contains the capacity to learn some of these nuances in this in the test set so address to address this problem let's talk about this technique called regularization now regularization is just this way that you can discourage your models from becoming too complex and absolutely as we've seen before this is extremely critical because we don't want our data we don't want our models to just memorize data and only do well in our training set one of the most popular techniques for regularization in neural networks is dropout this is an extremely simple idea let's revisit this picture of a deep neural network and then drop out all we do during training on every iteration we randomly drop some proportion of the hidden neurons with some probability P so let's suppose P equals 0.5 that means we dropped 50% of those neurons like that those activations become zero and effectively they're no longer part of our network this forces the network to not rely on any single node but actually find alternative paths through the network and not put too much weight on any single example with any single single node so it discourages memorization essentially on every iteration we randomly drop another 50% of the node so on this iteration I may drop these on the next iteration I may drop those and since it's different on every iteration you're encouraging the network to find these 
different paths to its answer the second technique for regularization that we'll talk about is this notion of early stopping now we know that the definition of overfitting is just when our model starts to perform worse and worse on our test data set so let's use that to our advantage to create this early stopping algorithm if we set aside some of our training data and use it only as test data we don't train with that data we can use it to basically monitor the progress of our model on unseen data so we can plot this curve where on the x axis we have the training iterations and on the y axis we have the loss now they start off going down together this is great because it means that we're learning we're training right that's great there comes a point though where the loss on the testing data set starts to plateau now if we look a little further the training data set loss will always continue to go down as long as our model has the capacity to learn and memorize some of that data but that doesn't mean that it's actually generalizing well because we can see that the testing data set loss has actually started to increase this pattern continues for the rest of training but I want to focus on this point here this is the point where you need to stop training because after this point you are overfitting and your model is no longer performing well on unseen data if you stop before that point you're actually underfitting and you're not utilizing the full capacity of your network so I'll conclude this lecture by summarizing three key points that we've covered so far first we've learned about the fundamental building block of neural networks called the perceptron we've learned about stacking these units these perceptrons together to compose very complex hierarchical models and we've learned how to mathematically optimize these models using a process called backpropagation and gradient descent finally we addressed some of the practical challenges of training these models in real life that you'll find useful for the labs today such as using adaptive learning rates batching and regularization to combat overfitting thank you and I'd be happy to answer any questions now otherwise we'll have Harini talk to us about some of the deep sequence models for modeling temporal data
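As a quick check on the worked single-perceptron example from this lecture (bias theta_0 = 1, weights theta = [3, -2], input point x = [-1, 2]), here is a short TensorFlow snippet that reproduces the forward pass and uses automatic differentiation for the chain-rule gradient; the sigmoid output of roughly 0.002 matches the number quoted in the lecture, while everything else in the snippet is just one possible way to write it down.

```python
import tensorflow as tf

theta0 = tf.constant(1.0)                 # bias
theta = tf.constant([3.0, -2.0])          # weights
x = tf.constant([-1.0, 2.0])              # the new input point from the example

with tf.GradientTape() as tape:
    tape.watch(theta)
    z = theta0 + tf.tensordot(theta, x, axes=1)   # dot product plus bias: -6
    y = tf.math.sigmoid(z)                        # non-linear activation: ~0.002

print(float(z), float(y))
# Backpropagation via the chain rule: dy/dtheta = sigmoid'(z) * x
print(tape.gradient(y, theta).numpy())
```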
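To tie the practical pieces of this lecture together (stacked dense layers with a non-linearity, dropout, mini-batch gradient descent with an adaptive optimizer, and early stopping on held-out data), here is a hedged end-to-end sketch built around a synthetic version of the "will I pass this class" example; the toy data, layer sizes, and hyperparameters are placeholder assumptions rather than anything used in the actual labs.

```python
import numpy as np
import tensorflow as tf

# Toy stand-in for the two-feature example: [lectures attended, project hours].
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=(500, 2)).astype("float32")
y = (x.sum(axis=1) > 10).astype("float32")        # made-up pass/fail rule
x_train, y_train = x[:400], y[:400]
x_val, y_val = x[400:], y[400:]                   # held-out data for early stopping

model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu"),   # g(theta . x + bias)
    tf.keras.layers.Dropout(0.5),                   # randomly drop 50% of units
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(1, activation="sigmoid"), # probability of passing
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),   # adaptive learning rate
              loss="binary_crossentropy",                 # cross-entropy loss
              metrics=["accuracy"])

early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss", patience=5, restore_best_weights=True)

model.fit(x_train, y_train,
          validation_data=(x_val, y_val),
          batch_size=32,              # mini-batch gradient descent
          epochs=200,
          callbacks=[early_stop])     # stop before overfitting sets in
```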
MIT_6S191_Introduction_to_Deep_Learning
MIT_6S191_2020_Deep_Learning_New_Frontiers.txt
you okay so as Alexander mentioned in this final lecture I'll be discussing some of the limitations of deep learning and you know as with any technology not just deep learning or other areas of computer science it's important to be mindful of not only what that technology can enable but also what are some of the caveats and limitations when considering such approaches and then we'll move into discuss some of the new research directions that are specifically being taken to address some of those limitations before we dive into the technical content we have some important logistical and course related announcements which I think will be very relevant to most of you first and foremost we have class t-shirts and they've arrived and so we'll be distributing them today at the end of the lecture portion of the class and at that time we'll take a little bit of time to discuss the logistics of how you can come and receive a t-shirt for your participation in the course so to check in quickly on where we are in the course this is going to be the last lecture given by Alexander and myself and tomorrow and Friday we will have a series of guest lectures from leading researchers in industry today we'll also have our final lab on reinforcement learning and thank you everyone who has been submitting your submissions for the lab competitions the deadline for doing so is tomorrow at 5:00 p.m. and that's for lab 1 2 & 3 so if you're interested in that please email us with your entries and on Friday will be our final guest lectures and project presentations and so for those of you who are taking the course for credit as was mentioned on day one you have two options to fulfill your credit requirement and we've received some questions about the logistics so I'd like to go through them briefly here so you can present a proposal for the project proposal competition and the requirements for this you can present as an individual or in a group from one person to four people and in order to be eligible for a prize you must have at least one registered student a registered MIT student in your group and we recognize that one week is an extremely short period of time to implement you know a new deep learning approach but of course will not necessarily be judging you based on your results although results will be extremely helpful for you in the in the project competition but rather more on the novelty and the potential impact and the quality of your presentation so these are going to be really short they're going to be three minutes and we're going to hold you to that three-minute window as strictly as we can and so there's a link on this slide which you can find on the PDF version that's going to take you to a document where the instructions for the final project are laid out including the details for group submission and slide submission and yeah here are additional links for the for the final project proposal and the second option to fulfill the credit requirement is a short one page review of a recent deep learning paper and this is going to be on do on the last day of class by Friday at 1:00 p.m. 
via email to us okay so tomorrow we're going to have two guest speakers we're going to have David Cox from IBM who is actually the director of the MIT IBM Watson AI lab come and speak and we're also going to have Animesh Garg a professor at U Toronto and a research scientist at Nvidia and he's going to speak about robotics and robot learning and the lab portion of tomorrow's class will be dedicated to just open office hours where you can work with your project partners on the final project you can continue work on the labs you can come and ask us and the TAs any further questions and on Friday we're going to have two additional guest speakers so Chuan Li who is the chief scientific officer at Lambda Labs a company that builds new hardware for deep learning is going to speak about some of the research that they're doing and then we're going to have an exciting talk from the Google Brain team on how we can use machine learning to understand the scent and smell properties of small molecules and importantly on Friday will be our project proposals and our awards ceremony so if you have submitted entries for the lab competitions that is when you would be awarded prizes and so we really encourage you to attend Thursday and Friday's lectures and classes in order to be eligible to receive the prizes okay so now to get into the technical content for this last lecture from Alexander and myself so hopefully over the course of the past lectures you've seen a bit about how deep learning has enabled such tremendous applications in a variety of fields from autonomous vehicles to medicine and healthcare to advances in reinforcement learning that we just heard about generative approaches robotics and a whole host of other applications and areas of impact like natural language processing finance and security and along with this hopefully you've also established a more concrete understanding of how these neural networks actually work and largely we've been dealing with algorithms that take as input data in some form you know as signals as images or other sensory data to directly produce a decision at the output or a prediction and we've also seen ways in which these algorithms can be used in the opposite direction to generatively sample from them to create brand-new instances and data examples but really what we've been talking about is algorithms that are very well optimized to perform at a single task but they fail to generalize and go beyond that to achieve sort of a higher-order level of power and I think one really good way to understand this limitation is to go back to a fundamental theorem about the capabilities of neural networks and this was a theorem that was presented in 1989 and it generated quite the stir and it's called the universal approximation theorem and what it states is that a feed-forward neural net with a single hidden layer could be sufficient to approximate any function and we've seen deep learning models that use multiple hidden layers but this theorem is actually completely ignoring that and saying you just need one hidden layer if you believe that any problem can be reduced to a functional mapping between inputs and outputs you can build a neural net that would approximate this and while you may think that this is really an incredibly powerful statement if you look closely there are a few key limitations and considerations that we need to have first this theorem is making no guarantees about the number of hidden units or the size of
the hidden layer that would be required to make this approximation and it's also leaving open the question of how you actually go about finding those weights and optimizing the network for that task it just proves that one theoretically does exist and as we know from gradient descent this optimization can actually be really tricky and difficult in practice and finally there's no guarantees that are placed about how well such a network would generalize to other related tasks and this theorem is is sort of a perfect example of the possible effects of overhype of deep learning and artificial intelligence broadly and as a community and now you know you you all are part of that community and as a community that's interested in advancing the state of deep learning I believe that we need to be really careful about how we market and advertise these algorithms and while the universal approximation theorem generated a lot of excitement when it first came out also in some senses provided some degree of false hope to the AI community that neural nets could be used to solve any problem and this hype can be very dangerous and when you look back at the history of AI and sort of the the peaks and the Falls of the literature there have been these two AI winters where research in AI and neural networks specifically came to a halt and experienced a decline and that's kind of motivating why for the rest of this lecture we want to discuss further some of the limitations of these approaches and how we could potentially move towards addressing them okay so what are some of those limitations one of my favorite examples of a potential danger of deep neural nets comes from this paper from a couple years ago named understanding deep neural networks requires rethinking generalization and this was a paper from Google and really what they did was quite simple they took images from this huge image data set called image net and each of these images is annotated with a label right and what they did was for every image in the dataset they flipped a die that was K sided they made a random assignment of what that of what a new label for that image was going to be so for example if you randomly randomly choose the the labels for these images you could generate something like this and what this means is that these new labels that are now associated with each image are completely random with respect to what is actually present in that image and so and so if you see the two examples of the dog have two completely different labels right and so we're literally trying to randomize our data and and the labels entirely and after they did that what they then tried to do was to fit a deep neural net to this sampled data ranging from the original untouched data to data on the right where the labels were now completely randomly assigned and as you may expect the accuracy of the resulting model on the test set progressively tended to zero as you move from the true labels to the random labels but what was really interesting was what happened when they looked at the accuracy on the training set and this is what they found they found that no matter how much they randomized the labels the model was able to get 100% accuracy or a close to 100% accuracy on the training set meaning that it's basically fitting to the data and their labels and this is really a powerful example because it shows once again in a similar way as the universal approximation theorem that deep neural Nets are very very good at being able to perfectly fit or very close to 
perfectly fit any function, even if that function is a random mapping from data to labels. And to drive this point home even further, I think the best way to understand neural nets is as function approximators, and all the universal approximation theorem states is that neural networks are very good at doing exactly this. So for example, if we have this data visualized on a 2D grid, we can use a neural network to learn a function, a curve, that fits this data, and if we present it with a candidate point on the x axis it may be able to produce a strong, very likely estimate of what the corresponding y value would be. But what happens to the left and to the right? What if we extend the spread of the data a bit in those directions — how does the network perform? Well, there are absolutely no guarantees on what the data looks like in these regions, regions that the network hasn't seen before, and this is absolutely one of the most significant limitations that exists in modern deep learning. And this raises the questions of what happens when we look at these places where the model has insufficient or no training data, and how can we, as implementers and users of deep learning and deep neural nets, have a sense of when the model doesn't know — when it's not confident, when it's uncertain in making a prediction. And I think this notion leads very nicely into this other idea of adversarial attacks on neural nets. The idea here is to take some data instance, for example this image of a temple, which a standard CNN trained on image data can classify with very high accuracy, and then apply some perturbation to that image such that when we take the result after that perturbation and now feed it back into our neural network, it generates a completely nonsensical prediction, like "ostrich," about what is actually in that image. And so this is maybe a little bit shocking — why is it doing this, and how is this perturbation being created to fool the network in such a way? So remember, when we're training our networks we use gradient descent, and what that means is we have this objective function J that we're trying to optimize, and what specifically we're trying to optimize is the set of weights W, which means we fix our data and our labels and iteratively adjust our weights to optimize this objective function. The way an adversarial example is created is by taking the opposite approach, where we now ask how we can modify the input image, our data X, in order to increase the error in the network's prediction — to fool the network. So we're trying to perturb and adjust X in some way by fixing the weights, fixing the labels, and iteratively changing X to generate a robust adversarial attack. An extension of this was recently done by a group of students here at MIT, where they devised an algorithm for synthesizing adversarial examples that were robust to different transformations, like changes in shape, scaling, color changes, etc. And what was really cool is they moved beyond the 2D setting to the 3D setting, where they actually 3D printed physical objects that were designed to fool a neural network, and this was the first demonstration of adversarial examples that exist in the physical world, the 3D world. So here they 3D printed a bunch of these adversarial turtles, and when they fed images of these turtles to a neural network trained to classify these images, the network incorrectly classified these adversarial examples as rifles rather than turtles.
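To make this perturbation procedure a bit more concrete, here is a minimal sketch of the idea just described — hold the weights and the label fixed and instead take a gradient step on the input itself, in the direction that increases the loss. This particular one-step, sign-of-the-gradient variant is commonly known as the fast gradient sign method; the model, image, label, and epsilon values below are illustrative placeholders rather than anything specific from the lecture, and the model is assumed to output class probabilities.

import tensorflow as tf

def adversarial_example(model, image, true_label, epsilon=0.01):
    # image: batch of shape (1, H, W, C) with pixel values in [0, 1]
    # true_label: one-hot label of shape (1, num_classes)
    image = tf.convert_to_tensor(image)
    with tf.GradientTape() as tape:
        tape.watch(image)                                  # differentiate w.r.t. the input, not the weights
        prediction = model(image)
        loss = tf.keras.losses.categorical_crossentropy(true_label, prediction)
    gradient = tape.gradient(loss, image)                  # dJ/dX with W and y held fixed
    perturbed = image + epsilon * tf.sign(gradient)        # small step that increases the error
    return tf.clip_by_value(perturbed, 0.0, 1.0)           # keep the result a valid image

# iterating this step (with a projection back to a small neighborhood of the original
# image) is one way stronger, more robust attacks are typically built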
And so this just gives you a taste of some of the limitations that exist for neural networks and deep learning, and other examples are listed here, including the fact that they can be subject to algorithmic bias, that they can be susceptible to these adversarial attacks, that they're extremely data hungry, and so on and so forth. Moving forward to the next half of this lecture, we're going to touch on three of these sets of limitations and how we can push research to address some of them. Specifically, we'll focus on how we can encode structure and prior domain knowledge into designing our network architecture, we'll talk about how we can represent uncertainty and understand when our model is uncertain or not confident in its predictions, and finally how we can move past deep learning where models are built to solve a single problem and potentially move towards building models that are capable of addressing many different tasks. OK, so first we'll talk about how we can encode structure and domain knowledge into designing deep neural nets, and we've already seen an example of this in the case of convolutional neural networks, which are very well equipped to deal with spatial data and spatial information. If you consider a fully connected network as sort of the baseline, there's no sense of structure there — the nodes are connected to all other nodes and you have these dense layers that are fully connected — but as we saw, CNNs can be very well suited for processing visual information and visual data because they have this structure of the convolution operation. Recently, researchers have moved on to develop neural networks that are very well suited to handle another class of data, and that's graphs. Graphs are an irregular data structure that encodes very rich structural information, and that structural information is very important to the problems being considered. Some examples of data that can be well represented by a graph are social networks, Internet traffic, problems that can be represented by state machines, patterns of human mobility or transport, small molecules and chemical structures, as well as biological networks — there's a whole class of problems that can be represented this way. An idea that arises is how we can extend neural networks to learn and process the data that's present in these graph structures, and this follows very nicely as an extension of convolutional neural nets. With convolutional neural networks, as we saw, we have this rectangular filter that slides across an image and applies this patch-wise convolution operation to the image, and as we go across the entirety of the image, the idea is that we can apply the set of weights to extract particular local features that are present in the image, and different sets of weights extract different features. In graph convolutional networks the idea is very similar, where now, rather than processing a 2D matrix that represents an image, we're processing a graph. What graph convolutional networks use is a kernel of weights, a set of weights, and rather than sliding this set of weights across a 2D matrix, the weights are applied to each of the different nodes present in the graph, so the network is looking at a node and the neighbors of that node, and it goes across the entirety of the graph in this manner and aggregates information about a node
and its neighbors and encodes that into a high-level representation and so this is a really very brief and a high-level introduction to what a graph convolutional network is and on Friday we'll hear from a expert in this domain Alex will Chico from Google brain who will talk about how we can use graph convolutional networks to learn small molecule representations okay and another another class of data that we may encounter is not 2d data but rather 3d sets of points and this is what is often referred to as a point cloud and it's basically just this unordered cloud of points where there is some spatial depend and it represents you know sort of the depth and our perception of the 3d world and just like images you can perform classification or segmentation on this 3d point data and it turns out that graph convolutional networks can also be extended to handle and analyze this point cloud data and the way this is done is by dynamically computing a graph based on these point clouds that essentially creates a mesh that preserves the the local depth and the spatial structure present in the point cloud ok so that gives you a taste of how different types of data and different network structures can be used to encode prior knowledge into our network another area that has garnered a lot of interest in research recent years is this question of uncertainty and how do we know how confident a model is in its predictions so let's consider a really simple example a classification example and what we've learned so far is that we can use in network to output a problem a classification probability so here we're training a network to classify images of cats versus images of dogs and it's going to output a probability that a particular image is a cat or it's a dog but what happens if we feed the network an image of a horse it's still going to output a probability that that image is a cat or that it's a dog and because probabilities have to sum to one they're going to sum to one right and so this is a clear distinction between the probability the prediction of the network and how confident the model is in in that prediction a probability is not a metric of confidence and so in this case we would you could imagine it would be desirable to have our network also give us a sense of how confident it is in prediction so maybe when it sees an image of a horse it says okay this is a dog with probabilities 0.8 but I'm not confident at all in this prediction that I just made right and one possible way to accomplish this is through Bayesian deep learning and this is a really new and emerging field and so to understand this right to reiterate our learning problem is the following we're given some data X and we're trying to learn an output Y and we do that by learning this functional mapping F that's parametrized by a set of weights W in bayesian neural Nets what is done is rather than directly learning the weights the neural network actually approximates a posterior probability distribution over the weights given the data X and the labels Y and Bayesian neural networks are considered Bayesian because we can rewrite this posterior P of W given x and y using Bayes rule but computationally it turns out that actually computing this posterior distribution is infeasible and intractable so what has been done is there have been different approaches in different ways to that try to approximate this distribution using sampling operations and one example of how you can use sampling to approximate this posterior is by using dropout 
which was a concept that we introduced in the first lecture and in do in doing this you can actually obtain a metric and an estimate of the models uncertainty and to think about a little bit how this may work let's consider a convolutional Network where we have sets of weights and what is done is we perform different passes through the network and each time a pass is made through the network the set of weights that are used are stochastically sampled and so here right these are our convolutional kernels are sets of weights and we apply this dropout filter this dropout mask where any each filter some of those weights are going to be dropped out to zero and as a result of taking a element-wise multiplication between the kernel and that mask we generate these resulting filters where some of the weights have been stochastically dropped out and if we do this many times say tea times we're going to obtain different predictions from the model every time and by looking at the expected value of those predictions and the variance in those predictions we can get a sense of how uncertain the model is and one application of this is in the context of depth of estimation so the goal here is to take images and to train a network to predict the depth of each pixel in that image and then you also ask it okay provide us a uncertainty that's associated with each prediction and what you can see here in the image on the right is that there's this particular band a hotspot of uncertainty and that corresponds where to that portion of the image where the two cars are overlapping which kind of makes sense right you may we may not have as clear of a sense of the depth in that region in particular and so to to conceptualize this a bit further this is a general example of how you can ensemble different instances of models together to obtain estimates of uncertainty so let's say we're working in the context of self-driving cars right and our task is given an input image to predict a steering wheel angle that will be used to control the car and that's new the mean and in order to estimate the uncertainty we can take an ensemble of many different instances of a model like this and in the case of dropout sampling each model will have different sets of weights that are being dropped out and from each model we're going to get a different estimate of the predicted steering wheel angle right and we can aggregate many of these different estimates together and they're going to lie along some distribution and to actually estimate the uncertainty you can consider the variance right the spread of these estimates and intuitively if these different estimates are spread out really really far right to the left and to the right the model is going to be more uncertain in its prediction but if they're clustered very closely together the model is more certain more confident in its prediction and these estimates are actually being drawn from an underlying distribution and what ensemble is trying to do is to sample from this underlying distribution but it turns out that we can approximate and model this distribution directly using a neural network and this means that we're learning what is called an evidential distribution and effectively the evidential distribution captures how much evidence the model has in support of a prediction and the way that we can train these evidential networks is by first trying to maximize the fit of the inferred distribution to the data and also minimizing the evidence that the model has for cases when the 
model makes errors and if you train a network using this approach you can generate calibrated accurate estimates of uncertainty for every prediction that the network makes so for example if we were to train a regression model and suppose we have this case where in the white regions the model has training data and in the gray regions the model has no training data and as you can see a deterministic regression model fits this region this white region very well but in the gray regions it's not doing so well because it hasn't seen data for these regions before now we don't really hair too much about how well the model does on those regions but really what would be more important is if the model could tell us oh I'm uncertain about my prediction in this region because I haven't seen the data before and by using an evidential distribution our network actually generates these predictions of uncertainty that scale as as the model has less and less data or less and less evidence and so these uncertainties are also robust to adversarial perturbations and adversarial changes like similar to those that we saw previously and in fact if an input suppose an image is adversely perturbed and it's increasingly adversely perturbed the estimates of uncertainty are also going to increase as the degree of perturbation increases and so this example shows depth estimation where the more the input is perturbed the more the unknowns associated uncertainty of the network's prediction increases and so there and I won't spend too much time on this but uncertainty estimation can also be integrated into different types of tasks beyond depth estimation or regression also semantic and instant segmentation and this was work done a couple years ago where they actually used estimates of uncertainty to improve the quality of the segmentations and death estimations that they made and what they showed was compared to a baseline model without any estimates of uncertainty they could actually use these metrics to improve the performance of their model at segmentation and depth estimation okay so the final area that I'd like to cover is how we can go beyond you know I have us as users and implementers of neural networks to where we can potentially automate this approach that this pipeline and as you've hopefully seen through the course of these lectures and in the labs neural networks need to be finely tuned and optimized for the task of interest and as models get more and more complex they require some degree of expert knowledge right which hopefully some of what you've hopefully learned through this course too you know select the particular architecture of the network that's being used to selecting and tuning hyper parameters and adjusting the network to perform as best as it possibly can what Google did was they built a learning algorithm that can be used to automatically learn a machine learning model to solve a given problem and this is called Auto ml or automated machine learning and the way it works is it uses a reinforcement learning framework and in this framework there's a controller neural network which is sort of the agent and what the controller neural network does is it proposes a child model architecture in terms of the hyper parameters that that model architecture would theoretically have and then that resulting child network is trained and evaluated for a particular task say image classification and its performance is used as feedback or as reward for the controller agent and the controller agent takes this 
feedback into account and iteratively improves the resulting child network over thousands and thousands of iterations to iteratively produce new architectures test them the feedback is provided and this cycle continues and so how does this controller agent work it turns out it's an RNN controller that sort of at the macro scale considers what are the different values of the hyper for a particular layer in a generated Network so in the case of convolution and in the case of a CNN that may be the number of convolutional filters the size of these convolutional filters etc and after the controller proposes this child Network the child network is trained and its accuracy is evaluated right through the normal training and testing pipeline and this is then used as feedback that goes back to the controller and the controller can then use this to improve improve the child Network in in future iterations and what Google has done is that they've actually generated a pipeline for this and put this service on the cloud so that you as a user can provide it with a data set and a set of metrics that you want to optimum optimize over and this Auto ml framework will spit out you know candidate child networks that can be deployed for your tasks of interest and so I'd like to think use this example to think a little bit about what this means for deep learning and AI more generally this is an example of where we were Google was able to use a neural network and AI to generate new models that are specialized for particular tasks and this significantly reduces the burden on us as engineers in terms of you know having to perform a hyper parameter up optimization and choosing our architectures wisely and I think that this gets at the heart of what is the distinction between the capabilities that AI has now and our own human intelligence we as humans are able to learn tasks and use you know the analytical process that goes into that to generalize to other examples in our life and other problems that we may encounter whereas neural networks in AI right now are still very much constrained and optimized to perform well at particular individual problems and so I'll leave you with with that and sort of encourage you to think a little bit about what steps may be taken to bridge that gap and if those steps should be taken to bridge that gap so that concludes this talk [Applause]
MIT_6S191_Introduction_to_Deep_Learning
MIT_6S191_2021_Reinforcement_Learning.txt
Hi everyone and welcome back to 6.S191! Today is  a really exciting day because we'll learn about how we can marry the very long-standing field  of reinforcement learning with a lot of the very recent advancements that we've been seeing so  far in this class in deep learning and how we can combine these two fields to build some really  extraordinary applications and really agents that can outperform or achieve super human performance  now i think this field is particularly amazing because it moves away from this paradigm that  we've been really constrained to so far in this class so so far deep learning the way we've seen  it has been really confined to fixed data sets the way we kind of either collect or have can obtain  online for example in reinforcement learning though deep learning is placed in some environment  and is actually able to explore and interact with that environment and it's able to learn how to  best accomplish its goal usually does this without any human supervision or guidance which makes  it extremely powerful and very flexible as well this has huge obvious impact in fields  like robotics self-driving cars and robot manipulation but it also has really  revolutionized the world of gameplay and strategic planning and it's this really  connection between the real world and deep learning the virtual world that makes this  particularly exciting to me and and i hope this video that i'm going to show  you next really conveys that as well starcraft has imperfect information and is played  in real time it also requires long-term planning and the ability to choose what action to take from  millions and millions of possibilities i'm hoping for a 5-0 not to lose any games but i think the  realistic goal would be four and one in my favor i think he looks more confident  than __ was quite nervous before the room was much more tense this time really didn't know what to expect he's been  playing starcraft pretty much since his fight i wasn't expecting the ai to be that good  everything that he did was proper it was calculated and it was done well  i thought i'm learning something it's much better than i expected it i  would consider myself a good player right but i lost every single one of five games all right so in fact this is an example of  how deep learning was used to compete against humans professionally trained game players  and was actually trained to not only compete against them but it was able to achieve remarkably  superhuman performance beating this professional uh starcraft player five games to zero so let's  start by taking a step back and really seeing how reinforcement learning fits within respect to all  the other types of learning problems that we have seen so far in this class so the first piece and  the most comprehensive piece of learning problems that we have been exploring so far in this class  has been that of supervised learning problems so this was kind of what we talked about  in the first second and third lectures and in this domain we're basically given a  bunch of data x and we try to learn a neural network to predict its label y so this goal is  to learn this functional mapping from x to y and i like to describe this very intuitively  if if i give you a picture of this apple for example i want to train a neural network to  determine and tell me that this thing is an apple okay the next class of algorithms that we  actually learned about in the last lecture was unsupervised learning so in this case we  were only given data with no labels so a bunch of 
images, for example, of apples, and we were forced to learn a neural network or a model that represented the underlying structure in the data set. So again, in the apple scenario, we tried to learn a model that says back to us, if we show it these two pictures of apples, that these things are basically like each other. We don't know that they're apples, because we were never given any labels that explicitly tell the model that this thing is an apple, but we can tell that this thing is pretty close to this other thing that it's also seen, and it can pick out that underlying structure between the two to identify that. Now, in the last part, in RL — reinforcement learning — which is what today's lecture is going to be focused on, we're given only data in the form of what we call state-action pairs. States are the observations of the system, and the actions are the behaviors that that system, that agent, takes when it sees those states. Now, the goal of RL is very different from the goal of supervised learning and the goal of unsupervised learning: the goal of RL is to maximize the reward, or the future reward, of that agent in that environment over many time steps. Again going back to the apple example, the analog would be that the agent should learn that it should eat this thing because it knows that it will keep it alive, it will make it healthier, and you need food to survive. Like the unsupervised case, it doesn't know that this thing is an apple — it doesn't even recognize exactly what it is — all it knows is that in the past it must have eaten it, and it was able to survive longer, and because it was a piece of food it was able to become healthier, for example. Through these state-action pairs, and somewhat through trial and error, it was able to learn these representations and learn these plans. So our focus today will be explicitly on this third class of learning problems, reinforcement learning. To do that, before we start diving into the nitty-gritty technical details, I think it's really important for us to build up some key vocabulary that is very important in reinforcement learning and that is going to be really essential for us to build on later in the lecture. This is a very important part of the lecture, so I really want to go slowly through this section so that the rest of the lecture makes as much sense as possible. Let's start with the central part, the core of your reinforcement learning algorithm, and that is your agent. The agent is something that can take actions in the environment — that could be a drone making a delivery in the world, it could be Super Mario navigating a video game; the algorithm in reinforcement learning is your agent, and you could say in real life the agent is each of you. Okay, the next piece is the environment. The environment is simply the world in which the agent lives; it's the place where the agent exists and operates and conducts all of these actions. And that's exactly the connection between the two of them: the agent can send commands to the environment in the form of actions. Now, capital A, or lowercase a of t, is the action at time t that the agent takes in this environment, and we can denote capital A as the action space — the set of all possible actions that an agent can make. Now, even though I think this is somewhat self-explanatory, the set of all possible actions that an agent can make in the environment can be either discrete, chosen from a finite set of actions — in this case we can see the actions are forwards, right, backwards, or left — or it could also be continuous, for example the exact location in the environment, as a real-number coordinate, like the GPS coordinates of where this agent wants to move. So the action could be described by a categorical, discrete probability distribution, or it could be continuous — in either of these cases.
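As a concrete illustration of this vocabulary — agent, environment, action, state, and reward — here is a minimal interaction loop in code; the toy one-dimensional environment and the random "agent" below are made-up placeholders for illustration, not anything from the lecture.

import random

class ToyEnvironment:
    # a made-up 1-D world: the agent starts at position 0 and is rewarded for reaching +5
    def reset(self):
        self.position = 0
        return self.position                       # the initial state / observation

    def step(self, action):                        # action is -1 (move left) or +1 (move right)
        self.position += action
        reward = 1.0 if self.position == 5 else 0.0
        done = self.position in (5, -5)            # the episode terminates at either end
        return self.position, reward, done         # next state, reward, terminal flag

env = ToyEnvironment()
state = env.reset()
total_reward, done = 0.0, False
while not done:
    action = random.choice([-1, +1])               # an untrained agent just acts randomly
    state, reward, done = env.step(action)         # the environment responds with s_{t+1} and r_t
    total_reward += reward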
Observations are how the environment interacts back with the agent: they're how the agent can observe where in the environment it is and how its actions affected its own state in the environment. And that leads me very nicely into this next point — the state is a concrete and immediate situation in which the agent finds itself. So for example, a state could be something like the image feed that you see through your eyes; this is the state of the world as you observe it. A reward is also a form of feedback from the environment to the agent: it's the way that the environment can measure the success or failure of an agent's actions. For example, in a video game, when Mario touches a coin he wins points. From a given state, an agent will send outputs in the form of actions to the environment, and the environment will respond with the agent's new state — the next state that it achieves, which resulted from acting on that previous state — as well as any rewards that may be collected, or penalties, from reaching that state. Now, it's important to note here that rewards can be either immediate or delayed. You should think of rewards as effectively evaluating the agent's actions, but you may not actually get a reward until very late in life. For example, you might take many different actions and then be rewarded a long time into the future; that's called a very delayed reward, but it is a reward nonetheless. We can also look at what's called the total reward, which is just the sum of all rewards that an agent collects after a certain time t. So r of i is the reward at time i, and capital R of t is the return, the total reward from time t all the way into the future, until time infinity. That can be written expanded: we can expand out that summation, from r of t plus r of t plus one, all the way on into the future, so it's adding up all of those rewards that the agent collects from this point on into the future. However, it's very common to consider not just the total return as a straight-up summation, but instead what we call the discounted sum of rewards. This discounting factor, which is represented here by gamma, is multiplied by the future rewards discovered by the agent in order to dampen those rewards' effect on the agent's choice of action. Now, why would we want to do this? This formulation was created by design to make future rewards less important than immediate rewards; in other words, it enforces a kind of short-term learning in the agent. A concrete example of this would be if I offered to give you five dollars today, or five dollars in five years from today — which would you take? Even though it's both five dollars, and your reward would be the same, you would prefer to have that five dollars today, just because you prefer short-term rewards over long-term rewards.
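As a small sketch of this idea, here is one way the total discounted return R_t = r_t + gamma * r_{t+1} + gamma^2 * r_{t+2} + ... can be computed from a finite list of recorded rewards; the choice of gamma = 0.95 is an arbitrary illustration.

def discounted_return(rewards, gamma=0.95):
    # R_t = r_t + gamma * r_{t+1} + gamma^2 * r_{t+2} + ... for every time step t
    returns = [0.0] * len(rewards)
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running     # R_t = r_t + gamma * R_{t+1}
        returns[t] = running
    return returns

# a reward received three steps from now is worth gamma**3 of the same reward received immediately
print(discounted_return([0.0, 0.0, 0.0, 1.0]))     # -> [0.857375, 0.9025, 0.95, 1.0]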
And again, like before, we can expand the summation out, now with this discount factor, which typically has to be between zero and one, and the discount factor is multiplied by these future rewards as discovered by the agent. Like I said, it is reinforcing this concept that we want to prioritize short-term rewards more than very long-term rewards in the future. Now, finally, there's a very important function in RL that's going to start to put a lot of these pieces together, and that's called the Q function. Let's look at how this Q function is defined, remembering the definition of this total discounted reward, capital R of t — remember, the total reward R of t measures the discounted sum of rewards obtained since time t. The Q function is very related to that: the Q function is a function that takes as input the current state that the agent is in and the action that the agent takes in that state, and then it returns the expected total future reward that the agent can receive after that point. So think of this as: if an agent finds itself in some state in the world and it takes some action, what is the expected return that it can receive after that point? That is what the Q function tells us. Let's suppose I give you this magical Q function — it really is a magical function, because it tells us a lot about the problem. If I give you this function, an oracle that you can plug any state and action pair into, and it will tell you the expected return from your current time point t, the question is: can you determine, given the state that you're currently in, what is the best action to take? You can perform any queries on this function. The way you can do this is that ultimately you want to select the best action to take in your current state — but what is the best action to take? Well, it's simply the action that results in the highest expected total return. So what you can do is choose a policy that maximizes this future return, and that can be written simply by finding the argmax of your Q function over all possible actions that you can take in that state. So if I give you this Q function and a state that you're in, you can feed in your state with every single action, evaluate what the Q function tells you the expected total reward would be for that state-action pair, and pick the action that gives you the highest Q value — that's the best action to take in this current state. So you can build up this policy, which here we're calling pi of s, to infer this best action to take. Think of your policy now as another function that takes as input your state and tells you the action that you should execute in that state. So the strategy, given a Q function, to compute your policy is simply, from this argmax formulation, to find the action that maximizes your Q function. Now, in this lecture we're going to focus on basically two classes of reinforcement learning algorithms, in two categories: one of which will actually try to learn this Q function, Q of s and a — your state and your action — and the other will be what are called policy learning algorithms, because they try to directly learn the policy instead of using a Q function to infer the policy. In policy learning, we're going to directly infer the policy pi of s that governs what
actions you  should take this is a much more direct way of thinking about the problem but first thing we're  going to do is focus on the value learning problem and how we can do what is called q learning  and then we'll build up to policy learning after that let's start by digging a bit deeper  into this q function so first i'll introduce this game of atari breakout on the left for those who  haven't seen it i'll give a brief introduction into how the game works now the q value tells  us exactly what the expected total expected return that we can expect to see on any state in  this game and this is an example of one state so in this game you are the agent is this paddle  on the bottom of the board it's this red paddle it can move left or right and those are its two  actions it can also stay constant in the same place so it has three actions in the environment  there's also this ball which is traveling in this case down towards the bottom of the the board  and is about to hit and ricochet off of this paddle now the objective the goal of this game is  actually to move the pedal back and forth and hit the ball at the best time such that you can bounce  it off and hit and break out all of these colored blocks at the top of the board each time the ball  touches one of these colored blocks it's able to break them out hence the name of the the name of  the game breakout and the goal is to knock off as many of these as possible each time the ball  touches one it's gone and you got to keep moving around hitting the ball until you knock off all  of the of the blocks now the q function tells us actually what is the expected total return that  we can expect in a given state action pair and the point i'd like to make here is that it's  actually it can sometimes be very challenging to understand or intuitively guess what is the uh  q value for a given state action pair so even if let's say i give you these two state action pairs  option a and option b and i ask you which one out of these two pairs do you think has a higher q  value on option a we can see the ball is already traveling towards the paddle and the paddle is  choosing to stay in the same place it's probably going to hit the ball and it'll bounce back up  and and break some blocks on the second option we can see the ball coming in at an angle and the  paddle moving towards the right to hit that ball and i asked you which of these two options  state action pairs do you believe will return the higher expected total reward before  i give you the answer i want to tell you a bit about what these two policies actually  look like when they play the game instead of just seeing this single state action pair so let's  take a look first at option a so option a is this this relatively conservative option that doesn't  move when the ball is traveling right towards it and what you can see is that as it plays the game  it starts to actually does pretty well it starts to hit off a lot of the breakout pieces towards  the center of the game and it actually does pretty well it breaks out a lot of the ball a lot of the  colored blocks in this game but let's take a look also at option b option b actually does something  really interesting it really likes to hit the ball at the corner of the paddle uh it does  this just so the ball can ricochet off at an extreme angle and break off colors in the corner  of the screen now this is actually it does this to the extreme actually because it even even in  the case where the ball is coming right towards it it will move 
out of the way just so it can come back in and hit it at these extreme ricocheting angles. So let's take a look at how option B performs when it plays the game. You can see it's really targeting the side of the paddle and hitting off a lot of those colored blocks — and why? Because you can see that once it breaks out the corner, the two edges of the screen, it was able to knock off a ton of blocks in the game, because it was able to basically get stuck in that top region. Let's take another look at this: once it gets stuck in that top region, it doesn't have to worry about making any actions anymore, because it's just accumulating a ton of rewards. This is a great policy to learn, because it's able to beat the game much, much faster than option A, and with much less effort as well. So the answer to the question of which state-action pair has a higher Q value is, in this case, option B. But that's a relatively unintuitive option, at least for me when I first saw this problem, because I would have expected that not moving out of the way of the ball when it's coming right towards you would be the better action. This agent has actually learned to move away from the ball just so it can come back and hit it and really attack at extreme angles — that's a very interesting observation that this agent has made through learning. Now the question is: because Q values are so difficult to actually define — it's hard for humans to define them, as we saw in the previous example — instead of having humans define that Q-value function, how can we use deep neural networks to model this function and learn it instead? The Q value is this function that takes as input a state and an action, so one thing we could do is have a deep neural network that gets as inputs both its state and the desired action that it's considering making in that state; then the network would be trained to predict the Q value for that given state-action pair — that's just a single number. The problem with this is that it can be rather inefficient to actually run forward in time, because, remembering how we compute the policy for this model, if we want to predict what the optimal action to take in a given state is, we need to evaluate this deep Q network n times, where n is the number of possible actions that it can make at this time step. This means that we basically have to run this network many times for each time step just to compute the optimal action, and this can be rather inefficient. Instead, what we can do — which is equivalent to this idea, just formulated slightly differently — is that it's often much more convenient to output all of the Q values at once. So you input the state here, and you output basically a vector of Q values instead of one Q value: the Q value for action one, the Q value for action two, all the way up to your final action. For each of these actions, and given the state that you're currently in, this output of the network tells you the breakdown of the Q values across the different actions that could be taken. Now, how can we actually train this version of the deep Q network? We know that we want it to output these Q values, but it's not actually clear how we can train it, and doing this is actually challenging.
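Here is a minimal sketch of this second formulation — a network that takes only the state as input and returns one Q value per action. The Atari-style 84x84x4 frame-stack input and the particular layer sizes are assumptions for illustration, not the exact architecture used in the lecture.

import tensorflow as tf

num_actions = 3                                     # move left, stay, move right

q_network = tf.keras.Sequential([
    tf.keras.Input(shape=(84, 84, 4)),              # the state: a stack of recent game frames
    tf.keras.layers.Conv2D(32, kernel_size=8, strides=4, activation="relu"),
    tf.keras.layers.Conv2D(64, kernel_size=4, strides=2, activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(num_actions)              # one Q value per action, no activation
])

q_values = q_network(tf.random.uniform((1, 84, 84, 4)))   # shape (1, 3): [Q(s,a1), Q(s,a2), Q(s,a3)]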
You can think of it conceptually like this: we don't have a data set of Q values — all we have are observations, state-action-reward triplets. So to train this type of deep Q network, we have to think about what the best-case scenario is: how would the agent perform optimally, or ideally? What would happen if it took all the best actions, like we can see here? Well, this would mean that the target return would be maximized, and what we can do in this case is actually use this exact target return to serve as our ground truth, our data set in some sense, in order to train this deep Q network. What that looks like is: first we formulate our expected return if we were to take all of the best actions — the initial reward r, plus the action that we select that maximizes the expected return for the next future state, with that discounting factor gamma applied. So this is our target; this is the Q value that we're going to try to optimize towards — it's what we want our prediction to match. But now we should ask ourselves: what does our network predict? Well, our network is predicting the Q value for a given state-action pair. We can use these two pieces of information — both our predicted Q value and our target Q value — to create what we call the Q loss. This is essentially a mean squared error formulation between our target and our predicted Q values, and we can use that to train this deep Q network. So, in summary, let's walk through this whole process of deep Q learning end to end. Our deep neural network sees as input a state, the state gets fed through the network, and we try to output the Q value for each of the three possible actions here — there are three different ways that the network can play: it can either move to the left, move to the right, or it can stay constant in the same place. Now, in order to infer the optimal policy, it has to look at each of these Q values. So in this case, moving to the left — because it sees that the ball is moving to the left, it sees that, okay, if I step a little bit to the left I have a higher chance of hitting that ball and continuing the game — so my Q value, my expected total return, for moving left is 20. On the other hand, if I stay in the same place, let's say I have a Q value of 3, and if I move to the right, out of the way of the ball — in this case, because the ball is already moving towards me — I have a Q value of 0.
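To make the training signal concrete, here is a rough sketch of the Q loss just described — the target is the observed reward plus the discounted best Q value at the next state, compared against the predicted Q value with a mean squared error — together with the argmax action selection using the example Q values of 20, 3, and 0. The q_network here is the kind of model sketched above, and gamma = 0.95 is an assumed discount factor.

import tensorflow as tf

gamma = 0.95

def q_loss(q_network, state, action, reward, next_state):
    q_next = q_network(next_state)                            # Q values for every action in s'
    target = reward + gamma * tf.reduce_max(q_next, axis=1)   # r + gamma * max_a' Q(s', a')
    target = tf.stop_gradient(target)                         # the target is treated as fixed data
    q_all = q_network(state)
    q_pred = tf.reduce_sum(q_all * tf.one_hot(action, q_all.shape[-1]), axis=1)   # Q(s, a) for the action taken
    return tf.reduce_mean(tf.square(target - q_pred))         # mean squared error between target and prediction

# picking the action to execute: e.g. predicted Q values of [20, 3, 0] for (left, stay, right)
q_values = tf.constant([[20.0, 3.0, 0.0]])
best_action = int(tf.argmax(q_values, axis=1)[0])             # -> 0, i.e. move to the left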
so these are all my q values for all of the different possible actions how do i compute  the optimal policy well as we saw before the optimal policy is obtained by looking at  the maximum q value and picking the action that maximizes our q value so in this case we can  see that the maximum q value is attained when we move to the left with action one so we we  select action one we feed this back into the game engine we send this back to the  environment and we receive our next state and this process repeats all over again the next  state is fed through the deep neural network and we obtain a list of q values for each of the  possible actions and it repeats again now deepmind showed that these networks deep queue networks  could be applied to solve a variety of different types of atari games not just breakout but many  other games as well and basically all they needed to do was provide the state pictorially as an  input passing them through these convolutional layers followed by non-linearities and pooling  operations like we learned in lecture three and at the right hand side it's predicting these q  values for each possible action that it could take and it's exactly as we saw in the previous  couple slides so it picks an optimal action to execute on the next time step depending on  the maximum q value that could be attained and it sends this back to the environment to execute  and and receive its next state this is actually remarkably simple because despite its remarkable  simplicity in my opinion of essentially trial and error they tested this on many many games in atari  and showed that for over 50 percent of the games they were able to surpass human level performance  with this technique and other games which you can see on the right hand side of this plot were more  challenging but still again given how simple this technique is and how clean it is it really  is amazing that this works at all to me so despite all of the advantages  of this approach the simplicity the the cleanness and how elegant the solution  is i think it's and uh i mean above all that the ability for this solution to learn superhuman  uh policies policies that can beat humans even on some relatively simple tasks there are some  very important downsides to queue learning so the first of which is the simplistic model  that we learned about today this model can only handle action spaces which are discrete and it  can only really handle them when the action space is small so when we're only given a few possible  actions as each step it cannot handle continuous action spaces so for example if an autonomous  vehicle wants to predict where to go in the world instead of predicting to go left right or  go straight these are discrete categories how can we use reinforcement learning to learn a  continuous steering wheel angle one that's not not discretized into bins but can take  any real number within some bound of where the steering wheel angle can execute this  is a continuous variable it has an infinite space and it would not be possible in the version of q  learning that we presented here in this lecture it's also its flexibility of q learning is also  somewhat limited because it's not able to learn policies that can be stochastic that can change  according to some unseen probability distribution so they're deterministically computed from the  q function through this maximum formulation it always is going to pick the maximum the action  that maximally elevates your expected return so it can't really learn from these 
stochastic  policies on the other hand to address these we're really going to dive into this next phase of  today's lecture focused on policy gradient methods which will hopefully we'll see  tackle these remaining issues so let's dive in the key difference now  between what we've seen in the first part of the lecture and the second part that we're  going to see is that in value learning we try to have a neural network to learn our q value  q of our state given or action and then we use this q value to infer the best action to take  given a state that we're in that's our policy now policy learning is a bit different it tries  to now directly learn the policy using our neural network so it inputs the state and it tries to  directly learn the policy that will tell us which action we should take this is a lot simpler since  this means we now get directly the action for free by simply sampling straight away from this policy  function that we can learn so now let's dive into the the details of how policy learning works  and and first i want to really really narrow or sorry drive in this difference from q learning  because it is a subtle difference but it's a very very important difference so deep q networks  aim to approximate this q function again by first predicting given a state the q value for  each possible action and then it simply picks the best action where best here is described by which  action gives you the maximum q value the maximum expected return and execute that that action now  policy learning the key idea of policy learning is to instead of predicting the q values we're going  to directly optimize the policy pi of s so this is the policy distribution directly governing  how we should act given a current state that we find ourselves in so the output here  is for us to give us the desired action in a much more direct way the outputs represent  the probability that the action that we're going to sample or select should be the correct action  that we should take at this step right in other words it will be the one that gives us the maximum  reward so take for example this if we see that we predict these probabilities of these given actions  being the optimal action so we get the state and our policy network now is predicting a probability  distribution of uh we can basically aggregate them into our policy so we can say our policy is  now defined by this probability distribution and now to compute the action that we should  take we simply sample from this distribution to predict the action that we should execute in  this case it's the car going left which is a1 but since this is a probability distribution  the next time we sample we might not we might get to stay in the same place we might sample  action 2 for example because this does have a nonzero probability a probability of 0.1 now note  that because this is a probability distribution this this p of actions given our state must  sum to one now what are some of the advantages of this type of formulation over first of all  over q learning like we saw before besides the fact that it's just a much more direct way to get  what we want instead of optimizing a q function and then using the q function to create our policy  now we're going to directly optimize the policy beyond that though there is one very important  advantage of this formulation and that is that it can handle continuous action spaces so this was an  example of a discrete action space what we've been working with so far in this atari breakout game  moving left moving 
right, or staying in the center. There are three actions and they're discrete — there's a finite number of actions here that can be taken. Our action space here is representing the direction that I should move, but instead a continuous action space would tell us not just the direction but how fast, for example, as a real number, I should move — questions like that, which have an infinite number of possible answers. This could be one meter per second to the left, half a meter per second to the left, or any numeric velocity. It also tells us direction by nature, through the plus or minus sign: if I say minus one meter per second, it tells me that I want to move to the left at one meter per second; if I say positive one, it tells me I want to move to the right at one meter per second. Now, when we plot this as a probability distribution, we can also visualize this continuous action space, and we can visualize it using something like a Gaussian distribution in this case — it could take many different forms, and you can choose the type of distribution that fits best with your problem, but a Gaussian is a popular choice here because of its simplicity. So here, again, we can see that the probability of moving faster to the left is much greater than moving faster to the right, and we can actually see that the mean of this distribution — the average, the point where this normal distribution is highest — tells us an exact numerical value of how fast it should be moving: not just that it should be moving to the left, but how fast it should be moving to the left. Now let's take a look at how we can actually model these continuous action spaces with a policy gradient method. Instead of predicting the probability of taking each action given a possible state — which, since we're in the continuous domain, would be an infinite number of actions — let's assume that our output distribution is actually a normal Gaussian, and output a mean and a variance for that Gaussian. Then we only have two outputs, but they allow us to describe this probability distribution over the entire continuous space, which otherwise would have required an infinite number of outputs. So in this case, if we predict that the mean action that we should take, mu, is negative one, and the variance is 0.5, we can see that this probability distribution looks like this on the bottom left-hand side: it should move to the left with an average speed of negative one meters per second, and with some variance — so it's not totally confident that that's the best speed at which it should move to the left, but it's pretty set on that being the direction to move. For this picture we can see that the paddle needs to move to the left, and if we actually plot this distribution like this, we can see that the mass of the distribution does lie on the left-hand side of the number line. If we sample from this distribution, we can see that in this case the action that we should take — the concrete velocity that should be executed — indicates that we need to move left, negative, at a speed of 0.8 meters per second. So, again, that means that we're moving left with a speed of 0.8 meters per second. Note here that even though the mean of this distribution is negative one, we're not constrained to that exact number — this is a continuous probability distribution, so here we sampled an action that was not exactly the mean, but that's totally fine.
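Here is a minimal sketch of such a continuous policy head: the network outputs a mean and a spread for a Gaussian over the single action dimension, and a concrete action is obtained by sampling from it. The state size, layer sizes, and the use of a softplus to keep the spread positive (parameterizing a standard deviation rather than a variance) are assumptions for illustration, not the lecture's exact implementation.

import tensorflow as tf

state_dim = 32                                               # placeholder size of the state features

inputs = tf.keras.Input(shape=(state_dim,))
hidden = tf.keras.layers.Dense(64, activation="relu")(inputs)
mu = tf.keras.layers.Dense(1)(hidden)                        # mean of the action distribution
sigma = tf.keras.layers.Dense(1, activation="softplus")(hidden)   # spread, constrained to be positive
policy_network = tf.keras.Model(inputs, [mu, sigma])

state = tf.random.uniform((1, state_dim))                    # stand-in for the observed state
mean, std = policy_network(state)
action = mean + std * tf.random.normal(tf.shape(mean))       # sample a ~ N(mu, sigma^2), e.g. -0.8 m/s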
that's totally fine and that really highlights that the difference  here between the discrete action space and the continuous action space this opens up a ton  of possibilities for applications where we do model infinite numbers of actions and again  like before like the discrete action case this probability distribution still has all of the  nice properties of probability distributions namely that the integral of this computed  probability distribution does still sum to one so we can indeed sample from it which  is a very nice confirmation property okay great so let's take a look now of how  the policy gradients algorithm works in a concrete example let's start by revisiting this  whole learning loop of reinforcement learning again that we saw in the very beginning of  this lecture and let's think of how we can use the policy gradient algorithm that we have  introduced to actually train an autonomous vehicle using using this trial and error policy gradient  method so with this case study study of autonomous vehicles or self-driving cars what are all of  these components so the agent would be our vehicle it's traveling in the environment which is the  the world the lane that it's it's traveling in it has some state that is obtained through  camera data lidar data radar data et cetera it obtains sorry it makes actions what are the  actions that it can take the actions in this case are the steering wheel angle again this is a  concrete example of a continuous action space you don't discretize your steering wheel angle into  unique bins your steering wheel angle is infinite in the number of possibilities it can take and  it can take any real number between some bounds so that is a continuous problem that is a  continuous variable that we're trying to model through this action and finally uh it  receives rewards in the in the form of the distance it can travel before it needs to  be uh needs some form of human intervention so let's now dive in now that we've  identified all of those so how can we train this car using policy gradient network in this  context here we're taking self-driving cars as an example but you can hopefully see that we're  only using this because it's nice and intuitive but this will also apply to really any domain  where you can identify and set up the problem like we've set up the problem so far so let's  start by initializing our agent again our agent is the vehicle we can place it onto the road in  the center of the road the next step would be to let the agent run in the beginning it doesn't  run very well because it crashes and well it's never been trained before so we don't expect it  to run very well but that's okay because this is reinforcement learning so we run that policy until  it terminates in this case we mark terminations by the time that it it crashes and needs to be  taken over along that uh what we call rollout we start to record all of the state action pairs  or sorry state action reward pairs so at each step we're going to record where was the robot what  was it state what was the action that it executed and what was the reward that it obtained  by executing that action in that state now next step would be to take all of  those state action reward pairs and actually decrease the probability of taking  any action that it took close to the time where the terminated determination happened so  close to the time where the crash occurred we want to decrease the probability of making  any of those actions again in the future likewise we want to increase the 
probability  of making any actions in the beginning of this episode note here that we don't necessarily  know that there was something good in this first part of the episode we're just assuming  that because the crash occurred in the second part of the episode that was likely due to an  action that occurred in that second part this is a very unintelligent if you could say algorithm  because it that's all it assumes it just tries to decrease the probability of anything that resulted  in a low reward and increase the probability of anything that resulted in a high reward it doesn't  know that any of these actions were better than the other especially in the beginning because  it doesn't have that kind of feedback this is just saying that we want to decrease anything that  may have been bad and increase anything that would have been good and if we do this again we can see  that the next time the car runs it runs for a bit longer and if we do it again we do the same thing  now on this rollout we decrease the probability of actions that resulted in low reward and increase  the probability that resulted in positive or high reward we reinitialize this and we run it  until completion and update the policy again and it seems to run a bit longer and we  can do this again and we keep doing this until eventually it learns to start to follow the  lanes without crashing and this is really awesome i think because we never taught this vehicle how  anything well we never taught it anything about lanes we never taught it what a lane marker is  it learns to avoid lanes though and not crash and not crash just by observing very sparse  rewards of crashing so it observed a lot of crashes and it learned to say like okay i'm not  going to do any of these actions that occurred very close to my crashes and just by observing  those things it was able to successfully avoid lanes and survive in this environment longer and  longer times now the remaining question is how we can actually update our policy on every  training iteration to decrease the probability of bad events and increase the probability of these  good events or these good actions let's call them so that really focuses and narrows us into points  four and five in this this training algorithm how can we do this learning process of decreasing  these probabilities when it's bad and increasing the probabilities when they're good let's  take a look at that in a bit more detail so let's look at specifically the loss function  for training policy gradients and then we'll dissect it to understand exactly why this works  so this loss consists of really two parts that i'd like to dive into the first term is this log  likelihood term the log likelihood of our pro of our our policy our probability of an action  given our state the second term is where we multiply this negative log likelihood by the total  discounted reward or the total discounted return excuse me r of t so let's assume that we get a  lot of reward for an action that had very high log likelihood this loss will be great and it  will reinforce these actions because they resulted in very good returns on the other hand if the  reward is very low for an action that it had high probability for it will adjust those probabilities  such that that action should not be sampled again in the future because it did not result in  a desirable return so when we plug in these uh this loss to the gradient descent algorithm to  train our neural network we can actually see that the policy gradient term here  which is 
highlighted in blue which is where this this algorithm gets  its name it's the policy because it has to compute this gradient over the policy part of this  function and again uh just to reiterate once more this policy gradient term consists of these  two parts one is the likelihood of an action the second is the reward if the action is very  positive very good resulting in good reward it's going to amplify that through this gradient  term if the action is very is very probable or sorry not very probable but it did result in  a good reward it will actually amplify it even further so something that was not probable before  will become probable because it resulted in a good return and vice versa on the other side as well  now i want to talk a little bit about how we can extend some of these reinforcement  learning algorithms into real life and this is a particularly challenging question  because this is something that has a particular interest to the reinforcement learning field  right now and especially right now because applying these algorithms in the real world  is something that's very difficult for one reason or one main reason and that is this step  right here running a policy until termination that's one thing i touched on but i didn't spend  too much time really dissecting it why is this difficult well in the real world terminating means  well crashing dying usually pretty bad things and we can get around these types of things  usually by training and simulation but then the problem is that modern simulators do not  accurately depict the real world and furthermore they don't transfer to the real world when  you deploy them so if you train something in simulation it will work in simulation it will  work very well in simulation but when you want to then take that policy deployed into the real  world it does not work very well now one really cool result that we created in my lab was actually  developing a brand new photo realistic simulation engine specifically for self-driving cars that i  want to share with you that's entirely data driven and enables these types of reinforcement learning  advances in the real world so one really cool result that we created was developing this type of  simulation engine here called vista and allows us to use real data of the world to simulate brand  new virtual agents inside of the simulation now the results here are incredibly photorealistic as  you can see and it allows us to train agents using reinforcement learning in simulation using exactly  the methods that we saw today so that they can be directly deployed without any transfer learning  or domain adaptation directly into the real world now in fact we did exactly this we placed agents  inside of our simulator train them using exactly the same policy grading algorithm that we learned  about in this lecture and all of the training was done in our simulator then we took these policies  and put them directly in our full-scale autonomous vehicle as you can see in this video and on the  left hand side you can actually see me sitting in this vehicle in the bottom of the interior  shot you can see me sitting inside this vehicle as it travels through the real world completely  autonomous this represented the first time at the time of when we published these results the  first time an autonomous vehicle was trained using rl entirely in simulation and was able to be  deployed in the real life a really awesome result so now we have covered the fundamentals behind  value learning as well as policy gradient 
reinforcement learning approaches i think now it's really important to touch on some of the really remarkable deep reinforcement learning applications that we've seen in recent years and for that we're going to turn first to the game of go where reinforcement learning agents were put to the test against human champions and achieved what at the time was and still is an extremely exciting result so first i want to provide a bit of an introduction to the game of go this is a game played on a 19 by 19 grid between two players who hold either white pieces or black pieces and the objective of this game is to occupy more board territory with your pieces than your opponent now even though the grid and the rules of the game are very simple the problem of solving the game of go and doing it well enough to beat the grandmasters is an extremely complex problem and that's because the number of possible board positions the number of states that you can encounter in the game of go is massive with the full-size board there are more legal board positions than there are atoms in the universe now the objective here is to train an ai a machine learning or deep learning algorithm that can master the game of go not only to beat the existing gold standard software but also to beat the current world human champions now in 2016 google deepmind rose to this challenge and developed a reinforcement learning based pipeline that defeated champion go players and the idea at its core is very simple and follows along with everything that we've learned in this lecture today so first a neural network was trained by watching a lot of human expert go players and basically learning to imitate their behaviors this part was not using reinforcement learning this was supervised learning you basically got to study a lot of human experts then they used these pre-trained networks to play against reinforcement learning policy networks which allows the policy to go beyond what the human experts did by playing against itself and actually achieve superhuman performance in addition to this one of the key tricks that made this possible was the use of an auxiliary network which took the state of the board as input and predicted how good of a state this was given this network the ai could then hallucinate essentially different board positions and actions that it could take and evaluate how good these actions would be given these predicted values this essentially allowed it to traverse and plan its way through different possible actions that it could take based on where it could end up in the future finally a recently published extension of these approaches from 2018 called alpha zero only used self-play and generalized to three famous board games not just go but also chess and shogi and in these examples the authors demonstrate that it was actually not necessary to pre-train these networks from human experts but instead they optimized them entirely from scratch so this is a purely reinforcement learning based solution but it was still able to not only beat the humans it also beat the previous networks that were pre-trained with human data now as recently as only last month the next breakthrough in this line of work was released with what is called muzero where the algorithm now learned to master these environments without even knowing the rules i think the best way to describe muzero is to compare and contrast its abilities with those previous advancements that we've already discussed today so we started this discussion with alphago which demonstrated superhuman performance on go using self-play and pre-training these models on human grandmaster data then came alphago zero which showed us that even better performance could be achieved entirely on its own without pre-training from the human grandmasters but instead directly learning from scratch then came alpha zero which extended this idea even further beyond the game of go and also into chess and shogi but still required the model to be given the rules of the games in order to learn from them now last month the authors demonstrated superhuman performance on over 50 games all without the algorithm knowing the rules beforehand it had to learn them as well as actually learning how to play the game optimally during its training process now this is critical because in many scenarios we do not know the rules beforehand to tell the model sometimes the rules or the dynamics of the environment are unknown objects may interact stochastically or unpredictably and we may also be in an environment where the rules are simply too complicated to be described by humans so this idea of learning the rules of the game or of the task is a very very powerful concept and let's actually walk through very briefly how this works because it's such an awesome algorithm but again at its core it really builds on everything that we've learned today so you should be able to understand each part of this algorithm we start by observing the board's state and from this point we perform a tree search through the different possible scenarios that can arise so we take some actions and we look at the next possible states that can arise but now since we don't know the rules the network is forced to learn the dynamics model of how to do this search to learn what the next states could be given the state that it currently sees itself in and the action that it takes this search gives us a probability of executing each of these possible actions based on the value that it can attain through this branch of the tree and it uses this to plan the next action that it should take this is essentially the policy network that we've been learning about but amplified to also incorporate this tree search algorithm for planning into the future now given this policy network it takes this action and receives a new observation from the game and repeats this process over and over again until of course the game finishes this is very similar to how we saw alpha zero work but now the key difference is that the dynamics model used at each step of the tree search is entirely learned and this greatly opens up the possibilities for these techniques to be applied outside of rigid game scenarios so in game scenarios we do know the rules very well and we could use them to train our algorithms better but in many scenarios this type of advancement allows us to apply these algorithms to areas where we simply don't know the rules and where we need to learn the rules in order to play the
game or simply where the rules are much harder to define which in the real world is exactly the case for many of the interesting scenarios so let's briefly recap what we've learned in the lecture today we started with the foundations of deep reinforcement learning we defined what agents are what actions are what environments are and how they all interact with each other in this reinforcement learning loop then we looked at a broad class of q learning problems and specifically the deep q network where we try to learn a q function given a state and action pair and then determine a policy by selecting the action that maximizes that q function and finally we learned how instead of optimizing the q value or the q function we could learn to directly optimize the policy straight from the state and we saw that this has really impactful applications in continuous action spaces where q functions or this q learning technique is somewhat limited so thank you for attending this lecture on deep reinforcement learning at this point we'll move on to the next part of the class the software lab focused on reinforcement learning where you'll get some hands-on experience applying these algorithms all by yourself specifically focusing on the policy gradient algorithm in the context of a very simple example of pong as well as more complex examples you'll actually build up the body and the brain of the agent and the environment from scratch and you'll really get to put together a lot of the ideas that we've seen today in this lecture so please come to the gather town if you have any questions and we'd be happy to discuss questions on the software lab specifically as well as any questions on today's lecture we look forward to seeing you there thank you
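To make the policy gradient update described above concrete, here is a minimal sketch of the loss and training step in TensorFlow for a discrete action space like the breakout example. This is an illustrative sketch only, not the exact lab code: the `model` mapping states to action logits, the rollout arrays of states, actions and rewards, and the hyperparameters are all assumed placeholder names.

```python
import numpy as np
import tensorflow as tf

def discount_rewards(rewards, gamma=0.95):
    # total discounted return R_t = sum_k gamma^k * r_(t+k), computed backwards over one rollout
    discounted = np.zeros_like(rewards, dtype=np.float32)
    running_sum = 0.0
    for t in reversed(range(len(rewards))):
        running_sum = rewards[t] + gamma * running_sum
        discounted[t] = running_sum
    # normalizing the returns (zero mean, unit variance) is a common trick to stabilize training
    return (discounted - discounted.mean()) / (discounted.std() + 1e-8)

def policy_gradient_loss(logits, actions, returns):
    # -log pi(a_t | s_t): negative log likelihood of the actions that were actually taken
    neg_log_prob = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=actions, logits=logits)
    # weight each action's negative log likelihood by its discounted return, so actions
    # followed by high reward become more probable and the rest become less probable
    return tf.reduce_mean(neg_log_prob * returns)

def train_step(model, optimizer, states, actions, discounted_returns):
    # one update of the policy network on a recorded rollout of (state, action, return) tuples
    with tf.GradientTape() as tape:
        logits = model(states)  # [batch, num_actions] for a discrete action space
        loss = policy_gradient_loss(logits, actions, discounted_returns)
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss
```

For a continuous action space like the steering angle example, the network would instead output the mean and variance of a gaussian, and the loss would use the negative log probability of the sampled action under that gaussian, weighted by the same discounted return.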
MIT_6S191_Introduction_to_Deep_Learning
MIT_6S191_Automatic_Speech_Recognition.txt
um yeah well thanks i'll i i figured i would take a few minutes to just set the context of why rev cares about speech recognition um and give a little bit of history overview of rev and then i'll let jenny explain the cool stuff and the technical you know aspects of our work um so at rev our founding mission has always been to create work-at-home jobs for people powered by ai um and so what rev is is a double-sided marketplace that enables people to work from anywhere in the world and they the work they do is they transcribe they caption or they subtitle media i sometimes call it uber for transcription right and so imagine before rev um the the if you wanted anything to be transcribed you'd have to go to like a website like up upwork or fiverr or maybe even craigslist you know find a transcriber make sure they're good at what they do negotiate a price and so on and you know you send your audio and you kind of hope that that they do a good job there weren't many very good website catering the single user for transcription today when you know now that we've built rev you know you you can as a single user just come to rev drop a file you know in a gui put your credit card and a few minutes later you get a transcript um and so we really set out to turn this cumbersome you know process into some more of a magical experience you know and so everything is hidden for you and you just give your audio you get a transcript back and today rev has over 170 000 customers um and more importantly we create work at a at home jobs for over 60 000 people we call them reverse and the result of you know having this amazing marketplace is that we transcribe we transcribe over 15 000 hours of media a week which if you think about it is like producing training data for asr right so if you're vaguely familiar with asr one of the most classic data set is called libre speech and it's about a thousand hours and most you know people in the research community uses you know data sets that are around a thousand two thousand you know um there are a few bigger data sets nowadays that are coming out uh but 15 000 hours of data every week is is you know like a massive amount um so you know quickly why do we care about asr at rev it should be a bit obvious but you know as a marketplace that that does caption subtitles and transcription you know um powering our reverse with ai means using speech recognition right so um our biggest customer is our internal customer we run every file that we get at rev through a speech recognition engine to produce a first draft so that the transcribers can go ahead and kind of fix it and you know work with work with that draft um but we also offer our api externally on revi um as a way for people to build their own voice applications like if you know a company called loom or the script you know both great companies powered by revi um and as now jenny's showing this this slide you know and we've accumulated a lot of data over the years uh 200 minutes is is like three 200 million minutes is is over three million hours of transcribed data so we have this massive amount of data and the question really becomes how do we use that data to create you know world-class asr models and uh i'm happy to hand it over to jenny and to show you some technical uh aspects of rev i hope you enjoy it thanks miguel um very excited to be here today i actually did my phd at mit i graduated in 2020 and i've been at rev since then and really my main project since coming to rev has been developing our new end-to-end deep 
learning asr model that we're very excited to have in beta release right now um so today what i'm going to be walking through first i'm just going to talk a little bit about um the performance of this model and sort of why we're so excited about it but then i'll i'll go back to the beginning and sort of talk about the development of this kind of model and the modeling choices that go into end-to-end asr and especially the big learning experience it's been for me to go from an academic setting at mit where the models i trained were just for me to use and test and ultimately you know report their performance versus um developing a model that customers are going to actually use and sort of all the extra things that go into that so this model that we're releasing now is we're calling it version two of asr at rev um so version one was what's called a hybrid architecture it had some deep neural network components in it but also a lot of other things going on based on you know probably 30 40 years of research into really specific aspects of the speech recognition problem and deep learning has really come along and blown a lot of that out of the water it's pretty incredible so we're really excited to um have this model now and and be able to share it so as i said the first thing i wanted to talk about um is um just sort of the the performance of this new model that we're releasing um these are results on a data set that we've actually released this is just test data so it's basically a benchmark that we can use to compare asr systems against each other and this data all comes from earnings calls from public companies and we think it's more representative than what's out there sort of in the academic community it's more representative of the types of data that we actually see on rev and you know what we're interested in transcribing for our customers and the y-axis of this chart is word error rate and that's really the standard metric for asr the way we measure our performance so for this case we took all of these earnings calls and we actually submitted them to rev.com to get gold standard transcripts from human transcribers and we use those as our references and then we compare the asr output to those and word error rate as a metric you can probably guess since it has error in the name is measuring how many errors that we make and so lower is better in this case we want to make as few errors as possible in our asr system so what i'm showing here is a comparison of several of our competitors that have open apis that you can run audio through and get transcripts back from and then our hybrid model from rev which was version one of our asr and then end to end our new v2 asr system so the first thing i wanted to highlight on the left hand side is the overall word error rate of our models um so because of the amazing data resources that we have at rev and i'll talk a little bit more about how we use that data in a minute but because of the really incredible resources that we have we were already getting really great performance out of the hybrid architecture that we had previously but now with end to end we're able to get you know additional significant improvements on top of the really nice performance that we already had the other thing i wanted to highlight on this chart so the rest um other than the sort of overall category the the rest of the categories shown on this chart are different types of entities that occur in our transcripts and so we label those entities and then calculate word error 
rate just on those words to get a sense of how well we're performing on different types of words um i wanted to highlight in particular the org entity which stands for organization so this is essentially company names and the person entities so these are both things that come up frequently in earnings calls and i would say arguably are the most important words in the transcript getting these words right is really imperative for to make it so that someone who's reading the transcript can get you know the same information out of the transcript as someone who's listening um to the audio um so what you can see is that you know we are doing a really good job relative to our competitors on these words which is exciting for us um but you can also see that these types of words are definitely some of the hardest ones to get right and our error rate is still quite high overall so i just wanted to illustrate you know speech recognition it's definitely even in english it's definitely not a solved problem and we certainly have plenty of work we can keep doing to try to improve on this and the next set of results that i wanted to show um are on a test benchmark set that we did not create but that's an open source data set designed to help um research groups understand the bias in different asr systems so here we have our end to end engine compared against just a couple of our competitors and the data has been broken out into different nationalities according to the speakers and so this is something that's really important to us and something that we make sure to track and that we're constantly trying to improve on um so here we can see we're definitely doing a good job on what i would call the maybe most common accents that you might expect to see you know in a us-based company we certainly get a lot of data from the us but also you know canada england australia our large english-speaking companies and we do well sorry not companies countries um and we do well uh on data from those nationalities um one thing that's really interesting about end-to-end models is that it can be hard to really track down and understand why the model does well on certain particular things and not well on other things so one thing that i think is interesting in this data is that it turns out we do quite well on scottish accents and very poorly on irish accents so bias is definitely something that we are constantly working on and trying to improve on and that our research team is very interesting is very interested in um so i just wanted to highlight again you know we're very excited about the performance of our models but there's always always work to do in terms of trying to improve them so now that i've talked a little bit about model performance i want to go back and start from the beginning and talk about the development of an end-to-end model for speech recognition and a little bit about the the process we went through in terms of um you know making making modeling choices and trying to get the best results that we could so i hope you've learned over the course of this week in this class that data is really the foundation of any machine learning model um without without data there's not a huge amount you can do as miguel talked about earlier luckily for rev data is something we have in abundance we're very lucky on that front and for speech recognition in particular when you think about it in comparison to other say language tasks something like language modeling or machine translation there's the aspect of learning to 
model how text is generated which is a difficult problem in and of itself and speech recognition we have to our models have to be able to do that but given that we have audio input the models also have sort of an extra level of difficulty in terms of having to learn what information in the audio signal is important and what information is essentially irrelevant to our speech recognition task one thing that's very different about developing models at rev or i think really anywhere in industry versus in the academic setting is that academic literature and the benchmarks that people use well actually already mentioned libra speech those models tend to cover really relatively small domains where all of the audio seen both at training and test time was you know recorded in relatively narrow conditions for us our goal is really to produce a model that can handle anything our customers want to throw at it so any audio that gets submitted to rev.com we want to be able to do a good job transcribing it and that's a much larger and more difficult problem than you tend to see in academia so miguel showed the slide earlier with sort of the the tip of the iceberg of the data that we're actually using to train our models currently versus the data that we have access to um the most important data selection we do before we train our models is that we've decided as a team at least for the moment to only use what we call verbatim transcripts for model training um the verbatim product on rev.com asks transcriptionists to transcribe literally everything that's said in an utterance so this includes um things like filler words disfluencies any errors things like that everything that gets said gets transcribed most of the data that comes through rev.com gets what's called non-verbatim transcription um where transcriptionists um have the option and the ability to sort of make small corrections um with the goal of making the transcript as readable as possible but for us for asr training it's very important that we get um everything that's said um several of our customers miguel mentioned both loom and descript actually um have as part of their product that they flag disfluencies and filler words and in some cases like in descript i believe you can automatically remove those from the audio based on um our asr output that says where they are in the audio signal um so it's very important to us that we get those correct the next piece of preparing to train a speech recognition model is to actually break the input down into segments that we can train on um our audio data that we get is often long files so it could be 20 minutes could be several hours um so long files of audio and transcripts and it's not realistic to you know feed one whole 20 minute audio and the transcript for it into a deep learning model and try to you know back propagate through that entire transcript typically what you see again in academia is data sets broken up into single sentences both at training time and test time for us at test time we don't know anything about segmentation and we have to essentially segment the audio arbitrarily we do some what's called voice activity detection to try to segment the audio around pauses but um people don't always pause where you think they will um and so we've found that actually we can get the best results from our models if at training time we split up our training data into essentially arbitrary segments of different lengths um and also including multi-speaker segments so having um having some segments 
that go from one speaker into another speaker that actually gets us the best results in terms of speech input to um to these models speech or i should say audio is you know one-dimensional signal that's high frequency so typically we see 16 kilohertz audio which means 16 000 samples per second where each sample is just one number it is possible to feed that to a neural network and essentially learn features from it but we find that everything is sort of simpler and easier to handle if we first do some pretty simple signal processing to turn our audio into a spectrogram like you see in the bottom image so what we use as input is a series of vectors we have one vector typically every 10 milliseconds and the vector has the energy in the different frequency bins in our signal and it is possible also to think of that signal as more of an image like you see here um and i'll talk a little bit about how some of the sort of low level processing we do in the neural network has some commonalities with what you might have seen in networks for image recognition as well the next piece of the model i think you guys might have seen this before sort of in the language modeling context so i'll go over it quickly but a really important piece of the model is deciding how we're going to break up the text data in order to have the model generate text output so one option is to produce output word by word another option would be to break everything up into characters and produce characters one at a time um if you go the words option um you tend to have sparsity problem in that you have to in that your model can only generate words that it saw during training and realistically probably can only generate words it saw multiple times during training so a lot of you know rare rare words um will just sort of be inaccessible to your model if you go that way with characters we don't have a sparsity problem there's a very small set of characters and the model will definitely learn to generate all of them um but now you have often very long output sequences and it becomes much harder for the model to learn um sort of longer range dependencies between words um so like in the in the language model case you know often it's important um when you're generating a word to look look back you know maybe several words maybe back to the beginning of a sentence and in the character case that can be very far away in terms of the length of the sequence um so what the field seems to have kind of settled on is the use of what are called subword units or wordpiece units um so here we break words up um sometimes some of our units are whole words but the words can also be broken up into units that can be single characters or longer than single characters in speech recognition we generally use the same techniques for this that are used in language modeling or like machine translation so either what's called bite pair encoding or a unigram model for this um they're both a bit heuristic i would say but they tend to work really well in practice and this is sort of what the field has settled on uh jenny before we move uh ben had a question about uh some of the design the the decision process around using mel scale male frequency scale versus other other approaches sure yeah happy to jump into that a little bit um this is definitely something that is um sort of left over from older speech recognition research um the mel scale the the way these filter banks are um are set up is designed to sort of mimic actually the way the human ear processes 
audio so humans actually can do a much better job of distinguishing between lower frequencies than higher frequencies and so this is intended to reflect that as i said there are some neural network models that can take a raw audio signal and essentially learn these filters as part of end-to-end model training um my understanding is that for the most part they tend to learn filters that look kind of like these ones um and i haven't seen results where that really ends up adding any extra accuracy from just doing the signal processing up front um so for us we just find it much simpler to to do it in signal processing rather than include it in the network nice is that cool thanks uh actually the just forever the question was was uh around now scale versus bark scale and i i've actually never heard of anybody using bark scale and speed tracks i i don't know exactly uh why that would be but i'll look it up after this talk yeah go for it i'm i'm also not sure i've i've heard of it but i think mel scale is definitely standard for speech recognition um it seems to be what everyone uses all right cool um so moving on to the actual deep learning modeling choices um i wanted to start with the encoder decoder model with attention again because i believe or hope that you guys might have already seen this a little bit in a previous lecture and the reason why you might have seen it before is because this is a very standard generic model that really could be applied to almost any sequence to sequence problem um and see speech recognition is no exception this works very well for speech recognition so this is um you know at a really high level what our model looks like and just really the key like thing to remember about these models um is the auto-regressive decoder um so these models produce output one unit at a time um and each output as it's produced is fed back into the model to bias what the next output will be so another way to think about it if you've been exposed to like neural language models but not this kind of model before is that essentially the decoder is a neural language model but it's also conditioned on embeddings of the input audio so for speech recognition there are just a few um basic choices to make about these architectures as i said earlier you can sort of think about our speech features as an image and we actually do a first embedding layer in our encoder that looks a lot like sort of the the low low level layers of like a vgg network for image recognition if anyone has seen seen that but basically we just do some convolutional layers to pull out low level features from our speech speech features here in the embedding we find that it's also useful to just do some down sampling as i mentioned earlier typically we we use 10 millisecond frames as our speech features um so that's 100 frames per second um so the output sequence is much much much longer sorry the input sequence is much much much longer than the output sequence um so it just makes things simpler to sort of down sample everything up front so we typically down sample by a factor of four and then if you're using a transformer layer as i'll talk about in the next part um this is where we also add relative positional encodings um and this helps the um the transformers it gives it you know more information about where in the sequence each input feature originally came from um and then as i said there's a choice about what kind of actual layer to use recurrent neural networks were very standard until really very recently it's 
been a very quick change towards transformers but transformers are definitely very popular and they're very effective in speech as they are in other arenas like language modeling and you know this nice feature about transformers that they're actually quite efficient at training time um is definitely a plus you know if you're training really big models on a lot of data which is what we're doing and having more efficient training is definitely nice we actually use something that's called a conformer i think i've only seen it in the speech recognition context but i wouldn't be surprised if this is something people are using for other types of problems it's pretty simple it just has you know this extra convolutional layer um sort of stuck after the self-attention layer of um of the transformer and these have been shown to be sort of a bit more efficient in terms of being able to get slightly better performance out of the same number of parameters or similar performance out of fewer parameters compared to a pure transformer model so attention-based asr models um ha work really well um definitely it's been a little while now it feels like a long time in sort of the deep learning world but really not that long since um they sort of officially surpassed the older hybrid architecture in terms of performance and that was with recurrent neural network models so with transformer models now we see even better performance out of these models um like really some really impressive numbers that i think people thought we might never hit in terms of accuracy um but there's a big problem here um and one that i was actually not really aware of i wrote my entire phd thesis on attention-based asr models but now that i'm at rev i realized that these models are just not practical for commercial applications so it's very hard to have you know a big enough model that gets good performance that's also fast enough to use for inference for us we offer two different products with our asr models we have streaming which means like live captioning so it's very important that you know the transcription process be fast enough to keep up with audio uh but even for offline speech recognition where speed is slightly less important um you know there's still trade-offs to be made related to compute costs um and whether um you know it really makes sense to offer a product at a price that customers are willing to pay um so ultimately these models are great but we actually can't use them at rev in production so um what do we do instead it turns out that there's this older algorithm called connectionist temporal classification or ctc um that's actually very well suited to asr and performs the backbone i think of most of the industry speech recognition systems that you'll see and certainly is the backbone of ours so ctc is not a generic sequences sequence algorithm it only works for certain types of problems in particular it's really best when the alignment between your input and output is monotonics so for speech recognition we definitely meet this criteria but something like translation where depending on your languages the alignments could be kind of all over the place would not work well with ctc another sort of criteria is that the output sequence should be the same length or shorter than the input sequence again this is perfect for asr our input sequences are very long relative to our output sequences but again something like translation you don't know necessarily whether the input or output sequence will be longer for any given 
example and ctc is um it's not a model architecture exactly it's actually sort of a loss function and a decoding algorithm that sits on top of a deep learning model so the main properties of the models that we use for ctc um is that the first one is that they are not auto regressive so unlike um the model we saw previously where our outputs are produced one at a time and that really contributed to um sort of the the slowness in the inference process um here we don't have any of that feedback feedback and the outputs are generated sort of all at once out of the model um and some of the probability calculations i'm going to talk about on the next slide are um dependent on the assumption that the outputs are conditionally independent given the inputs the other thing you need from a model to be used for ctc is a softmax output layer um so this is the same thing you'd see with an encoder decoder model with attention you have the softmax layer and that gives you um each output is a probability distribution over your output vocabulary so like your list of characters or sub units for ctc we add one extra symbol to our output vocabulary which is a blank symbol and i'll talk on the next slide about how we're going to use that so with the encoder decoder model which again i'm hoping you saw a little bit in the past but essentially you can just calculate the probability of your output sequence um by uh summing up the log probabilities that you have in each of your softmax outputs for each time step so here we can do the same thing assuming that we have an output sequence that's the same length as our model outputs our issue is that typically what we want to calculate is the probability of a label sequence that is shorter in the asr case much shorter than the output sequence so the way we do this is we say the probability of the sequence z the shorter sequence is simply the sum of the probability of all of the longer sequences that reduce to z and what i mean by reduce um so y reduces to z if y and z are the same once you remove all of the blanks and repeats from y so i have a couple examples down here um if our desired output sequence say we're using characters is three characters long it's c-a-t uh but our model outputs um are four characters long um then here are a couple examples of four character output sequences that reduce to the three character output sequence c a t so for this simple example you could actually just write out all of the different y's that reduce to this labeling um and you could calculate their probabilities individually and just add them up but as these sequences get large that's um very inefficient um there's a lot of redundancy in the different sequences and just the the length of the list of probabilities that you need to add up gets very long um so what's nice about ctc is it comes with a really elegant dynamic programming algorithm um that lets us very efficiently calculate this probability of z given x and also is can be easily modified to be used in decoding with an algorithm that's called ctc prefix beam search i'm not going to get into the details on this but i actually think this original paper is really nice and i would definitely recommend anyone who's interested go back and read it so ctc in terms of performance it's definitely not as good as our encoder decoder models um but it can be better than the hybrid models and at least in the right conditions so we're moving in the right direction um in particular recent advances in terms of transformer or conformer 
models and large data makes ctc work pretty well and what's really nice is that the decoding or inference is very fast so it's even faster than the hybrid model and it's much faster than the attention-based model so now we've found something that we actually could put into production um i think i'm gonna stop for a second just ask what are we thinking about timing i know we started a tiny bit late um just wanted to check in yeah that's that's i don't know how much time you have sort of in your deck but we can take another five to ten minutes or so that's okay okay great i'll try to move a little bit quickly but i think that's very doable thanks awesome um so yeah i think i think ctc is just a really good reminder that um even though you know these deep learning models are incredibly powerful and the results are can be really amazing um it's not always the best idea to just choose the most powerful model possible and throw as much data at it as possible that we can still get you know real benefit from um you know our computer science fundamentals and thinking a little bit more deeply about the problem um i think that's always a good lesson um in this context so despite um the ctc model being um you know reasonable to use like i said it can be better than the hybrid models and it is faster we still want to see if we can get closer in terms of accuracy to the performance of the encoder decoder models while still keeping the efficiency reasonable um for um for inference that we could actually put it into production so there are a bunch of different ways we can do this the first one is to add an externally trained language model so ctc the way the beam search works it's actually very easy to take scores from a language model and add them in technically this is no longer an end to end model because now we have a language model that's trained separately but in practice it does work well so we can use any kind of language model whether it's a neural language model or an n-gram model sort of an older statistical model and that model can be trained either to predict um words or sub words if it's some word level it should be the same vocabulary as as the asr model itself and these combine well there is some cost obviously to getting the probabilities out of the language model um so there's some trade-off here between accuracy and um and speed or compute cost um you know if you throw you know a humongous transformer language model in there it definitely is going to slow things down so it's still um there's still some some work there to understand you know what the trade-offs are and what what makes the most sense if we want to stick with a pure end-to-end model where all of the parameters are trained together um it's actually possible to basically add a language model to ctc as part of a deep neural network model and this is called transducer i think i most commonly see it called rnnt even if there are no rnn's in it anymore um it was rnnt when it was originally introduced but definitely as i said transformers have kind of taken over um but anyway this model here the prediction network the sort of green box in the diagram on the bottom left is essentially a language model so it takes um it takes previous outputs from the model and updates um updates this hidden embedding based on those previous outputs but this can be trained with the ctc loss and can use the same ctc decoding algorithm and again assuming that the prediction network and joint network are relatively small and not too costly this can 
definitely still fit into sort of the compute budget for a production system the last thing i want to talk about is what we have actually decided to do at rev and that is a joint ctc and attention model so this is the encoder decoder model with attention that we saw previously we can take the encoder just add a softmax layer and turn the encoder into a ctc model and we can train all of this jointly by just adding the ctc loss and the attention based loss together and what this gives us at the end is essentially two complete models that just share encoder parameters this was originally developed as a way to actually improve the accuracy of the encoder decoder model with attention so there's a way to do the attention decoding while also incorporating scores from the ctc module and that actually gives i think the best performing asr architecture overall that i've seen but it's similarly slow to the encoder decoder model with attention so what we do instead is use a two pass decoding framework first we pass the audio through the encoder and use our ctc decoding to get an n-best list of hypotheses and then we can use the attention decoder to do what's called rescoring so here we're able to feed each hypothesis that came out of ctc decoding into the attention decoder and get the probability of that hypothesis according to the attention decoder and what's nice about this is that we can actually do some of the parallelization that we saw during training in transformer models at inference time as well so here we're able to get word error rates very close to what we achieve with attention decoding but it's much faster than attention decoding is so i have a couple of slides about research projects that we're working on but i think i will pause and we can take questions here yeah awesome thank you so much jenny and nico for such an illuminating talk i think it really touched on a lot of concepts and also connected them back to some of the topics we introduced in the lectures which we really appreciate maybe as people are thinking of questions i can start with one the results you showed with respect to bias in the data sets and potential effects on model performance were very intriguing to me i'm curious if you can speak a little bit more to strategies that you're exploring for trying to handle and mitigate these biases in your asr pipeline yeah absolutely so one thing that we already do to a certain extent but we're looking into doing more of is actually making more use of our revver human transcribers to have them help us label data with accents we have some labeling but you know the more data we can collect the better the ultimate goal of that is really to do a better job of balancing the training data we think that's probably the easiest approach in terms of getting the models to do a better or less biased job across different types of data we have also looked at least in the past with our hybrid model and i think it's something we would consider as well for this end-to-end model at sort of post-processing steps that can potentially account for some of these issues and try to do some error correction after the fact so those are i think our two main strategies at the moment yeah if i can add if you remember that iceberg slide i think the answers are hidden in all the data
you know and a big part of it is like us figuring out a way to uh to mine that data and and you know rebalance things uh as jenny said and there's a few techniques i think to learn like curriculum learning and things like that that that maybe we could explore one one of our research papers is around curriculum learning very interesting yeah i'd love to follow up on on that topic later on thank you sure yeah um i see one question in the chat about the first pass that we use in the two pass model um yeah so for for right now actually um we are um looking into the transducer models and it's simply definitely something we're interested in those seem to perform well um but we're currently using um ctc with an engram language model added after the fact as our first pass and that is a conformer the encoder that we use that we use for both ctc and the embeddings that get fed to the attention decoder as a conformer model thank you i just want to say quickly i did put our email addresses here um definitely feel free to reach out if anyone has other questions or wants to talk about rev or just asr in general i'm always happy to chat with people thanks
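To make the CTC reduction described in the talk concrete, here is a small illustrative sketch in Python. It assumes a toy character vocabulary with an explicit blank symbol and only a handful of frames; the brute-force enumeration below is purely for illustration, since real systems use the dynamic programming forward algorithm and prefix beam search mentioned above (and in practice an off-the-shelf loss such as tf.nn.ctc_loss).

```python
import itertools
import numpy as np

BLANK = "_"  # stand-in for the CTC blank symbol

def collapse(path):
    # reduce a CTC alignment: first merge repeated symbols, then drop blanks
    # e.g. "cc_at" -> "c_at" -> "cat", and "_cat" -> "cat"
    merged = [symbol for symbol, _ in itertools.groupby(path)]
    return "".join(symbol for symbol in merged if symbol != BLANK)

def ctc_probability(label, frame_probs, vocab):
    # brute-force P(label | x): sum the probability of every length-T alignment whose
    # collapsed form equals the label (frame outputs assumed conditionally independent)
    num_frames = frame_probs.shape[0]
    total = 0.0
    for path in itertools.product(range(len(vocab)), repeat=num_frames):
        if collapse("".join(vocab[i] for i in path)) == label:
            prob = 1.0
            for t, i in enumerate(path):
                prob *= frame_probs[t, i]
            total += prob
    return total

# toy usage: 4 frames of softmax outputs over a 4-symbol vocabulary
vocab = [BLANK, "c", "a", "t"]
rng = np.random.default_rng(0)
frame_probs = rng.random((4, len(vocab)))
frame_probs /= frame_probs.sum(axis=1, keepdims=True)  # each frame is a probability distribution
print(collapse("cc_at"), collapse("_cat"))              # both print "cat"
print(ctc_probability("cat", frame_probs, vocab))
```

Running the toy example shows that alignments such as "cc_at" and "_cat" both reduce to "cat", and the label probability is the sum over all such alignments, exactly the quantity the dynamic programming algorithm computes efficiently for realistic sequence lengths.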
MIT_6S191_Introduction_to_Deep_Learning
MIT_6S191_2023_Deep_Generative_Modeling.txt
foreign I'm really really excited about this lecture because as Alexander introduced to yesterday right now we're in this tremendous age of generative Ai and today we're going to learn the foundations of deep generative modeling where we're going to talk about Building Systems that can not only look for patterns in data but can actually go A Step Beyond this to generate brand new data instances based on those learned patterns this is an incredibly complex and Powerful idea and as I mentioned it's a particular subset of deep learning that has actually really exploded in the past couple of years and this year in particular so to start and to demonstrate how powerful these algorithms are let me show you these three different faces I want you to take a minute think think about which face you think is real raise your hand if you think it's face a okay you see a couple of people face B many more people face C about second place well the truth is that all of you are wrong all three of these faces are fake these people do not exist these images were synthesized by Deep generative models trained on data of human faces and asked to produce new instances now I think that this demonstration kind of demonstrates the power of these ideas and the power of this notion of generative modeling so let's get a little more concrete about how we can formalize this so far in this course we've been looking at what we call problems of supervised learning meaning that we're given data and associated with that data is a set of labels our goal is to learn a function that maps that data to the labels now we're in a course on deep learning so we've been concerned with functional mappings that are defined by Deep neural networks but really that function could be anything neural networks are powerful but we could use other techniques as well in contrast there's another class of problems in machine learning that we refer to as unsupervised learning where we take data but now we're given only data no labels and our goal is to try to build some method that can understand the hidden underlying structure of that data what this allows us to do is it gives us new insights into the foundational representation of the data and as we'll see later actually enables us to generate new data instances now this class of problems this definition of unsupervised learning captures the types of models that we're going to talk about today in the focus on generative modeling which is an example of unsupervised learning and is United by this goal of the problem where we're given only samples from a training set and we want to learn a model that represents the distribution of the data that the model is seeing generative modeling takes two general forms first density estimation and second sample generation in density estimation the task is given some data examples our goal is to train a model that learns a underlying probability distribution that describes the where the data came from with sample generation the idea is similar but the focus is more on actually generating new instances our goal with sample generation is to again learn this model of this underlying probability distribution but then use that model to sample from it and generate new instances that are similar to the data that we've seen approximately falling along ideally that same real data distribution now in both these cases of density estimation and Sample generation the underlying question is the same our learning task is to try to build a model that learns this probability 
distribution that is as close as possible to the true data distribution okay so with this definition and this concept of generative modeling what are some ways that we can actually deploy generative modeling forward in the real world for high impact applications well part of the reason that generative models is are so powerful is that they have this ability to uncover the underlying features in a data set and encode it in an efficient way so for example if we're considering the problem of facial detection and we're given a data set with many many different faces starting out without inspecting this data we may not know what the distribution of Faces in this data set is with respect to Features we may be caring about for example the pose of the head clothing glasses skin tone Hair Etc and it can be the case that our training data may be very very biased towards particular features without us even realizing this using generative models we can actually identify the distributions of these underlying features in a completely automatic way without any labeling in order to understand what features may be overrepresented in the data what features may be underrepresented in the data and this is the focus of today and tomorrow's software Labs which are going to be part of the software lab competition developing generative models that can do this task and using it to uncover and diagnose biases that can exist within facial detection models another really powerful example is in the case of outlier detection identifying rare events so let's consider the example of self-driving autonomous cars with an autonomous car let's say it's driving out in the real world we really really want to make sure that that car can be able to handle all the possible scenarios and all the possible cases it may encounter including edge cases like a deer coming in front of the car or some unexpected rare events not just you know the typical straight freeway driving that it may see the majority of the time with generative models we can use this idea of density estimation to be able to identify rare and anomalous events within the training data and as they're occurring as the model sees them for the first time so hopefully this paints this picture of what generative modeling the underlying concept is and a couple of different ways in which we can actually deploy these ideas for powerful and impactful real world applications yeah in today's lecture we're going to focus on a broad class of generative models that we call latent variable models and specifically distilled down into two subtypes of latent variable models first things first I've introduced this term latent variable but I haven't told you or described to you what that actually is I think a great example and one of my favorite examples throughout this entire course that gets at this idea of the latent variable is this little story from Plato's Republic which is known as the myth of the cave in this myth there is a group of prisoners and as part of their punishment they're constrained to face a wall now the only things the prisoners can observe are shadows of objects that are passing in front of a fire that's behind them and they're observing the casting of the Shadows on the wall of this cave to the prisoners those Shadows are the only things they see their observations they can measure them they can give them names because to them that's their reality but they're unable to directly see the underlying objects the true factors themselves that are casting those Shadows 
those objects here are like latent variables in machine learning they're not directly observable but they're the true underlying features or explanatory factors that create the observed differences and variables that we can see and observe and this gets out the goal of generative modeling which is to find ways that we can actually learn these hidden features these underlying latent variables even when we're only given observations of The observed data so let's start by discussing a very simple generative model that tries to do this through the idea of encoding the data input the models we're going to talk about are called autoencoders and to take a look at how an auto encoder works we'll go through step by step starting with the first step of taking some raw input data and passing it through a series of neural network layers now the output of this of this first step is what we refer to as a low dimensional latent space it's an encoded representation of those underlying features and that's our goal in trying to train this model and predict those features the reason a model like this is called an encoder an autoencoder is that it's mapping the data X into this Vector of latent variables Z now let's ask ourselves the question let's pause for a moment why maybe we care about having this latent variable Vector Z be in a low dimensional space anyone have any ideas all right maybe there are some ideas as yes the suggestion was that it's more efficient yes that's that's gets at it the heart of the of the question the idea of having that low dimensional latent space is that it's a very efficient compact encoding of the rich High dimensional data that we may start with as you pointed out right what this means is that we're able to compress data into this small feature representation a vector that captures this compactness and richness without requiring so much memory or so much storage so how do we actually train the network to learn this latent variable vector since we don't have training data we can't explicitly observe these latent variables Z we need to do something more clever what the auto encoder does is it builds a way to decode this latent variable Vector back up to the original data space trying to reconstruct their original image from that compressed efficient latent encoding and once again we can use a series of neural network layers such as convolutional layers fully connected layers but now to map back from that lower dimensional space back upwards to the input space this generates a reconstructed output which we can denote as X hat since it's an imperfect reconstruction of our original input data to train this network all we have to do is compare the outputted Reconstruction and the original input data and say how do we make these as similar as possible we can minimize the distance between that input and our reconstructed output so for example for an image we can compare the pixel wise difference between the input data and the reconstructed output just subtracting the images from one another and squaring that difference to capture the pixel wise Divergence between the input and the reconstruction what I hope you'll notice and appreciate is in that definition of the loss it doesn't require any labels the only components of that loss are the original input data X and the reconstructed output X hat so I've simplified now this diagram by abstracting away those individual neural network layers in both the encoder and decoder components of this and again this idea of not requiring any 
labels gets back to the idea of unsupervised learning since what we've done is we've been able to learn an encoded quantity our latent variables that we cannot observe without any explicit labels all we started from was the raw data itself it turns out that as the question and answer got at that dimensionality of the latent space has a huge impact on the quality of the generated reconstructions and how compressed that information bottleneck is Auto encoding is a form of compression and so the lower the dimensionality of the latent space the less good our reconstructions are going to be but the higher the dimensionality the less efficient that encoding is going to be so to summarize this first part this idea of an autoencoder is using this bottlenecked compressed hidden latent layer to try to bring the network down to learn a compact efficient representation of the data we don't require any labels this is completely unsupervised and so in this way we're able to automatically encode information within the data itself to learn this latent space Auto encoding information Auto encoding data now this is a pretty simple model and it turns out that in practice this idea of self-encoding or Auto encoding has a bit of a Twist on it to allow us to actually generate new examples that are not only reconstructions of the input data itself and this leads us to the concept of variational autoencoders or vaes with the traditional autoencoder that we just saw if we pay closer attention to the latent layer right which is shown in that orange salmon color that latent layer is just a normal layer in the neural network it's completely deterministic what that means is once we've trained the network once the weights are set anytime we pass a given input in and go back through the latent layer decode back out we're going to get the same exact reconstruction the weights aren't changing it's deterministic in contrast variational autoencoders vaes introduce an element of Randomness a probabilistic Twist on this idea of Auto encoding what this will allow us to do is to actually generate new images or new data instances that are similar to the input data but not forced to be strict reconstructions in practice with the variational autoencoder we've replaced that single deterministic layer with a random sampling operation now instead of learning just the latent variables directly themselves for each latent variable we Define a mean and a standard deviation that captures a probability distribution over that latent variable what we've done is we've gone from a single Vector of latent variables Z to a vector of means mu and a vector of standard deviations Sigma that parametrize the probability distributions around those latent variables what this will allow us to do is Now sample using this element of Randomness this element of probability to then obtain a probabilistic representation of the latent space itself as you hopefully can tell right this is very very similar to the autoencoder itself but we've just added this probabilistic twist where we can sample in that intermediate space to get these samples of latent variables now to get a little more into the depth of how this is actually learned how this is actually trained with defining the vae we've eliminated this deterministic nature to now have these encoders and decoders that are probabilistic
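To make that probabilistic encoder concrete, here is a minimal sketch in TensorFlow/Keras of an encoder that outputs a mean and a log-variance for each latent variable, plus a decoder that maps a latent sample back to the input space; the input size, hidden width, and latent dimensionality are illustrative assumptions, not values taken from the lecture or the course labs.

import tensorflow as tf

input_dim = 784   # assumed flattened input size, e.g. a 28x28 image
latent_dim = 2    # assumed latent dimensionality, chosen only for illustration

# Encoder: maps x to the parameters (mu, log sigma^2) of q(z | x)
enc_in = tf.keras.Input(shape=(input_dim,))
h = tf.keras.layers.Dense(256, activation="relu")(enc_in)
z_mean = tf.keras.layers.Dense(latent_dim, name="z_mean")(h)
z_log_var = tf.keras.layers.Dense(latent_dim, name="z_log_var")(h)
encoder = tf.keras.Model(enc_in, [z_mean, z_log_var], name="encoder")

# Decoder: maps a latent sample z back up to a reconstruction x_hat
dec_in = tf.keras.Input(shape=(latent_dim,))
g = tf.keras.layers.Dense(256, activation="relu")(dec_in)
x_hat = tf.keras.layers.Dense(input_dim, activation="sigmoid")(g)
decoder = tf.keras.Model(dec_in, x_hat, name="decoder")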
the encoder is Computing a probability distribution of the latent variable Z given input data X while the decoder is doing the inverse trying to learn a probability distribution back in the input data space given the latent variables Z and we Define separate sets of weights Phi and Theta to define the network weights for the encoder and decoder components of the vae all right so when we get now to how we actually optimize and learn the network weights in the vae the first step is to define a loss function right that's the core element to training a neural network our loss is going to be a function of the data and a function of the neural network weights just like before but we have these two components these two terms that Define our vae loss first we see the Reconstruction loss just like before where the goal is to capture the difference between our input data and the reconstructed output and now for the vae we've introduced a second term to the loss what we call the regularization term often you'll maybe even see this referred to as the vae loss and we'll go into describing what this regularization term means and what it's doing to do that and to understand remember and keep in mind that in all neural network operations our goal is to try to optimize the network weights with respect to the data with respect to minimizing this objective loss and so here we're concerned with the network weights Phi and Theta that Define the weights of the encoder and the decoder we consider these two terms first the Reconstruction loss again the Reconstruction loss is very similar same as before you can think of it as the error or the likelihood that effectively captures the difference between your input and your outputs and again we can train this in an unsupervised way not requiring any labels to force the latent space and the network to learn how to effectively reconstruct the input data the second term the regularization term is now where things get a bit more interesting so let's go on into this in a little bit more detail because we have this probability distribution and we're trying to compute this encoding and then decode back up as part of regularizing we want to take that inference over the latent distribution and constrain it to behave nicely if you will the way we do that is we place what we call a prior on the latent distribution and what this is is some initial hypothesis or guess about what that latent variable space may look like this helps us and helps the network to enforce a latent space that roughly tries to follow this prior distribution and this prior is denoted as P of Z right that term d That's effectively the regularization term it's capturing a distance between our encoding of the latent variables and our prior hypothesis about what the structure of that latent space should look like so over the course of training we're trying to enforce that each of those latent variables adopts a probability distribution that's similar to that prior a common Choice when training vaes and developing these models is to enforce the latent variables to be roughly standard normal gaussian distributions meaning that they are centered around mean zero and they have a standard deviation of one what this allows us to do is to encourage the encoder to put the latent variables roughly around a centered space Distributing the encoding smoothly so that we don't get too much Divergence away from that smooth space which can occur if the network tries to cheat and try to Simply memorize the data by placing the gaussian standard normal prior on the latent space
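As a rough sketch of how those two terms can be combined in code, assuming the encoder and decoder variables from the hypothetical sketch above and using a simple pixel-wise squared error for the reconstruction term (the course labs may use a different likelihood); the optional beta weight anticipates the beta-VAE variant discussed later in the lecture:

import tensorflow as tf

def vae_loss(x, x_hat, z_mean, z_log_var, beta=1.0):
    # Reconstruction term: pixel-wise squared difference between input and reconstruction
    reconstruction = tf.reduce_mean(tf.reduce_sum(tf.square(x - x_hat), axis=-1))
    # Regularization term: KL divergence between q(z|x) and the standard normal prior
    kl = -0.5 * tf.reduce_mean(
        tf.reduce_sum(1.0 + z_log_var - tf.square(z_mean) - tf.exp(z_log_var), axis=-1))
    # beta = 1 gives the standard VAE loss; beta > 1 weights the regularizer more heavily
    return reconstruction + beta * kl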
we can define a concrete mathematical term that captures the distance the Divergence between our encoded latent variables and this prior and this is called the KL Divergence when our prior is a standard normal the KL Divergence takes the form of the equation that I'm showing up on the screen but what I want you to really come away with is that the concept of trying to smooth things out and to capture this Divergence and this difference between the prior and the latent encoding is all this KL term is trying to capture so it's a bit of math and I acknowledge that but what I want to next go into is really what is the intuition behind this regularization operation why do we do this and why does the normal prior in particular work effectively for vaes so let's consider what properties we want our latent space to adopt and for this regularization to achieve the first is this goal of continuity and what we mean by continuity is that if there are points in the latent space that are close together ideally after decoding we should recover two reconstructions that are similar in content that make sense that they're close together the second key property is this idea of completeness we don't want there to be gaps in the latent space we want to be able to decode and sample from the latent space in a way that is smooth and a way that is connected to get more concrete let's ask what could be the consequences of not regularizing our latent space at all well if we don't regularize we can end up with instances where there are points that are close in the latent space but don't end up with similar decodings or similar reconstructions similarly we could have points that don't lead to meaningful reconstructions at all they're somehow encoded but we can't decode effectively regularization allows us to realize points that end up close in the latent space and also are similarly reconstructed and meaningfully reconstructed okay so continuing with this example the example that I showed there and I didn't get into details was showing these shapes of different colors that are trying to be encoded in some lower dimensional space with regularization we are able to achieve this by Trying to minimize that regularization term it's not sufficient to just employ the Reconstruction loss alone to achieve this continuity and this completeness because of the fact that without regularization just encoding and reconstructing does not guarantee the properties of continuity and completeness we overcome these issues of having potentially pointed distributions having discontinuities having disparate means that could end up in the latent space without the effect of regularization we overcome this by now regularizing the mean and the variance of the encoded latent distributions according to that normal prior what this allows is for the Learned distributions of those latent variables to effectively overlap in the latent space because everything is regularized according to this prior of mean zero standard deviation one and that centers the means regularizes the variances for each of those independent latent variable distributions together the effect of this regularization in net is that we can achieve continuity and completeness in the latent space points and distances that are close should correspond to similar reconstructions that we get out so hopefully this gets at some of the intuition behind the idea of the vae behind the idea of the regularization and trying to enforce the structured normal prior on the latent space
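For reference, the closed-form expression referred to on the slide, assuming a diagonal Gaussian posterior with means mu_j and standard deviations sigma_j and a standard normal prior, can be written as follows (this is the standard textbook form, not copied from the slide itself):

D_{KL}\left(q(z \mid x)\,\|\,\mathcal{N}(0, I)\right) = -\frac{1}{2}\sum_{j=1}^{d}\left(1 + \log \sigma_j^{2} - \mu_j^{2} - \sigma_j^{2}\right)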
with this in hand with the two components of our loss function reconstructing the inputs regularizing learning to try to achieve continuity and completeness we can now think about how we Define a forward pass through the network going from an input example and being able to decode and sample from the latent variables to look at new examples our last critical step is how the actual back propagation training algorithm is defined and how we achieve this the key as I introduced with vaes is this notion of randomness of sampling that we have introduced by defining these probability distributions over each of the latent variables the problem this gives us is that we cannot back propagate directly through anything that has an element of sampling anything that has an element of randomness back propagation requires completely deterministic nodes deterministic layers to be able to successfully apply gradient descent and the back propagation algorithm the Breakthrough idea that enabled vaes to be trained completely end to end was this idea of re-parametrization within that sampling layer and I'll give you the key idea about how this operation works it's actually really quite clever so as I said when we have a notion of randomness of probability we can't sample directly through that layer instead with re-parametrization what we do is we redefine how a latent variable Vector is sampled as a sum of a fixed deterministic mean mu a fixed Vector of standard deviation Sigma and now the trick is that we divert all the randomness all the sampling to a random constant Epsilon that's drawn from a normal distribution so the mean itself is fixed the standard deviation is fixed all the randomness and the sampling occurs according to that Epsilon constant we can then scale the mean and standard deviation by that random constant to re-achieve the sampling operation within the latent variables themselves what this actually looks like and an illustration that breaks down this concept of re-parametrization and diversion is as follows so looking here right what I've shown is these completely deterministic steps in blue and the sampling random steps in Orange originally if our latent variables are what effectively are capturing the randomness the sampling themselves we have this problem in that we can't back propagate we can't train directly through anything that has stochasticity that has randomness what reparametrization allows us to do is it shifts this diagram where now we've completely diverted that sampling operation off to the side to this constant Epsilon which is drawn from a normal prior and now when we look back at our latent variable it is deterministic with respect to that sampling operation what this means is that we can back propagate to update our Network weights completely end to end without having to worry about direct Randomness direct stochasticity within those latent variables z this trick is really really powerful because it enabled the ability to train these vaes completely end to end through the back propagation algorithm all right so at this point we've gone through the core architecture of vaes we've introduced these two terms of the loss we've seen how we can train it end to end
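A minimal sketch of that reparametrized sampling step, assuming the encoder outputs a mean and a log-variance as in the earlier hypothetical snippet:

import tensorflow as tf

def sample_latent(z_mean, z_log_var):
    # Divert all the randomness to epsilon ~ N(0, I); z is then a deterministic
    # function of (z_mean, z_log_var, epsilon), so gradients can flow through it
    epsilon = tf.random.normal(shape=tf.shape(z_mean))
    return z_mean + tf.exp(0.5 * z_log_var) * epsilon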
now let's consider what these latent variables are actually capturing and what they represent when we impose this distributional prior what it allows us to do is to sample effectively from the latent space and actually slowly perturb the value of single latent variables keeping the other ones fixed and what you can see here is that by doing that perturbation that tuning of the value of the latent variables we can run the decoder of the vae every time reconstruct the output every time we do that tuning and what you'll see hopefully with this example with the face is that an individual latent variable is capturing something semantically informative something meaningful and we see that by this perturbation by this tuning in this example the face as you hopefully can appreciate is Shifting the pose is Shifting and all this is driven by is the perturbation of a single latent variable tuning the value of that latent variable and seeing how that affects the decoded reconstruction the network is actually able to learn these different encoded features these different latent variables such that by perturbing the values of them individually we can interpret and make sense of what those latent variables mean and what they represent to make this more concrete right we can consider even multiple latent variables simultaneously compare one against the other and ideally we want those latent features to be as independent as possible in order to get at the most compact and richest representation and compact encoding so here again in this example of faces we're walking along two axes head pose on the x-axis and what appears to be kind of a notion of a smile on the y-axis and you can see that with these reconstructions we can actually perturb these features to be able to perturb the end effect in the reconstructed space and so ultimately with the vae our goal is to try to enforce as much information to be captured in that encoding as possible we want these latent features to be independent and ideally disentangled it turns out that there is a very clever and simple way to try to encourage this Independence and this disentanglement while this may look a little complicated with the math and a bit scary I will break this down with the idea of how a very simple concept enforces this independent latent encoding and this disentanglement all this term is showing is those two components of the loss the Reconstruction term the regularization term that's what I want you to focus on the idea of latent space disentanglement really arose with this concept of beta vaes what beta vaes do is they introduce this parameter beta and what it is is a weighting constant the weighting constant controls how powerful that regularization term is in the overall loss of the vae and it turns out that by increasing the value of beta you can try to encourage greater disentanglement more efficient encoding to enforce these latent variables to be uncorrelated with each other now if you're interested in mathematically why beta vaes enforce this disentanglement there are many papers in the literature and proofs and discussions as to why this occurs and we can point you in those directions but to get a sense of what this actually affects Downstream when we look at face reconstruction as a task of Interest with the standard vae no beta term or rather a beta of one you can hopefully appreciate that the feature of the rotation of the head the pose actually ends up being correlated with the smile the mouth expression the mouth position in that as the head pose is changing the apparent smile or the position of the mouth is also changing
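A sketch of that kind of latent traversal, assuming a trained decoder like the hypothetical one sketched earlier and a small latent space; the sweep range and step count are arbitrary choices:

import numpy as np

def latent_traversal(decoder, z_base, dim, values):
    # Perturb a single latent dimension while keeping the others fixed,
    # decoding a reconstruction for each perturbed value
    frames = []
    for v in values:
        z = np.array(z_base, dtype=np.float32)
        z[dim] = v
        frames.append(decoder(z[None, :]))  # add a batch dimension before decoding
    return frames

# e.g. sweep latent dimension 0 of a 2-D code from -3 to 3 in 9 steps:
# frames = latent_traversal(decoder, z_base=[0.0, 0.0], dim=0, values=np.linspace(-3, 3, 9))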
but with beta vaes empirically we can observe that with imposing these beta values much much greater than one we can try to enforce greater disentanglement where now we can consider only a single latent variable head pose and the smile the position of the mouth in these images is more constant compared to the standard vae all right so this is really all the core math the core operations the core architecture of the vaes that we're going to cover in today's lecture and in this class in general to close this section and as a final note I want to remind you back to the motivating example that I introduced at the beginning of this lecture facial detection where now hopefully you've understood this concept of latent variable learning and encoding and how this may be useful for a task like facial detection where we may want to learn those distributions of the underlying features in the data and indeed you're going to get Hands-On practice in the software labs to build variational autoencoders that can automatically uncover features underlying facial detection data sets and use this to actually understand underlying and hidden biases that may exist with those data and with those models and it doesn't just stop there tomorrow we'll have a very very exciting guest lecture on robust and trustworthy deep learning which will take this concept a step further to realize how we can use this idea of generative models and latent variable learning to not only uncover and diagnose biases but actually solve and mitigate some of those harmful effects of those biases in neural networks for facial detection and other applications all right so to summarize quickly the key points of vaes we've gone through how they're able to compress data into this compact encoded representation from this representation we can generate reconstructions of the input in a completely unsupervised fashion we can train them end to end using the reparametrization trick we can understand the semantic interpretation of individual latent variables by perturbing their values and finally we can sample from the latent space to generate new examples by passing back up through the decoder so vaes are looking at this idea of latent variable encoding and density estimation as their core problem what if now we only focus on the quality of the generated samples and that's the task that we care more about for that we're going to transition to a new type of generative model called a generative adversarial network or Gan where with Gans our goal is really that we care more about how well we generate new instances that are similar to the existing data meaning that we want to try to sample from a potentially very complex distribution that the model is trying to approximate it can be extremely extremely difficult to learn that distribution directly because it's complex it's high dimensional and we want to be able to get around that complexity what Gans do is they say okay what if we start from something super super simple as simple as it can get completely random noise could we build a neural network architecture that can learn to generate synthetic examples from complete random noise and this is the underlying concept of Gans where the goal is to train this generator Network that learns a transformation from noise to the training data distribution with the goal of making the generated examples as close to the real deal as possible with Gans the Breakthrough idea here was to interface these two neural networks together one being a generator and one being a discriminator and these two components the generator and
discriminator are at War at competition with each other specifically the goal of the generator network is to look at random noise and try to produce an imitation of the data that's as close to real as possible the discriminator that then takes the output of the generator as well as some real data examples and tries to learn a classification classification decision distinguishing real from fake and effectively in the Gan these two components are going back and forth competing each other trying to force the discriminator to better learn this distinction between real and fake while the generator is trying to fool and outperform the ability of the discriminator to make that classification so that's the overlying concept but what I'm really excited about is the next example which is one of my absolute favorite illustrations and walkthroughs in this class and it gets at the intuition behind Gans how they work and the underlying concept okay we're going to look at a 1D example points on a line right that's the data that we're working with and again the generator starts from random noise produces some fake data they're going to fall somewhere on this one-dimensional line now the next step is the discriminator then sees these points and it also sees some real data the goal of the discriminator is to be trained to Output a probability that a instance it sees is real or fake and initially in the beginning before training it's not trained right so its predictions may not be very good but over the course of training you're going to train it and it hopefully will start increasing the probability for those examples that are real and decreasing the probability for those examples that are fake overall goal is to predict what is real until eventually the discriminator reaches this point where it has a perfect separation perfect classification of real versus fake so at this point the discriminator thinks okay I've done my job now we go back to the generator and it sees the examples of where the real data lie and it can be forced to start moving its generated fake data closer and closer increasingly closer to the real data we can then go back to the discriminator which receives these newly synthesized examples from the generator and repeats that same process of estimating the probability that any given point is real and learning to increase the probability of the true real examples decrease the probability of the fake points adjusting adjusting over the course of its training and finally we can go back and repeat to the generator again one last time the generator starts moving those fake points closer closer and closer to the real data such that the fake data is almost following the distribution of the real data at this point it becomes very very hard for the discriminator to distinguish between what is real and what is fake while the generator will continue to try to create fake data points to fool the discriminator this is really the key concept the underlying intuition behind how the components of the Gan are essentially competing with each other going back and forth between the generator and the discriminator and in fact this is the this intuitive concept is how the Gan is trained in practice where the generator first tries to synthesize new examples synthetic examples to fool the discriminator and the goal of the discriminator is to take both the fake examples and the real data to try to identify the synthesized instances in training what this means is that the objective the loss for the generator and 
discriminator have to be at odds with each other they're adversarial and that is what gives rise to the component of adversarial in generative adversarial network these adversarial objectives are then put together to then Define what it means to arrive at a stable Global Optimum where the generator is capable of producing the true data distribution that would completely fool the discriminator concretely this can be defined mathematically in terms of a loss objective and again though I'm showing math we can distill this down and go through what each of these terms reflect in terms of that core intuitive idea and conceptual idea that hopefully that 1D example conveyed so we'll first consider the perspective of the discriminator D its goal is to maximize the probability in its decisions that real data are classified as real and fake data classified as fake so here the first term G of Z is the generator's output and D of G of Z is the discriminator's estimate of that generated output as being fake and in D of x x is the real data so D of x is the estimate of the probability that a real instance is fake and 1 minus D of x is the estimate that the real instance is real so here in both these cases the discriminator is producing a decision about fake data real data and together it wants to try to maximize the probability that it's getting answers correct right now with the generator we have those same exact terms but keep in mind the generator is never able to affect anything the discriminator's decision is actually doing besides generating new data examples so for the generator its objective is simply to minimize the probability that the generated data is identified as fake together we want to then put this together to Define what it means for the generator to synthesize fake images that hopefully fool the discriminator all in all right this term besides the math besides the particularities of this definition what I want you to take away from this section on Gans is that we have this dual competing objective where the generator is trying to synthesize these synthetic examples that ideally fool the best discriminator possible and in doing so the goal is to build up a network via this adversarial training this adversarial competition to use the generator to create new data that best mimics the true data distribution and is completely synthetic new instances what this amounts to in practice is that after the training process you can look exclusively at the generator component and use it to then create new data instances all this is done by starting from random noise and trying to learn a model that goes from random noise to the real data distribution and effectively what Gans are doing is learning a function that transforms that distribution of random noise to some Target what this mapping does is it allows us to take a particular observation of noise in that noise space and map it to some output a particular output in our Target data space and in turn if we consider some other random sample of noise if we feed it through the generator again it's going to produce a completely new instance falling somewhere else on that true data distribution manifold and indeed what we can actually do is interpolate and Traverse between trajectories in the noise space that then map to traversals and interpolations in the Target data space
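A minimal sketch of one training step under this adversarial setup, assuming generator and discriminator are Keras models and using the common non-saturating generator loss rather than the exact minimax form on the slide; the noise dimensionality and learning rates are arbitrary choices:

import tensorflow as tf

bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)
g_opt = tf.keras.optimizers.Adam(1e-4)
d_opt = tf.keras.optimizers.Adam(1e-4)

def gan_train_step(generator, discriminator, real_batch, noise_dim=100):
    noise = tf.random.normal([tf.shape(real_batch)[0], noise_dim])
    with tf.GradientTape() as g_tape, tf.GradientTape() as d_tape:
        fake_batch = generator(noise, training=True)
        real_logits = discriminator(real_batch, training=True)
        fake_logits = discriminator(fake_batch, training=True)
        # Discriminator: push real examples toward "real" and generated ones toward "fake"
        d_loss = bce(tf.ones_like(real_logits), real_logits) + \
                 bce(tf.zeros_like(fake_logits), fake_logits)
        # Generator: try to make the discriminator label its samples as real
        g_loss = bce(tf.ones_like(fake_logits), fake_logits)
    d_grads = d_tape.gradient(d_loss, discriminator.trainable_variables)
    g_grads = g_tape.gradient(g_loss, generator.trainable_variables)
    d_opt.apply_gradients(zip(d_grads, discriminator.trainable_variables))
    g_opt.apply_gradients(zip(g_grads, generator.trainable_variables))
    return d_loss, g_loss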
and this is really really cool because now you can think about an initial point and a Target point and all the steps that are going to take you to synthesize and go between those images in that Target data distribution so hopefully this gives a sense of this concept of generative modeling for the purpose of creating new data instances and that notion of interpolation and data transformation leads very nicely to some of the recent advances and applications of Gans where one particularly commonly employed idea is to try to iteratively grow the Gan to get more and more detailed image Generations progressively adding layers over the course of training to then refine the examples generated by the generator and this is the approach that was used to generate those images of synthetic faces that I showed at the beginning of this lecture this idea of using a Gan that is refined iteratively to produce higher resolution images another way we can extend this concept is to extend the Gan architecture to consider particular tasks and impose further structure on the network itself one particular idea is to say okay what if we have a particular label or some factor that we want to condition the generation on we call this C and it's supplied to both the generator and the discriminator what this will allow us to achieve is paired translation between different types of data so for example we can have images of a street view and we can have images of the segmentation of that street view and we can build a gan that can directly translate between the street view and the segmentation let's make this more concrete by considering some particular examples so what I just described was going from a segmentation label to a street scene we can also translate between a satellite view an aerial satellite image to what is the road map equivalent of that aerial satellite image or a particular annotation or labels of the image of a building to the actual visual realization and visual facade of that building we can translate between different lighting conditions day to night black and white to color outlines to a colored photo all these cases and I think in particular the most interesting and impactful to me is this translation between street view and aerial view and this is used to consider for example if you have data from Google Maps how you can go between a street view of the map to the aerial image of that finally again extending the same concept of translation between one domain and another an idea is that of completely unpaired translation and this uses a particular Gan architecture called cyclegan and so in this video that I'm showing here the model takes as input a bunch of images in one domain and it doesn't necessarily have to have a corresponding image in another Target domain but it is trained to try to generate examples in that Target domain that roughly correspond to the source domain transferring the style of the source onto the Target and vice versa so this example is showing the translation of images in horse domain to zebra domain the concept here is this cyclic dependency right you have two Gans that are connected together via this cyclic loss transforming between one domain and another and really like all the examples that we've seen so far in this lecture the intuition is this idea of distribution transformation normally with a Gan you're going from noise to some Target with the cycle Gan you're trying to go from some Source distribution some data manifold X to a target distribution another data manifold Y
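A sketch of just the cycle-consistency part of that objective, assuming two hypothetical generators G_xy and G_yx for the two translation directions; the weighting constant is a typical choice, not a value from the lecture, and a full cyclegan also adds an adversarial loss for each direction:

import tensorflow as tf

def cycle_consistency_loss(real_x, real_y, G_xy, G_yx, lam=10.0):
    # Translate X -> Y -> back to X and Y -> X -> back to Y; if the two
    # mappings are consistent, the round trips should recover the originals
    cycled_x = G_yx(G_xy(real_x))
    cycled_y = G_xy(G_yx(real_y))
    loss = tf.reduce_mean(tf.abs(real_x - cycled_x)) + \
           tf.reduce_mean(tf.abs(real_y - cycled_y))
    return lam * loss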
and this is really really not only cool but also powerful in thinking about how we can translate across these different distributions flexibly and in fact this allows us to do Transformations not only to images but to speech and audio as well so in the case of speech and audio it turns out that you can take sound waves represent them compactly in a spectrogram image and use a cycle Gan to then translate and transform speech from one person's voice in one domain to another person's voice in another domain right these are two independent data distributions that we Define maybe you're getting a sense of where I'm hinting at maybe not but in fact this was exactly how we developed the model to synthesize the audio behind Obama's voice that we saw in yesterday's introductory lecture what we did was we trained a cycle Gan to take data in Alexander's voice and transform it into Data in the manifold of Obama's voice so we can visualize how that spectrogram waveform looks like for Alexander's Voice versus Obama's voice that was completely synthesized using this cyclegan approach hi everybody and welcome to MIT 6.S191 the official introductory course here at MIT hi everybody I replayed it okay but basically what we did was Alexander spoke that exact phrase that was played yesterday and we had the trained cycle Gan model and we can deploy it then on that exact audio to transform it from the domain of Alexander's voice to Obama's voice generating the synthetic audio that was played for that video clip all right okay before I accidentally play it again I jump now to the summary slide so today in this lecture we've learned deep generative models specifically talking mostly about latent variable models autoencoders variational Auto encoders where our goal is to learn this low dimensional latent encoding of the data as well as generative adversarial networks where we have these competing generator and discriminator components that are trying to synthesize synthetic examples we've talked about these core foundational generative methods but it turns out as I alluded to in the beginning of the lecture that in this past year in particular we've seen truly truly tremendous advances in generative modeling many of which have not been from those two methods those two foundational methods that we described but rather a new approach called diffusion modeling diffusion models are the driving tools behind the tremendous advances in generative AI that we've seen in this past year in particular vaes and Gans are learning these Transformations these encodings but they're largely restricted to generating examples that fall similar to the data space that they've seen before diffusion models have this ability to now hallucinate and envision and imagine completely new objects and instances which we as humans may not have seen or even thought about right parts of the design space that are not covered by the training data so an example here is this AI generated art if you will right which was created by a diffusion model and I think not only does this get at some of the limits and capabilities of these powerful models but also questions about what does it mean to create new instances what are the limits and Bounds of these models and how can we think about their advances with respect to human capabilities and human intelligence and so I'm really excited that on Thursday in lecture seven on New Frontiers in deep learning we're going to take a really deep dive into diffusion models talk about their fundamentals talk about not only applications to images but other
fields as well in which we're seeing these models really start to make a transformative advances because they are indeed at the very Cutting Edge and very much the New Frontier of generative AI today all right so with that tease and and and um hopefully set the stage for lecture seven on Thursday and conclude and remind you all that we have now about an hour for open Office hour time for you to work on your software Labs come to us ask any questions you may have as well as the Tas who will be here as well thank you so much [Applause]
MIT_6S191_Introduction_to_Deep_Learning
MIT_6S191_2018_Issues_in_Image_Classification.txt
thanks for having me here yeah so I'm based in the Cambridge office which is like a hundred meters that way um and we do a lot of stuff with deep learning we've got a large group in Google brain and other related fields so hopefully that's interesting to some of you at some point so I'm gonna talk for about 20 minutes or so um on this sort of issues in image classification theme and then I'm gonna hand it over to my excellent colleague sunshine Kai who's going to go through an entirely different subject in using tensor flow debugger and eager mode to make work in tensor flow easier maybe that would be good okay so let's take a step back so if you guys have seen happy graphs like this before go ahead and smile and nod if you've seen stuff like this yeah okay so this is a happy graph on image net based image classification so image net is a dataset of a million some odd images for this challenge there were a thousand classes and in 2011 back in the dark ages when nobody knew how to do anything the state of the art was something like 25% error rate on this stuff and in the last call it six seven years the reduction in error rate has been kind of astounding to the point where it's now been talked about so much it's like no longer even surprising and it was like yeah yeah we see this human error rate is somewhere between five and ten percent on this task so the contemporary results of you know 2.2 or whatever it is percent error rate are really kind of astonishing and you can look at a graph like this and make reasonable claims that well machines using deep learning are better than humans at image classification on this task that's kind of weird and kind of amazing and maybe we can declare victory and fill audiences full of people clamoring to learn about deep learning that's cool okay so um I'm gonna talk not about image net data itself but about a slightly different image data set basically people were like okay obviously image net is too easy let's make a larger more interesting data set so open images was released I think a year or two ago it's got about 9 million as opposed to 1 million images the base dataset has 6,000 labels as opposed to 1000 labels this is also multi label so you get you know if there's a person holding a rugby ball you get both person and rugby ball in the dataset it's got all kinds of classes including stairs here which are lovelily illustrated and you can find this on github it's a nice data set to play around with so some colleagues and I did some work of saying ok what happens if we apply just a straight-up inception based model to this data so we trained it up and then we look at how it classifies some images that we found on the web so here's one such image that we found on the web all the images here are Creative Commons and stuff like that so it's OK for us to look at these and when we apply an image net kind of model to this the classifications we get back are kind of what I personally would expect I'm seeing things like bride dress ceremony woman wedding all things that as an American in this country at this time I'm thinking makes sense for this image cool maybe we did solve image classification so then we applied it to another image also of a bride and the model that we had trained up on this open source image dataset returned the following classifications clothing event costume red and performance art no mention of bride also no mention of person regardless of gender so in a sense this model is sort of
like missed the fact that there's a human in the picture which is maybe not awesome and not really what I would think of as great success if we're claiming that image classification is solved ok so what's going on here I'm gonna argue a little bit that what's going on is is based in to some degree on the idea of stereotypes and if you're if you have your laptop up I'd like you to close your laptop for a second this is the interactive portion where you can interact by closing your laptop and I'd like you to find somebody sitting next to you and exercise your human conversation skills for about one minute to come up with a definition between the two of you of what is a stereotype keeping in mind that we're in sort of a statistical setting okay so have a quick one-minute conversation with the person sitting next to you if there's no one sitting next to you you may move ready set go three-two-one and thank you know for having that interesting conversation that easily could have lasted for much more than one minute but such as life let's hear from one or two folks let's had something that they came up with it was interesting oh yeah go ahead your name is adept yeah okay what did you okay so Dickie is saying that a stereotype is a generalization that you find from a large group of people and you apply it to more people okay interesting certainly agree with large parts that yeah okay so so I'm here the claim is that it's a label that's based on experience from within your training set yeah super interesting and the probability of label based on what's your training cool maybe one more oh yeah good okay so that there's claim here that stereotype has something to do with unrelated features that happen to be correlated I think that's interesting let me see if I can this was not a plant sorry your name was Constantine custody is not a plant but I do want to look at this a little bit more in detail so here's here's a data set that I'm going to claim is is based on running data so in the early mornings I pretend that I'm an athlete and go for a run and this is a data set that sort of based on risk that someone might not finish a race that they enter in so we've got high risk people or you know they are in yellow and lower risk people are in red you look in this data it's got a couple dimensions I might fit a linear classifier it's not quite perfect if I look a little more closely if I've actually got some more information here don't just have X&Y I also have this sort of color of outline so I might have a rule that if this data point has a blue outline I'm gonna predict low-risk otherwise I'm gonna predict high-risk fair enough now the big reveal you'll never guess what the the outline feature is based on shoe type the other x and y are based on how long the race is and sort of what a person's weekly training volume is but whether you're foolish enough to buy expensive running shoes because you think they're going to make you faster or whatever this is what's in the data and in traditional machine learning supervised machine learning we might say well wait a minute I'm not sure that shoe type is going to be actually predictive on the other hand it's in our training data and it does seem to be awfully predictive on this data set we have a really simple model it's highly regularized it still gives you no perfect or near perfect accuracy maybe it's fine and the only way we can find out if it's not I would argue is by gathering some more data and I'll point out that this data set has been diabolically 
constructed so that there are some points in the data space that are not particularly well represented and you can maybe tell yourself a story about maybe this data was collected after some corporate 5k or something like that so if we can collect some more data maybe we find that actually there's people wearing all kinds of shoes on both sides of our imaginary classifier but that this shoe type feature is really not predictive at all and this gets back to Constantine's point that perhaps relying on features that are strongly correlated but not necessarily causal may be a point at which we're thinking about a stereotype in some way so obviously given this data and what we know now I would probably go back and suggest a linear classifier based on these features of length of race and weekly training volume as potentially a better model so how does this happen what's the issue here that's at play one of the issues that's at play is that in supervised machine learning we often make the assumption that our training distribution and our test distribution are identical right and we make this assumption for a really good reason which is that if we make that assumption then we can pretend that there's no difference between correlation and causation and we can use all of our features whether they're what Constantine would call you know meaningful or causal or not we can throw them in there and so long as the test and training distributions are the same we're probably okay to within some degree but in the real world we don't just apply models to a training or test set we also use them to make predictions that may influence the world in some way and there I think that the right sort of phrase to use isn't so much test set it's more inference time performance because at inference time when we're going and applying our model to some new instance in the world we may not actually know what the true label is ever or things like that but we still care very much about having good performance and making sure that our training set matches our inference distribution to some degree is like super critical so let's go back to open images and what was happening there you'll recall that it did quite badly at least anecdotally on that image of a bride who appeared to be from India if we look at the geo diversity of open images this is something where we did our best to sort of track down the geolocation of each of the images in the open image data set what we found was that an overwhelming proportion of the data in open images was from North America and six countries in Europe vanishingly small amounts of that data were from countries such as India or China or other places where I've heard there's actually a large number of people so this is clearly not representative in a meaningful way of sort of the global diversity of the world how does this happen it's not like the researchers who put together the open images data set were in any way ill-intentioned they were working really hard to put together what they believed was a more representative data set than image net at the very least they don't have a hundred categories of dogs in this one so what happens well you could make an argument that there's some strong correlation between the distribution of open images and the distribution of countries with high bandwidth low-cost internet access it's not a perfect correlation but it's pretty close and that if one does things like
base an image classifier on data drawn from a distribution of areas that have high bandwidth low cost internet access that may induce differences between the training distribution and the inference time distribution none of this is something you couldn't figure out if you sat down for five minutes right this is all super basic statistics it is in fact stuff that statisticians have been sort of railing at the machine learning community about for the last several decades but as machine learning models become sort of more ubiquitous in everyday life I think that paying attention to these kinds of issues becomes ever more important so let's go back to what is a stereotype and I think I agree with Constantine's idea and I'm gonna add one more tweak to it so I'm gonna say that a stereotype is a statistical confounder I think that's using Constantine's language almost exactly that has a societal basis so when I think about issues of fairness if it's the case that you know rainy weather is correlated with people using umbrellas like yes that's a confounder the umbrellas did not cause the rain but I'm not as worried as an individual human about the societal impact of models that are based on that you know modulo I'm sure you could imagine some crazy scary scenario where that was the case but in general I don't think that's as large an issue but when we think of things like internet connectivity or other societally based factors I think that paying attention to questions of do we have confounders in our data are they being picked up by our models is incredibly important so if you take away nothing else from this short talk I hope that you take away a caution to be aware of differences between your training and inference distributions ask the question because statistically this is not a particularly difficult thing to uncover if you take the time to look in a world of kaggle competitions and people trying to get high marks on deep learning classes and things like that I think it's all too easy for us to just take datasets as given not think about them too much and just try and get our accuracy from 99.1 to 99.2 and as someone who is interested in people coming out of programs like this being ready to do work in the real world I would caution that we can't only be training ourselves to do that so with that I'm gonna leave you with a set of additional resources around machine learning fairness these are super hot off the presses in the sense that this particular little website was launched at I think 8:30 this morning something like that so you've got it first MIT leading the way on this page there are and yeah you can open your laptops again now there are a number of papers that go through this sort of like a greatest hits of the machine learning fairness literature from the last couple years really interesting papers I don't think any of them are like the one final solution to machine learning fairness issues but they're super interesting reads and I think help sort of paint the space and the landscape really usefully there are also a couple of interesting exercises there that you can access via a colab and if you're interested in this space they're things that you can play with I think they include one on adversarial debiasing where because you guys all love deep learning you can use a network to try and become unbiased by having an extra output head that predicts a characteristic that you wish to be unbiased on and then
so with that I'm gonna leave you with a set of additional resources around machine learning fairness these are super hot off the presses in the sense that this particular little website was launched I think at 8:30 this morning something like that so you've got it first MIT leading the way on this page there are yeah you can open your laptops again now there are a number of papers that go through sort of like a greatest hits of the machine learning fairness literature from the last couple years really interesting papers I don't think any of them are like the one final solution to machine learning fairness issues but they're super interesting reads and I think help sort of paint the space and the landscape really usefully there are also a couple of interesting exercises there that you can access via Colab and if you're interested in this space they're things that you can play with I think they include one on adversarial debiasing where because you guys all love deep learning you can use a network to try and become unbiased by having an extra output head that predicts a characteristic that you wish to be unbiased on and then penalizing that model if it's good at predicting that characteristic and so this is trying to adversarially make sure that our internal representation in a deep network is not picking up unwanted correlations or unwanted biases so I hope that that's interesting and I'll be around afterwards to take questions but at this point I'd like to make sure that sunchang has plenty of time so thank you very much [Applause]
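The adversarial debiasing idea just described can be sketched in TensorFlow/Keras, the framework used in this course. This is a minimal illustrative sketch, not the Colab exercise from the talk; the input size and head names are placeholders. A shared representation feeds a task head and a "protected attribute" head through a gradient-reversal layer, so the adversary head learns to predict the attribute while the shared features are pushed to carry no information about it.

```python
import tensorflow as tf

@tf.custom_gradient
def reverse_gradient(x):
    # identity on the forward pass, flips the gradient on the backward pass
    def grad(dy):
        return -dy
    return tf.identity(x), grad

class GradientReversal(tf.keras.layers.Layer):
    def call(self, x):
        return reverse_gradient(x)

num_features = 32  # placeholder input size

inputs = tf.keras.Input(shape=(num_features,))
shared = tf.keras.layers.Dense(64, activation="relu")(inputs)

# main task head: the label we actually care about
task_out = tf.keras.layers.Dense(1, activation="sigmoid", name="task")(shared)
# adversarial head: tries to predict the characteristic we wish to be unbiased on
adv_out = tf.keras.layers.Dense(1, activation="sigmoid", name="protected")(
    GradientReversal()(shared))

model = tf.keras.Model(inputs, [task_out, adv_out])
model.compile(optimizer="adam",
              loss={"task": "binary_crossentropy",
                    "protected": "binary_crossentropy"},
              loss_weights={"task": 1.0, "protected": 1.0})
# model.fit(x, {"task": y, "protected": protected_attribute}, ...)
```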
MIT_6S191_Introduction_to_Deep_Learning
MIT_6S191_LiDAR_for_Autonomous_Driving.txt
thank you very much alex so i'm very happy to be here today and i hope we will be able to cover everything that we wanted there are about 100 pages here for 45 minutes but we'll try okay so very in short innoviz is developing lidars for autonomous vehicles we're very active in automotive trying to pursue autonomous driving level two level three we have several products that we offer to the market but actually here today we're going to focus on two main topics one of them is about what car makers are trying to achieve in developing autonomous vehicles and how we're helping them not only with the lidar but also with the perception software that's something that amir will cover you might have heard of innoviz through our program with bmw we have our lidar here innoviz one that's our first generation that is going to be part of bmw series production for level three for highway driving it's going to be used on several models the seven series the five series the ix i'm very fortunate to be part of this program and of course many other things as i said today i'll cover some topics that are coming from the lidar space and talk about possibly some standardization that is required in that space and later amir will translate some of those requirements also taking from the perception software side we have a white paper that we shared on our website some of the material that i'm going to cover here very quickly because we don't have time is explained very in depth in that document so you can find it on our website and i'm sure you'll find it interesting today most of the cars or all of the cars that are trying to do some automation of driving are at a level two meaning that the car controls either the pedals or the wheel but still require the attention of the driver you just probably heard about a case where a person was accused of killing a person due to an accident in you know automated driving basically because the car makers are still not taking liability the quantum leap between level two and level three comes from the car maker actually taking full responsibility on anything that happens when driving and does not require the person to be attentive it obviously requires them to have much higher confidence and that comes from additional sensors and capabilities in order to reach full autonomous driving you need to have good visibility a good prediction of everything that is going on on the road and there is i would say a certain way that you need to cover the space with different types of sensors and lidar is one of them and what you're seeing here is just one way to try to do it with existing solutions in the market someone who took specific sensors and tried to map how putting them one next to the other gives you the full visibility that is required by the system there are other ways to do it this is just one example i want to first explain what is a lidar and maybe specifically how we approach this problem a lidar is using a laser a laser beam that we move around by using a two-axis scanning mechanism that allows us to scan the scene that light is emitted towards an object and the light that bounces back to the system a fraction of the light is collected by the system it spreads back like a sphere so from 200 meters away you have a certain flux of photons that are collected by the system the aperture of the system is you can
say it's like an antenna that's the antenna of the system it defines how many photons are eventually collected into the system those photons are collimated on a detector and of course the sensitivity of the system defines how well we are able to react to each photon you want to have the lowest uh noise figure in order to be able to detect each and every photon of course photons could come from either the object or photons that actually came from the sun the sun is like the nemesis of sliders and and that's our job to try to differentiate between them and there are ways to do it those photons that are converted to electrons through an avalanche reaction of the silicon are collected by the signal processing mechanism of course there is also self noise of the of the detector itself and it's also part of what we need to do is to uh see the difference between them eventually uh light that comes back from the system is detected by the system but by the unit and by measuring the time in which it took for the light to bounce back we know how far things are now eventually a lidar is like a communication system as you might know you can define it by a signal to noise ratio the signal to noise ratio for lidars is defined by the emission because that's the transmission you are using the aperture of the system which is the antenna the photon detection efficiency which defines the gain of the system and of course the noise the noise that either comes within from the cells from the system or the noise that comes from from the sun now we use this um equation to see how we can improve our system from one generation to the other between innovas one and innovis2 our second generation which we recently showed we improved this equation by a facto of more than 30 times okay this comes from being able to control different knobs of the system using 905 means that we are capped by the amount of laser that we are allowed to use but we there are many other measures that we do in order to improve the system significantly and today innoviz 2 is is a few orders of magnitude really better than any other lidar company any system that is available i'm showing you an exa a demo of innovis2 and this is actually also in a very early stage but already showing the very nice i would say point cloud just to get you understanding every point that you see here is a reflection of light coming back from the scene but after shooting a pulse of light you and we do that very very fast in order to cover the entire field of view and we can see very far away at very nice field of view and range and resolution and and that is how we solve the problem uh which is defined by requirements we get from automakers now autonomous driving is could be described by different applications shuttles trucks passenger vehicles those provide different requirements for example passenger vehicles on highway require a certain requirement for driving very fast but a shuttle the drives in the city with much more complex scenarios and and traffic light and complex intersections require a different set of requirements today i'm going to cover uh this area the highway driving the highway driving is what we see is the mvp of autonomous driving because it it actually simplifies and reduces the variance of different use cases that could happen on the road because highways are more alike and and it actually narrows the number of use cases that you need to catch it it shortens the validation process uh our lidar can support obviously all of those applications uh but 
we see that level two and level three uh the the opportunities that probably would go the fastest in the market now when a car maker is trying to develop a a level three it starts from trying to understand how fast it wants to drive because the faster the car wants to drive it it needs to be able to predict things further ahead it needs higher resolution it needs higher frame rate and and those are the interpretation of the different requirements from the features of the car into the light of space which i this is covered in the white paper and i'm inviting you to read it of course on top of it there is the design of the vehicle uh whether you want to mount it on the roof in the grill uh you know the the the window tilt they're they're the cleaning system there are many things that are from practical elements uh require some modification for the design of the lidar which we obviously need to take into account for those of you that are less familiar with gliders uh then you know obviously a lighter is needed to provide redundancy for cameras due to low light condition or or missed you can see here an example of a very simple example of just some water that is aggregated on the camera and of course every drop of water can create the same problem and and that's not it's not okay to be in this situation this is why you need to have redundancy another case is direct sun of course uh some might say that if you have sufficient processing power and collected millions of objects a camera might be enough but obviously if if you're unable to see because of limitation of the physical layer of the of the sensor it's not enough you need to have a secondary sensor that is not sensitive so um today we're talking about level three level three requirements is is defined uh by the ability to see the front of the vehicle mostly and a good system is not only there to provide safety uh to the to the person because if if all the car needs to do is to drive uh to make to bring to make you uh leave after the the car travel it can decide to break uh every five minutes and every for everything that might happen on the road it will slow down uh you will be exhausted and possibly want to throw up after such a a drive which means that the system in order to be good it has to be smooth in its driving and to have a smooth driving you need to be able to predict things uh very well and and in order to do a smooth uh acceleration uh brakes maneuvers and that really what defines uh the requirements of the sensor i will not go uh very deep in these diagrams these are part of the things that you can find on the white paper talking about the requirements of the field of view from cutting scenario analysis uh and you know whether what whether you place it on the grill or you place it on the roof uh how you manage a highway with different curvature a slope and then you have the vertical field of view that is very much affected by uh the height of the vehicle and uh and the need to support uh the ability to see the the ground very well and under drivable again i don't want to go very deep here but if you're interested to learn about those principles and how they are translated to the requirements of the liar again you can find this on on a white paper and actually there is also a lecture which i gave just a month ago about this white paper it's it's possibly another hour where you need to hear me but you know i don't do i don't want you to do that twice at least um before we go to the perception area i think this possibly 
something that i do want to dwell on eventually in order to have a good driving and smooth driving it's about predicting it's about being able to classify an object as a person knowing that a certain object is a person allows the car to have a better projection of its trajectory of the way it can actually move in space if it's a car there is a certain degree of freedom of how it can move and the same for a bicycle and a person and its velocity the higher the resolution of the sensor the better capability you have in doing that at a longer range because of course the farther the person is the fewer pixels you get on it fewer points so the vertical resolution and in this example the horizontal resolution is key in order to allow the sensor to have good capabilities in identifying objects this slide talks about the frame rate also related to braking distance and i don't want to spend time here it's again another example of you know why a certain frame rate is better than another or why this is enough i'll let you read it in the white paper this example is something that i do want to spend some time on sorry yeah here okay so this is a use case we hear a lot from car makers you are driving behind a vehicle and there are two vehicles next to you so you can't move in case you're seeing a problem and at some point this vehicle here identifies an object that it wants to avoid crashing into and this use case tries to provide an indication of how fast or how well the car would be able to do emergency braking assuming that you're driving at 80 miles an hour now imagine that you have an object that is very far away from you and you want to be able to make a decision that this object is non-overdrivable meaning that it's too high it's high enough to cause damage to the car and basically this is about 14 centimeters because of the suspension of the vehicle which cannot absorb a collision into an object that is too high so the vertical resolution is very important because it's not enough to make a decision on an object because of a single pixel if you're seeing a single pixel far away you don't know if it's a reflection from the road a reflection from a botts' dot a cat's eye or just anything else you need to have at least two pixels so you have good clarity that you're looking at something that is tall and therefore the vertical resolution is very important once you are able to collect enough frames to identify that with good capability there is the latency of the vehicle itself in terms of how fast it can eventually stop now this analysis here is trying to show you the sensitivity of the parameters of the lidar even if the lidar had twice the range capabilities the ability to see an object at twice the range would not help me because if i only get one pixel it will not help me to make a decision earlier if i have higher frame rate even once i see the object and start to aggregate frames to make a decision that this is something i worry about it will only save me about six meters of decision the time in which i start seeing the object and collecting enough frames to make a decision is a very short period which i will try to save if i have double the vertical resolution i will be able to identify this obstacle 100 meters farther away so just to show you the importance of the different parameters of the lidar they are not very obvious but are critical to the safety of the vehicle i will let amir take it from here and thank you thanks
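The sensitivity analysis above can be reproduced with simple small-angle geometry. The numbers below are assumed for illustration only and are not Innoviz specifications.

```python
import numpy as np

def two_pixel_range_m(object_height_m, vertical_res_deg):
    """Farthest range at which an object spans at least two vertical pixels
    (small-angle approximation)."""
    return object_height_m / (2 * np.radians(vertical_res_deg))

def braking_distance_m(speed_mph, decel_mps2=6.0, reaction_s=0.5):
    """Rough stopping distance: reaction delay plus constant deceleration."""
    v = speed_mph * 0.44704                      # mph -> m/s
    return v * reaction_s + v**2 / (2 * decel_mps2)

# assumed example numbers, not vendor specs
print(two_pixel_range_m(0.14, 0.1))   # ~0.1 deg resolution -> about 40 m on a 14 cm object
print(two_pixel_range_m(0.14, 0.05))  # halving the angular step roughly doubles that range
print(braking_distance_m(80))         # stopping distance at 80 mph, roughly 125 m
```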
[Music] it took more time than i told you maybe while amir is getting his screen set up i have a quick question omer how do you in the company view kind of like the evolution of solid-state lidars is this something that's also on the business radar and you want to also develop that kind of technology or do you believe like the mechanical lidars are we have i mean our lidar is a solid state yeah it is a solid state but we are also working on a product i didn't talk about it we're also working on a product for a 360 but as such that is you know about 10 times the resolution of today's available solutions i mean the best-in-class 360 solutions are 128 lines we are working on a lidar with 1280 lines at a fraction of the price we decided to step into that direction because there are still applications that leverage a wider field of view for non-automotive or non-front-looking lidars and that's something that we announced just a month ago and we will show later in the year very exciting thank you yeah okay so thanks so much again so now we'll speak about how we take these oem requirements and that specification and actually build a system to support this perception system so first before we dive in i would like to speak about i think the most obvious but yet most important characteristic of point cloud of lidar and that is that lidar actually measures in 3d the scene around it it means that each point represents a physical object in the scene so it's really easy to measure distance between two points it's easy to measure the height of an object it's easy to fit planes to the ground for example and i want to take you like through a really simple algorithm and we'll see together how far we can go just with lidar and a really simple algorithm and support much of the requirements that i mentioned before so the essence of this simple algorithm is detecting or classifying the collision relevancy of each point in the point cloud so in this visualization you can see pink points and green points the pink points are collision relevant points meaning you don't want to drive through them and the green points are actually drivable points in this case the road so the most simple algorithm you can come up with just takes each pair of points in the point cloud and if they are close enough just measure the height difference between these two points and if it's greater than 40 centimeters you can just classify these two points as collision relevant so here this turned over truck is easily detected as collision relevant and the car won't drive through this truck so while we're talking about deep learning networks it's really hard to generalize deep learning networks to new examples new scenarios so you can have a really good deep learning network that detects cars trucks and so on whatever but then you get this santa on the road and it's not trivial to generalize and to understand this santa is actually an object which you want to avoid not just background and with point cloud like with this really really simple algorithm these tasks become really easy
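The two-point height test just described can be written in a few lines. This is a minimal illustrative sketch, not Innoviz's actual implementation: it assumes an (N, 3) array of x, y, z points in meters and uses a KD-tree for the neighbour search.

```python
import numpy as np
from scipy.spatial import cKDTree

def collision_relevant_points(points, xy_radius=0.5, height_gap=0.4):
    """Flag points as collision relevant when two nearby points (in the
    ground plane) differ in height by more than ~40 cm."""
    tree = cKDTree(points[:, :2])              # neighbours in x, y only
    relevant = np.zeros(len(points), dtype=bool)
    for i, j in tree.query_pairs(r=xy_radius):
        if abs(points[i, 2] - points[j, 2]) > height_gap:
            relevant[i] = relevant[j] = True
    return relevant

# hypothetical usage on one lidar frame:
# points = np.asarray(frame_xyz)              # shape (N, 3), meters
# mask = collision_relevant_points(points)    # the pink vs green points above
```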
another example is fire trucks maybe ambulances and other cars which are not represented sufficiently in the train set and you probably heard about accidents that might be due to similar reasons and another but related case is a completely different scenario i mean most of the data tends to be from places like north america or europe but then you can end up in india in a city full of rickshaws and you just want to make sure you never collide with them so again with lidar and this really simple algorithm i described before this problem still exists obviously but it's suppressed and it's under control once you can actually measure the scene so now let's look at a little bit more complete example maybe some of you recognize this video from one of the tesla crashes so you can see the white tesla actually crash into this turned over truck so there are many reasons for this crash some say it's lighting others maybe because during training the network never saw a turned over truck so it might be a problem but as omer said and as i mentioned before with lidar this whole accident would be avoided so you can see here how it would look in lidar so the truck is easily detected from really really far and the car would never actually crash into this vehicle and this algorithm is the same algorithm as i described before it's a really simple algorithm that actually [Music] makes all the safety critical cases and others much more solid and under control so maybe some of you guys are saying that detecting this huge truck is easy so here's a different example from our actual footage look at the distance i don't know if you guys can see a tire at this distance it's a tire so this same really simple algorithm just looking at two points one above the other can easily detect this tire as collision relevant so what the car actually sees the car just takes the collision relevant points and maybe projects them on the xy plane like on the ground plane or something and uses this information to navigate through the many other obstacles in the city and this is just a close up you can see this is really a tire and actually a pallet next to the tire that cannot be seen from a distance but as i mentioned before it's not enough i mean getting a good understanding of the static objects and seeing small obstacles and big obstacles is really important and safety critical but it's not enough because eventually we want to understand where these objects are going maybe they're moving at our velocity so we don't need to do anything just be aware or maybe they're going to enter our ego lane so we need to brake so still detecting an object as an object is really important so let's take this example of pedestrian detection this is actually a pedestrian captured by the lidar and i think everybody would agree this is an easy example right it is expected from every average network to say this is actually a pedestrian and classify it as a pedestrian but what about this example i mean here it's not obvious anymore right i mean maybe you can see legs a little bit of head torso but it's not obvious so but still i think a well trained network or system can still say this is a pedestrian given the surrounding maybe more context so here again it's expected to be detected and classified as a pedestrian but what about these
two points these two points really distant points um so now now vehicle is moving really fast and we want to be super aware of any anything even even if it's it's a high distance so what what what can we do what do we expect uh from uh from deep learning network or look at the appearance of the object um it's it's really i think everybody agrees how to say this is a pedestrian but with lidar luckily uh it's not it's not critical i mean we can still uh classify or cluster uh these two points as an object because we know they are close by and if we classified it as an object and now we have a bounding box we can keep track uh and and estimates all the other how to use like velocity shape uh position obviously all that needed uh for the car to to predict uh it's it's uh it's motion and and act accordingly so taking this simple uh really simple uh clustering algorithm and putting on put it on real real scenario like uh normal how we drive uh so so it would look uh roughly like this uh you can see many many clusters of actual cars and objects um around uh with zero uh thin semantic uh but since we don't have the systematic you you would also see also see um like uh other objects which are not relevant uh necessarily for driving was not moving uh classified uh or not classified detected as objects but if our tracking system is good enough we would say okay this is an object i don't know what is it but it's stationary so just don't drive into it but don't worry it will never cross your lane um so you can you can go really far with these two simple algorithms just uh build quality relevant and clustering you can really go far uh with uh with perception task force and who's driving but now the question is it enough i mean is it enough to really create a system which is robust and useful for upper level stack so here's an example uh where this cluster mechanism is not perfect um what what you see here you see in blue is actually deeply on an airport it's the first time showing like deep learning results so this is uh the blue is deep learning that's what detects this this whole object as a trunk but unfortunately the cluster mechanism actually split it into two different objects um and and and reported if we use just the clustering mechanism we would report it as two different objects uh see you can imagine uh this ambiguity or instability uh of the cluster mechanism actually make it a little bit harder uh for for the upper layers of the stack to get a good understanding of what it's in uh and if you're not classified as a truck so the motion model is not it's not clear um and again the upper layer of the autonomous vehicle stuff a truck autonomous vehicle stack uh can't be sure which uh how how this uh uh object would behave uh so so semantic uh is is still important and still critical uh for for this full full system um so now let's see how how we can do uh deep learning uh on on point club so first thing we need to we need to decide is how should we represent the data so now point cloud that sounds just a set of points uh so the first thing we need to uh need to understand while processing point cloud is that it's unstructured it means if we took all the points of of on this car and and order it differently in the memory it will still be the same card still with the exact same scene uh so there are actually a deep learning architecture which take advantages take advantage of this like uh pointing at the multiple class but but for sure this is not standard and i'm going to make sure we understand 
this before from processing data another characteristic which is important and and we need to consider is the sparsity of point cloud if you're looking at point cloud at the cartesian coordinate system and this is the relationship from the top uh so you would see that most of the points are concentrated in the beginning of the scene uh because we sampled the world in spheric coordinate system so this this again [Music] challenged some of the architecture but actually some other architectures actually can leverage from this from these characteristics and and create much more efficient networks and this efficiency with computation gonna i'm gonna speak about it through this stock because uh on autonomous driving and in general processing on the edge computational efficiency efficiency is a key element and sometimes actually defines the solution so it's really important to make sure your algorithms are efficient um another presentation now which is structured okay uh like images is front view uh so you can see the camera image just normal cameras above and below the point cloud which are projected on the on the ladder itself so it looks like uh it looks like an image uh the only difference is that each point here has some has different attributes it has the reflectivity as you can see here but it also has um it also has the the xyz position relative to the sensor um so now now the data is structured and we can apply uh many legacy networks that yes you are available or and leverage from from a lot of legacy uh but but now it's a little bit harder to exploit the 3d measurement characteristic of point cloud for instance even though we know this car and this car roughly the same size and same shape roughly and while looking at it from front view it's a little bit hard to use this advantage now we get a lot of scale per distance and this is a kind of uh increasing data set that we need we need to uh to use in order to have a good generalization but this is useful representation another representation which is also common uh while processing uh polycloud is vocalization uh so uh if we take all the volume uh we wanna process and predict uh road users estimates roses location and we split it into multiple uh smaller volumes like you can see here this is the visualization i'll try to try to give here and in each voxel uh just uh put a representation of the points in it or surrounding uh then we can get again a structured representation of the point um and in each boxer we can we can start with really simple representation like occupancy whether it has points in it or not or we can go for much more complex representations like statistics even small networks that actually estimate the best representation of each boxes and there is a lot of research about it so here is the here is an example uh for for this uh vocalization uh representation so this is the the voxelized map uh looking from the top okay so uh it's uh it's an image uh and each uh each voxel uh is uh represented by by the reflectivity of the center of the central point uh and i put here that give you just some ankles so you can associate these two pictures uh so by the way it's really course representation mostly it's much better um so you can see how the network might see but but now we lost uh maybe key uh information that that we have if we look for the point cloud from the front view now it's a little bit harder to understand and what which object includes which object for instance this break uh in the guardrail uh it's a little bit 
harder to understand that this break is actually due to this car um and and not just due to a break in the garden uh so in order to do this again we need to build uh a network with greater receptive field and a little bit more deeper network to get a deeper understanding of the scene and sometimes i said before one avoid want to still be as efficient as possible but once again likely with point cloud is still easy uh to get all this uh occlusion information and so what you can see here this is the example of an illusion map so all the white is non-occluded point cloud uh so if you take this clustering mechanism and just uh just color all the free space uh you would get this occlusion map so you can add this occlusion map as an extra layer for the network so it has this information in events and you don't need to to create really big fat networks in order to understand this these conclusions so now after we know how to represent data with deep learning and we picked [Music] architecture and the question is what what are the key elements what i want to achieve with the food system so if we take the the cutting use case mentioned um we want to make sure this motorcycle is not in our lane we want to make sure we never crash into this motorcycle uh so first we need to detect it with the lidar and luckily our ladder is good enough for detection of this has a large enough field of view and then you want to put the boundary box around it right so you can tell to the car where it's located where it's going you can you can track it uh but now it's really critical that this bounding box is is really tight uh on this object right because if we just missed by a few centimeters in this case to the right uh we might think that this this motorcycle is actually not in our lane there's no problem because it can just keep going and it might cross cross the motorcycle [Music] uh so we want to get really good accuracy uh with with bounding box with output detection um so if we take this uh voxel presentation uh of the field of view now now we have a problem because on one hand you want to get really dense and really fine grid uh in order to uh to be much more accurate and and to reduce the ambiguity uh between the center of the cell and the actual object in it but as i said before it is computationally extensively expensive so uh we want to still find a way to work with reduced representations uh but let's share this this uh this accuracy um so a possible solution is uh fusion between the deep learning approach and the classical approach leverage uh the best from from each approach to to create a solid uh object list for the output of the stack uh so uh this deep learning uh stream gives you the uh the semantics it can say whether roughly where the object starts where it ends um is it is it a classified object is the car is the pedestrian motorcycle um and and the clustering stream actually gives you the accuracy uh that you need in order to drive and drive the car safely uh so so this is this is an example of how it looks so we again in blue this is the deep learning and and in white this is the clustering so you can see the deep learning no this is the car and actually put a bunk box around around the car but it's not accurate enough it's a few centimeters often and these few centimeters are important uh for for the safety critical objects objects which are close by um and the clustering actually really good uh it fits really good uh the of itself so once we did this uh we actually gained one more uh one 
more thing which is again again important in safety critical systems and this part the clustering the clustering path and the fusion is fully interpretable path and it's really helped to get to root cause of problems and look at the system as a white box so you can understand exactly what it does and and in some cases this is this is important it's really useful that you have a safety path which is fully interpretable so this is how it all kind of uh adds up uh so this is uh the deep learning output you can see the bounding box you can see them a little bit shaky a lot of the objects are fully fully detected all the time uh this is the clustering uh output uh so you can see it's solid but you have many false positives and the object length is not predicted and you don't know which object is obviously uh and this is the fused uh diffuse output so you can kind of get the best from from everything you you have a classes uh you have boundary books which are pretty solid uh and and it's really helpful uh for again for the upper layers of the stack so i know i know i moved fast because i didn't have much time uh but this is it thank you very much thank you so much i'm trying to answer some questions in the meantime and if you want i can answer some of them or i think that maybe one thing that it was it is important for me to to add because there was a question innovate is not developing an autonomous vehicle meaning we're not developing the driving decision we are developing the lidar and the perception software which allows car makers to have a more seamless integration assume that the processing unit of the car maker has a certain input from the camera an object detection classification interface and they are just getting another one from another sensor and you can imagine that they don't really care if it's a lighter or not all they care is that that secondary interface which tells them where things are uh is is in redundant to the other and gives them higher confidence in certain conditions so we're not developing so we are not doing driving decision but we are aiding uh our customers um do you want to ask specific questions you want me to go over questions that came up and you know maybe choose one of those oh uh yeah either one is it's perfectly fine or if anyone else has other questions feel free to just jump in and ask sure someone asked me about weather condition um although it's less related to perception maybe um anyway quickly on that rain is actually not very affecting lidars because drop of water is almost meaningless in terms of how much light is reflected back when you're you know meeting a drop of water in mid-air and even if very with very uh dense rain it's a it's only reducing possibly a few percentages of range fog is like an extrapolation of rain imagine a very large volume of particles that each one of them by itself reflects light back so it creates attenuation of the light going through it doesn't blind it completely but it does reduce it uh quite significantly depends on the density of course um there was a question here let me just check um so when we someone asked about uh false positive etc or actually there is another question i prefer someone asked me um what's uh how what what makes our lidar possibly a better fit to this application compared to others so beyond of course the obvious of cost and size which i think are important for automotive um if you would follow the white paper you would see that there are really um there is a trade-off between different 
parameters it's very important not to fall in love with only one because just again we talked about ranging as example just seeing like doing a lidar that says one kilometer with a single laser pointer is obviously you can say you have a lidar that says one kilometer and you can probably uh spark it and and raise a lot of money but eventually it will not help autonomous vehicles so there is there are many parameters and i think what what innovate is doing uh well is that we have a system that has a very nice working point and tradable meaning that we can actually we we can trade between palmettos but the working point the overall snl that we have in our system is significantly higher which allows us to meet all of the parameters that we show in that document including resolution frame rate it's not only resolution it's also frame rate it's also field of view and of course range so it's not and of course there's the pricing so that's uh i think the white paper explains it probably better than me um there is a question here on classified classifiers i mean maybe this is for you is it possible in theory to rig the loss function of the classifier to be more or maybe that was a joke actually sorry good [Music] so let's start with the training and i think uh i think we have two major concerns uh in training uh one which is related directly to the training is and and it's um sampling uh the most beneficial samples uh for annotation uh i think uh like especially in autonomous driving especially on highway uh most of the saints especially in north america and uh and in europe most of the series is just identical uh so you won't get much uh from just uh sampling uh random random friends for the training and we actually built a system of active learning uh and i've described it in previous talks so you can lock it up tonight so so this is really like a key element and it is like uh make sure that there is there is a question here make sure make sure we have another one i'm sorry sorry yeah i think there might be a little bit of flag but yeah yeah maybe we have time for one more question if there's one more there's someone to ask me a question here about the different types of sliders fmcw and time of flight and it's actually there are there are different camps in the light of space you have the the wavelength camp 905 1550 that's kind of a big kind of discussion and then you have uh the laser modulation whether it's time of flight and fmcw and i think other than that you have the scanning mechanism like whether it's mechanical solid-state or i don't know optical phased array so those are the primarily the the three main kind of branches in in the market uh starting with the question of fmcw and in time of flight so the only benefits uh proclaimed by the fmcw is the ability to do uh direct measurement of velocity meaning it's it's you modulate the laser in a certain way uh that allows you to measure uh both range and velocity by measuring doppler very similar on how you do it with radars the only uh thing that the disadvantage just comes with the need to use 1550 and again very expensive and there is a very strong coupling between the range frame rate and field of view so the trade the working point there is quite limited so fmcw systems can reach around uh 200k samples a second innova is one is about seven mega samples per second and in obvious two it's uh even significantly higher and it means that when you need to trade between resolution number of points frame frame rate and field of view fmcw 
mostly is using a very very narrow periscope kind of lidars because of that limitation and eventually measuring the the velocity of the vehicle in fmcw is is only a possible in the longitude vector because you're measuring velocity in the vector of the of the light direction you cannot measure velocity in the lateral which is as important so the need to calculate velocity is there anyway with time of light you can calculate velocity very nice if you have very high resolution and high frame rate it's not less well and eventually when it comes to the trade-off between parameters definitely resolution range field of view frame rate comes on top of the requirement for velocity and seeing probably tens of rfis and rfqs in the market i haven't seen yet anyone asking for velocity really so uh the the the value there is i think very limited and comes with very high cost um excellent yeah so thank you both so much and maybe one more quick question i know car makers are probably your primary customer but i was wondering do you also send sell your sensor to or others beyond the car makers for example academia and universities doing autonomous driving research you know someone yeah sure we do and we're happy to work with teams that are trying to innovate uh and of course we can talk about it after this session of course of course i mean uh we yes of course i mean we we work with construction companies smart cities surveillance i mean look today every in every corner of the world you have a 2d camera somewhere we live in a 3d volt okay anything you might ever want to automate you would prefer to use a 3d sensor it gives you much faster capability ability to exercise an application i'm sure lidars would be in the same position in several years from today excellent thank you so much and thank you both for for your presentation today
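As a footnote to the time-of-flight versus FMCW answer above, here is an illustrative sketch (made-up numbers, not Innoviz code) of how a high-frame-rate time-of-flight lidar can recover both the longitudinal and the lateral velocity of a tracked object simply by differencing its position across frames.

```python
import numpy as np

def velocity_from_track(centers_xyz, frame_rate_hz):
    """Estimate an object's velocity vector from its tracked centre position
    in consecutive time-of-flight lidar frames (simple finite difference).
    centers_xyz: (T, 3) array of the object's centre over T frames, in meters."""
    centers = np.asarray(centers_xyz, dtype=float)
    dt = 1.0 / frame_rate_hz
    velocities = np.diff(centers, axis=0) / dt       # (T-1, 3) in m/s
    return velocities.mean(axis=0)                   # averaged estimate

# hypothetical track of a car drifting laterally while driving ahead of us
track = [[40.0, 0.0, 0.0], [39.0, 0.2, 0.0], [38.0, 0.4, 0.0]]
print(velocity_from_track(track, frame_rate_hz=10))  # roughly [-10, 2, 0] m/s
```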
MIT_6S191_Introduction_to_Deep_Learning
MIT_6S191_2023_Recurrent_Neural_Networks_Transformers_and_Attention.txt
Hello everyone! I hope you enjoyed Alexander's  first lecture. I'm Ava and in this second lecture, Lecture 2, we're going to focus on this  question of sequence modeling -- how we can build neural networks that can  handle and learn from sequential data. So in Alexander's first lecture he  introduced the essentials of neural networks starting with perceptrons building  up to feed forward models and how you can actually train these models and start  to think about deploying them forward. Now we're going to turn our attention to  specific types of problems that involve sequential processing of data and we'll  realize why these types of problems require a different way of implementing and building  neural networks from what we've seen so far. And I think some of the components in  this lecture traditionally can be a bit confusing or daunting at first but what I  really really want to do is to build this understanding up from the foundations walking  through step by step developing intuition all the way to understanding the math and the  operations behind how these networks operate. Okay so let's let's get started to to  begin I to begin I first want to motivate what exactly we mean when we talk about  sequential data or sequential modeling. So we're going to begin with a really simple  intuitive example let's say we have this picture of a ball and your task is to predict  where this ball is going to travel to next. Now if you don't have any prior information about the trajectory of the ball it's motion  it's history any guess or prediction about its next position is going  to be exactly that a random guess. If however in addition to the current  location of the ball I gave you some information about where it was moving in the  past now the problem becomes much easier and I think hopefully we can all agree that most  likely or most likely next prediction is that this ball is going to move forward  to the right in in the next frame. So this is a really you know reduced down  bare bones intuitive example but the truth is that beyond this sequential  data is really all around us. 
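As a toy illustration of the ball example (made-up coordinates, not from the lecture materials), this tiny numpy sketch shows how even the crudest use of history, constant-velocity extrapolation, turns a random guess into a sensible prediction.

```python
import numpy as np

# past positions of the ball at equally spaced time steps (made-up 2D points)
history = np.array([[0.0, 0.0], [1.0, 0.5], [2.0, 1.0]])

# with no history, any guess is as good as another
random_guess = np.random.uniform(-5, 5, size=2)
print("random guess:", random_guess)

# with history, simple constant-velocity extrapolation is already sensible
velocity = history[-1] - history[-2]
prediction = history[-1] + velocity
print("prediction from history:", prediction)   # -> [3.0, 1.5], moving to the right
```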
As I'm speaking the words coming out of  my mouth form a sequence of sound waves that define audio which we can split up  to think about in this sequential manner similarly text language can be split up into a  sequence of characters or a sequence of words and there are many many more examples in which  sequential processing sequential data is present right from medical signals like EKGs to financial  markets and projecting stock prices to biological sequences encoded in DNA to patterns in the  climate to patterns of motion and many more and so already hopefully you're getting  a sense of what these types of questions and problems may look like and where  they are relevant in the real world when we consider applications of sequential  modeling in the real world we can think about a number of different kind of problem definitions  that we can have in our Arsenal and work with in the first lecture Alexander introduced the  Notions of classification and the notion of regression where he talked about and we learned  about feed forward models that can operate one to one in this fixed and static setting right  given a single input predict a single output the binary classification example of  will you succeed or pass this class here there's there's no notion of sequence  there's no notion of time now if we introduce this idea of a sequential component we can  handle inputs that may be defined temporally and potentially also produce a sequential  or temporal output so for as one example we can consider text language and maybe we want to  generate one prediction given a sequence of text classifying whether a message is a  positive sentiment or a negative sentiment conversely we could have a single input let's say  an image and our goal may be now to generate text or a sequential description of this image right  given this image of a baseball player throwing a ball can we build a neural network that generates  that as a language caption finally we can also consider applications and problems where we have  sequence in sequence L for example if we want to translate between two languages and indeed this  type of thinking in this type of Architecture is what powers the task of machine translation  in your phones in Google Translate and and many other examples so hopefully right this has given  you a picture of what sequential data looks like what these types of problem definitions may look  like and from this we're going to start and build up our understanding of what neural networks we  can build and train for these types of problems so first we're going to begin with the notion  of recurrence and build up from that to Define recurrent neural networks and in the last  portion of the lecture we'll talk about the underlying mechanisms underlying the Transformer  architectures that are very very very powerful in terms of handling sequential data but as I said  at the beginning right the theme of this lecture is building up that understanding step by step  starting with the fundamentals and the intuition so to do that we're going to go back revisit  the perceptron and move forward from there right so as Alexander introduced where we  studied the perception perceptron in lecture one the perceptron is defined by this single  neural operation where we have some set of inputs let's say X1 through XM and each of these  numbers are multiplied by a corresponding weight pass through a non-linear activation function  that then generates a predicted output y hat here we can have multiple inputs  coming in to 
generate our output but still these inputs are not thought of as  points in a sequence or time steps in a sequence even if we scale this perceptron and start  to stack multiple perceptrons together to Define these feed forward neural networks we still  don't have this notion of temporal processing or sequential information even though we are able to  translate and convert multiple inputs apply these weight operations apply this non-linearity  to then Define multiple predicted outputs so taking a look at this diagram right on the left  in blue you have inputs on the right in purple you have these outputs and the green defines the  neural the single neural network layer that's transforming these inputs to the outputs Next  Step I'm going to just simplify this diagram I'm going to collapse down those stack perceptrons  together and depict this with this green block still it's the same operation going  on right we have an input Vector being being transformed to predict this output  vector now what I've introduced here which you may notice is this new variable T right  which I'm using to denote a single time step we are considering an input at a single time  step and using our neural network to generate a single output corresponding to that how could  we start to extend and build off this to now think about multiple time steps and how we could  potentially process a sequence of information well what if we took this diagram all I've done  is just rotated it 90 degrees where we still have this input vector and being fed in producing an  output vector and what if we can make a copy of this network right and just do this operation  multiple times to try to handle inputs that are fed in corresponding to different times right  we have an individual time step starting with t0 and we can do the same thing the same operation  for the next time step again treating that as an isolated instance and keep doing this repeatedly  and what you'll notice hopefully is all these models are simply copies of each other just with  different inputs at each of these different time steps and we can make this concrete right in terms  of what this functional transformation is doing the predicted output at a particular time step  y hat of T is a function of the input at that time step X of T and that function is what is  learned and defined by our neural network weights okay so I've told you that our goal here is  Right trying to understand sequential data do sequential modeling but what could  be the issue with what this diagram is showing and what I've shown you  here well yeah go ahead [Music] exactly that's exactly right so the student's  answer was that X1 or it could be related to X naught and you have this temporal dependence but  these isolated replicas don't capture that at all and that's exactly answers the question perfectly  right here a predicted output at a later time step could depend precisely on inputs at previous  time steps if this is truly a sequential problem with this temporal dependence so how could we  start to reason about this how could we Define a relation that links the Network's computations  at a particular time step to Prior history and memory from previous time steps well what if we  did exactly that right what if we simply linked the computation and the information understood  by the network to these other replicas via what we call a recurrence relation what this means is  that something about what the network is Computing at a particular time is passed on to those  later time steps and we 
Define that according to this variable H which we call this internal  state or you can think of it as a memory term that's maintained by the neurons and the  network and it's this state that's being passed time set to time step as we read in  and and process this sequential information what this means is that the Network's output  its predictions its computations is not only a function of the input data X but also we have  this other variable H which captures this notion of State captions captures this notion of memory  that's being computed by the network and passed on over time specifically right to walk through this  our predicted output y hat of T depends not only on the input at a time but also this past memory  this past state and it is this linkage of temporal dependence and recurrence that defines this idea  of a recurrent neural unit what I've shown is this this connection that's being unrolled over time  but we can also depict this relationship according to a loop this computation to this internal State  variable h of T is being iteratively updated over time and that's fed back into the neuron the  neurons computation in this recurrence relation this is how we Define these recurrent cells  that comprise recurrent neural networks or and the key here is that we have this this idea of  this recurrence relation that captures the cyclic temporal dependency and indeed it's this idea  that is really the intuitive Foundation behind recurrent neural networks or rnns and so let's  continue to build up our understanding from here and move forward into how we can actually Define  the RNN operations mathematically and in code so all we're going to do is formalize this  relationship a little bit more the key idea here is that the RNN is maintaining the state  and it's updating the state at each of these time steps as the sequence is is processed we  Define this by applying this recurrence relation and what the recurrence relation captures is how  we're actually updating that internal State h of t specifically that state update is exactly like any  other neural network operator operation that we've introduced so far where again we're learning  a function defined by a set of Weights w we're using that function to update the cell State h  of t and the additional component the newness here is that that function depends both on the  input and the prior time step h of T minus one and what you'll know is that this function f sub  W is defined by a set of weights and it's the same set of Weights the same set of parameters  that are used time step to time step as the recurrent neural network processes this temporal  information the sequential data okay so the key idea here hopefully is coming coming through is  that this RNN stay update operation takes this state and updates it each time a sequence is  processed we can also translate this to how we can think about implementing rnns in Python code  or rather pseudocode hopefully getting a better understanding and intuition behind how these  networks work so what we do is we just start by defining an RNN for now this is abstracted away  and we start we initialize its hidden State and we have some sentence right let's say this is  our input of Interest where we're interested in predicting maybe the next word that's occurring in  this sentence what we can do is Loop through these individual words in the sentence that Define our  temporal input and at each step as We're looping through each word in that sentence is fed into  the RNN model along with the 
previous hidden state and this is what generates a prediction for  the next word and updates the RNN state in turn finally our prediction for the final word in  the sentence the word that we're missing is simply the rnn's output after all the prior  words have been fed in through the model so this is really breaking down how the RNN Works  how it's processing the sequential information and what you've noticed is that the  RNN computation includes both this update to the hidden State as well  as generating some predicted output at the end that is our ultimate  goal that we're interested in and so to walk through this how we're actually  generating the output prediction itself what the RNN computes is given some input vector  it then performs this update to the hidden state and this update to the head and state is just  a standard neural network operation just like we saw in the first lecture where it consists of  taking a weight Matrix multiplying that by the previous hidden State taking another weight Matrix  multiplying that by the input at a time step and applying a non-linearity and in this case right  because we have these two input streams the input data X of T and the previous state H we have these  two separate weight matrices that the network is learning over the course of its training that  comes together we apply the non-linearity and then we can generate an output at a given  time step by just modifying the hidden state using a separate weight Matrix to update this  value and then generate a predicted output and that's what there is to it right  that's how the RNN in its single operation updates both the hidden State  and also generates a predicted output okay so now this gives you the internal working  of how the RNN computation occurs at a particular time step let's next think about how this looks  like over time and Define the computational graph of the RNN as being unrolled or expanded acrost  across time so so far the dominant way I've been showing the rnns is according to this loop-like  diagram on the Left Right feeding back in on itself another way we can visualize and think  about rnns is as kind of unrolling this recurrence over time over the individual time steps in our  sequence what this means is that we can take the network at our first time step and continue  to iteratively unroll it across the time steps going on forward all the way until we process all  the time steps in our input now we can formalize this diagram a little bit more by defining the  weight matrices that connect the inputs to the hidden State update and the weight matrices that  are used to update the internal State across time and finally the weight matrices that Define  the the update to generate a predicted output now recall that in all these cases right for all  these three weight matrices add all these time steps we are simply reusing the same weight  matrices right so it's one set of parameters one set of weight matrices that just process this  information sequentially now you may be thinking okay so how do we actually start to be thinking  about how to train the RNN how to define the loss given that we have this temporal processing in  this temporal dependence well a prediction at an individual time step will simply amount to  a computed loss at that particular time step so now we can compare those predictions time step  by time step to the true label and generate a loss value for those timestamps and finally we can get  our total loss by taking all these individual loss terms together and 
summing them, defining the total loss for a particular input to the RNN. Now we can walk through an example of how we implement this RNN in TensorFlow, starting from scratch. The RNN can be defined as a layer operation and a layer class, like Alexander introduced in the first lecture, and so we can define it according to an initialization of weight matrices and an initialization of a hidden state, which commonly amounts to initializing these to zero. Next we can define how we actually pass forward through the RNN network to process a given input x, and what you'll notice is that in this forward operation the computations are exactly like we just walked through: we first update the hidden state according to that equation we introduced earlier, and then generate a predicted output that is a transformed version of that hidden state. Finally, at each time step we return both the output and the updated hidden state, as this is what needs to be stored to continue this RNN operation over time. What is very convenient is that although you could define your RNN network and your RNN layer completely from scratch, TensorFlow abstracts this operation away for you, so you can simply define a simple RNN according to the call that you're seeing here, which makes all the computations very efficient and very easy, and you'll actually get practice implementing and working with RNNs in today's software lab.
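To make that description concrete, here is a minimal sketch of what such a from-scratch RNN cell might look like in TensorFlow. The class and variable names (MyRNNCell, rnn_units, input_dim) are illustrative choices for this sketch, not the exact code from the course labs:

```python
import tensorflow as tf

class MyRNNCell(tf.keras.layers.Layer):
    def __init__(self, rnn_units, input_dim, output_dim):
        super().__init__()
        # Weight matrices: input-to-hidden, hidden-to-hidden, hidden-to-output
        self.W_xh = self.add_weight(shape=(input_dim, rnn_units), initializer="glorot_uniform")
        self.W_hh = self.add_weight(shape=(rnn_units, rnn_units), initializer="glorot_uniform")
        self.W_hy = self.add_weight(shape=(rnn_units, output_dim), initializer="glorot_uniform")
        # Hidden state, commonly initialized to zero
        self.h = tf.zeros((1, rnn_units))

    def call(self, x):
        # Update the hidden state: h_t = tanh(h_{t-1} W_hh + x_t W_xh)
        self.h = tf.math.tanh(tf.matmul(self.h, self.W_hh) + tf.matmul(x, self.W_xh))
        # Generate the predicted output as a transformed version of the hidden state
        y = tf.matmul(self.h, self.W_hy)
        # Return both, since the hidden state must be carried to the next time step
        return y, self.h

# In practice TensorFlow abstracts this away; a built-in layer does the same thing:
rnn_layer = tf.keras.layers.SimpleRNN(units=64)
```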
Okay, so that gives us the understanding of RNNs, and going back to what I described as the problem setups or the problem definitions at the beginning of this lecture, I just want to remind you of the types of sequence modeling problems to which we can apply RNNs. We can think about taking a sequence of inputs and producing one predicted output at the end of the sequence; we can think about taking a static single input and trying to generate text according to that single input; and finally we can think about taking a sequence of inputs, producing a prediction at every time step in that sequence, and then doing this sequence-to-sequence type of prediction and translation. This will be the foundation for the software lab today, which will focus on this problem of many-to-many processing and many-to-many sequential modeling, taking a sequence and going to a sequence. What is common and what is universal across all these types of problems and tasks that we may want to consider with RNNs is what I like to think about as the type of design criteria we need to build a robust and reliable network for processing these sequential modeling problems. What I mean by that is: what are the characteristics, what are the design requirements, that the RNN needs to fulfill in order to be able to handle sequential data effectively? The first is that sequences can be of different lengths, right, they may be short, they may be long; we want our RNN model, or our neural network model in general, to be able to handle sequences of variable lengths. Secondly, and really importantly, as we were discussing earlier, the whole point of thinking about things through the lens of sequences is to try to track and learn dependencies in the data that are related over time, so our model really needs to be able to handle those different dependencies, which may occur at times that are very distant from each other. Next, sequence is all about order: there's some notion of how current inputs depend on prior inputs, and the specific order of the observations we see has a big effect on what prediction we may want to generate at the end. And finally, in order to be able to process this information effectively, our network needs to be able to do what we call parameter sharing, meaning that given one set of weights, that set of weights should be able to apply to different time steps in the sequence and still result in a meaningful prediction. And so today we're going to focus on how recurrent neural networks meet these design criteria, and how these design criteria motivate the need for even more powerful architectures that can outperform RNNs in sequence modeling. To understand these criteria very concretely, we're going to consider a sequence modeling problem where, given some series of words, our task is just to predict the next word in that sentence. So let's say we have this sentence, 'this morning I took my cat for a walk', and our task is to predict the last word in the sentence given the prior words: 'this morning I took my cat for a ___'. Our goal is to take our RNN, define it, and put it to the test on this task. What is our first step to doing this? Well, the very first step, before we even think about defining the RNN, is how we can actually represent this information to the network in a way that it can process and understand. If we have a model that is processing this text-based data and wanting to generate text as the output, a problem can arise in that the neural network itself is not equipped to handle language explicitly. Remember that neural networks are simply functional operators, they're just mathematical operations, and so we can't expect it to have an understanding from the start of what a word is or what language means, which means that we need a way to represent language numerically so that it can be passed in to the network to process. So what we do is define a way to translate this text, this language information, into a numerical encoding, a vector, an array of numbers that can then be fed in to our neural network, which then generates a vector of numbers as its output. Now this raises the question of how we actually define this transformation, how we can transform language into this numerical encoding. The key solution, and the key way that a lot of these networks work, is this notion and concept of embedding. What that means is it's some transformation that takes indices, or something that can be represented as an index, into a numerical vector of a given size. So if we think about how this idea of embedding works for language data, let's consider a vocabulary of words that we can possibly have in our language, and our goal is to be able to map these individual words in our vocabulary to a numerical vector of fixed size. One way we could do this is by defining all the possible words that could occur in this vocabulary and then indexing them, assigning an index label to each of these distinct words: 'a' corresponds to index one, 'cat' corresponds to index two, and so on and so forth, and this indexing maps these individual words to unique indices. What these indices can then define is what we call an embedding vector, which is a fixed-length encoding where we've simply indicated a one value at the index for that word when we observe that word. This is called a one-hot embedding, where we have this fixed-length vector of the size of our vocabulary and each word in the vocabulary corresponds to a one at its corresponding index.
This is a very sparse way to do this, and it's based purely on the count index; there's no notion of semantic information, of meaning, captured in this vector-based encoding. Alternatively, what is very commonly done is to actually use a neural network to learn an encoding, to learn an embedding, and the goal here is that we can learn a neural network that captures some inherent meaning or inherent semantics in our input data and maps related words or related inputs closer together in this embedding space, meaning that they'll have numerical vectors that are more similar to each other. This concept is really foundational to how these sequence modeling networks work and how neural networks work in general.
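As a concrete illustration of the two options just described, here is a small sketch, with a made-up toy vocabulary, of a one-hot encoding versus a learned embedding layer in TensorFlow:

```python
import tensorflow as tf

# Toy vocabulary mapping words to unique indices (illustrative only)
vocab = {"a": 0, "cat": 1, "dog": 2, "walk": 3}
word_ids = tf.constant([vocab["cat"], vocab["walk"]])

# Option 1: sparse one-hot encoding: a vector the size of the vocabulary
# with a single 1 at the word's index; no semantic meaning is captured.
one_hot = tf.one_hot(word_ids, depth=len(vocab))

# Option 2: a learned embedding: a trainable lookup table that maps each
# index to a dense vector; related words can end up close together in this space.
embedding = tf.keras.layers.Embedding(input_dim=len(vocab), output_dim=8)
dense_vectors = embedding(word_ids)   # shape (2, 8)
```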
Okay, so with that in hand we can go back to our design criteria, thinking about the capabilities that we desire. First, we need to be able to handle variable-length sequences: if we again want to predict the next word in the sequence, we can have short sequences, we can have long sequences, we can have even longer sentences, and our key task is that we want to be able to track dependencies across all these different lengths. What we mean by dependencies is that there could be information very early on in a sequence that may not be relevant or come up until very much later in the sequence, and we need to be able to track these dependencies and maintain this information in our network. Dependencies relate to order, and sequences are defined by their order, and we know that the same words in a completely different order have completely different meanings, right, so our model needs to be able to handle these differences in order and the differences in length that could result in different predicted outputs. Okay, so hopefully going through that example in text motivates how we can think about transforming input data into a numerical encoding that can be passed into the RNN, and also what are the key criteria that we want to meet in handling these types of problems. So far we've painted the picture of RNNs: how they work, the intuition, their mathematical operations, and what are the key criteria that they need to meet. The final piece to this is how we actually train and learn the weights in the RNN, and that's done through the backpropagation algorithm, with a bit of a twist to handle sequential information. If we go back and think about how we train feedforward neural network models, the steps break down as follows: starting with an input, we first take this input and make a forward pass through the network, going from input to output. The key to backpropagation that Alexander introduced was this idea of taking the prediction and backpropagating gradients back through the network, and using this operation to define and update the loss with respect to each of the parameters in the network, in order to gradually adjust the parameters, the weights of the network, to minimize the overall loss. Now with RNNs, as we walked through earlier, we have this temporal unrolling, which means that we have these individual losses across the individual steps in our sequence that sum together to comprise the overall loss. What this means is that when we do backpropagation, instead of backpropagating errors through a single network, we have to backpropagate the loss through each of these individual time steps, and after we backpropagate loss through each of the individual time steps, we then do that across all time steps, all the way from our current time t back to the beginning of the sequence. And this is why this algorithm is called backpropagation through time, right, because as you can see the data and the predictions and the resulting errors are fed back in time, all the way from where we are currently to the very beginning of the input data sequence. So backpropagation through time is actually a very tricky algorithm to implement in practice, and the reason for this is, if we take a close look at how gradients flow across the RNN, this algorithm involves many repeated computations and multiplications of these weight matrices against each other: in order to compute the gradient with respect to the very first time step, we have to make many of these multiplicative repeats of the weight matrix. Why might this be problematic? Well, if this weight matrix W is very big, what this can result in is what we call the exploding gradient problem, where the gradients that we're trying to use to optimize our network do exactly that: they blow up, they explode, they get really big, and that makes it infeasible to train the network stably. What we do to mitigate this is a pretty simple solution called gradient clipping, which effectively scales back these very big gradients to constrain them in a more restricted way.
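In practice, gradient clipping is often just a one-line change; here is a minimal sketch of two common ways to do it in TensorFlow (the threshold value 1.0 is an arbitrary illustrative choice, and model, loss_fn, x, y are assumed to be defined elsewhere):

```python
import tensorflow as tf

# Option 1: ask the optimizer to clip the global gradient norm for you
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3, clipnorm=1.0)

# Option 2: clip manually inside a custom training step
def train_step(model, loss_fn, x, y):
    with tf.GradientTape() as tape:
        loss = loss_fn(y, model(x))
    grads = tape.gradient(loss, model.trainable_variables)
    grads, _ = tf.clip_by_global_norm(grads, clip_norm=1.0)  # scale back very big gradients
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss
```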
Conversely, we can have the instance where the weight matrices are very small, and if these weight matrices are very small we end up with a very small value at the end as a result of these repeated weight matrix computations and repeated multiplications, and this is a very real problem in RNNs in particular, where we can fall into this problem called a vanishing gradient, where now your gradient has dropped down close to zero and again you can't train the network stably. Now there are particular tools that we can implement to try to mitigate the vanishing gradient problem, and we'll touch on each of these three solutions briefly: how we can define the activation function in our network, how we can initialize the parameters, and how we can change the network architecture itself to better handle this vanishing gradient problem. Before we do that, I want to take just one step back to give you a little more intuition about why vanishing gradients can be a real issue for recurrent neural networks. The point I've kept trying to reiterate is this notion of dependency in the sequential data and what it means to track those dependencies. Well, if the dependencies are constrained to a small space, not separated out that much by time, this repeated gradient computation and repeated weight matrix multiplication is not so much of a problem. If we have a very short sequence where the words are very closely related to each other and it's pretty obvious what our next output is going to be, the RNN can use the immediately past information to make a prediction, and so there's not that much of a requirement to learn effective weights if the related information is close to each other temporally. Conversely, if we have a sentence where we have a more long-term dependency, what this means is that we need information from way further back in the sequence to make our prediction at the end, and that gap between what's relevant and where we are currently becomes exceedingly large, and therefore the vanishing gradient problem is increasingly exacerbated, meaning that the RNN becomes unable to connect the dots and establish this long-term dependency, all because of this vanishing gradient issue. So the modifications that we can make to our network to try to alleviate this problem are threefold. The first is that we can simply change the activation functions in each of our neural network layers to safeguard the gradients from shrinking in instances where the data is greater than zero, and this is in particular true for the ReLU activation function; the reason is that in all instances where x is greater than zero, the derivative of the ReLU function is one, and so it is not less than one, and therefore it helps in mitigating the vanishing gradient problem. Another trick is how we initialize the parameters in the network itself to prevent them from shrinking to zero too rapidly, and there are mathematical ways that we can do this, namely by initializing our weights to identity matrices, and this effectively helps in practice to prevent the weight updates from shrinking too rapidly to zero. However, the most robust solution to the vanishing gradient problem is to introduce a slightly more complicated version of the recurrent neural unit to be able to more effectively track and handle long-term dependencies in the data, and this is the idea of gating. The idea is, by selectively controlling the flow of information into the neural unit, to be able to filter out what's not important while maintaining what is important, and the key and most popular type of recurrent unit that achieves this gated computation is called the LSTM, or long short-term memory network. Today we're not going to go into detail on LSTMs, their mathematical details, their operations and so on, but I just want to convey the key idea and intuition about why these LSTMs are effective at tracking long-term dependencies. The core is that the LSTM is able to control the flow of information through these gates to more effectively filter out the unimportant things and store the important things. You can implement LSTMs in TensorFlow just as you would an RNN, but the core concept that I want you to take away when thinking about the LSTM is this idea of controlled information flow through gates. Very briefly, the way that the LSTM operates is by maintaining a cell state, just like a standard RNN, and that cell state is independent from what is directly outputted. The way the cell state is updated is according to these gates that control the flow of information: forgetting and eliminating what is irrelevant, storing the information that is relevant, updating the cell state in turn, and then filtering this updated cell state to produce the predicted output, just like the standard RNN. And again we can train the LSTM using the backpropagation through time algorithm, but the mathematics of how the LSTM is defined allows for a completely uninterrupted flow of the gradients, which largely eliminates the vanishing gradient problem that I introduced earlier. Again, if you're interested in learning more about the mathematics and the details of LSTMs, please come and discuss with us after the lectures, but again I'm just emphasizing the core concept and the intuition behind how the LSTM operates.
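Since, as noted above, you can implement LSTMs in TensorFlow just as you would a simple RNN, here is a minimal sketch of swapping one in; the layer sizes and input shapes are arbitrary illustrative choices:

```python
import tensorflow as tf

# A simple RNN layer and its gated counterpart are drop-in replacements for each other
simple_rnn = tf.keras.layers.SimpleRNN(units=64, return_sequences=True)
lstm = tf.keras.layers.LSTM(units=64, return_sequences=True)

# Example: applying the LSTM to a batch of 8 sequences, each with 100 time steps of 32 features
x = tf.random.normal((8, 100, 32))
hidden_sequence = lstm(x)   # shape (8, 100, 64)
```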
Okay, so far we've covered a lot of ground: we've gone through the fundamental workings of RNNs, the architecture, the training, the types of problems that they've been applied to, and I'd like to close this part by considering some concrete examples of how you're going to use RNNs in your software lab, and that is going to be in the task of music generation, where you're going to work to build an RNN that can predict the next musical note in a sequence and use it to generate brand new musical sequences that have never been realized before. To give you an example of just the quality and type of output that you can aim towards, a few years ago there was a work that trained an RNN on a corpus of classical music data, and famously there's this composer Schubert who wrote a famous unfinished symphony that consisted of two movements, but he was unable to finish his symphony before he died, so he left the third movement unfinished. So a few years ago a group trained an RNN-based model to actually try to generate the third movement to Schubert's famous unfinished symphony given the prior two movements, so I'm going to play the result right now. [Music] Okay, I paused it, I interrupted it quite abruptly there, but if there are any classical music aficionados out there hopefully you get an appreciation for the quality that was generated in terms of the music quality, and this was already from a few years ago, and as we'll see in the next lectures, continuing with this theme of generative AI, the power of these algorithms has advanced tremendously since we first played this example, particularly in a whole range of domains which I'm excited to talk about, but not for now. Okay, so you'll tackle this problem head-on in today's lab on RNN music generation. As another example, we can think about the simple case of an input sequence mapping to a single output, with sentiment classification, where we can think about, for example, text like tweets and assigning positive or negative labels to these text examples based on the content that is learned by the network.
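To make that many-to-one setup concrete, here is a minimal sketch of what such a sentiment classifier could look like in Keras; the vocabulary size, embedding size, and layer sizes are all made-up illustrative values, not a prescribed architecture:

```python
import tensorflow as tf

# Sequence in (token indices), single sentiment probability out
sentiment_model = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=10000, output_dim=64),  # learned word embeddings
    tf.keras.layers.LSTM(units=64),                             # encode the whole sequence
    tf.keras.layers.Dense(1, activation="sigmoid"),             # positive vs. negative
])
sentiment_model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```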
Okay, so this kind of concludes the portion on RNNs, and I think it's quite remarkable that, using all the foundational concepts and operations that we've talked about so far, we've been able to build up networks that handle this complex problem of sequential modeling. But like any technology, the RNN is not without limitations, so what are some of those limitations and what are some potential issues that can arise with using RNNs or even LSTMs? The first is this idea of encoding and dependency in terms of the temporal separation of the data that we're trying to process. What RNNs require is that the sequential information is fed in and processed time step by time step, and what that imposes is what we call an encoding bottleneck, right, where we're trying to encode a lot of content, for example a very large body of text with many different words, into a single output that may be just at the very last time step. How do we ensure that all that information leading up to that time step was properly maintained and encoded and learned by the network? In practice this is very challenging, and a lot of information can be lost. Another limitation is that by doing this time step by time step processing, RNNs can be quite slow; there is not really an easy way to parallelize that computation. And finally, together, these components of the encoding bottleneck and the requirement to process this data step by step impose the biggest problem, which is long memory: the capacity of the RNN and the LSTM is really not that long, and we can't really handle data of tens of thousands or hundreds of thousands of steps, or even beyond, of sequential information effectively enough to learn the complete amount of information and patterns that are present within such a rich data source. And so because of this, very recently there's been a lot of attention on how we can move beyond this notion of step-by-step recurrent processing to build even more powerful architectures for processing sequential data. To understand how we can start to do this, let's take a big step back and think about the high-level goal of sequence modeling that I introduced at the very beginning: given some input, a sequence of data, we want to build a feature encoding, use our neural network to learn that, and then transform that feature encoding into a predicted output. What we saw is that RNNs use this notion of recurrence to maintain order information, processing information time step by time step, but as I just mentioned we had these three key bottlenecks to RNNs. What we really want to achieve is to go beyond these bottlenecks and achieve even higher capabilities in terms of the power of these models: rather than having an encoding bottleneck, ideally we want to process information continuously, as a continuous stream of information; rather than being slow, we want to be able to parallelize computations to speed up processing; and finally, of course, our main goal is to really try to establish long memory that can build a nuanced and rich understanding of sequential data. The limitation of RNNs that's linked to all these problems and issues, and to our inability to achieve these capabilities, is that they require this time step by time step processing. So what if we could move beyond that? What if we could eliminate this need for recurrence entirely and not have to process the data time step by time step? Well, a first and naive approach would be to just squash all the data, all the time steps, together to create a vector that's effectively concatenated: the time steps are eliminated, there's just one stream where we have now one vector input with the data from all time points; that's then fed into the model, it calculates some feature vector, and then generates some output, which hopefully makes sense. And because we've squashed all these time steps together, we could simply think about building a feedforward network that could do this computation. Well, with that we'd eliminate the need for recurrence, but we still have the issues that it's not scalable, because the dense feedforward network would have to be immensely large, defined by many many different connections, and critically we've completely lost our order information by just squashing everything together blindly: there's no temporal dependence, and we're then stuck in our ability to establish long-term memory. So what if instead we could still think about bringing these time steps together, but be a bit more clever about how we try to extract information from this input data? The key idea is being able to identify and attend to what is important in a potentially sequential stream of information, and this is the notion of attention, or self-attention, which is an extremely powerful concept in modern deep learning and AI.
I cannot overstate, I cannot emphasize enough, how powerful this concept is. Attention is the foundational mechanism of the Transformer architecture, which many of you may have heard about, and the notion of a Transformer can often be very daunting, because sometimes they're presented with these really complex diagrams or deployed in complex applications, and you may think, okay, how do I even start to make sense of this? At its core, though, attention, the key operation, is a very intuitive idea, and in the last portion of this lecture we're going to break it down step by step to see why it's so powerful and how we can use it as part of a larger neural network like a Transformer. Specifically, we're going to be talking about and focusing on this idea of self-attention: attending to the most important parts of an input example. So let's consider an image; I think it's most intuitive to consider an image first. This is a picture of Iron Man, and if our goal is to try to extract the important information from this image, what we could do maybe is use our eyes to naively scan over this image pixel by pixel, just going across the image. However, our brains, maybe internally they're doing some type of computation like this, but you and I can simply look at this image and attend to the important parts: we can see that it's Iron Man coming at you, right, in the image, and then we can focus in a little further and say, okay, what are the details about Iron Man that may be important, what is key? What you're doing is your brain is identifying which parts to attend to and then extracting those features that deserve the highest attention. The first part of this problem is really the most interesting and challenging one, and it's very similar to the concept of search: effectively that's what search is doing, taking some larger body of information and trying to extract and identify the important parts. So let's go there next: how does search work? You're thinking, you're in this class, how can I learn more about neural networks? Well, in this day and age, one thing you may do besides coming here and joining us is going to the internet, with all the videos out there, trying to find something that matches: doing a search operation. So you have a giant database like YouTube, you want to find a video, you enter in your query, 'deep learning', and what comes out are some possible outputs. For every video in the database there is going to be some key information related to that video, let's say the title. Now, to do the search, the task is to find the overlaps between your query and each of these titles, the keys in the database; what we want to compute is a metric of similarity and relevance between the query and these keys: how similar are they to our desired query? And we can do this step by step: let's say this first option, a video about the elegant giant sea turtles, not that similar to our query about deep learning; our second option, introduction to deep learning, the first introductory lecture of this class, yes, highly relevant; the third option, a video about the late and great Kobe Bryant, not that relevant. The key operation here is that there is this similarity computation bringing the query and the key together. The final step is, now that we've identified which key is relevant, extracting the relevant information, what we want to pay attention to, and that's the video itself: we call this the value. And because the search is
implemented well, we've successfully identified the relevant video on deep learning that you are going to want to pay attention to, and it's this idea, this intuition, of giving a query, trying to find similarity, and trying to extract the related values that forms the basis of self-attention and how it works in neural networks like Transformers. So to go concretely into this, let's go back now to our text, our language example, with the sentence. Our goal is to identify and attend to features in this input that are relevant to the semantic meaning of the sentence. Now, first step: we have sequence, we have order, we've eliminated recurrence, right, we're feeding in all the time steps all at once; we still need a way to encode and capture this information about order and this positional dependence. How this is done is this idea of positional encoding, which captures some inherent order information present in the sequence. I'm just going to touch on this very briefly, but the idea is related to the idea of embeddings which I introduced earlier: what is done is that a neural network layer is used to encode positional information that captures the relative relationships in terms of order within this text. That's the high-level concept, right: we're still able to process these time steps all at once, there is no notion of time step, rather the data is singular, but still we learn this encoding that captures the positional order information. Now our next step is to take this encoding and figure out what to attend to, exactly like that search operation that I introduced with the YouTube example: extracting a query, extracting a key, extracting a value, and relating them to each other. So we use neural network layers to do exactly this: given this positional encoding, what attention does is apply a neural network layer transforming that, first generating the query; we do this again using a separate neural network layer, a different set of weights, a different set of parameters, that then transform that positional embedding in a different way, generating a second output, the key; and finally this operation is repeated with a third layer, a third set of weights, generating the value. Now with these three in hand, the query, the key, and the value, we can compare them to each other to try to figure out where in that self-input the network should attend to what is important, and that's the key idea behind this similarity metric, or what you can think of as an attention score. What we're doing is computing a similarity score between a query and a key, and remember that these query and key values are just arrays of numbers; we can define them as arrays of numbers, which you can think of as vectors in space: the query values are some vector, the key values are some other vector, and mathematically the way that we can compare these two vectors to understand how similar they are is by taking the dot product and scaling it. This captures how similar these vectors are, whether or not they're pointing in the same direction, right; this is the similarity metric, and if you are familiar with a little bit of linear algebra, this is also known as the cosine similarity operation. It functions exactly the same way for matrices: if we apply this dot product operation to our query and key matrices, we get this similarity metric out. Now this is very key in defining our next step, computing the attention weighting, in terms of what the
network should actually attend to within this input. This operation gives us a score which defines how the components of the input data are related to each other. So given a sentence, when we compute this similarity score metric, we can then begin to think of weights that define the relationship between the components of the sequential data and each other. For example, in this example with a text sentence, 'he tossed the tennis ball to serve', the goal with the score is that words in the sequence that are related to each other should have high attention weights: 'ball' related to 'tossed' related to 'tennis'. And this metric itself is our attention weighting: what we have done is pass that similarity score through a softmax function, which all it does is constrain those values to be between 0 and 1, and so you can think of these as relative scores, relative attention weights. Finally, now that we have this metric that captures this notion of similarity and these internal self-relationships, we can use this metric to extract features that are deserving of high attention, and that's the exact final step in this self-attention mechanism: we take that attention weighting matrix, multiply it by the value, and get a transformed version of the initial data as our output, which in turn reflects the features that correspond to high attention. All right, let's take a breath, let's recap what we have just covered so far. The goal with this idea of self-attention, the backbone of Transformers, is to eliminate recurrence and attend to the most important features in the input data. In an architecture, how this is actually deployed is: first we take our input data and compute these positional encodings; neural network layers are applied threefold to transform the positional encoding into each of the query, key, and value matrices; we can then compute the self-attention weight score according to the dot product operation that we went through prior, and then self-attend to this information to extract the features that deserve high attention.
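Here is a minimal sketch of that single self-attention head as just recapped, written with plain TensorFlow operations; the dimension d_model and the Dense layers are illustrative stand-ins for the query, key, and value transformations described above:

```python
import tensorflow as tf

d_model = 64  # size of the (positionally encoded) token embeddings, arbitrary choice

# Three separate layers (three separate sets of weights) produce query, key, and value
W_q = tf.keras.layers.Dense(d_model)
W_k = tf.keras.layers.Dense(d_model)
W_v = tf.keras.layers.Dense(d_model)

def self_attention(x):
    # x: (batch, sequence_length, d_model), already positionally encoded
    q, k, v = W_q(x), W_k(x), W_v(x)
    # Scaled dot-product similarity between every query and every key
    scores = tf.matmul(q, k, transpose_b=True) / tf.math.sqrt(tf.cast(d_model, tf.float32))
    # Softmax constrains the attention weights to lie between 0 and 1
    attention_weights = tf.nn.softmax(scores, axis=-1)
    # Weight the values by the attention weights to extract high-attention features
    return tf.matmul(attention_weights, v)
```

The scaling by the square root of the dimension is the usual way the "dot product and scaling" step mentioned above is realized in practice, keeping the scores in a reasonable range before the softmax.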
What is so powerful about this approach, in taking this attention weighting and putting it together with the value to extract high-attention features, is that this operation, the scheme that I'm showing on the right, defines a single self-attention head, and multiple of these self-attention heads can be linked together to form larger network architectures, where you can think about these different heads trying to extract different information, different relevant parts of the input, to put together a very rich encoding and representation of the data that we're working with. Intuitively, back to our Iron Man example, what this idea of multiple self-attention heads can amount to is that different salient features and salient information in the data are extracted: first maybe you consider Iron Man, attention head one, and you may have additional attention heads that are picking out other relevant parts of the data which maybe we did not realize before, for example the building, or the spaceship in the background that's chasing Iron Man. And so this is a key building block of many, many powerful architectures that are out there today. I again cannot emphasize enough how powerful this mechanism is, and indeed this backbone idea of self-attention that you just built up an understanding of is the key operation of some of the most powerful neural networks and deep learning models out there today, ranging from the very powerful language models like GPT-3, which are capable of synthesizing natural language in a very human-like fashion, digesting large bodies of text information to understand relationships in text, to models that are being deployed for extremely impactful applications in biology and medicine, such as AlphaFold 2, which uses this notion of self-attention to look at data of protein sequences and predict the three-dimensional structure of a protein given just sequence information alone, and all the way, even now, to computer vision, which will be the topic of our next lecture tomorrow, where the same idea of attention that was initially developed in sequential data applications has now transformed the field of computer vision, again using this key concept of attending to the important features in an input to build these very rich representations of complex, high-dimensional data. Okay, so that concludes the lectures for today. I know we have covered a lot of territory in a pretty short amount of time, but that is what this boot camp program is all about. So hopefully today you've gotten a sense of the foundations of neural networks in the lecture with Alexander; we talked about RNNs, how they're well suited for sequential data, how we can train them using backpropagation, how we can deploy them for different applications, and finally how we can move beyond recurrence to build this idea of self-attention for building increasingly powerful models for deep learning in sequence modeling. All right, hopefully you enjoyed it. We have about 45 minutes left for the lab portion and open office hours, in which we welcome you to ask questions of us and the TAs and to start work on the labs. The information for the labs is up there. Thank you so much for your attention.
MIT_6S191_Introduction_to_Deep_Learning
MIT_6S191_Uncertainty_in_Deep_Learning.txt
Uh, thank you so much for having me, I'm super excited to be here. Thank you, Alexander, for the kind words, and Ava, both of you, for organizing. So yeah, I'm going to talk about practical uncertainty estimation and out-of-distribution robustness in deep learning. This is actually a bit of an abridged and maybe slightly updated version of the NeurIPS tutorial we gave in 2020, so if you want to see the extended one, check that out, and a bunch of these slides are also by Dustin and Balaji, wonderful colleagues of mine. All right, what do we mean by uncertainty? The basic idea is we want to return a distribution over predictions rather than just a single prediction. In terms of classification, that means we'd like to output a label along with its confidence, so how sure are we about this label; in terms of regression, we want to output a mean but also its variance, you know, confidence intervals or error bars around our predictions. Good uncertainty estimates are crucial because they quantify when we can trust the model's predictions. So what do we mean by out-of-distribution robustness? Well, in machine learning we usually assume, actually almost all the theory is under the assumption, that our training data and our test data are drawn i.i.d., independent and identically distributed, from the same data set. In practice, in reality, you know, the data sets change, and things change either temporally, spatially, or in other ways, and when we're deploying models we often see data that's not from the same distribution as the training set. So the kinds of data set shift you might imagine seeing are things like covariate shift, where the distribution of the inputs x may change while the relationship between inputs and labels stays the same; open set recognition is a fun one, it's actually really terrible, where you can have new classes appear at test time, imagine for example you've got a cat and dog classifier that sees an airplane, that's something that's really hard to deal with actually; and then label shift, where the distribution of labels may change while the conditional distribution of inputs given labels stays the same. This is also prevalent in things like medicine, so the distribution of the number of people who test positive for COVID changes pretty drastically over time, and your models will have to adapt to that kind of thing. Okay, so here's an example of some data set shift. So the i.i.d. test set, this is actually from ImageNet, which is a popular image data set, we have clean images, and you could imagine the shift being just adding noise to the images with additional severity, so here we've got our frog and we're adding increasing amounts of noise. There's actually a paper by Hendrycks and Dietterich where they took various kinds of shifts, like noise, motion blur, zoom blur, snow (I actually just drove through the snow in an autonomous car and, you know, it didn't deal very well with that data set shift), so here's a bunch of data shifts at various different severities, and we showed this to a common architecture, a ResNet, and looked at how the accuracy behaved with respect to these data set shifts. And so you can see accuracy goes down as these various shifts are happening, and that corresponds to the intensity of the shift; that's maybe not surprising, but what you would really like is for your model to say, okay, my accuracy is going down but my uncertainty goes up corresponding with that, right: I don't know what the right answer is, but I'm telling you I don't know, by saying, like,
you know, in a binary classifier, I have 0.5 confidence on a label, for example. Okay, so in our experiments, was that true in kind of these classic deep learning architectures? No, definitely not. So as accuracy got worse, the ECE, which is a measure of calibration error, I'll tell you about that in a second, but our measure of the quality of the uncertainty, got worse as well, and the model started to make over-confident mistakes when exposed to this changing distribution of data. So that's pretty alarming, it's pretty scary for any application where you might deploy a machine learning model. Another kind of terrifying thing: here are a bunch of data examples where the model actually said that it had a class with over 99.5 percent confidence. So here's, like, random noise shown to the model, and it said I am almost completely sure that's a robin, or a cheetah, or a panda, which is also pretty freaky; you'd think it would say I don't know what this noisy thing is, but it doesn't. One reason why you might imagine that happening is just the way our models are constructed, right. So in an ideal sense, imagine a two-dimensional input space, and you have three classes, so you have this blue, orange, and red; ideally, when you get further away from your class, your model starts to become uncertain about what the right class is, and so the red class here we call out of distribution, and we'd like to say, okay, well, the red is far away from orange and blue, so we're uncertain about what the actual label is. In reality, you know, these classifiers are decision boundaries, and the further you get from the boundary, so this is the boundary between the two classes, the more confident you become that it is one class or the other. So an interesting pathology of a lot of these models is, you know, if you show a cat-and-dog classifier a bird, it won't say, oh, I don't know what this is; it'll say, oh, this is more dog-like than cat-like, so I'm 100 percent sure that this is a dog, which is not what you want.
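You can see that pathology even in the simplest model; here is a tiny, self-contained sketch, with made-up weights, showing how a logistic classifier's confidence saturates toward 1 as an input moves farther from its decision boundary, regardless of whether that input looks anything like the training data:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A made-up linear decision boundary w.x + b = 0 for a 2D "cat vs. dog" toy classifier
w, b = np.array([1.0, -1.0]), 0.0

# Points increasingly far from the boundary (e.g., an out-of-distribution input drifting away)
for scale in [0.5, 2.0, 5.0, 20.0]:
    x = scale * np.array([1.0, 0.0])
    print(scale, sigmoid(w @ x + b))
# The printed confidence climbs toward 1.0 as distance grows: the model never says "I don't know".
```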
All right, so applications. A very important one that's becoming more and more relevant these days is healthcare: medical imaging, radiology; this is diabetic retinopathy. And of course you would like it to be able to pass, along with a classification, you know, diseased or not diseased, tumor or not tumor, if you pass that down to a doctor, a confidence measure as well: 80 percent sure it's a tumor, 70 percent sure it's a tumor, right, rather than yes or no, and have the doctor be able to reason about that probability and include it in a downstream or maybe expectation calculation, so like what is the expectation that the patient will survive, or something. So here we'd really like to be able to pass good uncertainty downstream to a doctor, or say we're not actually sure, we'd like an expert to label it instead, and pass it to an expert labeler or, you know, an actual doctor. Self-driving cars, which you just heard a lot about: like I said, I actually was in a self-driving car an hour ago, driving through Vermont to get here in snow, and so that's definitely quite an out-of-distribution situation, and it certainly was fooled a couple of times. But here you would imagine that the car can encounter any number of out-of-distribution examples, and you would like it to express uncertainty such that the system making decisions is able to incorporate that uncertainty. One that we care about a lot at Google is conversational dialogue systems, right, so if you have your Google Assistant or your Amazon Alexa or your Siri or whatever, then it should be able to express uncertainty about what you said or what you asked it to do, and you could possibly then defer to a human, or it could ask you to clarify, right, it could say, oh, please repeat that, I didn't quite understand, instead of, like, you know, adding something to your shopping cart when you really just wanted to know what the weather was. All right, so there are a lot of applications of uncertainty and out-of-distribution robustness; basically any place we deploy a machine learning model in the real world, we want good uncertainty. This is just a subset of tasks that I care about a lot and my team at Google is working on. One of our taglines is, so a popular expression about machine learning is 'all models are wrong but some models are useful', and we've changed that a little bit to say 'all models are wrong, but models that know when they're wrong are useful'. All right, so, to give you a little primer on uncertainty and robustness: there are multiple sources of uncertainty. One is model uncertainty, and the way to think about that is that there are many models that fit the training data well. So if you look at this two-class situation, you know, there are actually infinitely many lines that you could draw that perfectly separate these two classes, and so you would like to be able to express your uncertainty about which line is the right line, rather than make an arbitrary decision that theta one or theta two or theta three is, you know, the true model. This kind of uncertainty, a model about what is the right model, is known as epistemic uncertainty, and you can actually reduce it; the way you reduce it is just by gathering more data, right, so if we filled in more new triangles and more red circles then maybe we could eliminate theta three and theta one because, you know, they no longer separate the data well. One thing to note is, you know, it doesn't necessarily have to be models in the same hypothesis class: the first one was just straight lines, linear models, but you could imagine non-linear models of various flavors also being incorporated, all kinds of models, and that significantly increases the scope as well, the number of plausible models that could describe the data. Okay, then the second big source of uncertainty is known as data uncertainty, and that's basically uncertainty that's inherent to the data. It could be something like label noise, just uncertainty in the labeling process; you know, I think these are CIFAR images maybe, and they're really low resolution, and humans may not even know what that thing is, and two human raters may give two different labels. It could be sensor noise, you know, in a thermometer there's some inherent uncertainty about the decimal place that you can estimate, and the weather, too. And so this is irreducible uncertainty, often called aleatoric uncertainty. The distinction between epistemic and aleatoric, actually, you know, experts constantly mistake the two, and between us we've kind of agreed that we need to change the language because those words are too hard to memorize; we can just think of it as model and data uncertainty. All right, so how do we measure the quality of our uncertainty? This is something that we've been thinking about quite a bit as well.
One thing that's popular to think about is a notion called calibration error, and that's basically the difference between the confidence of the model, say your model said I'm 90 percent sure this is a tumor, or I'm 90 percent sure that was a stop sign, minus the aggregate accuracy: so if it said 'this is a stop sign' with 90 percent certainty a million times, how many times was it actually right? The calibration error is basically the difference between that confidence and the aggregate accuracy, so, you know, in the limit, how does my confidence actually correspond to the actual accuracy. Okay, so, I kind of explained this already, but another great example is with weather: if you predict rain with 80 percent confidence, your calibration error would be, you know, over many many days, the difference between the confidence and the actual accuracy of those predictions. And for regression you might imagine calibration corresponding to this notion of coverage, which is basically how often predictions fall within the confidence intervals of your predictions. Okay, so a popular measure for this is something known as expected calibration error; the equation is just showing what I would have just said: you actually bin your confidences into bins, maybe it's zero to ten percent, ten to twenty, twenty to thirty, and then for each bin you estimate, for the predictions that landed in that bin, what was the difference between the accuracy of those predictions and the confidence in that bin, and that can give us an idea of the level of overconfidence or underconfidence in our models. Here's an example from a paper by Guo et al. where they showed that very often deep learning models can be quite poorly calibrated, in that what they actually outputted was pretty far from the actual confidence or the actual accuracy that we'd like for each bin. Okay, one downside of calibration is that it actually has no notion of accuracy built in; it's just saying how often was my confidence aligned with the actual accuracy, and so you could have a perfectly calibrated model that just predicted randomly all the time, because it's just saying 'I don't know' and it didn't know. And so we've been looking at, or actually the statistical meteorology community many years ago was looking at, ways to actually score the quality of the uncertainty of weather forecasts, and came up with a notion of something called proper scoring rules, which incorporates this notion of calibration but also a notion of accuracy. This paper by Gneiting and Raftery is a wonderful paper that outlines these rules, and it gives you a whole class of loss functions, if you will, or scoring functions, that don't violate a set of rules and accurately give you an idea of how good your uncertainty was. Negative log likelihood is popular in machine learning, that's a proper scoring rule; the Brier score is just squared error on the probabilities, and that's also a proper scoring rule, also used quite a bit in machine learning. Okay, I'm going to skip over that for the sake of time.
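Here is a small sketch of the binned expected calibration error computation just described, written in plain Python/NumPy; the bin count of 10 and the toy example at the end are arbitrary illustrative choices:

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: weighted average over bins of |accuracy - mean confidence|."""
    confidences = np.asarray(confidences)       # model's confidence in its predicted label
    correct = np.asarray(correct, dtype=float)  # 1.0 if the prediction was right, else 0.0
    bin_edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            gap = abs(correct[in_bin].mean() - confidences[in_bin].mean())
            ece += in_bin.mean() * gap   # weight the gap by the fraction of examples in the bin
    return ece

# Example: an overconfident model, with high confidences but mediocre accuracy
print(expected_calibration_error([0.9, 0.95, 0.8, 0.99], [1, 0, 1, 0]))
```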
Okay, so how do we get uncertainty out of our models? We know it's important; how do we actually extract a good notion of uncertainty? I assume you are all familiar with the setting of: I have a neural net, I want to train it with SGD, and I have a loss function. Well, I would say almost every loss function corresponds to a maximization: minimizing a loss function actually corresponds to maximizing a probability, or maximizing a log probability, of the data given the model parameters. So if you think of this argmax over theta, it's saying: I want to maximize the probability of my parameters theta given the data set that I have, to find one setting of the parameters that maximizes this probability, and that corresponds to minimizing a negative log likelihood minus a log prior. And actually, if you think of that as a loss, then that's a log loss plus a regularization term, in this case squared error, which actually corresponds to a Gaussian prior, but I won't get into that. Okay, so actually this log prob corresponds to data uncertainty, which is interesting: you can actually build into your model a notion of data uncertainty in the likelihood function. Let's see, oh yeah, okay, and a special case of this is just softmax cross-entropy with L2 regularization, which you optimize with SGD, which is the standard way to train deep neural nets. Sorry, there's a lag on my slide so I've skipped over a couple; all right, we'll just go here. Okay, so the problem with this is that we've found just one set of parameters, and this gives us just one prediction, for example, and it doesn't give us model uncertainty, right: we just have one model, and we plug in an x and it gives us a y. So how do we get uncertainty? In the probabilistic approach, which is definitely my favorite way of thinking about things, instead of getting the single argmax parameters, you want a full distribution, the p of theta given x and y; you want a whole distribution over parameters rather than the single one. A really popular thing to do, actually, instead of getting this full distribution, is to just get multiple good ones, and there are a number of strategies for doing this; the most popular is probably something called ensembling, which is just: get a bunch of good models and aggregate your predictions over this set of good models. Okay, let's hope it goes forward. All right, the recipe, at least in the Bayesian sense, is: we have a model, it's a joint distribution of outputs and parameters given some set of inputs. During training we want to compute the posterior, which is the conditional distribution of the parameters given observations, right, so instead of finding the single setting of theta, Bayes' rule gives us the equation that we need to get the entire distribution over thetas. The numerator is actually what we were doing before, and to get the entire distribution you need to compute the denominator below, which is actually a pretty messy and scary integral, high dimensional because it's over all of our parameters, which for deep nets can be millions or even billions. Then at prediction time we'd like to compute the likelihood given the parameters, where each parameter configuration is weighted by this posterior, right, so we compute an integral: the prediction conditioned on a set of parameters times the probability of those parameters under this posterior, and we aggregate all those to get our predictions. In practice what is often done is you get a bunch of samples, S of them, and you say, okay, over a set of discrete samples I'm going to aggregate the predictions over this set, which might look a lot like the ensembling I just talked about.
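A minimal sketch of that last step, approximating the predictive integral by averaging over a handful of sampled or independently trained models; the `members` list here is a stand-in for whatever posterior samples or ensemble members you have, assumed to be callables that return class probabilities:

```python
import numpy as np

def predictive_distribution(members, x):
    """Approximate p(y|x) ~ (1/S) * sum_s p(y|x, theta_s) over S sampled models."""
    probs = np.stack([member(x) for member in members])  # shape (S, num_classes) for one example
    mean_probs = probs.mean(axis=0)    # the aggregated predictive distribution
    spread = probs.std(axis=0)         # disagreement between members: a rough model-uncertainty signal
    return mean_probs, spread
```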
Okay, so what does this give us? Instead of having just a single model, now we have a whole distribution of models, and what you're looking at is actually such a distribution, where you can see all the lines; each line is a different model, they all fit the data quite well, but they do different things as they get away from the data. So as you move away from the data they have different hypotheses about what the behavior of the data will be, and that difference gives you an interesting uncertainty as you move away from the data, right, so you might imagine just computing the variance, for example, out near the tails, for a prediction. All right, I'm going to speed through this, but there's a vast literature on different ways of approximating that integral over all parameters; it's in general way too expensive to do, you know, certainly in closed form, or even exactly, for deep nets. So there are tons of approximations: if you imagine these lines being the loss surface of the network, they correspond to things like putting a quadratic on top of the loss, which is known as a Laplace approximation, or things like sampling, so Markov chain Monte Carlo is used quite a bit, and that's just, as you optimize, you draw samples: you grab a good model, then you optimize a bit further, grab a good model, and so on and so forth. One thing that I'm really interested in is this notion: a parametrization of a deep neural net defines a function from x to y, if you have a classifier or a regressor, and so a Bayesian neural net gives you a distribution over these functions from x to y, and reasoning about this distribution is something I find super interesting. And there's a really neat property that, under a couple of assumptions, you can show that if you take the limit of infinite hidden units, it corresponds to a model that we know as a Gaussian process. I won't get into that, but it gives you that integral in closed form, and then we can use that closed-form integral to make predictions or look at pretty pictures of what the posterior actually looks like. This is actually a line of research of mine and a couple of my colleagues, thinking about the behavior of deep neural networks under this infinite limit, or, you know, thinking about how things behave as you move away from the data, using this Gaussian process representation, which is a really neat and clean way to think about things, at least in theory, because we have a closed-form model for what the integral over parameters actually is. And it turns out that they're really well calibrated, which is awesome, they have good uncertainty; I won't describe what they are exactly, but I highly recommend that you check out Gaussian processes if you find that interesting. Okay, so then, if you think, okay, this Bayesian methodology is pretty ugly because I have this crazy high-dimensional integral, it's really hard and mathy to figure out, then you might think about doing ensemble learning, which is basically: just take a bunch of independently trained models and form a mixture distribution, which is basically just averaging the predictions of the ensemble. In the case of classification you really just aggregate or average the predictions, and that gives you uncertainty over the class prediction; in regression you can compute uncertainty as a variance over the predictions of the different models. And there are
There are many different ways of doing this ensembling; it's almost as old as machine learning itself. Just take the kitchen sink of all the things you tried and aggregate the predictions, and that usually gives you better predictions and better uncertainty than a single model. If you find it interesting you can wade into the debate on Twitter between experts in machine learning about whether ensembles are Bayesian or not; I fall into the not-Bayesian camp, but it is interesting to think about what the difference between these strategies really is. I've also spent a lot of time thinking about issues with Bayesian models. In my group we spent a ton of time trying to get Bayesian models to work well on modern-sized deep neural nets, and it's really hard, because it requires very coarse approximations of that very high-dimensional integral. People play around with Bayes' rule to get it to work, so at the end it's not clear if it's totally kosher from a purist Bayesian view. It also requires you to specify the model well, which basically means you specify a class of models and the ideal model needs to be inside that well-specified class for Bayes to work well, and it turns out that for deep nets we don't really understand them well enough to say what this class of models should look like. The problem often hinges on the prior: how do you specify a prior over deep neural nets? We don't really know. Anyway, there's a paper I'm particularly proud of called "How Good is the Bayes Posterior in Deep Neural Nets?" where we try to figure out what is wrong with deep Bayesian nets and why we can't get them to work well on modern problems. Okay, I see there are some chat messages; should I be reading these? We are handling the chat, and some of the questions are directed specifically towards you, so we can moderate them to you at the end of your talk if that's okay. Perfect, perfect, go ahead. All right, some really simple ways to improve the uncertainty of your model. A popular one is just known as recalibration, and it's done all the time in real machine learning systems: train your model, look at the calibration on some withheld data set, and recalibrate on that data set. One way to do that is to take just the last layer and do something called temperature scaling, which is optimized via cross-entropy on the withheld data, and that can improve your calibration on the distribution that corresponds to that withheld set. Of course it doesn't give you much in terms of model uncertainty, and it doesn't help when you see yet another data distribution different from the one you just recalibrated on, but it can be really effective. I don't know if you talked about dropout in the course, you probably did; something that's surprisingly effective is called Monte Carlo dropout, from Yarin Gal and Zoubin Ghahramani, where you just apply dropout at test time: when you're making predictions, drop out a bunch of hidden units and average over the dropout samples, and you can imagine that gives you an ensemble-like behavior and a distribution of predictions at the end.
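As a rough illustration of that test-time trick, here is a minimal Keras sketch, assuming a model that contains Dropout layers; calling the model with training=True keeps dropout active at prediction time, so repeated forward passes give a distribution of outputs. The architecture, dropout rate, and sample count are assumptions chosen for illustration.

import tensorflow as tf

# Monte Carlo dropout sketch: keep dropout active at test time and average.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(10),
])

def mc_dropout_predict(x, num_samples=20):
    # training=True keeps the Dropout layers stochastic for each forward pass.
    samples = tf.stack(
        [tf.nn.softmax(model(x, training=True), axis=-1) for _ in range(num_samples)],
        axis=0)                                   # [num_samples, batch, classes]
    mean = tf.reduce_mean(samples, axis=0)        # averaged predictive probabilities
    std = tf.math.reduce_std(samples, axis=0)     # spread = a crude uncertainty signal
    return mean, std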
Monte Carlo dropout actually seems to work pretty well as a baseline. And then deep ensembles: Balaji Lakshminarayanan found that deep ensembles work incredibly well for deep learning, and this is basically just retraining your deep learning model n times, where n is usually something like five to ten, just with different random initializations. They end up in different optima of the loss landscape and give interestingly diverse predictions, and this actually gives you really good uncertainty, at least empirically. Here's a figure from that same study of dataset shift I showed you in one of the first slides: we found deep ensembles actually worked better than basically everything else we tried. I lost a bet to Balaji over this, because I said Bayesian or approximate Bayesian methods were going to work better, and it turned out they didn't. Something that works even better than the plain deep ensemble strategy is what we call hyperparameter ensembles, which is to also change the hyperparameters of your model. That gives you even more diversity in the predictions; you might imagine it corresponds to broadening your hypothesis space in terms of the types of models that might fit well, and ensembling over those does even better than ensembling over the same hyperparameters and architecture. Then another thing that works really well is SWAG, by Maddox et al., where you just optimize via SGD and then fit a Gaussian around the average of the weight iterates: as you're bouncing around an optimum in SGD you're basically tracing out a Gaussian, and you then use that Gaussian as the distribution over parameters. (I'm waiting for the slide to change, it's definitely skipped over a couple; I think this is the right one.) One thing that we've been thinking about a lot within my team at Google is scale. A lot of our systems operate in a regime where we have giant models that barely fit in the hardware we use to serve them, and we also care a lot about latency. Things like ensembling are more efficient than carrying around the entire posterior over parameters, but you're still carrying around five to ten copies of your model, and when I've talked to teams, for most practical purposes they say we can't afford to carry around five to ten copies of our model, and we can't afford to predict five to ten times for every example because that takes too long in terms of latency. So scale is definitely a problem; I imagine for self-driving that's also a thing, if you need to predict in real time with a model in hardware on your car, you probably can't afford to carry around a whole bunch of them and predict over all of them. So within our team we've been drawing out an uncertainty and robustness frontier, which is basically asking how we can get the best bang for our buck in terms of uncertainty while we increase the number of parameters.
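Going back to the deep ensemble recipe for a moment, here is a minimal sketch of it: train the same architecture several times with different random seeds and aggregate the softmax outputs at prediction time. The architecture, training settings, and ensemble size are placeholders, not details from the talk.

import tensorflow as tf

# Deep ensemble sketch: n independently trained copies, averaged at prediction time.
def make_model(seed):
    tf.random.set_seed(seed)  # different seed -> different initialization and optimum
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(256, activation="relu"),
        tf.keras.layers.Dense(10),
    ])
    model.compile(optimizer="adam",
                  loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))
    return model

def train_ensemble(x_train, y_train, n_members=5):
    members = []
    for seed in range(n_members):
        m = make_model(seed)
        m.fit(x_train, y_train, epochs=10, verbose=0)
        members.append(m)
    return members

def ensemble_predict(members, x):
    probs = tf.stack([tf.nn.softmax(m(x), axis=-1) for m in members], axis=0)
    # mean = the ensemble prediction, std = disagreement between members
    return tf.reduce_mean(probs, axis=0), tf.math.reduce_std(probs, axis=0)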
It turns out to be really interesting: you can do much more sophisticated things if you're willing to carry around many copies of your model, for example, and you can't do quite as much with bigger models, but you can still do quite a bit, and this has certainly driven a lot of our more recent research. One way we think about this is to view an ensemble as one giant model: a bunch of member networks with no connections between them, brought together at the end, so there are basically independent paths through the network. You might imagine making things cheaper if you shared parts of the network and kept other parts independent, or if you found some way to factorize the differences between the members, and that's basically what we've been doing. A method known as BatchEnsemble, by Yeming Wen et al., does exactly that: take factors and use them to modulate the model. You have a single shared model, and then a factor for every layer, with n sets of factors where n is your ensemble size. You can imagine this could reproduce dropout if the factors were zeros and ones, or produce different weightings that modulate different hidden units as you move through the network. These are actually rank-one factors, so they're really cheap to carry around compared to having multiple copies of the model, and you can batch things so that you compute across all factors in a single forward pass, which is really nice. This turned out to work almost as well as the full ensemble, which is great because it requires only something like five percent more parameters than a single model, and so ninety-something percent fewer than a whole ensemble. Then a neat way to turn that BatchEnsemble idea into an approximate Bayesian method is something we call rank-1 Bayesian neural nets, which is to be Bayesian about those factors: put a distribution over the factors and sample them as you make predictions. Sampling them could correspond to something like dropout if you have a binary distribution over the factors, but it could also correspond to other interesting distributions that modulate the weights of the model and give you an interesting aggregate prediction and uncertainty at the end. This is one flavor of a number of exciting recent papers, the cyclical MCMC one as well, where you think about being Bayesian over a subspace: you integrate over a subspace that defines the greater space of models and use that to get your uncertainty, rather than expressing uncertainty over all the parameters of your model.
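To give a feel for the rank-one idea behind BatchEnsemble described above: each ensemble member k shares one weight matrix W and owns two small vectors r_k and s_k, so member k's effective weights are W multiplied element-wise by the outer product of s_k and r_k. The layer below is a simplified, single-member-at-a-time illustration of that computation, not the authors' implementation.

import tensorflow as tf

class RankOneEnsembleDense(tf.keras.layers.Layer):
    """Simplified BatchEnsemble-style dense layer: shared W, per-member rank-1 factors."""
    def __init__(self, units, n_members):
        super().__init__()
        self.units = units
        self.n_members = n_members

    def build(self, input_shape):
        d_in = int(input_shape[-1])
        self.w = self.add_weight("w", shape=(d_in, self.units))             # shared weights
        self.r = self.add_weight("r", shape=(self.n_members, self.units),   # output factors
                                 initializer="ones")
        self.s = self.add_weight("s", shape=(self.n_members, d_in),         # input factors
                                 initializer="ones")
        self.b = self.add_weight("b", shape=(self.n_members, self.units),
                                 initializer="zeros")

    def call(self, x, member):
        # Member k's effective weights are w * outer(s_k, r_k), computed cheaply as:
        h = tf.matmul(x * self.s[member], self.w)     # (x * s_k) W
        return h * self.r[member] + self.b[member]    # * r_k, plus a per-member bias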
Then there's something that an intern did that works really well: Marton Havasi, who's at Harvard now, said let's take a standard neural net, and instead of plugging in one input we'll plug in three different inputs and three different outputs, or K inputs and K outputs, and force the model to predict for all of them, where they can be different classes. That means the model can't really share structure when predicting for two at the same time, which forces it to learn independent subnetworks through the whole configuration of the model and find some interesting diversity at the outputs. At test time you just replicate the same input K times and it gives you K different predictions, which are interestingly diverse because they go through different subnetworks of this bigger network. Here's a figure showing the diversity of the predictions, a dimensionality reduction on the distribution of predictions, and we found the predictions of the different outputs really are interestingly diverse. And then here are a couple of plots: as we increase the number of inputs while keeping the structure of the model, so the number of parameters, the same, what does that do for the uncertainty and the accuracy? What I find really surprising, if you look at the solid lines, is that accuracy sometimes goes up, and log likelihood, our notion of the quality of uncertainty, certainly goes up. It's surprising that you don't need more parameters in the model to do this, but it tends to work. Okay, I'm basically at the end, so maybe I can share an anecdote about what we're thinking about more imminently, since I've got a couple of minutes left. You may have noticed a number of papers coming out calling large-scale pre-trained models a paradigm shift for machine learning. Pre-trained models are basically saying: instead of taking just my training data distribution, what if I can access some giant other distribution? For a text model, rather than only my labeled machine translation data with n examples, I just mine the whole web, find some way to model that data, then chop off the last layer, point it at my machine translation or whatever prediction task, and retrain starting from where that other model left off. That pre-training strategy turns out to work incredibly well in terms of accuracy, and it also seems to work well in terms of uncertainty. One thing I think is really interesting to think about is this: if we care about out-of-distribution robustness, either we can do a lot of math and a lot of fancy tricks, ensembling and so on, or we can go and try to get the entire distribution, and in my view that's kind of what pre-training is doing. In any case, that's something we're really involved in and interested in right now: what does that pre-training actually do, and what does it mean for uncertainty and robustness? The takeaways of the previous slides are basically that uncertainty and robustness are incredibly important, they're top of mind for a lot of researchers in deep learning, and as we increase compute there are interesting new ways to look at this frontier and, I think, a lot of promise to get better uncertainty out of our models. Okay, and with that I'll close and say thanks.
This is actually a subset of the many collaborators on a bunch of the papers I talked about and that these slides are from, so thank you, and I'm happy to take any questions. Thank you so much, Jasper, really fantastic overview, with beautiful visualizations and explanations, super clear. There are several questions from the chat which I'm gathering together now. One question from Stanley asks: is it possible to mitigate the issue of high confidence on out-of-distribution data by adding new images, what he describes as nonsense images, into the training set with a label of belonging to an unknown class? That's really interesting, and there is a bunch of work on effectively doing that. There's a line of literature which basically says let's create a bucket to discard things that are outside our distribution, call that an unknown class, and then we just need to feed our model things that may fall into that class. Sorry, my dog just barged into my office. That's certainly something that's done quite a bit; Danijar, I think, had a paper on this, something called noise contrastive priors. You could imagine plugging in noise as this bucket, or even coming up with examples that would hopefully be closer to the decision boundary. There are also a couple of papers on doing data augmentation for this: augmenting your data, maybe interpolating from one class toward another, using that to help define the boundary of what is one class or another, and then pushing just outside your class and putting that in the unknown bucket. But yeah, great question, it's definitely something people are doing and thinking about. Awesome. One more question, from Mark: can you speak about the use of uncertainty to close the reality gap in sim-to-real applications? That's a great question. I personally don't know that much about sim-to-real; I'm thinking of the robotics context, where you have a simulation, you can train your robot in simulation, and then you'd like to deploy it as a real robot. I imagine that uncertainty and robustness are incredibly important there, but I don't know how they think about it in those particular applications. Clearly, if you deploy your robot in the real world you would like it to express reasonable uncertainty about things that are out of distribution or that it hasn't seen before. I'm curious, I don't know, Alexander or Ava, if you know an answer to that question. I think Alexander can speak to it. Yeah, actually I was going to ask a related question. I think all of the approaches, and all of this interest in the field around estimating uncertainty, either through sampling or other approaches, is super interesting, and I definitely agree that everyone is going to accept that these deep learning models need some way to express their confidence. One interesting application I haven't seen a lot of, and maybe there's a good opportunity here, this is definitely an interest of mine, is how we can build the downstream AI models to actually be improved by these measures of uncertainty.
For example, how can we build better predictors that can leverage this uncertainty to improve their own learning? If a robot learns some measure of uncertainty in simulation, can it leverage that as it is deployed into reality, instead of just conveying it to a human? I'm not sure if anyone in your group is focusing more on that second half of the problem, not just conveying the uncertainty but using it in some way as well. Yeah, absolutely, that's definitely something we're super interested in: once you have better uncertainty, how exactly do you use it? There are really interesting questions of how you communicate the uncertainty to, for example, a layperson, or a doctor who's an expert in something but not in deep learning, and of how you actually make decisions based on that uncertainty. That's definitely a direction we're moving more towards. We've been spending a lot of time within our group just looking at the quality of uncertainty of models, at notions of calibration and these proper scoring rules, but those are kind of intermediate measures. What you really care about is the downstream decision loss, which for a medical task might be how many people you ended up saving, or how many decisions in your self-driving car were correct decisions. So that's definitely something we're looking at a lot more. There's a great literature on decision making, called statistical decision theory; Berger is the author of a great book on statistical decision making, about how to think about what the optimal decisions are given an amount of uncertainty. Awesome, thank you. I think there was one final question, and it's more about a practical, hands-on type of thing: suggestions and pointers on the actual implementation and deployment of Bayesian neural networks in more industrial or practically oriented machine learning workflows, in terms of production libraries or frameworks to use, and what people are thinking about there. Yeah, that's a really great question. Something that the Bayesian community, or the uncertainty and robustness community, hasn't been as good at is producing really easy-to-use, accessible implementations of models, and that's something we've definitely been thinking about. We've open-sourced a library that includes benchmarks and model implementations for TensorFlow, called Uncertainty Baselines, and it was exactly trying to address this question: everyone would be Bayesian, or would at least get better uncertainty, if you could just hand them a model that had better uncertainty, and Uncertainty Baselines is an attempt at that; we're still building it out. It's built on top of Edward, which is a probabilistic programming framework for TensorFlow, and there are a bunch of libraries that do probabilistic programming, making Bayesian inference efficient and easy, but they're not made for deep learning. So something we need to do as a community is bring those two together:
libraries for easy prediction under uncertainty, or for easily approximating integrals effectively, incorporated within deep learning libraries. That's definitely something we're working on; check out Uncertainty Baselines, I think it has implementations of everything I talked about in this talk. Awesome, thank you so much. Are there any other questions from anyone in the audience, either through chat or that I may have missed, or that you would like to ask live? Okay, well, let's please thank Jasper once again for a super fascinating and very clear talk, and for taking the time to join us and speak with us today. Thank you so much, it was my pleasure. Thank you, Alexander, and thank you, Ava, for organizing, and thank you everyone for your attention.
MIT_6S191_Introduction_to_Deep_Learning
MIT_6S191_2018_Deep_Reinforcement_Learning.txt
All right, good morning everyone. My name is Lex Fridman, I'm a research scientist here at MIT. I build autonomous vehicles and perception systems for those vehicles, and first of all it's great to be part of this amazing course, Intro to Deep Learning; it's a wonderful course covering pretty quickly but deeply some of the fundamental aspects of deep learning. The topic that I'm perhaps most passionate about, from the perspective of being a researcher in artificial intelligence, is deep reinforcement learning, which is a set of ideas and methods that teach agents to act in the real world. If you think about what machine learning is, it allows us to make sense of the world, but to truly have impact in the physical world we need to also act in that world: you bring the understanding that you extract from the world through the perceptual systems and actually decide to take actions. So you can think of intelligent systems as a kind of stack, from the top to the bottom. At the top is the environment, the world that the agent operates in, and at the bottom are the effectors that actually make changes to the world by moving the robot, moving the agent, in a way that changes and acts in the world. From the environment it goes to the sensors sensing the raw sensory data, extracting features, making sense of those features, forming higher and higher order representations, and from those representations we gain knowledge that's useful and actionable. Finally we reason, the thing we hold so dear as human beings, the reasoning that builds and aggregates knowledge bases, and using that reasoning we form short-term and long-term plans that finally turn into actions that act in the world and change it. That's the stack of artificial intelligence, and the question is, in the same way that we human beings learn most of this stack after we're born, knowing very little, taking in five sources of sensory data and making sense of the world, learning over time to act successfully in it, how much can we use deep learning methods to learn parts of the stack, the modules of the stack, or the entire stack end to end? Let's go over them. For robots that act in the world, autonomous vehicles, humanoid robots, drones, there are the sensors, whether it's lidar, camera, or microphone for the audio, networking for the communications, the IMU giving the kinematics of the vehicle. That's the raw data coming in, the eyes, ears, and nose for robots. Then, once you have the sensory data, the task is to form representations on that data: you take these raw pixels or raw samples, whatever the sensor is, and you start to piece them together into something that can be used to gain understanding. It's just numbers, and those numbers need to be converted into something that can be reasoned with; that's forming representations, and that's where machine learning and deep learning step in, taking this raw sensory data, with some initial processing, and forming higher and higher order representations that can be reasoned about, in computer vision from edges to faces to entire entities, and finally the semantic interpretation of the scene. That's machine learning playing with the representations. On the reasoning side, one of the exciting fundamental open challenges of machine learning is how these greater and greater representations
that can be formed through deep neural networks can then lead to reasoning, to building knowledge, not just a memorization task of taking supervised learning and memorizing patterns in the input data based on human annotations, but extracting those patterns and then taking that knowledge and building on it over time, as we humans do, into something that could be called common sense, into knowledge bases. In a very trivial sense this means aggregating and fusing multiple types of extracted knowledge: from image recognition, if it looks like a duck in an image, it sounds like a duck on the audio, and then with activity recognition on the video it swims like a duck, then it must be a duck. Aggregating these different sources of information is reasoning. Now, from the very biased human perspective, one of the illustrative aspects of reasoning is theorem proving, the moment of invention, of creative genius, of the breakthrough ideas that we humans come up with. Really these aren't new ideas; whenever we come up with an interesting idea we're just collecting pieces of higher-order representations of knowledge that we've gained over time and piecing them together to form some simple, beautiful distillation that is useful for the rest of the world. One of my favorite human stories of discovery in pure theorem proving is Fermat's Last Theorem. It stood for over 350 years, and it's a trivial thing to explain, most eight-year-olds can understand the statement of the conjecture: x^n + y^n = z^n has no positive integer solutions for n greater than or equal to three. It went unsolved, hundreds of thousands of people tried to solve it, and finally Andrew Wiles, from Oxford and Princeton, had the final breakthrough. He first announced a proof in 1993, then a gap was found, that's the human drama, and he spent about a year trying to repair the proof, and in 1994 came this final breakthrough moment, 358 years after Fermat first stated the conjecture. He said: it was so incredibly beautiful, it was so simple, so elegant, I couldn't understand how I'd missed it, and I just stared at it in disbelief for twenty minutes. Then during the day I walked around the department, and I kept coming back to my desk looking to see if it was still there. It was still there. I couldn't contain myself, I was so excited. It was the most important moment of my working life; nothing I ever do again will mean as much. So this moment of breakthrough: how do we teach neural networks, how do we learn from data, to achieve this level of breakthrough? That's the open question I want you to walk away from this part of the lecture thinking about. What is the future of agents that think? Alexander will talk about the future challenges next, but what can we use deep reinforcement learning for, to extend past memorization, past pattern recognition, into something like reasoning and achieving these breakthrough moments? And at the very least, something amusing happened in 1995, after Andrew Wiles: Homer Simpson, for those of you who are fans of The Simpsons, apparently proved him wrong, which is very interesting. They
found an example where the equation appears to hold, Fermat's theorem, at least to a certain number of digits. Okay. And then finally, aggregating this knowledge into action is what deep reinforcement learning is about: extracting patterns from raw data and then being able to estimate the state of the world around the agent in order to take successful actions that complete a certain goal. I will talk about the difference between agents that learn from data and agents that are currently successfully operating in this world. For example, the agents here from Boston Dynamics are ones that don't use any deep reinforcement learning, that don't learn from data. That is the open gap, the challenge we have to solve: how do we use reinforcement learning methods to build robots, agents that act in the real world, that learn from that world, beyond just the perception task? In this stack from the environment to the effectors, the promise, the beautiful power, of deep learning is taking the raw sensory data and, in an automated way, doing feature learning, extracting arbitrarily high-order representations of that data, making sense of the patterns in order to learn, in supervised learning, the mapping from those patterns to actionable, useful knowledge; that's the red box. The promise of deep reinforcement learning, why it's so exciting for the artificial intelligence community, why it captivates our imaginations about the possibility of achieving human-level general intelligence, is that you can go beyond end-to-end extraction of knowledge from raw sensory data: you can go end to end from raw sensory data all the way to actions, brute-force learning from the raw data the semantic context, the meaning of the world around you, in order to successfully act in that world, end to end, just like we humans do. That's the promise, but we're in the very early stages of achieving it. Now, any successful presentation must include cats, so: supervised learning, unsupervised learning, and reinforcement learning, which sits in the middle. In supervised learning, most of the insight about what is inside the data has to come from human annotations, and it's the task of the machine to learn how to generalize based on those annotations to future examples it hasn't seen before. In unsupervised learning you have no human annotation. Reinforcement learning is somewhere in between, closer to supervised learning, where the annotations, the information, the knowledge from humans is extremely sparse, and so you have to use the temporal dynamics, the continuity of our world through time, to take the sparse little rewards you get along the way and extend them over the entire temporal domain to make some sense of the world, even though the rewards are really sparse. Those are two cats learning, Pavlov's cats if you will, learning to ring the doorbell in order to get some food; that's the basic reinforcement learning problem. So the networks from supervised learning you can think of as memorizers; reinforcement learning you can think of, crudely, as a sort of brute-force reasoning, trying to propagate rewards in order to make sense of the world and act in it.
The pieces are simple: there's an environment and there's an agent. The agent takes actions in that environment, it senses the environment, so there's always a state that the agent observes, and when taking an action it receives some kind of reward or punishment from that world. We can model almost any kind of world in this way. We can model an arcade game, Atari Breakout here: the agent has the capacity to act by moving the paddle, it can influence the future state of the system by taking those actions, and it receives a reward; there's a goal to this game, and the goal is to maximize future reward. You can model the cart-pole balancing problem, where the state is the pole angle and angular speed and the cart position and horizontal velocity, the action is pushing the cart, applying a force to it, and the reward is one at each time step that the pole is still upright; the goal is to balance the pole. That's the simple formulation of state, action, reward. You can play a game of Doom, with the state being the raw pixels of the game, the actions being moving the player around and shooting, and the reward being plus one for eliminating an opponent and minus one when the agent is eliminated; it's just a game. In industrial robotics, or any kind of humanoid robotics, where you have to control multiple degrees of freedom, control a robotic arm or the robot itself, the state is the raw pixels of the real world coming into the sensors of the robot, the actions are the possible commands to each of its actuators, and the reward is positive when placing a device successfully and negative otherwise; the task is to pick something up and put it back down. I'd like to continue this trajectory toward further and more complex systems, because the biggest challenge for reinforcement learning is formulating the world, the set of goals we need to solve, in such a way that we can apply these deep reinforcement learning methods. To give you an intuition, for us humans it's exceptionally difficult to formulate the goals of life: survival, homeostasis, happiness, citations, depending on who you are. The state is sight, hearing, taste, smell, touch, that's the raw sensory data; the actions are think, move, what else, I don't know; and the reward is just open. All of these questions are open, so if we want to start creating more and more intelligent systems, it's hard to formulate what the goals are, what the state space is, and what the action space is. If you take away anything in a practical sense from the deep reinforcement learning part of today, it's that there's a fun part and a hard part to all of this work. The fun part is what this course is about, and I hope it inspires people about the amazing, interesting, fundamental algorithms of deep learning. The hard part in deep learning is collecting and annotating huge amounts of representative data for forming higher representations; data does the hard work, and once you have good algorithms, data is everything. In deep reinforcement learning the fun part is again these algorithms, and we'll overview them today, but the hard part is defining the world, the action space, and the reward space. Just defining and formalizing the problem is exceptionally difficult when you start to create an agent that operates in the real world and actually operates with, and significantly helps, other human beings.
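As a minimal sketch of the state-action-reward loop just described, here is what it looks like in code, assuming the classic OpenAI Gym interface where step returns (state, reward, done, info); the environment name and the random placeholder policy are illustrative assumptions, not part of the lecture.

import gym  # assumes the classic Gym API: env.reset(), env.step(action)

env = gym.make("CartPole-v1")   # the cart-pole balancing task described above

def random_policy(state):
    # Placeholder policy: sample a random action; a learned agent would go here.
    return env.action_space.sample()

state = env.reset()
total_reward = 0.0
done = False
while not done:
    action = random_policy(state)                 # agent picks an action given the state
    state, reward, done, info = env.step(action)  # environment returns next state + reward
    total_reward += reward                        # +1 for every step the pole stays upright
print("episode return:", total_reward)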
Operating in the real world isn't playing an arcade game where everything is clean, or playing chess or Go; in the real world everything is messy. How do you formalize that? That's the hard part, and then the hardest part is getting a lot of meaningful data that fits into the formalization you've chosen. In the Markov decision process that underlies the thinking of reinforcement learning, you're in a state, you take an action, you receive a reward, and then you observe the next state; state, action, reward, next state, that's the sample of data you get as an agent acting in the world. There's a policy, where the agent tries to form a mapping for how to act, so that in whatever state it's in it has a preference to act in a certain way in order to optimize the reward. There's a value function that can estimate how good a certain action is in a certain state, and there is sometimes a model that the agent forms of the world. A quick example: you can have a robot starting in the bottom left of a three-by-four grid, moving about this room, trying to get to the top right because there's a plus one there. But because it's a stochastic system, when it chooses to go up, ten percent of the time it goes left or right instead, so in this world it has to come up with a policy. Is this a solution, with the arrows showing the action you'd like to take whenever you're in that state? It's a pretty good solution for getting to the plus one in a deterministic world, but in the stochastic world, where going up doesn't always take you up, it's not an optimal policy, because an optimal policy has to have an answer for every single state you might end up in. The optimal policy actually looks something like this. Now, that's when the reward for taking a step is negative 0.01. If every step is really painful, say a reward of negative two per step, then the optimal policy changes: no matter the stochasticity of the system, you want to get to the end as fast as possible, even if you pass through negative states, you just want to reach the plus one as quickly as you can. So the reward structure changes the optimal policy. If you make the step reward negative 0.1 there's more incentive to explore, and as we increase that reward, or decrease the punishment for taking a step, more and more exploration is encouraged, until we get to what I think of as college, where you encourage exploration by having a positive reward for moving around, so you never want to get to the end, you just walk around the world forever without ever reaching it. Okay, so the main goal is to optimize reward in this world, and because the reward is collected over time, you want some estimate of future reward, and because you don't have a perfect estimate of the future, you have to discount that reward over time; the goal is to maximize the discounted reward over time. Q-learning is an approach I'd like to focus on today. There's a state-action value Q, a Q function that takes in a state and an action and tells you the value of taking that action in that state. It's called off-policy because we can learn this function without keeping around an estimate of an optimal policy as we go.
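Before getting to the Q function update, here is a tiny sketch of the discounted return mentioned a moment ago, turning a sequence of per-step rewards into the quantity the agent tries to maximize; the discount factor is just an illustrative choice.

def discounted_return(rewards, gamma=0.99):
    """Sum of rewards discounted by gamma: r_0 + gamma*r_1 + gamma^2*r_2 + ..."""
    total, discount = 0.0, 1.0
    for r in rewards:
        total += discount * r
        discount *= gamma
    return total

# e.g. the cart-pole reward of +1 per step, for five steps:
print(discounted_return([1, 1, 1, 1, 1]))  # slightly less than 5 because of discounting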
It turns out that with the equation at the bottom, the Bellman equation, you can update your estimate of the Q function in such a way that over time it converges to an optimal policy, and the update is simple. You have an estimate, you start knowing nothing, there's the old estimate Q(s, a) of a state and action, then you take the action, collect a reward, and you update your estimate based on the reward you received and the difference between what you expected and what you actually received. That's the update. You walk around this world exploring until you form a better and better understanding of what the good action to take in each individual state is. And as in life, in reinforcement learning, and in any agent that acts, there's a learning stage where you have to explore: exploring pays off when you know very little, and the more you learn, the less valuable it is to explore and the more you want to exploit, to take greedy actions. That's always the balance: you explore at first, but eventually you want to make some money, whatever the metric of success is, and you want to focus on a policy you've converged towards that is pretty good, a near-optimal policy, and act in a greedy way as you move around. With this Bellman equation, moving around the world visiting different states and taking different actions, you can update what you can think of as a Q table, updating the quality of taking a certain action in a certain state; that's the picture of a table there, for a world with four states and four actions. The problem is that this Q table grows exponentially when you need to represent raw sensory data, like we humans take in with vision, or the raw pixels of an arcade game: the number of possible states is larger than can be stored in memory, larger than can be explored in simulation, it's exceptionally large. And if you know anything about learning anything in exceptionally large, high-dimensional spaces, that's exactly what deep neural networks are good at: forming approximators, forming some kind of representation of an exceptionally high-dimensional, complex space. So that's the hope for deep reinforcement learning: you take these reinforcement learning ideas, where an agent acts in the world to learn something about that world, and you use a neural network as the approximator, the thing the agent uses to approximate either the policy or the quality of taking a certain action in a certain state, making sense of the raw information, forming higher and higher order representations of the raw sensory input, and producing an action at the output. The neural network is injected as a function approximator into Q-learning: the Q function is approximated with a neural network, and that's DQN, deep Q-learning. Injecting a neural network into the Q-learning framework is what's been the success for DeepMind with playing the Atari games: a neural network takes in the raw pixels of the Atari game and produces the values of each individual action, and then in a greedy way you pick the best action.
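Before moving on to the deep version's loss, here is a minimal tabular sketch of the Bellman update and the explore-versus-exploit balance described above, assuming a small discrete environment; the grid size, learning rate, and epsilon value are illustrative assumptions.

import numpy as np

n_states, n_actions = 12, 4          # e.g. the 3x4 grid world with four moves
Q = np.zeros((n_states, n_actions))  # the Q table, initialized knowing nothing
alpha, gamma, epsilon = 0.1, 0.99, 0.1

def choose_action(state):
    # Epsilon-greedy: mostly exploit the current estimate, sometimes explore.
    if np.random.rand() < epsilon:
        return np.random.randint(n_actions)
    return int(np.argmax(Q[state]))

def q_update(state, action, reward, next_state):
    # Bellman update: move the old estimate toward reward + gamma * best future value.
    target = reward + gamma * np.max(Q[next_state])
    Q[state, action] += alpha * (target - Q[state, action])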
The loss function for these networks is twofold: you have a Q function, an estimate of the value of taking a certain action in a certain state, you take that action, and then you observe how the actual reward received differs. So you have a target and you have a prediction, and the loss is the squared error between the two. The traditional DQN, the first one, uses the same network to estimate both Q values in that loss function; double DQN, DDQN, uses a separate network for each of them. There are a few key tricks here, and I call them tricks because the fact that this works at all is incredible; as a fundamental, almost philosophical idea, knowing so little and being able to make sense of such a high-dimensional space is amazing. But these ideas have actually been around for quite a long time, and a few key tricks are what made them really work. The two main things for DQN are, first, experience replay: instead of letting the agent learn only as it acts, the agent acts in the world and collects experiences that can be replayed during learning, so the learning process jumps around through memory, through its experiences, and it doesn't learn on the local evolution of one particular simulation but on the entirety of its experiences. Second, the fixed target network: as I mentioned, the loss function involves two forward passes through the neural network, and because you know very little in the beginning, the system is very unstable and bias can have a significant negative effect, so there's a benefit in keeping the network used for the target fixed and only updating it every thousand or so steps. There are a few other tricks; the slides are available online and there are a lot of interesting bits throughout, so please check them out. There's a result on this slide showing the benefit you get from these tricks: experience replay and the fixed target network are the biggest, that's the magic that made it work for the Atari games, and the result was DeepMind achieving above-human-level performance on those games. What has been very successful since, with AlphaGo and other more complex systems, is policy gradients, a variation on this idea of applying neural networks in the deep reinforcement learning space. DQN is Q-learning with a neural network, and it's off-policy: it approximates Q and infers the optimal policy from it. Policy gradients, PG, is on-policy: it directly optimizes in the policy space, so the neural network estimates the probability of taking each action. If you want the details, there's a great post by Andrej Karpathy explaining deep reinforcement learning in an illustrative way by looking at playing Pong. The training process is that you look at the evolution of many different games, and with REINFORCE, and the actor-critic methods built on it, the policy gradient increases the probability of good actions and decreases the probability of bad actions: you want to reward actions that eventually led to a win, to a high reward, and you want to give a negative gradient to, and so decrease the probability of, actions that led to a negative reward.
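Here is a rough sketch of that policy-gradient weighting in TensorFlow: the log-probability of each action actually taken is scaled by the return that followed it, so winning trajectories get pushed up and losing ones pushed down. The network shape, the two-action setup, and the way returns are supplied are assumptions for illustration, not the lecture's exact setup.

import tensorflow as tf

# Policy network: maps a state to action logits.
policy = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(2),          # e.g. two actions: paddle up / paddle down
])
optimizer = tf.keras.optimizers.Adam(1e-3)

def reinforce_step(states, actions, returns):
    """One REINFORCE-style update on a batch of (state, action, return) samples."""
    with tf.GradientTape() as tape:
        logits = policy(states)
        # negative log pi(a_t | s_t) for the actions that were actually taken
        neg_logp = tf.nn.sparse_softmax_cross_entropy_with_logits(
            labels=actions, logits=logits)
        # weight by the (discounted) return: good outcomes -> higher probability
        loss = tf.reduce_mean(neg_logp * returns)
    grads = tape.gradient(loss, policy.trainable_variables)
    optimizer.apply_gradients(zip(grads, policy.trainable_variables))
    return loss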
In this setup the reward acts as the critic and the policy network is the actor. As for the pros and cons of DQN versus policy gradients: most of the success now comes from policy gradients, actor-critic methods, and their variations. The pros are that they can handle more complex problems and converge faster in most cases, given that you have enough data; the big con is that they need a lot of data, and the ability to simulate huge amounts of evolutions of the system. And because policy gradients model the probabilities of actions directly, they're able to learn stochastic policies, which DQN cannot. That's where the game of Go has seen a lot of success with the application of policy gradients: first AlphaGo in 2016 beat the top humans in the world at the game of Go by training on expert games, so starting in a supervised way from human expert positions, and then AlphaGo Zero in 2017 achieved a monumental feat, in my opinion one of the greatest in artificial intelligence in the last decade, by training on no human expert play at all, playing against itself, and being able to beat the initial AlphaGo and the best human players in the world. This is an incredible achievement for reinforcement learning, and it captivated our imagination of what's possible with these approaches. The actual approach, and you can look through the slides for a few interesting tidbits, uses the same kind of methodology that a lot of game engines, and certainly Go programs, have been using, Monte Carlo tree search: you have this incredibly huge search space and you have to figure out which parts of it to search in order to find the good positions and the good actions, and the neural network is used to estimate which actions and which positions are good. For those of you who are gambling addicts: importantly, the stochastic element of poker, at least heads-up poker, one-on-one, has for the first time ever been conquered with this same kind of approach; DeepStack and other agents beat the top professional poker players in the world in 2017. The open challenge for the community, maybe for people in this room, in 2018 is to apply these methods to win in the much more complex environment of tournament play, when there are multiple players. Heads-up poker is a much easier problem, where the human element is much more formalizable and clear; with multiple players it's exceptionally difficult, and a fascinating problem that's perhaps more representative of agents that have to act in the real world. So now the downer part: a lot of the successful agents that we work with here at MIT, the robots we build that act in the real world, use almost no deep reinforcement learning. Deep reinforcement learning is successfully applied in the context of simulation and game playing, but for successfully controlling humanoid robots or autonomous vehicles, for example, deep learning methods are used primarily for the perception task. They're exceptionally good at making sense of the environment and extracting useful knowledge from it, but forming actions is usually done through optimization-based methods.
Finally, a quick comment on unexpected, unintended optima, which is at the core of why these methods are not yet used in the real world. Here is the game Coast Runners, where a boat is tasked with collecting a lot of points. Traditionally the game is played by racing the other boats and trying to get to the finish as quickly as possible, and this boat figures out, in a brilliant breakthrough idea, that it doesn't need to do that: it can just keep collecting the regenerating green squares. That's an unintended consequence, and you can extend it to other systems; you can imagine how the cat system at the bottom right could evolve over time into something undesirable. Further, when these reinforcement learning agents act in the real world, the human factor, human life, is often injected into the system, so in the reward function, the objective function, you start injecting concepts of risk and even human life. What does it look like, in terms of AI safety, when an agent has to make a decision based on a loss function that includes an estimate of the risk of killing another human being? This is a very important thing to think about for machines that learn from data. And finally, to play around, there are a lot of ways to explore and learn about deep reinforcement learning, and at the URL below we have DeepTraffic, a traffic simulation game. It's a competition where you get to build a car that tries to drive as close to 80 miles per hour as possible, and I encourage you to participate and try to win, to get onto the leaderboard; not enough MIT folks are in the top ten. So with that, thank you very much, and thank you for having me. [Applause]
MIT_6S191_Introduction_to_Deep_Learning
MIT_Introduction_to_Deep_Learning_2023_6S191.txt
Good afternoon everyone! Thank you all for joining today. My name is Alexander Amini and I'll be one of your course organizers this year along with Ava, and together we're super excited to introduce you all to Introduction to Deep Learning. Now MIT Intro to Deep Learning is a really fun, exciting and fast-paced program here at MIT, and let me start by first giving you a little bit of background into what we do and what you're going to learn about this year. So this week of Intro to Deep Learning we're going to cover a ton of material in just one week. You'll learn the foundations of this really fascinating and exciting field of deep learning and artificial intelligence, and more importantly you're going to get hands-on experience actually reinforcing what you learn in the lectures as part of hands-on software labs. Now over the past decade AI and deep learning have really had a huge resurgence and many incredible successes, and a lot of problems that even just a decade ago we thought were not really even solvable in the near future we're now solving with deep learning with incredible ease. Now this past year in particular, 2022, has been an incredible year for deep learning progress, and I like to say that this past year in particular has been the year of generative deep learning: using deep learning to generate brand new types of data that have never been seen before and never existed in reality. In fact I want to start this class by showing you how we started this class several years ago, which was by playing this video that I'll play in a second. Now this video was actually an introductory video for the class, and it kind of exemplifies this idea that I'm talking about. So let me just stop there and play this video first of all. Hi everybody and welcome to MIT 6.S191 -- the official introductory course on deep learning taught here at MIT. Deep Learning is revolutionizing so many fields: from robotics to medicine and everything in between. You'll learn the fundamentals of this field and how you can build some of these incredible algorithms. In fact, this entire speech and video are not real and were created using deep learning and artificial intelligence. And in this class you'll learn how. It has been an honor to speak with you today and I hope you enjoy the course.
So in case you couldn't tell, this video and its entire audio were actually not real: they were synthetically generated by a deep learning algorithm. When we introduced this class a few years ago and put this video on YouTube, it went somewhat viral. People really loved it, they were intrigued by how real the video and audio felt and looked while being entirely generated by an algorithm, by a computer, and people were shocked by the power and realism of these types of approaches. And that was a few years ago. Now fast forward to today and the state of deep learning today: we have seen deep learning accelerating at a rate faster than we've ever seen before. In fact we can use deep learning now to generate not just images of faces but full synthetic environments, where we can train autonomous vehicles entirely in simulation and deploy them on full-scale vehicles in the real world seamlessly. The videos you see here are actually from a data-driven, neural-network-based simulator called Vista that we built here at MIT and have open sourced to the public, so all of you can actually train and build the future of autonomy and self-driving cars. And of course it goes far beyond this as well: deep learning can be used to generate content directly from how we speak and the language that we convey to it. From prompts that we give, deep learning can reason about the prompt in natural language, in English for example, and then guide and control what is generated according to what we specify. We've seen examples where we can generate things that, again, have never existed in reality: we can ask a neural network to generate a photo of an astronaut riding a horse, and it can actually imagine, hallucinate, what this might look like, even though not only has this photo never occurred before, I don't think any photo of an astronaut riding a horse has ever occurred before, so there isn't really even training data you could go off of in this case. And my personal favorite is how we can not only build software that can generate images and videos but build software that can generate software as well. We can have algorithms that take language prompts, for example a prompt like "write code in TensorFlow to train a neural network", and not only will it write the code and create that neural network, but it will have the ability to reason about the code it's generated and walk you through it step by step, explaining the process and procedure all the way from the ground up, so that you can actually learn how to do this yourself as well. I think some of these examples really highlight just how far deep learning and these methods have come in the six years since we started this course; you saw that example from the introductory video just a few years ago, and now we're seeing such incredible advances. And the most amazing part of this course, in my opinion, is that within this one week we're going to take you, from the ground up, starting today, through all of the foundational building blocks that will allow you to understand what makes all of these amazing advances possible. So with that, hopefully now you're all super excited about what this class will teach, and I want to start by taking a step back and introducing some of the terminology that I've kind of been
throwing around so far deep learning artificial intelligence what do these things actually mean so first of all I want to maybe just take a second to speak a little bit about intelligence and what intelligence means at its core so to me intelligence is simply the ability to process information such that we can use it to inform some future decision or action that we take now the field of artificial intelligence is simply the ability for us to build algorithms artificial algorithms that can do exactly this process information to inform some future decision now machine learning is simply a subset of AI which focuses specifically on how we can build a machine to or teach a machine how to do this from some experiences or data for example now deep learning goes one step beyond this and is a subset of machine learning which focuses explicitly on what are called neural networks and how we can build neural networks that can extract features in the data these are basically what you can think of as patterns that occur within the data so that it can learn to complete these tasks as well now that's exactly what this class is really all about at its core we're going to try and teach you and give you the foundational understanding of how we can build and teach computers to learn tasks many different types of tasks directly from raw data and that's really what this class boils down to at its most simple form and we'll provide a very solid foundation for you both on the technical side through the lectures which will happen in two parts throughout the class the first lecture and the second lecture each one about one hour long followed by a software lab which will immediately follow the lectures which will try to reinforce a lot of what we cover in the technical part of the class and you know give you hands-on experience implementing those ideas so this program is split between these two pieces the technical lectures and the software labs we have several new updates this year in specific especially in many of the later lectures the first lecture will cover the foundations of deep learning which is going to be right now and finally we'll conclude the course with some very exciting guest lecturers from both academia and industry who are really leading and driving forward the state of AI and deep learning and of course we have many awesome prizes that go with all of the software labs and the project competition at the end of the course so maybe quickly to go through these each day like I said we'll have dedicated software labs that couple with the lectures starting today with lab one you'll actually build a neural network keeping with this theme of generative AI you'll build a neural network that can listen to a lot of music and actually learn how to generate brand new songs in that genre of music at the next level of the class on Friday we'll host a project pitch competition where either you individually or as part of a group can participate and present an idea a novel deep learning idea to all of us it'll be roughly three minutes in length and because this is a one week program we're not going to focus so much on the results of your pitch but rather the innovation and the idea and the novelty of what you're trying to propose the prizes here are quite significant already where first prize is going to get an Nvidia GPU which is really a key piece of hardware that is instrumental if you want to actually build
a deep learning project and train these neural networks which can be very large and require a lot of compute these prizes will give you the compute to do so and finally this year we'll be awarding a grand prize for labs two and three combined which will occur on Tuesday and Wednesday focused on what I believe is actually solving some of the most exciting problems in this field of deep learning and specifically how we can build models that can be robust not only accurate but robust and trustworthy and safe when they're deployed as well and you'll actually get experience developing those types of solutions that can actually advance the state of the art in AI now all of these labs that I mentioned and competitions here are going to be due on Thursday night at 11 PM right before the last day of class and we'll be helping you all along the way this competition in particular has very significant prizes so I encourage all of you to really enter this competition and try to get a chance to win the prize and of course like I said we're going to be helping you all along the way there are many available resources throughout this class to help you achieve this please post to Piazza if you have any questions and of course this program has an incredible team that you can reach out to at any point in case you have any issues or questions on the materials myself and Ava will be your two main lecturers for the first part of the class we'll also be hearing like I said in the later part of the class from some guest lecturers who will share some really cutting edge state-of-the-art developments in deep learning and of course I want to give a huge shout out and thanks to all of our sponsors who without their support this program wouldn't have been possible yet again for another year so thank you all okay so now with that let's really dive into the really fun stuff of today's lecture which is you know the technical part and I think I want to start this part by asking all of you and having you ask yourselves this question of you know why are all of you here first of all why do you care about this topic in the first place now I think to answer this question we have to take a step back and think about you know the history of machine learning and what machine learning is and what deep learning brings to the table on top of machine learning now traditional machine learning algorithms typically define what are called these sets of features in the data you can think of these as certain patterns in the data and then usually these features are hand engineered so probably a human will come into the data set and with a lot of domain knowledge and experience can try to uncover what these features might be now the key idea of deep learning and this is really central to this class is that instead of having a human define these features what if we could have a machine look at all of this data and actually try to extract and uncover what are the core patterns in the data so that it can use those when it sees new data to make some decisions so for example if we wanted to detect faces in an image a deep neural network algorithm might actually learn that in order to detect a face it first has to detect things like edges in the image lines and edges and when you combine those lines and edges you can actually create compositions of features like corners and curves which when you combine those you can create
more high level features for example eyes and noses and ears and then those are the features  that allow you to ultimately detect what you care about detecting which is the face but all of these  come from what are called kind of a hierarchical learning of features and you can actually see some  examples of these these are real features learned by a neural network and how they're combined  defines this progression of information but in fact what I just described this underlying and  fundamental building block of neural networks and deep learning have actually existed for decades  now why are we studying all of this now and today in this class with all of this great enthusiasm  to learn this right well for one there have been several key advances that have occurred in the  past decade number one is that data is so much more pervasive than it has ever been before in our  lifetimes these models are hungry for more data and we're living in the age of Big Data more data  is available to these models than ever before and they Thrive off of that secondly these algorithms  are massively parallelizable they require a lot of compute and we're also at a unique time in history  where we have the ability to train these extremely large-scale algorithms and techniques that have  existed for a very long time but we can now train them due to the hardware advances that have  been made and finally due to open source toolbox access and software platforms like tensorflow  for example which all of you will get a lot of experience on in this class training and building  the code for these neural networks has never been easier so that from the software point of view  as well there have been incredible advances to open source you know the the underlying  fundamentals of what you're going to learn so let me start now with just building up from  the ground up the fundamental building block of every single neural network that you're going  to learn in this class and that's going to be just a single neuron right and in neural network  language a single neuron is called a perceptron so what is the perceptron a perceptron  is like I said a single neuron and it's actually I'm going to say it's very  very simple idea so I want to make sure that everyone in the audience understands  exactly what a perceptron is and how it works so let's start by first defining a perceptron  as taking it as input a set of inputs right so on the left hand side you can see this perceptron  takes M different inputs 1 to M right these are the blue circles we're denoting these inputs as  X's each of these numbers each of these inputs is then multiplied by a corresponding weight which  we can call W right so X1 will be multiplied by W1 and we'll add the result of all of these  multiplications together now we take that single number after the addition and we pass it  through this non-linear what we call a non-linear activation function and that produces our final  output of the perceptron which we can call Y now this is actually not entirely accurate of  the picture of a perceptron there's one step that I forgot to mention here so in addition  to multiplying all of these inputs with their corresponding weights we're also now going to add  what's called a bias term here denoted as this w0 which is just a scalar weight and you can think  of it coming with a input of just one so that's going to allow the network to basically shift  its nonlinear activation function uh you know non-linearly right as it sees its inputs now  on the right hand side you 
can see this diagram mathematically formulated right as a single equation we can now rewrite this equation in linear algebra terms with vectors and dot products right so for example we can define our entire inputs X1 to XM as a large vector X right that large vector X can be matrix multiplied with our weights W this again is another vector of our weights W1 to WM taking their dot product not only multiplies them but it also adds the resulting terms together adding a bias like we said before and applying this non-linearity now you might be wondering what is this non-linear function I've mentioned it a few times already well I said it is a function right that we pass the outputs of the neural network through before we return it you know to the next neuron in the pipeline right so one common example of a nonlinear function that's very popular in deep neural networks is called the sigmoid function you can think of this as kind of a continuous version of a threshold function right it goes from zero to one and it can take as input any real number on the real number line and you can see an example of it illustrated on the bottom right hand side now in fact there are many types of nonlinear activation functions that are popular in deep neural networks and here are some common ones and throughout this presentation you'll actually see some examples of these code snippets on the bottom of the slides where we'll try and actually tie in some of what you're learning in the lectures to actual software and how you can implement these pieces which will help you a lot for your software labs explicitly so the sigmoid activation on the left is very popular since it's a function that outputs you know between zero and one so especially when you want to deal with probability distributions for example this is very important because probabilities live between 0 and 1.
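Since the lecture points to code snippets for these activation functions on the slides, here is a minimal sketch of what they look like using TensorFlow's built-in ops; the exact snippets on the slides may differ, and the input values below are made up purely for illustration.

```python
# Minimal sketch of the common activation functions discussed above,
# using TensorFlow's built-in ops (example values only).
import tensorflow as tf

z = tf.constant([-2.0, 0.0, 2.0])  # some example pre-activation values

sigmoid_out = tf.math.sigmoid(z)   # squashes any real number into (0, 1)
tanh_out = tf.math.tanh(z)         # squashes into (-1, 1)
relu_out = tf.nn.relu(z)           # 0 for negative inputs, identity for positive inputs
```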
in modern deep neural networks though the relu function which you can see on the far right hand side is a very popular activation function because it's piecewise linear it's extremely efficient to compute especially when computing its derivatives right its derivatives are constants except for the one nonlinearity at zero now I hope actually all of you are probably asking this question to yourself of why do we even need this nonlinear activation function it seems like it kind of just complicates this whole picture when we didn't really need it in the first place and I want to just spend a moment on answering this because the point of a nonlinear activation function is of course number one to introduce non-linearities to our data right if we think about our data almost all data that we care about all real world data is highly non-linear now this is important because if we want to be able to deal with those types of data sets we need models that are also nonlinear so they can capture those same types of patterns so imagine for example I gave you this data set of red points and green points and I asked you to try and separate those two types of data points now you might think that this is easy but what if I told you that you could only use a single line to do so well now it becomes a very complicated problem in fact you can't really solve it effectively with a single line and in fact if you introduce nonlinear activation functions to your solution that's exactly what allows you to you know deal with these types of problems nonlinear activation functions allow you to deal with non-linear types of data and that's what exactly makes neural networks so powerful at their core so let's understand this maybe with a very simple example walking through this diagram of a perceptron one more time imagine I give you this trained neural network with weights now not just the symbols W1 W2 I'm going to actually give you numbers at these locations right so the trained weights w0 will be 1 and W will be a vector of 3 and negative 2.
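To make this concrete in code, here is a minimal sketch of exactly this perceptron with bias w0 = 1 and weights W = [3, -2], which the walkthrough that follows steps through by hand; the NumPy implementation and the example input are my own illustration rather than the slide code.

```python
# Minimal sketch of the perceptron just described: bias w0 = 1, weights W = [3, -2].
# The three steps match the lecture: dot product, add the bias, apply the nonlinearity.
import numpy as np

w0 = 1.0                    # bias weight
W = np.array([3.0, -2.0])   # input weights

def perceptron(x):
    z = np.dot(x, W) + w0               # dot product plus bias
    return 1.0 / (1.0 + np.exp(-z))     # sigmoid nonlinearity g(z)

# An arbitrary example input (not from the slides): z = 1 + 3*2 + (-2)*1 = 5,
# so the output lands on the high side of the sigmoid, close to 1.
print(perceptron(np.array([2.0, 1.0])))
```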
so this neural network has two inputs like we  said before it has input X1 it has input X2 if we want to get the output of it this is also  the main thing I want all of you to take away from this lecture today is that to get the output  of a perceptron there are three steps we need to take right from this stage we first compute the  multiplication of our inputs with our weights sorry yeah multiply them together add  their result and compute a non-linearity it's these three steps that Define the forward  propagation of information through a perceptron so let's take a look at how that exactly  works right so if we plug in these numbers to the to those equations we can see that  everything inside of our non-linearity here the nonlinearity is G right that function G  which could be a sigmoid we saw a previous slide that component inside of our nonlinearity is  in fact just a two-dimensional line it has two inputs and if we consider the space of all of  the possible inputs that this neural network could see we can actually plot this on a decision  boundary right we can plot this two-dimensional line as as a a decision boundary as a plane  separating these two components of our space in fact not only is it a single plane there's a  directionality component depending on which side of the plane that we live on if we see an input  for example here negative one two we actually know that it lives on one side of the plane and  it will have a certain type of output in this case that output is going to be positive right because  in this case when we plug those components into our equation we'll get a positive number that  passes through the nonlinear component and that gets propagated through as well of course if  you're on the other side of the space you're going to have the opposite result right and that  thresholding function is going to essentially live at this decision boundary so depending on which  side of the space you live on that thresholding function that sigmoid function is going to then  control how you move to one side or the other now in this particular example this is very  convenient right because we can actually visualize and I can draw this exact full space  for you on this slide it's only a two-dimensional space so it's very easy for us to visualize  but of course for almost all problems that we care about our data points are not going to  be two-dimensional right if you think about an image the dimensionality of an image is going  to be the number of pixels that you have in the image right so these are going to be thousands  of Dimensions millions of Dimensions or even more and then drawing these types of plots like  you see here is simply not feasible right so we can't always do this but hopefully this gives  you some intuition to understand kind of as we build up into more complex models so now that we  have an idea of the perceptron let's see how we can actually take this single neuron and start  to build it up into something more complicated a full neural network and build a model from that  so let's revisit again this previous diagram of the perceptron if again just to reiterate one more  time this core piece of information that I want all of you to take away from this class is how a  perceptron works and how it propagates information to its decision there are three steps first is the  dot product second is the bias and third is the non-linearity and you keep repeating this process  for every single perceptron in your neural network let's simplify the diagram a little bit I'll 
get  rid of the weights and you can assume that every line here now basically has an Associated weight  scaler that's associated with it every line also has it corresponds to the input that's coming  in it has a weight that's coming in also at the on the line itself and I've also removed the bias  just for a sake of Simplicity but it's still there so now the result is that Z which let's call  that the result of our DOT product plus the bias is going and that's what we pass into  our non-linear function that piece is going to be applied to that activation function  now the final output here is simply going to be G which is our activation function of  Z right Z is going to be basically what you can think of the state of this neuron it's  the result of that dot product plus bias now if we want to Define and build up a  multi-layered output neural network if we want two outputs to this function for example  it's a very simple procedure we just have now two neurons two perceptrons each perceptron will  control the output for its Associated piece right so now we have two outputs each one is a normal  perceptron it takes all of the inputs so they both take the same inputs but amazingly now  with this mathematical understanding we can start to build our first neural network entirely  from scratch so what does that look like so we can start by firstly initializing these two  components the first component that we saw was the weight Matrix excuse me the weight  Vector it's a vector of Weights in this case and the second component is the the bias Vector  that we're going to multiply with the dot product of all of our inputs by our weights right so the  only remaining step now after we've defined these parameters of our layer is to now Define you know  how does forward propagation of information works and that's exactly those three main components  that I've been stressing to so we can create this call function to do exactly that to Define this  forward propagation of information and the story here is exactly the same as we've been seeing it  right Matrix multiply our inputs with our weights Right add a bias and then apply a non-linearity  and return the result and that literally this code will run this will Define a full net a full neural  network layer that you can then take like this and of course actually luckily for all  of you all of that code which wasn't much code that's been abstracted away by these  libraries like tensorflow you can simply call functions like this which will actually  you know replicate exactly that piece of code so you don't need to necessarily copy all of  that code down you just you can just call it and with that understanding you know we just saw  how you could build a single layer but of course now you can actually start to think about how  you can stack these layers as well so since we now have this transformation essentially from  our inputs to a hidden output you can think of this as basically how we can Define some  way of transforming those inputs right into some new dimensional space right perhaps closer  to the value that we want to predict and that transformation is going to be eventually learned  to know how to transform those inputs into our desired outputs and we'll get to that later but  for now the piece that I want to really focus on is if we have these more complex neural networks  I want to really distill down that this is nothing more complex than what we've already seen if we  focus on just one neuron in this diagram take is here for example Z2 right 
Z2 is this neuron that's  highlighted in the middle layer it's just the same perceptron that we've been seeing so far in this  class it was a its output is obtained by taking a DOT product adding a bias and then applying  that non-linearity between all of its inputs if we look at a different node for example Z3  which is the one right below it it's the exact same story again it sees all of the same inputs  but it has a different set of weight Matrix that it's going to apply to those inputs so we'll have  a different output but the mathematical equations are exactly the same so from now on I'm just  going to kind of simplify all of these lines and diagrams just to show these icons in the middle  just to demonstrate that this means everything is going to fully connect it to everything and  defined by those mathematical equations that we've been covering but there's no extra complexity in  these models from what you've already seen now if you want to Stack these types of Solutions on top  of each other these layers on top of each other you can not only Define one layer very easily but  you can actually create what are called sequential models these sequential models you can Define one  layer after another and they define basically the forward propagation of information not just  from the neuron level but now from the layer level every layer will be fully connected to the  next layer and the inputs of the secondary layer will be all of the outputs of the prior layer  now of course if you want to create a very deep neural network all the Deep neural network is is  we just keep stacking these layers on top of each other there's nothing else to this story that's  really as simple as it is once so these layers are basically all they are is just layers where the  final output is computed right by going deeper and deeper into this progression of different layers  right and you just keep stacking them until you get to the last layer which is your output layer  it's your final prediction that you want to Output right we can create a deep neural network to do  all of this by stacking these layers and creating these more hierarchical models like we saw very  early in the beginning of today's lecture one where the final output is really computed by you  know just going deeper and deeper into this system okay so that's awesome so we've now seen how  we can go from a single neuron to a layer to all the way to a deep neural network right  building off of these foundational principles let's take a look at how exactly we can use these  uh you know principles that we've just discussed to solve a very real problem that I think all  of you are probably very concerned about uh this morning when you when you woke up so that  problem is how we can build a neural network to answer this question which is will I how will  I pass this class and if I will or will I not so to answer this question let's see if we can  train a neural network to solve this problem okay so to do this let's start with a very simple  neural network right we'll train this model with two inputs just two inputs one input is going to  be the number of lectures that you attend over the course of this one week and the second input is  going to be how many hours that you spend on your final project or your competition okay so what  we're going to do is firstly go out and collect a lot of data from all of the past years that  we've taught this course and we can plot all of this data because it's only two input space we can  plot this data on a 
two-dimensional feature space right we can actually look at all of the students before you that have passed the class and failed the class and see where they lived in this space for the amount of hours that they've spent the number of lectures that they've attended and so on green points are the people who have passed red are those who have failed now and here's you right you're right here four and five are your coordinates in this space you fall right there and you've attended four lectures you've spent five hours on your final project we want to build a neural network to answer the question of will you pass the class or will you fail the class so let's do it we have two inputs one is four one is five these are two numbers we can feed them through a neural network that we've just seen how we can build and we feed that into a single layered neural network three hidden units in this example but we could make it larger if we wanted to be more expressive and more powerful and we see here that the probability of you passing this class is 0.1 which is pretty low so why would this be the case right what did we do wrong because I don't think it's correct right when we looked at the space it looked like actually you were a good candidate to pass the class but why is the neural network saying that there's only a 10 percent likelihood that you should pass does anyone have any ideas exactly exactly so this neural network is just uh like it was just born right it has no information about the world or this class it doesn't know what four and five mean or what the notion of passing or failing means right so exactly right this neural network has not been trained you can think of it kind of as a baby it hasn't learned anything yet so our job firstly is to train it and part of that understanding is we first need to tell the neural network when it makes mistakes right so mathematically we should now think about how we can answer this question which is did my neural network make a mistake and if it made a mistake how can I tell it how big of a mistake it was so that the next time it sees this data point it can do better and minimize that mistake so in neural network language those mistakes are called losses right and specifically you want to define what's called a loss function which is going to take as input your prediction and the true prediction right and how far away your prediction is from the true prediction tells you how big of a loss there is right so for example let's say we want to build a neural network to do classification of or sorry actually even before that I want to maybe give you some terminology so there are multiple different ways of saying the same thing in neural networks and deep learning so what I just described as a loss function is also commonly referred to as an objective function empirical risk a cost function these are all exactly the same thing they're all a way for us to train the neural network to teach the neural network when it makes mistakes and what we really ultimately want to do is not just minimize the mistake on one data point but over the entire data set we want to minimize all of the mistakes on average that this neural network makes so if we look at the problem like I said of binary classification will I pass this class or will I not there's a yes or no answer that means binary classification now we can use what's called a loss function of the softmax cross entropy loss.
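As a rough sketch of what that kind of loss looks like in code (the labs may compute it with a different call, and the label and prediction values below are made up): for a binary pass or fail output TensorFlow's built-in binary cross entropy can be used, and the softmax cross entropy the lecture mentions is the multi-class analogue of the same idea.

```python
# Minimal sketch of a cross entropy loss for the binary pass/fail example.
# The labels and predicted probabilities below are toy values for illustration.
import tensorflow as tf

y_true = tf.constant([[1.0], [0.0], [1.0]])   # true outcomes: passed, failed, passed
y_pred = tf.constant([[0.9], [0.2], [0.6]])   # the network's predicted probabilities of passing

bce = tf.keras.losses.BinaryCrossentropy()
loss = bce(y_true, y_pred)                    # averaged over this (toy) data set
```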
and for those of you who aren't familiar this notion of cross entropy was actually developed here at MIT by Claude Shannon who was a visionary he did his masters here over 50 years ago he introduced this notion of cross-entropy and that was you know pivotal in the ability for us to train these types of neural networks even now into the future so let's start by instead of predicting a binary cross-entropy output what if we wanted to predict a final grade of your class score for example that's no longer a binary output yes or no it's actually a continuous variable right it's the grade let's say out of 100 points what is the value of your score in the class project right for this type of loss we can use what's called a mean squared error loss you can think of this literally as just subtracting your predicted grade from the true grade and minimizing that distance apart so I think now we're ready to really put all of this information together and tackle this problem of training a neural network right to not just identify how erroneous it is how large its loss is but more importantly minimize that loss as a function of seeing all of this training data that it observes so we know that we want to find this neural network like we mentioned before that minimizes this empirical risk or this empirical loss averaged across our entire data set now this means that we want to find mathematically these W's right that minimize J of W where J of W is our loss function averaged over our entire data set and W are our weights so we want to find the set of weights that on average is going to give us the minimum the smallest loss as possible now remember that W here is just a list basically it's just a group of all of the weights in our neural network you may have hundreds of weights in a very very small neural network or in today's neural networks you may have billions or trillions of weights and you want to find what is the value of every single one of these weights that's going to result in the smallest loss as possible now how can you do this remember that our loss function J of W is just a function of our weights right so for any instantiation of our weights we can compute a scalar value of you know how erroneous would our neural network be for this instantiation of our weights so let's try and visualize for example a very simple example of a two-dimensional space where we have only two weights an extremely simple neural network here a very small two weight neural network and we want to find what are the optimal weights that would train this neural network we can plot basically the loss how erroneous the neural network is for every single instantiation of these two weights right this is a huge space it's an infinite space but still we can have a function that evaluates at every point in this space now what we ultimately want to do is again we want to find which set of W's will give us the smallest loss possible that means basically the lowest point on this landscape that you can see here where are the W's that bring us to that lowest point the way that we do this is actually just by firstly starting at a random place we have no idea where to start so pick a random place to start in this space and let's start there at this location let's evaluate our neural network we can compute the loss at this specific location and on top of that we can actually compute how the loss is changing we can compute the gradient of the loss because our loss function is a continuous function
right so we can actually compute derivatives of our function across the space of our weights and the gradient tells us the direction of the highest point right so from where we stand the gradient tells us where we should go to increase our loss now of course we don't want to increase our loss we want to decrease our loss so we negate our gradient and we take a step in the opposite direction of the gradient that brings us one step closer to the bottom of the landscape and we just keep repeating this process right over and over again we evaluate the neural network at this new location compute its gradient and step in that new direction we keep traversing this landscape until we converge to the minimum we can really summarize this algorithm which is known formally as gradient descent right so gradient descent simply can be written like this we initialize all of our weights right this can be two weights like you saw in the previous example it can be billions of weights like in real neural networks we compute this gradient the partial derivative of our loss with respect to the weights and then we can update our weights in the opposite direction of this gradient so essentially we just take this small step you can think of it which here is denoted as eta and we refer to this small step right this is commonly referred to as what's known as the learning rate it's like how much we want to trust that gradient and step in the direction of that gradient we'll talk more about this later but just to give you some sense of code this algorithm is very well translatable to real code as well for every line of the pseudocode you can see on the left you can see corresponding real code on the right that is runnable and directly implementable by all of you in your labs but now let's take a look specifically at this term here this is the gradient we touched very briefly on this in the visual example this explains like I said how the loss is changing as a function of the weights right so as the weights move around will my loss increase or decrease and that will tell the neural network if it needs to move the weights in a certain direction or not but I never actually told you how to compute this right and I think that's an extremely important part because if you don't know that then you can't well you can't train your neural network right this is a critical part of training neural networks and that process of computing this gradient is known as back propagation so let's do a very quick intro to back propagation and how it works so again let's start with the simplest neural network in existence this neural network has one input one output and only one neuron right this is as simple as it gets we want to compute the gradient of our loss with respect to our weight in this case let's compute it with respect to W2 the second weight so this derivative is going to tell us how much a small change in this weight will affect our loss if we change our weight a little bit in one direction will it increase our loss or decrease our loss so to compute that we can write out this derivative we can start with applying the chain rule backwards from the loss function through the output specifically what we can do is we can actually just decompose this derivative into two components the first component is the derivative of our loss with respect to our output multiplied by the derivative of our output with respect to W2 right
this is just a  standard um uh instantiation of the chain rule with this original derivative that we had on the  left hand side let's suppose we wanted to compute the gradients of the weight before that which in  this case are not W1 but W excuse me not W2 but W1 well all we do is replace W2 with W1 and that  chain Rule still holds right that same equation holds but now you can see on the red component  that last component of the chain rule we have to once again recursively apply one more chain rule  because that's again another derivative that we can't directly evaluate we can expand that  once more with another instantiation of the chain Rule and now all of these components we  can directly propagate these gradients through the hidden units right in our neural network all  the way back to the weight that we're interested in in this example right so we first computed  the derivative with respect to W2 then we can back propagate that and use that information  also with W1 that's why we really call it back propagation because this process occurs  from the output all the way back to the input now we repeat this process essentially many many  times over the course of training by propagating these gradients over and over again through  the network all the way from the output to the inputs to determine for every single weight  answering this question which is how much does a small change in these weights affect our loss  function if it increases it or decreases and how we can use that to improve the loss ultimately  because that's our final goal in this class foreign so that's the back propagation algorithm  that's that's the core of training neural networks in theory it's very simple it's it's really  just an instantiation of the chain rule but let's touch on some insights that make  training neural networks actually extremely complicated in practice even though the algorithm  of back propagation is simple and you know many decades old in practice though optimization of  neural networks looks something like this it looks nothing like that picture that I showed you  before there are ways that we can visualize very large deep neural networks and you can think  of the landscape of these models looking like something like this this is an illustration from  a paper that came out several years ago where they tried to actually visualize the landscape  a very very deep neural networks and that's what this landscape actually looks like that's what  you're trying to deal with and find the minimum in this space and you can imagine the challenges  that come with that so to cover the challenges let's first think of and recall that update  equation defined in gradient descent right so I didn't talk too much about this parameter Ada  but now let's spend a bit of time thinking about this this is called The Learning rate like we saw  before it determines basically how big of a step we need to take in the direction of our gradient  on every single iteration of back propagation in practice even setting the learning rate  can be very challenging you as you as the designer of the neural network have to set this  value this learning rate and how do you pick this value right so that can actually be quite  difficult it has really uh large consequences when building a neural network so for example  if we set the learning rate too low then we learn very slowly so let's assume we start on  the right hand side here at that initial guess if our learning rate is not large enough  not only do we converge slowly we actually 
don't even converge to the global minimum right because we kind of get stuck in a local minimum now what if we set our learning rate too high right what can actually happen is we overshoot and we can actually start to diverge from the solution the gradients can actually explode very bad things happen and then the neural network doesn't train so that's also not good in reality there's a very happy medium between setting it too small and setting it too large where you set it just large enough to kind of overshoot some of these local minima put you into a reasonable part of the search space where then you can actually converge on the solutions that you care most about but actually how do you set these learning rates in practice right how do you pick what is the ideal learning rate one option and this is actually a very common option in practice is to simply try out a bunch of learning rates and see what works the best right so try out let's say a whole grid of different learning rates and you know train all of these neural networks see which one works the best but I think we can do something a lot smarter right so what are some more intelligent ways that we could do this instead of exhaustively trying out a whole bunch of different learning rates can we design a learning rate algorithm that actually adapts to our neural network and adapts to its landscape so that it's a bit more intelligent than that previous idea so this really ultimately means that the learning rate the speed at which the algorithm is trusting the gradients that it sees is going to depend on how large the gradient is in that location and how fast we're learning and many other options that we might have as part of training neural networks right so it's not only how quickly we're learning you may judge it on many different factors in the learning landscape in fact all of these different algorithms that I'm talking about these adaptive learning rate algorithms have been very widely studied in practice there is a very thriving community in the deep learning research community that focuses on developing and designing new algorithms for learning rate adaptation and faster optimization of large neural networks like these and during your labs you'll actually get the opportunity to not only try out a lot of these different adaptive algorithms which you can see here but also try to uncover what are kind of the patterns and benefits of one versus the other and that's going to be something that I think you'll find very insightful as part of your labs so another key component of your labs that you'll see is how you can actually put all of this information that we've covered today into a single picture that looks roughly something like this which defines your model at the top here that's where you define your model we talked about this in the beginning part of the lecture for every piece in your model you're now going to need to define this optimizer which we've just talked about this optimizer is defined together with a learning rate right how quickly you want to optimize your loss landscape and over many loops you're going to pass over all of the examples in your data set and observe essentially how to improve your network that's the gradient and then actually improve the network in those directions and keep doing that over and over and over again until eventually your neural network converges to some sort of solution.
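As a rough sketch of what those adaptive optimizers look like in code (the exact set explored in the labs may differ, and the learning rates below are just typical default-style values rather than numbers from the slides):

```python
# Minimal sketch of swapping between optimizers / adaptive learning rate algorithms.
import tensorflow as tf

optimizer = tf.keras.optimizers.SGD(learning_rate=1e-2)       # plain gradient descent with a fixed step
# optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)    # adaptive per-parameter learning rates
# optimizer = tf.keras.optimizers.RMSprop(learning_rate=1e-3)
# optimizer = tf.keras.optimizers.Adagrad(learning_rate=1e-2)

# One update step, assuming a loss has been computed under a tf.GradientTape:
# grads = tape.gradient(loss, model.trainable_variables)
# optimizer.apply_gradients(zip(grads, model.trainable_variables))
```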
so I want to very quickly briefly in the remaining time that we have continue to talk about tips for training these neural networks in practice and focus on this very powerful idea of batching your data into well what are called mini batches of smaller pieces of data to do this let's revisit that gradient descent algorithm right so here this gradient that we talked about before is actually extraordinarily computationally expensive to compute because it's computed as a summation across all of the pieces in your data set right and in most real life or real world problems you know it's simply not feasible to compute a gradient over your entire data set data sets are just too large these days so you know there are some alternatives right what are the alternatives instead of computing the derivative or the gradients across your entire data set what if you instead computed the gradient over just a single example in your data set just one example well of course this estimate of your gradient is going to be exactly that it's an estimate it's going to be very noisy it may roughly reflect the trends of your entire data set but because it's only one example of your entire data set it may be very noisy right well the advantage of this though is that it's much faster to compute obviously the gradient over a single example because it's one example so computationally this has huge advantages but the downside is that it's extremely stochastic right that's the reason why this algorithm is not called gradient descent it's called stochastic gradient descent now what's the middle ground right instead of computing it with respect to one example in your data set what if we computed what's called a mini batch of examples a small batch of examples that we can compute the gradients over and when we take these gradients they're still computationally efficient to compute because it's a mini batch it's not too large maybe we're talking on the order of tens or hundreds of examples in our data set but more importantly because we've expanded from a single example to maybe 100 examples the stochasticity is significantly reduced and the accuracy of our gradient is much improved so normally we're thinking of mini batch sizes roughly on the order of tens or hundreds of data points this is much faster obviously to compute than gradient descent and much more accurate to compute compared to stochastic gradient descent which is that single point example so this increase in gradient accuracy allows us to essentially converge to our solution much quicker than would have been possible in practice due to gradient descent limitations it also means that we can increase our learning rate because we can trust each of those gradients much more right we're now averaging over a batch it's going to be much more accurate than the stochastic version so we can increase that learning rate and actually learn faster as well this allows us to also massively parallelize this entire algorithm in computation right we can split up batches onto separate workers and achieve even more significant speedups of this entire problem using GPUs the last topic that I very briefly want to cover in today's lecture is this topic of overfitting right when we're optimizing a neural network with stochastic gradient descent we have this challenge of what's called overfitting overfitting looks like this roughly right so on the left hand side we want to build a
neural network or let's say  in general we want to build a machine learning model that can accurately describe some patterns  in our data but remember we're ultimately we don't want to describe the patterns in our training data  ideally we want to define the patterns in our test data of course we don't observe test data we only  observe training data so we have this challenge of extracting patterns from training data and hoping  that they generalize to our test data so set in one different way we want to build models that can  learn representations from our training data that can still generalize even when we show them brand  new unseen pieces of test data so assume that you want to build a line that can describe or find  the patterns in these points that you can see on the slide right if you have a very simple neural  network which is just a single line straight line you can describe this data sub-optimally right  because the data here is non-linear you're not going to accurately capture all of the nuances  and subtleties in this data set that's on the left hand side if you move to the right hand  side you can see a much more complicated model but here you're actually over expressive you're  too expressive and you're capturing kind of the nuances the spurious nuances in your training  data that are actually not representative of your test data ideally you want to end up with the  model in the middle which is basically the middle ground right it's not too complex and it's not too  simple it still gives you what you want to perform well and even when you give it brand new data so  to address this problem let's briefly talk about what's called regularization regularization  is a technique that you can introduce to your training pipeline to discourage complex models  from being learned now as we've seen before this is really critical because neural networks  are extremely large models they are extremely prone to overfitting right so regularization  and having techniques for regularization has extreme implications towards the success of  neural networks and having them generalize Beyond training data far into our testing domain  the most popular technique for regularization in deep learning is called Dropout and the idea of  Dropout is is actually very simple it's let's revisit it by drawing this picture of deep neural  networks that we saw earlier in today's lecture in Dropout during training we essentially randomly  select some subset of the neurons in this neural network and we try to prune them out with some  random probabilities so for example we can select this subset of neural of neurons we can randomly  select them with a probability of 50 percent and with that probability we randomly turn them off  or on on different iterations of our training so this is essentially forcing the neural network  to learn you can think of an ensemble of different models on every iteration it's going to be exposed  to kind of a different model internally than the one it had on the last iteration so it has  to learn how to build internal Pathways to process the same information and it can't rely on  information that it learned on previous iterations right so it forces it to kind of capture some  deeper meaning within the pathways of the neural network and this can be extremely powerful  because number one it lowers the capacity of the neural network significantly right you're  lowering it by roughly 50 percent in this example but also because it makes them easier to  train because the number of Weights 
that have gradients in this case is also reduced so it's actually much faster to train them as well now like I mentioned on every iteration we randomly drop out a different set of neurons right and that helps the network generalize better and the second regularization technique which is actually a very broad regularization technique far beyond neural networks is simply called early stopping now we know the definition of overfitting is simply when our model starts to represent basically the training data more than the testing data that's really what overfitting comes down to at its core if we set aside some of the training data to use separately that we don't train on we can use it as kind of a synthetic testing data set in some ways we can monitor how our network is learning on this unseen portion of data so for example over the course of training we can basically plot the performance of our network on both the training set as well as our held out test set and as the network is trained we're going to see that first of all both of these decrease but there's going to be a point where the loss plateaus and starts to increase the test loss actually starts to increase and this is exactly the point where you start to overfit because now you're starting to overfit on your training data this pattern basically continues for the rest of training and this is the point that I want you to focus on right this middle point is where we need to stop training because after this point assuming that this test set is a valid representation of the true test set this is the place where the accuracy of the model will only get worse right so this is where we would want to early stop our model and regularize the performance and we can see that stopping anytime before this point is also not good we're going to produce an underfit model where we could have had a better model on the test data but it's this trade-off right you can't stop too late and you can't stop too early as well so I'll conclude this lecture by just summarizing these three key points that we've covered in today's lecture so far so we've first covered these fundamental building blocks of all neural networks which is the single neuron the perceptron we've built these up into larger neural layers and then from there into neural networks and deep neural networks we've learned how we can train these apply them to data sets back propagate through them and we've seen some tips and tricks for optimizing these systems end to end in the next lecture we'll hear from Ava on deep sequence modeling using RNNs and specifically this very exciting new type of model called the Transformer architecture and attention mechanisms so maybe let's resume the class in about five minutes after we have a chance to swap speakers and thank you so much for all of your attention thank you
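To tie the pieces of this lecture together in one place, here is a minimal end-to-end sketch in TensorFlow of a small dense network with dropout, a cross entropy loss, mini-batch training, and early stopping on a held-out set; the toy data, layer sizes, and hyperparameters are all made-up placeholders, and the code in the software labs will differ.

```python
# Minimal end-to-end sketch: dense layers, dropout, cross entropy loss,
# mini-batch gradient descent with an adaptive optimizer, and early stopping.
import tensorflow as tf

# Toy stand-in data: 2 input features (e.g. lectures attended, project hours), binary label.
x_train = tf.random.uniform((256, 2), minval=0.0, maxval=10.0)
y_train = tf.cast(tf.reduce_sum(x_train, axis=1, keepdims=True) > 10.0, tf.float32)
x_val = tf.random.uniform((64, 2), minval=0.0, maxval=10.0)
y_val = tf.cast(tf.reduce_sum(x_val, axis=1, keepdims=True) > 10.0, tf.float32)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dropout(0.5),                   # randomly drop 50% of activations while training
    tf.keras.layers.Dense(1, activation="sigmoid"), # probability of passing the class
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
    loss=tf.keras.losses.BinaryCrossentropy(),
)

early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss", patience=3, restore_best_weights=True  # stop once the held-out loss starts rising
)

model.fit(
    x_train, y_train,
    validation_data=(x_val, y_val),
    batch_size=32,     # mini-batches rather than the full data set per gradient step
    epochs=100,
    callbacks=[early_stop],
)
```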
thanks T so I'm Jean Ching and I'm also at the Google Cambridge office and I'm one of the authors of tensorflow so we're going to change topic and talk about tensorflow so I read your course material a little bit and I gathered that you all use tensorflow in previous parts of the lecture is that right great okay so my focus will be how to more efficiently write and debug machine learning models in tensorflow so the question is um whether you need to debug a machine learning model I think the answer is yes of course machine learning models are very different from traditional programs but they are software and they're code and if they're software and code they will have bugs and you need to debug your models from time to time so hopefully after this lecture you will know a little bit more about how to more efficiently debug your machine learning models in tensorflow so before we dive into debugging I want to talk about how machine learning models are represented in a computer because that turns out to be important for how you write and debug your programs so there are two ways in which a machine learning model can be represented so it's either a data structure or a program so if it's a data structure then when you write code to for example define a layer of a neural network you're actually building a graph and those lines of code when they're executed they don't actually do the computation they're just building the graph and the graph needs to be later fed into some kind of machinery some kind of execution engine to actually run the model and the second way in which you can define a machine learning model is to write it as a program so that's more straightforward so those lines of code will actually do the computation on either the CPU or GPU depending on whether you have a GPU or not um so the first paradigm is also called symbolic execution or deferred execution and the second one is also called eager execution or imperative execution so now the question for you is whether tensorflow is the first paradigm or the second paradigm so I heard someone say first second both yeah so I think it's a trick question right so and the answer is both so if you asked the question like half a year ago then the answer would be only the first but in the latest version of tensorflow we support both modes and I'm going to give some examples in the following slides so by default tensorflow is the first mode so that's the classical traditional tensorflow style so just to give you a refresher of how to use tensorflow to define a simple model you import tensorflow as tf and then you define some constants or maybe some variables as inputs and then you write a line to say like you want to multiply X and W and you want to add the result of the multiplication to another thing B right so you can think of this as a very simple linear regression model if you will now the important thing here is when this line is executed it's actually not doing the computation so the multiplication will not happen at this point if you print the result of this line Y there you will see it's not 40 it's not 10 times 4 equals 40 instead it's um it's like an abstract symbolic thing so it's called a tensor and it knows what kind of operation it needs to do when it's actually executed in the future so Mul is that operation it also knows information about like what its dependencies are which are X and W in this case but it's not shown in the printed message here and likewise when you do a tf.add when that
By "later" I mean the point at which you create a session, by calling tf.Session. When the tf.Session is created, it will automatically pull in the graph you have already built in the previous lines of code, and then you tell the session which tensor — which abstract symbol in the graph — you want to execute. It's going to analyze the structure of the graph, sort out all the dependencies, and topologically execute all the nodes in the graph, doing the multiplication first and the addition next, and then it gives you the final result, which is 42. You can think of tf.Session as an engine: it will run the model on the CPU if you only have a CPU, and it will run the model on the GPU if you have a GPU. Now, obviously, this paradigm of defining a model is not the most straightforward, because those lines of code that look like they're doing computation are not doing any actual computation, and you need to learn a new API called tf.Session. So why does TensorFlow do it this way? Because there are some advantages you get. The first advantage is that because the model is a data structure, it's relatively easy to serialize it and then deserialize it somewhere else. You can train your model and then load it onto other devices — mobile devices, or embedded devices like a Raspberry Pi, a car, or a robot — and you can also serialize the model and load it onto faster hardware like Google's TPU. These things are hard to do if your model is a Python program, because those devices may not have Python running on them, and even if they do, Python is probably not what you want to use there, because Python is sometimes slow. I have links in the slides — I'm going to send the slides to the course organizers — so you can click on those links if you're interested in any of these topics, like deployment on mobile devices and so on. The next advantage is that because your model is a data structure, you are not tied to the language in which the model is defined. Nowadays most machine learning models are written in Python, but your application server or web server may be running Java or C++, and you don't want to rewrite the whole stack in Python just to be able to add some machine learning to it. If the model is a data structure, you can save the model after training and load it into Java, C++, C#, or any of the other supported languages, and you will be ready to serve the trained model from your web server or application server. The other nice thing about representing the model as a data structure is that you can distribute the model very easily onto a number of machines called workers. Those workers will use the same graph and do exactly the same computation, but on different slices of the training data. This kind of distributed training is very important for cases where you need to train on a very large amount of data quickly — the kind of problem Google sometimes has to deal with. Of course, you have to slightly modify the model graph so that the shared things — the weight variables in the model — are shared on a server called a parameter server, but that's basically distributed training in TensorFlow in a nutshell.
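Continuing the same toy example, here is a minimal sketch of what running the graph in a session, and then writing the graph out as a serializable data structure, might look like. The /tmp/example path and file name are placeholders, and a real deployment would also need to save the trained variable values, for example with tf.train.Saver, which is not shown here:

```python
import tensorflow as tf

x = tf.constant(10)
W = tf.constant(4)
b = tf.constant(2)
z = tf.add(tf.multiply(x, W), b)          # still just graph building

with tf.Session() as sess:                # the execution engine
    print(sess.run(z))                    # 42 -- now the graph actually runs

    # Because the model is a data structure (a GraphDef protocol buffer),
    # it can be written out and later loaded from another language or device.
    tf.train.write_graph(sess.graph_def, "/tmp/example", "graph.pbtxt")
```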
So again, if you're interested, you can look at the slides and click that link to learn more about distributed training. Any questions so far? OK. Also, because you are representing your model as a data structure, you're not limited by the speed or the concurrency of the language in which the model is defined. We know that Python is sometimes slow, and even if you try to make Python parallel by writing multi-threaded code, you will run into the Python global interpreter lock, which will slow your model down, especially for the kind of computation a deep learning model needs to do. The way we solve this problem in symbolic execution is by sending the model, as a data structure, from the Python layer down into C++; in the C++ layer you can use concurrency and fully parallelize things, and that benefits the speed of the model. OK, so obviously there are all those advantages, but there are also shortcomings of symbolic execution. For example, it's less intuitive and harder to learn: you need to spend some time getting used to the idea that you define your model first and then run it later with tf.Session. It's also harder to debug when your model goes wrong, because every actual computation happens inside tf.Session, which is a single line of Python code calling out to C++, so you can't use the usual kinds of Python debuggers on it — though I'm going to show you that there are actually very good tools in TensorFlow for debugging things that happen inside tf.Session. Another shortcoming of symbolic execution is that it's harder to write control flow structures, by which I mean structures like looping over a number of things or if-else branches — the kinds of things we encounter every day in programming languages. Some machine learning models also need to do that: recurrent neural networks need to loop over things, and some fancy dynamic models need to do if-else branching and so on; I'll show some concrete examples in a moment. It's sometimes very hard to write that kind of control flow structure in symbolic execution, but it's much easier in eager execution. With eager execution your program can be more Pythonic, and it's easier to learn and easier to read. Here's an example: on the left you're seeing the same code as before, using the default symbolic execution of TensorFlow. How do we switch to the new eager execution? You just add two lines of code: you import the eager module and then you call a method called enable_eager_execution, and you don't have to make any other change to your program. But because of those few lines you have changed the semantics of the multiply and add lines: now, instead of building a graph, the multiply line is actually doing the multiplication of 10 and 4, and if you print y you will see the value, and if you print z you will also see the value. Everything is flatter and easier to understand; a quick sketch of that is below.
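A minimal sketch of the eager-mode version of the same toy example. Note that exactly where this call lives moved around during the 1.x series — it started out under tf.contrib.eager and later became tf.enable_eager_execution — so treat the import details as approximate; it must be called once, at program startup:

```python
import tensorflow as tf

tf.enable_eager_execution()   # switch to eager (imperative) mode

x = tf.constant(10)
W = tf.constant(4)
b = tf.constant(2)

y = tf.multiply(x, W)   # the multiplication happens right here
z = tf.add(y, b)

print(y)  # tf.Tensor(40, shape=(), dtype=int32)
print(z)  # tf.Tensor(42, shape=(), dtype=int32)
```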
Now, as I mentioned before, eager mode also makes it easier to write control flow and dynamic models. Here's an example. Suppose you want to write a recurrent neural network, which I think you have seen in previous parts of the course. In the default mode of TensorFlow, this is about the amount of code you need to write: you cannot use the native Python for loop or while loop; you have to use TensorFlow's special while loop, and in order to use it you have to define two functions, one for the termination condition of the loop and one for the body of the loop, then feed those two functions into the while loop and get tensors back. Remember, those tensors are not actual values — you have to send them into Session.run to get actual data. So there are a few hoops to jump through if you want to write an RNN from scratch in the default mode of TensorFlow. With eager execution things become much simpler: you can use the native for loop in Python to loop over the time steps of the input, and you don't have to worry about symbolic tensors or Session.run — the variables you get out of the for loop are the result of the computation. So eager mode makes it much easier to write so-called dynamic models. What do we mean by static models versus dynamic models? By static models we mean models whose structure doesn't change with the input data, and I think you have seen examples of that in the image-model sections of this course. The diagram here shows the Inception model in TensorFlow: the model can be very complicated, it can have hundreds of layers, and it can do complicated computation like convolution, pooling, and dense matrix multiplication, but the structure of the model is always the same. The image always has the same size and the same color depth, and the model always does the same computation regardless of how the image changes — that's what we mean by a static model. But there are also models whose structure changes with the input data. The recurrent neural network we just saw is an example of that, and the reason its structure changes is that it needs to loop over things: in the simplest kind of recurrent neural network it loops over the items in a sequence, like the words in a sentence, so you can say that the length of the model is proportional to the length of the input sentence. There are also more complicated ways the model structure can change with the input data: some of the state-of-the-art models that deal with natural language take the parse tree of a sentence as the input, and the structure of the model reflects that parse tree. As we have seen, it's complicated to write while loops or control flow structures in the default symbolic mode; if you want to write that kind of model it gets even more complicated, because you will need to nest conditional branches and while loops. But it's much easier to write in eager mode, because you can just use the native for loops, while loops, and if-else statements in Python. We actually have an example that shows how to write models that take parse trees as input and process natural language, so please check that out if you want. You have a question — isn't the tree static? Yes, the tree is static for this particular input, but you can have a longer sentence, and the parse of the sentence can change from one sentence to another, so the model structure changes as well. Basically, you can't hard-code the structure of the model: you have to look at the parse tree and use if-else statements and loops to turn the parse tree into the model.
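Going back to the loop comparison for a moment, here is a toy version of "loop over time steps" written both ways. The "RNN step" here is just an accumulation, not a real recurrent cell, so treat this purely as an illustration of the two loop styles:

```python
import tensorflow as tf

xs = tf.constant([[1.0], [2.0], [3.0]])      # 3 time steps, 1 feature each

# --- Graph (symbolic) mode: the loop is built into the graph ---------------
def cond(t, state):
    return t < tf.shape(xs)[0]               # termination condition

def body(t, state):
    return [t + 1, state + xs[t]]            # toy "RNN step": accumulate input

_, final_state = tf.while_loop(cond, body, [tf.constant(0), tf.zeros([1])])
with tf.Session() as sess:
    print(sess.run(final_state))             # [6.]

# --- Eager mode (enable at startup instead of using a Session) -------------
# tf.enable_eager_execution()
# state = tf.zeros([1])
# for t in range(3):                         # plain Python loop
#     state = state + xs[t]
# print(state)                               # tf.Tensor([6.], ...)
```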
OK, so we have seen that eager mode is very good for learning and debugging and for writing control flow structures, but sometimes you may still want to debug TensorFlow programs running in the default symbolic mode, and there are a few reasons for that. First, you may be using some older TensorFlow code that hasn't been ported to eager mode yet; some high-level APIs you might be using, like TF Learn, Keras, or TF-Slim, had not been ported to eager mode yet either; and you may want to stick to the default symbolic mode because you care about speed, since eager mode is sometimes slower than the default mode. The good news is that we have a tool in TensorFlow that can help you debug a model running inside tf.Session in the default mode, and that tool is called tfdbg, the TensorFlow debugger. The way you use it is kind of similar to the way you use eager execution: you import an additional module, and after you have created the session object, you wrap it, where the wrapper in this case is called the local command-line-interface debug wrapper. You don't need to make any other change to your code, because the wrapped object has the same interface as the unwrapped object. You can think of this as putting an oscilloscope on your tf.Session, which is otherwise opaque. Once you have wrapped the session, when Session.run is called you drop into a command-line interface, and you see a presentation of which intermediate tensors were executed in that Session.run call, the structure of the graph, and so on. I encourage you to play with that after the lecture. The debugger is also very useful for debugging a kind of bug in machine learning models that occurs quite often: numerical instability issues. By that I mean that sometimes values in the network become NaNs or infinities — NaN stands for "not a number" — and NaNs and infinities are bad floating-point values that will sometimes show up. If you don't have a specialized tool, it can be difficult to pinpoint the exact node in the graph that generates the NaNs and infinities, but the debugger has a special command with which you can run the model until any node in the graph contains NaNs or infinities. In our experience, the most common causes of NaNs and infinities are underflow and overflow: for example, if there's underflow, a number becomes zero, and when you use that number in a division or a log you get infinities; overflow can be caused by the learning rate being too high, or by some kind of bad training example that you haven't sanitized or preprocessed. Either way, the debugger should help you find the root cause of that kind of issue more quickly.
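Here is a minimal sketch of the session wrapping and the NaN/Infinity filter just described, using the tf_debug module that ships with TensorFlow 1.x; the model itself is just the toy graph from earlier:

```python
import tensorflow as tf
from tensorflow.python import debug as tf_debug

x = tf.constant(10.0)
W = tf.constant(4.0)
b = tf.constant(2.0)
z = x * W + b

sess = tf.Session()
# Wrap the session; it exposes the same interface, but Session.run now
# drops you into the tfdbg command-line interface.
sess = tf_debug.LocalCLIDebugWrapperSession(sess)

# Register the built-in NaN/Infinity filter so that, inside the CLI,
# "run -f has_inf_or_nan" keeps running until some tensor goes bad.
sess.add_tensor_filter("has_inf_or_nan", tf_debug.has_inf_or_nan)

print(sess.run(z))   # 42.0, shown through the debugger UI
```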
Now, tfdbg is a command-line tool, which is nice and low-footprint — you can use it even if you only have access to a machine through a terminal — but obviously it would be even nicer if we could debug TensorFlow models in a graphical user interface. So I'm excited to tell you about an upcoming feature of TensorFlow called the TensorBoard debugger plugin, a visual, graphical debugger for TensorFlow. It's not included in the latest release of TensorFlow, which is 1.5, but it's coming in the next release, 1.6, and it's available for preview in the nightly builds: you can copy and paste the commands from the slide, install the nightly builds of TensorFlow and TensorBoard, and use the feature. After you have installed these packages you can run a command — all the code in my slides is copy-and-paste executable. These are the main features of the upcoming tool, so if you're interested please copy and paste this code and try it out; this slide is just a reminder of those main features. OK, so as a summary: there are two ways to represent machine learning models in TensorFlow, or in any deep learning framework — either as a data structure or as a program. If it's a data structure, you get symbolic execution, which is good for deployment, distribution, and optimization; if it's a program, you get eager execution, which is good for prototyping, debugging, and writing control flow structures, and it's also easier to learn. TensorFlow currently supports both modes, so you can pick and choose the best mode for you depending on your needs. We also went over the TensorFlow debugger, both the command-line interface and the browser version, which will help you debug your models more efficiently. With that, I'd like to thank my colleagues on the Google Brain team, both at the Mountain View headquarters and here in Cambridge; Qi and Mahima are my two collaborators on the TensorBoard debugger plugin project. TensorFlow is an open-source project, and there have been over 1,000 contributors like you who have fixed bugs and contributed new features, so if you see anything interesting you can do, feel free to contribute to TensorFlow on GitHub. If you have questions please email me, and if you find any bugs or have feature requests for TensorFlow or TensorBoard, you can file them at these links. Thank you very much for your attention. Questions? [Applause]
MIT_6S191_Introduction_to_Deep_Learning
MIT_6S191_2018_Convolutional_Neural_Networks.txt
let's get started so good morning everyone my name is Alva and today we're going to learn about how deep learning can be used to build systems capable of perceiving images and making decisions based on visual information neural networks have really made a tremendous impact in this area over the past 20 years and I think to really appreciate how and why this has been the case it helps to take a step back way back to five hundred forty million years ago and the Cambrian explosion where biologists traced the evolutionary origins of vision the reason vision seemed so easy for us as humans is because we have had five hundred forty million years of data for evolution to train on compare that to bipedal movement human language and the difference is significant starting in around the 1960s there was a surge in interest in in both the neural basis of vision and in developing methods to systematically characterize visual processing and this eventually led to computer scientists wondering about how these findings from neuroscience could be applied to artificial intelligence and one of the biggest breakthroughs in our understanding of the neural basis of vision came from to scientists as at Harvard Hubel and Wiesel and they had a pretty simple experimental setup where they will where they were able to measure visual neural activity in the visual cortex of cats by recording directly for the electrical signals from neurons in this region of the brain and they displayed a simple stimulus on a screen and then probed the visual cortex to see which neurons fired in response to that stimulus there were a few key takeaways from this experiment that I'd like you to keep in mind as we go through today's lecture the first was that they found that there was an mechanism for spatial invariance they could record neural responses to particular patterns and this was constant regardless of the location of those patterns on the screen the second was that the neurons they recorded from had what they called a receptive field they certain neurons only responded to certain regions of the input while others responded to other regions finally they were able to tease out that there was a mapping a hierarchy to neural organization in the visual cortex there are cells that responded to more simple images such as rods and rectangles and then downstream layers of neurons they use the activations from these upstream neurons in their computation cognitive scientists and neuroscientists have since built off this early work to indeed confirm that the visual cortex is organized into layers and this hierarchy of layers allows for the recognition of increasingly complex features that for example allow us to immediately recognize the face of a friend so now that we've gotten a sense at a very high level of how our brains process visual information we can turn our attention to what computers see how does a computer process an image well to a computer images are just numbers so suppose we have a picture of Abraham Lincoln it's made up of pixels and since this is a grayscale image each of these pixels can be represented by just a single number so we can represent our image as a 2d array of numbers one for each pixel in the image and if we were to have a RGB color image not grayscale we can represent that with a 3d array where we have two D matrices for each of our g and b now that we have a way to represent images two computers we can next think about what types of computer vision tasks we can perform and in machine learning more broadly we 
can think of tasks of regression and those of classification in regression our output takes a continuous value and named classification a single class label so let's consider the task of image classification we want to predict a single label for a given input image for example let's say we have a bunch of images of US presidents and we want to build a classification pipeline to tell us which President is in an image outputting the probability that the image is of a particular President so in order to be able to classify these images our pipeline needs to be able to tell what is unique about a picture of Lincoln versus a picture of Washington versus a picture of Obama and another way to think about this problem at a high level is in terms of the features that are characteristic of a particular class of images classification would then be done by detecting the presence of these features in a given image if the if the features for a particular image are all present in an image we can then predict that class with a high probability so for our image classification pipeline our model needs to know first what the features are and secondly it needs to be able to detect those features in an image so one way we can solve this is to leverage our knowledge about a particular field and use this to manually define the features ourselves and classification pipeline would then try to detect these manually defined features in images and use the results of some detection algorithm for classification but there is a big problem with this approach remember that images are just 3d arrays of brightness values and image data has a lot of variation occlusion variations in illumination viewpoint variation intraclass variation and our classification pipeline has to be invariant to all these variations while still being sensitive to the variability between different classes even though our pipeline could use features that we the human define where this manual extraction would break down is in the detection task itself and this is due to the incredible variability that I just mentioned the detection of these features would actually be really difficult in practice because your detection algorithm would need to withstand each and every one of these different variations so how can we do better we want a way to both extract features and detect their presence in images automatically in a hierarchical fashion and we can use neural network based approaches to do exactly this to learn visual features directly from image data and to learn a hierarchy of these features to construct an internal representation of the image for example if we wanted to be able to classify images of faces maybe we could first learn and detect low-level features like edges and dark spots mid level features eyes ears and noses and then high-level features that actually resemble facial structure and neural networks will allow us to directly learn visual features in this manner if we construct them cleverly so in yesterday's first lecture we learned about fully connected neural networks where you can have multiple hidden layers and where each neuron in a given layer is connected to every neuron in the subsequent layer let's say we wanted to use a fully connected and neural network like the one you see here for image classification in this case our input image would be transformed into a vector of pixel values fed into the network and each neuron in the hidden layer would be connected to all neurons in the input layer I hope you can appreciate that all 
spatial information from our 2D array would be completely lost. Additionally, we'd have many, many parameters because this is a fully connected network, so it's not really going to be feasible to implement such a network in practice. So how can we use the spatial structure that's inherent in the input to inform the architecture of our network? The key insight is to connect patches of the input, represented as a 2D array, to neurons in the hidden layers. This is to say that each neuron in a hidden layer only sees a particular region of the input to that layer. This will not only reduce the number of parameters in our network but also allow us to leverage the fact that, in an image, pixels that are near each other are probably somehow related. To define connections across the whole input, we can apply this same principle by sliding the patch window across the entirety of the input image, in this case by two units. In this way we take into account the spatial structure that's present. But remember, our ultimate task is to learn visual features, and we can do this by smartly weighting the connections between a patch of our input and the neuron it's connected to in the next hidden layer, so as to detect particular features present in that input. Essentially, what this amounts to is applying a filter — a set of weights — to extract local features. It would also be useful for our classification pipeline to have many different features to work with, and we can do this by using multiple filters, multiple sets of weights. Finally, we want to share the parameters of each filter across all the connections between the input layer and the next layer, because the features that matter in one part of the input should matter elsewhere; this is the same concept of spatial invariance that I alluded to earlier. In practice, this amounts to a patch-based operation known as convolution. Let's first think about this at a high level. Suppose we have a 4x4 filter, which means we have 16 different weights. We're going to apply that same filter to 4x4 patches in the input and use the result of that operation to influence the state of the neuron in the next layer that this patch is connected to. Then we're going to shift our filter by two pixels, for example, and grab the next patch of the input. In this way we can start thinking about convolution at a very high level, but you're probably wondering how this actually works: what do we mean by features, and how does this convolution operation allow us to extract them? Hopefully we can make this concrete by walking through a couple of examples. Suppose we want to classify X's from a set of black-and-white images of letters, where black is equal to negative one and white is represented by a value of one. For classification, it would clearly not be possible to simply compare two matrices to see if they're equal: we want to be able to classify an X as an X even if it's shifted, shrunk, rotated, or deformed in some way. We want our model to compare images of an X piece by piece and look for the important pieces that define an X as an X — those are the features. If our model can find rough feature matches in roughly the same positions in two different images, it can get a lot better at seeing the similarity between different examples of X's. You can think of each feature as a mini-image, a small two-dimensional array of values, and we're going to use filters to pick up on the features common to X's.
In the case of an X, filters that can pick up on diagonal lines and a crossing will probably capture all the important characteristics of most X's. Note here that these smaller matrices are the filters of weights that we'll use in our convolution operation to detect the corresponding features in an input image. So now all that's left is to define the operation that will pick up on where these features appear in our image, and that operation is convolution. Convolution preserves the spatial relationship between pixels by learning image features in small squares of the input. To do this, we perform an element-wise multiplication between the filter matrix and a patch of the input image that's of the same dimensions as the filter. This results in a 3x3 matrix; in the example you see here, all entries in this matrix are one, and that's because there's a perfect correspondence between our filter and the region of the input that we're convolving it with. Finally, we sum all the entries of this matrix to get 9, and that's the result of our convolution operation. Let's consider one more example. Suppose now we want to compute the convolution of a 5x5 image and a 3x3 filter. To do this, we need to cover the entirety of the input image by sliding the filter over the image, performing the same element-wise multiplication and addition. Let's see what this looks like. We'll first start off in the upper-left corner, element-wise multiply this 3x3 patch with the entries of the filter, and then add; this results in the first entry of our output matrix, called the feature map. We'll next slide the 3x3 filter over by one to grab the next patch and repeat the same operation — element-wise multiplication and addition — which gives the second entry, and we continue in this way until we have covered the entirety of our 5x5 image. The resulting feature map reflects where in the input the filter we applied was activated.
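Here is a minimal NumPy sketch of that sliding element-wise-multiply-and-sum. Strictly speaking this is cross-correlation, which is what deep learning libraries call convolution, and the 5x5 image and 3x3 filter values below are made up purely for illustration, not the values from the slides:

```python
import numpy as np

image = np.array([                      # a made-up 5x5 binary input
    [1, 1, 1, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 0],
    [0, 1, 1, 0, 0],
], dtype=float)

kernel = np.array([                     # a made-up 3x3 filter of weights
    [1, 0, 1],
    [0, 1, 0],
    [1, 0, 1],
], dtype=float)

def conv2d(img, k, stride=1):
    kh, kw = k.shape
    oh = (img.shape[0] - kh) // stride + 1
    ow = (img.shape[1] - kw) // stride + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            patch = img[i*stride:i*stride+kh, j*stride:j*stride+kw]
            out[i, j] = np.sum(patch * k)   # element-wise multiply, then sum
    return out

print(conv2d(image, kernel))            # the 3x3 feature map
```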
Now that we've gone through the mechanics of the convolution operation, let's see how different filters can be used to produce different feature maps. On the left you'll see a picture of a woman's face, and next to it the output of applying three different convolutional filters to that same image, and you can appreciate that simply by changing the weights of the filters, we can detect different features from the image. So I hope you can now see how convolution allows us to capitalize on the spatial structure inherent to image data, use sets of weights to extract local features, and very easily detect different types of features simply by using different filters. These concepts of spatial structure and local feature extraction using convolution are at the core of the neural networks used for computer vision tasks, which are very appropriately named convolutional neural networks, or CNNs. First we'll take a look at a CNN architecture that's designed for image classification. There are three main operations in a CNN: first, convolution, which as we saw can be used to generate feature maps; second, nonlinearity, which we learned about in the first lecture yesterday, because image data is highly nonlinear; and finally, pooling, which is a downsampling operation. In training, we train our CNN model on a set of images, and we learn the weights of the convolutional filters that correspond to the feature maps in the convolutional layers; in the case of classification, we can feed the output of these convolutional layers into a fully connected layer to perform classification. Now we'll go through each of these operations to break down the basic architecture of a CNN. First, let's consider the convolution operation. As we saw yesterday, each neuron in a hidden layer will compute a weighted sum of its inputs, apply a bias, and eventually activate with a nonlinearity. What's special in CNNs is this idea of local connectivity: each neuron in a hidden layer only sees a patch of its inputs. We can now define the actual computation for a neuron in a hidden layer: its inputs are the neurons in the patch of the input layer that it's connected to; we apply a matrix of weights — our convolutional filter, 4x4 in this example — do an element-wise multiplication, add the outputs, and apply a bias. So this defines how neurons in convolutional layers are connected, but within a single convolutional layer we can have multiple different filters that we are learning, to be able to extract different features. What this means is that the output of a convolutional layer has a volume, where the height and the width are spatial dimensions; these spatial dimensions depend on the dimensions of the input layer, the dimensions of our filter, and how we slide the filter over the input — the stride. The depth of this output volume is then given by the number of different filters we apply in that layer. We can also think of how neurons in convolutional layers are connected in terms of their receptive field: the locations in the original input image that a node is connected to. These parameters define the spatial arrangement of the output of a convolutional layer. The next step is to apply a nonlinearity to this output volume. As was introduced in yesterday's lecture, we do this because the data is highly nonlinear, and in CNNs it's common practice to apply a nonlinearity after every convolution operation. A commonly used activation function is the ReLU, which is a pixel-by-pixel operation that replaces all negative values following convolution with zero. The last key operation in a CNN is pooling. Pooling is used to reduce dimensionality and to preserve spatial invariance. A common technique you'll very often see is max pooling, as shown in this example, and it's exactly what it sounds like: we simply take the maximum value in a patch of the input, in this case a 2x2 patch, and that determines the output of the pooling operation. I encourage you to think about other ways in which we could perform this sort of downsampling. So these are the key operations of a CNN, and we're now ready to put them together to actually construct our network. We can layer these operations to learn a hierarchy of features present in the image data. A CNN built for image classification can roughly be broken down into two parts: the first is the feature learning pipeline, where we learn features in input images through convolution, the introduction of nonlinearity, and the pooling operation; the second part is how we actually perform the classification. The convolutional and pooling layers output high-level features of the input data, and we feed these into fully connected layers to perform the actual classification. In practice, the output of the fully connected layers is a probability distribution for an input image's membership over a set of possible classes, and a common way to do this is with a function called softmax, where the output represents this categorical probability distribution.
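Putting those pieces together, here is a minimal sketch of this kind of classification CNN in tf.keras. The layer sizes, the 28x28x1 input shape, and the ten output classes are arbitrary choices for illustration, not the specific architecture shown on the slides:

```python
import tensorflow as tf

model = tf.keras.Sequential([
    # Feature learning: convolution -> nonlinearity -> pooling, repeated
    tf.keras.layers.Conv2D(32, (3, 3), activation="relu",
                           input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Conv2D(64, (3, 3), activation="relu"),
    tf.keras.layers.MaxPooling2D((2, 2)),

    # Classification head: flatten the feature maps, then fully connected
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),   # class probabilities
])

# Cross-entropy loss on the softmax output, optimized with backpropagation
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# model.fit(train_images, train_labels, epochs=5)  # train_* are placeholders
```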
Now that we've gone through all of the components of a CNN, all that's left is to discuss how to train it. Again, we'll go back to a CNN for image classification. During training we learn the weights of the convolutional filters — what features the network is detecting — and in this case we also learn the weights of the fully connected layers. Since the output for classification is a probability distribution, we can use cross-entropy loss for optimization with backpropagation. OK, so I'd like to take a closer look at CNNs for classification and discuss what is arguably the most famous example of a CNN: the ones trained and tested on the ImageNet dataset. ImageNet is a massive dataset with over 14 million images spanning 20,000 different categories. For example, there are nine different pictures of bananas shown from this dataset alone; even better, bananas are succinctly described as "an elongated crescent-shaped yellow fruit with soft sweet flesh," which both gives a pretty good description of what a banana is and speaks to its deliciousness. The creators of the ImageNet dataset have also created a set of visual recognition challenges built on this dataset, and the most famous is the ImageNet classification task, where challengers are tasked with producing a list of the object categories present in a given image, across 1,000 different categories. The results that neural networks have had on this classification task are pretty remarkable: 2012 was the first time a CNN won this challenge, with the famous AlexNet CNN, and since then neural networks have dominated the competition and the error has kept decreasing, surpassing human error in 2015 with the ResNet architecture. But with improved accuracy, the number of layers in these networks has been increasing quite dramatically, so there's a trade-off here: build your network deeper, and how deep can you go? So far we've talked only about using CNNs for image classification, but in reality this idea of using convolutional layers to extract features can extend to a number of different applications. When we considered a CNN for classification, we saw that we could think of it in two parts: feature learning and classification. What is at the core of convolutional neural networks is the feature learning pipeline; after that, we can really change what follows to suit the application we desire. For example, the portion following the convolutional layers may look different for different image classification domains, and we can also introduce new architectures beyond fully connected layers for tasks such as segmentation and image captioning. So today I'd like to go through three different applications of CNNs beyond image classification: semantic segmentation, where the task is to assign each pixel in an image an object class to produce a segmentation of that image; object detection, where we are tasked with detecting instances of semantic objects in an image; and image captioning, where the task is to generate a short description of the image that captures its semantic content. First, let's talk about semantic segmentation with fully convolutional networks, or FCNs. Here the network again takes an image input of arbitrary size, but now it has to produce a correspondingly sized output where each pixel has been assigned a class label, which we can then visualize as a segmentation. As we saw before with CNNs for image classification, we first have a series of convolutional layers, which are downsampling operations for feature extraction, and this results in a hierarchy of learned features.
The difference now is that we have a series of upsampling operations to increase the resolution of the output from the convolutional layers, and we then combine the output of these upsampling operations with the outputs from our downsampling path to produce a segmentation. One application of this sort of architecture is the real-time segmentation of a driving scene. Here the network has an encoder-decoder architecture: the encoder is very similar to what we discussed earlier — convolutional layers that learn a hierarchy of feature maps — and the decoder portion of the network uses the indices from the pooling operations to upsample from these feature maps and output a segmentation. Another way CNNs have been extended and applied is object detection, where we're trying to learn features that characterize particular regions of the input and then classify those regions. The original pipeline for doing this, called R-CNN, is pretty straightforward: given an input image, the algorithm extracts region proposals bottom-up, computes features for each of these proposals using convolutional layers, and then classifies each region proposal. But there's a huge downside: in their original pipeline, this group extracted about 2,000 different region proposals, which meant they had to run 2,000 CNNs for feature extraction, and that takes a really, really long time. Since then there have been extensions of this basic idea, one being to first run the CNN on the input image to extract features and then get region proposals from the feature maps. Finally, let's consider image captioning. Suppose we're given an image of a cat riding a skateboard. In classification, our task is to output a class label for this image: cat. As we've probably hammered home by now, this is done by feeding our input image through a set of convolutional layers, extracting features, and then passing these features on to fully connected layers to predict a label. In image captioning, we want to generate a sentence that describes the semantic content of the same image. So let's take the same network from before and remove the fully connected layers at the end. Now we only have convolutional layers to extract features, and we'll replace the fully connected layers with a recurrent neural network. The output of the convolutional layers gives us a fixed-length encoding of the features present in our input image, which we can use to initialize an RNN that we can then train to predict the words that describe this image.
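Here is a toy sketch of that "CNN encoder initializes an RNN decoder" idea in tf.keras. Every size here (image resolution, vocabulary size, caption length, embedding dimension) is a hypothetical placeholder, and a real captioning system would also need tokenization, teacher forcing during training, and a decoding strategy at inference time, none of which are shown:

```python
import tensorflow as tf
from tensorflow.keras import layers

vocab_size, max_len, embed_dim = 5000, 20, 256   # hypothetical sizes

# Encoder: convolutional feature extractor ending in a fixed-length vector
image_in = tf.keras.Input(shape=(128, 128, 3))
x = layers.Conv2D(32, 3, activation="relu")(image_in)
x = layers.MaxPooling2D()(x)
x = layers.Conv2D(64, 3, activation="relu")(x)
x = layers.GlobalAveragePooling2D()(x)
init_state = layers.Dense(embed_dim, activation="relu")(x)  # image encoding

# Decoder: an RNN initialized with the image encoding, predicting words
caption_in = tf.keras.Input(shape=(max_len,))               # word indices
e = layers.Embedding(vocab_size, embed_dim)(caption_in)
h = layers.LSTM(embed_dim, return_sequences=True)(
        e, initial_state=[init_state, init_state])          # [h0, c0]
word_probs = layers.Dense(vocab_size, activation="softmax")(h)

model = tf.keras.Model([image_in, caption_in], word_probs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```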
So now that we've talked about convolutional neural networks and their applications, we can introduce some tools that have recently been designed to probe and visualize the inner workings of a CNN, to get at this question of what the network is actually seeing. First off, a few years ago a paper published an interactive visualization tool for a convolutional neural network trained on a dataset of handwritten digits, a very famous dataset called MNIST. You can play around with this tool to visualize the behavior of the network given a digit that you yourself draw in. What you're seeing here are the feature maps for this seven that someone has drawn, and as we can see, in the first layer the first six filters are showing primarily edge detection, while deeper layers start to pick up on corners, crosses, and curves — more complex features — exactly the hierarchy that we introduced at the beginning of this lecture. A second method, which you yourself will have the chance to play around with in the second lab, is called class activation maps, or CAMs. CAMs generate a heat map that indicates the regions of an image to which a CNN for classification attends in its final layers, and the way this is computed is the following: we choose an output class that we want to visualize; we then obtain the weights from the last fully connected layer, because these represent the importance of each of the final feature maps in outputting that class; and we compute our heat map as simply a weighted combination of the final convolutional feature maps, using the weights from the last fully connected layer. We can apply CAMs to visualize the activation maps for the most likely predictions of an object class for one image, as you see on the left, and also to visualize the image regions used by the CNN to identify different instances of one object class, as you see on the right.
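A minimal NumPy sketch of that weighted-combination step. The shapes, the random arrays standing in for real activations and weights, and the chosen class index are all hypothetical placeholders for illustration:

```python
import numpy as np

# Hypothetical shapes: the last convolutional layer produced 512 feature maps
# of size 7x7, and the final dense layer has one weight per feature map for
# each of 1000 classes.
feature_maps = np.random.rand(7, 7, 512)     # stand-in for real activations
fc_weights   = np.random.rand(512, 1000)     # stand-in for real FC weights
class_idx    = 281                           # the class we want to visualize

# CAM: weighted sum of the final feature maps, one weight per map
cam = np.tensordot(feature_maps, fc_weights[:, class_idx], axes=([2], [0]))

# Normalize to [0, 1] so it can be upsampled and overlaid as a heat map
cam = np.maximum(cam, 0)
cam = cam / (cam.max() + 1e-8)
print(cam.shape)                             # (7, 7)
```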
So to conclude this talk, I'd like to briefly consider how deep learning for computer vision has made an impact over the past several years. The advances that have been made in deep learning for computer vision would really not have been possible without the availability of large, well-annotated image datasets, and this has really facilitated the progress that's been made. Some of these datasets include ImageNet, which we discussed; MNIST, the dataset of handwritten digits, which was used in some of the first big CNN papers; Places, a database from here at MIT of scenes and landscapes; and CIFAR-10, which contains images from the ten different categories listed here. The impact has been broad and deep, spanning a variety of different fields, everything from medicine to self-driving cars to security. One area in which convolutional neural networks made a big impact early on was facial recognition software, and if you think about it, nowadays this software is pretty much ubiquitous, from social media to security systems. Another area that has generated a lot of excitement as of late is autonomous vehicles and self-driving cars. NVIDIA has a research team working on a CNN-based system for end-to-end control of self-driving cars: their pipeline feeds a single image from a camera on the car into a CNN that directly outputs a single number, the predicted steering wheel angle. The man you see in this video is actually one of our guest lecturers, and on Thursday we'll hear about how his team is developing this platform. Finally, we've also seen a significant impact in the medical field, where deep learning models have been applied to the analysis of a whole host of different types of medical images. Just this past year, a team from Stanford developed a CNN that could achieve dermatologist-level classification of skin lesions. You can imagine — and this is what they actually did — having an app on your phone where you take a picture, upload that picture to the app, and it's fed into a CNN that generates a prediction of whether or not that lesion is reason for concern. So to summarize what we've covered in today's lecture: we first considered the origins of the computer vision problem, how we can represent images as arrays of pixel values, and what convolutions are and how they work. We then discussed the basic architecture of CNNs, and finally we extended this to consider some different applications of convolutional neural networks and also talked a bit about how we can visualize their behavior. With that I'd like to conclude; I'm happy to take questions after the lecture portion is over. It's now my pleasure to introduce our next speaker, a special guest, Professor Aaron Courville from the University of Montreal. Professor Courville is one of the creators of generative adversarial networks, and he'll be talking to us about deep generative models and their applications, so please join me in welcoming him. [Applause]
MIT_6S191_Introduction_to_Deep_Learning
MIT_Introduction_to_Deep_Learning_6S191.txt
[Music] good afternoon everyone and welcome to MIT sus1 191 my name is Alexander amini and I'll be one of your instructors for the course this year along with Ava and together we're really excited to welcome you to this really incredible course this is a very fast-paced and very uh intense one week that we're about to go through together right so we're going to cover the foundations of a also very fast-paced moving field and a field that has been rapidly changing over the past eight years that we have taught this course at MIT now over the past decade in fact even before we started teaching this course Ai and deep learning has really been revolutionizing so many different advances and so many different areas of science meth mathematics physics and and so on and not that long ago we were having new types of we were having challenges and problems that we did not think were necessarily solvable in our lifetimes that AI is now actually solving uh Beyond human performance today and each year that we teach this course uh this lecture in particular is getting harder and harder to teach because for an introductory level course this lecture lecture number one is the lecture that's supposed to cover the foundations and if you think to any other introductory course like a introductory course 101 on mathematics or biology those lecture ones don't really change that much over time but we're in a rapidly changing field of AI and deep learning where even these types of lectures are rapidly changing so let me give you an example of how we introduced this course only a few years ago hi everybody and welcome to MIT 6s one91 the official introductory course on deep learning taught here at MIT deep learning is revolutionizing so many fields from robotics to medicine and everything in between you'll learn the fundamentals of this field and how you can build so these incredible algorithms in fact this entire speech and in video are not real and were created using deep learning and artificial intelligence and in this class you'll learn how it has been an honor to speak with you today and I hope you enjoy the course the really surprising thing about that video to me when we first did it was how viral it went a few years ago so just in a couple months of us teaching this course a few years ago that video went very viral right it got over a million views within only a few months uh people were shocked with a few things but the main one was the realism of AI to be able to generate content that looks and sounds extremely hyperrealistic right and when we did this video when we created this for the class only a few years ago this video took us about $10,000 and compute to generate just about a minute long video extremely I mean if you think about it I would say it's extremely expensive to compute something what we look at like that and maybe a lot of you are not really even impressed by the technology today because you see all of the amazing things that Ai and deep learning are producing now fast forward today the progress in deep learning yeah and people were making all kinds of you know exciting remarks about it when it came out a few years ago now this is common stuff because AI is really uh doing much more powerful things than this fun little introductory video so today fast forward four years about yeah four years to today right now where are we AI is now generating content with deep learning being so commoditized right deep learning is in all of our fingertips now online in our smartphones and so on in fact we 
can use deep learning to generate these types of hyperrealistic pieces of media and content entirely from English language without even coding anymore right so before we had to actually go in train these models and and really code them to be able to create that one minute long video today we have models that will do that for us end to end directly from English language so we can these models to create something that the world has never seen before a photo of an astronaut riding a horse and these models can imagine those pieces of content entirely from scratch my personal favorite is actually how we can now ask these deep learning models to uh create new types of software even themselves being software to ask them to create for example to write this piece of tensorflow code to train a neural network right we're asking a neural network to write t flow code to train another neural network and our model can produce examples of functional and usable pieces of code that satisfy this English prompt while walking through each part of the code independently so not even just producing it but actually educating and teaching the user on what each part of these uh code blocks are actually doing you can see example here and really what I'm trying to show you with all of this is that this is just highlighting how far deep learning has gone even in a couple years since we've started teaching this course I mean going back even from before that to eight years ago and the most amazing thing that you'll see in this course in my opinion is that what we try to do here is to teach you the foundations of all of this how all of these different types of models are created from the ground up and how we can make all of these amazing advances possible so that you can also do it on your own as well and like I mentioned in the beginning this introduction course is getting harder and harder to do uh and to make every year I don't know where the field is going to be next year and I mean that's my my honest truth or even honestly in even one or two months time from now uh just because it's moving so incredibly fast but what I do know is that uh what we will share with you in the course as part of this one week is going to be the foundations of all of the tech technologies that we have seen up until this point that will allow you to create that future for yourselves and to design brand new types of deep learning models uh using those fundamentals and those foundations so let's get started with with all of that and start to figure out how we can actually achieve all of these different pieces and learn all of these different components and we should start this by really tackling the foundations from the very beginning and asking ourselves you know we've heard this term I think all of you obviously before you've come to this class today you've heard the term deep learning but it's important for you to really understand how this concept of deep learning relates to all of the other pieces of science that you've learned about so far so to do that we have to start from the very beginning and start by thinking about what is intelligence at its core not even artificial intelligence but just intelligence right so the way I like to think about this is that I like to think that in elligence is the ability to process information which will inform your future decision-mak abilities now that's something that we as humans do every single day now artificial intelligence is simply the ability for us to give computers that same ability to 
process information and inform future decisions. Machine learning is simply a subset of artificial intelligence: it is the science of teaching computers how to do that processing of information and decision making directly from data. So instead of hard-coding rules into machines and programming them the way we used to in software engineering classes, we now try to learn that processing of information, and the decision-making that follows from it, directly from data. Going one step deeper, deep learning is simply the subset of machine learning that uses neural networks to do that: neural networks that ingest raw, unprocessed data at very large scale and use it to inform future decisions. That is exactly what this class is all about; if I had to summarize it in one line, it is about teaching machines how to process data, process information, and learn to make decisions from that data.

The program is split between two parts: technical lectures, of which this is the first, and software labs. We have several new updates this year covering the rapidly changing advances in AI, especially in some of the later lectures. Today's first lecture covers the foundations of neural networks themselves, starting with the building block of every neural network, the perceptron, and we will conclude the week with a series of exciting guest lectures from the industry-leading sponsors of the course. On the software side, after every lecture you will get hands-on project-building experience so you can take what we teach and deploy it in real code, and at the very end of the class you can take part in the project pitch competition, a Shark Tank-style competition across all of your projects, with some really awesome prizes.

Let me step through the syllabus briefly. Each day has a dedicated software lab that mirrors the technical lectures and helps reinforce what you learn, and each is coupled with prizes for the top-performing solutions. It starts today with Lab 1 on music generation: you will build a neural network that listens to a collection of songs and learns to compose brand-new songs in the same genre. Tomorrow, Lab 2 is on computer vision: you will build a facial detection system from scratch using convolutional neural networks (you will learn what that means tomorrow), and you will also learn how to de-bias such systems, removing biases that exist in facial detection, which remains a huge problem for the state-of-the-art solutions that exist today. Finally, a brand-new lab at the end of the course focuses on large language models: you will take a multi-billion-parameter large language model, fine-tune it to build an assistive chatbot, and evaluate a range of its cognitive abilities, from mathematics to scientific reasoning to logic. At the very end there is the final project pitch competition, up to five minutes per team, all accompanied by great prizes, so there will be a lot of fun to be had throughout the week.

There are many resources to help with this class; you will see them posted here, and you don't need to write them down because all of the slides are already posted online. Please post to Piazza if you have any questions. We also have an amazing teaching team this year, and you can reach out to any of us; Piazza is a great place to start. Myself and Ava will be the two main lecturers, especially Monday through Wednesday, and in the second half of the course we will hear some amazing guest lectures covering the state of the art of deep learning in industry, outside of academia, which you will definitely want to attend. Very briefly, a huge thanks to all of our sponsors, without whose support this course, like every year, would not be possible.

Okay, now let's start with the fun stuff, and my favorite part of the course: the technical content. Let's begin by asking ourselves why we care about deep learning, why you all came here today to listen to this course. To understand that, we need to go back a bit and look at how machine learning used to be performed. Machine learning would typically define a set of features, which you can think of as a set of things to look for in an image or in a piece of data. Usually these were hand-engineered, so humans had to define them themselves, and the problem is that hand-crafted features tend to be very brittle in practice, simply by nature of a human defining them. The key idea of deep learning, and what you are going to see throughout this entire week, is the paradigm shift away from hand-engineering features and rules, toward learning them directly from raw data: what are the patterns we need to look for in data sets such that, if we find those patterns, we can make interesting decisions and take interesting actions? For example, if we wanted to learn how to detect faces, think about how you would do it yourself: looking at a picture, you look for particular patterns such as eyes, noses, and ears, and when those are composed in a certain way, you deduce that it is a face. Computers do something very similar: they have to learn what patterns to look for, what the eyes, noses, and ears of a piece of data are, and then detect and predict from them.

The really interesting thing about deep learning is that these foundations, picking out the building blocks and the features from raw data, and the underlying algorithms themselves, have existed for many, many decades. So the question to ask is: why are we studying this now, and why is the field exploding with so many advances right now? There are three reasons. Number one, the data available to us today is significantly more pervasive. These models are hungry for data (you will learn about this in more detail), and we are living in a world where data is more abundant than it has ever been in our history. Second, these algorithms are massively compute-hungry and massively parallelizable, which means they have benefited greatly from compute hardware that is itself parallelizable, namely the GPU. GPUs can run parallel streams of processing and are particularly amenable to deep learning algorithms, and the abundance of that hardware has pushed forward what we can do. And finally, the software: the open-source tools that serve as the foundational building blocks for building and deploying these models have become extremely streamlined, making it easy for all of us to learn about these technologies within an amazing one-week course like this.

So, with that background, let's start with the fundamental building block of a neural network: the perceptron. Every single neural network is built up of multiple perceptrons, and you are going to learn how those perceptrons compute information themselves and how they connect up into much larger, billion-parameter neural networks. The key idea of a perceptron, or, even simpler, a single neuron (a neural network is composed of many neurons, and a perceptron is just one of them), is actually extremely simple, and I hope that by the end of today the way a perceptron processes information is completely clear to you. Let's start with the forward propagation of information through a single neuron. A neuron can ingest multiple pieces of information; here you can see a neuron taking three inputs, x1, x2, and xm. We define the set of inputs x1 through xm, and each input is multiplied element-wise by a corresponding weight, denoted w1 through wm; you should think of each weight as being assigned to its input, with the weights belonging to the neuron itself. You multiply each input by its weight, add all of those products together into a single number, and pass that number through what is called a nonlinear activation function to produce the final output, which we call y. Now, what I just described is not entirely correct: I left out one critical piece of information, the bias term.
That bias term is what allows the neuron to shift its activation function horizontally along the x-axis. On the right side you can now see a diagram expressing mathematically, as one single equation, what I just walked through conceptually, and we can rewrite it using linear algebra, with vectors and dot products. Our inputs are described by a capital X, a vector of all the inputs x1 through xm, and our weights by a capital W, the weights w1 through wm. We take the dot product of X and W, which performs the element-wise multiplication and then sums the results, we add the bias term, here called w0, and then we apply the nonlinearity, denoted g, to that sum z. In other words, the output is y = g(w0 + X · W).

I've mentioned this nonlinearity, this activation function, a few times, so let's dig into what it actually does. One commonly used activation function is the sigmoid function, which you can see on the bottom right-hand side of the screen. The sigmoid takes as input any real number (the x-axis runs to plus and minus infinity) and squashes it into a number between 0 and 1 on the y-axis, which makes it a very common choice when you want outputs that behave like probabilities, or when you want a neuron to learn a probability distribution. In fact there are many different nonlinear activation functions used in neural networks, and here are some common ones. Throughout this presentation, and the entire course, you will see these little TensorFlow icons at the bottom of the slides; they relate the foundational knowledge in the lectures to the software labs and give you a good starting point for the pieces you will implement later. The sigmoid activation, shown on the left, is popular for probability distributions because it squashes everything between 0 and 1. You can see two other very common activation functions in the middle and on the right-hand side. The most popular activation function today is the one on the far right, the ReLU, or rectified linear unit: it is linear everywhere except for a break, a nonlinearity, at x = 0. The benefit is that it is very easy and fast to compute, just two linear pieces joined together, while still providing the nonlinearity that we need; we'll talk about why we need it in one second.
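To make those three steps concrete, here is a minimal sketch of a single perceptron forward pass in NumPy. The particular input values and weights are just made up for illustration; nothing here is taken from the official course labs.

```python
import numpy as np

def sigmoid(z):
    # squashes any real number into the range (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

def perceptron_forward(x, w, w0):
    # 1) dot product of inputs and weights, 2) add the bias, 3) apply the nonlinearity
    z = np.dot(x, w) + w0
    return sigmoid(z)

x = np.array([1.0, 2.0, 3.0])    # m = 3 example inputs
w = np.array([0.5, -1.0, 0.25])  # one weight per input
w0 = 0.1                         # bias term
print(perceptron_forward(x, w, w0))
```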
So why do we need a nonlinearity in the first place? Why not just pass all of these inputs through a linear function? The whole point of the activation function is to introduce nonlinearity, because we want our neural networks to be able to deal with nonlinear data; the world is extremely nonlinear, and real data sets simply look that way. If you look at a data set like this one, green and red points, and I ask you to build a neural network that separates them, you need a nonlinear function to do it; you cannot solve this problem with a single line. In fact, if we used linear activation functions, then no matter how big or how deep the neural network is, it would still be a linear function, because a composition of linear functions is linear, and the best it could do to separate these green and red points would look like this. Adding nonlinearities lets our networks be more expressive and capture more of the complexity in the data with fewer parameters, which makes them much more powerful in the end.

Let's understand this with a simple example. Imagine I give you a trained neural network, meaning I now give you the weights, not just the inputs. Say the bias w0 is 1 and the weight vector W is (3, -2); don't worry for now about how we got those weights. The network has two inputs, x1 and x2. To get its output we do the same thing as before: take the dot product of the inputs with the weights, add the bias, and apply the nonlinearity. Those are the three components to remember from this class: dot product, add the bias, apply a nonlinearity, and that process repeats over and over for every single neuron. Each neuron then outputs a single number. If we look at what is inside the nonlinearity g, it is just a weighted combination of the inputs with the weights, plus the bias, which produces a single number. But for any input this model could see, that expression defines a line in two dimensions, because the model has two inputs, so we can actually plot the line and see exactly how this neuron separates points in the plane of x1 and x2. We can visualize its entire space and interpret exactly what the neuron is doing, plotting the line where that expression equals zero. In fact, if I give the neuron a new data point, say x1 = -1 and x2 = 2, an arbitrary point in this two-dimensional space, we can plot that point, and which side of the line it falls on tells us the sign of the answer and the answer itself. Following the equation at the top and plugging in -1 and 2, we get 1 - 3 - 4 = -6, and passing that through the nonlinearity g gives a final output of roughly 0.002 (the sigmoid of -6). Don't worry about the exact number; the important point is that the sigmoid divides the space into two halves. It squashes everything between 0 and 1, but implicitly it splits the outputs into those below 0.5 and those above 0.5, depending on whether its input is negative or positive. The line is exactly where the input to the sigmoid is zero: fall on the negative side of the line and the output is below 0.5, fall on the positive side and the output is above 0.5. This space is called the feature space of the network, and for this simple neuron we can visualize it completely and interpret exactly what the model will do for any input it sees. Of course, this is a very simple case: it is one neuron, not a neural network, and a very simple neuron at that, with only two inputs. In reality, the networks you'll deal with in this course have millions or even billions of parameters, so drawing plots like this becomes impossible.
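The arithmetic of that trained two-input neuron can be checked directly; here is a small sanity-check sketch (mine, not from the original slides):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w0 = 1.0                   # bias from the trained example
w = np.array([3.0, -2.0])  # weights from the trained example
x = np.array([-1.0, 2.0])  # the new data point (x1, x2)

z = w0 + np.dot(x, w)      # 1 + 3*(-1) + (-2)*2 = -6
y = sigmoid(z)             # about 0.002, i.e. well below 0.5
print(z, y)                # the point lies on the negative side of the line w0 + x.w = 0
```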
Now that we have the intuition behind a single perceptron, let's start building full neural networks and see how everything comes together. Let's revisit the perceptron diagram. If there is only one thing to take away from this lecture, it is how a perceptron works, because that equation matters for every class that comes after today, and it is just three steps: dot product with the inputs, add a bias, apply your nonlinearity. Let me simplify the diagram a little. I'll remove the weight labels, so from now on you can assume that every line I draw has an associated weight, and I'll also drop the bias term for simplicity, assuming every neuron has one. Note that the result z, the dot product plus the bias before the nonlinearity, is linear; it is just a weighted sum, because we have not applied the nonlinearity yet. The final output is g(z), the nonlinear activation function applied to z.

Now, what if we want a multi-output function, with two outputs instead of one? We simply put two neurons in the layer. Every neuron sees all of the inputs that came before it, but the top neuron predicts its own answer and the bottom neuron predicts its own answer. Importantly, each neuron has its own weights, its own lines coming into it, so they act independently, though they can communicate later on if you add another layer.

Let's take this a step further and think about it programmatically: what if we wanted to program this neural network ourselves, from scratch? Remember, the equation is not very complex: take a dot product, add a bias (a single number), and apply a nonlinearity. To define a layer, which is a collection of neurons, we first define the weights and the bias for that layer, since every neuron has weights and a bias. Then we define how information propagates through it by creating a call function: it takes the inputs, what we previously called X, and does the same thing we have been seeing all class: matrix-multiply (take the dot product of) the inputs with the weights, add the bias, and apply the nonlinearity. That's it; we have now created a single-layer neural network, and that last line, the nonlinearity, is exactly what lets it be more than a linear model. The important thing to note is that modern deep learning toolboxes and libraries already implement all of this for you. It is important to understand the foundations, but in practice all of that layer logic is implemented in tools like TensorFlow and PyTorch as a dense layer. Here you can see an example of initializing a dense layer with two neurons and feeding it an arbitrary set of inputs, in this case three; in code it reduces to one line of TensorFlow, making these layers extremely easy and convenient to create and call.
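As one possible sketch of that from-scratch layer, mirroring (but not identical to) the lab code, here is a layer built with TensorFlow's layer-subclassing API; the sigmoid activation and the initializers are my own choices for illustration:

```python
import tensorflow as tf

class MyDenseLayer(tf.keras.layers.Layer):
    """A from-scratch dense layer: dot product, add bias, apply nonlinearity."""

    def __init__(self, num_outputs):
        super().__init__()
        self.num_outputs = num_outputs

    def build(self, input_shape):
        # one weight per (input, output) pair, plus one bias per output neuron
        self.W = self.add_weight(shape=(int(input_shape[-1]), self.num_outputs),
                                 initializer="random_normal", trainable=True)
        self.b = self.add_weight(shape=(1, self.num_outputs),
                                 initializer="zeros", trainable=True)

    def call(self, inputs):
        z = tf.matmul(inputs, self.W) + self.b   # dot product + bias
        return tf.math.sigmoid(z)                # nonlinearity

# the built-in equivalent: a dense layer with 2 neurons
layer = tf.keras.layers.Dense(units=2, activation="sigmoid")
```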
Now let's look at a single-hidden-layer neural network, where we have one layer between our inputs and our outputs; we are slowly and progressively increasing the complexity of the network so we can build up all of these blocks. The layer in the middle is called a hidden layer, obviously because you don't directly observe or supervise it; you do observe the input and output layers, but the hidden layer is a layer of neurons you never see directly. It simply gives your network more capacity, more learning complexity. Since we now have one transformation from inputs to hidden units and another from hidden units to outputs, we have a two-layered neural network, which means we also have two weight matrices: not just the W1 that creates the hidden layer, but also a W2 that transforms the hidden layer into the output layer. To answer a question from the audience about whether the hidden layer has a nonlinearity or is just linear: yes, every hidden layer also has a nonlinearity accompanying it, and that is a very important point, because without it you would just have one very large linear function followed by a single nonlinearity at the very end. You need that cascading, repeated application of nonlinearities throughout the network.

Okay, so now let's zoom in on a single unit in the hidden layer, take this one, and call it z2, the second neuron in the first layer. It is the same perceptron we saw before: we compute its answer by taking a dot product of its weights with its inputs, adding a bias, and applying a nonlinearity. If we took a different hidden node, say z3 right below it, we would compute its answer in exactly the same way, except that its weights would be different from the weights of z2; everything else stays the same, and it sees the same inputs. The picture is getting a little messy, so let's clean it up: I'll remove all the individual lines and replace them with symbols denoting what we call a fully connected layer, where everything in the input is connected to everything in the output and the transformation is exactly as before: dot product, bias, nonlinearity. Again, in code this is extremely straightforward with the foundations we have built: we just define two of these dense layers, the hidden layer on line one with n hidden units, and the output layer with two output units. Another question came up: does the nonlinearity have to be the same between layers? No, it does not. Often it is, for convenience, but there are cases where you want it to be different; in lecture two you will even see different nonlinearities within the same layer, let alone different layers. Unless you have a particular reason, though, the convention is to keep them the same.

Now let's keep expanding our knowledge a little more. If we want to make a deep neural network, not just a neural network, all that means is that we keep stacking these layers on top of one another, one by one, creating a hierarchical model in which the final output is computed by going deeper and deeper into the network. Doing this in code follows exactly the same story as before: we just cascade these TensorFlow layers on top of each other, going deeper into the network.
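Cascading the layers in code might look like the following sketch; the hidden layer sizes n1 and n2 here are arbitrary placeholders, not values from the lecture:

```python
import tensorflow as tf

n1, n2 = 32, 16  # hidden layer sizes, chosen arbitrarily for illustration

model = tf.keras.Sequential([
    tf.keras.layers.Dense(n1, activation="relu"),   # hidden layer 1
    tf.keras.layers.Dense(n2, activation="relu"),   # hidden layer 2
    tf.keras.layers.Dense(2),                       # output layer with 2 units
])
```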
This is great, because at this point we have a solid foundational understanding of how to define not just a single neuron but an entire neural network, and you should be able to explain how information flows from the inputs through the whole network to compute an output. So now let's look at how we can apply these networks to solve a very real problem that I'm sure all of you care about: will I pass this class? To build an AI system that answers this question, let's start with a simple two-feature model. The two features we'll use are, one, how many lectures you attend, and two, how many hours you spend on your final project. We can look at data from past years of this class: every point here is a person, its position records how many lectures they attended and how much time they spent on their final project, and its color records whether they passed or failed. You can visualize this feature space, the same kind of space we talked about before. And then there is you: you fall right here, at the point (4, 5); you have attended four lectures and you will spend five hours on the final project, and you want a neural network, given everyone else I have seen in previous years, to tell you the likelihood that you will pass the class.

So let's do it; we now have all the building blocks to solve this problem with a neural network. We have two inputs, the number of lectures attended and the number of hours spent on the final project, four and five, which we feed into x1 and x2. These go into a single-hidden-layer neural network with three hidden units, and we see that the final predicted probability of you passing this class is 0.1, or 10 percent. A very bleak outcome, and it's wrong: the actual probability is 1. Attending four of the five lectures and spending five hours on the final project puts you in a part of the feature space where people clearly passed. So what happened here? Why did the network get this so terribly wrong? Exactly: the network was never trained. We haven't shown it any of that data, the green and red points. You should really think of neural networks like babies: before they see data, they haven't learned anything, and there is no reason to expect them to solve any of these problems before we teach them something about the world. So let's teach this network something about the problem. To train it, we first need to tell the network when it makes bad decisions, to inform it when its answer is wrong so it can learn to get the answer right, much like how we as humans learn. We quantify this with what is called a loss: the closer the prediction is to the ground truth, for example the true probability of 1 versus the predicted 0.1, the smaller the loss, and the more accurate the model.
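A minimal sketch of that untrained forward pass is shown below. The three hidden units match the lecture's example, but the ReLU hidden activation and the sigmoid output are assumptions on my part, and the printed probability will be essentially arbitrary because the weights are randomly initialized on every run:

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(3, activation="relu"),      # 3 hidden units
    tf.keras.layers.Dense(1, activation="sigmoid"),   # probability of passing
])

x = tf.constant([[4.0, 5.0]])   # 4 lectures attended, 5 hours on the project
print(model(x))                 # an arbitrary probability: the network is untrained
```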
Now let's assume we have data not just from one student but from the many students who have taken this class before, and we feed all of them through the network. We care not only about how the network does on this one prediction, but about how it does across all of the different people it is shown during training. So when training a neural network we want to find the network that minimizes the empirical loss between its predictions and the ground-truth outputs, averaged over all of the inputs the model has seen. If we look at this as a binary classification problem, will I pass the class or not, a zero-or-one outcome, we can use what is called the softmax cross-entropy loss to tell the network how right or wrong its answer is. Think of it as an objective function: it measures how far apart two probability distributions are, here the predicted distribution and the true one, so that we can give the network feedback and push it toward a better answer. Now suppose that instead of a binary output we want to predict a real-valued output, any number between plus and minus infinity, for example the grade you get in the class, which need not lie between 0 and 1, or even between 0 and 100. Since the outputs are no longer a probability distribution, we use a different loss, for example the mean squared error between the true grade and the predicted grade: take the difference of the two numbers and square it, so the sign doesn't matter and we are measuring an absolute distance, and that is the quantity we want to minimize.

Great, so let's put this loss information together with the problem of finding our network into one unified training problem. We want a network that does well on all of this data on average, which is how we contextualized the problem earlier. Effectively, we are trying to find the weights of the network, that big vector W we talked about earlier in the lecture: compute that vector W for me, based on all of the data we have seen. The vector W also determines the loss: given a particular W, we can compute how badly the network performs on our data, its deviation from the ground truth. And remember, W is just a group of numbers, a very big list or vector of weights, one set for every layer and every neuron in the network.
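In code, those two loss choices might look like the sketch below, with binary cross-entropy standing in for the cross-entropy loss in the binary pass/fail case and mean squared error for the real-valued grade case; the specific grades 92 and 75 are made-up numbers:

```python
import tensorflow as tf

# binary classification: compare a true probability with a predicted probability
bce = tf.keras.losses.BinaryCrossentropy()
loss_class = bce(y_true=[[1.0]], y_pred=[[0.1]])    # large, because 0.1 is far from 1.0

# regression: compare a true grade with a predicted grade
mse = tf.keras.losses.MeanSquaredError()
loss_grade = mse(y_true=[[92.0]], y_pred=[[75.0]])  # (92 - 75)^2 = 289

print(float(loss_class), float(loss_grade))
```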
We want to find that vector of weights, given a lot of data: that is the problem of training a neural network. Remember that the loss is just a function of the weights. If we have only two weights, w1 and w2, we can plot the loss landscape over this two-dimensional space: for every configuration of those two weights, the loss takes a particular value, shown here as the height of the surface. What we want is the lowest point: the weights for which the loss is as small as possible, since smaller is better. How do we find it? We start somewhere in this space; we don't know where to start, so we pick a random place. From there, we compute what is called the gradient of the landscape at that point, a very local estimate of which direction the slope is increasing at our current location. That tells us not only where the slope goes up but, more importantly, if we negate it, where it goes down. So we take a small step in the opposite direction of the gradient, stepping down into the landscape and changing our weights so that the loss decreases. Then we repeat: compute a new gradient at the new point, take another small step, and keep doing this over and over until we converge at what is called a local minimum. Depending on where we started it may not be the global minimum of the landscape, but this simple algorithm is guaranteed to converge to a local minimum.

Let's summarize the algorithm, which is called gradient descent, first in pseudocode and then in actual code. There are a few steps: initialize the weights somewhere random in the weight space; compute the gradient of the loss with respect to the weights; take a small step in the opposite direction; and keep repeating this in a loop until convergence, until we essentially stop moving and the network settles where it is supposed to end up. We'll come back to that small step, the factor we multiply the gradient by, later in this lecture. For now, the code mirrors the pseudocode very nicely: we randomly initialize the weights, which happens every time you train a neural network, and then loop (here shown without even a convergence check, just looping forever): compute the loss at the current weights, compute the gradient, which tells us which way is up, then negate the gradient, multiply it by the small step called the learning rate, denoted lr, and move the weights by that small step.
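Here is one way that loop might look with TensorFlow's automatic differentiation. The `compute_loss` function is a placeholder with a known minimum at (3, -2), chosen purely so the sketch is self-contained; in practice it would be the loss of your network on your data:

```python
import tensorflow as tf

lr = 0.01                                             # the "small step": the learning rate
weights = tf.Variable(tf.random.normal(shape=(2,)))   # random initialization

def compute_loss(w):
    # stand-in loss with a known minimum at w = (3, -2), purely for illustration
    return tf.reduce_sum((w - tf.constant([3.0, -2.0])) ** 2)

for step in range(1000):                              # "until convergence" (fixed steps here)
    with tf.GradientTape() as tape:
        loss = compute_loss(weights)
    grad = tape.gradient(loss, weights)               # which way is "up" in the loss landscape
    weights.assign_sub(lr * grad)                     # take a small step in the opposite direction

print(weights.numpy())                                # approaches (3, -2)
```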
step so let's take a deeper look at this term here this is called the gradient right this tells us which way is up in that landscape and this again it tells us even more than that it tells us how is our landscape how is our loss changing as a function of all of our weights but I actually have not told you how to compute this so let's talk about that process that process is called back propagation we'll go through this very very briefly and we'll start with the simplest neural network uh that's possible right so we already saw the simplest building block which is a single neuron now let's build the simplest neural network which is just a one neuron neural network right so it has one hidden neuron it goes from input to Hidden neuron to output and we want to compute the gradient of our loss with respect to this weight W2 okay so I'm highlighting it here so we have two weights let's compute the gradient first with respect to W2 and that tells us how much does a small change in w 2 affect our loss does our loss go up or down if we move our W2 a little bit in One Direction or another so let's write out this derivative we can start by applying the chain rule backwards from the loss through the output and specifically we can actually decompose this law this uh derivative this gradient into two parts right so the first part we're decomposing it from DJ dw2 into DJ Dy right which is our output multiplied by Dy dw2 right this is all possible right it's a chain rule it's a I'm just reciting a chain rule here from calculus this is possible because Y is only dependent on the previous layer and now let's suppose we don't want to do this for W2 but we want to do it for W1 we can use the exact same process right but now it's one step further right we'll now replace W2 with W1 we need to apply the chain rule yet again once again to decompose the problem further and now we propagate our old gradient that we computed for W2 all the way back one more step uh to the weight that we're interested in which in this case is W1 and we keep repeating this process over and over again propagating these gradients backwards from output to input to compute ultimately what we want in the end is this derivative of every weight so the the derivative of our loss with respect to every weight in our neural network this tells us how much does a small change in every single weight in our Network affect the loss does our loss go up or down if we change this weight a little bit in this direction or a little bit in that direction yes I think you use the term neuron is perceptron is there a functional difference neuron and perceptron are the same so typically people say neural network which is why like a single neuron it's also gotten popularity but originally a perceptron is is the the formal term the two terms are identical Okay so now we've covered a lot so we've covered the forward propagation of information through a neuron and through a neural network all the way through and we've covered now the back propagation of information to understand how we should uh change every single one of those weights in our neural network to improve our loss so that was the back propop algorithm in theory it's actually pretty simple it's just a chain rule right there's nothing there's actually nothing more than than just the chain Rule and the nice part that deep learning libraries actually do this for you so they compute back prop for you you don't actually have to implement it yourself which is very convenient but now it's important to touch on 
Even though the theory of backpropagation is not that complicated, it is important to look at it from the practical side, thinking ahead to your own implementations. Optimizing neural networks in practice is a completely different story: it is not straightforward at all, and it is usually very computationally intensive. Here is an illustration from a paper from a few years ago that attempted to visualize the loss landscape of a very deep neural network. Earlier we had a two-dimensional depiction, but real networks are not two-dimensional; they have millions or billions of dimensions, and with some clever techniques you can try to visualize what those landscapes look like. It turns out they can look extremely messy. The important point is that if you run this algorithm starting from a bad place, then depending on your network you may not end up at the global solution, so your initialization matters a lot, and you need to traverse these local minima while trying to find the global one. Even beyond that, you want to construct networks whose loss landscapes are more amenable to optimization than this one; there are techniques we can apply that smooth out the landscape and make networks easier to optimize.

Recall the update equation from gradient descent, which has that parameter we skipped over: the small step, a small number we multiply with the gradient direction, saying in effect, "don't go all the way in this direction, just take a small step." In practice, even setting this one number, the learning rate, can be difficult. If it is too small, the model can get stuck in local minima, and it converges very slowly even when it doesn't get stuck. If it is too large, the updates can overshoot, and in practice training can diverge and explode without ever finding a minimum. Ideally we want learning rates that are neither too small nor too large: large enough to escape the shallow local minima, but small enough not to diverge, something that can overshoot the local minima, settle into a better one, and then stabilize there. So how do we actually set the learning rate in practice? Idea number one is very basic: try a bunch of different learning rates and see what works. That is actually not a bad process, and it is one that people really use. But let's see if we can do something smarter and design algorithms that adapt to the landscape. There is no reason the learning rate has to be a single fixed number: we can have learning rates that adapt to the model, to the data, to the gradients the optimizer is seeing, increasing or decreasing as a function of the gradients of the loss, of how fast we are learning, and of many other signals. In fact there are many widely used procedures, adaptive optimizers, for setting the learning rate, and in your labs we encourage you to try several of them and to play with the effect of increasing or decreasing the learning rate; you will see very striking differences.

Someone also asked why, since it seems like a bounded interval, we don't just search directly for the absolute minimum. A few things. Number one, it is not a closed space: every weight can range from minus to plus infinity, so even a one-dimensional network with a single weight has an unbounded search space. In practice it is far worse, because you have billions of dimensions, each of them unbounded, so exhaustively testing every possible configuration of weights is simply not practical, even for a very small network.

In your labs you can put all of the information in this picture into practice: one part defines your model, and another defines your optimizer, which we previously wrote as the gradient descent update and here call SGD, stochastic gradient descent (we'll talk about that name in a second). That optimizer could also be any of the adaptive optimizers; you can and should swap them out and test different choices to see their impact on training, and you will gain valuable intuition from doing so.
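In code, trying different fixed learning rates or swapping in an adaptive optimizer is essentially a one-line change. A sketch, with all of the specific values chosen arbitrarily:

```python
import tensorflow as tf

# Idea 1: sweep over a handful of fixed learning rates by hand and compare.
for lr in [1e-4, 1e-3, 1e-2, 1e-1]:
    optimizer = tf.keras.optimizers.SGD(learning_rate=lr)
    # ...train for a few epochs with this optimizer and record the final loss...

# Idea 2: use an adaptive optimizer that adjusts its effective step sizes during training.
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)
# Other common adaptive choices include, for example:
#   tf.keras.optimizers.Adagrad(learning_rate=1e-2)
#   tf.keras.optimizers.RMSprop(learning_rate=1e-3)
```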
To wrap up this lecture I want to talk very briefly about tips for training neural networks in practice, focusing on a powerful idea called batching: not looking at all of your data at once. Let's briefly revisit the gradient descent algorithm. The gradient computation, the backpropagation algorithm, is computationally expensive, and it is even worse than that, because we previously defined it as a sum over every single data point in the entire data set; the loss is an average over all the data, so the gradient sums over all the data too. For most real-life problems this would be completely infeasible: the data sets and the models are simply too big to compute gradients over everything on every single iteration. And remember, this is not a one-time cost: you keep taking small steps, so you keep needing to repeat this process.

So instead, let's define a new variant called SGD, stochastic gradient descent. Rather than computing the gradient over the entire data set, pick a single training point and compute the gradient from just that one point. The nice thing is that this gradient is much cheaper to compute; the downside is that it is very noisy, very stochastic, since it was computed from just one example, so there is a trade-off. What's the middle ground? Take not one data point and not the full data set, but a batch of data, a mini-batch; in practice something like 32 examples is a common batch size. This gives us an estimate of the true gradient, obtained by averaging the gradients of those 32 samples. It is still fast, because 32 is much smaller than the whole data set, and while it is still noisy, that is usually fine in practice because we can iterate much more quickly. Since the batch size B is normally not large, think tens or hundreds of samples, this is very fast to compute compared to regular gradient descent and much more accurate than single-sample stochastic gradient descent. That increase in the accuracy of the gradient estimate lets us converge to a solution significantly faster; it is not only about speed per step, the better gradients get us to the solution in fewer steps, which ultimately means we can train faster and save compute. The other really nice thing about mini-batches is that they allow us to parallelize the computation, a concept we touched on earlier in the class: the examples in a batch, say those 32 pieces of data, can be split across different workers, different parts of the GPU, giving us even more significant speedups with GPU architectures and hardware.
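A sketch of mini-batching with tf.data is shown below; the random x_train and y_train arrays are placeholders standing in for whatever real training data you have, and the layer sizes are arbitrary:

```python
import tensorflow as tf

batch_size = 32
x_train = tf.random.normal((1024, 2))          # placeholder data: 1024 examples, 2 features
y_train = tf.random.uniform((1024, 1))

dataset = (tf.data.Dataset.from_tensor_slices((x_train, y_train))
           .shuffle(buffer_size=1024)
           .batch(batch_size))

model = tf.keras.Sequential([tf.keras.layers.Dense(8, activation="relu"),
                             tf.keras.layers.Dense(1)])
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)
loss_fn = tf.keras.losses.MeanSquaredError()

for x_batch, y_batch in dataset:               # one gradient step per mini-batch
    with tf.GradientTape() as tape:
        loss = loss_fn(y_batch, model(x_batch))
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
```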
Finally, the last topic I want to cover before we end this lecture and move on to lecture two is overfitting. This is not a deep-learning-specific problem at all; it exists in all of machine learning. The key question it addresses is whether your model is actually capturing the true structure of your data, or just learning subtle details that are spuriously correlated with it. Said differently: we want to build models that learn representations from the training data that still generalize to brand-new, unseen test points. That is the real goal: we teach the model something from a lot of training data, but we don't ultimately care about doing well on the training data itself; we care about doing well when the model is deployed in the real world and sees things it never saw during training. Overfitting is exactly that failure: the model does very well on the training data but badly at test time, because it has overfit to the training data it saw. On the other hand there is underfitting, shown on the left: not fitting the data enough, so you get similar performance on the training and test distributions, but both underperform what your system is actually capable of. Ideally you want to end up somewhere in the middle: not so complex that you memorize every nuance of the training data, as on the right, but still able to perform well on brand-new data, so you are not underfitting either.

To address this problem in neural networks, and in machine learning in general, there are a few techniques you should know, because you will need them in your solutions and software labs. The key concept is called regularization. Said very simply, regularization is any technique that discourages the model from learning those nuances of the training data, and as we have seen, it is critical for models to generalize, because what we really care about is the test data, not the training data. The most popular regularization technique for you to understand is a very simple idea called dropout. Look again at this picture of a deep neural network that we have been using all lecture. With dropout, during training we randomly set some of the activations, the outputs of the neurons, to zero with some probability. Say that probability is 50 percent: then for every activation in the network, with probability 0.5 we set it to zero before passing it on to the next neuron, so effectively half of the neurons are shut off on that forward pass and information flows only through the other half. This idea is extremely powerful because it lowers the capacity of the network, and it lowers it dynamically: on the next iteration a different random 50 percent of the neurons is dropped, so the network is constantly forced to build different pathways from input to output and can never rely too heavily on any small set of features in the training data.
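Adding dropout in code is a single extra layer after each hidden layer; a sketch with a drop probability of 0.5 and arbitrary layer sizes:

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dropout(0.5),   # during training, zero out ~50% of these activations
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dropout(0.5),   # a different random 50% is dropped on every step
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
# note: Keras only applies dropout when training=True; at test time all units are used
```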
The second regularization technique is a notion called early stopping, which is model-agnostic: you can apply it to any type of model as long as you have a held-out test set to monitor. The idea starts from the fact that we already have a fairly formal definition of overfitting: it is when the model starts to perform worse on the test set. So let's plot, over the course of training, the loss on both the training set and the test set. In the beginning both go down, and they continue to go down, which is excellent, because it means the model is getting stronger. Eventually, though, the test loss plateaus and starts to increase, while the training loss keeps decreasing: there is no reason the training loss ever has to stop going down as long as the network has capacity left to learn those differences, and it will generally keep decaying for the rest of training. The point we really care about is right where the test loss starts to get worse: that is where we need to stop training. It is the happy medium, because after this point we start to overfit, with the training accuracy continuing to improve while the test accuracy gets worse, and before this point, on the left-hand side, we have the opposite problem: we have not fully used the capacity of the model and the test accuracy can still improve. This is a very powerful idea that is also extremely easy to implement in practice: you just monitor the loss over the course of training and keep the model from right before the test accuracy starts to get worse.

I'll conclude this lecture by summarizing the three key points we have covered so far, and keep in mind this is a very jam-packed class; the entire week is going to be like this, and today is just the start. First, we learned the fundamental building blocks of neural networks, starting from a single neuron, also called a perceptron. Second, we learned that we can stack these units on top of each other to create hierarchical networks, and how to mathematically optimize those systems. And finally, in the last part of the class, we covered tips and techniques for actually training and applying these systems in practice. In the next lecture we will hear from Ava on deep sequence modeling using RNNs, as well as a really new and exciting type of model called the Transformer, which is built on the principle of attention; you'll learn about it in the next class. For now, let's take a brief pause and resume in about five minutes so we can switch speakers and Ava can start her presentation. Thank you.
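As a brief addendum to the early-stopping discussion above, here is a minimal sketch of how it is typically wired up in Keras, assuming you already have a compiled model and a held-out validation split; the patience value is arbitrary:

```python
import tensorflow as tf

early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss",         # watch the held-out loss, not the training loss
    patience=5,                 # stop after 5 epochs with no improvement
    restore_best_weights=True,  # roll back to the model from the best epoch
)

# usage, given a compiled `model` and training arrays `x_train`, `y_train`:
# model.fit(x_train, y_train,
#           validation_split=0.2,
#           epochs=200,
#           callbacks=[early_stop])
```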
MIT_6S191_Introduction_to_Deep_Learning
MIT_6S191_2023_Deep_Learning_New_Frontiers.txt
This next lecture is my absolute favorite lecture in Introduction to Deep Learning, focusing on the limitations of deep learning methods as they exist today, and how those limitations and outstanding challenges motivate new research at the cutting edge, the new frontiers of deep learning and AI. Before we dive in, we have a couple of logistical things to go through, starting with perhaps one of the most important aspects of this course: we have a tradition of designing and giving out t-shirts, and hopefully you can all see them here at the front. We will distribute them at the end of today's program, so please stay if you would like to pick one up.

As for where we are right now: we have this lecture on deep learning limitations and new frontiers, and three more lectures after it, continuing our series of guest lectures for this year. Importantly, we still have the competitions surrounding both the software labs and the project pitch proposal. For the project pitch proposal, please upload your slides by tonight; detailed instructions are on the course syllabus. If you are interested in submitting to the lab competitions, that deadline has been extended to tomorrow afternoon at 1 pm, so please submit your labs as well. The motivation for the labs is mostly to exercise your skills in deep learning and build knowledge, but we also have these amazing competitions and prizes for each of the labs as well as for the project pitch proposal. For the first lab, on designing neural networks for music generation, there are audio-related prizes up for grabs; that competition is wide open and can be anyone's game, so please submit your entries by tomorrow. We'll also have the project pitch proposal competition, which is a highlight of this program every year: we hear your amazing ideas about new deep learning algorithms and applications in a quick, Shark Tank-style three-minute pitch, and it is really fun not only to be up here presenting to everyone but also to hear the ideas of your coursemates, peers, and colleagues; the logistics for uploading your slides to the shared Google slide deck are on the syllabus. And finally, as we introduced in the first lecture, we have an exciting grand prize competition surrounding the lab on trustworthy deep learning: robustness, uncertainty, and bias. That competition is wide open right now as well, so please submit your entries; it should be very exciting, and we look forward to your submissions.

In addition to those technical components of the course, we have three remaining guest lectures to round out the series. Yesterday we heard an amazing talk from Themis AI on robust and trustworthy deep learning. Today we will hear from Ramin Hasani from Vanguard, who is going to talk about the new age of statistics and what that can mean for deep learning algorithms. Tomorrow we will have two more awesome guest lectures: Dilip Krishnan from Google and, to round it out, Daniela Rus, the director of MIT CSAIL herself. We know there are many fans of Daniela in the audience, us included, so please attend; these should be really great talks, and you will get to hear more about the cutting edge of research in deep learning.

Okay, that rounds out all of the logistical and program announcements I wanted to make, so now we can really dive into the fun stuff, the technical content of this class. So far in Introduction to Deep Learning you have learned about the foundations of neural network algorithms and gotten a taste of how deep learning has already started to make an impact across many different research areas and applications: advances in autonomous vehicles, applications in medicine and health care, reinforcement learning that is changing the way we think about games and play, new advances in generative modeling, robotics, and many other domains such as natural language processing, finance, and security. What we really hope you come away with from this course is a concrete understanding of how deep neural networks work and how these foundational algorithms are enabling advances across this multitude of disciplines. In this class we have dealt with neural networks as an algorithmic way to go from input data, in the form of signals or measurements that we can derive from sensors of our world, directly to some sort of decision: a prediction, like a class label or a numerical value, or an action itself, as in the case of reinforcement learning. We have also seen the inverse, where we build neural networks that go from a desired prediction or action toward generating new data instances, as is the case with generative modeling.

Now, taking a step back: in both of these directions, neural networks can be thought of as very powerful function approximators. To get at what that means in more detail, we have to go back to a famous theorem in computer science and in the theory of neural networks, presented in 1989, that generated quite a stir in the field: the universal approximation theorem. It states that a neural network with just a single hidden layer is sufficient to approximate, to arbitrary precision, any continuous function. In this class we have been building not single-layer networks but deep models that stack many hidden layers together, yet this theorem says you don't even need that: a single layer is enough. If you believe that a problem can be reduced to mapping an input to an output through some function, the universal approximation theorem states that there exists some neural network, with a sufficient number of neurons, that can approximate that arbitrary function. This is an incredibly powerful result, but if we look more closely, there are a few caveats to keep in mind. First, the theorem makes no guarantee on how many units, how many neurons in that layer, you would need to solve such a problem. Furthermore, it does not answer the question of how we can actually find such a network, how we find the weights that realize that architecture; it only claims that such a network exists.
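As a toy illustration of that single-hidden-layer claim, one can fit a one-hidden-layer network to a simple 1D function and watch the fit improve as the number of hidden units grows. This sketch is mine, not from the lecture, and the target function, unit counts, and training schedule are all arbitrary choices:

```python
import numpy as np
import tensorflow as tf

x = np.linspace(-3.0, 3.0, 512).reshape(-1, 1).astype("float32")
y = np.sin(2.0 * x)                      # an arbitrary target function to approximate

for hidden_units in [2, 16, 128]:        # more units -> closer fit (on this interval, given enough training)
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(hidden_units, activation="tanh"),  # the single hidden layer
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")
    model.fit(x, y, epochs=500, verbose=0)
    print(hidden_units, model.evaluate(x, y, verbose=0))
```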
is not always a straightforward problem: it can be very complex, very non-linear, very non-convex, and thinking about how we actually achieve that training of the neural network is a very, very hard problem. Secondly, what is really important about this theorem is that it does not make any guarantees on how well that network would generalize to new tasks. All it says is that, given this desired mapping from input to output, some neural network exists — no guarantees on its performance on other tasks or in other settings. And I think that this theorem, this idea of the universal approximation theorem and its underlying caveats, is a perfect example of the possible effects of the overhype in artificial intelligence and in deep learning. Now here you've become part of this community that's interested in advancing the state of deep learning and AI, and I think collectively we need to be extremely careful in how we think about these algorithms, how we market them, how we advertise them, how we use them in the problems that we care about. While universal approximation tells us that neural networks can be very, very powerful, and generates a lot of excitement around this idea, at a time historically it actually provided false hope to the computer science and AI community that neural networks could solve any problem of any complexity in the real world. Now I think it goes without saying that such overhype can be extremely dangerous, and in fact the effect is not only on the course of research but also potentially on society at large. So this is why, for the rest of this lecture, I really want to focus on some of the limitations of the deep learning algorithms that you've learned about over the course of this class, and I don't want to stop there: I want to extend beyond that to see how these limitations motivate new opportunities for research aimed at addressing some of those problems and overcoming them. To start, one of my favorite examples of a potential danger of deep neural networks comes from the paper "Understanding deep learning requires rethinking generalization". This paper proposes a really elegant and simple experiment that highlights this notion of generalization and its limitations with deep learning models. What they do in this paper is take images from the large dataset called ImageNet, where each image is associated with a particular label — dog, banana, dog, tree, as you can see here. Then what the authors did was consider each of these image examples and, for every image in the data, take a K-sided die, where K is the number of possible class labels in the dataset, and use that die to randomly assign new labels to each of these instances. Now all of the labels have been completely scrambled: they're doing this random sampling, assigning brand new labels to these images, and what this means is that the labels associated with an image are completely random with respect to what is actually in that image. Note that because you have multiple instances corresponding to the same class, when you introduce randomness you can have two instances of the same class that now end up with completely different labels — dog here maps to banana in one case and to tree in another case. So literally they are completely randomizing the labels entirely.
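To make this label-scrambling setup concrete, here is a minimal sketch of the experiment; as an assumption for brevity it uses CIFAR-10 and a small convolutional network rather than ImageNet and the large models used in the original paper:

import tensorflow as tf
import numpy as np

# Minimal sketch of the label-randomization experiment (Zhang et al.),
# using CIFAR-10 in place of ImageNet purely to keep the example small.
(x_train, y_train), _ = tf.keras.datasets.cifar10.load_data()
x_train = x_train.astype("float32") / 255.0
num_classes = 10

# "Roll a K-sided die" for every image: labels become independent of content.
random_labels = np.random.randint(0, num_classes, size=y_train.shape[0])

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(32, 32, 3)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(512, activation="relu"),
    tf.keras.layers.Dense(num_classes),
])
model.compile(
    optimizer=tf.keras.optimizers.Adam(1e-3),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)

# With enough capacity and enough epochs, training accuracy climbs toward 100%
# even though the labels carry no information -- pure memorization.
# Test accuracy, of course, stays near chance (about 1/K).
model.fit(x_train, random_labels, epochs=50, batch_size=128)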
With that in hand, what they did was try to fit a deep neural network model to the sampled data from this ImageNet-derived dataset, and they varied the degree of randomness, ranging from the preserved original labels to completely random. Then they took the resulting model and looked at its performance on the test set, an independent dataset where we're given new images and the task of the network is to predict the associated label, and as you would expect, the accuracy of the model on this independent test set decreases as you introduce more and more randomness into the labeling process. What was really interesting, however, was what they saw when they looked not at the test set but at the training set, and this is what they found: no matter how much they randomized these labels, the neural network model, when trained on that resulting data, was able to get 100% accuracy on the training set. What this means is that these neural network models can really do this perfect fitting, this very powerful function approximation, and I think this is a very powerful example because it shows, once again like the universal approximation theorem, that deep neural nets can perfectly fit any function, even a function defined by entirely random labels. Now, you'll note that this difference between the training set performance and the test set performance captures this idea of generalization: what is the limit of the neural network with respect to function fitting, and how does it actually perform on unseen data? To drive this point home even further, again we can really understand neural networks simply as function approximators, and what the universal approximation theorem is telling us is that neural networks are just really, really good at doing this job. So if we consider this example here, where I've shown these data points in this 2D space, we can always train a neural network to learn what is the most likely position of the data within this space, within this realm where it has seen examples before, such that if we give the model a new data point, here in purple, we can expect that the neural network is going to generate a reasonable prediction, an estimate, for that data point. Now the challenge becomes: what if you start to extend beyond this landscape, beyond the region where you have information, where you have data examples? Well, we have no guarantees on what the model does in these regions, and this is in fact one of the large limitations that exist in modern deep neural networks: they're very, very effective at function approximation in regions where we have training data, but we can't make guarantees on their performance outside of those regions. And so this raises this question, which again ties back to Sadhana's lecture yesterday, of how we can derive methods that tell us when the network doesn't know, when it needs more information, more examples. Building off this idea a little further, I think there's often this common conception, which can indeed be inflated by the media, that deep learning is basically this magic solution, alchemy, that it can be the be-all and end-all solution to any problem. This spawns the belief that if we take some data examples, apply some training architecture to them, train the resulting model, and turn the crank on the deep learning algorithm, it will spit out some beautiful, excellent results solving our problem. But this is simply not how deep learning works; there's this common idea of garbage in, garbage out.
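Going back to that picture of predictions inside versus outside the region where we have training data, here is a small toy sketch (my own illustrative example, not from the lecture materials): a network fit to noisy samples of sin(2x) on the interval [-2, 2] interpolates reasonably inside that interval, but its outputs far outside it come with no guarantees:

import numpy as np
import tensorflow as tf

# Toy illustration: train only on x in [-2, 2], then query far outside that range.
rng = np.random.default_rng(0)
x_train = rng.uniform(-2.0, 2.0, size=(1000, 1)).astype("float32")
y_train = np.sin(2.0 * x_train) + 0.05 * rng.normal(size=x_train.shape).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(1,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(x_train, y_train, epochs=200, verbose=0)

# Inside the training region the fit is close to sin(2x);
# outside it the network extrapolates with no guarantee of correctness.
x_query = np.array([[0.5], [1.5], [4.0], [8.0]], dtype="float32")
print(np.hstack([x_query, model.predict(x_query, verbose=0), np.sin(2 * x_query)]))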
If your data is noisy, if you have insufficient data, if you try to build a very large neural network model to operate on that data, you are not guaranteed performant results at the outset. And this motivates one of the most pertinent failure modes of modern deep neural nets, and it highlights just how much they depend on the underlying data that they're trained with. So let's say we have this image of a dog, and we're going to pass it into a CNN architecture, and our task is to try to colorize this black-and-white image of a dog and produce a colored output. This can be the result. Look closely at this resulting image — anyone notice anything unusual about the dog? Yes, the ear is green — awesome observation. Anything else? Yes, the chin is pink or purple. So two different things that don't really align, pointed out by two different people. Why could this be the case? In particular, if we look at the data that this model was trained with, amongst the images of dogs, many of those images are probably of dogs sticking their tongues out or with some grass in the background, and as a result the CNN model may have mapped that region around the chin to be pink in color, or around the ears to be green in color. What this example really highlights, in thinking about the contrast between what the training data looks like and what the predicted outputs can be, is that all deep learning models are doing is building up a representation based on the data they have seen over the course of training. And that raises the question of how neural networks can effectively handle those instances where they may not have seen enough information, where they may be highly uncertain. Exactly as Sadhana motivated in yesterday's lecture and laid out beautifully, this is highly relevant to real-world safety-critical scenarios — for example, in the case of autonomous driving, where cars on autopilot or in autonomous mode can end up crashing, with the results often being fatal or having very significant consequences. To highlight this further: a few years ago there was the case of an autonomous vehicle that resulted in a fatal accident, where it crashed into a construction pylon that was present on the road, and it turned out that when they looked back at the data that was used to train the neural network controlling the car, the Google Street View images of that particular region of the road did not have those construction barriers and construction pylons that resulted in the car crashing in the end. So again, this idea of how disparities and issues with the training data can lead to downstream consequences is a really prominent failure mode of neural network systems, and it's exactly these types of failure modes that motivated the need for understanding uncertainty in neural network systems. This is what you learned about in yesterday's lecture and hopefully have gotten in-depth experience with through the software lab, highlighting the importance of developing robust methods to estimate uncertainty and risk, how they can be really important for safety-critical applications from autonomy to medicine to facial recognition, as you're exploring, and how these downstream consequences are linked fundamentally to issues of data imbalance, feature imbalance, and noise.
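One simple family of methods for flagging "the network doesn't know" is to train a small ensemble of identically structured models and use their disagreement as an uncertainty signal; the sketch below is a generic illustration of that idea (it is not the specific uncertainty method used in the course lab):

import numpy as np
import tensorflow as tf

def make_regressor():
    # Small MLP regressor; each ensemble member differs only by its random initialization.
    return tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(1,)),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1),
    ])

rng = np.random.default_rng(1)
x_train = rng.uniform(-2.0, 2.0, size=(1000, 1)).astype("float32")
y_train = np.sin(2.0 * x_train) + 0.05 * rng.normal(size=x_train.shape).astype("float32")

ensemble = []
for _ in range(5):
    m = make_regressor()
    m.compile(optimizer="adam", loss="mse")
    m.fit(x_train, y_train, epochs=100, verbose=0)
    ensemble.append(m)

# Disagreement (standard deviation across members) acts as a crude epistemic-uncertainty
# signal: it is small near the training data and tends to grow for out-of-distribution inputs.
x_query = np.array([[0.5], [4.0], [8.0]], dtype="float32")
preds = np.stack([m.predict(x_query, verbose=0) for m in ensemble], axis=0)
print("mean:", preds.mean(axis=0).ravel())
print("std :", preds.std(axis=0).ravel())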
So overall, I think what these instances and considerations point us to is the susceptibility of neural networks to these kinds of failure modes. The final failure mode that I'd like to consider and highlight is this idea of adversarial examples. The intuition and the key idea here is that we can take a data instance, for example an image, which, if inputted into a standard CNN, is going to be predicted to contain a temple with 97% probability. What we can do is apply some tiny perturbation, which to our eyes looks like random noise, to that original image. That results in a perturbed image which to us humans appears visually largely the same, but if you pass that resulting image back into that same neural network, it produces a prediction that completely does not align with what is actually in the image, predicting the label of ostrich with 98% probability. This is the notion of an adversarial attack, an adversarial perturbation. What exactly is this perturbation doing? Remember that when we train neural networks using gradient descent, the task is to optimize, to minimize, some loss function, an objective, and the goal in standard gradient descent is to ask: how can I change the weights of the neural network to decrease the loss, to optimize that objective? We are specifically concerned with changing those weights W; if you look closer here, we're considering only the changes of the weights with respect to the input data and the corresponding label, X and Y, which stay fixed. In contrast, in thinking about adversarial attacks, we now ask a different question: how can we modify that input data, for example that image, in order to increase the error in the network's prediction? Concretely, how does a small change, a tiny perturbation in the input data X, result in a maximal increase in the loss? What that means is that we can fix the weights, keep the network the same, and look at how we can perturb, change, and manipulate that input data to try to increase that loss function. An extension of this idea was developed and presented by a group of students right here at MIT, where they took this idea of adversarial perturbation and created an algorithm that could synthesize adversarial realizations not only in 2D but actually in 3D, using a set of different transformations like rotations, color changes, and other types of perturbations. Then, using those learned perturbations, they physically synthesized objects with 3D printing that were designed to be adversarial examples for a neural network. They 3D printed a bunch of these examples of turtles, physical 3D objects, such that images of those objects would completely fool a neural network that looked at the image and tried to predict the correct label. So this shows that this notion of adversarial examples and perturbations can extend to different domains in the real world, where we can think about constructing synthetic examples that are designed to probe at the weaknesses of neural networks and expose their vulnerabilities. Finally — and we discussed a lot about this idea yesterday as well — is this notion of algorithmic bias: the fact that as AI systems become more broadly deployed in society, they're susceptible to significant biases resulting from many of those issues that I highlighted earlier, and these can lead to very real, detrimental consequences, as we've been exploring throughout this course in both labs and lectures. The examples that I covered so far are certainly not an exhaustive list of all the limitations of neural networks.
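Before moving on, here is a minimal sketch of the adversarial objective described above — fix the weights and perturb the input so as to increase the loss. The one-step sign-of-the-gradient update shown here is the classic fast gradient sign method, which is my choice for illustration; the lecture itself doesn't name a specific attack, and the small stand-in classifier and inputs are placeholders:

import tensorflow as tf

def fgsm_perturbation(model, image, label, epsilon=0.01):
    """One-step adversarial perturbation (FGSM-style): nudge the *input* in the
    direction that increases the loss, while the network weights stay fixed."""
    image = tf.convert_to_tensor(image)
    with tf.GradientTape() as tape:
        tape.watch(image)                      # differentiate w.r.t. the input, not the weights
        logits = model(image, training=False)
        loss = tf.keras.losses.sparse_categorical_crossentropy(
            label, logits, from_logits=True)
    gradient = tape.gradient(loss, image)      # dLoss / dInput
    adversarial = image + epsilon * tf.sign(gradient)
    return tf.clip_by_value(adversarial, 0.0, 1.0)   # keep pixels in a valid range

# Usage with a small stand-in classifier (untrained here, purely to show the mechanics):
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(32, 32, 3)),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10),                 # 10-class logits
])
x = tf.random.uniform((1, 32, 32, 3))          # stand-in image, pixels in [0, 1]
y = tf.constant([3])                           # stand-in true class index
x_adv = fgsm_perturbation(model, x, y, epsilon=0.01)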
But I'd like to think of these not strictly as limitations, but rather as an invitation and a motivation for new innovation, for creative solutions aimed at trying to tackle some of these questions, which are indeed open problems in AI and deep learning research today. Specifically, we've already tackled and thought about ways to address issues of bias and uncertainty in neural networks, and now, for the remainder of this talk, I want to focus on some of the really exciting new frontiers of deep learning, tackling some of these additional limitations. The first stems from this idea of the size and scale of neural networks: they rely heavily on data, they're massive, and as a result it can be difficult to understand what underlying structure, if any, the neural network is picking up on. And so there's a very exciting line of work now that is focused on introducing structure and more information from prior human knowledge into neural network architectures, to enable them to become much smaller, more compact, more expressive, more efficient. We're going to see a bit of this in today's lecture, but also in our following guest lecture by Ramin, where he will talk about this idea of structural encoding and how that can result in highly expressive and efficient neural network architectures. The second issue that I'm going to discuss — and it is very related to this problem of scale and structure — is how we can encourage neural networks to extrapolate more effectively beyond data, and this will tie very neatly into some of the ideas we've been discussing around generative AI and will be the last topic that I discuss in the remainder of today's lecture. Okay, so let's focus on this first concept of how we can leverage more of our human domain knowledge to encode greater structure into deep learning algorithms. It turns out we've already seen examples of this in the course so far, particularly highlighted in CNNs used for spatial processing and visual data. As we saw, CNNs were introduced as a highly efficient architecture for capturing spatial dependencies in data, and the core idea behind CNNs was this notion of convolution: how it relates spatially similar and adjacent portions of an input image to each other through this efficient convolution operation. We saw that convolution was effective at extracting local features from visual data, and that this local feature extraction then enables downstream predictive tasks like classification, object detection, and so forth. What about, instead of images, considering more irregular data structures that can be encoded in different ways beyond 2D grids of pixels? Graphs are a particularly powerful way of encoding structural information, and in many cases the notion of a graph or a network is very important to the problem we may be considering. Graphs as a structure and as a way of representing data are really all around us, across many different applications: ranging from social networks defining how we are all connected to each other, to state machines that describe how a process can transition from one state to another, to metrics and models of human mobility and urban transport, to chemical molecules and biological networks. And the motivation across all these examples is that so many instances of real-world data fall naturally into this idea of a network or graph structure, but that this structure cannot
be readily incorporated or captured by standard data encodings or geometries and so we're going to talk a little bit about graphs as a structure and how we can try to represent this information encode it in a graph to build neural networks that can capable are capable of handling these data types to see how we can do this and and approach this problem let's go back for a moment to the CNN in cnns we saw that we have this rectangular 2D kernel and what it effectively does is it slides across the image paying attention and picking up to features that exist across this 2D grid and all this function is doing is that element wise multiplication that defines the convolution operation and intuitively we can think about convolution as this visualization of sliding this kernel iteratively patch by patch across the image continuing on to try to pick up on informative features that are captured in this 2D pixel space this core idea of convolution and sliding this this feature extraction filter is at the heart of now how we can extend this idea to graph based data this motivates a very recent type of neural network architecture called graph convolutional networks where the idea is very similar to standard cnns we have some type of weight kernel and all it is is a matrix defined by neural network weights as before and now rather than sliding that kernel across a 2d grid of pixels the kernel goes around to different parts of the graph defined by particular nodes and it looks at what the neighbors of the of that node is how it's connected to adjacent nodes and the kernel is used to extract features from that local neighborhood within the graph to then capture the relevant information within the structure so we can visualize this concretely here where now instead of seeing the 2D kernel looking at a particular patch of an image what I'm highlighting is how the kernel is paying attention to node and its local Neighbors and effectively the graph convolution operation what it's doing is picking up on the local connectivity of a particular node and learning weights that are associated with those edges defining that local connectivity the kernel is just then applied iteratively to each node in the graph trying to extract information about the local connectivity such that we can go around and around and at the end start to bring all this information together in aggregation to then encode and update the weights according to what the local observations were the key idea is very very similar to The Standard convolution operation where this weight kernel all it's doing is a feature extraction procedure that's now defined by nodes and edges in this graph rather than a 2d grid of pixels in an image this idea of graph encoding and graph convolution is very exciting and very powerful in terms of now how we can extend this to real world settings that are defined by data sets that naturally lend themselves to this graph like structure so to highlight some of those applications across a variety of domains the first example I'd like to draw attention to is in the chemical and biological sciences where now if we think about the structure of a molecule right it's defined by atoms and those atoms are individually connected to each other according to molecular bonds and it turns out that we can design graph neural networks that can effectively build representations of molecules in the same operation of this graph convolution to build up an informative representation of chemical structures this very same idea of graph 
convolution was recently applied to the problem of drug Discovery and in fact out of work here at MIT it turned out that we can Define graph neural network architectures that can look at data of small molecule drugs and then look at new data sets to try to predict and discover new therapeutic compounds that have potent activity such as killing bacteria and functioning as antibiotics Beyond this example another recent application of graph neural networks has been in traffic prediction where the goal is to look at streets and intersections as nodes and edges defined by a graph and apply a graph neural network to that structure and to patterns of Mobility to learn to predict what resulting traffic patterns could be and indeed in work from Google and deepmind this same modeling approach was able to result in significant improvements in the prediction of ETA estimated time of arrival in Google Maps so when you're looking at Google Maps on your phone and predicting when you're going to arrive at some location what it's doing on the back end is applying this graph neural network architecture to look at data of traffic and make more informative predictions about how those patterns are changing over time a final and another recent example was in forecasting the spread of covid-19 where again building off of this same idea of Mobility patterns and contacts graph neural networks were employed to look at not only spatial distributions of covid spread but also incorporate a temporal Dimension so that researchers could effectively forecast what were the most likely areas to be affected by covid-19 and what time scales those effects were likely to occur on so hopefully this this highlights this idea of how simple ideas about data structure can then be translated into new neural network architectures that are specifically designed for particular problems in relevant application areas and this also lends very naturally to how we can extend not Beyond 2D data to graphs to now three-dimensional data which is often referred to as 3D Point clouds and these are unordered sets of data and you can think about them as being in scattered in 3D space and it turns out that you can extend graph neural networks very naturally to operate on 3D data sets like Point clouds where the core idea is that you can dynamically compute meshes present in this three-dimensional space using the same idea of graph convolution and graph neural networks okay so hopefully with that run through of graph structure graph neural networks you've started to get a little bit of idea about how inspection of the underlying structure of data and incorporation of our prior knowledge can be used to inform new neural network architectures that are very well suited for particular tasks on that I want to spend the remaining of time of this lecture on our second New Frontier focusing on generative AI so as we first introduced in the first lecture of this course and throughout I think that today we're really at this inflection point in AI where we're seeing tremendous new capabilities with generative models in particular enabling new advances in a variety of fields and I think we're just at the at the cusp of this inflection point in that in the coming years we're going to see generative AI radically transform the landscape of our world the landscape of our society so today in the remaining portion of the New Frontiers lecture we're going to focus specifically on a new class of generative models diffusion models that are powering some of these latest 
advances in generative AI all right to start let's think back to the lecture on generative modeling where we focus primarily on two classes of generative models vas and Gans what we didn't have time to go into into depth was what are some of the underlying limitations of these models vaes and Gans turns out that there are three primary limitations the first is what we call mode collapse meaning that in the generative process va's and Gans can kind of collapse down to this mode this phase where they're generating a lot of predictions a lot of new samples that are very very similar to each other we often kind of think about this as regression to the average value or the most common value the second key limitation as I kind of alluded to in our generative modeling lecture was that these models really struggle to generate radically new instances that are not similar to the training data that are more diverse finally it turns out that Gans in particular NBA's as well in practice can be very difficult to train they're unstable inefficient and this leads to a lot of problems in practice when thinking about how to scale these models these limitations then motivate a concrete set of challenges or criteria that we really want to meet when thinking about generative models we want our generative model to be stable efficient to train we wanted to generate high quality samples that are synthetic and novel diverse different from what the model has seen before in its training examples today and tomorrow we're going to focus on two very very exciting new classes of generative models that tackle these challenges head on today I'm going to specifically focus on diffusion models provide the intuition behind how they work where you may have seen them before and what are some of the advances that they're enabling and tomorrow in the guest lecture from Google we're going to hear from dilip on another generative modeling approach that's focused specifically on this task of text to image generation okay so let's get into it four diffusion models I think it first helps us to compare back again to the models we've seen the architectures we've seen and know about with va's and Gans that we talked about in lecture four the task is to generate examples synthetic examples let's say an image in a single shot by taking some compressed or noisy latent space and then trying to decode or generate from that to produce now a new instance in our original data space diffusion models work fundamentally differently from this approach rather than doing this One-Shot prediction what diffusion models do is that they generate new samples iteratively by starting from purely random noise and learning a process that can iteratively remove small increments of noise from that complete random State all the way up back to being able to generate a synthetic example in the original data landscape that we started off with in caring about the intuition here is really clever and really I think powerful in that what diffusion models allow us to do is to capture maximum variability maximum amount of information by starting from a completely random state diffusion models can be broken down into two key aspects the first is what we call the forward noising process and at the core of this is this idea of how we build up data for the diffusion model to look at and then learn how to denoise and generate from the key step here is that we start with training data let's say examples of images and over the course of this forward noising diffusion process 
what we do is progressively add increasing increments of noise, such that we are slowly wiping out the details in the image, corrupting the information, destroying that information, until we arrive at a state that is pure noise. Then what we actually build our neural network to do is to learn a mapping that goes from that completely noised state back up to the original data space — what we call the reverse process, learning a mapping that denoises, going from noise back up to data. And the core idea here is that denoising is iteratively recovering back more and more information from a completely random state. These are the two core components — the forward noising process and the reverse denoising process — and we're going to look at how each one works, distilled down to the core intuition. Underlying this is the forward noising, the first step, where we're given an image and we want to arrive at a random sample of noise. Importantly, this process does not require any neural network learning, any training: what we are able to do is define a fixed, predetermined way to go from that input image to the noise. The core idea is really simple: all we do is have some noising function, and starting at the first time step, our initial time step t0, where we have 100% image and no noise added, all we do is progressively add more and more noise in an iterative way over the course of these individual time steps, such that the result is increasingly more noisy. First we go from all image, to less image, less image, more noise, more noise, all noise, in this defined forward process — no training, no learning, all we have is a noising function. Now this gives us a bunch of examples, right? If we have a bunch of instances in the dataset, we apply this noising to every one of them, and so we have those slices at each of these different noising time steps. Our goal now, in learning the neural network, is to learn how to denoise in this reverse process, starting from data in the input space. What the noising process gave us is this time series of iteratively more noised examples; now our task is, given one of these slices at a time step, let's say t, all we're asking is: can we learn the next most denoised example from that time step? Making this more concrete, walk through it with a particular image at a slice, let's call it t: what we ask the neural network to learn is the estimated image at that prior step, t minus one. Making this very concrete, let's say we have this noised image at time three; the neural network is then trying to predict what the denoised result was just one step before. Comparing these two, let's take a step back: all these two images differ by is that noise function. And so the question posed to the neural network in the course of training is: how can we learn this difference? If our goal is to learn this iterative denoising process, and we have all these consecutive steps going from more noise to less noise, I pose the question to you all: how could you think about defining a loss, a training objective, for the neural network to learn? Any ideas? Yes — can you expand a little further on that? So the idea was: can we look at the same concept of trying to encode information from an image into maybe a reduced latent space and try to learn that? That's very related to what the network is actually doing, but there's something even simpler. Think about how we could just compare these two images — just think really simply.
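As a concrete sketch of that fixed forward-noising function: the code below blends a clean image with Gaussian noise according to a predefined schedule. The linear schedule and the closed-form jump to step t (equivalent in distribution to adding noise one step at a time for this Gaussian parameterization) are standard choices I'm assuming for illustration; no learning is involved:

import tensorflow as tf

# Minimal sketch of the fixed forward-noising process: no learning involved,
# just a predefined schedule that blends an image with Gaussian noise.
T = 1000                                            # number of noising time steps
betas = tf.linspace(1e-4, 0.02, T)                  # assumed linear variance schedule
alphas_cumprod = tf.math.cumprod(1.0 - betas)       # how much of the original image survives at step t

def noise_image(x0, t):
    """Jump directly from a clean image x0 to its noised version at step t."""
    noise = tf.random.normal(tf.shape(x0))
    a_bar = tf.gather(alphas_cumprod, t)            # scalar in (0, 1), shrinking as t grows
    x_t = tf.sqrt(a_bar) * x0 + tf.sqrt(1.0 - a_bar) * noise
    return x_t, noise

# At t = 0 the result is essentially the image; at t = T - 1 it is essentially pure noise.
x0 = tf.random.uniform((1, 64, 64, 3))              # stand-in for a training image in [0, 1]
x_early, _ = noise_image(x0, t=10)
x_late, _ = noise_image(x0, t=T - 1)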
Yes, exactly — the idea was: how many of the pixels are the same? All we need to do, it turns out, is look at the pixel-wise difference between these two steps, and the cleverness behind diffusion models is defining that exact residual noise as what defines the loss, the training objective, for such a model. All you do is compare these consecutive time steps of iterative noising and ask what the mean squared error is, what that pixel-wise difference between the two is, and it turns out that this works very effectively in practice: we can train a neural network to do this iterative denoising with this very intuitive idea for the loss. All right, so hopefully this gives you a sense of how the diffusion model builds up this understanding of the denoising process and is able to learn to iteratively denoise examples. Now the task is: how can we actually sample something brand new, how can we generate a new instance? Well, it's very related to that exact reverse process, the denoising process that we just walked through. What we do is take a brand new instance of completely random noise, take our trained diffusion model, and ask it: okay, just predict that residual noise difference that will get us to something that is slightly less noisy. And this is done repeatedly at these iterative time steps, such that we go from pure noise to something less noisy, repeatedly, and as this process occurs, hopefully you can see the emergence of something that reflects image-related content, time step by time step, iteratively doing this sampling procedure over many generations and many time steps, such that we can get back to the point of this dog that's peeking out at us. What I want you to take away is that what the diffusion model enables is going from a completely random noise state, where we have this maximum notion of variability, and breaking it down into this iterative process where at each of those steps there is a prediction that's made, a generation that's made, and so that maximum variability is translated into maximum diversity in the generated samples, such that when we get to the end state we have arrived at an image, a synthetic sample, that's back in our completely noise-free data space. So at its core this is the summary of how a diffusion model works and how it is able to go from completely random noise to very faithful, diverse, newly generated samples. As I've been alluding to, what I think is so powerful about this approach and this idea is the fact that we can start from complete randomness: it encapsulates maximum variability, such that diffusion models are able to produce generated samples that are very, very diverse, from noise samples that are fundamentally different from each other. What this means is that in this one instance we go from some noise sample to this image; but, equivalently — and what is striking — if we consider a completely different instance of random noise, the two noise samples are very variable from each other, right? Random noise, random noise — maximum variability internally and in comparison to each other — and as a result of that denoising generation, the result is going to be a different sampled image that's highly diverse, highly different from what we saw before. And that's really the power of diffusion models in thinking about how we can generate new samples that are very different and diverse. A quick question? Yes — that's a fantastic question. So the question was: how does the model know when to stop?
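Continuing the noising sketch above, here is a minimal version of the training step and the sampling loop. The lecture frames the loss as the pixel-wise difference between consecutive noise levels; a standard way to implement that intuition (assumed here, in the style of DDPM) is to have the network predict the added noise and penalize the mean squared error, and the tiny convolutional "denoiser" below is only a placeholder for the U-Net-style, time-conditioned network a real model would use:

import tensorflow as tf

# Continuing the sketch above: train a denoiser to predict the residual noise,
# then sample by iteratively removing small increments of noise.
T = 1000
betas = tf.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alphas_cumprod = tf.math.cumprod(alphas)

# Placeholder denoiser: a real model would be a U-Net conditioned on the time step t.
denoiser = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu", input_shape=(64, 64, 3)),
    tf.keras.layers.Conv2D(3, 3, padding="same"),
])
optimizer = tf.keras.optimizers.Adam(1e-4)

def train_step(x0):
    """One training step: noise a clean image to a random step t and regress the added noise."""
    t = tf.random.uniform([], 0, T, dtype=tf.int32)
    noise = tf.random.normal(tf.shape(x0))
    a_bar = tf.gather(alphas_cumprod, t)
    x_t = tf.sqrt(a_bar) * x0 + tf.sqrt(1.0 - a_bar) * noise
    with tf.GradientTape() as tape:
        predicted_noise = denoiser(x_t, training=True)
        loss = tf.reduce_mean((noise - predicted_noise) ** 2)   # pixel-wise MSE
    grads = tape.gradient(loss, denoiser.trainable_variables)
    optimizer.apply_gradients(zip(grads, denoiser.trainable_variables))
    return loss

def sample(shape=(1, 64, 64, 3)):
    """Generation: start from pure noise and iteratively strip away predicted noise."""
    x = tf.random.normal(shape)
    for t in reversed(range(T)):
        predicted_noise = denoiser(x, training=False)
        a, a_bar = alphas[t], alphas_cumprod[t]
        x = (x - (1.0 - a) / tf.sqrt(1.0 - a_bar) * predicted_noise) / tf.sqrt(a)
        if t > 0:                                   # add back a little noise except at the final step
            x += tf.sqrt(betas[t]) * tf.random.normal(shape)
    return x

# Usage sketch: loss = train_step(batch_of_images); new_images = sample()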
When you build these models, you define a set number of time steps, and this is a parameter that you can play around with when building and training the diffusion model. There are different studies looking at what works and what doesn't work, but the core idea is that more time steps are going to result in effectively better resolution of the generations, though it's a trade-off as well in terms of the stability of the model and how efficiently it can train and learn. [Inaudible audience question.] So the question was about explaining why there could be some missing or incorrect details in the results of diffusion models, with hands as the particular example. I don't know if there's a specific reason why hands seem to cause issues; I personally haven't seen reports or literature discussing that, but I think the general point is yes, there are going to be imperfections — it's not going to be 100 percent perfect or accurate in terms of faithfulness to the true example — and I think a lot of advances now are thinking about what modifications to the underlying architecture could try to alleviate some of those issues. But for the sake of the lecture, I can discuss that particular example further with you afterwards as well. Okay, so indeed, I think that the power of these models is very, very significant. Going back to the example that we showed in lecture one — this ability to generate a synthetic image from a language prompt, "a photo of an astronaut riding a horse" — in fact it is a diffusion model that is underlying this approach, and what the diffusion model is doing is taking an embedding that translates between language and images and then running a diffusion process such that image generation can be guided by the particular language prompt. More examples of this idea of text-to-image generation have, I think, really taken the internet by storm, but what is so powerful about this idea is that we can guide the generative process according to constraints that we specify through language, which is very, very powerful — everything from something that's highly photorealistic to something generated with a specific artistic style, as in the examples that I'm showing here. So far, in both today's portion on diffusion models and our discussion of generative models more broadly, we've largely focused on examples in images, but what about other data modalities, other applications? Can we design new generative models, leveraging these ideas, to design new synthetic instances in other real-world application areas? To me — and I'm definitely biased in saying this — one of the most exciting aspects of this idea is in the context of molecular design and how it can relate to chemistry, the life sciences, and environmental science as well. So for example, in the landscape of chemistry and small molecules, instead of thinking about images and pixels we can think about atoms and molecules in three-dimensional space, and there's been a lot of recent work in building diffusion models that can look at the 3D coordinates of the atoms defining a molecule and go from a completely random state in that 3D space to, again, perform this same iterative denoising procedure to generate molecular structures that are well defined and could be used for applications in drug discovery or therapeutic design. Beyond that, and in my
research specifically we are building new diffusion models that can generate biological molecules or biological sequences like those of proteins which are the actuators and executors of all of biological function in human life and across all the domains of life and specifically in my work and in my research team we've developed a new diffusion model that is capable of generating brand new protein structures I'm going to share a little bit about how that model works and our core idea behind it to kind of close out this section the motivation that is really driving this work across the field at large is this goal of trying to design new biological entities like proteins that could have therapeutic functions or expand the functional space of biology at Large and there are many many efforts across generative AI that are now aiming at this problem because of the tremendous potential impact that it can have so in our work in specific we considered this problem of protein design by going back to the biology and drawing inspiration from it and if you consider how protein function is determined and encoded all it is distilled down to is the structure of that protein in three-dimensional space and how that structure informs and encodes a particular biological function in turn proteins don't always start out adopting a particular 3D structure they go through this process of what we call protein folding that defines how the chain of atoms in the protein wiggle around in three-dimensional space to adopt a final defined 3D structure that is highly specific and highly linked to a particular biological function so we asked and we looked at this question of protein folding and how protein folding leads to structure to inspire a new diffusion model approach for this protein structure design problem where we said okay if a protein is in a completely unfolded State you can think about that as equivalent to the random noise state in an image it's in a completely floppy configuration it doesn't have a defined structure it's unfolded and what we did is we took that notion of protein protein structure being unfolded as the noisy state for a diffusion model and we designed a diffusion model that we trained to go from that unfolded state of the protein to then produce a prediction a generated prediction about a new structure that would define a specific 3D structure and we can visualize how this algorithm works and we call it folding diffusion in this video that I'm going to show here where at the start in this initial frame we have a random noise protein chain completely random configuration uh unfolded and and unfolded in 3D space now what it's going to show this video is the iterative denoising steps that go from that noise random state to the final generated structure and as you can see right it's this process very similar to the concept that I introduce with images where we're trying to go from something noisy to something structured arriving now at a final generated protein structure so our work was really focused on introducing a new and foundational algorithm for this protein structure design problem via this via diffusion models but it turns out that in just um the very short time from when we and others introduced these algorithms that it precipitated really a rise in scaling and extending these diffusion models for very very specific controlled programmable protein design where now we are seeing large-scale efforts that use these diffusion model algorithms to generate protein designs and actually realize 
them experimentally in the physical world so in this image I'm showing on the left the colored image is the pre is the generated output by the diffusion model and the black and white image is what if you take that result synthesize it in the biological lab and look and take a picture of the resulting protein and assess its structure we see that there is a strong correspondence meaning that we're at the ability to leverage these powerful diffusion models to now generate new proteins that can be realized in the physical world it doesn't stop there we can now think about designing proteins for a very specific therapeutic applications so for example in this visualization this is designed as a novel protein binder that's designed to block to bind to and block the activity of the covid spike receptor so what you're seeing again is that visualization of the diffusion process and arriving at a final design that binds to and blocks the top of the covet Spike receptor so hopefully you know this kind of paints a bit of the landscape of where we are today with generative AI and I think that this example in in protein science and in biology more broadly really highlights the fact that generative AI is now at the ability to enable truly impactful applications not just for creating cool images or what we can think of as AI synthetic art generative AI is now enabling us to think about designed solutions for real world problems and I think there are so many open questions in understanding the capabilities the limitations the applications of generative AI approaches and how they can Empower our society so to conclude I want to end on a higher level thought and that's the fact that this idea of generative design kind of raises this higher level concept of what it means to understand and I think it's captured perfectly in this quote by the physicist Richard Feynman who said what I cannot create I cannot understand in order to create something we as humans really have to understand how it works and conversely in order to understand something fully I'd argue that we'd have to create it engineer design generative Ai and generative intelligence is at the core of this idea of what I think about as concept learning being able to design a distill a very complex problem down to its core roots and then build up from that Foundation to design and to create when this course began on on Monday Alexander introduced the idea of what it means to be intelligent Loosely speaking right the ability to take information use it to inform a future decision human learning is not only restricted to the ability to solve specific distinct tasks it's foundational and founded by the idea of being able to understand Concepts and using those Concepts to be creative to imagine to inspire to dream I think now we're at the point where we have to really think about what this notion of intelligence really means and how it relates to the connections and distinctions between artificial intelligence and what we as humans can do and create so with that I'm going to leave you with that and I hope that inspires you to think further on what we've talked about in the course and intelligence AI deep deep learning more broadly so thank you very much for for your attention we're going to take a very brief pause since where I know we're running a little bit late and then have our fantastic lecture from Rami next thank you so much [Applause]
MIT_6S191_Introduction_to_Deep_Learning
MIT_6S191_2018_Computer_Vision_Meets_Social_Networks.txt
thank you for the invitation, and I am glad to give a presentation with the title "Computer Vision Meets Social Media / Social Networks". Actually, in these slides I will introduce the works that we have done at Tencent AI Lab; these techniques are some of the research topics in our lab, and these topics can be shipped with the products of our company. Okay, first I will give a brief introduction about our company. I'm not sure whether you have heard of Tencent before; actually, Tencent is one of the largest Internet companies — I think I can say that not only in mainland China but all over the world. So this is one of the largest Internet companies, and the market capitalization is five hundred billion US dollars, so I think that it is even comparable with Facebook, you know, even larger than Facebook sometimes. Also, we have several products and several brands, and the most important ones are social media and social networks: we have WeChat — which we also call Weixin in Chinese — and also QQ; these are two types of instant messengers, something like Facebook's messaging apps in the US. Also we have games, and Tencent Games is, I think, the number one game company, with revenue of over 10 billion US dollars in 2016; and maybe someone is familiar with Supercell, of which we have already acquired 80 percent of the shares, so I can say that we are the owners of an 80-percent share of Supercell. We also have content businesses, such as Tencent Video, which is something like Netflix, and Tencent Music, which is something like Spotify — we have the largest music copyright catalog in mainland China — and also we have Tencent News, which is about content. And also, because Tencent is one of the largest Internet companies, we have other businesses such as finance or insurance or something like that; these are all based on the Internet. Okay, so this was a brief introduction about Tencent, and next I will give a brief introduction about Tencent AI Lab, which I am now in. Tencent AI Lab was actually founded in 2016, and we have two locations: one in Shenzhen, mainland China, and the other office is in Seattle, USA. Okay, actually we have four research topics, and at the center is machine learning: we focus on basic machine learning theory as well as large-scale machine learning platforms and infrastructure, and based on the machine learning part we have three application research areas — computer vision, language, and speech. For these, we focus on the application areas, and we also do some reinforcement learning for decision-making, generation for creation, and we would like to work toward cognition for the understanding of the world and of humans. Okay, so in the following part of this lecture I will give an introduction about the work that we have done in computer vision, from the low-level image and video processing, to the mid-level image and video analysis, and up to the higher-level image and video understanding. Okay, and we have done several works here, such as style transfer, enhancement, super-resolution, and dehazing; I would like to select some topics that already have this
technology we have already got paper publish and also shifted to the product and the segment is the image analysis we focusing we I will try to introduce some pose estimation and is a video interest attractiveness and video classifications and also in last for the even hard deeper learning in a different standing of the videos I will give some instructor images I will give some instruction about captioning and also the natural language would be the localization okay for this is a video start transfer they say they'll give a brief image about the video star transfer and we have the videos and we have a style image and we perform the star transfer we would like the study made videos to preserve to preserve the content of the video sequences and also it will borrow the style information from the style style image and the most important this is very popular in 2016 at the Prisma but nowadays for the video we have there we are the first work from working on the videos for the video this is the most important thing to maintain the temporal consistence between adjacent frames so from the you know from the secrets here the temporal we have compared with the traditional image style transfer and no or compared with our proposed method the images traditional image star transfer command maintains the consistencies so the circles and the bond boxes there you can see that the style changes the changes significantly but for our video part it can maintain a good consistency so this will prevent it will eliminate the chattering chatting the flickering artifacts in the videos okay so this is the Memphis no the main network of our video star transfer and the first part is a style sterilized in network this is very traditional and we have a intro encoder decoder encoder decoder style style and we have the encoders and we ski we connected with fives the rest night's rest blocks and then the decoders and after that we realize the lost networks two computers a stylized Network and the content lost Network and also the the temporal loss so with this loss is the well design we can we can train the networks to make the network okay preserve the content and as well as the introducer style and the most important can preserve the consistency between our choice on the frames okay so for these are special training we have the two frame coordinator training so it means that we input the adjacent to firms into the network and then you try the designers obsessive specific design Allah function to join the network and also another another advantage is that we do not need to computer optical flow information on the on the fly so it means that we can well we can well make the Magnum we can well max the inference very fast because we do not need to compute computer the optic flow normally the optical is the most complicated part in video parts okay and I thing is that because we are you know this is name instant message this is instant message a PPS so the person's will share therefore their pictures in with the with the with their friends so the pictures may be taking in and the different conditions for this case we want to handle the yeah makes the image quality enhanced enhanced for the low-light images so we have if you take an image in the low light you can now you cannot define you cannot find the age of the content and also the the color you cannot find you cannot take the right colors so we proposed a method to enhance the low-light images and we have the image here and with Mecca displayed to the original video for 
remade part and also the edge part so because the aji the mostess talent to the human perception so for the low lighting in low light input original part which we use in critical style to encoded image and decode image cats are in low results and also for the edge part will utilize also something like the encoder decoder and afterwards we have the skipped layer and a ways the iron style to catch the edge information then paste this to the content information and a as and also the edge information will combine them a together then to have a enhanced quality of the images and then we catch some perceptual losses and also other adverse or losses to enhance the quality of the image okay this is some results and we have the the first zero is a low-light images that were provided and we use the our networking how we can well Rick we recover the content of the image and also the colors can be well preserved okay so this is this is one part of we can share the images and improve the image quality and the next step the next work is a mobile pose estimation actually for the post estimation has been well to study they either have been well studied but for the mobile part if you will need to perform the network compression and also Mac hated to detect the scattering the other joints very accurately so in this means this part when mostly focusing on the network compression and the network returning okay so they we ship to the network from the PCs to the mobile wires and we can perform the detection of the person in the market persons and also for some front and back bone and something like and also instances choose a to the colors of their closest and as they see the demos here and the first the lefty I think that is EC maybe for the projection part so this is not for real time and this is the 22 key points for the joints of the persons and the under the laptop wha is a we consider to detect the hand because the monster hand the hand detection will will facilitate our you know the touch based interaction with the with the phones all the TVs okay and also for our mobile post estimation we have some you can see that for this one we we are Rob to the full or half body half body half body that conclusions on the half bodies and also we are we are able to perform a single or multiple persons and also we are you know we are insensitive to the dressing colors or the other other dressings and also we are robust to to the front and back even from the front and that we have detected the joints and the under the under the informations of the scallops and also we have already deployed this to mix into mobile queue and for this one this is the left the right one is a detector of my person of the scanners and left why is that we have the skeletons and we can have a dancing machine so you can for you know the guidance and the performed advances and then they say that very interesting to you know interact with the films are interactive with the TVs yeah so you can with this one we can find the you know with the this oppose direction contractor can match your positions with design the other specified actions okay so this is a work on the post estimation okay and on topic either very very interesting topic either for the video attractive attractiveness now the because for we have that instant video so we have a lot of a lot of video dramas or a lot of TV series uploaded on the website so we want to find whether the users is interested in to some part of the videos so we would like to examine whether we can predict the 
video attractiveness from the only the visual information or the other informations so for this one we have the video input and we have sorry we have the video input and also also is associated with the audio input and we would like to predict its attractiveness and the grant choose is the orange line and the predict voice is pretty ones either either blue line actually for this actual attractive it is very hard to get to the ground truth data so we utilize the muse because this is a transom video have a lot of registrate errs so there are many many of you many many viewers at this time so for each of these is this each of these values in each points it means that at this specific time how many person have already have already watched this video sequence and this is specific time so the in the statistically the larger of the number it means that there are many members there are a lot of large number of person watch this video at this moment so this moment it should be the much more attractive much more attractive attractive than the other with smaller values so we we utilize this video as the attractiveness conscious value okay so for this chart Scott we build a large largest large scale that such we have one cell about one of the and they in total about seven eight hours and they're in we consist of twenty two hundred and seventy episodes of American dramas and also certainly true career dramas and also seven episodes of Chinese drummers so we have this one we consider diverse of types and we have also movie the new information that we can you know start a meki to the robusta to the video attractive star study and the process is statistical information of our data set and the this is the number of the episodes and this is number of the views attracting the views and in the law was you know look okay and also besides of the views actually for because this is for the transom video your father is a video platform you will have some of the other metadata or some other user engagement Reformation the ad my engagement information will crowd is start of it is the start of the forward and the end of the forward the start of the backward and the start of the end of the backward and also have sample it's quiz and the balloon the number of double screens and the likes of the pollen squeeze and also we have the the the forward skips and the first the word faster we were ruined skips so with these ones we can get another more information about the user engagements with the videos with these user engagements of videos you can kind of more in for more detailed information about the video attract news and we can based on these engagements to analysis more alanis more about the human behaviors okay as we mentioned before we would like to make a prediction about attracted from the audio and the video or audio information and the visual information so for these two different modalities because we have a world apart and all operation part so we before we propose three different of furious strategy why is in the low level and away in the middle bear and the way in the high level and they see that three strong this industry architecture of the heart and we have also each part of it before we utilize we perform a contact skating here okay so based on this this architectural we perform the prediction and we we have the we also propose the three about three criteria to evaluate whether we can actually perform the a accurate prediction so we have we have the four matrix and then we'll compare with different visual 
For the visual representations we compare VGG, Inception, and ResNet, and for the audio part we compare MFCC features with other audio representations. We can see that when we consider all the audio features and video features together we get better results, and if we combine the low-level, middle-level, and high-level fusion strategies together we obtain the best performance. Another thing worth noting is that if we additionally include the engagement signals, which reflect the behavior of the users, we can make a very accurate prediction of video attractiveness: if viewers like a video they will keep watching it, and otherwise they will leave, so human engagement is the most important indicator of video attractiveness. That covers the video attractiveness study. We also did some work on video classification. As I mentioned before with the Moments in Time dataset, which contains three-second video clips, video classification is a well-studied task, and there is also the UCF-101 benchmark. For this part we use a two-stream network for video classification: one stream describes the appearance content, and the other stream describes the optical flow, that is, the motion information between adjacent frames. We propose a principled backpropagation scheme in which we forward all the snippets of a video, so that the forward pass can capture all the information in the video, but in the backward pass we backpropagate selectively through only a selected number of snippets. One benefit is that this selective backpropagation makes training efficient; another benefit is that we can process snippets at different temporal scales to characterize the different kinds of motion information in the videos. These are our results: on this test set our approach achieves a top-one error rate of only about 8.6 percent, which is lower than the other competitive models. Also for video classification, we participated in the YouTube-8M challenge. For this challenge only precomputed features are provided, visual features and audio features, so we work on the audio and visual features to perform the video classification, and we use a two-level approach. At the video level, we average all the frames together to get a video representation, we propose a multi-fusion layer, and then we apply a classifier based on a mixture of experts; the global average precision is about 0.82.
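As a rough illustration of the video-level pipeline just described, here is a small sketch of average-pooling frame features into a video representation and classifying it with a mixture-of-experts head. The number of experts, feature sizes, and class count are placeholder assumptions, not the values used in the speaker's system.

```python
import tensorflow as tf
from tensorflow.keras import layers

NUM_CLASSES = 3862   # assumed label vocabulary size (YouTube-8M uses a few thousand classes)
NUM_EXPERTS = 4      # illustrative
FEATURE_DIM = 1152   # e.g. 1024 visual + 128 audio features per frame (assumed)

class VideoLevelMoE(tf.keras.Model):
    """Average frame features into a video vector, then classify with a mixture of experts."""
    def __init__(self, num_classes=NUM_CLASSES, num_experts=NUM_EXPERTS):
        super().__init__()
        self.num_classes = num_classes
        self.num_experts = num_experts
        self.pool = layers.GlobalAveragePooling1D()
        self.hidden = layers.Dense(1024, activation="relu")
        self.expert_fc = layers.Dense(num_classes * num_experts)
        self.gate_fc = layers.Dense(num_classes * num_experts)

    def call(self, frame_features):
        # frame_features: (batch, num_frames, FEATURE_DIM)
        video_repr = self.hidden(self.pool(frame_features))
        experts = tf.sigmoid(tf.reshape(self.expert_fc(video_repr),
                                        (-1, self.num_classes, self.num_experts)))
        gates = tf.nn.softmax(tf.reshape(self.gate_fc(video_repr),
                                         (-1, self.num_classes, self.num_experts)), axis=-1)
        # Per-class probability is a gate-weighted combination of the experts.
        return tf.reduce_sum(experts * gates, axis=-1)

model = VideoLevelMoE()
model.compile(optimizer="adam", loss="binary_crossentropy")  # multi-label video classification
```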
We also performed frame-level classification. For the frame-level classification it is very intuitive to use an RNN such as an LSTM or a GRU to model sequential data, and the video features can indeed be viewed as a sequence, so we use bidirectional LSTM and GRU models and consider different kinds of motion information by operating at multiple temporal scales. With the multi-scale bidirectional GRU and LSTM, the global average precision is about 0.83, and if we consider the video-level part and the frame-level part together, the global average precision reaches 0.84; with this we achieved fourth place out of more than six hundred submissions. So that covers the image and video analysis work. Afterwards, I will give an introduction to some work that we have done on understanding. One part is image understanding, specifically image captioning, which is a very difficult problem: we not only need to understand the image content, but we also need to learn the properties and compositional behavior of language, and we need to model the multimodal interactions between the image and the sentence. For this we use a fairly standard encoder-decoder architecture. For the image, we use a CNN to encode the information. If you are familiar with CNNs, the fully connected layer gives a global representation, while the intermediate convolutional layers give local representations, so for each image and each CNN we can obtain both a global and a local representation. If we use several different CNNs, such as Inception, ResNet, or VGG, we obtain multiple representations of the image, each with a global and a local part. With these different representations we can perform multi-stage attention. The multi-stage attention first summarizes the information from the different encoders, such as ResNet, Inception, and VGG, and it also lets the representations interact with each other: in the attention step, each encoder broadcasts its information to the other encoders so that they know what information has already been considered for the generation. Afterwards we use a decoder to decode these representations into sentences. For the decoder we normally use an RNN, and the LSTM and GRU are the most effective, so here we use an LSTM. So we have the image encoder to extract the representations and the decoder to decode them into a sentence. Crucially, this task requires integrating the image content with the sentence so that the model focuses on the right object while generating each word: for the word "person" it needs to attend to the local region containing the person and then generate the word, and for other words it similarly needs to attend to the corresponding local region of the image and then generate the word.
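To illustrate the attend-then-generate idea, here is a simplified single-encoder sketch of one decoding step with soft attention over local CNN features. The real system described above uses multiple encoders and multi-stage attention; the dimensions and the single attention stage here are simplifying assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers

VOCAB_SIZE = 10000   # assumed vocabulary size
EMBED_DIM = 256
HIDDEN_DIM = 512
NUM_REGIONS = 49     # e.g. a 7x7 grid of local CNN features
FEATURE_DIM = 2048

embed = layers.Embedding(VOCAB_SIZE, EMBED_DIM)
proj_feat = layers.Dense(HIDDEN_DIM)
proj_state = layers.Dense(HIDDEN_DIM)
attn_score = layers.Dense(1)
lstm_cell = layers.LSTMCell(HIDDEN_DIM)
word_logits = layers.Dense(VOCAB_SIZE)

def caption_step(prev_word, local_feats, states):
    """One decoding step: attend over local image features, then predict the next word."""
    h = states[0]
    # Additive attention: score every image region against the current decoder state.
    scores = attn_score(tf.nn.tanh(proj_feat(local_feats) +
                                   tf.expand_dims(proj_state(h), 1)))  # (batch, regions, 1)
    weights = tf.nn.softmax(scores, axis=1)
    context = tf.reduce_sum(weights * local_feats, axis=1)             # attended image context

    # Feed the previous word embedding plus the attended context into the LSTM cell.
    x = tf.concat([embed(prev_word), context], axis=-1)
    h_new, new_states = lstm_cell(x, states)
    return word_logits(h_new), new_states, weights

# Usage sketch with random inputs for a batch of two images.
feats = tf.random.normal((2, NUM_REGIONS, FEATURE_DIM))
states = [tf.zeros((2, HIDDEN_DIM)), tf.zeros((2, HIDDEN_DIM))]
logits, states, attn = caption_step(tf.constant([1, 2]), feats, states)
```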
Based on our multi-stage attention model, we achieved strong results on the metrics of the COCO challenge. COCO is, I think, the most influential benchmark for image captioning: each image is accompanied by five sentences, there are roughly 100,000 images, and so the paired data of images and sentences amounts to about six hundred thousand pairs. Based on this data you can train your model and then submit the results on the test data to the evaluation website. The website uses several criteria to evaluate the results: BLEU-1, BLEU-2, BLEU-3, and BLEU-4, as well as METEOR, ROUGE, and CIDEr, where CIDEr is the metric specifically designed to evaluate the quality of image captions. You can see that for both the c5 and c40 settings we achieve the top result on all the metrics compared with the other models. We have also collected a Chinese dataset, where each image is accompanied by Chinese sentences, and we trained models on it. We have released the results and built a mini program in WeChat, so you can experience it yourself if you have WeChat; it performs Chinese image captioning, giving a brief description of the content of the image in Chinese. Once we have image captioning, that is, a system that takes an image and generates sentences describing its content, we can do several things with it: the most direct is image description, but with the generated sentences we can also perform image retrieval and image recommendation, as well as visual dialogue. Most importantly, describing the content of an image in language can help visually impaired people to read and understand images.
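As a concrete example of the caption evaluation metrics mentioned above, the snippet below computes sentence-level BLEU scores with NLTK for a hypothetical generated caption against reference captions. This is only illustrative: the official COCO evaluation uses the coco-caption toolkit with corpus-level scoring, and CIDEr and METEOR require their own implementations.

```python
# pip install nltk
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

# Five human reference captions for one image (made-up examples).
references = [
    "a man is riding a horse on the beach".split(),
    "a person rides a brown horse along the shore".split(),
    "a man on a horse near the ocean".split(),
    "someone riding a horse at the beach".split(),
    "a rider and his horse walk by the sea".split(),
]

# A hypothetical caption produced by the model.
candidate = "a man riding a horse on the beach".split()

smooth = SmoothingFunction().method1  # avoids zero scores when higher-order n-grams are missing
for n in range(1, 5):
    weights = tuple(1.0 / n for _ in range(n))  # uniform weights over 1..n-grams (BLEU-n)
    score = sentence_bleu(references, candidate, weights=weights, smoothing_function=smooth)
    print(f"BLEU-{n}: {score:.3f}")
```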
The last piece of work I am going to introduce is natural language video localization, which is another challenging task that involves language. Given an untrimmed video and a natural language description, the goal is to localize the segment in the video that corresponds to the given description. In other words, if we have a very long video sequence and a sentence such as "a woman reels her kite back in toward herself", we would like to localize that sentence in the video. Like image captioning, this is very challenging: first we need to understand the language, and then we need to understand the video sequence, and video understanding is even harder than image understanding because we must consider not only the spatial content of each frame but also the temporal relationships between adjacent frames. We therefore propose a single-stream natural language video localization network. In this network we take the video and the sentence and perform frame-by-word interactions, producing a matching matrix that measures how well each frame matches each word of the sentence. Based on these matching behaviors we generate temporal proposals using anchors of different lengths, and each proposal receives a score; with this single-stream process we can efficiently localize the sentence in the video sequence. Here are some results of our proposed method. For this video and the sentence "a man in a red shirt claps his hands", the orange bar is the ground truth, the green one uses VGG-16 as the visual encoder, the blue one uses Inception-V4 as the encoder, and the yellow one uses C3D as the encoder. You can see that with VGG-16 and Inception-V4 we get reasonably accurate results, and on this particular task we outperform all the competitor models. In this other example, the camera zooms out to show where the waterfall is coming from, and you can see that when we fuse Inception-V4 with optical flow we obtain an accurate prediction. We also examined the frame-by-word attentions. For the sentence in this example, consider the words "waterfall" and "forest": the forest appears in all the frames of the video sequence, so the attention values are triggered roughly equally across frames, but the waterfall only appears in the first frames, so the attention is only triggered there. The same observation can be made from these two other examples.
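Below is a minimal sketch of the frame-by-word matching idea behind the localization network described above: compute a similarity matrix between frame features and word features, pool it into per-frame scores, and score a set of candidate temporal anchors. The projection size, the cosine-similarity choice, and the simple average pooling over anchors are illustrative assumptions rather than the actual model.

```python
import tensorflow as tf
from tensorflow.keras import layers

D = 256  # shared embedding size (assumed)
proj_frame = layers.Dense(D)
proj_word = layers.Dense(D)

def localization_scores(frame_feats, word_feats, anchors):
    """Score candidate temporal segments of a video against a query sentence."""
    # frame_feats: (T, Df); word_feats: (N, Dw); anchors: list of (start, end) frame indices.
    f = tf.math.l2_normalize(proj_frame(frame_feats), axis=-1)  # (T, D)
    w = tf.math.l2_normalize(proj_word(word_feats), axis=-1)    # (N, D)

    # Frame-by-word similarity matrix, then pool over words for a per-frame match score.
    sim = tf.matmul(f, w, transpose_b=True)        # (T, N)
    frame_scores = tf.reduce_max(sim, axis=-1)     # best-matching word for each frame

    # Score each temporal anchor by averaging the per-frame scores inside it.
    return tf.stack([tf.reduce_mean(frame_scores[s:e]) for s, e in anchors])

frames = tf.random.normal((120, 2048))    # e.g. 120 frames of CNN features
words = tf.random.normal((8, 300))        # e.g. 8 word embeddings for the query sentence
anchors = [(0, 30), (30, 60), (60, 120)]  # candidate segments at different temporal scales
print(localization_scores(frames, words, anchors))  # higher score = better match
```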
Besides the computer vision work I have introduced, our lab also works on several other AI projects. One of the most important is AI plus healthcare, where we would like to deploy AI technology to improve healthcare. The first step is to use AI and deep learning models to analyze medical images and help doctors with detection and diagnosis; for specific cancers such as lung cancer screening, the recognition rate is over 98 percent. In the future we would also like to perform computer-aided diagnosis: given text describing the patient's condition together with the patient's medical images, we want to help doctors make the diagnosis, and eventually even help doctors design treatments for certain diseases. Another important topic is AI plus games, because Tencent is the largest game company in the world, so we would like AI technology to be deeply involved in games, for example in game design and game playing, to make games more interesting for the players. So, to give a brief summary: in this talk I first gave an introduction to the theme of computer vision meets social networks, then presented some of the work we have done at Tencent AI Lab, from research to products, on image and video processing, understanding, and analysis, and finally I introduced several of our deployment projects that apply AI to healthcare, games, and, in the future, perhaps robotics. That is the end of my lecture. Thank you. [Applause]
MIT_6S191_Introduction_to_Deep_Learning
MIT_6S191_2021_Deep_Learning_New_Frontiers.txt
Hi everyone and welcome to lecture six of MIT  6.S191! This is one of my absolute favorite lectures in the course and we're going to  focus and discuss some of the limitations of deep learning algorithms as well as some of the  emerging new research frontiers in this field before we dive into the technical content  there are some course related and logistical announcements that i'd like to make the first  is that our course has a tradition of designing and delivering t-shirts to students participating  this year we are going to continue to honor that so to that end we have a sign up sheet on canvas  for all students where you can indicate your interest in receiving a t-shirt and once you  fill out that sign up sheet with the necessary information will ensure that a t-shirt is  delivered to you by the appropriate means as soon as possible and if after the class if the  canvas is closed and you can't access that signup form please just feel free to send us an email  and we'll find a way to get the t-shirt to you so to provide a take a step back and give  an overview of our schedule of this course so far where we've been and where we're going  following this lecture on limitations and new frontiers we'll have the due date for our final  software lab on reinforcement learning tomorrow we're going to have two really exciting hot  topic spotlight lectures with brand new content and that will be followed by a series of four  guest lectures you'll have time over the rest of this week to continue to work on your final  projects and the class will conclude on friday with the student final project presentations  and proposal competition as well as our award ceremony so speaking of those final projects  let's get into some details about those for those of you taking the course for credit you  have two options to fulfill your grade the first is a project proposal where you will work  in up to a group of four to develop a new and novel deep learning idea or application and we  realize that two weeks is a very short amount of time to come up with and implement a project  so we are certainly going to be taking this into consideration in the judging then on friday  january 29th you will give a brief three-minute presentation on your project proposal to a group  of judges who will then award the final prizes as far as logistics and timelines you  will need to indicate your interest in presenting by this wednesday at midnight  eastern time and will need to submit the slide for your presentation by midnight eastern time  on thursday instructions for the project proposal and submission of these requirements are on  the course syllabus and on the canvas site our top winners are going to be awarded  prizes including nvidia gpus and google homes the key point that i'd like to make  about the final proposal presentations is that in order to participate and be eligible  for the prize synchronous attendance is required on friday's course so friday january 29th from 1  to 3 p.m eastern time you will need to be present your or your group will need to be present  in order to participate in the final proposal competition the second option for  fulfilling the credit requirement is to write a one-page review of a deep learning  paper with the evaluation being based on the completeness and clarity of your review this  is going to be due by thursday midnight eastern time and further information and instruction  on this is also available on canvas so after this lecture next we're going to have a  series of two really 
exciting hot topic spotlight talks and these are going to focus on two rapidly  emerging in developing areas within deep learning deep learning research the first is going  to highlight a series of approaches called evidential deep learning that seeks to develop  algorithms that can actually learn and estimate the uncertainties of neural networks and the  second spotlight talk is going to focus on machine learning bias and fairness and here we're going to  discuss some of the dangers of implementing biased algorithms in society and also emerging strategies  to actually mitigate these unwanted biases that will then be followed by a series of  really exciting and awesome guest lectures from leading researchers in industry and  academia and specifically we're going to have talks that are going to cover a diversity  of topics everything from ai and healthcare to document analysis for business applications  and computer vision and we highly highly highly encourage you to join synchronously for these  lectures if you can on january 27th and january 28th from 1 to 3 p.m eastern these are going to  be highlighting very exciting topics and they may extend a bit into the designated software lab  time so that we can ensure we can have a live q a with our fantastic guest speakers all right  so that concludes the logistical and course related announcements let's dive into the fun  stuff and the technical content for this lecture so so far in taking success 191 i hope that  you've gotten a sense of how deep learning has revolutionized and is revolutionizing so  many different research areas and fields from advances in autonomous vehicles to medicine  and healthcare to reinforcement learning generative modeling robotics and a variety  of other applications from natural language processing to finance and security and alongside  with understanding the tremendous application utility and power of deep learning i hope that  you have also established concrete understanding of how these algorithms actually work and how  specifically they have enabled these advances to take a step back at the types of algorithms and  models that we've been considering we've primarily dealt with systems that take as input data as the  in the form of signals images other sensory data and move forward to produce a decision as the  output this can be a prediction this can be a outputted detection it can also be an action as  in the case of reinforcement learning we've also considered the inverse problem as in the case  of generative modeling where we can actually train neural networks to produce new data  instances and in both these paradigms we can really think of neural networks as very powerful  function approximators and this relates back to a long-standing theorem in the theory of neural  networks and that's called the universal approximation theorem and it was presented in  1989 and generated quite the stir in the community and what this theorem the universal approximation  theorem states is that a neural network with a single hidden layer is sufficient to approximate  any arbitrary function to any arbitrary position all it requires is a single layer and  in this class we've primarily dealt with deep neural models where we are stacking multiple  hidden layers on top of each other but this theorem completely ignores that fact and says  okay we only need one layer so long as we can reduce our problem to a set of outputs inputs and  a set of outputs this means there has to exist a neural network that can solve this problem it's  
a really really powerful and really big statement but if you consider this closely there are a  couple of caveats that we have to be aware of the first is that this theorem makes no  guarantees on the number of hidden units or size of the layer that's  going to be required to solve such a problem right and it also leaves open  the question of how we could actually go about training such a model finding the weights  to support that architecture it doesn't make any claims about that it just says it proves  that one such network exists but as we know with gradient descent finding these weights  is highly non-trivial and due to the very non-convex nature of the optimization problem  the other critical caveat is that this theorem places no guarantees on how well the resulting  model would actually generalize to other tasks and indeed i think that this this theorem  the universal approximation theorem points to a broader issue that relates to the possible  effects of overhype in artificial intelligence and us as a community as students invested  in advancing the state of this field i think we need to be really careful in how  we consider and market and advertise these algorithms while the universal approximation  theorem was able to generate a lot of excitement it also provided a false sense of  hope to the community at the time which was that neural networks could be used to  solve any problem and as you can imagine this overhype is very very very dangerous and this  over hype has also been tied in to what were two historic a.i winters where research in artificial  intelligence and neural networks more specifically slowed down very significantly and i think we're  still in this phase of explosive growth which is why today for the rest of the lecture i want  to focus in on some of the limitations of the algorithms that we've learned about and extend  beyond to discuss how we can go beyond this to consider new research frontiers  all right so first the limitations one of my favorite and i think one of  the most powerful examples of a potential danger and limitation of deep neural networks  come from this paper called understanding deep neural networks requires rethinking generalization  and what they did in this paper was a very simple experiment they took images from the dataset  imagenet and each of these images are associated with a particular class label as seen here and  what they did was they did this experiment where for every image in the data set not class but  individual images they flipped a die a k sided die where k was the number of possible classes  they were considering and they used this this flip of the die to randomly assign a brand new  label to a particular image which meant that these new labels associated were completely random with  respect to what was actually present in the image so for example a remapping could be visualized  here and note that these two instances of dogs have been mapped to different classes altogether  so we're completely randomizing our labels what they next did was took this data this  scrambled data and tried to fit a deep neural network to the to the imagenet data by applying  varying degrees of randomization from the original data with the untouched class labels to the  completely randomized data and as you ex may expect the model's accuracy on the test set an  independent test set progressively tended to zero as the randomness in the data increased but what  was really interesting was what they observed when they looked at the performance on the 
training  set and this is what they found they found that no matter how much they randomized the labels the  model was able to get close to 100 accuracy on the training set and what this highlights is that in a  very similar way to the statement of the universal approximation theorem it gets at this idea that  deep neural networks can perfectly fit to any function even if that function is associated with  entirely random data driven by random labeling so to draw really drive this point home i  think the best way to consider and understand neural networks is as very very good function  approximators and all the universal approximation theorem states is that neural networks are very  good at this right so let's suppose here we have some data points and we can learn using a neural  network a function that approximates this this data and that's going to be based on sort of a  maximum likelihood estimation of the distribution of that data what this means is that if we give  the model a new data point shown here in purple we can expect that our neural network is going  to predict a maximum likelihood estimate for that data point and that estimate is probably going  to lie along this function but what happens now if i extend beyond this in distribution region to  now out of domain regions well there are really no guarantees on what the data looks like in  this region in these regions and therefore we can't make any statements about how our model  is going to behave or perform in these regions and this is one of the greatest limitations  that exist with modern deep neural networks so there's a revision here to this  statement about neural networks being really excellent function approximators they're  really excellent function approximators when they have training data and this  also raises the question of what happens in these out-of-distribution regions where the  network has not seen training examples before how do we know when our network doesn't know  is not confident in the predictions it's making building off this idea i think there can be this  conception that can be amplified and inflated by the media that deep learning is basically  alchemy right it's this magic cure it's this be all and all solution that can be applied to any  problem i mean its power really seems awesome and i'm almost certain that was probably a draw  for you to attend and take this course but you know if we can say that deep learning  algorithms are sort of this be all all convincing uh solution that can be applied  to any arbitrary problem or application there's this also resulting idea and belief that you can  take some set of training data apply some network architecture sort of turn the crank on your  learning algorithm and spit out excellent results but that's simply not how deep learning works your  model is only going to be as good as your data and as the adage in the community goes if you  put garbage in you're going to get garbage out i think an example that really highlights  this limitation is the one that i'm going to show you now which emphasizes just how much  these neural network systems depend on the data they're trained with so let's say we have this  image of a dog and we're going to pass it into a cnn based architecture where our goal is to try  to train a network to take a black and white image and colorize it what happened to this image of  a dog when it was passed into this model was as follows take a cl close look at this result if  you'll notice under the nose of the dog there's this pinkish 
region in its fur which probably  doesn't make much sense right if if this was just a natural dog but why could this be the case why  could our model be spitting out this result well if we consider the data that may  have been used to train the network it's probably very very likely that amongst  the thousands upon thousands of images of dogs that were used to train such a model the  majority or many of those images would have dogs sticking their tongues out  right because that's what dogs do so the cnn may have mapped that region under the  mouth of the dog to be most likely to be pink so when it saw a dog that had its mouth closed  it didn't have its tongue out it assumes in a way right or it's it's built up representation is such  that it's going to map that region to a pink color and what this highlights is that deep learning  models build up representations based on the data they've seen and i think this is a really  critical point as you go out you know you've taken this course and you're interested in applying  deep learning perhaps to some applications and problems of interest to you your model is  always going to be only as good as your data and this also raises a question of how do  neural networks handle data instances where that they have not encountered before and  this i think is highlighted uh very potently by this infamous and tragic example from a  couple years ago where a car from tesla that was operating autonomously crashed while operating  autonomously killing the driver and it turned out that the driver who was the individual killed  in that crash had actually reported multiple instances in the weeks leading up to the crash  where the car was actually swiveling towards that exact same barrier into which it crashed why  could it have been doing that well it turned out that the images which were representative  of the data on which the car's autonomous system was trained the images from that region  of the freeway actually lacked new construction that altered the appearance of that barrier  recent construction such that the car before it crashed had encountered a data instance  that was effectively out of distribution and it did not know how to handle this situation  because i had only seen particular bear a particular style and architecture of the barrier  in that instance causing it tragically to crash and in this instance it was a a occurrence where  a neural network failure mode resulted in the loss of human life and this points these sorts of  failure modes points to and motivate the need for really having systematic ways to understand  when the predictions from deep learning models cannot be trusted in other words when it's  uncertain in its predictions and this is a very exciting and important topic of research in deep  learning and it's going to be the focus of our first spotlight talk this notion of uncertainty  is definitely very important for the deployment of deep learning systems and what i like  to think of as safety critical applications things like autonomous driving things like  medicine facial recognition right as these algorithms are interfacing more and more with  human life we really need to have principled ways to ensure their robustness uncertainty metrics  are also very useful in cases where we have to rely on data sets that may be imbalanced  or have a lot of noise in present in them and we'll consider these different use  cases further in the spotlight lecture all right so before as a preparation for  tomorrow's spotlight lecture i'd like to 
give a brief overview of what uncertainties we need and what uncertainties we can talk about when considering deep learning algorithms. Let's consider a classification problem where we're going to build a neural network that models probabilities over a fixed set of classes. In this case we're trying to train a neural network on images of cats and images of dogs, and then output whether a new image contains a cat or a dog; keep in mind that the probabilities of cat and dog have to sum to one. So what happens when we train our model, we're ready to test it, and we have an image that contains both a cat and a dog? The network is still going to have to output class probabilities that sum to one, but in truth this image has both a cat and a dog. This is an instance of what we can think of as noise or stochasticity present in the data: if we trained this model on images of cats alone or dogs alone, a new instance that has both a dog and a cat is noisy with respect to what the model has seen before. Uncertainty metrics can help us assess this statistical noise that is inherent in and present in the data, and this is called data uncertainty or aleatoric uncertainty. Now let's consider another case. Let's take our same cat-dog classifier and now input an image of a horse to this classifier. Again the output probabilities have to sum to one, but even if the network predicts that this image most likely contains a dog, we would expect that it should really not be very confident in this prediction. This is an instance where our model is being tested on an image that is totally out of distribution, an image of a horse, and therefore we expect that it is not very confident in its prediction. This type of uncertainty is different from the data uncertainty; it's called model or epistemic uncertainty, and it reflects how confident a given prediction is. It is very important for understanding how well neural networks generalize to out-of-distribution regions and how they can report on their performance in out-of-distribution regions, and in the spotlight lecture you'll take a deep dive into these ideas of uncertainty estimation and explore some emerging approaches to actually learn neural network uncertainties directly. The third failure mode I'd like to consider is one that I think is super fun and also, in a way, kind of scary, and that's the idea of adversarial examples. The idea here is that we take some input example, for example this image of a temple, and a standard CNN trained on a set of images is going to classify this particular image as a temple with 97 percent probability. We then take that image and apply a particular perturbation to it to generate what we call an adversarial example, such that if we now feed this perturbed example to that same CNN, it no longer recognizes the image as a temple; instead it incorrectly classifies this image as an ostrich, which is kind of mind-boggling, right? So what was it about this perturbation that actually achieved this complete adversarial attack? What is this perturbation doing? Remember that when we train neural networks using gradient descent, our task is to take some objective J and try to optimize that objective given a set of weights W, an input x, and a prediction y, and what we're asking in doing this gradient descent update is how does a small change in the weights
decrease the loss specifically how can we perturb these weights in order to minimize  the loss the objective we're seeking to minimize in order to do so we train the network with a  fixed image x and a true label y and perturb only the weights to minimize the loss with  adversarial attacks we're now asking how can we modify the input image in order to increase the  error in the network's prediction therefore we're trying to predict to perturb the input x in some  way such that when we fix the set of weights w and the true label y we can then increase the  loss function to basically trip the network up make it make a mistake this idea of adversarial  perturbation was recently extended by a group here at mit that devised an algorithm that could  actually synthesize adversarial examples that were adversarial over any set of transformations  like rotations or color changes and they were able to synthesize a set of 2d adversarial  attacks that were quite robust to these types of transformations what was really cool was  they took this a step further to go beyond 2d images to actually synthesize physical  objects 3d objects that could then be used to fool neural networks and this was the  first demonstration of adversarial examples that actually existed in the real physical world so the  example here these turtles that were 3d printed adversarial to be adversarial were incorrectly  classified as rifles when images of those turtles were taken again these are real physical  objects and those images were then fed into a classifier so a lot of interesting questions  raised in terms of what how can we guarantee the robustness and safety of deep learning  algorithms to such adversarial attacks which can be used perhaps maliciously to try to perturb the  systems that depend on deep learning algorithms the final limitation but certain that i'd like  to introduce in this lecture but certainly not the final limitation of deep learning overall is  that of algorithmic bias and this is a topic and an issue that deservingly so has gotten a lot of  attention recently and it's going to also be the focus of our second hot topic lecture and this  idea of algorithmic bias is centered around the fact that neural network models and ai systems  more broadly are very susceptible to significant biases resulting from the way they're built the  way they're trained the data they're trained on and critically that these biases can lead to very  real detrimental societal consequences so we'll discuss this issue in tomorrow's spotlight talk  which should be very exciting so these are just some of many of the limitations of neural networks  and this is certainly not an exhaustive list and i'm very excited to again re-emphasize that we're  going to focus in on two of these limitations uncertainty and algorithmic bias in  our next two upcoming spotlight talks all right for the remainder  of this talk this lecture i want to focus on some of the really exciting  new frontiers of deep learning that are being targeted towards tackling some of these  limitations specifically this problem of neural networks being treated as like black box  systems that uh lack sort of domain knowledge and structure and prior knowledge and finally  the broader question of how do we actually design neural networks from scratch  does it require expert knowledge and what can be done to create more generalizable  pipelines for machine learning more broadly all right the first new frontier that we'll delve  into is how we can encode structure and domain 
knowledge into deep learning architectures to take  a step back we've actually already seen sort of an example of this in our study of convolutional  neural networks convolutional neural networks cnns were inspired by the way that visual  processing is thought to work in in the brain and cnns were introduced to try to capture spatial  dependencies in data and the idea that was key to enabling this was the convolution operation  and we saw and we discussed how we could use convolution to extract local features present  in the data and how we can apply different sets of filters to determine different features and  maintain spatial invariance across spatial data this is a key example of how the structure of  the problem image data being defined spatially inspired and led to a advance in encoding  structure into neural network architecture to really tune that architecture specifically for  that problem and or class of problems of interest moving beyond image data or sequence data the  truth is that all around us there are there are data sets and data problems that have  irregular structures in fact there can be a the paradigm of graphs and of networks is one  where there's a very very high degree of rich structural information that can be encoded in a  graph or a network that's likely very important to the problem that's being considered but it's  not necessarily clear how we can build a neural network architecture that could be well suited to  operate on data that is represented as a graph so what types of data or what types of examples could  lead naturally to a representation as a graph well one that we're all too immersed in and  familiar with is that of social networks beyond this you can think of state machines which  define transitions between different states in a system as being able to be represented  by a graph or patterns of human mobility transportation chemical molecules where you can  think of the individual atoms in the molecule as nodes in the graph connected by the bonds  that connect those atoms biological networks and the commonality to all these instances and  graphs as a structure more broadly is driven by this appreciation for the fact that there are so  many real world data examples and applications where there is a structure that can't be readily  captured by a simple a simpler data encoding like an image or a temporal sequence and so we're going  to talk a little bit about graphs as a structure that can provide a new and non-standard  encoding for a series of of of problems all right to see how we can do this and to build  up that understanding let's go back to a network architecture that we've seen before we're  familiar with the cnn and as you probably know and i hope you know by now in cnn's we  have this convolutional kernel and the way the convolutional operation in cnn layers  works is that we slide this rectangular kernel over our input image such that the kernel can  pick up on what is inside and this operation is driven by that element-wise multiplication  and addition that we reviewed previously so stepping through this if you have an  image right the the convolutional kernel is effectively sliding across the image applying  its filter its set of weights to the image going on doing this repeatedly and repeatedly  and repeatedly across the entirety of the image and the idea behind cnns is by designing these  filters according to particular sets of weights we can pick up on different types of  features that are present in the data graph convolutional networks 
operate using a very similar idea, but now instead of operating on a 2D image, the network operates on data represented as a graph, where the graph is defined by nodes, shown here as circles, and edges, shown here as lines, and those edges define relationships between the nodes in the graph. The idea of how we can extract information from this graph is very similar in principle to what we saw with CNNs. We're going to take a kernel, again just a weight matrix, and rather than sliding that kernel across the 2D matrix representation of our image, that kernel is going to travel around to different nodes in the graph, and as it does so it's going to look at the local neighborhood of that node and pick up on features relevant to the local connectivity of that node within the graph. This is the graph convolution operation, where the network learns to define the weights associated with that filter so that they capture the edge dependencies present in the graph. So let's step through this: the weight kernel goes around to different nodes and looks at their immediate neighbors; the graph convolutional operator associates weights with each of the edges present and applies those weights across the graph; the kernel is then moved to the next node in the graph, extracting information about its local connectivity, and so on for all the different nodes in the graph. The key, as we continue this operation, is that the local information is aggregated, and the neural network then learns a function that encodes that local information into a higher-level representation. So that's a very brief and intuitive introduction to graph convolutional neural networks and how they operate in principle, and it's a really exciting network architecture which has been enabling enormously powerful advances in a variety of scientific domains. For example, in the chemical sciences and in molecular discovery, there is a class of graph neural networks called message passing networks which have been very successfully deployed on two-dimensional graph-based representations of chemical structures, and these message passing networks build up a learned representation of the atoms, chemical bonds, and relationships that are present in a chemical structure. These same graph neural networks were very recently applied to discover a novel antibiotic, a novel drug that was effective at killing resistant bacteria in animal models of bacterial infection. I think this is an extremely exciting avenue for research, as we start to see these deep learning systems and neural network architectures being applied within the biomedical domain. Another recent and very exciting application area is in mobility and traffic prediction. Here we can take streets and break them up into nodes, and model the intersections and regions of the street network as a graph, where the nodes and edges define the network of connectivity. What teams have done is to build up this graph neural network representation to learn how to predict traffic patterns across road systems, and in fact this modeling can result in improvements in how well estimated times of arrival can be predicted in interfaces like Google Maps.
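To make the aggregation idea concrete, here is a minimal sketch of a single graph convolution layer in the spirit described above: each node gathers features from its neighbors through a normalized adjacency matrix, applies a shared weight matrix, and passes the result through a nonlinearity. The normalization scheme, layer sizes, and toy graph are common illustrative choices, not those of any specific paper discussed here.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

class GraphConvLayer(layers.Layer):
    """One graph convolution: aggregate neighbor features, then apply a shared weight matrix."""
    def __init__(self, out_dim):
        super().__init__()
        self.dense = layers.Dense(out_dim)

    def call(self, node_feats, adj_norm):
        # node_feats: (num_nodes, in_dim); adj_norm: (num_nodes, num_nodes) normalized adjacency
        aggregated = tf.matmul(adj_norm, node_feats)   # each node averages over its neighborhood
        return tf.nn.relu(self.dense(aggregated))      # shared weights capture edge dependencies

def normalize_adjacency(adj):
    # Add self-loops and symmetrically normalize: D^{-1/2} (A + I) D^{-1/2}.
    adj = adj + np.eye(adj.shape[0])
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
    return (d_inv_sqrt @ adj @ d_inv_sqrt).astype("float32")

# Toy graph: 4 nodes with edges (0-1), (1-2), (2-3).
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype="float32")
feats = tf.random.normal((4, 8))                 # an 8-dimensional feature per node

layer1 = GraphConvLayer(16)
layer2 = GraphConvLayer(4)
adj_norm = tf.constant(normalize_adjacency(adj))
hidden = layer1(feats, adj_norm)                 # neighborhood-aware node features
node_embeddings = layer2(hidden, adj_norm)       # higher-level representation per node
print(node_embeddings.shape)                     # (4, 4)
```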
Another very recent and highly relevant example of graph neural networks is in forecasting the spread of the COVID-19 disease, and there have been groups that have looked into incorporating both geographic data, that is, information about where a person lives and is located and who they may be connected to, as well as temporal data, information about that individual's movement and trajectory over time, and using this as the input to graph neural networks. Because of the spatial and temporal components of this data, what has been done is that the graph neural networks have been integrated with temporal embedding components, such that they can learn to forecast the spread of the COVID-19 disease based not only on spatial geographic connections and proximities but also on temporal patterns. Another class of data that we may encounter is three-dimensional data, three-dimensional sets of points which are often referred to as point clouds, and this is another domain in which the same idea of graph neural networks is enabling a lot of powerful advances. To appreciate this, you first have to understand what exactly these three-dimensional datasets look like. These point clouds are effectively unordered sets of data points in space, a cloud of points where there is some underlying spatial dependence between the points. So you can imagine having these point-based representations of the three-dimensional structure of an object and then training a neural network on these data to do many of the same types of tasks and problems that we saw in our computer vision lecture: classification, taking a point cloud and identifying it as a particular object, or segmentation, taking a point cloud and segmenting out the instances of that point cloud that belong to particular objects or particular content types. What we can do is extend graph convolutional networks to be able to operate on point clouds. The way that's done, which I think is super awesome, is by taking a point cloud, expanding it out, and dynamically computing a graph using the meshes inherent in the point cloud. This example is shown with this structure of a rabbit, where we start from the point cloud, expand out, and then define the local connectivity across this 3D mesh, and therefore we can then apply graph convolutional networks to maintain invariance to the ordering of points in 3D space while still capturing the local geometry of such a data system. All right, so hopefully that gives you a sense of different ways we can start to think about encoding structure into neural network architectures, moving beyond the architectures that we saw in the first five lectures. The second new frontier that I'd like to focus on and discuss in the remainder of this talk is this idea of how we can learn to learn, and I think this is a very powerful and thought-provoking domain within deep learning research, and it spawns some interesting questions about how far and how deep we can push the capabilities of machine learning and AI systems. The motivation behind this field, which is now called automated machine learning or AutoML, is the fact that standard deep neural network architectures are optimized for performance on a single task, and in order to build a new model we require domain expertise, expert knowledge, to try to define a new architecture that is going to be very well suited for a particular task. The idea behind automated machine learning is: can we go beyond this tuning, this optimizing of a particular architecture robustly for a single task, can
we go beyond this to build broader algorithms that can actually learn what  are the best models to use to solve a given problem and what we mean in terms of best model  or which model to use is that its architecture is optimal for that problem the hyper parameters  associated with that architecture like the number of layers it has the number of neurons per layer  those are also optimized and this whole system is built up and learned via an a a  algorithm this is the idea of automl and in the original automl work which stands  for automated machine learning the original work used a framework based on reinforcement learning  where there was a neural network that is referred to as a controller and in this case this  controller network is a recurrent neural network the controller what it does is it proposes a  sample model architecture what's called the child architecture and that architecture is going  to be defined by a set of hyper parameters that resulting architecture is can then be trained  and evaluated for its performance on a particular task of interest the feedback of the performance  of that child network is then used as sort of the reward in this reinforcement learning framework  to try to promote and inform the controller as to how to actually improve its network  proposals for the next round of optimization so this cyclic process is repeated thousands upon  thousands of times generating new architectures testing them giving that feedback to  the controller to build and learn from and eventually the controller is going to tend  towards assigning high probabilities to hyper parameters and regions of the architecture  search space that achieve higher accuracies on the problem of interest and will assign low  probability to those areas of the search space that perform poorly so how does this agent  how does this controller agent actually work well at the broad view at the macro scale it's  going to be a rnn based architecture where at each step each iteration of this pipeline the  model is this controller model is going to sample a brand new network architecture and that this  controller network is specifically going to be optimized to predict the hyper parameters  associated with that spawned child network so for example we can consider the  optimization of a particular layer that optimization is going to involve prediction  of hyper parameters associated with that layer like as for a convolutional layer the size  of the filter the length of the stride and so on and so forth then that resulting  network that child network that's spawned and defined by these predicted hyper parameters  is going to be tested trained and tested such that after evaluation we can take the  resulting accuracy and update the recurrent neural network controller system based on how  well the child network performed on our task that rnn controller can then learn to create an  even better model and this fits very nicely into the reinforcement learning framework where the  agent of our controller network is going to be rewarded and updated based on the performance  of the child network that it spawns this idea has now been extended to a number of different  domains for example recently in the context of image recognition with the same principle of a  controller network that spawns a child network that's then tested evaluated to improve the  controller was used to design a optimized neural network for the task of image  recognition in this paradigm of designing this designing an architecture can be thought of as 
 neural architecture search and in this work the controller system was used to construct and design  convolutional layers that were used in an overall architecture tested on image recognition tasks  this diagram here on the left depicts what that learned architecture of a convolutional cell in  a convolutional layer actually looked like and what was really really remarkable about this work  was when they evaluated was the results that they found when they evaluated the performance of  these neural network designed neural networks i know that's kind of a mouthful but let's  consider those results so first here in black i'm showing the accuracy of the state-of-the-art  human-designed convolutional models on an image recognition task and as you can appreciate  the accuracy shown on the y-axis scales with the number of parameters in the millions shown  on the x-axis what was striking was when they compared the performance of these human-designed  models to the models spawned and returned by the automl algorithm shown here in red these neural  designed neural architectures achieved superior accuracy compared to the human-designed  systems with relatively fewer parameters this idea of using machine learning using deep  learning to then learn more general systems or more general paradigms for predictive modeling and  decision making is a very very powerful one and most recently there's now been a lot of emerging  interest in moving beyond automl and neural architecture search to what we can think of more  broadly as auto ai an automated complete pipeline for designing and deploying machine learning  and ai models which starts from data curation data pre-processing to model selection and design  and finally to deployment the idea here is that perhaps we can build a generalizable pipeline  that can facilitate and automatically accelerate and design all steps of this process i think this idea spawns a very very  thought-provoking point which is can we build ai systems that are capable of generating  new neural networks designed for specific tasks but the higher order ai system that's built is  then sort of learning beyond a specific task not only does this reduce the need for us as  experienced engineers to try to hand design and optimize these networks it also makes these  deep learning algorithms more accessible and more broadly we start to get at this consideration of  what it means to be creative what it means to be intelligent and when alexander introduced this  course he spoke a little bit about his thoughts on what intelligence means the ability to take  information using it to inform a future decision and as humans our learning pipeline is definitely  not restricted to optimization for a very specific task our ability to learn and achieve and  solve problems impacts our ability to learn completely separate problems are and  improves our analytical abilities the models and the neural network  algorithms that exist today are certainly not able to extend to this point  and to capture this phenomena of generalizability i think in order to reach the point of true  artificial intelligence we need to be considerate of what that true generalizability  and problem-solving capability means and i encourage you to think about this this  point to think about how automl how auto ai how deep learning more broadly falls into  this broader picture of the intersection and the interface between artificial and human  intelligence so i'm going to leave you with that as a point of reflection for you at this  
point in the course and beyond with that i'm going to close this lecture and remind  you that we're going to have a software lab and office hour session we're going to be  focusing on providing support for you to finish the final lab on reinforcement learning  but you're always welcome to come discuss with us ask your questions discuss with  your classmates and teammates and for that we encourage you to come to the class gather town  and i hope to see you there thank you so much
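Before the next lecture's transcript, here is a toy sketch of the controller-reward loop described in the automl discussion above. It is not the original rnn controller: the "controller" here is just a categorical distribution over a handful of hypothetical filter sizes, and evaluate_child is a made-up stand-in for actually training and scoring a child network.

```python
import numpy as np

# Toy sketch of the automl / neural-architecture-search loop described above.
FILTER_SIZES = [3, 5, 7]
logits = np.zeros(len(FILTER_SIZES))          # the controller's learned preferences

def sample_architecture():
    probs = np.exp(logits) / np.exp(logits).sum()
    choice = np.random.choice(len(FILTER_SIZES), p=probs)
    return choice, probs

def evaluate_child(choice):
    # Placeholder reward: in the real pipeline this would be the validation
    # accuracy of the trained child network; here we pretend filter size 5 is best.
    return {0: 0.70, 1: 0.85, 2: 0.75}[choice] + 0.01 * np.random.randn()

for step in range(1000):
    choice, probs = sample_architecture()
    reward = evaluate_child(choice)
    baseline = 0.77                           # running baseline keeps updates centered
    grad = -probs.copy()
    grad[choice] += 1.0                       # gradient of log p(choice) w.r.t. the logits
    logits += 0.05 * (reward - baseline) * grad   # nudge toward higher-reward choices

print("learned preference:", np.round(np.exp(logits) / np.exp(logits).sum(), 2))
```

The point is only the loop structure: sample an architecture, score it, and use that score as a reward to make high-performing choices more probable on the next round.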
MIT_6S191_Introduction_to_Deep_Learning
MIT_6S191_2019_Deep_Reinforcement_Learning.txt
today we're going to be discussing deep reinforcement learning which is actually one of a combination of disciplines between deep learning and the long-standing community of reinforcement learning which has been around for many decades now in machine learning and at a high level reinforcement learning provides us with a set of mathematical tools and methods for teaching agents how to actually go from perceiving the world which is the way we usually talk about deep learning or machine learning problems in the context of computer vision perception to actually go beyond this perception to actually act acting in the world and figuring out how to optimally act in that world and I'd like to start by showing a rather short but dramatic video of a trailer of the movie based on the alpha ghost story which you might have heard of just to give us an introduction to the power of these techniques go is the world's oldest continuously played board game it is one of the simplest and also most abstract beats me a professional player it go is a long-standing challenge of artificial intelligence [Music] everything we've ever tried in AI just falls over when you try the game of Go a number of possible configurations of the board is more than the number of atoms in the universe I'll forego found a way to learn how to play go so far alphago has beaten every challenge me giving it but we won't know its true strength until we play somebody who is at the top of the world like Lisa doll I'm not like no other is about to get underway in South Korea they said all is to go what Roger Federer is to tennis just the very thought of a machine playing a human because inherently intriguing the place is a madhouse welcome to the deep mind challenge for world is watching can Lisa doll find alphago's weakness whoa is there in fact a weakness the game kind of turned on its axis right now he's not confident thanks it's developing into a very very dangerous fight hold the phone Rena's left the room in the end he used about the pride I think something went wrong gasps man thank you he's got a plan here these ideas that are driving alphago are gonna drive our future this is it folks so for those of you interested that's actually a movie that came out about a year or two ago and it's available on Netflix now it's a rather dramatic depiction of the true story of alphago facing Lisa Dole but it's an incredibly powerful story at the same time because it really shows the impact that this algorithm had on the world and the press that it received as a result and hopefully by the end of this lecture you'll get a sense of the way that this this remarkable algorithm that Valve ago was trained and kind of going beyond that although will then give a lecture on some of the new frontiers of deep learning as well so let's start by actually talking about some of the classes of what we've seen in this lecture so far and comparing and seeing how reinforcement learning fits into those classes so first of all supervised learning is probably the most common thing that we've been dealing with in this class we're given data and labels and we're trying to learn this functional mapping to go from new data to a new to one of the existing labels in our training set and for an example we can do things like classification where I give you an image of an apple and the algorithm is tasked to determine that this is indeed an apple unsupervised learning is what we discussed yesterday and this deals with the problem where there's only data and no labels and 
comparing our problem with the Apple example here I give it another Apple and it's able to learn that this thing is like that other thing even though if it doesn't know exactly that these are apples but it's able to understand some underlying structure about these two objects now finally how does reinforcement learning fit into this paradigm reinforcement learning deals with something that we call state action pairs so it says pairs of both states in the environment that an agent receives as well as actions that it takes in that environment to execute and observe new States and the goal of reinforcement learning as opposed to supervised or unsupervised learning is to actually maximize the future rewards that it could see in any future time so to act optimally in in this environment such I can maximize all future rewards that it sees and going back to the Apple example if I show it this image of an apple the agent might now respond in a reinforcement learning setting by saying I should eat that thing because I've seen in the past that it helps me get nutrition and it helps keep me alive so again we don't know what this thing is it's not a supervised learning problem where we're explicitly telling you that this is an apple with nutritional value but it's learned over time that it gets reward from eating this and that should continue eating it in the future so our focus today in this class will be on reinforcement learning and seeing how we can build algorithms to operate in this state action or learning and perception paradigm so I've defined a couple concepts in that previous slide like an agent and environment action rewards that I didn't really define and I'd like to now start by going through simple example or a simple schematic where I clearly define all of those things so we can use them later in the lecture and go into greater greater levels of abstraction so the idea of reinforcement learning deals with the central component of reinforcement learning deals with an agent so an agent for example is like a drone that's making a delivery it could be also Super Mario that's trying to navigate a videogame the algorithm is the agent and in real life you are the agent okay so you're trying to build an algorithm or a machine learning work model that models that agent and the agent takes actions in some environment now the environment is like I said the place where the agent exists and it operates within it can send actions to that environment an action is simply just a possible move that the agent can make in that environment and your action is almost self-explanatory but I should note that the action that the agent executes at time T here I'm Dino Nia's little a of T can be chosen from usually a discrete subset of possible actions which we'll call capital a so in this first part of the lecture I'll focus on discrete action spaces where there's a limited number of possible actions you can imagine me telling it in a video game for example that the agent can go either left or right forwards or backwards pick up the block put down the block these are all possible actions that you could take in that environment now upon taking an action in the environment the agent will then receive some observation back from the environment these observations actually define how the agent interacts with the environment and it can include things like how the state changes so an example of a state is for example if you're the agent the state is what you see so it's the sensor inputs that you obtain it's your 
vision it's your sound your touch these are all parts of your state and basically it's a concrete situation that the agent finds itself in at any given time so when it takes this action the environment responds with a new state that the agent is located in given that action and finally the last part of this schematic is the reward that the environment responds with given that action so the reward is basically like a feedback that measures the success or failure of the given action taken in that environment for example in a video game when Mario touches a coin it gets a reward from any given state an agent sends an output in the form of actions to the environment and the environment returns the agent's new state and an associated reward with that action rewards can be either immediate where you take an action and you immediately get a reward or they can be delayed in the context of delayed gratification where you may take an action today that you may not benefit from until tomorrow or the day after tomorrow or even in some cases very long term into the future so building on this concept of rewards we can define this notion of a total reward that the agent obtains at any given time the total future reward rather as just the sum of all rewards from that time step into the future so for example if we're starting at time T we're looking at the reward that it obtains at time T plus the reward that it obtains at time t + 1 plus the reward at t + 2 and so on there's a slight problem here and that's that if this summation is going to infinity so if you're looking at infinite time horizons with potentially infinite rewards this term capital R of T could also go to infinity and that's not a great property of mathematical equations so what we do is we introduce this notion of the discounted reward that's essentially where we discount future rewards based on this discounting factor gamma it's just a number between zero and one where we're placing more weight on rewards obtained in the near-term future and placing less weight on rewards obtained at time steps far from the current state so I guess before moving on any further I want to make sure that everyone understands all of these concepts present in this schematic because we're only going to be building on things from this point on and using this terminology to build higher and higher levels of abstraction so is this all clear to everyone yep go ahead so the state changes are immediate so usually you take an action and let's suppose in self-driving cars if you want to train an agent to navigate in a self-driving car world your action is a steering wheel angle and the next state is the next camera input that that car sees it does not need to be related to the world so the rewards are basically just a scalar number that comes back with each state but you could also imagine cases where there are no rewards and at a given time step you might not see a reward but you might see a reward way into the future so for example in some games like for example pong you can imagine you never get a reward until you either win or lose the game if you win the game you get a positive reward but if you lose the game you get a negative reward and all of the intermediate frames and actions that you take never result in any reward at all
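A quick sketch of the discounted return just described, R_t = r_t + gamma * r_{t+1} + gamma^2 * r_{t+2} + ..., computed from a recorded list of rewards; the reward values and gamma below are just illustrative.

```python
# Minimal sketch of the discounted total future reward described above.
def discounted_return(rewards, gamma=0.95):
    total, discount = 0.0, 1.0
    for r in rewards:              # rewards observed from time t onward
        total += discount * r
        discount *= gamma          # gamma in (0, 1) down-weights far-future rewards
    return total

# Example with sparse rewards like the pong case above: nothing until a win (+1).
print(discounted_return([0, 0, 0, 0, 1], gamma=0.95))   # about 0.81
```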
so now we're gonna take this notion of an agent collecting rewards in an environment and try to define what we call a Q function this is kind of a fundamental function that's gonna be the building block of one of the algorithms that we're gonna use in reinforcement learning in the first part of this lecture and just to reiterate again this equation which you see on the top is the same equation as before it's saying the total reward that we can obtain from time T is just a summation of the discounted rewards in the future ok and now we want to define this function which we're going to call the Q function and it's going to take as input two things one is the state and the other is the action that the agent wants to execute at that state and we want that Q function to represent the expected total discounted reward that it could obtain in the future so now to give an example of this let's go back to the self-driving car example you're placing your self-driving car at a position on the road that's your state and you want to know for any given action what is the total amount of future reward that that car can achieve by executing that action of course some actions will result in better Q values higher Q values if the road is straight obtaining a higher Q value might occur if you are taking actions corresponding to straight steering wheel angles but if you try to steer sharp to the right or sharp to the left your Q is going to sharply decrease on each end because these are undesirable actions so in a sense our Q function is telling us for any given action that the agent can make in a given state what is the expected reward that it can obtain by executing that action okay so the key part of this problem in reinforcement learning is actually learning this function this is the hard thing so we want to learn this Q value function so given a state and given an action how can we compute that expected return of reward but ultimately what we need to actually act in the environment is a new function that I haven't defined yet and that's called the policy function so here we're calling PI of s the policy and here the policy only takes as input just the state so it doesn't care about the action that the agent takes in fact it wants to output the desired action given any state so the agent obtains some state it perceives the world and ultimately you want your policy to output the optimal action to take given that state that's ultimately the goal of reinforcement learning you want to see a state and then know how to act in that state now the question I want to pose here is assuming we can learn this Q function is there a way that we can now create or infer our policy function and I hope it's a little obvious here that the strategy we want to take is essentially that we want to try all possible actions that the agent can take in that given state and just find the one that results in the maximum reward and the one that results in the maximum reward is just going to be the one that has the maximum Q value so we're gonna define the policy function here as just the arg max over all possible actions of that Q value function so what that means just one more time is that we're gonna plug in all possible actions given the state into the Q function find the action that results in the highest possible total return in rewards and that's going to be the action that we take at that given state
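As a minimal sketch of this arg max step, here is how a policy could be read out of a learned Q function over a discrete action set; the toy_q function below is a made-up stand-in for a trained Q network.

```python
import numpy as np

# Infer a policy from a Q function as described above: evaluate Q(s, a) for
# every discrete action and pick the one with the largest value.
def greedy_policy(state, q_function, actions):
    q_values = [q_function(state, a) for a in actions]
    return actions[int(np.argmax(q_values))]

# Hypothetical Q function for a left / stay / right agent, just for illustration.
toy_q = lambda state, action: -abs(state - action)     # prefers action closest to state
print(greedy_policy(0.0, toy_q, actions=[-1, 0, 1]))    # -> 0
```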
in deep reinforcement learning there are two main ways that we can try to learn policy functions the first way is actually by like I was alluding to before trying to first learn the Q value function so that's on the left hand side so we try and learn this Q function that goes from states and actions and then use that to infer a deterministic signal of which action to take given the state that we're currently in using this arg max function and that's like what we just saw another alternative approach that we'll discuss later in the class is using what's called policy learning and here we don't care about explicitly modeling the Q function but instead we want to just have the output of our model be the policy that the agent should take so here the model that we're creating is not taking as input both the state and the action it's only taking as input the state and it's predicting a probability distribution which is PI over all possible actions probability distributions sum up to one they have some nice properties and then what we can do is we can actually just sample an action from that probability distribution in order to act in that state so like I said these are two different approaches for reinforcement learning two main approaches and the first part of the class will focus on value learning and then we'll come back to policy learning as a more general framework and more powerful framework we'll see that actually policy learning is what alphago uses and that's kind of what we'll end on and touch on how that works so before we get there let's keep going deeper into the Q function so here's an example game that we'll consider this is the atari breakout game and the way it works is you're this agent you're this little paddle on the bottom and you can either choose to move left or right in the world at any given frame and there is this ball also in the world that's coming either towards you or away from you your job as the agent is to move your paddle left and right to hit that ball and reflect it so that you can try to knock off a lot of these blocks on the top part of the screen every time you hit a block on the top part of the screen you get a reward if you don't hit a block you don't get a reward and if that ball passes your paddle without you hitting it you lose the game so your goal is to keep hitting that ball back onto the top of the board and breaking off as many of these colored blocks as possible each time getting a brand new reward and the point I want to make here is that understanding Q functions or understanding optimal Q values is actually a really tough problem and if I show you two possible example states and actions that an agent could take so for example here's an example of the paddle with a ball coming straight down towards the paddle the agent can choose to stay where it is and basically just deflect that ball straight back up that's one possible state action pair another possible state action pair is where the ball is coming slightly at an angle towards the paddle the paddle can move slightly to the right hit that ball at an angle and just barely nick it and send it ricocheting off to the side of the screen now the question I have here is as a human which state action pair do you think is more desirable to be in in this game which of you think it's a okay how about b interesting so actually you guys are much smarter than I anticipated or maybe you're just looking at the notes because the correct answer is b even in a slightly stochastic setting let's suppose you keep executing a and you keep
hitting off these blocks in the middle of the screen you're kind of having a limited approach because every block that you knock off has to be one that you explicitly aim towards and hit so here's an example of a policy executing something like a and it's hitting a lot of the blocks in the center of the screen not really hitting things on the side of the screen very often even though it's not going directly up and down it is targeting the center more than the side okay now I want to show you an alternative policy now it's b that's explicitly trying to hit the ball with the side of the paddle every time no matter where the ball is it's trying to move away from the ball and then come back towards it so it just barely hits the ball with the side of the paddle so it can send it ricocheting off into the corner of the screen and what you're gonna see is it's gonna basically be trying to create these gaps in the corners of the screen so on both left and right side so that the ball can get stuck in that gap and then start killing off a whole bunch of different blocks with one single action so here's an example it's gonna be trying to kill off those side blocks now it's going for the left side and once it breaks open now you can see it just starts dominating the game because its ball is just getting stuck on that top platform and it's able to succeed much faster than the first agent agent a so to me at least this was not an intuitive action or an intuitive Q value to learn and for me I would have assumed that the safest action to take was actually a but through reinforcement learning we can learn more optimal actions than what might be immediately apparent to human operators so now let's bring this back to the context of deep learning and find out how we can use deep learning to actually model Q functions and estimate Q functions using training data and we can do this in one of two ways so the primary way or the primary model that's used is called a deep Q network and this is essentially like I said a model that's trying to estimate a Q function so in this first model that I'm showing here it takes as input a state and a possible action that you could execute at that state and the output is just the Q value it's just a scalar output and the neural network is basically predicting what is the estimated expected total reward that it can obtain given the state and this action then you want to train this network using mean squared error to produce the right answer given a lot of training data at a high level that's what's going on the problem with this approach is that if we want to use our policy now and we want our agent to act in this world we have to feed through the network a whole bunch of different actions at every time step to find the optimal Q value right so we have to for every possible action imagine we have a ton of actions in the previous example it's simple because we only have left and right as possible actions but let's suppose we have a ton of different actions for each action we have to feed in the state and that action compute the Q value do that for all actions now we have a whole bunch of Q values we take the maximum and use that action to act okay that's not great because it requires executing this network in a forward pass a total number of times that's equal to the total number of actions that the agent could take at that step
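A minimal sketch of this first formulation, including the one-forward-pass-per-action selection step it forces; the layer sizes and the state and action dimensions are illustrative assumptions, not the lecture's exact architecture.

```python
import numpy as np
import tensorflow as tf

# First deep Q network formulation described above: the network takes a state
# AND a candidate action (one-hot) and outputs a single scalar Q value.
STATE_DIM, NUM_ACTIONS = 8, 4

state_in = tf.keras.Input(shape=(STATE_DIM,))
action_in = tf.keras.Input(shape=(NUM_ACTIONS,))            # one-hot action
h = tf.keras.layers.Concatenate()([state_in, action_in])
h = tf.keras.layers.Dense(64, activation="relu")(h)
q_out = tf.keras.layers.Dense(1)(h)                         # scalar Q(s, a)
q_network = tf.keras.Model([state_in, action_in], q_out)

def select_action(state):
    # The drawback noted above: one forward pass per possible action.
    actions = np.eye(NUM_ACTIONS, dtype=np.float32)
    states = np.repeat(state[None, :].astype(np.float32), NUM_ACTIONS, axis=0)
    q_values = q_network.predict([states, actions], verbose=0).squeeze()
    return int(np.argmax(q_values))

print(select_action(np.random.randn(STATE_DIM)))
```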
another alternative is slightly reparametrizing this problem still learning the Q value but now we input just the state and the network intrinsically will compute the Q value for each of the possible actions and since your action space is fixed in reinforcement learning in a lot of cases the output of the network is also fixed which means that each time you input a state the network is basically outputting n numbers where n is the dimensionality of your action space where each output corresponds to the Q value of executing that action now this is great because it means if we want to take an action given a state we simply feed in our state to the network it gives us back all these Q values we pick the maximum Q value and we use the action associated to that maximum Q value in both of these cases however we can actually train using mean squared error it's a fancy version of mean squared error that I'll just quickly walk through so on one side is the predicted Q value that's actually the output of the neural network just to reiterate this takes as input the state and the action and this is what the network predicts you then want to minimize the error of that predicted Q value compared to the true or the target Q value on the other side so the target Q value is what you actually observed when you took that action so when the agent takes an action it gets a reward that you can just record you store it in memory and you can also record the discounted reward that it receives in every action after that so that's the target return that's what you know the agent obtained that's the reward that it obtained given that action and you can use that to now have a regression problem over the predicted Q values and basically over time using back propagation it's just a normal feed-forward network we can train this network according to this loss function to make our predicted Q value as close as possible to our desired or target Q values and just to show you some exciting results so when this first came out a paper by deepmind showed that it could work in the context of Atari games and they wanted to present this general deep Q network this just learning through deep Q networks where they input the state of the game on the left-hand side pass it through a series of convolutional layers followed by nonlinear activation functions like relus and propagating this information forward each time using a convolutional layer activation function fully connected layer activation function and then finally at the output we have a list of n Q values where each Q value corresponds to a possible action that it could take and this is the exact same picture as I gave you before except now it's just for a specific game in Atari and one of the remarkable things that they showed was that this is an incredibly flexible algorithm because without changing anything about this algorithm but just deploying it in many different types of games you can get this network to perform above human level on a lot of different tasks in Atari so Atari is composed of a whole bunch of games which you can see on the x-axis so each bar here is a different game in Atari and things to the left of this vertical bar correspond to situations where the deep Q network was able to outperform the level of a human operator in that game there are definitely situations in Atari where the deep Q network was not able to outperform human level and typically what was noticed by a lot of researchers afterwards was that in situations or in games where we don't
have a perfectly observable world where if I give you a state you can optimally observe the correct action to take in that state not all of these games have that property and that's a very nice property to have so a lot of games have very sparse rewards for example this game on the end montezuma's revenge is notorious for having extremely sparse rewards because it requires that the agent go through a ladder go to a different room and collect a key and then use that key to turn the knob or something like that and without randomly exploring or possibly seeing that sequence of states the agent would never get exposed to those Q values it could never learn that this was an optimal action to take if it just randomly explores the environment it's never going to actually see that possible action in that case it's never ever going to get any context of what the optimal action to take is in the context of breakout which I was showing you before where you have that paddle and the ball hitting the paddle and you're trying to break off all of these points on the top this is an example of a game where we have perfect information so if we know the direction of the ball we know the position of our paddle and we see all of the points in the space we have an optimal solution at any given state given what we see of where to move the paddle that kind of summarizes our topic of Q learning and I want to end on some of the downsides of Q learning it surpasses human level performance on a lot of simpler tasks but it also has trouble dealing with complexity it also can't handle action spaces which are continuous so if you think back to the way we defined the Q network the deep Q network we're outputting a Q value for each possible action it could take but imagine you're a self-driving car and the actions that you take are a continuous variable on your steering wheel angle it's the angle that the wheel should turn now you can't use Q learning because it requires an infinite number of outputs there are tricks that you can get around this with because you can just discretize your action space into very small bins and try to learn the Q value for each bin but of course the question is well how small do you want to make this the smaller you make these bins the harder learning becomes and just at its core the vanilla Q learning algorithm that I presented here is not well-suited for continuous action spaces and on another level it's not flexible enough to handle stochastic policies because we're basically sampling from this arg max function we have our Q function and we just take the arg max to compute the best action that we can execute at any given time it means that we can't actually learn when our policies are stochastically computed so when the next state is maybe not deterministic but instead having some random component to it as well right so this is the point I mentioned about the continuous action space being actually a very important problem that might seem kind of trivial to deal with by just binning the solution binning the outputs but this is actually a really big problem in practice and to overcome this we're gonna consider a new class of reinforcement learning models called policy gradient models for training these algorithms
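Here is a quick sketch of that binning workaround for a continuous steering angle; the angle range and number of bins are arbitrary illustrative choices, and the trade-off just mentioned shows up directly, since the Q network would need one output per bin.

```python
import numpy as np

# Discretize a continuous steering angle into bins so a deep Q network can
# output one Q value per bin. More bins means finer control but a larger
# output layer and a harder learning problem, as noted above.
NUM_BINS = 9
bin_centers = np.linspace(-30.0, 30.0, NUM_BINS)     # steering angles in degrees

def angle_to_bin(angle_deg):
    return int(np.argmin(np.abs(bin_centers - angle_deg)))

def bin_to_angle(bin_index):
    return float(bin_centers[bin_index])

print(angle_to_bin(12.3), bin_to_angle(angle_to_bin(12.3)))   # bin 6 -> 15.0 degrees
```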
so policy gradients is a slightly different twist on Q learning but at the foundation it's actually very different so let's recall Q learning so the deep Q network takes as input the state and predicts a Q value for each possible action on the right hand side now in policy gradients we're gonna do something slightly different we're gonna take as input the state but now we're gonna output a probability distribution over all possible actions again we're still considering the case of discrete action spaces but we'll see how we can easily extend this in ways that we couldn't do with Q learning to continuous action spaces as well let's stick with discrete action spaces for now just for simplicity so here PI of a i for all i is just the probability that you should execute action i given the state that you see as an input and since this is a probability distribution it means that all of these outputs have to add up to one we can do this using the softmax activation function in deep neural networks which just enforces that the outputs are summing to one and again just to reiterate this PI of a given s is the probability distribution of taking a given action given the state that we currently see and this is just fundamentally different than what we were doing before which was estimating a Q value which is saying what is the possible reward that I can obtain by executing this action and then using the action with the maximum Q value the maximum reward to execute that action so now we're directly learning the policy we're directly saying what is the correct action that I should take what is the probability that that action is a 1 versus the probability that that action is a 2 and just execute the correct action so in some sense it's skipping a step from Q learning in Q learning you learn the Q function and use the Q function to infer your policy in policy learning you just learn your policy directly
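A minimal sketch of such a policy network, with a softmax output over the discrete actions and an action sampled from that distribution rather than chosen by an arg max; the state and action dimensions and layer size are illustrative assumptions.

```python
import numpy as np
import tensorflow as tf

# Policy network sketch: state in, probability distribution over actions out.
STATE_DIM, NUM_ACTIONS = 8, 4
policy_network = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(STATE_DIM,)),
    tf.keras.layers.Dense(NUM_ACTIONS, activation="softmax"),   # outputs sum to one
])

def sample_action(state):
    # Sample an action from pi(a | s) instead of taking an argmax over Q values.
    probs = policy_network.predict(state[None, :].astype(np.float32), verbose=0)[0]
    probs = probs / probs.sum()              # guard against float round-off
    return int(np.random.choice(NUM_ACTIONS, p=probs))

print(sample_action(np.random.randn(STATE_DIM)))
```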
so how do we train policy gradient learning essentially the way it works is we run a policy for a long time before we even start training we run multiple episodes or multiple rollouts of that policy a rollout is basically from start to end of a training session so we can define a rollout as basically from time 0 to time T where T is the end of some definition of an episode in that game so in the case of breakout capital T would be the time at which the ball passes the paddle and you miss it so this is the time when the episode ends and you miss the ball or it's the time at which you kill all of the points on the top and you have no other points to kill so now the game is over so you run your policy for a long time and then you get the reward after running that policy now in policy gradients all you want to do is increase the probability of actions that lead to high rewards and decrease the probability of actions that lead to low rewards it sounds simple it is simple let's just see how it's done by looking at the gradient which is where this algorithm gets its name which is right here so let's walk through this algorithm in a little more detail so we do a rollout for an episode given our policy so the policy is defined by the neural network parametrized by the parameters theta we sample a bunch of episodes from that policy and each episode is basically just a collection of state action and reward pairs so we record all of those into memory then when we're ready to begin training all we do is compute this gradient right here and that gradient is the log likelihood of seeing a particular action given the state multiplied by the expected discounted reward of that action at that time so let's try and parse what this means because this is really the entire policy gradient algorithm on this one line this is the key line here so let's really try and understand this line the green part is simply the log likelihood of outputting that action so let's suppose our action was very desirable it led to a good reward okay and that's just determined by doing our rollout on the top and we won the game at the end so all of these actions should be reinforced and on the second line we did another episode which resulted in a loss so all of these actions should be discouraged in the future so when things result in positive rewards this multiplier is going to be a positive number and we're going to try and increase the log likelihood of seeing those actions again in the future so we want to tell the network essentially to update your parameters such that whatever you did that resulted in a good reward is gonna happen again in the future and with even greater probability so let's make sure that we definitely sample those things again because we got good rewards from them last time on the converse side if R is negative or if it's zero if we didn't get any reward from it we want to make sure that we update our network now to change the parameters and make sure that we discourage any of the probabilities that we output at the previous time so we want to lower the log likelihood of executing those actions that resulted in negative rewards so now I'll talk a little bit about the game of Go and how we can use policy gradient learning combined with some fancy tricks that deepmind implemented in the algorithm alphago so for those of you who aren't familiar with the game of Go it's an incredibly complex game with a massive state space there are more states than there are atoms in the universe and that's in the full version of the game where it's a 19 by 19 game go is a two-player game black and white and the motivation or the goal is that you want to get more board territory than your opponent the state here is just the board of black and white cells and the action that you want to execute is chosen from a probability distribution over each possible cell that you could put your next piece on so you can train a network like we were defining before a policy based network which takes as input an image of this board it's a 19 by 19 image where each pixel in that image corresponds to a cell in the board and the output of that network is going to be a probability distribution again it's a 19 by 19 probability distribution where each cell is now the probability that your next action should be placing a token on that cell on the board right so let's see how at a high level the alphago algorithm works they use a little trick here in the beginning which is they start by initializing their network by training it on a bunch of games from human experts playing the game of go so they have humans play each other these are usually professional or very high level humans they record all of that training data and then they use that as input to train a supervised learning problem on that policy network so this again takes as input the state of the board and it tries to maximize the probability of executing the action that the human output or sorry
executing the action that the human executed okay so now this first step here is focusing on building a model using supervised learning to imitate what the humans how the humans played the game of go of course this is not going to surpass any human level performance because you're purely doing imitation learning so the next step of this algorithm is then to use that network which you trained using supervised learning and to pit it against itself in games of self play so you're gonna basically make two copies of this network and play one network against itself and use now reinforcement learning to achieve superhuman performance so now since its play against itself and not it's not receiving human input its able to discover new possible actions that the human may not have thought of that may result in even higher reward than before and finally the third step here is that you want to build another network at each time at each position on the board so it takes now the state of the board and tries to learn the value function and that's essentially very similar to the cue function except now it's outputting one output which is just the maximum Q function so over all actions this this value network is basically telling you how good of a board state is this so this gives us an intuition we're using neural networks now to give us an intuition as to what board states are desirable what support states may result in higher values or more probability of winning and you might notice that this is very closely related to the Q function this is telling us like I said at the core how desirable a given board state is and basically just by looking at the board state we want to determine is this a good board state or a bad sport state and the way that alphago uses this is it then uses this value network to understand which parts of the game it should focus more on so if it's at a particular state where it knows that this is not a good state to be in it knows that it shouldn't keep executing trials down the state so it's able to actually prune itself and have a bit of a more intelligent algorithm of learning by not executing all possible actions at every time step but by kind of building a smart tree of actions based on this heuristic learned by another network and the result is alphago which in 2016 beat lee sedol which is who's the top human player at go and this was the first time that an AI algorithm has beaten top human performers in the game of go this was really a groundbreaking moment it ties us back to that trailer that we saw at the beginning of class and finally just to summarize some really exciting new work from the same team that created alphago now they've released a new model called alpha zero which was published in science about one month ago and it's a general framework for learning self play models of board games so they show it being to outperform top model-based approaches and top human players in games of chess shogi and alphago so it's actually able to surpass the performance of alphago in just 40 hours and now this network alpha zero is called alpha zero because it requires no prior knowledge of human players it's learned entirely using reinforcement learning and self play and that's kind of the remarkable thinkers that without using any prior information at all now we've shown it's possible to one learn the possible or learn the likely sets of moves that human players would make and then the really interesting thing is that it then kind of starts to discard those moves in favor of even 
better moves that humans never really thought of making and the really interesting thing is if you follow this evolution of the rewards going up in time and you stop it at any intermediate time so if you stop in the early training parts like for example right here and you look at the behavior of these agents they behave similar to top human players so especially the actions that they execute in the initial board states like right when the game starts are very similar to top human players but then as training continues and the agent starts to discover more and more advanced ways of starting the game it's actually able to create new policies that humans never even considered and now alpha zero is being used as almost a learning mechanism for top human performers on these new policies that can improve humans even more I think this is a really powerful technique because it shows reinforcement learning being used not just to pit humans against machines but also as a teaching mechanism for humans to discover new ways to execute optimal policies in some of these games and even going beyond games the ultimate goal of course is to create reinforcement learning agents that can act in the real world robotic reinforcement learning agents not just in simulated board games but in the real world with humans and help us learn in this world as well so that's all for reinforcement learning happy to take any questions or we'll hand it off to Ava for the new frontiers of deep learning in the next part okay thank you [Applause]
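To tie the policy gradient pieces of this lecture together before the next transcript, here is a minimal sketch of one REINFORCE-style update that raises the log likelihood of actions in proportion to the discounted return that followed them; the network mirrors the earlier policy sketch and all shapes, sizes, and the example rollout are illustrative assumptions rather than the lecture's code.

```python
import numpy as np
import tensorflow as tf

STATE_DIM, NUM_ACTIONS = 8, 4

# Small policy network with a softmax output, as in the earlier sketch.
policy_network = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(STATE_DIM,)),
    tf.keras.layers.Dense(NUM_ACTIONS, activation="softmax"),
])
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)

def reinforce_update(states, actions, discounted_returns):
    # states: [T, STATE_DIM] floats, actions: [T] integer action ids,
    # discounted_returns: [T] the R_t computed from each step's recorded rewards.
    states = tf.cast(states, tf.float32)
    actions = tf.cast(actions, tf.int32)
    returns = tf.cast(discounted_returns, tf.float32)
    with tf.GradientTape() as tape:
        probs = policy_network(states)                              # pi(a | s) for every action
        chosen = tf.gather(probs, actions, axis=1, batch_dims=1)    # pi(a_t | s_t)
        log_probs = tf.math.log(chosen + 1e-8)
        # Minimizing this loss maximizes log pi * R, so actions followed by
        # high returns become more likely and low-return actions less likely.
        loss = -tf.reduce_mean(log_probs * returns)
    grads = tape.gradient(loss, policy_network.trainable_variables)
    optimizer.apply_gradients(zip(grads, policy_network.trainable_variables))
    return float(loss)

# Hypothetical 5-step rollout, just to show the call.
states = np.random.randn(5, STATE_DIM)
actions = np.random.randint(0, NUM_ACTIONS, size=5)
returns = np.array([1.0, 0.95, 0.9, 0.86, 0.81])
print(reinforce_update(states, actions, returns))
```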
MIT_6S191_Introduction_to_Deep_Learning
MIT_6S191_2022_Convolutional_Neural_Networks.txt
okay hi everyone and welcome back to 6s191 today we're going to be taking or talking about one of my favorite subjects in this course and that's how we can give machines a sense of sight and vision now vision is one of the most important senses in sighted people now sighted people and humans rely on vision quite a lot every single day everything from navigating the physical world to manipulating objects and even interpreting very minor and expressive facial expressions i think it's safe to say that for all of us sight and vision is a huge part of our everyday lives and today we're going to be learning about how we can give machines and computers the same sense of sight and processing of of images now what i like to think about is actually how can we define what vision is and one way i think about this is actually to know what we are looking at or what is where only by looking now when we think about it though vision is actually a lot more than just detecting where something is and what it is in fact vision is much more than just that simple understanding of what something is so take this example for take the scene for example we can build a computer vision system for example to identify different objects in this scene for example this yellow taxi as well as this van parked on the side of the road and beyond that we can probably identify some very simple properties about this scene as well beyond just the aspect that there's a yellow van and a uh sorry a yellow taxi and a white fan we can actually probably infer that the white van is parked on the side of the road and the yellow taxi is moving and probably waiting for these pedestrians which are also dynamic objects in fact we can also see these other objects in the scene which present very interesting dynamical scenarios such as the red light and other cars merging in and out of traffic now accounting for all of the details in this scene is really what vision is beyond just detecting what is where and we take all of this for granted i think as humans because we do this so easily but this is an extraordinarily challenging problem for humans to or for machines to be able to also tackle uh this in a learning fashion so vision algorithms really require us to bake in all of these very very subtle details and deep learning has actually revolutionized and brought forth a huge rise in computer vision algorithms and their applications so for example everything from robots in kind of operating in the physical world to mobile computing all of you on your phones are using very advanced machine vision and computer vision algorithms that even a decade ago were training on kind of super complete computers that existed at the highest clusters of computer clusters and now we're seeing that in all of our pockets used every day in our lives we're seeing in biology and medicine computer vision being used to diagnose cancers and as well as autonomous driving where we're having actually machines operate together with physical humans in our everyday world and finally we're seeing how computer vision can help humans who are lacking a sense of sight in terms of able to increase their own accessibility as well so deep learning has really taken computer vision systems by storm because of their ability to learn directly from raw pixels and directly from data and not only to learn from the data but learn how to process that data and extract those meaningful image features only by observing this large corpus of data sets so one example is data facial detection and 
recognition another common example is in the context of self-driving cars where we can actually take an image of that the car is seeing in its environment and try to process the control signal that it's that it should execute at that moment in time now this entire control control system in this example is actually being processed by a single neural network which is radically different when we think about how other companies like the majority of self-driving car companies like waymo for example their approach that those companies are taking which is a very very different pipelined approach now we're seeing computer vision algorithms operate the entire robot control stack using a single neural network and this was actually work that we published as part of my lab here at csail and you'll get some practice developing some of these algorithms in your software labs as well we're seeing it in medicine biology taking the ability to diagnose radiograph scans from a from a doctor and actually make clinical decisions and finally computer vision is being widely used in accessibility applications for example to help the visually impaired so projects in this research endeavor helped build a deep learning enabled device that could actually detect trails for running and provide audible feedback to visually impaired users so that they could still go for runs in the outdoor world and like i said this is often these are all tasks that we really take for granted as humans because each of us as cited individuals have to do them routinely but now in this class we're going to talk about how we can train a computer to do this as well and in order to do that we need to kind of ask ourselves firstly how can we build a computer to quote-unquote c right and specifically how can we build a computer that can firstly process an image into something that they can understand well to a computer an image is simply a a bunch of numbers in the world now suppose we have this picture of abraham lincoln it's simply a collection of pixels and since this is a grayscale image it's simply a each of those pixels is just one single number that denotes the intensity of the pixel and we can also represent our bunch of numbers as a two-dimensional matrix of numbers one pixel for every location in our matrix and this is how a computer sees it's going to take as input this giant two-dimensional matrix of numbers and if we had a rgb image a color image it would be exactly the same story except that each pixel now we don't have just one number that denotes intensity but we're going to have three numbers to denote red green and blue channels now we have to actually have this way of representing images to computers and i think that's and i think we can actually think about what computer tasks we can now perform using this representation of an image so two common types of machine learning tasks in computer vision are that of recognition and detection and sorry recognition and classification and regression and kind of quantitative analysis on your image now for regression our output is going to take a continuous valued number and for classification our output is going to take a one of k different class outputs so you're trying to output a probability of your image being in one of k classes so let's consider firstly the task of image classification we want to predict a single label for each image for example we can say we have a bunch of images of u.s presidents and we want to build a classification pipeline that will tell us which present 
president is this an image of on the left hand side and on the output side on the right side we want to output the probability that this was an image coming from each of those particular precedents now in order to correctly classify these images our pipeline needs to be able to tell what is unique to each of those different present so what is unique to a picture of lincoln versus a picture of washington versus a picture of obama now another way to think about this image classification problem and how a computer might go about solving it is at a high level in terms of features that distinguish each of those different types of images and those are characteristics of those types of classes right so classification is actually done and performed by detecting those features in our given image now if the features of a given class are present then we can say or predict with pretty high confidence that our class or our image is coming from that particular class so if we're building a computer vision pipeline for example our model needs to know what the features are and then it needs to be able to detect those features in an image so for example if we want to do facial detection we might start by first trying to detect noses and eyes and mouths and then if we can detect those types of features then we can be pretty confident that we're looking at a face for example just like if we want to detect a car or a door we might look at wheels or or sorry if we want to detect the car we might look for wheels or a license plate or headlights and those are good indicators that we're looking at a car now how might we solve this problem of feature detection first of all well we need to leverage certain information about our particular field for example in human faces we need to use our understanding of human faces to say that okay a human face is usually comprised of eyes and noses and mouths and a classification pipeline or an algorithm would then try to do that exactly and try to detect those small features first and then make some determination about our overall image now of course the big problem here is that we as humans would need to define for the algorithm what those features are right so if we're looking at faces a human would actually have to say that a face is comprised of my eyes and noses and mouths and that that's what the computer should kind of look for but there's a big problem with this approach because actually a human is not very good usually at defining those types of features that are really robust to a lot of different types of variation for example scale variations deformations viewpoint variations there's so many different variations that an image or a three-dimensional object may undergo in the physical world that make it very difficult for us as humans to define what good features that our our computer algorithm may need to identify so even though our pipeline could use the features that we the human may define this manual extraction will actually break down in the detection part of this task so due to this incredible variability of image data the detection of these features is actually really difficult in practice right so because your detection algorithm will need to withstand all of those different variations so how can we do better what we want ideally is we want a way to extract features and detect their presence in images automatically just by observing a bunch of images can we detect what a human face is comprised of just by observing a lot of human faces and maybe even in a 
hierarchical fashion so and to do that we can use a neural network like we saw in yesterday's class so a neural network-based approach here is going to be used to learn and extract meaningful features from a bunch of data of human faces in this example and then learn a hierarchy of features that can be used to detect the presence of a face in a new image that it may see so for example after observing a lot of human faces in a big data set an algorithm may learn to identify that human faces are usually comprised of a bunch of lines and edges that come together and form mid-level features like eyes and noses and those come together and form larger pieces of your facial structure and your facial appearance now this is how neural networks are going to allow us to learn directly from visual data and extract those features if we construct them cleverly now this is where the whole part of the class gets interesting because we're going to start to talk about how to actually create neural networks that are capable of doing that first step of extraction and learning those features now in yesterday's lecture we talked about two types of architectures first we learned about fully connected layers these dense layers where every neuron is connected to every neuron in the previous layer and let's say we wanted to use this type of architecture to do our image classification task so in this case our image is a two-dimensional image sorry our input is a two-dimensional image like we saw earlier and since our fully connected layer is taking just a list of numbers the first thing we have to do is convert our two-dimensional image into a list of numbers so let's simply flatten our image into a long list of numbers and we'll feed that all into our fully connected network now here immediately i hope all of you can appreciate that the first thing that we've done here by flattening our image is we've completely destroyed all of the spatial structure in our image previously pixels that were close to each other in the two-dimensional image now may be very far apart from each other in our one-dimensional flattened version right and additionally now we're also going to have a ton of parameters because this model is fully connected every single pixel in our first layer has to be connected to every single neuron in our next layer so you can imagine even for a very small image that's only 100 by 100 pixels you're going to have a huge number of weights in this neural network just within one layer and that's a huge problem so the question i want to pose and that's going to kind of motivate this computer vision architecture that we'll talk about in today's class is how can we preserve this spatial structure that's present in images to kind of inform and detect the arc to detect features and inform the decision of how we construct this architecture to do that form of feature extraction so to do this let's keep our representation of an image as a 2d matrix so we're not going to ruin all of that nice spatial structure and one way that we can leverage that structure is now to inherit is now to actually feed our input and connect it to patches of some weight so instead of feeding it to a fully connected layer of weights we're going to feed it just to a patch of weights and that is basically another way of saying that each neuron in our hidden layer is only going to see a small patch of pixels at any given time so this will not only reduce drastically the number of parameters that our next hidden layer is going to have to learn 
from right because now it's only attending to a single patch at a time it's actually quite nice right because pixels that are close to each other also share a lot of information with each other right so there's a lot of correlation that things that are close to each other and images often have especially if we look kind of at the local part of an image very locally close to that patch there's often a lot of relationships so notice here how the only only one small region in this red box of the input layer influences this single neuron that we're seeing on the bottom right of the slide now to define connections across the entire input image we can simply apply this patch based operation across the entire image so we're going to take that patch and we're going to slowly slide it across the image and each time we slide it it's going to predict this next single neuron output on the bottom right right so by sliding it many times over the image we can kind of create now another two-dimensional extraction of features on the bottom right and keep in mind again that when we're sliding this patch across the input each patch that we slide is the exact same patch so we create one patch and we slide that all the way across our input we're not creating new patches for every new place in the image and that's because we want to reuse that feature that we learned and kind of extract that feature all across the image and we do this feature extraction by waving the connections between the patch that the input is applied on and the neurons that get fed out so as to detect certain features so in practice the operation that we can use to do this this sliding operation and extraction of features is called a convolution that's just the mathematical operation that is actually being performed with this small patch and our large image now i'm going to walk through a very brief example so suppose we have a 4x4 patch which we can see in red on the top left illustration that means that this patch is going to have 16 weights so there's one weight per per in per pixel in our patch and we're going to apply this same filter to a um sorry we're going to apply the same filter of uh 4x4 pack of 4x4 pixels all across our entire input okay and we're going to use the result of that operation to define the state of the neuron in the next layer so for example this red patch is going to be applied at this location and it's going to inform the value of this single neuron in the next layer this is how we can start to think of a convolution at a very high level but now you're probably wondering how exactly or mathematically very precisely how does the convolution work and how does it allow us actually to extract these features right so how are the weights determined how are the features learned and let's make this concrete by walking through a few simple examples so suppose we want to classify this image of an x right so we're given a bunch of images black and white images and we want to find out if in this image there's an x or not now here black is actually the black pixel is defined by a negative one and a white pixel is defined by a positive one right so this is a very simple image black and white and to classify it it's clearly not possible to kind of compare the two matrices uh to see if they're equal right because if we did that what we would see is that these two matrices are not exactly equal because there are some slight deformations and transformations between one and the other so we want to classify an x even if it's 
rotated, shrunk, or deformed in some way. We want to be resilient and robust to those kinds of modifications and still have a robust classification system. So instead, we want our model to compare the images of an X piece by piece, or patch by patch: we want to identify certain features that make up an X and try to detect those instead. If our model can find these rough feature matches in roughly the same places, then that is a good indicator that this image is indeed an X. Now, each feature you should think of as a kind of mini image: a small two-dimensional array of values. Here are some of the different filters, or features, that we may learn. Each of the filters on the top row is designed to pick up a different type of feature in the image: in the case of X's, our filters may represent a diagonal line, like the one on the top left, or a crossing pattern, which we can see in the middle, or a diagonal line oriented in the opposite direction, on the far right. Note that these smaller matrices are filters of weights; they are mini images themselves, still two-dimensional, defined by a set of weights in 2D, just like images are. All that's left now is to define an operation, a mathematical operation, that connects these two pieces: that connects the small patches on top to our big image on the bottom and outputs a new image on the right. Convolution is that operation. Convolution, just like addition or multiplication, is an operation that takes two inputs, but instead of taking two numbers like addition does, convolution takes as input two matrices in this case (or, in the more general case, two functions), and it outputs a third. So the goal of convolution here is to take as input two images and output a third image, and because of that, convolution preserves the spatial relationship between pixels by learning image features in small squares of the input image data. To do this, we can perform an element-wise multiplication of our filter, or feature, with our image: we place the filter on top of our image and element-wise multiply every pixel in the filter with the corresponding pixel in the overlapped region of the image. For example, we take this bottom-right pixel of our filter, which is one, and multiply it by the corresponding pixel in our image, which is also one, and the result, one times one, is one. We can do this for every single pixel in our filter, and when we repeat this over the whole filter, we see that all of the resulting element-wise multiplications are one. We add up all of those results, we get nine in this case, and that is the output of the convolution at this location in the next image. The next time we slide the filter, we'll have a different set of numbers to multiply with, and we'll get a different output as well. So let's consider this with one more example. Suppose we want to compute the convolution of a 5x5 image and a 3x3 filter: the image is on the left, the filter is on the right. To do this, we need to cover the input image entirely with our filter, sliding it across the image, and that is going to give us a
new image. Each time we put the filter on top of our image, we perform that operation I told you about before: element-wise multiply every pixel in our filter and image, and then add up the result of all of those multiplications. So let's see what this looks like. First, we place our filter on the top left of our image, and when we element-wise multiply everything and add up the result, we get the value 4, and we place that value 4 in the top left of our new feature map, let's call it; this is just the output of this operation. The next time we slide the filter across the image, we have a new set of input pixels to element-wise multiply with; we add up the result and we get 3. We can keep repeating this as we slide across the image, and that's it: once we get to the end of the image, we have a feature map on the right-hand side that denotes, at every single location, the strength of detecting that filter, that feature, at that location in the input. Where there's a lot of overlap, the element-wise multiplication will have a large result, and where there's not a lot of overlap, it will have a much smaller result, so we can see where this feature was detected in our image by looking at the result in our feature map. We can now also observe how different filters can be used to produce different types of outputs, or different feature maps: in effect, our filter captures, or encodes, that feature within it. So let's take this picture of a woman's face: by applying three different convolutional features, or filters, we can obtain three different forms of the same image. For example, if we take this filter, a 3x3 matrix of numbers, just nine weights, and slide it across the entire image, we actually get the same image back but in sharpened form; it's a much sharper version of the same image. Similarly, we can change the weights in our filter, our feature detector, and now detect different types of features: here, for example, this filter is performing edge detection, so we can see that everything is blacked out except for the edges that were present in the original image. Or we can modify those weights once again to perform an even stronger, magnified version of edge detection, and again here you can really appreciate that the edges are the things that remain in the output. This is just to demonstrate that by changing the weights in our filter, our model is able to learn how to identify different types of features that may be present in our image. So I hope you can now appreciate how convolution allows us to capitalize on spatial structure and use a set of weights to extract local features, and how we can very easily detect different features by learning different filters. We can learn a whole bunch of these filters, and each filter captures a different feature, which in turn defines the different types of objects and properties the image possesses. These concepts of preserving spatial structure and of local feature extraction using convolution are at the core of the neural networks that we're going to learn about today, and these are the primary neural networks that are still used for computer vision tasks; they have really shattered all of the previous state-of-the-art algorithms.
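To make that sliding-window computation concrete, here is a minimal NumPy sketch of convolving an image with a single filter (strictly speaking this is cross-correlation, which is what deep learning frameworks call convolution). The pixel values of the image and filter below are assumptions for illustration, since the lecture doesn't spell them out, but they are chosen so that the first two outputs come out to 4 and 3, matching the walkthrough above.

```python
import numpy as np

def convolve2d(image, kernel):
    """Stride-1, no-padding convolution of a 2D image with a 2D filter."""
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    feature_map = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            patch = image[i:i + kh, j:j + kw]
            # element-wise multiply the filter with the overlapped patch, then sum
            feature_map[i, j] = np.sum(patch * kernel)
    return feature_map

image = np.array([[1, 1, 1, 0, 0],
                  [0, 1, 1, 1, 0],
                  [0, 0, 1, 1, 1],
                  [0, 0, 1, 1, 0],
                  [0, 1, 1, 0, 0]])
kernel = np.array([[1, 0, 1],
                   [0, 1, 0],
                   [1, 0, 1]])

print(convolve2d(image, kernel))  # 3x3 feature map; its first two entries are 4 and 3
```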
So now that we've gotten under the operational hood of convolutional neural networks, which is the convolution operation, we can start to think about how to utilize this operation to build up these full-scale convolutional neural networks. These networks are appropriately named, because under the hood they're just utilizing that same operation of convolution, combined with the weight-multiplication-and-addition formulation that we discussed in the first lecture with fully connected layers. So let's take a look at how convolutional neural networks are actually structured, and then we'll dive a little bit deeper into the mathematics of each layer. Let's again stick with this example of image classification. The goal here is to learn the features directly from the image data, use those learned features to identify certain types of properties that are present in the image, and use those properties to guide or inform some classification model. There are three main operations you need to be familiar with when building a convolutional neural network. First is obviously the convolution operation itself: you have convolutional layers that simply apply convolution between your original input and a set of filters that the model is going to learn; these are weights of the model that will be optimized using backpropagation. The second operation, just like we saw in the first lecture, is applying a nonlinearity, to introduce nonlinearity into our model; in convolutional neural networks we'll often see that this is ReLU, because it's a very fast and efficient activation function. And third, we have a pooling layer, which allows us to downsample and downscale our features. Every time we downscale our features, our filters attend to a larger region of the input space; so imagine that as we go progressively deeper into the network, each step downscaling our features, the later layers become capable of attending to a larger and larger portion of the original image, and the receptive field of those later layers grows as we downsample. We'll go through each of these operations now to break down the basic architecture of a CNN. First we'll consider the convolution operation and how it's implemented specifically in neural networks; we saw how the mathematical operation is computed, so now let's look at it specifically in the context of a network. As before, each neuron in our hidden layer will compute a weighted sum of its inputs. Remember the three steps of a perceptron that we talked about yesterday: one, apply weights, two, add a bias, and three, apply a nonlinearity. We keep the same three steps in convolutional neural networks: first we compute a weighted sum of the inputs using a convolution operation, then we add a bias, and then we activate with a nonlinearity. Here's an illustration of that same exact idea written out in mathematical form: each of our weights is multiplied element-wise with our input x, we add up all the results, we add a bias, and then we pass it through a nonlinear activation function.
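Written out in symbols (my own shorthand rather than the slide's exact notation), the computation of a single output neuron at location (i, j) of a feature map, using a k by k filter w, a bias b, and a nonlinearity sigma, looks roughly like this:

```latex
z_{i,j} \;=\; b + \sum_{p=1}^{k} \sum_{q=1}^{k} w_{p,q}\, x_{i+p,\, j+q},
\qquad
\hat{y}_{i,j} \;=\; \sigma\!\left(z_{i,j}\right)
```

The same weights w and bias b are reused at every location (i, j), which is exactly the weight sharing described above.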
This defines how neurons in one layer, our input layer, are connected to the output layer, and how that output layer is actually computed. Within a single convolutional layer we can also have multiple filters: just like a fully connected layer in the first lecture could have multiple neurons, or perceptrons, a convolutional layer can learn multiple filters, and the output of each convolutional layer is then not just one image but a volume of images, one image corresponding to each filter that is learned. We can also think of the connections of neurons in convolutional layers in terms of their receptive field: the locations in the input that a given node in the output is connected to. These parameters essentially define the spatial arrangement of the output of a convolutional layer. To summarize, we've now seen how connections in convolutional layers are defined and how they produce an output, and that output is actually a volume, because we're learning a stack of filters, not just one filter, and for each filter we output an image. Okay, so we are well on our way to understanding how a CNN, a convolutional network, works in practice; there are just a few more steps. The next step is to apply that nonlinearity, like I said before, and the motivation is exactly like we saw in the first two lectures: our images and real data are highly nonlinear, so to capture those types of features we need to apply nonlinearities to our model as well. In CNNs it's very common practice to apply what is called the ReLU activation function, and you can think of ReLU as a form of thresholding: when the input on the left-hand side is less than zero, nothing gets passed through, and when it's greater than zero, the original pixel value gets passed through. It's like the identity function when you're positive and zero when you're negative; a form of thresholding centered at zero. You can also think about what negative numbers mean here: negative values correspond to a kind of inverse detection of a feature during the convolution operation, positive numbers correspond to a positive detection of that feature, and zero means there's no detection of the feature. Finally, once we've applied this activation to our output filter map, the next operation in the CNN is pooling. Pooling is an operation that is primarily used to reduce dimensionality and make the model scalable in practice, while still preserving spatial invariance and spatial structure. A common technique is what's called max pooling, and the idea is very intuitive, as the name suggests: we select patches in our input on the left-hand side, and we pool down the input by taking the maximum of each of the patches. For example, in this red patch the maximum is six, and that value gets propagated forward to the output. You can note that because this max pooling operation was done with a scale of two, the output is half as large as the input; it's downscaled by a factor of two as well. I encourage you to think about some different ways that we could perform this downsampling or downscaling operation without using max pooling specifically: is there some other operation, for example, that would also allow you to downscale and preserve spatial structure without taking the maximum of each of the patches?
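As a small, concrete sketch of the pooling step, and of one possible answer to that question, here is a NumPy implementation that downscales a feature map by a factor of two using either max pooling or mean (average) pooling; the input values are made up for illustration.

```python
import numpy as np

def pool2d(feature_map, size=2, mode="max"):
    """Downscale a 2D feature map by `size` using non-overlapping patches."""
    h, w = feature_map.shape
    out = np.zeros((h // size, w // size))
    for i in range(0, h - h % size, size):
        for j in range(0, w - w % size, size):
            patch = feature_map[i:i + size, j:j + size]
            # max pooling keeps the strongest activation in each patch;
            # mean pooling keeps the average, which also preserves spatial structure
            out[i // size, j // size] = patch.max() if mode == "max" else patch.mean()
    return out

x = np.array([[1., 1., 2., 4.],
              [5., 6., 7., 8.],
              [3., 2., 1., 0.],
              [1., 2., 3., 4.]])
print(pool2d(x, mode="max"))   # [[6. 8.] [3. 4.]]
print(pool2d(x, mode="mean"))  # [[3.25 5.25] [2.   2.  ]]
```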
There are some interesting changes that can happen when you use different forms of pooling in your network. So these are some key operations of convolutional neural networks, and now we're ready to put them together into the full pipeline, so that we can learn these hierarchical features from one convolutional layer to the next layer, and the next, and that's what we do with convolutional neural networks. We can do that by essentially stacking these three steps in tandem, in sequence, in the form of a neural network. On the left-hand side we have our image; we start by applying a series of convolutional filters that are learned, which extract a feature volume, one feature map per filter; we apply our activation function, pool down, and repeat that process. We can keep stacking those layers over and over again, and this will eventually output a set of feature volumes that at this point are very small in terms of spatial structure. At this point we can extract all of those features and start to feed them through a fully connected layer to perform our decision-making task. So the objective of the first part is to extract features from our image, and the objective of the second part is to use those features to actually perform our detection. Now let's talk about putting this all together into code, very tangible code, to create our first end-to-end convolutional neural network. We start by defining our feature extraction head, which begins with a convolutional layer, here with 32 feature maps; this just means that our neural network is going to learn 32 different types of features at the output of this layer. We then downsample the spatial information, in this case using a max pooling layer; we downscale here by a factor of two because our pooling size and our stride are both two. Next, we feed this into our next set of convolutional and pooling layers. Now, instead of 32 features, we learn a larger number of features, because remember that we've downscaled our image, so we can afford to increase the resolution of the feature dimension as we downscale. We have this inverse relationship, this trade-off: as we downscale and can enlarge our receptive field, we can also expand the feature dimension. Finally, we flatten this spatial information into a set of features that we ultimately feed into a series of dense layers, and that series of dense layers performs our eventual decision, the ultimate classification that we care about. So far we've talked only about using CNNs for image classification tasks. In reality, this architecture is extremely general and can extend to a wide number of different types of applications just by changing the second half of the architecture. The first half of the architecture that I showed you before is focused on feature detection, or feature extraction, picking up on those features from our dataset, and then the second part of the network can be swapped out in many different ways to create a whole bunch of different models. For example, we can do classification in the way that we talked about earlier.
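Here is a sketch in TensorFlow/Keras of the kind of end-to-end model just described: two convolution-plus-pooling blocks for feature extraction, followed by a flatten and dense layers for classification. The lecture specifies 32 filters in the first block and only says "a larger number" for the second; the 64 filters, the 28x28 input shape, the kernel sizes, and the 10 output classes below are my own illustrative choices.

```python
import tensorflow as tf

def build_cnn(input_shape=(28, 28, 1), n_classes=10):
    return tf.keras.Sequential([
        # feature extraction head: learn filters, activate, downscale
        tf.keras.layers.Conv2D(32, kernel_size=3, activation="relu",
                               input_shape=input_shape),
        tf.keras.layers.MaxPool2D(pool_size=2, strides=2),
        tf.keras.layers.Conv2D(64, kernel_size=3, activation="relu"),
        tf.keras.layers.MaxPool2D(pool_size=2, strides=2),
        # classification head: flatten the spatial features, then let dense layers decide
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])

model = build_cnn()
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```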
Or, if we change our loss function, we can do regression; we can change this to do object detection, segmentation, or probabilistic control, where we even output probability distributions, all by just changing the second part of the network. We keep the first part of the network the same, though, because we still need that first step of figuring out what we are looking at, what features are present in this image, and then the second part decides how to use those features to do the task. In the case of classification, there's a significant impact right now in medicine and healthcare, where deep learning models are being applied to the analysis of a whole host of different forms of medical imagery. A paper was published on how a CNN could actually outperform expert human radiologists in detecting breast cancer directly from mammogram images. Classification gives us a single prediction of what the network sees: for example, if I feed in the image on the left, classification is essentially the task of the computer telling me that it's looking at the class taxi, a single class label, maybe even with a probability that this is a taxi versus the probability that it's something else. But we can also go deeper into this problem: not just determining that this image contains a taxi, but a much harder problem, where the neural network has to tell us a bounding box for every object in the image. It has to look at the image, detect what's in it, and also provide a measure of where each object is by encapsulating it in some bounding box, and then tell us what that object is. This is a super hard problem, because in a lot of images there may be many different objects in the scene; so not only does the network have to detect all of those different objects, it has to localize them, placing boxes at their locations, and it also has to do the task of classification, telling us that within this box there's a taxi. Our network therefore needs to be extremely flexible and handle a dynamic number of objects: in this image I may have one primary object, but in a different image I might have two, so our model needs to be capable of outputting a variable number of detections. Here, for example, there's one object, and we're outputting a taxi at this location; but what if we had an image like this one, where there are many different objects present in the scene? Our model needs to have the ability to output that variable number of classifications. This is extremely complicated, number one because the boxes can be anywhere in the image, and they can also be of different sizes and scales. So how can we accomplish this with convolutional neural networks? Let's first consider the very naive way of doing it. Let's take our image and start by placing a random box somewhere in it; for example, here's a white box that I've placed at a random location, and I've randomized the size of the box as well. Then let's take just this small box and feed it through a CNN, and ask ourselves: what's the class of this small box? Then we can repeat this for a bunch of different boxes: we keep sampling, randomly pick a box in our image, feed that box through our neural network, and for each box try to detect if there's a class. Now, the problem
here this might work actually the problem is that there are way too many inputs to be able to do this there's way too many scales so can you imagine for a reasonably sized image the number of different permutations of your box that you'd have to be able to account for would just be too intractable so instead of picking random boxes let's use a simple heuristic to kind of identify where these boxes of interesting information might be okay so this is in this example not going to be a learned kind of heuristic to identify where the box is but we're going to use some algorithm to pick some boxes on our image usually this is an algorithm that looks for some signal in the image so it's going to ignore kind of things where there's not a lot of stuff going on in the image and only focus on kind of regions of the image where there is some interesting signal and it's going to feed that box to a neural network it's first going to have to shrink the box to fit a uniform size right that it's going to be amenable to a single network it's going to warp it down to that single size feed it through a classification model try to see if there's a an object in that patch if there is then we try to say to the model okay there's a there's a class there and there's the box for it as well but still this is extremely slow we still have to feed each region of our heuristic the thing that our heuristic gives us we still have to feed each of those values and boxes down to our cnn and we have to do it one by one right and check for each one of them is there a class there or not plus this is extremely brittle as well since the network part of the model is completely detached from the feature extraction or the region part of the model so extracting the regions is one heuristic and extracting the features that correspond to the region is completely separate right and ideally they should be very related right so probably we'd want to extract boxes where we can see certain features and that would inform a much better process here so there's two big problems one is that it's extremely slow and two is that it's brittle because of this disconnect now many variants have been proposed so to tackle these issues but i'd like to touch on one extremely briefly and just point you in this direction called the faster rcnn method or faster rcnn model which actually attempts to learn the regions instead of using that simple heuristic that i told you about before so now we're going to take as input the entire image and the first thing that we're going to do is feed that to what's called a region proposal network and the goal of this network is to identify proposal regions where there might be interesting boxes for us to detect and classify in the future now that regional region proposal network now is completely learned and it's part of our neural network architecture now we're going to use that to directly grab all of the regions out right and process them independently but each of the regions are going to be processed with their own feature extractor heads and then a classification model will be used to ultimately perform that object detection part now it's actually extremely fast right or much much faster than before because now that part of the model is going to be learned in tandem with the downstream classifier so in a classification model we want to predict a single we want to predict from a single image to a list of bounding boxes and that's object detection now one other type of task is where we want to predict not necessarily a 
fixed number or a variable number of classes and boxes but what if we want to predict for every single pixel what is the class of this pixel right this is a super high dimensional output space now it's not just one classification now we're doing one classification per pixel so imagine for a large input image you're going to have a huge amount of different predictions that you have to be able to predict for here for example you can see an example of the cow pixels on the left being classified separately from the grass pixels on the bottom and separately from the sky pixels on the top right and this output is created first again the beginning part is exactly as we saw before kind of in feature extraction model and then the second part of the model is an upscaling operation which is kind of the inverse side of the encoding part so we're going to encode all that information into a set of features and then the second part of our model is going to use those features again to learn a representation of whatever we want to output on the other side which in this case is pixel-wise classes so instead of using two-dimensional convolutions on the left we're going to now use what are called transposed convolutions on the right and effectively these are very very similar to convolutions except they're able to do this upscaling operation instead of down scaling of course this can be applied to so many other applications as well especially in healthcare where we want to segment out for example cancerous regions of medical scans and even identify parts of the blood that are affected with malaria for example let's see one final example of how we can have another type of model for a neural network and let's consider this example of self-driving cars so let's say we want to learn one neural network for autonomous control of self-driving cars specifically now we want a model that's going to go directly from our raw perception what the car sees to some inference about how the car should control itself so what's the steering wheel angle that it should take at this instance at the specific instance as it's seeing what it sees so we're trying to infer here a full not just one steering command but we're trying to infer a full probability distribution of all of the different possible steering commands that could be executed at this moment in time and the probability is going to be very high you can kind of see where the red lines are darker that's going to be where the model is saying there's a high probability that this is a good steering command to actually take right so this is again very different than classification segmentation all of those types of networks now we're outputting kind of a continuous distribution of over our outputs so how can we do that this entire model is trained end-to-end just like all the other models by passing each of the cameras through their own dedicated convolutional feature extractors so each of these cameras are going to extract some features of the environment then we're going to concatenate combine all of those features together to have now a giant set of features that encapsulates our entire environment and then predicting these pro these control parameters okay so the the loss function is really the interesting part here the top part of the model is exactly like we saw in lecture one this is just a fully connected layer or dense layer that takes this input to features and outputs the parameters of this continuous distribution and on the bottom is really the interesting part to 
enable learning these continuous continuous probability distributions and we can do that even though the human never took let's say all three of these actions it could take just one of these actions and we can learn to maximize that action in the future after seeing a bunch of different intersections we might see that okay there's a key feature in these intersections that is going to permit me to turn all of these different directions and let me maximize the probability of taking each of those different directions and that's an interesting way again of predicting a variable number of outputs in a continuous manner so actually in this example a human can enter the car and put a desired destination and not only will its navigate to that location it will do so entirely autonomously and end to end the impact of cnns is very wide reaching beyond these examples that i've explained here and i've also touched on many other fields in in computer vision that i'm not going to be able to talk about today for the purpose of time and i'd like to really conclude today's lecture by taking a look at what we've covered just as a summary just to summarize so first we covered the origins of computer vision and of the computer vision problem right so how can we represent images to computers and how can we define what a convolutional operation does to an image right given a set of features which is just a small weight matrix how can we extract those features from our image using convolution then we discussed the basic architecture using convolution to build that up into convolutional layers and convolutional neural networks and finally we talked a little bit about the extensions and applications of this very very general architecture and model into a whole host of different types of tasks and and different types of problems that you might face when you're building an ai system ranging from segmentation to captioning and control with that i'm very excited to go to the next lecture which is going to be focused on generative modeling and just to remind you that we are going to have the software lab and the software lab is going to tie very closely to what you just learned in this lecture with lecture 3 and convolutional neural networks and kind of a combination with the next lecture that you're going to hear about from alva which is going to be on generative modeling now with that i will pause the lecture and let's reconvene in about five minutes after we we set up with the next lecture thank you
MIT_6S191_Introduction_to_Deep_Learning
MIT_6S191_2021_Recurrent_Neural_Networks.txt
Hi everyone, my name is Ava. I'm a lecturer and  organizer for 6.S191 and welcome to lecture 2 which is going to focus on deep sequence modeling.  So in the first lecture with Alexander we learned about the essentials of neural networks and built  up that understanding moving from perceptrons to feed-forward models. Next we're going to turn our  attention to applying neural networks to problems which involve sequential processing of  data and we'll see why these sorts of tasks require a different type of network  architecture from what we've seen so far. To build up an understanding of these types of  models we're going to walk through this step by step starting from intuition about how these  networks work and building that up using where we left off with perceptrons and feed-forward  models as introduced in the first lecture. So let's dive right in. First, I'd like to motivate  what we mean in terms of sequential modeling and sequential processing by beginning with a very  simple intuitive example. Let's suppose we have this picture here of a ball and our task is  to predict where it's going to travel to next. Without any prior information about the  ball's history or understanding of the dynamics of its motion any guess on its  next position is going to be exactly that just to guess. But instead if in addition to the  current location of the ball I also gave you its previous locations now our problem becomes much  easier and I think we can all agree that we have a sense of where the ball is going to travel to  next. So hopefully this this intuitive example gives you a sense of what we mean in terms of  sequential modeling and sequential prediction and the truth is that sequential data and these  types of problems are really all around us for example audio like the waveform from my speech  can be split up into a sequence of sound waves and text can be split up into a sequence of characters  or also words when here each of these individual characters or each of the individual words can  be thought of as a time step in our sequence now beyond these two examples there are many  more cases in which sequential processing can be useful from medical signals to EKGs to  prediction of stock prices to genomic or genetic data and beyond so now that we've gotten a  sense of what sequential data looks like let's think about some concrete applications in which  sequence modeling plays out in the real world in the first lecture Alexander introduced  feed-forward models that sort of operate in this one-to-one manner going from a fixed  and static input to a fixed and static output for example he gave this this  use case of binary classification where we were trying to build a model that  given a single input of a student in this class could be trained to predict whether or not that  student was going to pass or not and in this type of example there's no time component there's no  inherent notion of sequence or of sequential data when we consider sequence modeling we now expand  the range of possibilities to situations that can involve temporal inputs and also potentially  sequential outputs as well so for example let's consider the case where we have a language  processing problem where there's a sentence as input to our model and that defines a sequence  where the words in the sentence are the individual time steps in that sequence and at the end our  task is to predict one output which is going to be the sentiment or feeling associated with that  sequence input and you can think of this problem as a 
having a sequence input single output  or as sort of a many-to-one sequence problem we can also consider the converse case where now  our input does not have that time dimension so for example when we're considering a static image  and our task is now to produce a sequence of input of outputs for example a sentence caption  that describes the content in this image and you can think of this as a one  to many sequence modeling problem finally we can also consider this situation  of many to many where we're now translating from a sequence to another sequence and perhaps  one of the most well-known examples of this type of application is in machine translation  where the goal is to train a model to translate sentences from one language to another all right  so this hopefully gives you a concrete sense of use cases and applications where sequence modeling  becomes important now i'd like to move forward to understand how we can actually build neural  networks to tackle these sorts of problems and sometimes it can be a bit challenging to sort of  wrap your head around how we can add this temporal dimension to our models so to address this and  to build up really a strong intuition i want to start from the very fundamentals and we'll  do that by first revisiting the perceptron and we're going to go step by step to develop a  really solid understanding of what changes need to be made to our neural network architecture  in order to be able to handle sequential data all right so let's recall and revisit the  perceptron which we studied in lecture one we defined the set of inputs right  which we can call x1 through xn and each of these numbers are going  to be multiplied by a weight matrix and then they're going to all be added together  to form this internal state of the perceptron which we'll say is z and then this value z is  passed through a non-linear activation function to produce a predictive output y hat and remember  that with the perceptron you can have multiple inputs coming in and since you know in this  lecture overall we're considering sequence modeling i'd like you to think of these inputs  as being from a single time step in your sequence we also saw how we could extend from the single  perceptron to now a layer of perceptrons to yield multi-dimensional outputs so for example here  we have a single layer of perceptrons in green taking three inputs in blue and predicting four  outputs shown in purple but once again does this have a notion of time or of sequence no it doesn't  because again our inputs and our outputs you can think of as being from a fixed time step in our  sequence so let's simplify this diagram right and to do that we'll collapse that hidden layer  down to this green box and our input and output vectors will be as depicted here and again our  inputs x are going to be some vectors of length m and our outputs are going to be of length n but  still we're still considering the input at just a specific time denoted here by t which is nothing  different from what we saw in the first lecture and even with this this simplified representation  of a feed forward network we could naively already try to feed a sequence into this model by just  applying that same model over and over again once for each time step in our sequence to get a sense  of this and how we could handle these individual inputs across different time step let's first just  rotate the same diagram from the previous slide so now again we have an input vector x of t  from some time step t we feed it into our neural 
network and then get an output vector at that time  step but since we're interested in sequential data let's assume we don't just have a single time step  right we have multiple individual time steps which start from let's say time zero the first time step  in our sequence and we could take that input at that time step treat it as this isolated point  in time pass it into the model and generate a predictive output and we could do that for the  next time step again treating it as something isolated and same for the next and to emphasize  here all of these models depicted here are just replicas of each other right with different  inputs at each of these different time steps but we we know sort of that our output and we  know from the first lecture that our output vector y hat at a particular time sub t is just going  to be a function of the input at that time step but let's take a step back here for a  minute if we're considering sequential data it's probably very likely that the output or the  label at a later time step is going to somehow depend on the inputs at prior time steps so what  we're missing here by treating these individual time steps as individual isolated time steps is  this relationship that's inherent to sequence data between inputs earlier on in the sequence  to what we predict later on in the sequence so how could we address this what we really  need is a way to relate the computations and the operations that the network is doing at  a particular time step to both the prior history of its computation from prior time steps as well  as the input at that time step and finally to have a sense of forward looking right to be able to  pass that information the current information onto future time steps so let's try to do exactly that  what we'll consider is linking the information and the computation of the network at different time  steps to each other specifically we're going to introduce this internal memory or cell state which  we denote here as h of t and this is going to be this memory that's going to be maintained by the  neurons and the network itself and this state can be passed on time step to time step across  time and the key idea here is that by having this recurrence relation we're capturing some  notion of memory of what the sequence looks like what this means is now the network's output  predictions and its computations are not only a function of the input at a particular time  step but also the past memory of cell state denoted by h that is to say that our output  depends on both our current inputs as well as the past computations and the past learning that  occurred and we can define this relationship via these functions that map inputs to output and  these functions are standard neural network operations that alexander introduced in the first  lecture so once again our output our prediction is going to depend not only on the current input at  a particular time step but also on the past memory and because as you see in this relation here our  output is now a function of both the current input and the past memory at a previous  time step this means we can describe these neurons via a recurrence  relation which means that the we have the cell state that depends on the current  input and again on the prior in prior cell states and the depiction on the right of this line  shows these individual time steps being sort of unrolled across time but we could also  depict the same relationship by this cycle and this is shown on the loop on the left of the  slide which 
shows this concept of a recurrence relation. It's exactly this idea of recurrence that provides the intuition and the key operations behind recurrent neural networks, or RNNs, and we're going to continue for the remainder of this lecture to build up from this foundation and develop our understanding of the mathematics of these recurrence relations and the operations that define RNN behavior. All right, so let's formalize this a little bit more. The key idea here, as I mentioned, and hopefully the one you take away from this lecture, is that these RNNs maintain an internal state, h of t, which is updated at each time step as the sequence is processed, and this is done by a recurrence relation which specifically defines how the state is updated at each time step. Specifically, this internal cell state h of t is defined by a function that is parametrized by a set of weights w, which are what we're actually trying to learn over the course of training such a network, and that function f of w takes as input both the input at the current time step, x of t, and the prior state, h of t minus 1. And how do we actually find and define this function? Again, it's parametrized by a set of weights that are learned over the course of training the model, and a key feature of RNNs is that they use this very same function, and this very same set of parameters, at every time step of processing the sequence. Of course the weights are going to change over the course of training, and later on we'll see exactly how, but within each iteration of training, that same set of weights is applied to each of the individual time steps in the sequence. All right, so now let's step through the algorithm for updating RNNs to get a better sense of how these networks work. We begin by initializing our network, which I'm just abstracting away here in this pseudocode block as rnn, and we also initialize a hidden state as well as a sentence; let's say our task here is to predict the next word in the sentence. The RNN algorithm is as follows: we loop through the words in the sentence, and at each step we feed both the current word and the previous hidden state into our RNN, and this generates a prediction for the next word as well as an update to the hidden state itself. Finally, when we're done processing the four words in this sentence, we can generate our prediction for what the next word actually is by considering the RNN's output after all the individual words have been fed through the model.
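In schematic pseudocode, the loop being described might look like the following. This is a reconstruction rather than the exact slide: RNN here stands in for some recurrent cell (one is implemented from scratch a little later in the lecture), the words would in practice be fed in as vectors, and the example sentence is illustrative.

```python
my_rnn = RNN()                         # placeholder for a recurrent cell
hidden_state = [0.0, 0.0, 0.0, 0.0]    # initialize the hidden state, e.g. to zeros

sentence = ["I", "love", "recurrent", "neural"]

for word in sentence:
    # feed the current word and the previous hidden state into the RNN;
    # get back a prediction for the next word and an updated hidden state
    prediction, hidden_state = my_rnn(word, hidden_state)

next_word_prediction = prediction      # output after the final word, e.g. "networks"
```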
All right, so as you may have realized, the RNN computation includes both this internal cell state update to h of t as well as the output prediction itself, so now we're going to concretely walk through how each of these computations is defined. Going from bottom to top, we consider our input vector x of t, and we next apply a function to update the hidden state; this function is a standard neural network operation, just like we saw in the first lecture. Because this internal cell state h of t depends on both the input x of t and the prior cell state h of t minus 1, we multiply each of these individual terms by their respective weight matrices, add the results, and then apply a nonlinear activation function, which in this case is a hyperbolic tangent, to the sum of these two terms, to actually update the value of the hidden state. Then, to generate our output at a given time step, we take that internal hidden state and multiply it by a separate weight matrix, which produces a modified version of this internal state, and this forms our output prediction. So this gives you the mathematics behind how the RNN can update its hidden state and also produce a predictive output. All right, so far we've seen RNNs being depicted as having these internal loops that feed back on themselves, and we've also seen how we can represent this loop as being unrolled across time, where we start from a first time step and continue to unroll the network across time up until time step t. Within this diagram we can also make the weight matrices explicit, starting from the weight matrix that defines how the inputs at each time step are transformed in the hidden state computation, as well as the weight matrix that defines the relationship between the prior hidden state and the current hidden state, and finally the weight matrix that transforms the hidden state to the output at a particular time step. And again, to re-emphasize: for all of these weight matrices, we're reusing the same matrices at every time step in our sequence. Now, when we make a forward pass through the network, we generate outputs at each of those individual time steps, and from those individual outputs we can derive a value for the loss; we can then sum all of these losses from the individual time steps together to determine the total loss, which is ultimately what is used to train our RNN, and we'll get to exactly how we achieve this in a few slides. All right, so this gives you an intuition and a mathematical foundation for how we can make a forward pass, a forward step, through our RNN. Let's now walk through an example of how we can implement an RNN from scratch using TensorFlow. We're going to define the RNN as a layer, so we can build it up by inheriting from the Layer class that Alexander introduced in the first lecture; we can also initialize the weight matrices and, finally, initialize the hidden state of the RNN to all zeros. Our next step is to define what we call the call function, and this function is really important because it describes exactly how we make a forward pass through the network given an input. Our first step in this forward pass is to update the hidden state according to that same exact equation we saw earlier, where the hidden state from the previous time step and the input x are multiplied by their relevant weight matrices, summed, and then passed through a nonlinear activation function. We next compute the output by transforming this hidden state via multiplication by a separate weight matrix, and at each time step we return both the current output and the hidden state. So this breaks down how we define the forward pass through an RNN in code using TensorFlow; but conveniently, TensorFlow has already implemented these types of RNN cells for us, which you can use via the simple RNN layer, and you're going to get some practice doing exactly this and using RNNs later on in today's lab.
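A sketch of what that from-scratch layer might look like in TensorFlow is below. The class name, layer sizes, and initializers are my own choices rather than the lecture's exact code, but the forward pass follows the update just described: h_t = tanh(W_hh h_{t-1} + W_xh x_t) and y_t = W_hy h_t. In practice, as noted above, you would typically just use the built-in tf.keras.layers.SimpleRNN.

```python
import tensorflow as tf

class MyRNNCell(tf.keras.layers.Layer):
    def __init__(self, rnn_units, input_dim, output_dim):
        super().__init__()
        # weight matrices: input-to-hidden, hidden-to-hidden, hidden-to-output
        self.W_xh = self.add_weight(name="W_xh", shape=(rnn_units, input_dim),
                                    initializer="glorot_uniform")
        self.W_hh = self.add_weight(name="W_hh", shape=(rnn_units, rnn_units),
                                    initializer="glorot_uniform")
        self.W_hy = self.add_weight(name="W_hy", shape=(output_dim, rnn_units),
                                    initializer="glorot_uniform")
        # hidden state, initialized to zeros (a column vector of length rnn_units)
        self.h = tf.zeros((rnn_units, 1))

    def call(self, x):
        # update the hidden state: h_t = tanh(W_hh h_{t-1} + W_xh x_t)
        self.h = tf.math.tanh(self.W_hh @ self.h + self.W_xh @ x)
        # compute the output: y_t = W_hy h_t
        output = self.W_hy @ self.h
        # return both the current output and the updated hidden state
        return output, self.h
```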
All right, so to recap: now that we're at this point in the lecture where we've built up our understanding of RNNs and their mathematical basis, I'd like to turn back to those applications of sequence modeling that we discussed earlier on, and hopefully now you've gotten a sense of why RNNs can be particularly suited for handling sequential data. Again, with feedforward or traditional neural networks, we're operating in this one-to-one manner, going from a static input to a static output. In contrast, with sequences we can go from a sequential input, where we have many time steps defined sequentially over time, feed them into a recurrent neural network, and generate a single output, like a classification of the sentiment or emotion associated with a sentence. We can also move from a static input, for example an image, to a sequential output, going from one to many. And finally, we can go from sequential input to sequential output, many to many; two examples of this are machine translation and music generation, and with the latter, music generation, you'll actually get the chance to implement an RNN that does exactly this later on in today's lab. Beyond this, we can extend recurrent neural networks to many other applications in which sequential processing and sequential modeling may be useful. To really appreciate why recurrent neural networks are so powerful, I'd like to consider a concrete set of what I like to call design criteria that we need to keep in mind when thinking about sequence modeling problems. Specifically, we need to ensure that our recurrent neural network, or any machine learning model we may be interested in, is equipped to handle variable-length sequences, because not all sentences, not all sequences, are going to have the same length, so we need the ability to handle this variability. We also need the critical property of being able to track long-term dependencies in the data and to have a notion of memory, and, associated with that, the ability to have a sense of order, a sense of how things that occur earlier in the sequence affect what happens or occurs later on. We can achieve both points two and three by using weight sharing, actually sharing the values of the weight matrices across the entire sequence, and, I'm telling you now and we'll see, recurrent neural networks do indeed meet all of these sequence modeling design criteria. All right, so to understand these criteria concretely, I'd like to consider a very concrete sequence modeling problem, which is going to be the following: given some series of words in a sentence, our task is to predict the most likely next word to occur in that sentence. So let's suppose we have this sentence as an example: this morning i took my cat for a walk. Our task is, let's say we're given these words, this morning i took my cat for a, and we want to predict the next word in the sentence: walk. Our goal is to try to build a recurrent neural network to do exactly this. What's our first step in tackling this problem? Well, the first consideration, before we even get started with training our model, is how we can actually represent language to a neural network. So let's suppose we have a model where we input the word deep, and we want to use the neural network to predict the next word, learning. What could be the issue here in terms of how we are passing in this input to our network?
Remember that neural networks are functional operators: they execute mathematical operations on their inputs and generate numerical outputs as a result, so they can't really interpret and operate on words if the words are just passed in as words. So what we have here is simply not going to work. Instead, neural networks require numerical inputs, a vector or an array of numbers, such that the model can operate on them to generate a vector or array of numbers as the output. That is going to work for us; operating on raw words simply is not. All right, so now we know that we need a way to transform language into this vector- or array-based representation. How exactly are we going to go about this? The solution we're going to consider is the concept of embedding, which is the idea of transforming a set of identifiers for objects, effectively indices, into a vector of fixed size that captures the content of the input. To think through how we could actually do this for language data, let's again turn back to that example sentence we've been considering: this morning i took my cat for a walk. We want to be able to map any word that appears, or could appear, in our body of language to a fixed-size vector. Our first step is to generate a vocabulary, which consists of all unique words in our set of language. We can then index these individual words, mapping each unique word to a unique index, and these indices can then be mapped to a vector embedding. One way we could do this is by generating sparse, binary vectors that have a length equal to the number of unique words in our vocabulary, such that we indicate the identity of a particular word by encoding it at the corresponding index; so, for example, for the word cat we could encode this at the second index in this sparse binary vector. This is a very common way of embedding and encoding language data, and it's called a one-hot encoding; very likely you're going to encounter it in your journey through machine learning and deep learning. Another way we could build up these embeddings is by actually learning them. The idea here is to take our index mapping and feed it into a model like a neural network, such that we can transform that index mapping, across all the words of our vocabulary, to a vector in a lower-dimensional space, where the values of that vector are learned such that words that are similar to each other have similar embeddings; an example that demonstrates this concept is shown here. All right, so these are two distinct ways in which we can encode language data and transform it into a vector representation that's going to be suitable for input to a neural network.
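Here is a small sketch of both ideas: a one-hot encoding built from a word-to-index vocabulary, and a learned, lower-dimensional embedding layer in TensorFlow. The example vocabulary and the embedding dimension of 4 are illustrative choices, not values from the lecture.

```python
import numpy as np
import tensorflow as tf

# 1. build a vocabulary: map every unique word to a unique index
words = "this morning i took my cat for a walk".split()
vocab = {word: idx for idx, word in enumerate(sorted(set(words)))}
vocab_size = len(vocab)

# 2a. one-hot encoding: a sparse binary vector, one slot per vocabulary word
def one_hot(word):
    vec = np.zeros(vocab_size)
    vec[vocab[word]] = 1.0
    return vec

print(one_hot("cat"))  # 1.0 at the index assigned to "cat", 0.0 everywhere else

# 2b. learned embedding: map indices to dense vectors whose values are trained
embedding = tf.keras.layers.Embedding(input_dim=vocab_size, output_dim=4)
indices = tf.constant([[vocab[w] for w in words]])  # shape (1, sequence_length)
print(embedding(indices).shape)                     # (1, sequence_length, 4)
```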
Now that we've built up this way to encode language data and can actually get started with feeding it into our recurrent neural network model, let's go back to that set of design criteria, where the first capability we desired is the ability to handle variable sequence lengths. Again, let's consider this task of trying to predict the next word in a sentence: we could have very short sentences, where the words driving the meaning of our prediction are very close to each other, but we could also have a longer sequence, or an even longer sequence, where the information that's needed to predict the next word occurs much earlier on. The key requirement for our recurrent neural network model is the ability to handle these inputs of varying length. Feedforward networks are not able to do this, because they have inputs of fixed dimensionality, and those fixed-dimensionality inputs are passed into the next layer. In contrast, RNNs are able to handle variable sequence lengths, because differences in sequence length are just differences in the number of time steps that are input to and processed by the RNN. So RNNs meet this first design criterion. Our second criterion is the ability to effectively capture and model long-term dependencies in data, and this is really exemplified in examples like this one, where we clearly need information from much earlier in the sequence, or the sentence, to accurately make our prediction. RNNs are able to achieve this because of the way they update their internal cell state via the recurrence relation we previously discussed, which fundamentally incorporates information from the past state into the cell state update; so this criterion is also met. Next, we need to be able to capture differences in sequence order, which can result in differences in the overall meaning or property of a sequence; for example, in this case, we have two sentences that have opposite semantic meaning but contain the same words, with the same counts, just in a different order. Once again, the cell state maintained by an RNN depends on its past history, which helps us capture these sorts of differences, because we are maintaining information about past history and also reusing the same weight matrices across each of the individual time steps in our sequence. So hopefully, going through this example of predicting the next word in a sentence, with language data being a particularly common type of sequential data, shows you how sequential data more broadly can be represented and encoded for input to RNNs, and how RNNs can achieve this set of sequence modeling design criteria. All right, so now, at this stage in the lecture, we've built up our intuition and our understanding of how recurrent neural networks work, how they operate, and what it means to model sequences. Now we can discuss the algorithm for how we actually train recurrent neural networks, and it's a twist on the backpropagation algorithm that was introduced in lecture one; it's called backpropagation through time. To get there, let's first take a step back to our first lecture and recall how we train feedforward models using the backpropagation algorithm. We first take a set of inputs and make a forward pass through the network, going from input to output, and then, to train the model, we backpropagate gradients back through the network: we take the derivative of the loss with respect to each weight parameter in our network and then adjust the parameters, the weights, in our model in order to minimize that loss. For RNNs, as we walked through earlier, our forward pass through the network consists of going forward across time: updating the cell state based on the input as well as the previous state, generating an output, computing the loss values at the individual time steps in our sequence, and finally summing those individual losses to get the total loss. Instead of backpropagating errors through a single feedforward network at a single time step, in RNNs those errors are backpropagated from the overall loss through each individual time step, and then across the time steps, all the way from where we are currently in the sequence back to the beginning; and this is the reason why it's called backpropagation through time, because all of the errors are flowing back in time, from the most recent time step to the very beginning of the sequence.
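In symbols (my notation, consistent with the hidden-state update from earlier), the total loss and the gradient that backpropagation through time has to compute for the recurrent weight matrix look roughly like this:

```latex
L = \sum_{t} L_t,
\qquad
\frac{\partial L_t}{\partial W_{hh}}
  \;=\; \sum_{k=0}^{t}
        \frac{\partial L_t}{\partial \hat{y}_t}\,
        \frac{\partial \hat{y}_t}{\partial h_t}
        \left( \prod_{j=k+1}^{t} \frac{\partial h_j}{\partial h_{j-1}} \right)
        \frac{\partial h_k}{\partial W_{hh}}
```

Each factor \partial h_j / \partial h_{j-1} in that product involves W_hh, and it is exactly this repeated chain of matrix multiplications that leads to the exploding and vanishing gradient problems discussed next.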
individual time step, and then across the time steps, all the way from where we are currently in the sequence back to the beginning. And this is the reason why it's called backpropagation through time: as you can see, all of the errors are flowing back in time from the most recent time step to the very beginning of the sequence. Now if we expand this out and take a closer look at how gradients actually flow across this chain of repeating recurrent neural network modules, we can see that between each time step we have to perform a matrix multiplication that involves the recurrent weight matrix W_hh. So computing the gradient with respect to the initial cell state h of 0 is going to involve many factors of this weight matrix, and also repeated computation of gradients with respect to this weight matrix. This can be problematic for a couple of reasons. The first is that if many of the values in this series, this chain of matrix multiplications, such as the weight values or the gradient values, are greater than 1, we can run into a problem called the exploding gradient problem, where our gradients become extremely large and we can't really optimize. The solution here is to do what is called gradient clipping, effectively scaling back the values of particularly large gradients to try to mitigate this. We can also have the opposite problem, where our weight values or our gradients are very, very small, and this can lead to what is called the vanishing gradient problem, where gradients become increasingly smaller and smaller such that we can no longer effectively train the network. Today we're going to discuss three ways in which we can address this vanishing gradient problem: first by cleverly choosing our activation function, also by smartly initializing our weight matrices, and finally we'll discuss how we can make some changes to the network architecture itself to alleviate this vanishing gradient problem. All right, in order to get into that you'll need some intuition about why vanishing gradients could be a problem. Imagine you keep multiplying a small number, something between 0 and 1, by another small number; over time that number is going to keep shrinking and shrinking, and eventually it's going to vanish. What this means when this occurs for gradients is that it's going to be harder and harder to propagate errors from our loss function back into the distant past, because we have this problem of the gradients becoming smaller and smaller. Ultimately, what this will lead to is that we end up biasing the weights and the parameters of our network to capture shorter-term dependencies in the data rather than longer-term dependencies. To see why this could be a problem, let's again consider this example of training a language model to predict the next word in a sentence of words, and let's say we're given the phrase "the clouds are in the blank". In this case it's pretty obvious what the next word is likely going to be, right, sky, because there's not that much of a gap in the sequence between the relevant information, the word cloud, and the place where our prediction is actually going to be needed, and so an RNN could be equipped to handle that. But let's say now we have this other sentence, "I grew up in France and I speak fluent blank", where now there's more context that's needed from earlier in the sentence to make that prediction.
And in many cases that's going to be exactly the problem: we have this large gap between what's relevant and the point where we may need to make a prediction, and as this gap grows, standard RNNs become increasingly unable to connect the relevant information. That's because of this vanishing gradient problem, so it relates back to this need to be able to effectively model and capture long-term dependencies in data. How can we get around this? The first trick we're going to consider is pretty simple: we can smartly select the activation function our networks use. Specifically, what is commonly done is to use a ReLU activation function, where the derivative of this activation function is equal to one for all instances in which x is greater than zero, and this helps prevent the gradient of our loss function from shrinking when the values of its input are greater than zero. Another thing we can do is to be smart in how we initialize the parameters in our network; we can specifically initialize the weights to the identity matrix to try to prevent them from shrinking to zero completely and very rapidly during backpropagation. Our final solution, the one that we're going to spend the most time discussing, and also the most robust, is to introduce and use a more complex recurrent unit that can more effectively track long-term dependencies in the data. Intuitively, you can think of it as controlling what information is passed through and what information is used to update the actual cell state. Specifically, we're going to use what are called gated cells, and today we're going to focus on one particular type of gated cell which is definitely the most common and most broadly used in recurrent neural networks, and that's called the long short-term memory unit, or LSTM. What's cool about LSTMs is that networks built using LSTMs are particularly well suited to maintaining long-term dependencies in the data and tracking information across multiple time steps, to try to overcome this vanishing gradient problem and, more importantly, to more effectively model sequential data. All right, so LSTMs are really the workhorse of the deep learning community for most sequential modeling tasks. So let's discuss a bit about how LSTMs work. My goal for this part of the lecture is to provide you with intuition about the fundamental operations of LSTMs, abstracting away a little bit of the math, because it can get a little confusing to wrap your mind around, but hopefully providing you with an intuitive understanding of how these networks work.
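Before getting into LSTMs, as a rough sketch of what the first two tricks look like in code, along with gradient clipping for the exploding-gradient case, assuming the TensorFlow Keras API (layer sizes and feature dimensions here are arbitrary placeholders):

import tensorflow as tf

# Trick 1: ReLU activation, whose derivative is 1 for positive inputs.
# Trick 2: initialize the recurrent weight matrix to the identity.
# All sizes below are arbitrary placeholders.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(None, 32)),   # variable number of time steps, 32 features per step
    tf.keras.layers.SimpleRNN(
        64,
        activation="relu",
        recurrent_initializer=tf.keras.initializers.Identity(),
    ),
    tf.keras.layers.Dense(10),
])

# For the exploding-gradient case: clip the norm of the gradients before the
# optimizer applies them.
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3, clipnorm=1.0)
model.compile(
    optimizer=optimizer,
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)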
All right, so to understand the key operations that make LSTMs special, let's first go back to the general structure of an RNN. Here I'm depicting it slightly differently, but the concept is exactly what I introduced previously, where we build up our recurrent neural network via this repeating module linked across time. What you're looking at here is a representation of an RNN that shows an illustration of the operations that define the state and output update functions. We've simplified it down: the black lines effectively capture weight matrix multiplications, and the yellow rectangles, such as the tanh depicted here, show the application of a non-linear activation function. So in this diagram, the repeating module of the RNN contains a single neural network computation node, consisting of a tanh activation function layer. Here again we perform this update to the internal cell state h of t, which is going to depend on the previous cell state h of t minus 1 as well as the current input x of t, and at each time step we're also going to generate an output prediction y of t, a transformation of the state. LSTMs also have this chain-like structure, but the internal repeating module, that recurrent unit, is slightly more complex. In the LSTM, this repeating recurrent unit contains different interacting layers, again defined by standard neural network operations like sigmoid and tanh non-linear activation functions and weight matrix multiplications. What's neat about these different interacting layers is that they can effectively control the flow of information through the LSTM cell, and we're going to walk through how these updates actually enable LSTMs to track and store information throughout many time steps. Here you can see how we can define an LSTM layer using TensorFlow, as sketched below. All right, so the key idea behind LSTMs is that they can selectively add or remove information to the internal cell state using structures called gates, and these gates consist of a standard neural network layer, like a sigmoid shown here, and a pointwise multiplication. Let's take a moment to think about what a gate like this could be doing. In this case, because we have the sigmoid activation function, it's going to force anything that passes through that gate to be between 0 and 1. So you can effectively think of this as modulating and capturing how much of the input should be passed through, between nothing, 0, and everything, 1, which effectively gates the flow of information. LSTMs use this type of operation to process information by, first, forgetting irrelevant history; second, storing the most relevant new information; third, updating their internal cell state; and then generating an output. The first step is to forget irrelevant parts of the previous state, and this is achieved by taking the previous state and passing it through one of these sigmoid gates, which again you can think of as modulating how much should be passed in or kept out. The next step is to determine what part of the new information and what part of the old information is relevant, and to store this into the cell state. What is really critical about LSTMs is that they maintain a separate value of the cell state, c of t, in addition to what we introduced previously, h of t, and c of t is what is going to be selectively updated via these gated operations. Finally, we can return an output from our LSTM, and there is an interacting layer, an output gate, that can control what information encoded in the cell state is ultimately outputted and sent to the network as input in the following time step. So this operation controls both the value of the output y of t as well as the cell state that's passed on from time step to time step in the form of h of t. The key takeaway that I want you to have about LSTMs from this lecture is that they can regulate information flow and storage, and by doing this they can better capture longer-term dependencies, and also help us train the networks overall and overcome the vanishing gradient problem.
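The LSTM layer itself is a one-line swap in the TensorFlow Keras API; here is a minimal sketch (the sizes are placeholders, not the ones used in the lecture):

import tensorflow as tf

# Replace the SimpleRNN with an LSTM: the forget, input, and output gates and
# the separate cell state c_t are all handled internally by the layer.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(None, 32)),        # variable-length sequences, 32 features per step
    tf.keras.layers.LSTM(128),               # 128 hidden units (arbitrary placeholder)
    tf.keras.layers.Dense(10),
])
model.summary()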
The key way they help during training is that all of these different gating mechanisms work to allow for what I like to call the uninterrupted flow of gradient computation over time. This is done by maintaining the separate cell state c of t, across which the actual gradient computation, taking the derivative of the loss with respect to the weights and shifting the weights in response, occurs with respect to this separately maintained cell state c of t. What this ultimately allows is for us to mitigate the vanishing gradient problem that's seen with traditional RNNs. So to recap the key concepts behind LSTMs: LSTMs maintain a separate cell state from what is outputted, that's c of t; they use gates to control the flow of information by forgetting irrelevant information from past history, storing relevant information from the current input, updating their cell state, and outputting the prediction at each time step; and it's really this maintenance of the separate cell state c of t which allows for backpropagation through time with uninterrupted gradient flow and more efficient, more effective training. For these reasons, LSTMs are very commonly used as the backbone RNN in modern deep learning. All right, so now that we've gone through the fundamental workings of RNNs, been introduced to the backpropagation through time algorithm, and also considered the LSTM architecture, I'd like to consider a few very concrete practical examples of how recurrent neural networks can be deployed for sequential modeling, including an example that you'll get experience with in today's lab, and that's the task of music generation, or music prediction. So let's suppose you're trying to build a recurrent neural network that can take sequences of musical notes and, from that sequence, predict the most likely next note to occur. Not only do we want to predict the most likely next note, we want to actually take this trained model and use it to generate brand new musical sequences that have never been heard before, and we can do this by basically seeding a trained RNN model with a first note and then iteratively building up the sequence over time to generate a new song. Indeed, this is one of the most exciting and powerful applications of recurrent neural networks. To motivate this, which is going to be the topic of your lab today, I'm going to introduce a really fun and interesting historical example. It turns out that one of the most famous classical composers, Franz Schubert, had a famous symphony called the Unfinished Symphony, and the symphony is described exactly like that: it's unfinished, it was actually left at two movements rather than four, and Schubert was not able to finish composing the symphony before he died. Recently, there was a neural network based algorithm that was trained and put to the test to actually finish this symphony and compose two new movements, and this was done by training a recurrent neural network model on Schubert's body of work and then testing it by tasking the model with trying to generate the new composition given the score of the previous two movements of this unfinished symphony. So let's listen in to see what the result was. I'd like to continue, because I'm actually enjoying listening to that music, but we also have to go on with the lecture. So, pretty awesome, right?
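As a minimal sketch of the seed-and-sample idea described above, assuming a trained next-note model, here called note_model, that maps an integer-encoded note sequence to logits over a note vocabulary (the model, the encoding, and all of the names are hypothetical placeholders):

import tensorflow as tf

def generate_song(note_model, seed_notes, length=100):
    # Iteratively sample: feed the sequence so far, sample the next note from
    # the predicted distribution, append it, and repeat.
    notes = list(seed_notes)
    for _ in range(length):
        inputs = tf.constant([notes])                    # shape (1, current_length)
        logits = note_model(inputs)[:, -1, :]            # logits for the next note only
        next_note = tf.random.categorical(logits, num_samples=1)[0, 0]
        notes.append(int(next_note))
    return notes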
And I hope you agree; I think it's really exciting to see neural networks being put to the test here, but also, at least for me, this sparks some questioning about what the line is between artificial intelligence and human creativity, and you'll get a chance to explore this in today's lab. Another cool example, beyond music generation, is one in language processing, where we can go from an input sequence like a sentence to a single output. We can train an RNN to produce a prediction of the emotion or sentiment associated with a particular sentence, either positive or negative, and this is effectively a classification task, much like what we saw in the first lecture, except again we're operating over a sequence where we have this time component. Because this is a classification problem, we can train these networks using a cross-entropy loss. One application we may be interested in is classifying the sentiments associated with tweets: for example, we could train an RNN to predict that this first tweet about our class, 6.S191, has a positive sentiment, but that this other tweet about the weather actually has a negative sentiment. All right, so the next example I'll talk about is one of the most powerful applications of recurrent neural networks, and it's the backbone of things like Google Translate: the idea of machine translation, where our goal is to input a sentence in one language and train an RNN to output a sentence in another language. This can be done by having an encoder component, which effectively encodes the original sentence into some state vector, and a decoder component, which decodes that state vector into the target language, the new language. It's quite remarkable that using the foundations and concepts that we learned today about sequential modeling and recurrent neural networks we can tackle this very complex problem of machine translation. But what could be some potential issues with using this approach, using RNNs or LSTMs? The first issue is that we have this encoding bottleneck, which means that we need to encode a lot of content, for example a long body of text of many different words, into a single memory state vector, which is a compressed version of all the information necessary to complete the translation, and it's this state vector that's ultimately going to be passed on and decoded to actually achieve the translation. By forcing this compression we could actually lose some important information, so imposing this extreme encoding bottleneck is definitely a problem. Another limitation is that the recurrent neural networks we learned about today are not really that efficient, as they require sequential processing of information, which is the point that I've been driving home all along, and it's this sequential nature that makes recurrent neural networks relatively inefficient on modern GPU hardware, because it's difficult to parallelize them. Furthermore, besides this problem of speed, to train the RNN we need to go from the decoded output all the way back to the original input, which involves going through order T iterations of the network, where T is the number of time steps that we feed into our sequence.
What this means in practice is that backpropagation through time is actually very, very expensive, especially when considering large bodies of text that need to be translated, as depicted here. Finally, and perhaps most importantly, is the fact that traditional RNNs have limited memory capacity. We saw that recurrent neural networks suffer from this vanishing gradient problem; LSTMs helped us a bit, but still, both of these architectures are not super effective at handling the very long temporal dependencies that can be found in large bodies of text that need to be translated. So how can we build an architecture that could be aware of these dependencies that may occur in larger sequences or bodies of text? To overcome these limitations, a method called attention was developed. The way it works is that instead of the decoder component having access to only the final encoded state, that state vector passed from encoder to decoder, the decoder now has access to the states after each of the time steps in the original sentence, and it's the weighting of these vectors that is actually learned by the network over the course of training. This is a really interesting idea, because what this attention module actually does is learn from the input which points and which states to attend to, and it makes it very efficient and capable of capturing long-term dependencies as easily as it does short-term dependencies. That's because training such a network only requires a single pass through this attention module, no backpropagation through time, and what you can think of these attention mechanisms as providing is a sort of learnable memory access. Indeed, this system is called attention because, when the network is learning the weighting, it's learning to place its attention on different parts of the input sequence, to effectively capture a sort of accessible memory across the entirety of the original sequence. It's a really powerful idea, and indeed it's the basis of a new and rapidly emerging class of models that are extremely powerful for large-scale sequential modeling problems, and that class of models is called transformers, which you may have heard about as well. This application and consideration of attention is very important not only in language modeling but in other applications as well. For example, if we're considering autonomous vehicles, at any moment in time an autonomous vehicle like this one needs to have an understanding of not only where every object is in its environment but also where a particular object may move in the future. So here's an example of a self-driving car: the red boxes on the right-hand side depict a cyclist, and as you can see, the cyclist is approaching a stopped vehicle, shown here in purple. The self-driving car is able to recognize that the cyclist is going to merge in front of the car, and before it does so, the self-driving car pulls back and stops. This is an example of trajectory prediction and forecasting, in which it's clear that we need to be able to attend to and make effective predictions about where dynamic objects in a scene may move in the future. Another powerful example of sequential modeling is in environmental modeling and climate pattern analysis and prediction: here we can visualize the predicted patterns for different environmental markers, such as winds and humidity. It's an extremely important and powerful application for sequence modeling and for recurrent neural networks, because effectively predicting the future behavior of such markers could aid in projecting and planning for long-term climate impact.
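Going back to the attention idea described a moment ago, here is a minimal sketch of the core computation, a learned weighting over all of the encoder states rather than only the final one (the function name and shapes are illustrative assumptions, not the exact mechanism of any particular paper):

import tensorflow as tf

def attend(decoder_state, encoder_states):
    # decoder_state:  (batch, hidden)       - the current query
    # encoder_states: (batch, time, hidden) - one state per input time step
    scores = tf.einsum("bh,bth->bt", decoder_state, encoder_states)   # similarity scores
    weights = tf.nn.softmax(scores, axis=-1)                          # attention weights over time
    context = tf.einsum("bt,bth->bh", weights, encoder_states)        # weighted sum of states
    return context, weights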
All right, so hopefully over the course of this lecture you've gotten a sense of how recurrent neural networks work and why they're so powerful for processing sequential data. We saw how we can model sequences via a defined recurrence relation and how we can train them using the backpropagation through time algorithm. We then explored a bit of how gated cells like LSTMs can help us model long-term dependencies in data, and also discussed applications of RNNs to music generation, machine translation, and beyond. So with that, we're going to transition now to the lab sessions, where you're going to have a chance to begin to implement recurrent neural networks on your own using TensorFlow. We encourage you to come to the class and lab office hour Gather Town session to discuss the labs and ask your questions about both the lab content as well as content from the lectures. We look forward to seeing you there. Thank you.
MIT_6S191_Introduction_to_Deep_Learning
MIT_6S191_2018_Deep_Generative_Modeling.txt
All right, thanks very much for the invitation to speak to you. So I'm going to be talking about deep generative models. When we talk about deep generative models, what we're really talking about here, from my point of view, is essentially training neural nets from training examples in order to represent the distribution from which those examples came. We can think about this as either explicitly doing density estimation, where we have some samples here and we try to model those samples with some density estimate, or we can think of it more like this, which is actually what I'll be doing a lot more of, which is sample generation. So we have some training examples like this, just natural images from the world, and we're asking a model to learn to output images like this. Now these are actually not true samples; these are actually just other images from the same training set, I believe from ImageNet. A few years ago, or even a few months ago, it would have seemed obvious that you couldn't generate samples like this, but in fact nowadays it's actually not so obvious that we couldn't generate these. So it's been a very exciting time in this area, and the amount of work we've done and the amount of progress we've made in the last few years has been pretty remarkable, and I think part of what I want to do here is tell you a little bit about that progress, give you some sense of where we were in, say, 2014 when this really started to accelerate, and where we are now. So why generative models, why do we care about generative modeling? Well, there's a bunch of reasons. Some of us are just really interested in making pretty pictures, and I confess that for the most part that's what I'll be showing you today, just as an evaluation metric: we'll just be looking at pictures, natural images, and how well we're doing on natural images. But there are actually real tasks that we care about when we talk about generative modeling. One of them is, let's say you want to do some conditional generation, like for example machine translation, right, so we're conditioning on some source sentence and we want to output some target sentence. Well, the structure within that target sentence, that target language, let's say the grammatical rules, you can model that structure using a generative model, so this is an instance where we would do conditional generative modeling. Another example, something that we're actually looking a little bit towards, is: can we use generative models for outlier detection? Actually, recently these types of models have been integrated into RL algorithms to help them do exploration more effectively; this was work done at DeepMind, I believe. So here we're looking at a case, you can think of it as a toyish version of the autonomous vehicle task, where you want to be able to distinguish cars and wheelchairs, and then you're going to get something like this, and you don't want your classifier to just blindly say, oh, I think it's either a car or a wheelchair; you want your classifier to understand that this is an outlier, and you can use generative modeling to be able to do that by noticing that there aren't very many things like this example in the training set, so you can proceed with caution. And this is kind of a big deal, right, because our neural net classifiers are very, very capable of
excellent classification performance, but any classifier is just trained to output one of the classes that it's been given, and so in cases where we're actually faced with something really new that it hasn't seen before, or perhaps illumination conditions that it's never been trained to cope with, we want models that are conservative in those cases, so we hope to be able to use generative models for that. Another case where we're looking at generative models being useful is in going from simulation to real examples in robotics. In robotics, training these robots with neural nets is actually quite laborious: if you're really trying to do this on the real robot, it would take many, many trials, and it's not really practical. In simulation this works much, much better, but the problem is that if you train a policy in simulation and transfer it to the real robot, well, that hasn't worked very well, because the environment is just too different. But what if we could use a generative model to make our simulation so realistic that that transfer is viable? So this is another area where a number of groups are looking at this kind of generative modeling. So there are lots of really practical ways to use generative models beyond just looking at pretty pictures. Right, so I break down the kinds of generative models there are in the world into two rough categories here, and maybe we can take issue with this. Oh, by the way, if you guys have questions, go ahead and ask me while you have them; I like interaction if possible, or you can save them for the end, either way is fine, and sorry for my voice, I've got a cold. So, we have autoregressive models and we have latent variable models. Autoregressive models are models where you basically define an ordering over your input, so for things like speech recognition or, let's say, speech synthesis in the generative modeling case, this is natural: there's a natural ordering to that data, it's just the temporal sequence. For things like images it's a little less obvious how you would define an ordering over pixels, but there are nonetheless models such as PixelRNN and PixelCNN that do just this. In fact, PixelCNN is a really interesting model from this point of view: they basically define a convolutional neural net with a mask, so if you remember our previous lecture we saw these convolutional neural nets, but what they do is they stick a mask on it so that you've got sort of a causal direction, so you're only looking at previous pixels in the ordering that you've defined, the ordering defined by the autoregressive model. So you maintain this ordering as you go through the convnet, and what that allows you to do is come up with a generative model that's supported by this convnet; it's just a full generative model, and it's a pretty interesting model in its own right, but because I have rather limited time I'm actually not going to go into that model in particular. Another thing I just want to point out here is that WaveNet is probably the state-of-the-art model for speech synthesis right now, and it forms the basis of very interesting speech synthesis systems. It's another area where generative models have made remarkable contributions in the last few years, even the last few months, so now we're seeing models where you would be fairly hard-pressed to distinguish between natural speech and these kinds of models, for the most part.
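To make the masking idea a bit more concrete, here is a minimal sketch of a PixelCNN-style masked convolution, written against the TensorFlow API; the kernel size, channel counts, and the "type A" mask convention are illustrative assumptions rather than the exact configuration of the published model:

import numpy as np
import tensorflow as tf

def causal_mask(k, in_ch, out_ch, mask_type="A"):
    # Zero out kernel entries at (for type 'A') and to the right of the center
    # pixel in the center row, plus every row below it, so each output pixel
    # only depends on previously generated pixels in the chosen ordering.
    mask = np.ones((k, k, in_ch, out_ch), dtype=np.float32)
    c = k // 2
    mask[c, c + (1 if mask_type == "B" else 0):, :, :] = 0.0
    mask[c + 1:, :, :, :] = 0.0
    return tf.constant(mask)

kernel = tf.Variable(tf.random.normal([5, 5, 1, 32], stddev=0.1))
mask = causal_mask(5, 1, 32, mask_type="A")

def masked_conv(images):
    return tf.nn.conv2d(images, kernel * mask, strides=1, padding="SAME")

x = tf.random.uniform([8, 28, 28, 1])   # a batch of small grayscale images
print(masked_conv(x).shape)             # (8, 28, 28, 32)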
So what I'm going to concentrate on is latent variable models. Latent variable models are models that essentially posit that you have some latent variables that you hope will represent some latent factors of variation in the data, so these are things that, as you wiggle them, are going to move the data in what you hope will be natural ways. You can imagine a latent variable for images corresponding to illumination conditions, or, if there's a face, a common thing we find is a latent variable corresponding to a smile, so if we move this latent variable, in the image that we generate a smile appears and disappears. These are the kinds of latent variables that we want, and we want to discover them, and this is really a challenging task: in general we want to take natural data, just unlabeled data, and discover these latent factors that give rise to the variation we see. There are two kinds of models in this family that I'm going to be talking about: variational autoencoders and generative adversarial nets, or GANs. I work personally with both of these kinds of models; they serve different purposes for me. So let's dive in. First I'll talk about the variational autoencoders. This was actually a model developed simultaneously by two different groups, one at DeepMind, that's the bottom one here, and then Kingma and Welling at the University of Amsterdam. Again, the idea behind latent variable models in general is kind of represented in this picture: here's the space of our latent variables, and we consider this representation to be fairly simple, we have our two coordinates z1 and z2, they're independent in this case and fairly regular, and they sort of form a chart for what is our complicated distribution here in X space. You can think of this as sort of the data manifold, so you can think of this as image space, for example: natural images embedded in pixel space form this kind of manifold, and what we want are coordinates that allow you, as you move smoothly in this space, to move along what can be a very complicated manifold. That's the kind of thing we're looking for when we do latent variable modeling. So here's an example of exactly what I mean; this is an early example using these variational autoencoders. Here's the Frey face data set, just a whole bunch of images of Brendan Frey's face, and what we're showing here is the model output of this variational autoencoder for different values of these latent variables z1 and z2. Now, we've kind of post hoc added these labels, pose and expression, on them, because you see, as we move this z2 here, the expression kind of smoothly changes from what looks like a frown to eventually a smile, and through what looks over here like him sticking his tongue out, I guess, and in this direction there's a slight head turn; it's pretty subtle, but it's there. So, like I said, these labels were added post hoc; the model just discovered that these were two sources of variation in the data that were relatively independent, and the model just pulled those out. For something like MNIST, these are samples drawn from a model trained on MNIST, and it's a little less natural, because in this case you could argue the data is really best modeled not with continuous latent factors but more like in clusters, so you get this kind of somewhat interesting, somewhat bizarre relationship, where some of this relationship of the tilt happens here, but then that 1 sort of morphs into a 7, which morphs into a 9, because different regions in this continuous space represent different examples.
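Grids like the Frey face figure are typically made by sweeping a range of values for z1 and z2 and decoding each point; a minimal sketch, assuming a trained decoder network (here called decoder, a hypothetical placeholder) that maps a 2-D latent to an image:

import numpy as np
import tensorflow as tf

def latent_grid(decoder, n=10, lo=-2.0, hi=2.0):
    # Decode an n-by-n grid of (z1, z2) values so we can see how the generated
    # image changes as each latent coordinate is varied.
    z1, z2 = np.meshgrid(np.linspace(lo, hi, n), np.linspace(lo, hi, n))
    z = np.stack([z1.ravel(), z2.ravel()], axis=1).astype("float32")   # (n*n, 2)
    return decoder(tf.constant(z))                                     # (n*n, height, width, channels)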
So, a little bit more detail on how we do these kinds of latent variable models, at least in the context of the variational autoencoder, or VAE, model. What we're trying to learn here is p of x, some distribution over the data; we're trying to maximize the likelihood of the data, that's it. But the way we're going to parameterize our model is with p of x given z, where the z's are latent variables, and then p of z, some prior distribution. This p of z here is typically something simple; it's some prior distribution, and we actually generally want it to be independent. There are some modeling compromises to be made there, but the reason why you'd want it independent is because that helps get you the kind of orthogonal representation here, so between this dimension and this dimension we want not very much interaction, in order to make them more interpretable. The other thing we want to do is think about how we're going to go from something simple, you can think of this as a Gaussian distribution or a uniform distribution, to this complicated space: we want to model G here, which transforms z from this space into this complicated space, and the way we're going to do that is with a neural net. Actually, in all of the examples that I'm going to show you today, the way we're going to do that is with a convolutional neural net, and it's a bit of an interesting thing to think about going from some fully connected thing, z, to some two-dimensional output with a topology, here in natural image space X. It's kind of like the opposite path of what you would take to do a convnet classification. There are a few different ways you could think about doing that, one of which is called a transposed convolution; this turns out to not be such a good idea, since it's a case where you essentially fill in a bunch of zeros. It seems like the most acceptable way to do it right now is, once you get some small-scale topology here, you just do interpolation: you supersample the image, you can do bilinear interpolation, then do a convolution that preserves that size, and then upsample again, conv, upsample, conv; that tends to give you the best results. So when you see this kind of thing, for our purposes, think convolutional neural net.
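A minimal sketch of that upsample-then-convolve decoder pattern, assuming the TensorFlow Keras API (the 2-D latent and all layer sizes are arbitrary placeholders):

import tensorflow as tf

# Decoder that maps a flat latent vector to an image by repeated bilinear
# upsampling followed by size-preserving convolutions, rather than transposed
# convolutions. All sizes are placeholders.
decoder = tf.keras.Sequential([
    tf.keras.Input(shape=(2,)),                                    # latent z
    tf.keras.layers.Dense(7 * 7 * 64, activation="relu"),
    tf.keras.layers.Reshape((7, 7, 64)),
    tf.keras.layers.UpSampling2D(interpolation="bilinear"),        # 7x7 -> 14x14
    tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu"),
    tf.keras.layers.UpSampling2D(interpolation="bilinear"),        # 14x14 -> 28x28
    tf.keras.layers.Conv2D(1, 3, padding="same", activation="sigmoid"),
])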
All right, it's important to point out that if we had, over here, z's that went with our x's, we'd be done, because this would just be a supervised learning problem at this point, and we'd be fine. The trick is, these are latent, they're hidden, we don't know these z's, we haven't discovered them yet. So how do we learn with this model? Well, the way we're going to do it is with a trick that's actually been around for quite some time, so this isn't particularly new: we're going to use a variational lower bound on the data likelihood. It turns out that we can express the data likelihood, again this is the thing we're trying to maximize, with a lower bound given by something like this. We posit that we have some distribution Q that estimates the posterior of z for a given x, and we're trying to maximize the expected log of the joint probability over x and z minus log Q of z. This is the variational lower bound, and one of the ways we can express this is: from Q's point of view, if you were to find a Q that actually recovered the exact posterior distribution over z given x, this would actually be a tight lower bound, so if we then optimized this lower bound we would for sure be optimizing the likelihood. In practice, that's what we're going to do anyway: we're going to have this lower bound, we're going to try to optimize it, to raise it up in hopes of raising up our likelihood. But the problem is that the actual posterior of this G model here, which is a neural net, just some forward-model neural net, is intractable: computing the posterior of z given x is going to be some complicated thing, and we have no sensible way of doing it, so we're going to approximate it with this Q. What's interesting about this formulation, and this is what is new to the variational autoencoders, is that they've reformulated it a little bit differently and come up with this different expression here, which can be thought of in two terms. One is the reconstruction term: if you look at what this is, you get a z from some Q and you're just trying to reconstruct x from that z, so you start with x, you get a z, and then you're trying to do a reconstruction of x from that z. This is where the name variational autoencoder comes from, because this really looks like an encoder on the side of Q here and a decoder here, and you're just trying to minimize that reconstruction error. But in addition to this they add this regularization term, and this is interesting: what they're doing here is basically saying, well, we want to regularize this posterior, and regular autoencoders actually don't have this. So we're trying to regularize this posterior to be a little bit closer to the prior here. It's a common mistake when people learn about this to think that, oh well, the goal is for these two things to actually match; that would be terrible, because that would mean you lose all information about x, you definitely don't want them to match, but it does act as a regularizer, sort of as a counterpoint to the reconstruction term. So now we've talked a little bit about this, but what is this Q? Well, for the variational autoencoder, Q is going to be another neural net, and in this case we can think of it as just a straight convnet for the case of natural images. So again we've got our lower bound, the objective that we're trying to optimize, and we're going to parameterize Q as this neural net, the convnet that goes from x to z, and we've got our generative model here, or decoder, that goes from z to x.
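Written out, the bound just described (the evidence lower bound), with q(z|x) as the encoder and p(x|z) as the decoder, takes the standard form, splitting into the reconstruction term and the regularization term toward the prior:

\log p(x) \;\ge\; \mathbb{E}_{q(z \mid x)}\big[\log p(x, z) - \log q(z \mid x)\big]
\;=\; \mathbb{E}_{q(z \mid x)}\big[\log p(x \mid z)\big] \;-\; D_{\mathrm{KL}}\big(q(z \mid x)\,\|\,p(z)\big)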
I'm going to add a few more details to make this thing actually work in practice. Up till now this is not too new; there have been instances of this kind of formalism of an encoder network and a decoder network, but what they do next is actually kind of interesting. They notice that if they parameterize this a certain way, if they say Q is, actually you can use any continuous distribution here, but they pick a normal distribution, so Q given x is some normal distribution where the parameters mu and sigma that define it are produced by the encoder network, then they can actually encode it like this. It's called the reparameterization trick, where they take z, a random variable, to be equal to some function of the input, mu, plus sigma, a scaling factor, times some noise. What this allows them to do, when they formulate it this way, is that in training this model they can actually backprop through the decoder and into the encoder, to train both models simultaneously. It looks a little bit like this: they do forward propagation, start with an x, forward propagate to z, add noise here, that was the epsilon, and then forward propagate to this x hat, which is our reconstruction, compute the error between x and x hat, and backpropagate that error all the way through, and that allows them to train this model very effectively, in ways that we were never able to before this trick came up. And when you do that, this is the kind of thing that comes out. So this came out in 2014, and these were, I promise, really impressive results in 2014; it was the first time we were seeing this sort of thing. This is from Labeled Faces in the Wild, these days we use CelebA, and this is ImageNet, so not a whole lot there, actually this is a small version of ImageNet. But you can do things with this model. For example, one of the things that we've done with this model, I mentioned briefly this PixelCNN, is we actually include this PixelCNN on the decoder side. One of the reasons why we get these kinds of images, if I just go back, is that this model makes a lot of independence assumptions, and part of it is because we want those independence assumptions to make our z's more interpretable, but they have consequences, and one of the consequences is you end up with kind of blurry images. That's part of why you end up with blurry images: we're making these approximations in the variational lower bound. So adding the PixelCNN allows us to encode more complexity in here, and by the way, this is now a hierarchical version of the VAE using PixelCNN, which allows us to encode complicated distributions in z1 here given the upper-level z's. With this kind of thing, these are the kinds of images that we can synthesize using this variational model, we'll call it the pixel VAE. These are bedroom scenes, and you can see it produces reasonably good, fairly clear bedroom scenes, and then ImageNet, where you can see that it gets roughly the textures right; it's not really getting objects yet. Objects are actually really tough to get when you model things in an unconditional way; what I mean by that is the model doesn't know that it's supposed to generate a dog, let's say, if it was going to generate something, it's just generating from p of x in general, and that's actually pretty challenging when we talk about ImageNet. All right, so that's one way we can improve the VAE model.
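As a minimal sketch of the reparameterization trick described above, assuming the TensorFlow API and an encoder that outputs a mean and a log-variance (the function and variable names are placeholders):

import tensorflow as tf

def reparameterize(mu, log_var):
    # Sample z = mu + sigma * epsilon with epsilon ~ N(0, I). Because the
    # randomness lives in epsilon, gradients can flow back through mu and
    # sigma into the encoder during training.
    epsilon = tf.random.normal(shape=tf.shape(mu))
    return mu + tf.exp(0.5 * log_var) * epsilon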
Another way we can improve VAE models is to work on the encoder side, and that was done by a few people, culminating, I think, in the inverse autoregressive flow model. This is actually a very effective way to deal with the same kind of independence problems we saw being addressed on the decoder side, but addressing them on the encoder side. You can see briefly what this is doing: this is your prior distribution, and ideally you would like the marginal posterior, which is like combining all of these things together, to be as close to this as possible, because any disagreement between those two is really a modeling error, or an approximation error. A standard VAE is going to learn to do something like this, which is kind of as close as it can get while still maintaining independence in the distributions. Using this IAF method, which is a bit of a complicated method involving many iterations of transformations that are invertible, and you need that to be able to do the computation, you can get this kind of thing, which is pretty much exactly what you'd want in this setting. So we've played around with this model, and in fact we find it works really well in practice, but again it's on the encoder side, whereas what we were doing with the pixel VAE was working on the decoder side. So this is actually fairly complicated; both of these models are fairly complicated to use and fairly involved. So one question is, is there another way to train this kind of model that isn't quite so complicated? At the time, a student of Yoshua Bengio and myself, Ian Goodfellow, was toying around with this idea, and he came up with generative adversarial nets. The way generative adversarial nets work is to pose the learning of a generative model G in the form of a game, a game between this generative model G here and a discriminator D. The discriminator's job is to try to tell the difference between true data and data generated from the generator, so it's trying to tell the difference between fake data generated by the generator and true data from the training distribution, and it's just trained to do so: this guy is trained to try to output one if it's true data and output zero if it's fake data. The generator is being trained to fool the discriminator, by using its own gradients against it, essentially: we backpropagate the discriminator error all the way through x, we usually have to use continuous x for this, and into the generator, and now we're going to change the parameters of the generator in order to try to maximally fool the discriminator. A slightly more abstract way to represent this looks like this: we have the data on this side, we have the discriminator here with its own parameters, which again for our purposes is almost always a convolutional neural net, and then we have the generator, which is one of these kind of flipped convolutional models, because it takes noise as input, it needs noise because it needs variability, and then it converts that noise into something in image space, and its parameters are trained to fool the discriminator. All right, so we can be a little bit more formal about this. This is actually the objective function we're training on, so let's just break this down for a second. From the discriminator's point of view, what is this? This is just the cross-entropy loss; it's literally just what you would apply if you were doing classification with this discriminator, that's all this is.
From the generator's point of view, the generator comes in just right here; it's the thing you draw these samples from. The discriminator is trying to maximize this quantity, which is essentially a likelihood, and the generator is moving in the opposite direction. So we can analyze this game. Actually, the way this happened was that at first he just tried it and it worked, it was kind of an overnight kind of thing, and we got some very promising results, and then we set about trying to think about, well, how do we actually explain what it's doing, why does this work? So we did a little bit of theory which is useful to discuss, and I can tell you there's been a lot more theory on this topic since then that I will not be telling you about, but it's actually been a very interesting development in the last few years. This is the theory that appeared in the original paper. The way we approached this was: let's imagine we have an optimal discriminator. It turns out you can pretty easily show that this is the optimal discriminator, up here. Now, this is not a practical thing, because we don't know p_r, the probability of the real distribution; this is not available to us, it's only defined over the training set, so only on training examples, so you can't actually instantiate this. But in theory, if we had this optimal discriminator, then the generator would be trained to minimize the Jensen-Shannon divergence between the true distribution that gave rise to the training data and our generated distribution. So this is good: it's telling us that we're actually doing something sensible, in this kind of nonparametric, idealized setting that we're not really using, but it's interesting nonetheless. One thing I can say, though, is that in practice we actually don't use exactly the objective function that I was just describing; what we use instead is a modified objective function. The reason is that if we were to minimize with respect to G the term we had before, what happens is that as the discriminator gets better and better, the gradient on G actually saturates, so it goes to zero, and that's not very useful if we want to train this model. This is actually one of the practical issues you see when you train these models: you're constantly fighting this game, you're on this edge of the discriminator or the generator doing too well; essentially you're almost always fighting the discriminator, because as soon as the discriminator starts to win this competition between the generator and the discriminator, you end up with unstable training, and in this case the generator basically stops training and the discriminator runs away with it, at least in the original formulation. So what we do instead is optimize this, which is a slight modification, but it's still monotonic and it actually corresponds to the same fixed point. What we're doing, with respect to G, again coming in through the samples here, is maximizing this quantity rather than minimizing that one. So that's just a practical kind of heuristic, but it actually makes a big difference in practice.
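For reference, the objective being broken down here is the minimax game from the original GAN paper, together with the modified, non-saturating generator objective used in practice:

\min_G \max_D \; \mathbb{E}_{x \sim p_{\text{data}}}\big[\log D(x)\big] \;+\; \mathbb{E}_{z \sim p_z}\big[\log\big(1 - D(G(z))\big)\big]

\text{in practice, } G \text{ is trained to maximize } \mathbb{E}_{z \sim p_z}\big[\log D(G(z))\big] \text{ rather than to minimize } \mathbb{E}_{z \sim p_z}\big[\log\big(1 - D(G(z))\big)\big]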
All right, so when we first published this paper, these are the kinds of results we would see, and what you're looking at now are kind of movies formed by moving smoothly in z space, so you're looking at transformations on the image manifold coming from smooth motions in z space. We were pretty impressed with these results; again, they felt good at the time, but there have been a few papers that have come out recently, well, not so recently actually at this point: this came out in 2014, and in 2016 there was a big jump in quality, and this was one of those stages. This is the least-squares GAN; it's just one example of many I could have pointed to, but this is the kind of results we're seeing. Part of the secret here is that it's 128 by 128, so bigger images actually give you a much better perception of quality. But these are generally not real bedrooms; these are actually generated from the model, trained on roughly, I think, at least a hundred thousand bedroom scenes, and asked to generate from these random z bits, this is what it gives you. One thing you could think, and one thing that certainly occurred to me when I first saw these kinds of results, is that, well, it's just overfit on some small set of examples and it's just learning these delta functions, so it's not that interesting in the sense that it's kind of memorized some small set of data, and it's enough that it looks good and it's impressive. But it doesn't seem like that's actually the case, and one of the pieces of evidence pointed to, and this is in the DCGAN paper, was this: that same trick that I showed you with the movies on MNIST, of moving smoothly in z space, they applied basically that same idea here. This is basically one long trajectory through z space, and what you can see, starting up here and ending up all the way down here, is a smooth transition, and at every point it seems like a reasonable bedroom scene. So it really does seem like that picture that I showed you, where we had the z space that was sort of smooth, and then we had this x space that had this manifold in it; it really does feel like that's what's happening here, where we're moving smoothly in z space and we're moving along the image manifold in x space. So for example this, I guess I don't know if this is a picture or a TV, but it slowly morphs into a window, and then kind of becomes clearly a window, and then turns into this edge, sort of an edge of the room. One of the things, actually, if you want to nitpick about these: the models don't seem to understand 3D geometry very well, they often get the perspective just a little wrong; that's something that might be interesting for future work. So one question you might ask is, why do these things work so well? Keep in mind that when we talked about the VAE model we actually had to do quite a bit of work to get comparable results: we had to embed these PixelCNNs in the decoder, or we had to do quite a bit of work to get the encoder to work right. In these models, we literally just took a convnet, stuck in some noise at the beginning, pushed it through, and we got these fantastic samples; it really is kind of that simple. So what's going on, why is it working as well as it is? I have a kind of intuition for you, a kind of cartoon view.
So imagine that this is the image manifold; this is kind of a cartoon view of an image manifold, in two pixel dimensions here, and we're imagining that these are just parts of the image manifold, and they sort of share some features close by. What this is basically representing is the fact that most of this space isn't on the image manifold: the image manifold is some complicated nonlinear thing, and if you were to randomly sample in pixel space you would not land on this image manifold, which makes sense, right, if you randomly sample in pixel space you're not getting a natural image out of it. This is sort of a cartoon of my perspective on the difference between what you see with maximum likelihood models, of which the VAE is one, and something like a GAN. Maximum likelihood, the way it's trained, has to give a certain amount of likelihood density to each real sample; if it doesn't, it's punished very severely for that, so it's willing to spread out its probability mass over regions of the input space, areas of pixel space, that actually don't correspond to natural images, and when we sample from this model, that's where most of our samples come from: these are the blurry images that we're looking at. A GAN models things differently, because it's only playing this game between the discriminator and the generator: all it has to do is stick to some subset of the examples, or maybe some subset of the manifolds that are present, as long as there's enough diversity that the discriminator can't notice that it's modeling a subset. So there's pressure on the model to maintain a certain amount of diversity, but at the same time it doesn't actually face any pressure to model all aspects of the training distribution; it can just ignore certain aspects of the training examples, or certain aspects of the training distribution, without significant punishment from the training algorithm. So anyway, that's, I think, a good idea to have in your mind about the difference in how these methods work. I'd like to conclude with a few steps that have happened since the GANs. One of them is something that we've done. You might ask, well, GANs are great, but in a way it's kind of unsatisfying, because we start with this z and then we can generate images, so yes, we generate really nice-looking images, but we had this hope, when I started talking about these latent variable models, that we could actually maybe infer the z from an image, so we could actually extract some semantics out of the image using these latent variables that you discover, and in the GAN we don't have them. The question is, within this generative adversarial framework, can we incorporate an inference mechanism? That's exactly what we're doing here with this work; it's a model we call ALI, but essentially identical work came out at the same time, known as BiGAN. The basic idea here is just to incorporate an encoder into the model. So compared to how the GAN was defined earlier, where we had the decoder, a generative model, and we only gave it x training examples and only compared against x generated from the generator, in this case what we're doing is we take x and then we actually use an
So, to conclude, I'd like to cover a few steps that have happened since these GANs, one of which is something we've done. You might say GANs are great, but in a way they're unsatisfying, because we start with this z and then generate images. Yes, we generate really nice-looking images, but we had this hope, when I started talking about latent variable models, that we could actually infer the z from an image, so that we could extract some semantics out of the image using the latent variables the model discovers, and in the GAN we don't have that. The question is whether we can incorporate an inference mechanism within this generative adversarial framework, and that's exactly what we're doing with this work. It's a model we call ALI, and essentially identical work that came out at the same time is known as BiGAN. The basic idea is to incorporate an encoder into the model. In the GAN as defined earlier, we had the decoder, a generative model, and we only gave the discriminator real X training examples to compare against X generated from the generator. In this case we take X from the data distribution on the left and use an encoder to generate a z given X, while on the decoder side we have the traditional GAN setup: we take a sample z from some simple distribution and decode it to an X. So the data distribution is encoded to z on one side, a sampled z is decoded to X on the other, and crucially the discriminator is given not just X but the pair (X, z), and it's asked: can you tell the difference, in this joint distribution, between the encoder side and the decoder side? That's the game. What we find, first of all, is that it generates very good samples, which is interesting; it actually seems to generate somewhat better samples than comparable GANs. There might be some regularization effect; I'm not entirely sure why that would be, but it gets fairly compelling samples. This is just with CelebA, a large dataset of celebrity images. But this is the more interesting plot. It corresponds to a hierarchical version of the model, with z1 and z2, so a two-layer version. If we reconstruct from only the higher-level z2, which contains fairly little information because it's a single vector that has to be synthesized into a large image, what we're looking at is a reconstruction: we take this image, encode it all the way to z2, then decode, and what we end up with is reasonably close but not that great. The reconstructions hold some piece of the information, and in some sense it's remarkable that it does as well as it does, because unlike something like the VAE, which is explicitly trained to do reconstruction, this model is not trained to do reconstruction at all; it's only trained to match these two joint distributions, and reconstruction is just a probe of how well it's doing. We take X, map it to z, resynthesize an X, and see, in X space, how close it came; it does okay. But over here, when we give it both z1 and z2, so really all of the latent variable information, we get much, much closer, which is interesting: it tells us that pure joint modeling, in this case between X, z1, and z2, is enough to do fairly good reconstruction without ever explicitly building that in. It's an interesting probe of how close we're coming to learning this joint distribution, and it seems we're getting surprisingly close, which is a testament to how effective this generative adversarial training framework actually is. I want to end with a few other things that have nothing to do with our work but that I think are very interesting and well worth learning about. The first is CycleGAN, which is this really cool idea: imagine you have two datasets that somehow correspond, but you don't know the correspondence; an alignment exists between the two datasets, but you don't have paired data. This actually happens a lot. A great example, outside of images, is machine translation: you almost always have lots of unilingual data, just text in a given language,
but it's very expensive to get aligned data, paired source and target examples. The question is: if you only have unilingual data, how successful can you be at learning a mapping between the two domains? They essentially use GANs to do this. Here's the setup: we have some domain X and a domain Y. They start with an X and transform it through a convolutional neural network G, usually a ResNet-based model, into some Y, and that Y is evaluated by a GAN-style discriminator: can X, pushed through G, make a convincing Y? That's what's being measured. You can think of X, here an image from one domain being mapped to another, as taking the place of our z, of our random bits; the generator gets its randomness from X. Then they do the same in the other direction with a mapping F: we've taken X, transformed it into Y, evaluated it with the GAN-style discriminator, and then we transform it back into X. Once we're back, they apply what's called the cycle-consistency loss, which is really a reconstruction term, an L1 reconstruction error, and they backpropagate through F and G. It's a symmetric relationship, so they do exactly the same thing on the other side: start with Y, transform it into X, compare that generated X against true X's via a discriminator, transform it back to Y, and apply the cycle-consistency loss there too. Without any paired data, this is the kind of thing they can get. Of particular note are the horses and zebras. This is a case where it's impossible to collect paired data: if you wanted a transformation from horses to zebras and back, you would never find pictures of a horse and a zebra in the exact same pose; that's just not a dataset you'd be able to collect. And yet they do a fairly convincing job. Often, though not in this particular example, you can even see that they turn green grass a little more savanna-like, dulling it out, because zebras are generally found in savanna-like conditions. They can do winter-to-summer transitions, and I've seen day-to-night examples; there are a lot of interesting things you can do with this. Going back to the motivating example of simulation versus real robotics that I gave at the beginning, this is a prime application area for this kind of technology. I will say, though, that one of the assumptions they make is a deterministic transformation between the two domains, so I think there's a lot of interest in how to break that restriction and handle something more like a many-to-many mapping between the domains.
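To make the cycle-consistency idea concrete, here is a rough sketch of the losses being described. G, F, D_X, and D_Y are placeholder callables for the two generators and discriminators; the BCE adversarial term stands in for the least-squares adversarial loss actually used in the paper, and the weighting of 10 on the cycle term is a commonly cited choice rather than something stated in this talk.

```python
import torch
import torch.nn as nn

l1 = nn.L1Loss()
bce = nn.BCEWithLogitsLoss()

def cyclegan_losses(x, y, G, F, D_X, D_Y):
    """x: batch from domain X, y: batch from domain Y.
    G maps X -> Y, F maps Y -> X, D_X / D_Y are discriminators on each domain."""
    fake_y, fake_x = G(x), F(y)

    # Adversarial terms: each translated batch must look real to the other domain's critic.
    adv = bce(D_Y(fake_y), torch.ones_like(D_Y(fake_y))) + \
          bce(D_X(fake_x), torch.ones_like(D_X(fake_x)))

    # Cycle-consistency: X -> Y -> X and Y -> X -> Y should reconstruct the inputs.
    # This L1 reconstruction is what stands in for paired supervision.
    cyc = l1(F(fake_y), x) + l1(G(fake_x), y)

    return adv + 10.0 * cyc
```

The generator losses are backpropagated through both F and G, exactly the symmetric two-sided structure described above.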
The last thing I want to show you is the most recent work, which is just mind-blowing in terms of the quality of the generation. These are images from NVIDIA, so I don't know if I'm undercutting a later speaker by showing them, but they were trained on the original CelebA dataset, the same one we had before, only with much larger images: 1024 by 1024. They're able to generate these kinds of results, and I would argue that many of them, maybe all of the ones shown here, essentially pass a kind of Turing test, an image Turing test: you cannot tell that these are not real people. I should say not all images look this good; some of them are actually really spooky, and you can go online, look at the video, and pick some out. How they do this is with a really interesting technique, so new that we have students just starting to look at it; we haven't probed it very far, so I don't know how effective it is in general, but it seems very interesting. They start with a tiny 4-by-4 image and train up the parameters for both the discriminator and the generator, which are convolutional but operate on this relatively small input. Then they increase the size of the input and add a layer, where some of the new parameters are initialized from the parameters that produced the smaller image: you stick those underneath, add some new parameters, and train the model to generate something bigger, and you keep going, and keep going, until you get something like this. As far as I'm concerned this amounts to a kind of curriculum of training, and it does two things for you. One, it helps build global structure into the image: because you start with such low-dimensional inputs, it reinforces the global structure. It also does something else that's pretty important: the model doesn't have to spend a lot of time training at the very large scale. I'd imagine they spend relatively little time training the biggest model, although this is NVIDIA, so they might spend a lot of time there, and it lets you spend most of the time in much smaller models, which is much more computationally efficient. All right, that's it for me, thanks a lot. Oh wait, sorry, one more thing I forgot: this is what they get for image generation on other categories. You give it "horse" and it's able to generate this kind of thing; it's able to generate bicycles too, which is pretty amazing quality. If you zoom in here it's kind of fun, because it gets the idea of the spokes but not exactly; some of them just sort of end midway. Still pretty remarkable. All right, thanks; if there are questions... [Applause]
MIT_6S191_Introduction_to_Deep_Learning
MIT_6S191_2022_Deep_Learning_New_Frontiers.txt
hello everyone welcome back um so this is going to be our last lecture that's given by alexander and myself and traditionally this is one of my favorite lectures where we're going to discuss some of the limitations of deep learning algorithms that we've been learning about as well as some of the new and emerging research frontiers that are enjoying a lot of success in both theory and application today okay so first things first we're going to cover some logistical information given that this is the last lecture that we'll be giving prior to our guest lectures so the first point is regarding the class t-shirts thank you to everyone who came by yesterday and picked up t-shirts i will note importantly that the t-shirts we have a lot of t-shirts this year due to backlog from from last year so the t-shirts today are going to be different than those that you may have received yesterday the t-shirts today are our specific class t-shirts and we really really like these as kind of a way for you all to remember the course and you know signify your participation in it so please come by if you would like these one of these t-shirts and um cannot guarantee that they will be there remaining tomorrow or friday due to the demand so please come by today it will be in 10 250 where the in-person office hours are right after this lecture completes so to recap on where we are right now and where we are moving forward as you've seen we've you know pushed through deep learning at sort of this breakneck pace and we've done this split between our technical lectures and our hands-on project software labs so in the remaining four lectures after this one we're going to actually extend even further from some of the new research frontiers that i'll introduce today to have a series of four really exciting guest lectures from leading industry researchers and academic researchers in the field and i'll touch briefly on what they're each be talking about in a bit but please note that we're really really lucky and privileged to have these guest speakers come and participate and join our course so highly highly encourage and recommend you to join and to join synchronously specifically as far as deadlines and where things stand with the labs submissions and the final project assignments lab 3 was released today the reinforcement learning lab all three labs are due tomorrow night at midnight uploaded to canvas and there are going to be specific instructions on what you need to submit for each of the three labs for that submission and again since we've been receiving a lot of questions about this on piazza and by email the submission of the labs is not required to receive a grade to receive a passing grade this is simply for entry into each of the three project competitions what is required to receive credit for the course is either the deep learning paper review or the final project presentation we'll get into more specifics on those in a couple of slides as a reminder but these are both due friday the last day of class okay so about the labs really really exciting hopefully you've been enjoying them i think they're i mean they're awesome i think they're awesome but that's also because i'm biased um we because we made them but nevertheless right really exciting opportunity to enter these cool competitions and win some of these awesome prizes in particular again reminding you that they're going to be due tomorrow night and so for each of the labs we have selected a prize that kind of corresponds with the theme of that lab 
Hopefully you'll appreciate that, and as Alexander mentioned, I'd like to highlight that for lab three, which focuses on reinforcement learning for autonomous vehicle control, there's the additional opportunity, if you're a winner, to actually deploy your model on MIT's full-scale self-driving autonomous vehicle. A couple of notes on the final class projects; I won't go into too much detail, because this is summarized on the slides as well as on the course syllabus, but the critical action item for today is that if you are interested in participating in the final project proposal competition, you must submit the name of your group by tonight at midnight. I checked the sign-up sheet right before this lecture and there are a lot of spaces open, so there's plenty of opportunity for you to pitch us your ideas and be eligible for these prizes; more on the logistics is on the syllabus and the summary slide. The second option for receiving credit for the course, as you may know, is a written report on a deep learning paper; the instructions are summarized on the syllabus, and it will be due by the end of class, 3:59 p.m. Eastern time, this Friday. That covers most of the logistics I wanted to touch on. The last and very exciting point is the amazing, fantastic lineup of guest lectures we have slotted for tomorrow's and Friday's lecture times. Briefly, I'll describe each of these speakers and what they're going to contribute to our exploration of deep learning. First, we'll have two speakers from Innoviz, a really exciting emerging self-driving-car company, who will talk about using a data modality called lidar to train and build deep learning models for autonomous vehicle control. Our second lecture will be from Jasper Snoek of Google Research and Google Brain in Cambridge, who will talk about the role and importance of uncertainty in deep learning models; I'll introduce a little of this as a prelude to his lecture today. Next we'll have Professor Anima Anandkumar from NVIDIA and Caltech, the head of AI research at NVIDIA, who will give a talk, which I personally am super excited about, on applications of AI for science and how to do this in a principled manner. Finally, we'll have Miguel and Jenny from Rev AI, a company that specializes in natural language processing and automatic speech recognition, talking about some of their recent research and product development in this line. For all four of these lectures I highly, highly encourage you to attend synchronously, live and virtually. The reason is that while we've been publishing the recordings of Alexander's and my lectures on Canvas, we will need to share the guest lecture recordings with the speakers' companies for approval before publishing them on our course website, and this may take time, so we cannot guarantee that the lectures will be quickly accessible as recordings; please try to attend live if you can. Okay, so that was a breakneck run through the logistics of where we've been and where we're going for the remainder of the course, and as usual, if you have any questions, please direct them to me and Alexander via Piazza or by email. Great, so now let's dive into the really exciting part
and the core technical content for this last lecture. So far, 6.S191 has been a class about deep learning and why deep learning is so powerful. We've seen the rise of deep learning algorithms in a variety of application domains and begun to understand how it has the potential to revolutionize many different research areas and parts of society, ranging from advances in autonomous vehicles and robotics to medicine, biology, and health care, to reinforcement learning, generative modeling, finance, and security; the list goes on. Hopefully, as a result of this course, you now have a more concrete understanding of how you can take deep learning and neural network approaches and apply them in your own research, or in lines of investigation that interest you, and with that some understanding of how the foundations of these algorithms actually function and what that means for enabling these incredible advances. Taking a step back on what this learning problem even means: so far we've seen algorithms that start from raw data and train a neural model to output some sort of decision, a prediction, a classification, an action, which was the case for both the supervised learning examples and the reinforcement learning examples. We've also seen the inverse, where we try to instantiate and create new data instances based on a learned probability distribution over the data, as was the case with unsupervised learning and generative modeling. What's common to both directions is that neural networks, if you abstract everything away, are very, very powerful function approximators: all they're doing is learning a mathematical, computational mapping from data to decision, or vice versa. To understand this in a bit more detail, it's helpful to go back to a very famous theorem proposed in 1989, the universal approximation theorem. At the time it generated quite a stir in the community, because it states, and then proves, a very bold and powerful claim: a feed-forward neural network with just a single hidden layer is sufficient to approximate any arbitrary continuous function to any arbitrary precision. So far in this class we've been exploring the concept of deep neural models, which take individual neural layers and stack them into a hierarchy, but the universal approximation theorem is saying you don't even need to stack layers: all you need is one layer, with as many nodes, as many neurons, as you want, and you should be able to approximate a continuous function to arbitrary precision. So what does this really mean? If you have some problem and you believe you can reduce it to a set of inputs and a set of outputs that are related to each other by a continuous function, then in theory you could build a neural net that solves that problem, that learns that mapping. That's pretty powerful.
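As a concrete statement, here is one standard way to write the theorem down, paraphrased rather than taken from the lecture slides, with $\sigma$ a fixed non-constant, bounded activation function and $K$ a compact input set:

```latex
\text{For any continuous } f : K \to \mathbb{R} \text{ and any } \varepsilon > 0,
\text{ there exist } N \text{ and } \{\alpha_i, w_i, b_i\}_{i=1}^{N} \text{ such that}
\quad
\sup_{x \in K} \left| \, f(x) \; - \; \sum_{i=1}^{N} \alpha_i \, \sigma\!\left(w_i^{\top} x + b_i\right) \right| \; < \; \varepsilon .
```

The inner sum is exactly a single hidden layer of $N$ neurons followed by a linear readout, which is why "one layer is enough" is the headline of the result.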
But if we take a closer look, there are a few caveats to what the theorem is actually stating. First, it makes no claims or guarantees about the number of units, the number of neurons, that might be necessary to achieve this approximation. Second, it leaves open the question of how you would actually find the weights that solve the problem; it doesn't tell you how to go about doing this, it just says that such weights may exist. And finally, perhaps most importantly, the theorem makes no claims about how well this neural network function would generalize beyond the setting it was trained on. This theorem, I think, gets at a larger historical issue in the computer science community, which is the possible overhype about the promise of artificial intelligence, and of deep learning in particular. For us as a community, and you're all here clearly interested in learning more about deep learning, I think we need to be extremely careful about how we market and advertise these algorithms, because while the universal approximation theorem makes a very powerful claim, the potential overhype that could result, from either theory or some of the success deep learning algorithms are enjoying in practice, could in truth be very dangerous. In fact, historically there were two so-called AI winters, in which research in artificial intelligence, and in neural networks specifically, came to an abrupt decline, in part because of concern about the potential downstream consequences of AI and whether these methods would actually be robust and generalizable to real-world tasks. In keeping with that spirit, and being mindful of the limitations of these technologies, in the first part of this lecture we're going to dive into some of the most profound limitations of modern deep learning architectures, and hopefully you'll get a sense of why these limitations arise and start to think about how future research could mitigate them. The first example is one of my favorites, something I like to highlight every year: the idea of generalization. How well will a neural network generalize to unseen examples that it did not encounter during training? There was a really beautiful paper in 2017 that took a very elegant approach to exploring this question. All the authors did was take images from ImageNet, a very famous dataset in computer vision, in which each image is associated with a single class label. For each image they effectively rolled a k-sided die, where k was the number of possible classes, and instead of keeping the existing true class label, they used the result of that random draw to assign a brand-new label to the image. This means the images were no longer associated with their true class labels, just random assignments, and two images that in truth belong to the same class could now be mapped to completely different classes altogether. The effect is that the labels are randomized entirely.
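Just to show how simple that randomization step is, here is a minimal sketch using torchvision's CIFAR-10 as a stand-in dataset; the paper's exact data pipeline isn't specified in the lecture, so the dataset choice and seed here are illustrative assumptions.

```python
import numpy as np
import torchvision

# Stand-in dataset; the experiments being described used standard image benchmarks.
train_set = torchvision.datasets.CIFAR10(root="data", train=True, download=True)

num_classes = 10
rng = np.random.default_rng(0)

# Replace every label with an independent uniform draw over the classes,
# severing any relationship between image content and label.
train_set.targets = rng.integers(0, num_classes, size=len(train_set.targets)).tolist()

# Training on train_set now proceeds exactly as usual; the striking observation is
# that a large enough network still reaches ~100% *training* accuracy on these
# random labels, while test accuracy against the true labels collapses to chance.
```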
Now we have a dataset where the images have completely random labels: what happens if we train a neural network on it? That's exactly what they did. They trained their model on this randomly labeled dataset and then tested it on a test set where the true labels were preserved, and as you might expect, the test accuracy of the network fell off sharply as a function of the degree of randomness imposed in the label assignment. What was really interesting, though, was what they observed on the training set: no matter how much they randomized the labels, the model was able to reach nearly 100 percent accuracy on the training data. What this means is that modern neural networks can basically perfectly fit entirely random data, and it highlights exactly what the universal approximation theorem is getting at: deep neural networks are very powerful function approximators that can fit essentially arbitrary functions, even ones with entirely random labels. To drive this point home even further, consider a 2D example, where we have some points lying on a plane and we're trying to fit a function to them using a neural network. What the network is doing is learning something like a maximum likelihood estimate of the training data in the region where it has observations. If we give our model a new data point, shown here in purple, that falls within the training distribution it has seen before, we can expect the network to produce a reasonable estimate for that point. But what happens in the out-of-distribution regions, to the left and to the right of those purple points? We have absolutely no guarantees on what the data even looks like in those regions, and as a result no guarantees on how the learned function will behave. This is a huge limitation of modern deep neural network architectures: how do we know when our network doesn't know? How can we establish guarantees on the generalization of our network, and how can we use that information to inform training, learning, and deployment? So a slight revision to the idea that neural networks are excellent function approximators: yes, they are, but really only where they have training data. To build on this a little further, there is a popular conception, often inflated by the media and the popular press, that deep learning is basically magic, alchemy, the be-all-and-end-all solution to any problem of interest. That spawns the incorrect belief that you can take some arbitrary training data, apply some magical, beautiful, complex network architecture, turn the crank on the learning algorithm, and expect it to spit out excellent results. That's simply not how deep learning works, and I want you to be really mindful of this, because it's perhaps the most important thing to take away if you're actually going to build deep neural models in practice: the idea of garbage in,
garbage out. Your data is just as important, if not more important, than the actual architecture of the network you're trying to build. This motivates a discussion of what I think is one of the most pertinent and important failure modes of neural networks, highlighting just how much they depend on the nature of the data on which they're trained. Say we have this black-and-white image of a dog and we pass it into a convolutional neural network whose task is to colorize the black-and-white image, to paint it with color. The result, when this example was passed into a state-of-the-art CNN, is what you see on the right. Look closely and you'll notice a pink region under the dog's nose, which, as I hope you can appreciate, is actually the dog's fur, yet the network predicts it should be pink. Why could this be the case? Consider what the data used to train this network probably looked like: pictures of dogs, and among those pictures it's very likely that many show dogs sticking their tongues out, because that's what dogs do. So the CNN trained to colorize black-and-white images may have learned to effectively map the region under a dog's nose to the color pink. What this example really highlights is that deep learning models build up representations based on the data they have seen during training. That raises the question: fine, neural networks depend very heavily on the distribution and nature of the data they're trained on, but what happens when they're deployed in the real world and have to handle data instances they may never have encountered before? Very infamously, and very tragically, a few years ago an autonomous vehicle from Tesla that was operating autonomously ended up crashing in a major accident, resulting in the death of the driver. It turned out that the driver killed in this crash had in fact reported multiple instances over the prior weeks in which the car would behave abnormally, swerving toward a particular highway barrier, which turned out to be the very same barrier the car eventually crashed into. When this was investigated further, using Google Street View imagery, it appeared that the data on which the autonomous system had been built predated the physical construction of that barrier. Effectively, the car had encountered a real-world data example that was outside its training distribution and was unable to handle the situation, resulting in the accident and the death of the driver. These potential failure modes have very significant real-world consequences, and it is exactly these failure modes, and these safety-critical applications, that motivate the need to really understand when the predictions from deep learning models can or cannot be trusted: effectively, when the network's predictions are associated with
a degree of uncertainty. This is an emerging research direction in deep learning that's important to a number of safety-critical applications: autonomous driving, as I highlighted earlier, biology and medicine, facial recognition, and countless other examples. Beyond these real-world applications, the notion of uncertainty is also very important from a more fundamental perspective, when we think about how to build neural models that can handle sparse, limited, or noisy datasets that may carry imbalances with respect to the data or the features that are represented. Jasper, in his guest lecture tomorrow, will focus on this topic of uncertainty in deep learning, give a comprehensive overview of the field, and talk about some of his recent research in this direction. To prepare a little bit for that and set the stage, I'm going to briefly touch on this notion of uncertainty, to build intuition about what uncertainty means and the different types we can encounter in deep learning models. Consider a very simple classification example, where we're given images of cats and dogs and we train a model to output a probability of the image containing a cat or a dog. Importantly, our model is being trained to predict probabilities over a fixed number of classes, in this case two: cat and dog. What could happen when we now feed in an image that contains both a cat and a dog? Because the outputs are probabilities over a fixed number of classes, the network has to return estimates that sum to one, but in truth the image contains both a cat and a dog, so ideally we'd like a high probability estimate for cat as well as a high probability estimate for dog. This example highlights the case where there is noise or variability in the input data, such that even though a traditional model outputs a prediction probability for its classification decision, that decision is not strictly associated with a sense of confidence, a sense of uncertainty, in the prediction. This type of uncertainty, arising from noise or variability inherent to the data, is known as aleatoric uncertainty, or data uncertainty. Now suppose that instead of an image containing both a cat and a dog, we input an image of a horse. Again the network is trained to predict dog or cat, and again the probabilities must sum to one, so yes, it can generate probability estimates, but ideally we want the network to be able to say: I'm not confident in this prediction; this is a high-uncertainty estimate, because this image of a horse is very unlike the images of cats and dogs seen during training. This is an instance where the model is being tested on an out-of-distribution example, and we again expect it not to be very confident, to have high uncertainty, but it's a fundamentally different type of uncertainty than simple data noise or variability. Here we're trying to capture the model's effective confidence in its prediction on an out-of-domain, out-of-distribution example, and this is the notion of epistemic uncertainty, or model uncertainty.
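One common way to surface both kinds of signal in practice, and this is only a rough sketch rather than the methods Jasper will cover, is to combine the entropy of an averaged softmax with Monte Carlo dropout. The architecture, the dropout rate, and the 30-sample count below are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(
    nn.Linear(784, 256), nn.ReLU(), nn.Dropout(p=0.2),   # dropout stays on at test time
    nn.Linear(256, 2),                                     # two classes: cat, dog
)

def predict_with_uncertainty(x, n_samples=30):
    model.train()                         # keeps dropout active (Monte Carlo dropout)
    with torch.no_grad():
        probs = torch.stack([F.softmax(model(x), dim=-1) for _ in range(n_samples)])
    mean_probs = probs.mean(dim=0)

    # Entropy of the averaged prediction: large when the prediction itself is
    # ambiguous, e.g. an image that genuinely contains both a cat and a dog.
    entropy = -(mean_probs * mean_probs.clamp_min(1e-8).log()).sum(dim=-1)

    # Disagreement across dropout samples: tends to be large on inputs far from
    # the training data, e.g. a horse, serving as a rough epistemic signal.
    disagreement = probs.var(dim=0).sum(dim=-1)
    return mean_probs, entropy, disagreement
```

The point of the sketch is simply that the raw softmax output alone does not separate these two effects; you need some extra machinery to get either signal.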
These two types of uncertainty, data uncertainty and model uncertainty, are commonly thought of as the dominant forms of uncertainty in deep neural models, although I will say there is also hot debate in the field about whether they capture all the types of uncertainty that can exist. Jasper is going to dive deeply into this topic, so I'll leave the discussion of uncertainty there and again encourage you to attend his lecture tomorrow, because it will be really exciting. A third failure mode to consider, in addition to issues of generalization, extending to out-of-distribution regions, and predicting uncertainty, is what you may have heard of as adversarial examples. The idea is that we can synthetically construct a data instance that functions as an adversary to the network: it fools the network into generating an incorrect, spurious prediction. We take an image and apply some degree of noisy perturbation to generate an adversarial example which, to our eyes as humans, looks fundamentally the same as the original input image; the difference is that this perturbation has a profound effect on the network's decision. With the original image, the network may correctly classify it as an image of a temple, but after applying the adversarial perturbation the resulting prediction is completely nonsensical, predicting with 98 percent probability that the image is of an ostrich. What is really clever and really important in adversarial attacks is this perturbation piece: it appears to be random noise, but in truth it is constructed in a very deliberate way so as to generate an example that functions as an adversary. To understand how this works, recall the normal training operation, where we optimize a neural network according to some loss, some objective function. We apply gradient descent to optimize an objective or loss function J, and what we adjust over the course of this optimization are the weights W of the network: we fix the input data and its associated label, and ask how small, iterative adjustments to the network weights change the loss with respect to that data. With adversarial attacks, we do in many ways the opposite optimization: now we ask how we can modify the input data, fixing the weights and fixing the labels, and seeking to increase the loss as a function of a perturbation applied to the input. This is how we can actually learn a perturbation that, when applied to an input image or data instance, creates an adversarial attack.
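The fast gradient sign method is one of the simplest instantiations of this "fix the weights, perturb the input to increase the loss" recipe; it is not necessarily the method behind the specific examples shown, but it makes the reversed optimization explicit. The epsilon value and pixel range below are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Return an adversarial copy of x that increases the classification loss.
    x: input batch in [0, 1], y: true labels, epsilon: max per-pixel perturbation."""
    x_adv = x.clone().detach().requires_grad_(True)

    loss = F.cross_entropy(model(x_adv), y)   # same loss used for training...
    loss.backward()                           # ...but we take its gradient w.r.t. the *input*

    with torch.no_grad():
        # Step the input in the direction that increases the loss; weights are untouched.
        x_adv = x_adv + epsilon * x_adv.grad.sign()
        x_adv = x_adv.clamp(0.0, 1.0)         # stay in a valid pixel range
    return x_adv.detach()
```

Notice the symmetry with training: the only thing that changed is which argument of the loss we differentiate with respect to, and which direction we step in.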
An extension of this idea, beyond the 2D case I showed, was recently explored by a group of students here at MIT in Aleksander Madry's research group, who devised an algorithm for synthesizing adversarial examples that remain robustly adversarial over a set of complex transformations like rotations and color changes. They extended this not only in 2D but also to the 3D case, showing that they could use 3D printing to synthesize physical adversarial examples in the real world: you can take a picture of such an object, feed that picture into a model, and the model will misclassify the actual 3D object based on that 2D image. The example they highlighted was 3D-printed turtles, where many of these adversarial turtle examples were incorrectly classified by a network as a rifle, even though the network was trained to predict the class label of the image. The final limitation I'd like to very briefly touch on is the notion of algorithmic bias: the idea, which you've explored through our second lab, that neural network models, and AI systems more broadly, can be susceptible to significant biases, and that these biases can have very real and potentially detrimental societal consequences. Hopefully, through your exploration of the second lab, you've begun to understand how these issues may arise and what strategies can effectively mitigate algorithmic bias. The limitations I've touched on are just the tip of the iceberg; it's certainly not an exhaustive list. Again, in your lab you dove deeply into algorithmic bias in computer vision, and tomorrow's guest lecture will dive into uncertainty and how we can develop methods for robust uncertainty estimation in deep learning. In the second portion of today's lecture, I'm going to use the remaining time to discuss two other classes of limitations, concerning structural information and optimization, and use them to introduce some of the new and emerging research frontiers in deep learning today. The first is the idea of encoding structure into deep learning architectures: imposing domain knowledge, knowledge about the problem at hand, to intelligently design the network architecture itself so that it's better suited to the task. We've already seen examples of this, perhaps most notably in our exploration of convolutional neural networks, where the fundamental operation of convolution was intricately linked to the idea of spatial structure in image data, and we saw how convolution lets us build networks capable of extracting features while preserving spatial invariance. But beyond images and sequences, there are many other types of data structures, often more irregular than a standard 2D image. One really interesting data modality is graphs, which are a very powerful way to represent a vast variety of data types: social networks, abstract state machines from theoretical computer science, networks of human mobility and transport, biological and chemical molecules, and networks of proteins and other biological modules interacting within cells or within the body. All of these kinds of data and application areas are very amenable to being thought of as graph-structured, and this has motivated a rapidly emerging field in deep learning:
extending neural networks beyond "standard" encodings and data structures to capture more complicated data geometries, such as graphs. So today we're going to talk a little about graphs as a data structure and how they can inspire a new kind of network architecture that is related to convolution but also a bit different. To set this up, recall how convolutional neural networks operate on 2D image data: as we saw yesterday, the idea is to take a filter kernel and iteratively slide it over the input image or input features, across the entire 2D matrix of numbers, in order to extract local and global features in a way that is spatially invariant and preserves spatial structure. The key takeaway is that we're taking a smaller filter matrix and sliding it over a 2D input. This idea of taking a filter of weights and applying it iteratively across a more global input is exactly what is implemented in graph convolutional neural networks, except that now we represent our data not as a 2D matrix but as a set of nodes and a set of edges that connect those nodes and preserve information about the relationships of the nodes to one another. Graph convolutional networks work by learning the weights of a feature kernel, which, just as in convolutional networks, is a weight matrix; but rather than sliding that kernel over a 2D matrix, the kernel effectively traverses the graph, visiting the different nodes. At each step of this traversal, it looks at the neighboring nodes of the current node, according to the edges of the graph, and applies matrix multiplication operations to extract features about the local connectivity of the graph structure. This process is applied iteratively over the entire graph, so that the learning process can pick up weights that extract information about the patterns of connectivity and structure present in the graph. The process repeats across all the nodes, and at the end of this iterative operation we aggregate the information picked up by each of the node visits and use it to build a more global representation of the feature space for the graph example shown here. This is a very brief, high-level overview of the idea behind graph convolutional networks, but hopefully you get some intuition about how they work and why they can be very relevant to a variety of data types, data modalities, and application areas.
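As a minimal sketch of the aggregation step just described, here is one simple graph convolution layer in this spirit. The mean-over-neighbors normalization and the toy four-node graph are illustrative choices on my part, and libraries such as PyTorch Geometric provide more efficient and more general implementations.

```python
import torch
import torch.nn as nn

class GraphConvLayer(nn.Module):
    """One round of 'apply a shared weight matrix, then average over each node's neighbors'."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, node_feats, adj):
        # node_feats: (num_nodes, in_dim); adj: (num_nodes, num_nodes) 0/1 adjacency matrix.
        adj_hat = adj + torch.eye(adj.shape[0])            # each node also sees itself
        deg = adj_hat.sum(dim=1, keepdim=True)             # node degrees for normalization
        mixed = (adj_hat @ self.linear(node_feats)) / deg  # mean over the local neighborhood
        return torch.relu(mixed)

# Tiny example: a 4-node path graph with 8-dimensional node features.
adj = torch.tensor([[0, 1, 0, 0],
                    [1, 0, 1, 0],
                    [0, 1, 0, 1],
                    [0, 0, 1, 0]], dtype=torch.float32)
feats = torch.randn(4, 8)
out = GraphConvLayer(8, 16)(feats, adj)   # (4, 16) features mixing each node with its neighbors
```

Stacking several such layers lets information propagate further than one hop, which is the graph analogue of growing a convolutional receptive field.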
Indeed, graph neural networks are now being used in a variety of domains, ranging from chemistry to biology, where small molecules can be modeled with a graph-like structure: the atoms and elements of the molecule are naturally represented as nodes, and the local bond structure connecting those atoms as edges, preserving local information about what the structure of the molecule looks like. In fact, these very same graph convolutional networks were used a couple of years ago to discover a novel antibiotic compound called halicin, which had very potent antibiotic properties and was structurally completely dissimilar from traditional classes of antibiotics, so this idea of imposing a graph structure to model and represent small molecules has tremendous applications in drug discovery and therapeutic design. Other application areas include urban mobility, urban planning, and traffic prediction: it turns out that Google Maps uses graph-based architectures to model the flow of traffic in cities and to improve its estimates of the time of arrival returned to the user, which is a functionality I know all of us are very likely to appreciate. And in the past couple of years, due to the COVID-19 pandemic, there was a lot of excitement, particularly at the start of the pandemic, about applying deep learning and AI to various problems related to COVID-19, from the public health perspective as well as the fundamental biology and diagnosis perspectives; one example using graph neural networks incorporated both spatial and temporal data to perform very accurate forecasting of the likely spread of COVID-19 in local neighborhoods and communities. A final example of a different class of data we may encounter, in addition to graphs, sequences, and 2D images, is 3D sets of points, often called point clouds. These are effectively unordered sets, clouds of points, where there is still some degree of spatial dependence between the points, and much as with convolutional networks and 2D images, we can pose many of the same kinds of prediction problems on these 3D data structures. It turns out that graph convolutional networks extend very naturally to point cloud datasets, and the way this is done is by dynamically computing a graph, effectively a mesh, in the 3D space where the point cloud exists; you can think of this graph structure as being imposed on the point cloud manifold, using the graph to preserve the connectivity of the points and maintain spatial invariance. Really cool work is being done in this domain in applications like neural rendering and 3D graphics.
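A sketch of that "dynamically compute a graph from the geometry" step for a point cloud might look like the following k-nearest-neighbor construction; the value k = 8 and the use of plain pairwise Euclidean distances are assumptions for illustration, not a specific method from the lecture.

```python
import torch

def knn_graph(points, k=8):
    """points: (num_points, 3) xyz coordinates.
    Returns (num_points, k) indices of each point's nearest neighbors, i.e. the
    edges of a graph computed on the fly from the geometry of the cloud."""
    dists = torch.cdist(points, points)        # pairwise Euclidean distances
    dists.fill_diagonal_(float("inf"))         # a point is not its own neighbor
    return dists.topk(k, largest=False).indices

cloud = torch.randn(1024, 3)                   # a random stand-in point cloud
edges = knn_graph(cloud, k=8)                  # these edges can feed a graph conv layer
```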
machine learning is what if we could what if we could use ai what if we could use machine learning to actually solve this design problem in the first place and so the goal here is to try to build a learning algorithm that learns which model specified by its its hyper parameters its architecture could be most effective at solving a given problem and this is this idea of automl so the original automl framework and there have been many many extensions and efforts that have extended beyond this used a reinforcement learning setup where there were these two components the first being what they called a controller neural net architecture and the function of this controller architecture was to effectively propose a sample architecture what the model potentially could look like and the way is this this is defined is in terms of the hyper parameters of that network right the number of filters the number of layers so on and so forth and then following this effective spawning of a sample candidate architecture by the controller network that network was then in turn trained to get some accuracy some predictive accuracy on a desired task and as a result of that the feedback and the performance of that actual evaluation was then used to inform the training of the controller iteratively improving the controller algorithm itself to improve its architecture proposals for the next round of optimization and this process is iteratively repeated over the course of training many many times generating architectures testing them giving that the giving the resulting feedback to the controller to actually learn from and eventually the idea is that the controller will converge to propose architectures that achieve better accuracies on some data set of interest and will assign low low output probabilities to architectures that perform poorly so to get a little bit more sense about how these agents actually work the idea is that at each step of this iterative algorithm your controller is actually proposing a brand new network architecture based on predictions of hyperparameters of that network right so if you had a convolutional network for example these parameters could include the number of filters the the size of those filters the degree of striding you're employing so on and so forth and the next step after taking that child's network is to then take your training data from your original task of interest use that child network that was spawned by this rnn controller and generate a prediction and compute an accuracy from that prediction that could then be used to update the actual rnn controller to propose iteratively better and better architectures as a function of its training and more recently this idea of of automl has been extended from this original reinforcement learning framework to this broader idea of neural architecture search right where again we're trying to search over some design space of potential architecture designs and hyperparameter hyper parameters to try to identify optimal optimally performing models and so this has really kind of exploded and is very commonly used in modern machine learning and deep learning design pipelines particularly in industrial applications so for example designing new architectures for image recognition and what is remarkable is that this idea of automl is not just hype right it turns out that these algorithms are actually quite strong at designing new architectures that perform very very well and what you can see on this plot that i'm going to show on the right is 
What is remarkable is that this idea of AutoML is not just hype: these algorithms turn out to be quite strong at designing new architectures that perform very well. On the plot on the right you can see the performance on an image object recognition task of networks designed by humans, and in red the performance of architectures proposed by an AutoML algorithm; what you can appreciate is that the neural architecture search and AutoML pipeline was able to produce architectures that achieved superior accuracy on this image recognition task with fewer parameters. More recently, there has been a lot of interest in extending this concept of AutoML to the broader idea of AutoAI: designing entire end-to-end data processing, learning, and prediction pipelines, from data curation all the way to deployment, and using AI and machine learning algorithms to optimize the components of that process itself. I encourage you to think about what it would mean if we could actually achieve this capability of designing AI that generates new neural networks and new machine learning models that are highly performant on tasks of interest. Of course it would reduce our troubles and the difficulty of designing the networks ourselves, but it also gets at the heart of a broader question about what it actually means to be intelligent, alluding back to how Alexander opened the course and started this lecture series, and I hope that as a result of our course and your participation you've gained a bit more appreciation for the connections and distinctions between human learning, human intelligence, and some of the deep learning models we've been exploring this week. With that, I'll conclude the lecture, and finally remind you about our open office hour session, which will run from now until about 4 p.m. We'll be there to answer questions about the labs and the lectures, and importantly, in 10-250, in person, we will be distributing the class t-shirts, so please come by right after this. Once again, thank you all so much for your attention and participation; we really enjoy doing this every year, and I hope to see many of you shortly in 10-250. Thank you so much.
MIT_6S191_Introduction_to_Deep_Learning
MIT_6S191_2023_TexttoImage_Generation.txt
Thanks for the invite. I'm a research scientist at Google Research, just across the street, and I want to talk today about a paper we just put up on arXiv, a new model for text-to-image generation via masked generative transformers; as you can guess from these figures, the name of the model is Muse. This is work done with an awesome set of colleagues at Google Research, and the paper and a web page are online. Many of you must be familiar with text-to-image generation, which has really advanced in the last year or two, and a first question might be: why text-to-image generation? I think it's because text is a very natural control mechanism for generation. We're able to express our thoughts and creative ideas through text, which allows non-experts and non-artists to generate compelling images and then iterate on them through editing tools to create their own personal art and ideas. Also, very importantly, deep learning requires lots of data, and it is much more feasible to collect large-scale paired image-text data; an example is the LAION-5B dataset, on which models such as Stable Diffusion have been trained. We should recognize that various biases exist in these datasets, and bias mitigation is an important research problem. Lastly, these models can exploit pre-trained large language models, which are very powerful: they allow for extremely fine-grained understanding of text, parts of speech, nouns, verbs, adjectives, and the ability to translate those semantic concepts into output images. Importantly, these LLMs can be pre-trained on various text tasks with orders of magnitude more text-only data, and paired data is then used for the text-to-image translation training. So what's the state of the art? You must be familiar with many of these. DALL-E 2 from OpenAI was one of the first; it's a diffusion model built on pre-trained CLIP representations (I won't have time to go into each of these terminologies, so I'm sorry if some of them are unfamiliar). Imagen from Google is also a diffusion model, built on a pre-trained large language model. Parti, another large-scale model from Google, is an autoregressive model over a latent token space. And Stable Diffusion, or latent diffusion, released by Stability AI, is a diffusion model over latent embeddings. Here's an example comparing DALL-E 2, Imagen, and the Muse model on a particular text prompt; I'll revisit this example and point out some pros and cons of the different models later. Some image-editing applications have been built on these models. One example is personalization via a tool called DreamBooth, a paper written by some of my colleagues: the idea is that instead of generating, say, an image of a cat, you can generate an image of your cat, by fine-tuning the model on some images of your own cat and then doing text-guided generation of your cat. That has proven extremely popular; a couple of apps built on the DreamBooth tool have gone to the top of the App Store, and many people have used it to generate their own avatars for social media. Another example is fine-tuning for instruction following, so that one can do various kinds of flexible, mask-free editing; for example, on this top-right image we say "add fireworks to the sky," and after fine-tuning the model has an understanding of concepts such as the sky and fireworks,
the sky and fireworks, and it's able to do a reasonably good job of following the instruction. So how is Muse different from the prior models that I listed? Firstly, it's neither a diffusion model nor is it autoregressive, although it has connections to both the diffusion and the autoregressive families of models. It is extremely fast: for example, a 512 by 512 image is generated in 1.3 seconds, instead of about 10 seconds for Imagen or Parti and about four seconds for Stable Diffusion on the same hardware. And on quantitative evals such as CLIP score and FID, which are measures of how well the text prompt and the image line up with each other (that's the CLIP score) and of the image quality itself, its diversity and fidelity (that's FID), the model performs very well. So it has similar semantic performance and quality as these much larger models but significantly faster inference, and it has significantly better performance than Stable Diffusion. All of these statements just hold true at this point in time; all of these models keep improving, so there could be a new model that does even better next week. And some applications that are enabled by the way we train the model are zero-shot editing, so I'll show some examples of that: mask-free editing, inpainting to remove objects, and uncropping to expand beyond the boundaries of an image. So I'll give a quick overview of the architecture here and then go into the individual components one by one. Muse is mostly a Transformer-based architecture for both the text and image parts of the network, but we also use CNNs, vector quantization, and GANs, so we're using a lot of the toolkits from the modern deep network toolbox. We use image tokens that live in the quantized latent space of a CNN VQGAN, and we train with a masking loss similar to the masking loss used in large language models. So the model is basically learning how to reconstruct masked tokens: you pass in a set of tokens, mask a bunch of them at random, and then learn to predict them, and just by doing that with variable masking ratios the model gets very good quality generation. We have two models: a base model that generates 256 by 256 images, and then a super-resolution model that upscales those to 512 by 512. Okay, so the first major component is the pre-trained large language model. We use a language model trained at Google called the T5-XXL model, which has about five billion parameters and was trained on many text-to-text tasks such as translation, question answering, and classification. When a text prompt is provided, it's passed through the text encoder and we get a sequence of vectors of dimension 4096, which are then projected down to a lower-dimensional sequence, and that set of text tokens is passed into the rest of the network, where we use cross-attention from the text to the image tokens to guide the generation process. The next component is the vector-quantized latent space. For this we first built a VQGAN, which is simply a form of autoencoder with vector quantization built into the latent space, and the reason we use a quantized set of tokens (in most of the models we use about 8,000 tokens, 8,192 to be exact) is that this is amenable to a cross-entropy, classification-type loss: when you want to predict which token is missing, you just ask which of the 8,192 codebook entries it needs to be, and that's a classification loss; a minimal sketch of this codebook lookup appears just below.
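A minimal sketch of the codebook lookup that turns continuous encoder features into discrete token ids. This is illustrative only, not the actual Muse/VQGAN code; the feature dimension and the random data are made up, while the 8,192-entry codebook and the 16-by-16 latent grid match the sizes quoted in the talk.

```python
# Illustrative only: maps continuous encoder features to discrete codebook
# indices, which is what lets a masked-token model train with a
# cross-entropy (classification) loss instead of pixel regression.
import numpy as np

def quantize(features, codebook):
    """features: (N, D) continuous latents; codebook: (K, D) learned entries.

    Returns (indices, quantized) where indices[i] is the id of the nearest
    codebook entry and quantized[i] is that entry itself.
    """
    # Squared Euclidean distance between every feature and every codebook entry.
    d = (
        np.sum(features ** 2, axis=1, keepdims=True)        # (N, 1)
        - 2.0 * features @ codebook.T                        # (N, K)
        + np.sum(codebook ** 2, axis=1)[None, :]             # (1, K)
    )
    indices = np.argmin(d, axis=1)        # (N,) integer token ids in [0, K)
    return indices, codebook[indices]

# Toy numbers: a 16x16 latent grid (256 tokens) and an 8,192-entry codebook;
# the 64-dimensional features are an assumption for the sketch.
rng = np.random.default_rng(0)
codebook = rng.normal(size=(8192, 64)).astype(np.float32)
latents = rng.normal(size=(16 * 16, 64)).astype(np.float32)
token_ids, _ = quantize(latents, codebook)
print(token_ids.shape, token_ids.min(), token_ids.max())    # (256,) ids in [0, 8192)
```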
We find that this works much more effectively than a regression loss, where you might try to predict the exact pixel values of the missing token. The VQGAN has an entirely convolutional encoder–decoder structure, which we found performs better than either Transformer-based VQGANs or hybrid approaches that mix Transformers and CNNs. We use a downsampling ratio of 16, so when you go through the encoder layers of the CNN you get latents of size 16 by 16 from a 256 by 256 input, and the super-resolution model uses another VQGAN with latents of size 64 by 64; those are larger latents because we want more information in each of those super-resolution latents. This VQGAN is built on work from a paper called MaskGIT, which is also from our group. So a key component of our training is this idea of variable-ratio masking: given an image, it goes through the VQ tokenizer and you get a sequence of, say, 196 tokens; what we do is drop a variable fraction of those tokens and pass the rest into the network, which learns to predict the dropped ones. So unlike LLMs, where you usually have a fixed ratio of tokens that's dropped, here we use a variable distribution which is biased towards a very high value of about 64 percent of the tokens being dropped, and we find that this makes the network much more amenable to editing applications like inpainting and uncropping: since the mask ratio is variable during training, you can pass in masks of different sizes at inference time and the model can handle them (a small sketch of this variable-ratio masked-token objective appears just below). Okay, so here's the base model, which is producing 256 by 256 images. This is a Transformer-based model, the base Transformer, that takes in the masked image tokens and also the text tokens, so we have cross-attention from the text tokens to the image tokens and self-attention between the image tokens. During training, all the tokens are predicted in parallel with the cross-entropy loss, but during inference we find that it is better to do an iterative schedule where we predict a subset of the tokens first: we choose the tokens with the highest confidence, pass those back in as unmasked tokens, and repeat this process for about 12 or 24 steps until all of the tokens are unmasked; we find that this significantly increases the quality of the result. And then the super-resolution model upsamples the 256 by 256 image to 512 by 512; importantly, it does this in token space by transforming the latents from 16 by 16 to 64 by 64.
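A minimal sketch of the variable-ratio masked-token training objective described above. It is not the actual Muse training code: the tiny Transformer, the token counts, and the cosine-style mask-ratio distribution (chosen here so the mean lands near the roughly 64 percent quoted in the talk) are assumptions, and text conditioning via cross-attention is omitted for brevity. Only the overall recipe — mask a random fraction of the tokens and predict the masked ids with cross-entropy — follows the talk.

```python
# Hypothetical sketch: variable-ratio masking + masked-token cross-entropy.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB = 8192          # codebook size mentioned in the talk
MASK_ID = VOCAB       # extra "[MASK]" token id
SEQ_LEN = 256         # 16 x 16 latent grid for the base model

class TinyMaskedTransformer(nn.Module):
    def __init__(self, d_model=256, n_heads=4, n_layers=2):
        super().__init__()
        self.tok = nn.Embedding(VOCAB + 1, d_model)   # +1 for [MASK]
        self.pos = nn.Embedding(SEQ_LEN, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, VOCAB)         # classify the missing token

    def forward(self, ids):
        pos = torch.arange(ids.shape[1], device=ids.device)
        h = self.encoder(self.tok(ids) + self.pos(pos))
        return self.head(h)                           # (B, SEQ_LEN, VOCAB)

def training_step(model, image_tokens):
    """image_tokens: (B, SEQ_LEN) integer ids from the VQ tokenizer."""
    B, N = image_tokens.shape
    # Sample a masking ratio per example, biased towards high values.
    u = torch.rand(B, device=image_tokens.device)
    ratio = torch.cos(u * math.pi / 2.0)
    n_mask = (ratio * N).clamp(min=1).long()

    masked = image_tokens.clone()
    is_masked = torch.zeros(B, N, dtype=torch.bool, device=image_tokens.device)
    for b in range(B):                                # simple per-example masking
        idx = torch.randperm(N, device=image_tokens.device)[: n_mask[b]]
        masked[b, idx] = MASK_ID
        is_masked[b, idx] = True

    logits = model(masked)
    # Cross-entropy only on masked positions: "which of the 8,192 tokens is it?"
    return F.cross_entropy(logits[is_masked], image_tokens[is_masked])

model = TinyMaskedTransformer()
fake_tokens = torch.randint(0, VOCAB, (2, SEQ_LEN))   # stand-in tokenizer output
print(training_step(model, fake_tokens).item())
```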
and we use cross attention from the text embedding as well as the low res tokens and pass it into the high-res Transformer so that's we Mask The high-res Tokens but the low res are left unmasked the text embedding is left unmasked and the supervised model is trained to predict the Mast high-res tokens so you can see the effect here especially on the text the the low res output of 256 by 256 it is hard to see the text and then once it's corrected in token space you can you can read the text the facial features of the hamster are much more clear many details are are reconstructed in token space what we found is that a Cascade of models is actually very important if you directly try to train a 512 by 512 model it tends to focus the model tends to focus too much on the details uh whereas if you first train a low-res model you get the overall semantics or the layout of the scene right somewhat like what an artist might do where you you get the scene right first and then start filling in the details uh so one of the things we are looking at is how to increase uh maybe increase the depth of this Cascade go from 128 to 256 to 512 see if that improves further the quality um so here you can see um so so here's a comparison of our token based super resolution comparing to a diffusion based pixel based super resolution so here on the left the prompt the text prompt here is a rabbit playing the violin and this is the two these are two samples from the model um of 256 by 256 resolution and on the top right is the result of our token based super resolution output so you can see that the musical notes the details on the violin all of these are much more clear but if we just took this and did a diffusion based super resolution which is just pixel hallucination that doesn't look so good so our token super resolution plays well with our token based approach um so another important kind of a heuristic or a trick is something called classifier free guidance which we found is crucial at inference time to trade off diversity and quality um what this um this trick does is to Simply trade off sorry um push the logits away from unconditional generation and towards text conditional generation so you can think of the scale as the amount by which you're pushing away from unconditional generation and here's a comparison the image on the left the two images on the left were generated by with the guidance scale of 0.5 so there's more diversity in the poses of the cat and the backgrounds but it's it's not as there are there are more errors for example here on the mouth of the cat compared to the two images on the right which are generated with a much larger guidance scale so during training um we just dropped some tokens with a probability of 10 I'm sorry we dropped conditioning with a probability of ten percent uh but then at inference time we choose a particular guidance scale um another trick so many tricks here to make the images look good is something called negative prompting so the idea here is that there are Concepts which one cannot say in the text prompt itself for example if you would like to generate an image but not have trees in it it's somewhat cumbersome to say that in the text prompt so so the classifier free guidance allows us to say generate this scene but don't have trees in it and we do that by pushing away from the negative prompt so here's an example um a text the text prompt says cyberpunk Forest by Salvador Dali so that generated these are two examples um that were generated using that prompt and 
then I added a negative prompt where I said I don't want trees I don't want green and blurry that was the text prompt that I provided and then it generates these other kinds of styles which seem to respect the negative prompt so that is a very useful trick um for for generating images closer to the things you're thinking of in your head this is um yeah so this is the iterative decoding I mentioned earlier at inference time and you can see that uh decoding tokens in multiple steps is very helpful for high quality generation so here's a sequence of unmasking steps which are generating tokens up to 18 steps and there's a steady Improvement in the quality of the output our base model uses 24 steps and the super resolution model uses eight steps but these number of steps are significantly lower than the 50 or 1000 steps that are required by diffusion models and this is actually one of the most important reasons for the speed of our approach so we get like a 10x lower number of forward props that we need to push through the model however these days there's these ideas of progressive distillation which can potentially reduce the number of steps for diffusion models and we hope to exploit that to further reduce our sampling steps okay so we did some so now on to evalse that that was a quick tour of the entire model so we did some qualitative evals where we took a set of 1650 prompts from the party paper and generated images from our model and the stable diffusion model and sent it to raters to answer the question which of the two images one from our model one from stable diffusion which of them matches the prompt better so the Raiders preferred our model 70 of the time compared to 25 percent of the time for stable diffusion and for the remaining a few percent they were indifferent so we removed those from this plot um we also did some qualitative evals on various properties of the model how well does it respect things like cardinality so here so here it's about you know counting so we have three elephants standing on top of each other we get three elephants uh four wine bottles so one two three four a tiny football in front of three yellow tennis balls so it seems to respect all of those however when the num the the count goes up Beyond six or seven the model tends to start making mistakes uh here we are looking at different styles so here's a portrait of A well-dressed raccoon oil painting in the style of Rembrandt uh pop art style and a Chinese ink and wash painting style other evals are about composition geometric composition three small yellow boxes on a large Blue Box and a large present with the red ribbon to the left of a Christmas tree Etc we find that it's if you have want to render text then it does a reasonable job if there's not too many words in the text so one word or two words the model is able to render them well and we're also able to provide um very long detailed uh prompts so here's an example an art gallery displaying Monet paintings the art gallery is flooded robots are going around the art gallery using paddle boards so a lot of this power comes from the language model itself which is really really powerful and able to represent each of those Concepts in the embedding space and what we're able to do is to map that to pixels it's still mind-blowing though it's still amazing that we can get these kinds of outputs from text prompts here are some failure cases like I mentioned um here we asked the model to render a long a number of words and it did not do such a good job 10 wine 
bottles and it stopped at one two three four five six seven and so on here's a subjective comparison to other state-of-the-art models so um one thing to say is that in this space the evaluations are in my opinion not very robust uh because by definition often the text prompts and the Styles we ask for are not natural we want various mix and match of styles and so in important open research question in my mind is how do you evaluate that model A is better than model B other than just looking at some results I think that's a very interesting and open uh question so here so here I'll just um point out a couple of things with the uh the example at the bottom which is a rainbow colored penguin and dalitu is generating Penguins but doesn't do such a good job with the colors whereas the Imagine and Muse models seem to be able to respect both and we think this is probably because the model is relying on a clip embedding which might lose some of these details um we did some quantitative eval on the metrics that we have which is FID and clip on a data set called cc3m and compared to a number of other models both in diffusion and autoregressive type models um here for FID score lower is better and for clip higher is better so overall we seem to be scoring the best on both of these metrics and here's a the eval on Coco comparing to many of the state-of-the-art models like Dali Dali 2 imagine party and um we are almost as good as the party 20 billion model um uh just slightly worse in FID score but doing significantly better on clip score um so so that's good which means that we are able to respect the semantics of the text prompt better than those models if one was to believe the clip score and finally a runtime on TPU B4 Hardware where here's the model the resolution that is generated and the time wall clock time that it took so most of the compute goes into the super resolution the base model takes 0.5 seconds and then it takes another 0.8 seconds to do the super resolution for a total of 1.3 seconds foreign so um now here are some examples of the editing that are enabled by the model on the left is a real input image and we've drawn a mask over one part of the image and then ask the model to fill that in with a text guided prompt so a funny big inflatable yellow duck hot air balloons and futuristic streamline modern building so all that came just zero shot we didn't do any fine tuning of the model but the fact that we trained it with variable masking allows it to do this out of the box so another example where we do out painting so the mask here is the rest we only want to keep the um this person and replace the rest of the image and the text guided prompt was provided to fill in the rest of the region so the London Skyline a wildflower Bloom at Mount Rainier on the Ring of Saturn uh we can also do other kinds of editing uh where we took a real image with there was a negative prompt use a man wearing a t-shirt and then these were the positive prompts to to do various kinds of style transfer uh on the on the on the T-shirt and here are some examples of mask free editing where on the top row are input images and on the bottom are the transformed outputs where it just relies on the text and attention between the text and images to make various changes for example here we could we say ashiba Inu and that the model converts that cat to a Shiba Inu dog a dog holding a football in its mouth so here the dog itself changes and then this ball changes to a football a basket of oranges where the model is able 
to keep the general composition of the scene and just change the apples to oranges the basket texture has also changed a little bit but mostly the composition is is kept similar uh here it's able to change the just um make the cat yawn without changing the composition of the scene and one of the things we are exploring is how we can have further control of the model uh you know really be able to adjust specific parts of the image without affecting the rest um yeah and here's a an example of iterative editing where uh the the the image at the top was provided and the top right and then on the bottom right is the output across for a croissant next to a latte with a flower latte art and these are we we ran the editing for a hundred steps uh progressively adjusting the tokens and these are the results of the different iterations of of train of inference so you can see that it starts with the cake and the latte and then progressively transforms the cake to something that looks in between the cake and the croissant and then finally it looks like the croissant and similarly the latte art changes from a heart to a flower uh you know kind of um some kind of an interpolation in some latent space so because of the speed of the model there's some possibility of interactive editing so I'll just play this short clip which shows how we might be able to do interactive work with the model so that's the real time as in it it's not sped up or slowed down it's not perfect but you can see the idea okay so um next steps for us are improving the resolution quality handling of details such as rendered text probing the cross attention between text and images to enable more control exploring applications um so yeah the paper and web page are listed here and I'm happy to take questions thank you foreign thank you so much maybe I have one question just to get things started um I'm curious in your opinion what are the most important contributions that Muse made specifically to achieve the very impressive speed up results right because in comparison to the Past methods it's really like a huge gap so I'm curious what like what across the many contributions led to that in your opinion yeah it was primarily the parallel decoding so in Auto regressive models by definition you decode one token at a time so you have to go let's say the image consists of 196 tokens then you're doing 196 steps because you unmask one token pass it back in get number two and so on in diffusion models what you have to do is to denoise step by step yeah you start with Pure Noise pass it through the network get something out then pass it back in and repeat this process and that process needs to be fairly slow otherwise it breaks down a lot of the research is about how to speed that up so that takes thousands of steps so if you have a comparable model there's just a fundamental number of forward props that need to be done so what we did instead was to go for parallel decoding and they instead of one token at a time you just do end tokens at a time and then if it's fixed and you just need 196 by n steps and the idea is that if you use high confidence tokens then they are potentially conditionally independent of each other at each step uh and we can each of them can be predicted without affecting the other so that seems to hold up very interesting [Music] so the prompt is allowing you to navigate the latent space of the image itself have you done any analysis or looked at what is the relationship between the prompting and the directions or velocity or 
any other sort of metric that you can look at in that latent representation exploration yeah it's a good question we haven't yet done a very thorough investigation uh what we looked at was uh and I don't have it here but we looked at some cross attention Maps uh between the text embeddings and the and the image the generated image and what you can see is that um you know um the uh it does what you might expect which is that uh there's cross attention between the different nouns and the objects in the image when you have a verb that connects two objects it seems to then highlight both those objects uh sort of paying attention to that so that's one level of exploration but I think we need to do more about like as we walk around in latent space what's the resulting image if we move between two text prompts what happens we we don't know yet if the latent space is smooth or it has abrupt jumps the editing suggests some smoothness here as you iterate but this is with a fixed prompt not with the changing prompt um so I think we need to do more any other questions foreign [Music] yeah I had a question about like the cardinality portion of the results um like one of the failure cases show that like the model can't really handle if you give it more than like six or seven of the same item but sometimes when you don't specify anything in the prompt it automatically generates like more than that of the number of items do you know why it breaks down when you give it like six or seven or eight I think it's my feeling and again we haven't analyzed this is that it's just that fewer num small numbers are more common in the data we just have a few two three four uh and then the model has just never seen uh 10 or 20 and then there's also not the way we train it there's not this concept of graceful degradation from 10 to many uh where it might just say Okay a crowd of people and then you generate what looks like 30 people I think we need more there have been papers that are trying to overcome this problem uh I think explicitly through just enriching the data I think more clever Solutions are needed thank you [Music] when you do not specify a background of a request how does it happen that is there a limited uh amount of backgrounds that it has to choose from randomly or uh is there some sort of a generator that it just how does it work when you do not specify the background for example at all yeah it's a great question so one thing we did was just type in nonsense into the text prompt and see what happens and it seems to generate just random scenes like of of you know Mountains and the beach and so on it doesn't generate nonsense so we think that the code book in the latent space is dominated by these kinds of backgrounds and somehow that's what gets fed in when you go through the decoder so um yeah I don't have a better answer than that hi thanks for the talk on kind of the Mask free and painting super interesting um a lot of the prompts you showed had like the the correction was a small change from maybe the next slide I think um one more sorry I'm looking for the one with the dog with the football in his mouth yeah yeah right many of these are kind of small changes from what's in there right you're going like give it the input as a basket with apples in it you're saying oranges have you tried if the like editing text is completely different like you go from Dog With A basketball to a cat bowling or something like that yep that's even something crazy like a city that doesn't work so we've tried that uh it 
doesn't uh so I think something like the instruct picks to picks uh that it works for that because they explicitly train it with large edits part of the problem could be the way we do the editing which is based on these small back prop steps which just allow you to do local changes so we don't know if it's a limitation of the model or a limitation of the gradient this the the SGD steps we do when we're doing the editing so what happens with the editing is you start with the original image then take the text prompt and just keep back propping till you know it settles down converges um say 100 steps and so each of those steps like I showed here is small changes and if you want something like what you described you need to have the ability to kind of jump faster 300 steps maybe if you did it for a million steps you could have that change we haven't yet looked at that yeah it's almost like hard to imagine even what that middle ground would look like yes like do you have to pass through some Valley of unrealistic images to get to this very large change here each of those sub steps look like believable things so it could be an optimization problem thank you maybe extension on that same question I'm curious for the case where the not the entire image is changing but maybe at the Image level you're changing like the style for example right like the main content is maintained but the style is being changed and modified is that have you tried these type of things before um uh yes I think so like I don't have example maybe something like the uh maybe something like this um I think um it seems much harder to do realistic pose changes like the kinds um he was asking about compared to these Global style changes because I think the global style is controlled by maybe one or two elements of the code book or something like that whereas to to change pose or make drastic geometry changes might require much more interaction among the different tokens yeah good question I thank you for that great talk I was wondering how does the model determine which picture is more accurate to the description or which is which has a better resolution like what part of the model determines that without needing to needing the human oversight so it doesn't in practice well are you talking about training or inference uh with the inference so we what we do is just use random seeds and generate a whole bunch of images and then we just pick out the one we like so often what happens is that if you have eight images or 16 three or four of them would be really nice and then a few of them would look really bad and we still don't have a self-correcting way or an automated way to say this image matches the prompt better um so yeah so so in general the hope is that the the latent space has been trained in a sensible way so that it will generate plausible images okay uh but for example you might generate a cat with you know the legs all in the wrong position and then it's still using the right elements of the codebook but it arranged them wrongly so we do see those kinds of mistakes okay so for your next step in improving the resolution do you imagine um you'd go about it the same way yeah okay yeah we'd have to some okay thank you those issues okay one more question um so I kind of have two questions in one so my first question is uh I'm not sure what are the limitations on the size of the Corpus for the text and for the images um but say uh there's like a new artist that hasn't come out yet or like will in the next two months um if I 
were to ask in the prompt say I want you to draw this and the style of this thing that isn't invented yet how would the model react to that change and um a separate question is with these especially in the example with the masked images the top row and the bottom row how much new information are we actually getting from these uh images that the model spits out so it's a it's a great um great questions so for the for the data clearly the data is biased towards famous artists and that's why we can say things like you know in the style of Rembrandt or Monet and it has seen many examples of those because this is data scraped from the web if you had a new uh like the style of a new artist the only current way to do it would be through fine tuning where we take those images of the person pair it with some text and then kind of train the model to generate that this is kind of what the dream boot approach tries to do although that's specific to you know objects rather than the style but you could imagine using something like this for the style of a new artist um it's not yet zero shot where you just present these examples and say can you generate something based on this text prompt but in that style so that's something we would like to work towards regarding the masking I didn't fully follow the question so you said something about the top and bottom rows um yeah I think it was uh one of the slides way past this one but it was a pretty general question it was how much new information are we actually gaining from these I guess pictures that haven't like no one has seen before like with the bearer on the bicycle Oh you mean yeah are you talking about memorization versus like is it actually is do you mean if this is already in the data set the training data set um yeah yeah so this is actually a great question and I think I don't think we still have very good answers to this um you know is it is a large language model just hallucinating some things it's seen before just mix and match probably uh it's probably this is also doing something like that where it's seen Bears before and bikes and has been able to put them together in a plausible way it's unlikely that the exact same image was in the data set but again we don't have really good tools yet to go in like like we could try to search based on some kind of embedding clip embedding or something to look for similar images uh uh but at the scale at which we are training hundreds of millions of images we haven't actually gone in and looked to see how much this memorization versus new combination of Concepts I do think it's the latter because it seems unlikely that you know these kinds of images would be in the training data set okay thank you thank you very much let's give tulip one more round of applause thank you
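To make concrete the iterative parallel decoding that the speaker credits for most of Muse's speed-up (a handful of steps instead of the 50 to 1000 of a diffusion sampler), here is a small hypothetical sketch of confidence-based unmasking. The `model` is a stand-in that returns random logits, and the cosine unmasking schedule is an assumption; in practice the transformer would be conditioned on the text embedding and the resulting ids would go to the VQGAN decoder.

```python
# Hypothetical sketch of MaskGIT-style confidence-based parallel decoding.
import math
import torch

VOCAB, MASK_ID, SEQ_LEN = 8192, 8192, 256

def model(ids):
    # Stand-in for the masked transformer: random logits of the right shape.
    return torch.randn(ids.shape[0], ids.shape[1], VOCAB)

@torch.no_grad()
def parallel_decode(steps=24, batch=1):
    ids = torch.full((batch, SEQ_LEN), MASK_ID)              # start fully masked
    for t in range(steps):
        probs = model(ids).softmax(-1)
        sampled = torch.distributions.Categorical(probs=probs).sample()
        conf = probs.gather(-1, sampled.unsqueeze(-1)).squeeze(-1)

        still_masked = ids == MASK_ID
        conf = torch.where(still_masked, conf, torch.full_like(conf, -1.0))

        # Cosine schedule: fewer tokens stay masked as t grows, so after
        # `steps` iterations nothing is masked.
        frac_masked_next = math.cos(math.pi / 2.0 * (t + 1) / steps)
        n_to_unmask = still_masked.sum(-1) - int(frac_masked_next * SEQ_LEN)
        for b in range(batch):
            k = max(int(n_to_unmask[b]), 1)
            top = conf[b].topk(k).indices                     # most confident positions
            ids[b, top] = sampled[b, top]                     # commit those tokens
    ids[ids == MASK_ID] = sampled[ids == MASK_ID]             # fill any leftovers
    return ids                                                # token ids for the decoder

print(parallel_decode().shape)   # torch.Size([1, 256])
```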
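The classifier-free guidance and negative-prompt tricks from the talk are, at inference time, just logit arithmetic: push the predictions away from an unconditional (or negative-prompt) branch and towards the text-conditional branch by a guidance scale. A minimal sketch follows; the logits here are random placeholders standing in for three runs of the same model (real prompt, empty prompt dropped during training about 10 percent of the time, and negative prompt), and the scale value is arbitrary.

```python
# Hypothetical sketch of classifier-free guidance and negative prompting.
import torch

def guided_logits(logits_cond, logits_uncond, scale):
    # Larger `scale` = higher prompt fidelity, less diversity.
    return logits_uncond + scale * (logits_cond - logits_uncond)

def negative_prompt_logits(logits_cond, logits_negative, scale):
    # Same trick, but we push away from logits computed on the negative
    # prompt (e.g. "trees, green, blurry") instead of the empty prompt.
    return logits_negative + scale * (logits_cond - logits_negative)

B, N, VOCAB = 1, 256, 8192
cond, uncond, neg = (torch.randn(B, N, VOCAB) for _ in range(3))

print(guided_logits(cond, uncond, scale=2.0).shape)
print(negative_prompt_logits(cond, neg, scale=2.0).shape)
```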
MIT_6S191_Introduction_to_Deep_Learning
MIT_6S191_2019_Deep_Learning_Limitations_and_New_Frontiers.txt
okay so welcome everyone to the final foundational lecture of this year's offering of six s-191 where we'll be taking kind of a step back from the architectures and the algorithms we've been exploring over the past two days to take a broader perspective on some of the limitations of deep learning and a couple of exciting new subfields that are emerging within deep learning so before we dive into that some very important announcements first and foremost we have amazing t-shirts for you that have arrived and we'll be distributing them today after lecture so we'll have them arranged sort of in the front by size and we'd first like to distribute two for credit registered students we should have plenty for everyone but then to registered listeners and then to all other guests and lesson and listeners so there should be more than enough we also have some shirts from Google as well as some other swag so it's gonna be really exciting and please do stick around for that so sort of where have we been and where are we going so following this lecture we have our final lab which is going to be on reinforcement learning and then tomorrow we have two extremely exciting guest lectures from Google and IBM as well as time for you to work on your final projects during the lab portion and on Friday we'll have one final guest lecture from Nvidia the project pitch competition as well as the judging and awards ceremony so I've we've received a lot of inquiries about the the final projects and specific logistics so I'd like to take a few minutes to to recap that so for those of you who are taking the course for credit right you have two options to fulfill your final requirement the first is a project proposal and here we're asking you to really pitch a novel deep learning architecture idea application and we've we've gotten a lot of questions about the size of the groups so groups of one are welcome but you will not be eligible to receive a prize if you are a group of one listeners are welcome to to present ideas and join groups in order to be eligible for a prize you must have a group of two to four people and your group must include at least one for credit registered student we're gonna give you three minutes to do your pitches and this link there's pretty detailed instructions for the the proposal on that link so last year we we only gave people one minute for their pitch which was really really short so by giving you three minutes we're really hoping that you have spent some time to think in-depth about your idea how it could work why it's compelling and we're going to have a panel of judges including judges from industry as well as Alexander myself and and other guest judges and our prizes are these three nvidia gpus as well as a set of four google homes sort of how the the prize Awards awarding is going to go is that top three teams will be awarded a GPU per team and the Google homes will be distributed within one team so if you have a team that has four people right and you're awarded the the Google home prize everyone will get a Google home if you have two each of you will get one and then the remaining two will be awarded to to the next best team okay so in terms of actually completing this we ask that you prepare slides for your for your pitch on Google slides so on Friday if you're participating you'll come down to the front present your pitch to the rest of the class and to our judges we ask that you please submit your groups on by today tonight at 10 p.m. 
there's this link leads to a Google sheet where you can sign up with your your team members names and a tentative like title for your for your project tomorrow's lab portion will be completely devoted to in-class work on the project then we ask that you submit your slides by midnight Thursday night so that we have everything ready and in order for Friday and the link for doing that is there and finally our our presentations will be on Friday so as was discussed in the first lecture the section second arguably most more boring option is to write a one-page review of a recent deep learning paper it can be either on you know deep learning fundamentals and theory or an interesting application of deep learning to a different domain that you may be interested in and this would be due Friday at the beginning of class by email to - intro to deep learning stuff mit.edu okay so tomorrow we're going to have two guest speakers the first is going to be where we're really really lucky and privileged to have her Fernanda Viegas she is an MIT alum and she's the co-director of Google's people and artificial intelligence Research Lab or pair and she's a world-class specialist on visualization techniques for machine learning and deep learning so it should be a really fun interactive a cool talk and we really hope to see everyone there the second talk will be given by Dimitri or dima crota from the MIT IBM Watson AI lab he's a physicist by training really exciting and fun guy and his research focuses on biologically plausible algorithms for training neural network so he'll he'll give some insight onto whether or not back propagation could actually be biologically plausible and if not what are some you know exciting new ideas about how learning could act we work in in the neuroscientific sense I guess then the lab portion is going to be devoted to work on the final projects we'll be here the TAS will be here you can brainstorm with us ask us questions you know work with your team you get the point finally Thursday we'll have our final guest lecture given by Yann Kouts from Nvidia who's a leader in computer vision and then we'll have the project proposal competition the awards as well as pizza celebration at the end ok so that is sort of the administrivia for for today and ideally the rest of the class so now let's start with with the technical content so on day one Alexandre showed this slide which sort of summarized how deep learning has revolutionised so many different research areas from autonomous vehicles to medicine and healthcare reinforcement learning generative modeling robotics and the list goes on and on and hopefully now through this series of 5 lectures you have a more concrete understanding of how and why deep learning is so well suited for these kinds of really complex tasks and how its enabled these advances across a multitude of disciplines and also you know so far we've primarily dealt with these algorithms that take as input some set of data in the form of signals sequences images or other sensory data to directly produce a decision at the as an output whether that's a prediction or an action as in the case of reinforcement learning and we've also seen ways in which we can go from sort of from decision to data in the context of generative modeling and to sample brand new data from the decision space in sort of this probabilistic setting more generally in in all these cases we've really been dealing with algorithms that are designed to do that are optimized to do well on Ingle tasks right but 
fail to think like humans or operate like humans sort of at a at a higher love order level of intelligence and to understand this in more detail we have to go back to a famous theorem in sort of the theory of neural networks which was presented in 1989 and generated quite the stir and this is called the universal approximation theorem and basically what it states is that a neural network with a single hidden layer is sufficient to approximate any arbitrary function any continuous function and in this class you know we've we've mostly been talking about deep models that use multiple layers but this theorem you know completely ignores this and says you just need one one neural layer and if you believe that any problem can be reduced sort of to a set of inputs and an output this means that there exists a neural network to solve any problem in the world right so long as you can define it using some continuous function and this may seem like an incredibly powerful result but if you look closely right there are two really big caveats to this first this this theorem makes no guarantee on the number of hidden units or the size of the hidden layer that's going to be required to solve you know your arbitrary problem and additionally it leaves open this question of how do we actually go about finding the weights to support whatever architecture that could be used to solve this problem it just claims that and actually proves that such an architecture exists but as we know from gradient descent and this idea of finding ways and sort of like a non convex landscape there's no guarantee that this this process of learning these weights would be anyway straightforward right and finally this theorem doesn't provide any guarantees that whatever model is that's learned would generalize well to other tasks and this theorem is is a perfect example of sort of the possible effects of overhype in AI and as a community I think we're all interested in sort of the state of deep learning and how we can use in that's probably a big motivation of why you're sitting in this lecture today but I think we we really need to be extremely careful in terms of how we market and advertise these algorithms so while the universal approximation theorem generated a lot of excitement when it first came out it also provided some false hope to the AI community that neural networks as they existed dirt could solve any problem in the world right and this overhype is extremely dangerous and historically there have actually been to quote-unquote AI winters where research in AI and neural networks specifically in in the second AI winter came to sort of a grinding halt and so this is why you know for the first portion of this lecture I'd like to focus on some of the limitations of these algorithms that we've learned about so far but also to take it a step further to touch on some really exciting new research that's looking to address these problems and limitations so first let's let's talk about limitations of deep learning and one of my favorite examples of a potential danger of deep neural networks comes from this paper from Google Google brain that was entitled understanding deep neural networks requires rethinking generalization and this paper really did something really simple but very powerful they took images from the imagenet dataset and you know their labels first four examples are shown here and what they did is that for every image in their data set they flipped a die write a K sided die where K is the number of possible classes 
that they were trying to consider in a classification problem and they use this result of this die roll to assign a brand you randomly sampled label to that image and this means that you know these new labels associated with each image were completely random with respect to what was actually present in the image and if you'll notice that these two examples of dogs ended up in this in this demonstration that I'm showing being mapped to different classes altogether so we're literally trying to randomize our labels entirely then what they did was that they tried to fit a deep neural network model to the sampled image net data ranging from either the the untouched original data with the original labels to data that they had reassigned the labels using this completely random sampling approach and then they tested the accuracy of their model on a test data set and as you may expect the accuracy of their models progressively decreased as the randomness in the training data set increased right but what was really interesting was when they tried was what happened when they looked at what happened in the training data set and this is what they found that no matter how much they randomized the labels the model was able to get 100% accuracy on the training set right because in training you know you're doing input label you know both right and this is a really powerful example because it shows once again in a similar way as the universal approximation theorem that deep neural Nets can perfectly fit to any function even if that function sort of is based on entirely random labels and to drive this point home we can understand neural networks simply as functional approximator x' and only universal function approximation theorem States is that neural networks are really really good at doing this so suppose you have this you know the set of training data we can learn you we can use a neural network to learn a maximum likelihood estimate of this training data and if we were to give the model a new data point shown here in this purple arrow we can use it to predict what the maximum likelihood estimate for that data point is going to be but if we extend the axis a bit left and right outside of the space of the training data that the network has seen what happens right there are no guarantees on what the training data will look like outside these bounds and this is a huge limitation that exists in modern deep neural networks and and in deep learning generally and so you know if you look here outside of these bounds that the network has been trained on we can't really know what our function looks like if the network has never seen data from those pieces before right so it's not going to do very well and this notion leaves really nicely into this idea of what's known as adversarial attacks on neural networks and the idea here is to take some example for example this this image of what you can see is a temple which a standard CNN trained on imagenet let's say can classify as a temple with you know 97 percent probability and then we can apply some perturbation to that image to generate what we call an adversarial example which to us looks completely similar to the original image right but if we were now to feed this adversarial example through that same CNN we can no longer recognize it as a temple you know and instead we predict okay this is an image of an ostrich right that makes no sense right so what's going on what is it about these perturbations and how are we generating them that we're able to fool the 
network in in this way so remember that normally during training when we train our network using gradient descent we have some like objective loss function J right that we're trying to optimize given a set of weights theta input data X and some output label Y and what we're asking is how does a small shift in the weights change our loss specifically how can we change our weights theta in some way to minimize this loss and when we train our networks to you know optimize this set of weights we're using a fixed input X and a fixed label Y and we're again reiterating trying to update our weights to minimize that loss with adversarial attacks we're asking a different problem how can we modify our input our input for example an image our input X in order to now increase the error in our networks prediction so we're trying to optimize over the input X right to perturb it in some way given a fixed set of weights theta and a fixed output Y and instead of minimizing the loss we're now trying to increase the loss to try to fool our network into making incorrect predictions and an extension of this idea was recently presented by a group of students here at MIT and they devised an algorithm for synthesizing a set of examples that would be adversarial over a diverse set of transformations like rotations or color changes and so the first thing that they demonstrated was that they were able to generate 2d images that were robust to noise transformations distortions other transformations but what was really really cool was that they actually showed that they could extend this idea to 3d objects and they actually used 3d printing to create actual physical adversarial objects and this was the first demonstration of Rosario examples that exist in the physical world so what they did in in this result shown here is that they 3d printed a set of turtles that were designed to be adversarial to a you know a given network and took images of those of those turtles and fed them in through the network and in the majority of cases right the network classifies these 3d turtles as rifles right and these these objects are designed to be adversarial they're designed to fool the network so this is pretty scary right and you know it opens a whole Pandora's box of how can we trick networks and and has some pretty severe implications for things like security and so these are just a couple of limitations that of neural networks that I've highlighted here you know as we've sort of touched on throughout this course they're very data hungry it's computationally intensive to train them they can be fooled by adversarial examples they can be subject to algorithmic bias they're relatively poor at representing uncertainty then a big point is this this question of interpretability right are known networks just black boxes that you can't peer into and sort of in the ml and AI community people tend to fall you know in sort of two camps one camp saying interpret interpret ability of neural networks matters a lot it's something that we should devote a lot of energy and thought into and others that very strongly argue that oh no we should not really concern ourselves with interpretability what's more important is you know generating these architectures that perform really really well on a task of interest and in going from limitations to sort of new frontiers in emerging areas in deep learning research I like to focus on these two sort of of points highlighted here the first is the notion of understanding uncertainty and the second is ways 
in which we can move past building models that are optimized for a single task to actually learning how to build a model capable of solving not one but many different problems so the first sort of new frontier is this field called Bayesian deep learning and so if we consider again right the very simple problem of image classification what we've learned so far is has been about modeling probabilities over a fixed number of classes so if if we are to train a model to predict you know dogs versus cats we output some probability that an input image is either a dog or a cat but I'd like to draw a distinction between a probability and this notion of uncertainty or confidence so if we were to feed in in the image of a horse into this network for example we would still output a probability of being dog or cat because right probabilities need to sum to one but the model may even even if it's more saying that it's more likely that this image of is of a horse it may be more uncertain in terms of you know its confidence in that prediction and there's this whole field of Bayesian deep learning that looks at modeling and understanding uncertainty in deep neural networks and sort of this gets into a lot of of statistics but the key idea is that Bayesian neural networks are trying to rather than learn a set of weights they're trying to learn a distribution over the possible weights right given some input data X and some output labels Y and to actually parameterize this problem they use Bayes rule which is a fundamental you know law from from probability theory but in practice this what's called this posterior distribution of the likelihood the probe a set of weights given input and output is computationally intractable and so instead of we can't learn this distribution directly so what we can do is find ways to approximate this posterior distribution through different types of sampling operations and one example of such a sampling approach is to use the principle of dropout which was introduced in the first lecture to actually obtain an estimate of the network's uncertainty so if we look at what this may look like for a network that's composed of convolutional layers consisting of like two dimensional feature Maps what how we can use dropout to SMA uncertainty by performing stochastic passes through the network and each time we make a pass through the network we sample each of these sets of weights right these filter maps according to some drop out mask that these are either zeros or ones meaning will will keep these weights highlighted in blue and will discard these weights highlight highlighted and white to generate this stochastic sample of our original filters and from this these passes what we can actually obtain is an estimate of sort of the expected value of the output labels given the input the mean as well as this variance term which provides an uncertainty estimate and this is useful in understanding the uncertainty of the model in making a prediction and one application of this type of approach is shown here in the context of depth estimation so given some input image we train a network to predict the depth of the pixels present in that image we also asked it okay give us an estimate of your uncertainty in making that prediction and we when we visualize that what you can see is that the model is more uncertain in this sort of edge here which makes sense if you look back at this original input that the edge is sort of at this point where those two cars are overlapping and so you can imagine that 
the model may have more difficulty in estimating the pixels that line that edge the depth of the pixels that line that edge furthermore if you remember from from yesterday's lecture I showed this video which is worked from the same group at Cambridge where they trained a convolutional base convolutional neural network based architecture on three tasks simultaneously semantic segmentation depth estimation and instant segmentation and what we really focused on yesterday was how this segmentation result was much crisper and cleaner from there this group's previous result from one year prior but what we didn't talk about was how they're actually achieving this improvement and what they're doing is they're using uncertainty by training their network on these three different tasks simultaneously what they're able to achieve is to use the uncertainty estimates from two of two tasks to improve the accuracy of the third task and this is used to regularize the the network and improve its generalization in one domain such as segmentation and this is just another example of their results and as you can see right each of these semantic segmentation instant segmentation and depth estimation seemed pretty crisp and clean when compared to the input image so the second exciting area of new research that I'd like to highlight is this idea of learning to learn and to understand why this may be useful and why you may want to build out algorithms that can learn to learn right well first like to reiterate that most most neural networks today are optimized for a single task and as models get more and more complex you know they increasingly require expert knowledge in terms of engineering them and building them and deploying them and hopefully you've gotten a taste of that knowledge through this course so this can be kind of a bottleneck right because you know there are so many different settings where deep learning may be useful but only so many deep learning researchers and engineers right so why can't we build a learning algorithm that actually learns which model is most well suited for an arbitrary set of data and an arbitrary task and Google asked this question a few years ago and it turns out that we can do this and this is the idea behind this this this concept of Auto ml which stands for sort of automatic machine learning automatically learning how to create new machine learning models for a particular problem and in the original auto ml which was proposed by Google uses a reinforcement learning framework and how it works is is the following they have sort of this like agent environment structure that Alexander introduced where they have a first Network the controller which in this case is a RNN that proposes a child model architecture right in terms of the parameters of that model which can then be trained and evaluated for its performance on a particular task and feedback on how well that child model does you know your tasks of interest is then used to inform the controller on how to improve its proposals for the next round in terms of okay what is the updated child network that I'm going to propose and this process is repeated thousands of times you know iteratively generating new architectures testing them and giving that feedback back to the controller to learn from and eventually the controller is going to learn to assign high probability to sort of areas of the architecture space that achieve better accuracy on that desired task and low probability to those architectures that don't perform well so 
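Before the controller is described in more detail, here is a rough toy illustration of that propose–train–evaluate–update loop. In the real AutoML system the controller is an RNN trained with reinforcement learning over many architecture hyperparameters and each proposed child network is actually trained and evaluated on the target dataset; in this sketch the hyperparameter choices and the stand-in "accuracy" function are made up, and the controller is reduced to a pair of learnable logits updated with REINFORCE.

```python
# Toy sketch of the propose -> train -> evaluate -> update loop behind
# neural architecture search; the child "accuracy" is a made-up stand-in.
import torch

filter_choices = [16, 32, 64, 128]
kernel_choices = [1, 3, 5]

logits_f = torch.zeros(len(filter_choices), requires_grad=True)
logits_k = torch.zeros(len(kernel_choices), requires_grad=True)
opt = torch.optim.Adam([logits_f, logits_k], lr=0.1)

def fake_child_accuracy(n_filters, kernel):
    # Stand-in reward: pretends bigger filters and 3x3 kernels work best.
    return 0.5 + 0.004 * n_filters - 0.1 * abs(kernel - 3)

baseline = 0.0
for step in range(200):
    dist_f = torch.distributions.Categorical(logits=logits_f)
    dist_k = torch.distributions.Categorical(logits=logits_k)
    f_idx, k_idx = dist_f.sample(), dist_k.sample()

    # "Train and evaluate the child" (here: a fake score).
    reward = fake_child_accuracy(filter_choices[int(f_idx)],
                                 kernel_choices[int(k_idx)])
    baseline = 0.9 * baseline + 0.1 * reward          # moving-average baseline

    log_prob = dist_f.log_prob(f_idx) + dist_k.log_prob(k_idx)
    loss = -(reward - baseline) * log_prob            # REINFORCE update
    opt.zero_grad()
    loss.backward()
    opt.step()

best = (filter_choices[int(logits_f.argmax())], kernel_choices[int(logits_k.argmax())])
print("controller's preferred child architecture:", best)
```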
how does this agent work as I mentioned it's an RNN controller that sort of at the macro scale considers different layers in the proposed generated network and at the internal level of each candidate layer it predicts different what are known as hyper parameters that define the architecture of that layer so for example if we're trying to generate a child CNN we may want to predict you know the number of different filters of a layer the the dimensionality of those filters the destryed that we're going to you know slide our filter patch over during the convolution operation all parameters associated with convolutional layers so then if we consider the the other network in this picture the child network what is it what is it doing to to reiterate this is a network that's generated by another neural network that's why it's called the child right and what we can do is we can take this this child network that's sampled from the RNN train it on a desire task right with the desired data set and evaluate its accuracy and after we do this we can then go back to our RNN controller update it right based on how the child met work performed after training and now the RNN parent can learn to create an even better child model right so this is a really powerful idea and what does it mean for us in practice well Google has now put the service on the cloud Google being Google right so that you can go in provide the auto Amell system a data set and a set of metrics that you wanted to optimize over and they will use parent RNN controllers to generate a candidate child Network that's designed to train optimally on your data set for your task right and this end result is this new child network that it gives back to you spawned from this rnm controller which you can then go and deploy on your data set right this is a pretty big deal right and it sort of gets that sort of this deeper question right they've demonstrated that we can create these AI systems that can generate new AI specifically designed to solve Mazar tasks right and this significantly reduces sort of the difficulties that machine learning engineers face in terms of optimizing a network architecture for for a different task right and this sort of gets at the heart of the question that Alexander proposed at the beginning of this course right this notion of generalized artificial intelligence and we spoke about a bit about what it means to be intelligent right loosely speaking the sense of taking in information using it to inform a future decision and as humans our learning pipeline is not restricted to solving only specific defined tasks how we learn one task can impact you know what we do on something something completely unrelated completely separate right and in order to reach sort of that same level with AI we really need to build systems that can not only learn single tasks but can improve their own learning and their reasoning so as to be able to generalize well two sets of related and dependent tasks so I'll leave you with this thought and I encourage you to to continue to discuss and think about these ideas amongst yourselves internally through introspection and also we're happy to chat and I think I can speak for the TAS in saying that they're happy to chat as well so that concludes you know the series of lectures from Alexander and I and we'll have our three guest lecturers over the next couple of days and then we'll have the final lab on reinforcement learning thank you [Applause]
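As a concrete illustration of the adversarial-attack recipe described in this lecture — hold the weights fixed and perturb the input in the direction that increases the loss — here is a minimal FGSM-style sketch. The network is a tiny untrained stand-in for an ImageNet classifier, and the epsilon value is arbitrary.

```python
# Illustrative FGSM-style adversarial perturbation on a stand-in classifier.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10),
)
model.eval()

def adversarial_example(x, y, epsilon=0.03):
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)      # J(theta, x, y) with theta fixed
    loss.backward()
    # Ascend the loss w.r.t. the input: x_adv = x + eps * sign(dJ/dx)
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0, 1).detach()

x = torch.rand(1, 3, 32, 32)                 # stand-in "image"
y = torch.tensor([3])                        # stand-in label
x_adv = adversarial_example(x, y)
print((x_adv - x).abs().max())               # perturbation is at most epsilon
```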
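And for the Bayesian deep learning part of the lecture, a minimal sketch of Monte Carlo dropout: keep dropout active at test time, run several stochastic forward passes, and use the spread of the outputs as an uncertainty estimate. The regressor, dropout rate, and data are toy stand-ins (think of the output as, say, a per-pixel depth prediction), not the architecture from the cited work.

```python
# Illustrative Monte Carlo dropout for a rough uncertainty estimate.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(16, 64), nn.ReLU(), nn.Dropout(p=0.2),
    nn.Linear(64, 64), nn.ReLU(), nn.Dropout(p=0.2),
    nn.Linear(64, 1),
)

@torch.no_grad()
def mc_dropout_predict(x, n_samples=50):
    model.train()                 # keeps the Dropout layers stochastic at test time
    preds = torch.stack([model(x) for _ in range(n_samples)])   # (T, B, 1)
    model.eval()
    return preds.mean(0), preds.var(0)   # predictive mean and an uncertainty proxy

x = torch.randn(4, 16)            # four stand-in inputs
mean, var = mc_dropout_predict(x)
for m, v in zip(mean.squeeze(-1), var.squeeze(-1)):
    print(f"prediction {m.item():+.3f}  uncertainty {v.item():.3f}")
```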
MIT_6S191_Introduction_to_Deep_Learning
MIT_6S191_AI_in_Healthcare.txt
i've been at google for 16 years um the last six years i've been in life sciences and healthcare i i generally like running more interactive classes given the size of the group we thought polls might work so i'll launch a couple of polls throughout the talk and i'll try and keep an eye on chat as well if you guys have questions but um i might save them for the end as well so let me talk through the agenda a little bit i'm hoping to give you uh some information about ai in particular deep learning and healthcare and i will be using ai and deep learning interchangeably because that's just the name of our team is google ai but the examples you'll be seeing are all deep learning examples and as you know ai does include other things like robotics and non-neural network approaches so i just wanted to be clear that when i use them i don't need to be conflating them entirely once i cover what some of the key applications are for what we've done in ai and healthcare i'd like to discuss with you what the kind of unique opportunity i think we have because of deep learning to be able to uh create a much more equitable society while we're deploying ai models and we can talk about how that's possible and finally i'll touch on one last set of applications for ai and healthcare at the end here so on the in terms of uh the history behind ai and healthcare we are benefiting from the fact that we have uh the maturation of deep learning um and especially the end-to-end capabilities where we can learn directly from the raw data this is extremely useful for advances in computer vision and speech recognition which is highly valuable in the field of medical the other area as you all know is the increase in localized compute power via gpus um so that's allowed for neural networks to outperform non-neural networks um in the past and then the third is the value of all these open source large label data sets and internet being one for non-health related areas but there is uh public data sets like uk bio bank and even mimic which has been truly helpful and it's uh was developed actually um and produced at the mit labs so you'll be hearing about some of the applications of ai in healthcare next uh one of things that we do is to make sure we look at the needs in the industry and match that up to the tech capabilities healthcare specifically has enormous amounts of complex data sets annually it's estimated to be generating on the order of several thousand exabytes of healthcare data a year um just to put that in perspective a bit it's estimated that if you were to take the internet data um that's around something with more like hundreds of exabytes so it's it's several thousand times more um and what we're looking at in terms of uh those applications you'll see in bit is the pattern detection um and the ability to recognize for things like lesions and uh tumors and um really nuanced subtle imagery another area that it's useful for is just the addressing the limited medical expertise globally if you look to the right what you'd like to see is uh one medical specialist like a radiologist to about 12 000 people in the population but and what you can see on the graph to the right is that in developing countries it looks more like one to a hundred thousand or one to a million even and so the benefit of ai and healthcare is that it can help scale up running some of these complex tax tasks that are valuable that middle experts are capable of the third is uh really addressing human inconsistencies and we'll talk a little bit 
about this especially when we're talking about generating labels um ai models don't obviously suffer from recency or cognitive biases and they are also able to work tirelessly which is an issue when when you have to work overtime uh as in medical field which often happens let me just talk a little bit through the next application which is lung cancer uh the application what we developed was a computer diagnostic uh and in this case it was to help screen uh for lung cancer using low-dose ct scans um what you normally see um is the survival rates increasing dramatically if you catch it at earlier stages but about 80 percent of lung cancers are not caught early and what they use usually to do these screenings are these low-dose ct scans that if you look in this diagram to the right is these three-dimensional uh imaging that happens to your entire body it creates hundreds of images for the radiologists to look through and uh typically the actual um uh lung cancer signs are very subtle so what our models were able to do um when we looked at this was to actually not just outperform the state of the art but actually more importantly we compared it to the radiologists to see if there was an absolute reduction in both false positives and false negatives so false positives will lead to overutilization of the system and false negatives will lead to uh not being able to catch the cancer early enough and usually want to see both both reduced pathology is another area that's a hard deep learning problem and even more complex data this is on the left you can see when you take a biopsy you have slices of the body tissue and these are magnified up to 40 times and creates about 10 to 15 megapixels of information per slide the part that is inherently complex is when you're doing pathology you want to know both the magnified uh level highly magnified level of this tissue so that you can characterize the lesion and you also need to understand um the overall tissue architecture to provide context for it um and so that's at a lower power so you have a multi-scale problem and it is also inherently complex to be able to differentiate between benign and malignant tumors i there's hundreds of the different pathologies that can affect the tissue and so being able to visually differentiate is very challenging we built the model to detect breast cancer from pathology images and the pathologists actually had no false positives the model was able to capture more of the cancer lesions so it was greater than 95 compared to 73 that pathologists were getting but it also increased the number of false positives um this meant that what we tried uh then was to actually combine uh and have the model and pathologists work together um to see if the accuracy could improve and it absolutely did um and this com combined effort led to also development of an augmented microscope where you can see the model um detecting the patches inside the microsoft microscope view itself and we'll come back to the fact that the models had certain weaknesses and how we dealt with that later uh genomics is another area that's uh benefited significantly from uh deep learning um it's worth noting that uh when you do um whole genome sequences what you're you're doing is tearing up your um dna into a billion reads of about 100 bases i mean there's about a 30x over sampling with errors when you do that um when you try and uh figure out the sequence what you're trying to do is something like i take 30 years of a sunday newspaper 30 copies each with errors 
introduced, and then shred them into 20-word snippets and try to put them back together — that's essentially what's happening when you do sequencing. So we recast this as a deep learning problem: we looked at how image recognition, and specifically convolutional neural networks, would perform in this space, and developed a tool called DeepVariant, which is open sourced and available for anyone to use, and which we've been improving over time. It has proven to be highly accurate: the US FDA runs a precisionFDA competition every few years, and DeepVariant has won the awards for three out of four accuracy areas. And as you can see on the right, it's quite visually obvious when you get an error — a false variant — in the sequencing, so this is a clever way to rapidly detect errors in variant calls.
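The published DeepVariant encoding and architecture are more involved, but the core recasting — variant calling as image classification — can be sketched roughly like this; the tensor shape and channel meanings below are illustrative assumptions, not the real tool's format.

```python
import tensorflow as tf

# Illustrative "pileup image" around a candidate site: rows are reads,
# columns are positions in a window, channels encode things like base
# identity, base quality, and strand (assumed layout, not DeepVariant's).
PILEUP_SHAPE = (100, 221, 6)

model = tf.keras.Sequential([
    tf.keras.Input(shape=PILEUP_SHAPE),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(3, activation="softmax"),  # hom-ref / het / hom-alt
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# model.fit(pileup_images, genotype_labels, ...) on sites with known truth;
# low-confidence or inconsistent calls then stand out, which is what makes
# errors in variant calls quick to spot.
```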
So we talked about the different needs in the medical field, and one of them was the limited medical expertise. One way to help is scaling up the tasks specialists run so they can be automated; another way is returning time to the doctors. What you're seeing in this picture is a drawing by a girl of her experience visiting a doctor — you can see the doctor is actually facing the computer, off to the left. This sparked a lot of discussion within the healthcare industry about the cost of technology and how it's interfering with patient care: doctors now spend something on the order of six hours a day interacting with their electronic health records to get data entered. One area that's ripe for supporting medical doctors is scribing — human scribes have been deployed, medical dictation has gotten much better, automatic speech recognition now has highly accurate end-to-end models, and natural language processing has improved significantly — so these are ways that a more assistive kind of AI can relieve doctors of the burden of documentation. I'm going to launch the poll now to see what people think is the most valuable application. Just to quickly recap: there were computer diagnostics, useful for screening and diagnosis, demonstrated with radiology; improved prognosis — pathology is useful for determining therapeutics, treatment efficacy, and the progression of disease, and both pathology and genomics are highly utilized there; and returning time to experts, the AI assistance through medical dictation and scribing.

Okay, great — let me keep going while the poll is running. I want to talk about how we can actually achieve a greater moonshot. Let me take a step back and look at how the world of healthcare looks right now: it's tremendously fragmented, it's fairly impersonal, and it's inequitably distributed. One thing I've noted is that tech amplifies whatever system you apply it to — it's a way to augment and scale up what exists — so if you apply it to a broken system with perverse incentives, it won't fix the system, it will accelerate it. But at the core of machine learning and these deep learning technologies, we're looking at the data very carefully and using it to build predictions and determine outcomes, and given that the world is not full of equity, you run the risk of training the wrong models. We published a paper to help address this. Societal inequities and biases are often codified in the data we use, and we actually have the opportunity to examine those historical biases and proactively promote a more equal future when we develop models. You can do that by correcting for bias in the training data; by correcting for bias in the model design and the problem formulation — what you're trying to solve for — which we'll talk about in a bit; and finally, if none of that is applicable, by testing for and ensuring equal outcomes and resource allocations when you deploy the AI models. I used to work at Google X, which is Google's effort to do moonshots, and the way we define a moonshot is the intersection of a huge problem, breakthrough technology, and a radical solution. The huge problem here is that the world of healthcare is uncertain, impersonal, and inequitable. We have the benefit of a breakthrough technology right now, which is AI and deep learning — and I'll add that digital and mobile tools are actually breakthrough tech for healthcare, which tends to lag about a decade behind other industries due to its regulatory, safety, privacy, and quality needs. And a radical solution is to think not just about improving the quality of care we deliver, but about making sure that, as we do so, we also make it more equitable. Every time I see a technological wave happen, I realize it's an opportunity for us to reshape our future. So in the case of deep learning, I'd like to talk about the opportunity to make AI models much more equitable and how we would do that. The two key areas I'll cover are community participation — how that affects the models and the data evaluation — and planning for model limitations, and how to do that effectively. One of the things we did was work directly with the regions where we were going to deploy the models: on the left you see us working with the team in India, and on the right our team working with those in Thailand. What we found was that the socioeconomic situation absolutely mattered for where you deploy the model. As an example, we developed the model with ophthalmology centers — that's where the eye disease is diagnosed, and diabetic retinopathy is a leading and growing cause of blindness in the world — but the use case was actually most acute in the diabetes centers, the endocrinology offices, because people were not making the 100-meter trip from the endocrinology offices to the ophthalmology offices, due to access issues, challenges with lines, and so on. So this is an area we explored extensively with user research, to make sure we thought through where the AI models would land and how that would impact users. Another area we looked at is how we generate labels for the models.
Classically, as you'd expect, the model keeps improving as you get more data — you can see on the left that it flattens out here around 60,000 images, and at some point that's sufficient and you won't get much more improvement from additional data. What you actually benefit from, if you look at the graph on the right, is improving the quality of the labels — what we refer to as the grades on the images. Each doctor gives an image a grade, which is their diagnostic opinion of what they think they're seeing. As we got multiple opinions on single images and reconciled them, we were able to continuously improve the model's output and accuracy. There's a saying in the healthcare space that if you ask three doctors you get four opinions, because even individual doctors may not be consistent with themselves over time. One way this is addressed in some countries is the Delphi method, developed during the Cold War, which helps determine consensus where individual opinions vary, and we developed a tool to do asynchronous adjudication of differing opinions. This has led to much higher quality ground-truth data, because doctors sometimes simply miss what another doctor notices, and they generally do reconcile and come to agreement on what the actual severity or diagnosis should be. This was really impactful: when we did the analysis with ophthalmologists, we would see something like 60% consistency across doctors, and this was a way to address that level of variance.
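As a toy version of that label-reconciliation idea — the real adjudication tool is asynchronous and far richer than this — a grade aggregator might look like the following; the agreement threshold is an arbitrary illustrative choice.

```python
from collections import Counter

def aggregate_grades(grades, agreement_threshold=0.75):
    """Majority-vote a list of per-doctor grades for one image and flag
    low-agreement cases for adjudication rather than trusting the vote."""
    counts = Counter(grades)
    label, votes = counts.most_common(1)[0]
    needs_adjudication = votes / len(grades) < agreement_threshold
    return label, needs_adjudication

# Example: three diabetic-retinopathy grades on a 0-4 severity scale.
print(aggregate_grades([2, 2, 3]))   # (2, True)  -> route to adjudication
print(aggregate_grades([1, 1, 1]))   # (1, False) -> high-agreement ground truth
```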
Here's the last area I want to cover for community engagement, which takes us even further upstream, to the problem formulation. This is a case where a team didn't think through the inputs to their model: the algorithm was trying to determine the utilization needs of a community, and it used health costs as a proxy for actual health needs. That inadvertently introduced a racial bias, because less money was spent on Black patients, and it was only caught after the fact. This is one of the key areas where input from the community would have caught the problem much earlier, during algorithm development, and it's something we now practice frequently. Since you're working on projects, I'll launch another poll on which of these approaches might be relevant to what you're building. Okay, great — I'll keep going with the talk while those responses are being saved; it will be nice to look back on them. On the left here, I mentioned earlier that our pathology models had certain weaknesses in terms of false positives, but they were also capturing more of the cancer lesions than the pathologists were. So we developed a way to explain the models through similar-image lookup. It uses a clustering algorithm and is able to find features that were not previously known to pathologists but that might be meaningful indicators of the actual diagnosis or prognosis, and pathologists have started to use the tool to learn from it. There's also the benefit that the pathologist can recognize any issues with the model and inform the model so it improves — you get a virtuous cycle of the models and the pathologists learning from each other. On the right is another way we explain model output: saliency maps, which identify which features — which pixels — the model is actually paying attention to, and light those up. We do this so we know that when the model determines a diagnosis for a particular skin condition, it's looking at the actual skin abnormalities and not at some unintentional correlation with skin tone or demographic information; it has been a valuable way of checking the models. And the last thing I mentioned is doing model evaluation for equal outcomes. In dermatology there's the Fitzpatrick skin type scale, which lays out the different skin tones, and we build test sets across those skin tones to evaluate whether the model achieves equal outcomes. This is where, as the model developer, you have to make some hard choices if your model isn't performing well for a particular category or demographic: ideally you supplement your data set so you can improve the model for those groups; you may have to limit the model's output so that outcomes are equal; and sometimes you may actually choose not to deploy the model at all. These are some of the real-world implications of developing AI models in the healthcare space.
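A minimal version of the saliency-map check described above is vanilla gradient saliency; the sketch below assumes a trained Keras image classifier, and the model and variable names are placeholders rather than the actual system.

```python
import tensorflow as tf

def saliency_map(model, image, class_index):
    """Gradient of the predicted class score with respect to the input pixels."""
    x = tf.convert_to_tensor(image[None, ...], dtype=tf.float32)
    with tf.GradientTape() as tape:
        tape.watch(x)
        score = model(x, training=False)[0, class_index]
    grads = tape.gradient(score, x)[0]
    # One heat value per pixel: max absolute gradient across color channels.
    return tf.reduce_max(tf.abs(grads), axis=-1)

# heat = saliency_map(derm_model, skin_image, predicted_class)
# Overlaying `heat` on the image is a quick check that the model is looking
# at the lesion itself rather than at skin tone or background cues.
```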
The last application I want to talk through with this group is the concept of health care for people, not just patients. Typically, healthcare has been thought of in terms of patients, and while every patient is a person, not every person is a patient. Patients, on the left here, are people who are sick or at risk and entering the healthcare system, whether with acute or chronic disease; the models for them are the ones we talked about earlier — screening, diagnostics, prognosis, treatment. If you look instead at people who are considered, quote unquote, well, their health is still impacted every single day by what we call the social determinants of health — their environmental and social circumstances, their behavioral and lifestyle choices, and how their genes interact with the environment — and the models there look dramatically different in how you approach the problem. They tend to focus on preventive care — eating well, sleeping well, exercising — and on public health, which I think has become a widely recognized issue with coronavirus; screening, of course, is consistent across both sides. In public health there are things like epidemiological models, which are extremely valuable, but there are also things happening right now — probably one of the biggest global threats to public health is climate change. One of the things happening in places like India is flood forecasting for public health alerts. In India there is a lot of alert fatigue, so it's actually unclear when people should care about the alerts or not. What this team did was focus on building a scalable, high-resolution hydraulic model, using convolutional neural nets to estimate inputs like snow levels, soil moisture, and permeability. These hydraulic models simulate the water's behavior across floodplains, and they were far more accurate than what was being used before; this has now been deployed to help with alerting across regions of India during the monsoon seasons. I just want to leave this group with the idea that, on the climate change side, there is a lot going on right now. Nature is essential to the health of the planet and of the people who live on it. We currently rely on ecosystem services — clean air, water supply, pollination of agriculture for food, land stability, and climate regulation — and this is an area that's ripe for AI to help us understand far better and to value those services, which we currently don't pay much for but probably will have to in the future. This last slide, if I can get it to appear, is for the poll — I wanted to compare whether the perception around health is any different in terms of what might be most exciting for AI to be applied to. Thanks for launching the final poll. The last thing I want to leave the group with is that none of the work we do in healthcare is possible alone: there's a huge team and a lot of collaboration across medical research organizations, research institutes, and healthcare systems. This is our team as it's grown over the years — it's not even all of our team anymore — but this is certainly where a lot of the work has been generated. Let me take a look at the questions in chat now, and I'll recap the poll results. It looks like — oh, you could pick multiple choices — around 50 to 60 people, about half, felt that the diagnostics and the therapeutics were the most valuable, with the assistive applications less popular but still seen as valuable. Thanks for filling those out. Now to the questions. "Given the fast advancement in new models, what is the main bottleneck to scaling ML diagnostic solutions to more people around the world?" It's meeting the regulatory needs. The long pole for diagnostics is ensuring patient safety through proper regulation — you usually go through the FDA or CE marking, and that can take time — and there are quality management systems that have to be built to make the system robust from a development perspective, because this is software as a medical device, and that will always be true when you're dealing with patients. The other part is perhaps open-source data sets: having more labeled data sets out there, so everyone can have access and move the space forward, is valuable. Second question: "Good data sets are essential to developing useful, equitable models — what efforts and technologies do we need to invest in to continue collecting data sets for more models?" One thing that's happening is the development of scalable labeling infrastructure; that's one way to generate better data sets. But raw data that directly reflects outcomes is also valuable.
An example is data that comes straight from the user — their vital signs or physiological signals — which is about as close to ground truth as you can get about an individual's well-being. And what we saw with COVID-19 was that it was even harder to get information like how many deaths were actually happening and what the causes of those deaths were. These are the kinds of data sets whose pipelines need to be thought of in the context of how they can support public health as a good, and of how the data gets out the door accurately. We do have an effort right now, which a lot of people pulled into especially for the coronavirus, that is on GitHub — I can provide a link later — where volunteers have built a transparent data pipeline for the data. Provenance is very important to keep track of when you create these data sets, so that you know what the purpose of the data is, how reliable it is, and where the source is coming from. These are the kinds of things that need to be built out to inform the models that you build. Last question: "How do you facilitate conversation and awareness around potential algorithmic bias related to the products you're developing?" Several things. One is that building a team that is representative of the broader population is more meaningful than I think people realize. What I mean by that is, if you have a diverse team working on the problem, or you bring in people who can be contributors or part of a consortium reflecting on the problem space you're trying to tackle, that is a really good way to hear and discover things you might never have thought of before. It can start with the team you build and then with the network around you that you're getting feedback loops from. Ideally you do that in a way that is measurable and quantitative, but even if you can't, it's still quite meaningful to proactively have these conversations about the space you're going into and how you're going to think about the inputs to your models. All right — thank you; it looks like those were the majority of the questions.
MIT_6S191_Introduction_to_Deep_Learning
MIT_6S191_2023_The_Future_of_Robot_Learning.txt
Robotics is a really cool and  important direction for the future. I really believe that we are  moving towards a world where so many routine tasks are taken off your plate. Fresh produce turns up at your doorsteps  delivered by drones garbage bins take themselves out smart infrastructure  ensures that the garbage gets removed. Robots help with recycling with  shelving with cleaning windows. Robots can do so many physical things for  us. And by the way can you count how many robots there are in this in this image?  Anybody wants to want to take a guess how many robots do I have in this image? Okay that's so that's that's really close!  It turns out it's 19. So here are all the robots we have flying robots we have  cars we have shopping cart robots we have robots that carry we have robots  that shelf we have robots that clean. I really believe that in the future we will have  AI assistance whether they are embodied or not to act as our guardian angels to provide advice to  ensure that we maximize and optimize our lives to live well and work effectively and these agents  will help us with cognitive and physical work. And so today we can say that with AI we we will  see such a wide breadth of applications for instance these technologies have the potential to  reduce and eliminate car accidents they have the potential to better diagnose monitor and treat  disease as you have seen in some of the previous lectures we these technologies have the potential  to keep your information private and safe to transport people and goods more effectively and  faster and cheaper to really make it easier to communicate globally by providing instantaneous  translations to to develop education to everyone to allow human workers to focus on big picture  tasks with machines taking on the routine tasks and so this future is really enabled by three  interconnected fields and on one hand we have robots now robots I like to think of robots as  as the machines that put computing in motion and give give our our machines in the world the  ability to navigate and to manipulate the world we have artificial intelligence which enables  machines to see to hear and to communicate and to make decisions like humans and then we have  machine learning and to me machine learning is about learning from and making predictions on data  and this uh this kind of application of machine learning is Broad it it applies to cognitive tasks  it applies to physical tasks but regardless of the task we can characterize how machine learning  works as using data to answer questions that are either descriptive predictive or prescriptive  so you can look at data to see what happened or to see what is this you can look at data to see  what will happen in the future and what the world might look like in the future or you can use  data to ask where should I go what should I do So when we think about these questions  in the context of a robot we have to we have to kind of get on the same  page about what a robot is and um and so think of a robot as a programmable  mechanical device that takes input uh with its sensors reasons about this input and then  generates an action in the physical world and robots are made of a body and the Brain the  body consisting of actuators and sensors determine the range of tasks that the robot can do so the  robot can only do what its body is capable of doing a robot on wheels will not be able to do the  task of climbing stairs so we have to think about that body and we have at T Cell we have a lot of  machine learning 
based research that examines how to optimally design a robot body for a particular task. Now, in order for the body to do what it's meant to do, we need the brain — the machine learning, reasoning, and decision-making engine — and that is what we're going to talk about today. In the context of robots we have three types of learning, and you've seen aspects of all of them throughout the course: supervised learning, where we use data to find the relationship between input and output; unsupervised learning, where we use data to find patterns and classifications in the data; and reinforcement learning, which is about learning to perform a task by maximizing reward. For robots we end up with a cycle that most often consists of three steps — a perception step, a planning and reasoning step, and an action step — and that is what we'll walk through today. Let me start with some examples of how we can use machine learning to enhance the perception capability of robots. This addresses the question "what is this?", and it really matters because, for instance, training robot cars to recognize all the objects on the road — including ducks, cars, and people — is critical for autonomous driving. How does this work? Let me give you a high-level view of how a robot car can recognize a scene. To use deep learning for the perception task, we use data — manually labeled data — that gets fed into a convolutional neural network, and the labels are used to classify what the data is. For this image we might have classifications like car, duck, and road, and we do this so that when the car sees a new image, for example this one, it can say: these are ducks on the road. To actually solve this object classification problem we have to employ multiple algorithms. The first is image segmentation: we take images as input and group together the pixels that belong to the same object in the image — a kind of lexical step. Then we need to label and recognize those objects — a semantic step. The exciting thing is that we already have very good algorithms that can segment images very fast, so we can take an image and find the objects in it quickly; we just don't know what the objects are. And to know what the objects are — well, you know what we do: we employ thousands of people to label them. That works, but labeling is a very labor-intensive activity and a significant challenge for machine learning — let's keep that thought for later. The most popular benchmark for measuring the accuracy of image classification is ImageNet, and on its leaderboard we see variations of image classification algorithms that perform well into the 90s in accuracy, which is really quite exciting.
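As a rough sketch of the two steps just described — grouping pixels and naming the groups — a tiny fully convolutional network can predict a class for every pixel at once; the class list, layer sizes, and training call below are illustrative only, not any particular deployed perception stack.

```python
import tensorflow as tf

NUM_CLASSES = 4  # e.g. road, car, duck, background (illustrative labels)

inputs = tf.keras.layers.Input(shape=(None, None, 3))
x = tf.keras.layers.Conv2D(16, 3, padding="same", activation="relu")(inputs)
x = tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu")(x)
logits = tf.keras.layers.Conv2D(NUM_CLASSES, 1, padding="same")(x)
seg_model = tf.keras.Model(inputs, logits)

seg_model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))

# seg_model.fit(images, per_pixel_labels, ...) with manually labeled masks;
# tf.argmax(seg_model(new_image), axis=-1) then gives one label per pixel --
# and those per-pixel masks are exactly where the expensive human labeling
# effort mentioned above comes in.
```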
That accuracy is exciting, but if those algorithms were to run on a car, it's not good enough, because the car is a safety-critical system, and in fact we cannot afford any errors in how images get recognized on the car. Here's an example of an object detector running on a car. We see the images from three cameras — one pointing forward, one pointing to the left, and one pointing to the right — and you can see the car takes this input and does pretty well: there is a blue car, there are some bicyclists, here's another car, and the system manages to find the gap in the road to make a turn. But it's not a hundred percent correct. In particular, there is this moving truck, and when the image of the moving truck is passed through the object recognition part of the system, we end up with a lot of interesting things — the writing is recognized as a fence. These are the kinds of extreme errors, or what we call corner cases, that we need to pay attention to when we train machine learning for safety-critical applications like driving. It also turns out that deep neural network solutions for image classification work really well because they are trained on a huge data set, ImageNet, but the solutions capture more than the essence of the object — they also capture the context in which the object appears. MIT researchers led by Josh Tenenbaum and Boris Katz did an experiment a few years ago where they took regular objects and put them in a different context: they took shoes and put them on a bed, they took pots and pans and put them in the bathroom, and with this significant change in context, the performance of the top-performing ImageNet algorithms dropped by as much as 40 to 50 percent, which is really extraordinary. I'm sharing this not to discourage you from using these algorithms, but to point out that when you deploy an algorithm — especially in a safety-critical application — it's important to understand its scope: what works and what doesn't, when you can apply the algorithm and when you shouldn't. So keep this in mind as you think about building and deploying deep neural network solutions. There's another thing that is very critical for autonomous driving and for robots. You heard a beautiful lecture on adversarial attacks — well, it turns out you can very easily attack the images that get fed from the camera streams of cars to the decision-making engine of the car. In fact, it's quite easy to take a stop sign and perturb it a little bit — perturb it in such a way that you can't even tell with the naked eye that there is a difference between the two images — and with these small perturbations you can turn the stop sign into a yield sign, and you can imagine what kind of chaos that would create on a physical road. So machine learning is very powerful for building perception systems for robots, but as we employ it in the context of robots, it's important to keep in mind its scope — when it works and when it doesn't — and to think about what kind of guardrails we might put in place at decision time so that we get robust behavior.
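Here is roughly what that kind of small, nearly invisible perturbation looks like in code — the fast gradient sign method, assuming a trained classifier that outputs class probabilities over images scaled to [0, 1]; the model, label, and variable names are placeholders for illustration.

```python
import tensorflow as tf

def fgsm_perturb(model, image, true_label, epsilon=0.01):
    """One-step adversarial perturbation: nudge every pixel a tiny amount in
    the direction that increases the classification loss."""
    x = tf.convert_to_tensor(image[None, ...], dtype=tf.float32)
    y = tf.convert_to_tensor([true_label])
    loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()
    with tf.GradientTape() as tape:
        tape.watch(x)
        loss = loss_fn(y, model(x, training=False))
    grad = tape.gradient(loss, x)
    adversarial = tf.clip_by_value(x + epsilon * tf.sign(grad), 0.0, 1.0)
    return adversarial[0]

# adv = fgsm_perturb(sign_classifier, stop_sign_image, STOP_CLASS)
# To a person `adv` still looks like a stop sign; the classifier may disagree.
```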
So what could we do about the possibility of adversarial perturbations on the stop sign? Well, let's talk a little bit about decision making, and about how the car figures out what to do given its input. Reinforcement learning is causing a huge revolution in robotics. Why is that? Because we have built fast simulation systems and methodologies that allow us to run thousands of simulations in parallel in order to train a reinforcement learning policy, and we are also shrinking the gap between the hardware platforms and the simulation engines. You saw reinforcement learning earlier in the boot camp: it is concerned with how intelligent agents ought to take actions in an environment in order to maximize a notion of cumulative reward. Reinforcement learning is really about learning to act, and it differs from supervised learning in not needing labeled input-output pairs. In this example the agent has to learn to avoid fire: it's very simple — it gets a negative reward if it goes into the fire and a positive reward when it gets to the water — and that's essentially what the approach is like: you do trial and error, and eventually the positive rewards dominate the negative ones and direct the agent toward the best action. Here's an example where a reinforcement learning agent is trying to drive on a race track: it starts off making mistakes, but eventually it learns how to take the turns at high speed. Interestingly, you can take this idea and run it in parallel — you can put thousands of cars on the same track, and in the beginning they all make mistakes, but eventually they get the solution right, and that joint policy ends up being the policy that reliably controls the vehicles. It's a really exciting area. Reinforcement learning, much like deep learning, was invented decades ago, but it works well now because of the advent of computation: we have so much more compute today than 40 years ago, and so much more data for deep neural networks, so techniques that did not do well back then are all of a sudden creating extraordinary possibilities and capabilities in our agents. Now, this is a simple simulation. In order to get the simulation to drive a real robot, we need to think about the dynamics of the robot — in other words, we have to take into account what the vehicle looks like, what its kinematics are, what its dynamics are. Here is a vehicle running the policy learned in simulation, which is really cool: we can train in simulation and, if our model of the vehicle dynamics is good enough, take the policy learned very quickly in simulation and deploy it on the vehicle. And here are two vehicles racing each other, the white car and the green car — check out what the white car is doing, see how it snuck by; it's really great, I wish I could race like this. In this case the vehicles have a limited field of view: they get the positions of the other vehicles on the track from an external localization system, but they only know where the
vehicles within their field of view are so look at this that's great so okay  so what can we do with with these methodologies um so we've seen how we can use deep learning to  understand the um the view of the vehicle from cameras uh we've seen an example of learning uh to  steer what can we do well I think that these these advancements in robotics are really enabling the  possibility that you saw in the first Slide the possibility of creating many robots that can do  many tasks and much more complicated tasks than what we see here and so what I want to talk about  uh next is the autopilot how do we take these pieces together to enable the autopilot meaning to  enable a self-driving vehicle I don't mean so this autopilot is not the Tesla autopilot is it's just  the idea that you have a full self-driving vehicle now in order to do this we need to advance  the brain more we need to do more about the learning reasoning and planning part of  the car so let me uh let me ask you do you know when was the first autonomous Coast to  Coast Drive in the United States any guesses not you I know you know sorry two two thousand interesting uh interesting  well actually it was in 1995. in 1995 A Carnegie Mellon project called nav lab um built a car  that um that actually was was driven by a machine learning engine called Alvin and Alvin drove this  car all the way from Washington DC to Los Angeles and the car was in autonomous mode for a large  part of the highway driving but there was always a student there right ready to um to take  control and the car did not did not drive in autonomous mode when there were when it was  raining or when there was a lot of congestion or when the car had to had to take exits so this is  what the car did it went from Washington DC all the way to Los Angeles now 1995 is a long time  ago right I mean it's before many of you were born so it's really extraordinary to think about  what is needed uh in terms of of advancement in terms of progress in order to get from where  we were back then to the point where we can actually see deployed autonomous vehicles and  by the way you should come and check out the MIT autonomous vehicles which Alexander has built  over the past five years which are very powerful and can drive in our neighborhood and we'll  talk about how they drive but interestingly this was not the first time when we had cars  racing in autonomous modes on on highways in fact do you know when was the first autonomous  Highway Drive in the world anywhere in the world all right so it was in 1986. 
In 1986, the German engineer Ernst Dickmanns started thinking about how he could turn his van into an autonomous vehicle. He put computers and cameras on the van and began running tests on an empty section of the German Autobahn that had not yet been opened to public driving, and he was actually able to get his van to drive on that empty road. Interestingly, when he started developing this work, computers needed about ten minutes to analyze a single image — can you imagine? So how do you go from that to an autonomous vehicle driving at 90 kilometers an hour? What they did was develop very fast solutions for paring the image down to only the aspects they needed to look at, and they assumed there were no obstacles in the world, which made the problem much easier, because all the car had to do was stay on the road. It's really interesting to think about how visual processing improved from one frame per ten minutes to a hundred frames per second — that has been a game changer for autonomous cars, and it brings us back to the connection between hardware and software: we need both to get good solutions to real problems. The other thing that happened in autonomous driving was that lidar sensors decreased the uncertainty and increased safety, and today we have many companies and groups deploying self-driving cars. This is an example from Singapore — a vehicle we deployed, and in fact we had the public riding it in 2014. We have vehicles at MIT, and a lot of other groups are developing these vehicles. Before we had lidar we had sonar, and nothing worked with sonar: the sonar beams go forward and then bounce, and if the angle is within about plus or minus seven degrees you hear the ping back, but if the beam hits a surface angled more than seven degrees away, the ping bounces off onto other objects and walls and you get wrong distance measurements. With lidar that problem went away, so all of a sudden a powerful, accurate sensor made a huge difference: all the algorithms developed on sonar that didn't work started working when lidar was introduced, which is really exciting. Now, when we think about autonomous driving, several key parameters emerge as we think about the capabilities of these systems. One question is how complex the environment is in which the car operates — if it's an empty road, as in the German case, the problem is much easier. Then we have to ask how complex the interactions between the car and the environment are, how complex the reasoning of the car is, and how fast the car is going. Underlying all of these questions is a fundamental one: how does the car cope with uncertainty? You have seen that machine learning has uncertainty associated with it, so as you consider deploying machine learning in safety-critical applications, it is super important to consider the connection between your context, the uncertainties of the models you're deploying, and what the actual application requires. I will tell you that today we have very effective and deployable solutions for robot cars that move safely in easy environments, where there aren't many static or moving obstacles.
You can see from this example — this is the MIT car — the vehicle operating autonomously without any issues at Fort Devens, where there aren't too many obstacles, and the car is perfectly capable of avoiding the obstacle. By the way, that obstacle is my car, so I'm very glad the vehicle is capable of avoiding obstacles — in fact I was so convinced that I said, okay, we can use my car as the obstacle. But the sensors don't work well in bad weather: the uncertainty of the perception system increases significantly if it rains hard or snows, and the uncertainty about what other vehicles will do also increases in cases of extreme congestion, where you have erratic driving, with vehicles, people, scooters, even cows on the road — this is a video I took during a taxi ride in Bangalore; there come the cows. So there are many important preconditions, many of them revolving around certainty in perception, planning, learning, reasoning, and execution, before we can get to the robotaxi — but there are many other robot solutions that can happen today. Many companies and research teams are deploying and developing self-driving cars, and many of them follow a very simple recipe, which you could adopt to turn your own car into a self-driving car. Here's what you have to do. You take your car and extend it to drive-by-wire, so that your computer can talk to the steering and the throttle controls. You then further extend the car with sensors — mostly cameras and lidars. And then there is a suite of software modules: a perception module that provides support for making maps and for detecting static and dynamic obstacles; an estimation module that identifies where the robot is located, by comparing what the perception system sees now against a map created beforehand by looking at the infrastructure; and finally a learning, planning, and control system that figures out what the car should do based on where it is. That's it — that's the recipe. As you do this, you really have to think, foundationally, about the computational units you have to create: you have to process the sensor data, detect obstacles, localize the vehicle, plan, and then move. There are many works addressing each of these subtasks involved in autonomous navigation — some model-based, some machine learning based. But what's really interesting is that in this classical autonomous driving pipeline there are a lot of parameters: for every solution to each of these individual problems you have to hand-engineer parameters for every type of road situation the car will encounter, and then you have to think about how the modules get stitched together — you need to define the parameters that connect the modules. That is very tough to do in a robust way, and it brings brittleness to these solutions. You really have to think about what the parameters should be for nighttime driving, or rainy weather, or a country road, or a city road, or a road with no lane markings.
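That recipe can be summarized as a set of module interfaces and a sense-plan-act loop. The sketch below is structural only — the class and method names are invented for illustration — and each module hides exactly the hand-tuned parameters being warned about here.

```python
class Perception:
    def detect(self, camera_frame, lidar_scan):
        """Return the static and dynamic obstacles found in the current sensor data."""
        raise NotImplementedError

class Localization:
    def estimate_pose(self, camera_frame, lidar_scan, prior_map):
        """Match what the sensors see now against the prebuilt map of the infrastructure."""
        raise NotImplementedError

class Planner:
    def plan(self, pose, obstacles, goal):
        """Return a (steering, throttle) command that moves toward the goal."""
        raise NotImplementedError

def drive_loop(car, perception, localization, planner, prior_map, goal):
    """Classical pipeline: sense -> detect -> localize -> plan -> actuate."""
    while not car.reached(goal):
        frame, scan = car.read_sensors()
        obstacles = perception.detect(frame, scan)
        pose = localization.estimate_pose(frame, scan, prior_map)
        steering, throttle = planner.plan(pose, obstacles, goal)
        car.actuate(steering, throttle)     # the drive-by-wire interface
```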
Those road situations are really challenging things that the first solutions for autonomous driving had to reason through. Now, in Alexander's PhD thesis, his idea was to utilize a large data set to learn a representation of what humans did in similar situations, and to develop autonomous driving solutions that drive more like humans than the traditional pipeline, which is much more robotic. So the question becomes: how can we use machine learning to go directly from sensors to actuation? In other words, can we compress all the stuff in the middle and use learning to connect perception directly to action? The solutions we employed build on things we have already talked about: we can use deep learning and reinforcement learning to take us from images of roads to steering and throttle — to what to do. This is really great, because you can train on certain kinds of roads and then put the vehicle in completely different driving environments and situations, and you don't need new parameters, you don't need retraining — you can go directly to what the car has to do. In other words, we can learn a model that goes from raw perception — think of this as the pixels from a camera — plus noisy Street View maps (not the high-definition maps usually created by autonomous driving labs and companies) to directly infer a full continuous probability distribution over the space of all controls. Here the red lines indicate the inferred trajectories of the vehicle projected onto the image frame, and the opacity represents the probability density under our model. This is done by training a deep learning model that outputs the parameters of this conditional distribution directly from human driving data. More precisely, the input to our learning system consists of the feeds from three cameras — one looking forward and two looking to the sides — together with the Street View maps, and from this data we learn to maximize the likelihood of the particular control signals humans applied in particular situations. And amazingly, the solution also allows us to localize the vehicle, which is really exciting.
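A minimal sketch of that "maximize the likelihood of human control" idea is a small network that takes a camera patch and a rendered map patch and emits a Gaussian mixture over a single control value, trained with a negative-log-likelihood loss. The architecture, sizes, and variable names below are illustrative assumptions, not the actual model from this work.

```python
import tensorflow as tf

K = 3  # mixture components over, e.g., steering curvature (illustrative)

image_in = tf.keras.layers.Input(shape=(64, 160, 3))   # front-camera patch
map_in = tf.keras.layers.Input(shape=(64, 64, 1))      # rendered street-view map
x = tf.keras.layers.Conv2D(24, 5, strides=2, activation="relu")(image_in)
x = tf.keras.layers.Conv2D(36, 5, strides=2, activation="relu")(x)
x = tf.keras.layers.Flatten()(x)
m = tf.keras.layers.Conv2D(8, 5, strides=2, activation="relu")(map_in)
m = tf.keras.layers.Flatten()(m)
h = tf.keras.layers.Dense(128, activation="relu")(
        tf.keras.layers.Concatenate()([x, m]))
params = tf.keras.layers.Dense(3 * K)(h)   # per component: weight logit, mean, log-std
model = tf.keras.Model([image_in, map_in], params)

def gmm_nll(y_true, y_pred):
    """Negative log-likelihood of the observed human control (shape [N, 1])
    under the predicted mixture; training maximizes that likelihood."""
    logits, mu, log_std = tf.split(y_pred, 3, axis=-1)
    log_w = tf.nn.log_softmax(logits, axis=-1)
    z = (y_true - mu) / tf.exp(log_std)
    log_comp = -0.5 * tf.square(z) - log_std - 0.9189385  # -0.5 * log(2*pi)
    return -tf.reduce_logsumexp(log_w + log_comp, axis=-1)

model.compile(optimizer="adam", loss=gmm_nll)
# model.fit([camera_frames, map_patches], human_steering, ...)
```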
So we can get this human-like control, but learning human-like control requires a lot of data, and I told you at the beginning that we have to be careful with the data, because there are a lot of corner cases — typical near-accident situations — for which it is difficult to generate real data. For instance, if you want to ensure that the car will not be responsible for an accident and will know what to do in a road situation like this one, it would be pretty expensive to take a car and crash it just to generate the data. So instead we do the training in simulation. Alexander developed the VISTA simulator, which can model multiple agents, multiple types of sensors, and multiple types of agent-to-agent interaction. VISTA has recently been open sourced — you can get the code from vista.csail.mit.edu — and a lot of people are already using the system. What we get from VISTA is the ability to simulate different physical sensing modalities, including 2D cameras, 3D lidar, event cameras, and so forth; the ability to simulate different environmental situations and perturbations — weather, lighting, different types of roads; and the ability to simulate different types of interactions. Here's how we use VISTA: we take one high-quality data set collected from a human-driven vehicle, and then in simulation we can turn it into anything we want — erratic driving, near-accident situations, anything. For instance, here you can see our original data, and how it can be mapped in simulation, in a way that looks very realistic, into a new simulated trajectory that is erratic and that now becomes part of our training set in VISTA. We can use this data to learn a policy, evaluate the policy offline, and ultimately deploy it on the vehicle. This works by first updating the state of all the agents based on the vehicle dynamics and interactions, then recreating the world from the agents' new viewpoints — once you move them, the world looks different to each agent than it did in the original driving data set — and finally rendering the image from each agent's viewpoint and performing the control. There are several other simulation engines — some rely on imitation learning or on domain randomization, and there's CARLA, which is very effective for sim-to-real — but our solution works better than the others. Here you can see the results comparing what happens in VISTA against the existing state-of-the-art simulators: the top row shows crash locations in red, and the bottom row shows mean trajectory variation in color, and you can see that our solution really does the best. In fact, it is able to do things that other simulation-based, learning-based control cannot do — for instance, recover from orientations that point the vehicle off the road, or recover from being in the wrong lane. Here's the vehicle executing the learning-based control, and here's Alexander with his vehicle, which was trained using data from urban driving, now driving him to the soccer field — and you can see it's able to drive him there without any additional training and without ever having seen this road or being given data about it, which is pretty cool. All right, now I'm going to open the hood for you and show you what happens inside the decision engine of this solution. Let me orient you in this image: bottom right you see the map of the environment, top left the camera input stream, bottom left the attention map of the vehicle, and in the middle the decision engine. The decision engine has about a hundred thousand neurons and about half a million parameters, and I will challenge you to figure out whether there are any patterns that associate the state of the neurons with the behavior of the vehicle.
It's really hard to see, because there are so many of them; there is just so much happening in parallel at the same time. Then have a look at the attention map: it turns out this vehicle is looking at the bushes on the side of the road in order to make decisions. It still seems to do a pretty good job, but we asked ourselves: can we do better? Can we have more reliable learning-based solutions? Yesterday Ramin introduced liquid networks and neural circuit policies, and I want to drill down a little more into this area, because now you can compare how the original engine worked against what we get from liquid networks. Look at this: we have 19 neurons, and now it is much easier to look at patterns of activation of these neurons and associate them with the behavior of the vehicle. And the attention map is so much cleaner: the vehicle is looking at the road horizon and at the sides of the road, which is what we all do when we drive.
Remember that Ramin told us that this model, called a liquid time-constant network, is a continuous-time network. This model changes what the neuron computes: we start with a well-behaved state-space model to increase the neuron's stability during learning, and then we add nonlinearities over the synaptic inputs to increase the expressivity of the model during training and inference. By plugging these equations into each other, we get the equation of the LTC neuron, where the function determines not only the state of the neuron but is itself modulated by new inputs at inference time. What is really cool about this model is that it can dynamically adapt after training based on the inputs that it sees, and that is something very powerful about liquid networks. In addition to changing the neuron equation, we also change the wiring, and this new type of wiring essentially gives function to the neurons: in a standard deep neural network every neuron is the same, whereas in our architecture we have input neurons, control neurons, and interneurons, and they each do different things. With this in mind, we can look again at the beautiful solution enabled by liquid networks: it keeps the car on the road and only requires 19 neurons to deliver that kind of function, and the attention of the solution is extremely focused compared to other models like CNNs, CT-RNNs, or LSTMs, which are much noisier. So I hope you now have a better understanding of how liquid networks work and what their properties are.
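To make the update rule just described a bit more concrete, here is a minimal liquid time-constant cell integrated with a single explicit Euler step: a state whose effective time constant is modulated by a nonlinearity over the synaptic inputs. This is only a sketch under simplifying assumptions (a scalar base time constant, a fixed step size, a sigmoid nonlinearity); the official liquid-network and NCP implementations differ in their details.

```python
import torch
import torch.nn as nn

class LTCCell(nn.Module):
    """One liquid time-constant layer, dx/dt = -(1/tau + f) * x + f * A,
    where f depends on both the input and the current state."""
    def __init__(self, input_size, hidden_size, tau=1.0, dt=0.1):
        super().__init__()
        self.gate = nn.Linear(input_size + hidden_size, hidden_size)
        self.A = nn.Parameter(torch.zeros(hidden_size))  # learned bias term
        self.tau, self.dt = tau, dt

    def forward(self, x, h):
        # f > 0 and changes with the input, so the effective time constant
        # of each neuron adapts at inference time.
        f = torch.sigmoid(self.gate(torch.cat([x, h], dim=-1)))
        dh = -(1.0 / self.tau + f) * h + f * self.A
        return h + self.dt * dh                          # one Euler step
```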
Now we can take this model and apply it to many other problems. Here is a problem we call Canyon Run, where we have taken a liquid network and applied it to the task of flying a plane with one degree of freedom: the plane can only go up and down, but it has to hit these obstacles, which are at locations the plane does not know, and it also does not know what the environment looks like. When you implement the task with one-degree-of-freedom control, all you need is 11 liquid neurons; if you want to control all the degrees of freedom of the plane, you need 24 neurons. That's still much smaller than the huge models we talk about today. Here is another task we call drone dodgeball, where the objective is to keep a drone at a specified location, and the drone has to protect itself when balls come its way. You can see a two-degree-of-freedom solution to drone dodgeball, and that's the network: you can see how all the neurons fire, and you can really associate the function of this learned controller with the activation patterns of its neurons. We are very excited, because we are in fact able to extract decision trees from these kinds of solutions, and these decision trees provide human-understandable explanations, which is really important for safety-critical systems.
All right. Ramin told you that these liquid networks are dynamic causal models, and I want to show you some more examples that explain what that means. Here we are studying the task of finding an object inside a wooded environment, and here are some examples of the data we used to train this task: essentially, we had a human pilot fly a drone to accomplish it. Now, this is the data, so check this out. We then used a standard deep neural network and asked it to solve this problem, and the attention map of that network is really all over the place; you can see that the deep neural network solution is very confused. But check out something else: the data we collected was summertime data, and now it is fall, so the background is no longer green, we don't have as many leaves on the trees, and the context for this task has completely changed. By comparison, the liquid network is able to focus on the task, is not confused, and goes directly to the object it needs to find, and look how clean the attention map is in this example. This is very exciting, because it shows that with liquid networks we have a solution that is able, in some sense, to abstract out the context of training, which means we can get zero-shot transfer from one environment to another. Moreover, we have done the same task in the middle of winter, when we no longer have leaves, we have black tree lines, and the environment looks much different from the one where we trained, and this kind of ability to transfer from one set of training data to completely different environments is truly transformational for the capabilities of machine learning.
We have done more than that: we have taken our trained solution and deployed it in the lab. Here is Makram, who worked on this problem, and look at the attention map; the environment is not even the woods, it is an office, an indoor environment. We see other examples where we take our solution and deploy it to find the same object, the chair, just outside the Stata building: this is the deep neural network solution, which gets completely confused, and here is the liquid network solution, which has the exact same input and has no problem getting to the target. Let's see a few more examples, where we go hop by hop, actually searching for the object and doing multi-step solutions.
In fact, if I can get to my next video, sorry, the next one shows you that we can do this forever. Here is an infinite-hop demo that was done just outside on the baseball field: we placed three of the same objects we trained on at unknown locations, we added a search feature to our machine learning solution, and the system can go on and on, hopping from one object to the other. The final example I will show you is on the patio of the Stata building, where we have put a number of chairs, our favorite chair but also a lot of other similar chairs, and we can see that liquid networks generalize very well, whereas if we take an LSTM solution, it gets confused and goes to the wrong object.
All of these ideas come together to point to a new type of machine learning that yields models that generalize to unseen scenarios, essentially addressing a challenge with today's neural networks, which do not generalize well to unseen test scenarios. Because the models are so fast and compact, you can train them online, you can train them on edge devices, and you can really see that they are beginning to understand the tasks they are given; we are really beginning to get at the semantics of what these systems have to do.
So what does this have to do with the future? I think it is so exciting to use machine learning to study nature and to begin to understand the nature of intelligence. In our lab here at CSAIL, we have one project that is looking at whether we can understand the lives of whales. What do I mean by this? Here is an example where we have used a robotic drone (sorry, this is very loud) to find whales, look at what they do, and track them. Here is some imaging, and here are some clips of what the system is able to do: we used machine learning to identify the whales, and once a whale is identified, we can servo to its center, essentially tracking the whale along the way. Here is how the system works: you can see a group of whales, and you can see our robot servoing and following the whales as they move along. This is a very exciting project, because whales are such majestic, intelligent, and mysterious creatures, and if we can use our technologies to get better insight into their lives, we will be able to understand more about the other animals and creatures we share this beautiful planet with. We can study these whales from above, from the air, and we can also study them from within, from inside the ocean. Here is SoFi, our soft robotic fish, which Joseph, who is with us today, participated in building. It is a beautiful, very naturally moving robot that can get close to aquatic creatures and move in the same way they do without disturbing them; when you put thruster-based robots in ocean environments, they behave differently than the fish do, and they tend to scare the fish. If you are curious, the tail is made out of silicone, and there is a pump that can move water between two chambers in the tail.
Depending on how much water we move and in what proportions, you can get the fish to move forward, to turn left, or to turn right. So we can observe the motion of animals using robotic technologies, but we can do more: we can also listen in. Oops, I actually need sound here, I forgot about this. We can observe the whales and what they say to each other. [whale clicks] That was a sperm whale, and you have just heard the vocalization of sperm whales. We believe they are talking, that the sperm whale is talking to its family and friends, and we would like to know what it is saying. We have no idea, but we can use machine learning to make progress, and the way we can do that is by using the kind of data you have just heard to look for the presence of language, which is a major sign of intelligence. We can look at whether there are discrete units, whether there might be grammar, whether there are long-range dependencies, and whether there are other properties that human language has.
Our project is very much work in progress. I can't tell you today what the whales are saying to each other, but I can tell you that we have made progress: we are beginning to find which parts of their calls carry information; we can use machine learning to differentiate the clicks that allow the whales to echolocate from the clicks that seem to be vocalizations, information-carrying clicks; we can begin to look at what the protocols for information exchange are and how the whales engage in dialogue; and we can begin to ask what information they pass to one another. With this project we are trying to understand the phonetics, the semantics, the syntax, and the discourse of whales. We have a big data set consisting of about 22,000 clicks; the clicks get grouped into codas, and the codas are like phonemes. Using machine learning we can identify coda types, we can identify patterns in coda exchanges, and we can begin to really ask how it is that whales exchange information. If you are interested in this problem, please come see us, because we have a lot of very exciting and important projects aimed at reverse-engineering what this extraordinary and majestic animal is capable of doing.
Let me close by saying that in this class you have looked at a number of really exciting machine learning algorithms, but you have also looked at some of their technical challenges, including data availability, data quality, the amount of computation required, model size, and the ability of a model to run on edge devices or on huge devices. You have seen that many of our solutions are black-box solutions, that we sometimes get brittle behavior, and that we have easily attackable models. You have also seen some alternative models, like liquid networks, which attempt to address some of these questions. There is so much opportunity for developing improved machine learning, both using existing models and inventing new ones, and if we can do this, we can create an exciting world where machines will really empower us, augmenting and enhancing us in our cognitive and physical abilities. Just imagine waking up enabled by your personal assistant, which figures out the optimal time and helps you organize all the items you need for the day and then brings them to you, so you don't have to think about whether your outfit matches.
As you walk by a store, the image in the store window displays your picture with the latest fashion on your body, and inside the store, if you want to buy a certain shoe, an AI system can analyze how you walk, analyze your dimensions, and create a bespoke model just for you. Then all the clothing, all the items in our environment, can kind of awaken: our clothing could become robots, so it could become a monitoring device, but it could also become programmable. For instance, here you can see the ability of a sweater to change color so that the girl can match her friend. This is actually not far-fetched: we have a group on campus that is delivering programmable fibers that can change color and do some computation. At work, inside the intelligent boardroom, the temperature could get adjusted automatically by monitoring people's comfort and gestures, and just-in-time holograms could be used to make the virtual world much more realistic and much more connected; here they are discussing the design of a new flying car. And let's say we have these flying cars: we can integrate them with the IT infrastructure, and the cars will know your needs, so they can tell you, for instance, that you can buy the plants you have been wanting at a shop nearby by computing a small detour. Back at home, you can take a first ride on a bike, and the bike itself becomes a robot with adaptable wheels that appear and disappear according to your skill level. You can have robots that help with planting, you can have delivery robots, and there is the garbage bin that takes itself out. After a good day, when it is time for a bedtime story, you can begin to enter the story, control the flow, and interact with the characters. These are some possibilities for the kind of future that machine learning, artificial intelligence, and robots are enabling, and I am personally very excited about this future, with robots helping us with cognitive and physical work. But this future really depends on very important new advances that will come from all of you, and so I am excited to see what you will be doing in the years ahead. Thank you very much, and come work with us.
MIT_6S191_Introduction_to_Deep_Learning
MIT_6S191_AI_for_Science.txt
Thank you, Alex. Yeah, great to be part of this, and I hope you have had an exciting course so far and got a good foundation in the different aspects of deep learning as well as its applications. I kept a general title because the question is: is deep learning mostly about data, is it about compute, or is it also about the algorithms, and ultimately, how do these three come together? What I'll show you is the role of principled design of AI algorithms, and when I say challenging domains, I'll be focusing on AI for science. As Alex just said, at Caltech we have the AI for Science initiative to enable collaborations across the campus and have domain experts work closely with AI experts. To do that, how do we build a common language and foundation? Why isn't it just a straightforward application of current AI algorithms, what is the need to develop new ones, and how much domain specificity should we have versus domain-independent frameworks? The answer, of course, is that it depends, but the main aspect that makes it challenging, which I'll keep emphasizing throughout this talk, is the need for extrapolation, or zero-shot generalization. You need to be able to make predictions on samples that look very different from your training data, and many times you may not even have supervision: for instance, if you are asking about the activity of the Earth deep underground, you have never observed it. So the ability to do unsupervised learning is important, the ability to extrapolate beyond the training domain is important, and that means the approach cannot be purely data-driven. You have to take into account the domain priors, the domain constraints, and the physical laws, and the question is how we bring all of that together in algorithm design. You'll see some of that in this talk.
Now, this is all great as an intellectual pursuit, but is there a need? To me the need is huge, because if you look at scientific computing and so many applications in the sciences, the requirement for computing is growing exponentially. With the pandemic, the need to develop new drugs and vaccines and to understand the evolution of new viruses is so important, and this is a highly multi-scale problem: we can go all the way down to the quantum level and ask how precisely we can do the quantum calculations, but that is not possible at the scale of millions or billions of atoms. You cannot do these fine-scale calculations, especially through numerical methods, and then scale up to the millions or billions of atoms that are necessary to model the real world. Similarly, if you want to tackle climate change and precisely predict climate for the next century, we need to be able to do that at fine scale. Saying that the planet is going to warm up by one and a half or two degrees centigrade is of course disturbing, but what is even more disturbing is asking what will happen to specific regions of the world, the global south, the Middle East, India, places that may be severely affected by climate change. So you want to go even further, to a very fine spatial resolution,
ask what the climate risk is there, and then how we can mitigate it. This starts to require a lot of computing capability. If you take the current numerical methods and ask to run them at, say, one-kilometer resolution, then even predictions for just the next decade would take about 10^11 times more computing than what we have today. Similarly, for molecular properties, if we try to compute Schrödinger's equation, the fundamental equation that characterizes everything about the molecule, even for a 100-atom molecule it would take more than the age of the universe on current supercomputers. So no matter how much computing we have, we will be needing more. The hypothesis we are thinking about at NVIDIA is: yes, GPUs will give you some amount of scaling, and you can build supercomputers and scale up and out, but you need machine learning to get a further thousand-fold to million-fold speedup on top of that, and then you can go all the way to 10^9 and close that gap. So machine learning and AI become really critical to speeding up scientific simulations, and also to being data-driven. We have lots of measurements of the planet's weather over the last few decades, and we keep collecting data through satellites, but we have to extrapolate further, so how do we take the data along with the physical laws, fluid dynamics, how clouds move, how clouds form, and bring all of that into account together? The same holds for discovering new drugs: we have data on current drugs and a lot of information about their properties, so how do we use that data along with the physical models, whether at the level of classical mechanics or quantum mechanics? How do we decide at which level, at which precision, we need to work in order to ultimately make discoveries, either discovering new drugs or coming up with a precise characterization of climate change, and to do that with the right uncertainty quantification? We are not going to be able to predict precisely what the climate is going to be over the next decade, let alone the next century, but can we also predict the error bars? We need precise error bars, and all of this is a deep challenge for current deep learning methods, because we know deep learning tends to produce models that are overconfident when they are wrong. We have seen that in famous cases like the Gender Shades study, where it was shown that on darker-skinned people, and especially on women, the models were not only wrong but also very confident while being wrong. You cannot just directly apply such models to the climate case, because trillions of dollars are on the line in the policies designed from the uncertainties those models provide, and so we cannot abandon the current numerical methods and say let's do purely deep learning. And in the case of drug discovery,
the space is so vast that we can't possibly search through all of it, so how do we make the relevant design choices about where to explore? And there are so many other aspects beyond the search space: is this drug synthesizable, will it be cheap to synthesize? So I hope I have convinced you that these are really hard problems. The question is where to get started and how to make headway in solving them. What I want to cover in this lecture is the following. If you think about predicting climate change, I emphasized the fine-scale phenomena: it is well known in fluid dynamics that you can't just take measurements at the coarse scale and try to predict with those, because the fine-scale features are the ones driving the phenomena, so you will be wrong in your predictions if you work only at the coarse scale. The question, then, is how we design machine learning methods that can capture these fine-scale phenomena and that don't overfit to one resolution. There is an underlying continuous space on which the fluid moves; if we discretize, take only a few measurements, and fit a deep learning model to that, it may overfit to those discrete points rather than the underlying continuous phenomenon. So we will develop methods that can capture the underlying phenomenon in a resolution-invariant manner. The other aspect, which we will see for molecular modeling in the later part of the talk, is symmetry: if you rotate a molecule in 3D, the result should be equivariant, so we also need to capture that in our deep learning models. That is hopefully an overview of the challenges and some of the ways we can overcome them.
There is also lots of data available, which is a good opportunity, and we can now train large-scale models. We have seen that in the language realm, including (not shown here) the NVIDIA model with 530 billion parameters, and with that, language understanding has taken a huge quantum leap. That also shows that if we try to capture complex phenomena like the Earth's weather, ultimately the climate, or molecular modeling, we will also need big models, and we now have better and more data available, so all of this will help us get to good impact in the end. The other thing we are seeing is bigger and bigger supercomputers, and with AI and machine learning the benefit is that we don't have to worry about high-precision computing: traditional high-performance computing needed very high precision, 64-bit floating-point computations, whereas with AI computing we can work in 32-bit, 16-bit, or even 8-bit, as well as mixed precision, so we have much more flexibility in how we choose that precision, which is another aspect that is deeply beneficial.
Okay, so let me now get into some aspects of algorithmic design. I mentioned a few minutes ago that standard neural networks are fixed to a given resolution: they expect an image at a certain resolution, of a certain size,
and whatever task you are doing, the output is also a fixed size; if you are doing segmentation, it is the same size as the image. So why is this not enough? If you are looking to solve fluid flow, for instance around airfoils, with standard numerical methods you decide what the mesh should be, and depending on the task you may want a different mesh, a different resolution. We want methods that remain invariant across these different resolutions, because what we have is an underlying continuous phenomenon: we discretize and sample only at some points, but we don't want to overfit to predicting only at those points. We want to predict at points other than the ones seen during training, and under different initial conditions and boundary conditions. That is the flexibility we need when solving partial differential equations, so if we want to replace current PDE solvers, we cannot just use standard neural networks off the shelf.
Let's formulate what it means to learn such a PDE solver. If you are solving one instance of a PDE, a standard solver computes the solution at different query points; numerical methods do that by discretizing in space and time at an appropriate, fine enough, resolution and then numerically computing the solution. We, on the other hand, want to learn a solver for a family of partial differential equations, say fluid flow: I want to learn to predict quantities like the velocity or the vorticity as the fluid flows, and to do that I need to learn what happens under different initial conditions, say different initial velocities, and different boundary conditions, that is, the boundary of the domain. So given the initial and boundary conditions, I need to find the solution, and if I have multiple training instances, I can train to solve for a new one: given a new set of initial and boundary conditions, what is the solution, potentially queried at different points and at different resolutions? That is the problem setup. Any questions here? I hope that is clear. The main difference from the standard supervised learning you are familiar with, say on images, is that the problem is not fixed to one resolution: you can have different query points at different resolutions in different samples, and different resolutions at training versus test time. So the question is how we design a framework that does not fit to one resolution but can work across different resolutions, and we can think of that as learning in infinite dimensions: if you learn the map from the function space of initial and boundary conditions to the solution function space, then you can resolve at any resolution. So how do we go about building that in a principled way? To do that, just look at a standard neural network, say an MLP.
What an MLP has is a linear function, a matrix multiplication, and on top of that a nonlinearity, so you are taking linear processing and composing it with a nonlinear function. With this you get good expressivity: if you only did linear processing, that would be limiting, not an expressive model, one that cannot fit complex data like images, but once you add the nonlinearity you get an expressive network. The same is true of convolutional neural networks: a linear function combined with a nonlinearity. That is the basic setup, and we can ask whether we can mimic it when, instead of assuming the input lives in fixed, finite dimensions, the input is infinite-dimensional: a continuous set on which we define the initial or boundary conditions. How do we extend the construction to that scenario? We can keep the same notion: we will have a linear operator (an operator, because it now acts on a potentially infinite-dimensional space, but still linear) and compose it with a nonlinearity. We keep the same principle; the question is what a practical design for these linear operators looks like.
For this, let's take inspiration from solving linear partial differential equations. I don't know how many of you have taken a PDE class; if not, not to worry, I'll give you some quick intuition. The most popular example is heat diffusion: you have a heat source and you want to see how the heat diffuses in space, and that can be described as a linear partial differential equation. The way you solve it is through what is known as the Green's function, which describes how the heat propagates in space: at each point you have this kernel function, you integrate against it, and that is how you get the temperature at any point. Intuitively, you are convolving with the Green's function at every point to obtain the solution, the temperature everywhere; the propagation of heat is written as this integration, or convolution, operation. This is linear, and it is also not tied to any resolution, because it is continuous: you can query a new point and get the answer, so it is not fixed to finite dimensions. So this is, conceptually, a way to incorporate a linear operator. But if you only did this operation, you could only solve linear partial differential equations; instead, we add nonlinearities and compose the construction over several layers, which allows us to solve nonlinear PDEs, or really any general system. The idea is that we will learn how to do this integration, layer by layer, and obtain what we call a neural operator that can learn in infinite dimensions. Of course, the question is then how to come up with a practical architecture that can learn this kind of global, continuous convolution.
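As a purely illustrative sketch of one such layer, here is a naive discretization of a kernel integral operator, (K u)(x_i) ≈ (1/N) Σ_j κ(x_i, x_j) u(x_j), where the kernel κ is a small learned MLP and a pointwise nonlinearity follows. The parameterization and sizes are assumptions, and evaluating the kernel on all pairs of points costs O(N²), which is exactly why the Fourier parameterization described next is preferred in practice.

```python
import torch
import torch.nn as nn

class KernelIntegralLayer(nn.Module):
    """Naive kernel integral layer: evaluate a learned kernel on all pairs of
    query points and average, a discretization of a continuous convolution."""
    def __init__(self, channels, coord_dim=1, width=32):
        super().__init__()
        self.channels = channels
        self.kappa = nn.Sequential(                 # learned kernel kappa(x, y)
            nn.Linear(2 * coord_dim, width), nn.ReLU(),
            nn.Linear(width, channels * channels))

    def forward(self, u, coords):                   # u: (N, c), coords: (N, d)
        n = coords.size(0)
        pairs = torch.cat([coords.repeat_interleave(n, dim=0),
                           coords.repeat(n, 1)], dim=-1)     # all (x_i, x_j)
        k = self.kappa(pairs).view(n, n, self.channels, self.channels)
        out = torch.einsum("ijco,jc->io", k, u) / n          # Monte Carlo sum
        return torch.relu(out)                      # nonlinearity on top
```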
To get there, let's do some signal processing 101. If you haven't taken that course, or don't remember it, here is a quick primer. The idea is that doing convolution in the spatial domain is much less efficient than transforming to the Fourier domain, the frequency space: convolution in the spatial domain becomes multiplication in the frequency domain, so you can multiply in the frequency domain, take the inverse Fourier transform, and you have carried out the convolution. The other benefit of the Fourier transform is that it is global. If you use a standard convolutional neural network, the filters are small, so the receptive field is small even over a few layers, and you only capture local phenomena. That is fine for natural images, where you are looking at edges and everything is local, but for fluid flow and these partial differential equations there are lots of global correlations, and working in the frequency domain captures those global correlations effectively.
With this insight, what I want to emphasize is that we ultimately end up with an architecture that is very simple to implement. In each layer we take the Fourier transform of the signal into the frequency domain, we learn weights that decide how much to amplify or attenuate each frequency, and we keep only the low frequencies when taking the inverse Fourier transform, which acts as a regularization. Of course, if you did only one layer of this, it would severely limit expressivity, because you are throwing away all the high-frequency content, which is not good; but we add nonlinearities and stack several such layers, so it is okay. How much to filter out is a hyperparameter, a regularization, and it makes training stable. At a high level, then, we are training by learning weights in the frequency domain, with nonlinear transforms in between to give the model expressivity. This is a very simple formulation, but the previous slides hopefully also gave you insight into why it is principled: we can show theoretically that this construction can universally approximate any operator, including solution operators of nonlinear PDEs, and for specific families like fluid flows we can also argue that it does so very efficiently, without a lot of parameters, which is an approximation bound. In many cases this also incorporates the inductive bias of the domain: expressing signals in the Fourier, or frequency, domain is efficient, and even traditional numerical methods for fluid flows use spectral decomposition (they solve in the Fourier domain), so we are mimicking some of those properties in the design of the neural network. The other thing I want to emphasize is that you can now process the input at any resolution.
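Here is a minimal sketch of one such Fourier layer for a 1D signal: FFT, learned complex weights applied to the lowest few modes, inverse FFT, plus a pointwise linear path and a nonlinearity. The initialization and sizes are illustrative assumptions, and the published Fourier neural operator code differs in its details.

```python
import torch
import torch.nn as nn

class SpectralConv1d(nn.Module):
    """FFT -> learned complex weights on the lowest `modes` frequencies -> inverse FFT."""
    def __init__(self, channels, modes):
        super().__init__()
        self.modes = modes                        # number of low frequencies kept
        self.weights = nn.Parameter(
            (1.0 / channels) * torch.randn(channels, channels, modes,
                                           dtype=torch.cfloat))

    def forward(self, x):                         # x: (batch, channels, grid)
        x_ft = torch.fft.rfft(x, dim=-1)          # to the frequency domain
        out_ft = torch.zeros_like(x_ft)
        out_ft[..., :self.modes] = torch.einsum(  # mix channels, low modes only
            "bim,iom->bom", x_ft[..., :self.modes], self.weights)
        return torch.fft.irfft(out_ft, n=x.size(-1), dim=-1)

class FNOBlock(nn.Module):
    """One operator layer: spectral convolution + pointwise linear path + nonlinearity."""
    def __init__(self, channels, modes):
        super().__init__()
        self.spectral = SpectralConv1d(channels, modes)
        self.pointwise = nn.Conv1d(channels, channels, kernel_size=1)

    def forward(self, x):
        return torch.relu(self.spectral(x) + self.pointwise(x))
```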
If you now have input at a different resolution, you can still take the Fourier transform and run it through the same layers, and the idea is that this generalizes across resolutions far better than convolutional filters, which are learned at one resolution and do not easily transfer to another. Any questions here? I hope the concept was clear, at least that you got some insight into why the Fourier transform is a good inductive bias, that it lets you process signals at different resolutions, and that there is a principled view of each layer as solving a global, continuous convolution which, together with nonlinear transforms, gives an expressive model.
We have a few questions coming into the chat; are you able to see them, or would you like me to read them? It would be helpful if you can. Ah, now I can see them here, great. How generalizable is the implementation? That really depends: you can take Fourier transforms on different domains, you could also use nonlinear Fourier transforms, and then the question is whether to keep just the FFT or to add other learned transforms end to end. These are all aspects we are looking into further for domains where the grid may not be uniform, but the idea is that if it is uniform, you can use the FFT, which is very fast. Is the kernel R tied to the resolution? No: remember that R, the weight matrix, lives in the frequency domain, and you can always transform to frequency space no matter what the spatial resolution is, which is why this is okay. I hope that answers your question. The next question is essentially whether we integrate over different resolutions and use that single integrated representation to train the model. Yes: if your data comes at different resolutions, you can feed all of it to this network and train one model, and at test time, if you have query points at a different resolution, you can still use the same model. That is the benefit, because we don't want to train different models for different resolutions; that is clunky and expensive, and if a model does not easily generalize from one resolution to another, it may not be correctly learning the underlying phenomenon. The goal is not just to fit the training data at one resolution; the goal is to accurately predict, say, the fluid flow, and resolution invariance helps ensure we are doing that.
I'll show you some quick results. This is Navier-Stokes in two dimensions, where we train only on low-resolution data and test directly on high resolution, so this is zero-shot, and you can see visually that the prediction matches the ground truth. We can also see it in the ability to capture the energy spectrum: if you use a standard U-Net, it is well known that convolutional neural networks do not capture high frequencies that well.
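Continuing the sketch from a moment ago, the point about resolution can be seen directly: the same FNOBlock weights apply to grids of different sizes, because the learned weights live in the frequency domain rather than on a fixed grid (the grid sizes below are arbitrary).

```python
block = FNOBlock(channels=8, modes=12)
coarse = torch.randn(4, 8, 64)     # a batch of signals sampled on 64 grid points
fine = torch.randn(4, 8, 256)      # the same kind of signal on 256 grid points
print(block(coarse).shape)         # torch.Size([4, 8, 64])
print(block(fine).shape)           # torch.Size([4, 8, 256])
```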
This inability to capture high frequencies is already a limitation even at the resolution the model was trained at, and if you then try to extrapolate to a higher resolution than the training data, it deviates a lot more, while our model stays much closer. Right now we are also working on further versions that capture this high-frequency content even better. So that is the idea: we can handle resolutions beyond the training data. Let's see the other question, about the phase information. Remember, we keep both the phase and the amplitude: in the frequency domain we work with complex numbers, so we are not throwing away the phase; it is amplitude and phase together. That is a good point. I know we intuitively think we only process real numbers, because that is what standard neural networks do. And by the way, if you are using these models, be careful: PyTorch had a bug with complex numbers in the gradient updates, I think in the Adam algorithm (they forgot a conjugate), and we had to redo that. But yes, these are complex numbers.
I know there are a lot of different application examples toward the end; I am happy to share the slides so you can look through them. In the remaining few minutes, I want to add just one more detail on how to develop these methods further, which is to also add the physics laws. Since I am given the partial differential equation, it makes complete sense to also check how close the prediction is to satisfying the equation, so I can add an additional loss function that measures whether the equation is satisfied. One small detail here: if you want to do this at scale, auto-differentiation is expensive, but we compute the derivatives in the frequency domain, and very fast, which is something we developed. The other useful point is this: you can first train on lots of different problem instances, different initial and boundary conditions, and learn a good model; that is the supervised learning I described before. But now suppose you want to solve one specific instance: at test time I tell you the initial and boundary conditions and ask for the solution. You can then fine-tune further against the equation loss and get a more accurate solution, because you are not relying only on the generalization of the pre-trained model. By doing this we show that we can get good errors, and we can also ask what the trade-off is between training data, which requires having a numerical solver and generating enough training points, and just imposing the equation loss, which needs no data from an existing solver. What we see is that the right balance, to quickly get to good solutions over a range of different conditions, is something like a small amount of training data (query your existing solver to get a modest data set) augmented with the equation loss, which is unsupervised, imposed over many more instances; then you can get to good generalization capabilities.
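As a toy illustration of such a physics-informed loss, here is a residual penalty for a 1D viscous Burgers-type relation, u·u_x − ν·u_xx, with the spatial derivatives computed spectrally on a uniform grid. The particular equation, its coefficient, and the weighting between the data and physics terms are assumptions chosen for illustration, not the setup used in the work described here.

```python
import math
import torch

def pde_residual_loss(u, dx, nu=0.01):
    """Penalize the residual u * u_x - nu * u_xx on a uniform 1D grid,
    computing the derivatives in the frequency domain."""
    n = u.size(-1)
    k = 2 * math.pi * torch.fft.rfftfreq(n, d=dx)          # angular wavenumbers
    u_hat = torch.fft.rfft(u, dim=-1)
    u_x = torch.fft.irfft(1j * k * u_hat, n=n, dim=-1)      # spectral d/dx
    u_xx = torch.fft.irfft(-(k ** 2) * u_hat, n=n, dim=-1)  # spectral d2/dx2
    return ((u * u_x - nu * u_xx) ** 2).mean()

def total_loss(pred, target, dx, lam=0.1):
    data_loss = ((pred - target) ** 2).mean()               # supervised term
    return data_loss + lam * pde_residual_loss(pred, dx)    # plus physics term
```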
So this is the balance between being data-informed and physics-informed: a hybrid, where you can impose the physics loss over many more samples because it is essentially free, it acts like data augmentation, while you use only a small amount of supervised learning, and that trade-off can pay off very well. Is the model assumed to be at a given time? In these cases we looked at the overall error, the L2 error in both space and time. It depends on the setup: some of the example PDEs we used were time-independent, others time-dependent; this one is time-dependent, so this is the average error. I don't know how much longer I have. Alex, are we stopping at 10:45? Yes, we can go until 10:45 and wrap up the remaining questions after that.
So, quickly, another conceptual aspect that is very useful in practice is solving inverse problems. The way partial differential equation solvers are often used in practice, you typically already have the solution and you want to recover the initial condition. For instance, if you want to know about activity deep underground, that is like the initial condition: it propagated, and what you observe is the activity at the surface. The same holds for the famous example of black-hole imaging: you do not observe the black hole directly, and all of these indirect measurements mean we are solving an inverse problem. With this method, we could first learn the partial differential equation solver in the forward direction, as we just did, from the initial condition to the solution, and then try to invert it and find the best fit; or we can directly learn the inverse problem, that is, given the solution, learn to find the initial condition, trained over lots of data. Doing that is also fast and effective, and you can avoid the MCMC loop, which is expensive in practice, so you get both the speedup of replacing the partial differential equation solver and the speedup of avoiding MCMC.
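As a generic sketch of the first strategy mentioned above, inverting a trained forward surrogate rather than learning the inverse map directly, one can recover an initial condition by gradient descent through the learned operator. The optimizer, step count, and learning rate are placeholder choices, and `forward_op` stands for any differentiable trained surrogate.

```python
import torch

def invert_forward_operator(forward_op, y_obs, a_init, steps=500, lr=1e-2):
    """Find an initial condition `a` that best explains the observed solution
    y_obs, by minimizing ||forward_op(a) - y_obs||^2 through the surrogate."""
    a = a_init.clone().requires_grad_(True)
    opt = torch.optim.Adam([a], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((forward_op(a) - y_obs) ** 2).mean()
        loss.backward()
        opt.step()
    return a.detach()
```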
Chaos is another aspect, which I won't get into, because it is more challenging: for a chaotic system you ask whether you can predict its long-term statistics, and how to do that effectively, and we have frameworks there as well. There is also a nice connection with transformers: you can think of transformers as finite-dimensional systems and look at a continuous generalization where the attention mechanism becomes integration, and you can replace it with these Fourier-neural-operator-style models for the spatial mixing, and potentially even the channel mixing, which leads to good efficiency: the same performance as a full self-attention model, but much faster because of the Fourier transform operations. We have many applications of these different frameworks, just a few of which are listed here, but I'll stop and take questions instead. There are lots of application areas, and collaborating across many different disciplines has been really exciting.
Thank you so much, Anima. Are there any remaining questions from the group and the students? Yeah, we got one question in the chat: thanks again for the very nice presentation; will the neural operators be useful across application domains, in a form similar to pre-trained networks? Pre-trained networks, right. I guess the question is whether, similar to language models, you have one big model and apply it in different contexts. In language there is one common language: you wouldn't go from English directly to Spanish, but you could use that model and train again. So it depends; the question is the nature of the partial differential equations. For instance, if I have a model for fluid dynamics, that is a starting point for weather prediction or climate: I can use it and build other aspects on top, because there is fluid dynamics in the clouds, but there is also precipitation and other micro-physics. You could plug in models as modules in a bigger one, because there are parts it already knows how to solve well, or you could work in a multi-scale way. In this example of stress prediction in materials, you can't do it all at one scale: there is a coarse-scale solver and a fine-scale solver, and you can train solvers at different scales, perhaps separately, as neural operators, and then jointly fine-tune them together. So it is not as straightforward as language: yes, we have universal physical laws, but can we train one model to understand physics, chemistry, and biology? That seems too difficult, maybe one day, and the question is also what data we feed in and what constraints we add. I think one day it will probably happen, but it is going to be very hard. That is a good question.
I actually have one follow-up question: I think the ability for extrapolation specifically is very fascinating... Sorry, Alex, I think you are completely breaking up, I can't hear you. Oh sorry, is this better? No; maybe you can type. So the next question is, from the physics perspective, can we interpret this as learning the best renormalization scheme? Indeed, even for convolutional neural networks there have been connections to renormalization. We haven't looked into it, and there is potentially an interpretation, but the current interpretation we have is more straightforward: we are saying each layer is an integral operator, which would solve a linear partial differential equation, and we compose them together so that we get a universal approximator. But that remains to be seen; it is a good point. Another question is whether we can augment this to learn symbolic equations. It is certainly possible, but it is harder to discover new physics, new equations, new laws; it is always a question of what is unseen versus what we have seen. But yes, it is definitely possible. And Alex, I guess you are saying the ability for extrapolation is fascinating, and asking about the potential for integrating uncertainty
looking into uncertainty quantification on how to get conservative uncertainty for deep learning models right and so that's like the foundation is adversarial risk minimization or distributional robustness and we can scale them up uh so i think that's uh an important aspect uh in terms of robustness as well i think there are several you know uh other threats we are looking into like whether it is right designing say transformer models what is the role of self-attention to get good robustness or in terms of uh right the role of generative models to get robustness right so can you combine them to purify like kind of right the noise in certain way or denoise uh so we've seen all really good promising results there and i think there is ways to combine that here and we will need that in so many applications excellent yeah thank you maybe time for just one more question now um sorry i still couldn't hear you buddy i think you were saying thank you um yeah so the other question is the speed up versus of the um yeah so it is on the wall clock time with the traditional solvers and the speed increase with parallelism or um yeah so i mean right so we can always certainly further make this efficient right and now we are scaling this up one in fact uh more than thousand gpus uh in some of the applications and so there's also the aspect of the engineering side of it uh which is very important at nvidia like you know that's what we're looking into this combination of data and model parallelism how to do that at scale um so yeah so those aspects become very important as well when we're looking at scaling this up awesome thank you so much anina for an excellent talk and for fielding sorry i can't hear anyone for some reason it's [Music] can others in the zoom hear me yeah somehow yes i i don't know what maybe it's on my end but uh um it's fully muffled for me so uh but you know anyway i i think it has been a lot of fun so yeah and uh yeah i hope uh you got now a good foundation of deep learning and you can go ahead and do a lot of cool projects so yeah reach out to me if you have further questions or anything you want to discuss further thanks a lot thanks everyone
MIT_6S191_Introduction_to_Deep_Learning
MIT_6S191_2021_Introduction_to_Deep_Learning.txt
Good afternoon, everyone, and welcome to MIT 6.S191 -- Introduction to Deep Learning. My name is Alexander Amini, and I'm so excited to be your instructor this year, along with Ava Soleimany, in this new virtual format. 6.S191 is a two-week bootcamp on everything deep learning, and we'll cover a ton of material in only two weeks, so I think it's really important for us to dive right into the lectures. But before we do that, I want to motivate exactly why I think this is such an awesome field to study. When we taught this class last year, I decided to try introducing it very differently: instead of me telling the class how great 6.S191 is, I wanted to let someone else do that instead. So I want to start this year by showing you how we introduced 6.S191 last year.
[Obama] Hi everybody, and welcome to MIT 6.S191, the official introductory course on deep learning taught here at MIT. Deep learning is revolutionizing so many things, from robotics to medicine and everything in between. You'll learn the fundamentals of this field and how you can build some of these incredible algorithms. In fact, this entire speech and video are not real and were created using deep learning and artificial intelligence, and in this class you'll learn how. It has been an honor to speak with you today, and I hope you enjoy the course.
So, in case you couldn't tell, that was not a real video or audio, and the audio you heard was purposely degraded a bit further to make it even more obvious that this was not real and to avoid potential misuse. Even with the purposely degraded audio, that intro went somewhat viral after the course last year, and we got some really great and interesting feedback. To be honest, after last year I thought it was going to be really hard for us to top it this year, but I was wrong, because the one thing I love about this field is that it's moving so incredibly fast that even within the past year the state of the art has significantly advanced. The video we used last year was made with deep learning, but it was not a particularly easy video to create: it required a full video of Obama speaking, and it intelligently stitched together parts of the scene to make it look and appear as if he were mouthing the words that I said. To see the behind the scenes, here is the same video with my voice.
[Alexander] Hi everybody, and welcome to MIT 6.S191, the official introductory course on deep learning taught here at MIT.
Now it's actually possible to use just a single static image, not a full video, to achieve the exact same thing, and here you can see eight more examples of Obama created using just a single static image: no more full dynamic videos, but the same incredible realism and result using deep learning. Of course, there's nothing restricting us to one person; this method generalizes to different faces, and there's nothing restricting us to humans anymore, or to individuals that the algorithm has ever seen before.
[Alexander] Hi everybody, and welcome to MIT 6.S191, the official introductory course on deep learning taught here at MIT.
The ability to generate these types of  dynamic moving videos from only a single image is remarkable to me and it's a testament  to the true power of deep learning in this class you're going to actually not only learn about  the technical basis of this technology but also some of the very important and very important  ethical and societal implications of this work as well now I hope this was a really great way  to get you excited about this course and 6.S191 and with that let's get started we can actually  start by taking a step back and asking ourselves what is deep learning defining deep learning  in the context of intelligence intelligence is actually the ability to process information such  that it can be used to inform a future decision now the field of artificial intelligence or AI  is a science that actually focuses on building algorithms to do exactly this to build algorithms  to process information such that they can inform future predictions now machine learning you  can think of this as just a subset of AI that actually focuses on teaching an algorithm to  learn from experiences without being explicitly programmed now deep learning takes this idea even  further and it's a subset of machine learning that focuses on using neural networks to automatically  extract useful patterns in raw data and then using these patterns or features to learn to perform  that task and that's exactly what this class is about this class is about teaching algorithms  how to learn a task directly from raw data and we want to provide you with a solid  foundation both technically and practically for you to understand under the hood how these  algorithms are built and how they can learn so this course is split between technical  lectures as well as project software labs we'll cover the foundation starting today with neural  networks which are really the building blocks of everything that we'll see in this  course and this year we also have two brand new really exciting hot topic lectures  focusing on uncertainty and probabilistic deep learning as well as algorithmic bias and fairness  finally we'll conclude with some really exciting guest lectures and student project presentations  as part of a final project competition that all of you will be eligible to win some really exciting  prizes now a bit of logistics before we dive into the technical side of the lecture for those of you  taking this course for credit you will have two options to fulfill your credit requirement the  first option will be to actually work in teams of or teams of up to four or individually  to develop a cool new deep learning idea now doing so will make you eligible to win some of  the prizes that you can see on the right hand side and we realize that in the context of this  class which is only two weeks that's an extremely short amount of time to come up  with an impressive project or research idea so we're not going to be judging you on the  novelty of that idea but rather we're not going to be judging you on the results of that  idea but rather the novelty of the idea your thinking process and how you how impactful this  idea can be but not on the results themselves on the last day of class you will actually  give a three-minute presentation to a group of judges who will then award the winners and the  prizes now again three minutes is extremely short to actually present your ideas and present your  project but i do believe that there's an art to presenting and conveying your ideas concisely  and clearly in such a short amount of 
time so we will be holding you strictly to that  to that strict deadline the second option to fulfill your grade requirement is to write a  one-page review on a deep learning paper here the grade is based more on the clarity of the writing  and the technical communication of the main ideas this will be due on thursday the last thursday  of the class and you can pick whatever deep learning paper you would like if you would  like some pointers we have provided some guide papers that can help you get started if  you would just like to use one of those for your review in addition to the final project  prizes we'll also be awarding this year three lab prizes one associated to each of the software  labs that students will complete again completion of the software labs is not required for grade of  this course but it will make you eligible for some of these cool prices so please we encourage  everyone to compete for these prizes and get the opportunity to win them all please  post the piazza if you have any questions visit the course website for announcements  and digital recordings of the lectures etc and please email us if you have any  questions also there are software labs and office hours right after each of these  technical lectures held in gather town so please drop by in gather town to ask any questions  about the software labs specifically on those or more generally about past software labs or  about the lecture that occurred that day now this team all this course has a incredible  group of TA's and teaching assistants that you can reach out to at any time  in case you have any issues or questions about the material that you're learning and finally we want to give a  huge thanks to all of our sponsors who without their help this class would  not be possible this is the fourth year that we're teaching this class and each year it  just keeps getting bigger and bigger and bigger and we really give a huge shout out to our  sponsors for helping us make this happen each year and especially this year in light of the virtual  format so now let's start with the fun stuff let's start by asking ourselves a question about  why why do we all care about deep learning and specifically why do we care right now understand  that it's important to actually understand first why is deep learning or how is deep learning  different from traditional machine learning now traditionally machine learning algorithms  define a set of features in their data usually these are features that are handcrafted or hand  engineered and as a result they tend to be pretty brittle in practice when they're deployed the  key idea of deep learning is to learn these features directly from data in a hierarchical  manner that is can we learn if we want to learn how to detect a face for example can we learn  to first start by detecting edges in the image composing these edges together to detect  mid-level features such as a eye or a nose or a mouth and then going deeper and composing  these features into structural facial features so that we can recognize this face this is this  hierarchical way of thinking is really core to deep learning as core to everything  that we're going to learn in this class actually the fundamental building blocks though  of deep learning and neural networks have actually existed for decades so one interesting thing to  consider is why are we studying this now now is an incredibly amazing time to study these algorithms  and for one reason is because data has become much more pervasive these models are 
extremely  hungry for data and at the moment we're living in an era where we have more data than ever  before secondly these algorithms are massively parallelizable so they can benefit tremendously  from modern gpu hardware that simply did not exist when these algorithms were developed and finally  due to open source toolboxes like tensorflow building and deploying these models  has become extremely streamlined so let's start actually with  the fundamental building block of deep learning and of every neural network that  is just a single neuron also known as a perceptron so we're going to walk through exactly what is  a perceptron how it's defined and we're going to build our way up to deeper neural networks all the  way from there so let's start we're really at the basic building block the idea of a perceptron  or a single neuron is actually very simple so i think it's really important for all of you  to understand this at its core let's start by actually talking about the forward propagation  of information through this single neuron we can define a set of inputs x i through x m  which you can see on the left hand side and each of these inputs or each of these numbers  are multiplied by their corresponding weight and then added together we take this single number  the result of that addition and pass it through what's called a nonlinear activation function  to produce our final output y we can actually actually this is not entirely correct because one  thing i forgot to mention is that we also have what's called a bias term in here which allows  you to shift your activation function left or right now on the right hand side of this diagram  you can actually see this concept illustrated or written out mathematically as a single equation  you can actually rewrite this in terms of linear algebra matrix multiplications and dot products  to to represent this a bit more concisely so let's do that let's now do that with x  capital x which is a vector of our inputs x1 through xm and capital w which is a vector  of our weights w1 through wm so each of these are vectors of length m and the output is very  simply obtained by taking their dot product adding a bias which in this case is  w0 and then applying a non-linearity g one thing is that i haven't i've been mentioning  it a couple of times this non-linearity g what exactly is it because i've mentioned it now a  couple of times well it is a non-linear function one common example of this nonlinear activation  function is what is known as the sigmoid function defined here on the right in fact there  are many types of nonlinear functions you can see three more examples here including the  sigmoid function and throughout this presentation you'll actually see these tensorflow code  blocks which will actually illustrate uh how we can take some of the topics that we're  learning in this class and actually practically use them using the tensorflow software library now  the sigmoid activation function which i presented on the previous slide is very popular since it's  a function that gives outputs it takes as input any real number any activation value and it  outputs a number always between 0 and 1. 
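As a minimal sketch of the forward pass just described — take a dot product of the inputs and weights, add a bias, and pass the result through the nonlinearity — the perceptron fits in a few lines of Python; the input and weight values here are made up purely for illustration, not taken from the slides:

```python
import numpy as np

def sigmoid(z):
    # squashes any real-valued input into the open interval (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

def perceptron(x, w, b):
    # the three steps: dot product of inputs and weights, add the bias, apply g
    z = np.dot(w, x) + b
    return sigmoid(z)

# illustrative values only (not from the lecture slides)
x = np.array([2.0, -1.0, 0.5])    # inputs x1 ... xm
w = np.array([0.3, 0.8, -0.5])    # weights w1 ... wm
b = 1.0                           # bias term w0
print(perceptron(x, w, b))        # prints a value strictly between 0 and 1
```

Whatever the inputs and weights happen to be, the sigmoid at the end always maps the result to a value between 0 and 1.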
so this makes it really really suitable for problems  and probability because probabilities also have to be between 0 and 1 so this makes them very well  suited for those types of problems in modern deep neural networks the relu activation function which  you can see on the right is also extremely popular because of its simplicity in this case it's a  piecewise linear function it is zero before when it's uh in the negative regime and it is strictly  the identity function in the positive regime but one really important question that i  hope that you're asking yourselves right now is why do we even need activation functions  and i think actually throughout this course i do want to say that no matter what  i say in the course i hope that always you're questioning why this is a necessary step  and why do we need each of these steps because often these are the questions that can lead to  really amazing research breakthroughs so why do we need activation functions now the point of  an activation function is to actually introduce non-linearities into our network because these are  non-linear functions and it allows us to actually deal with non-linear data this is extremely  important in real life especially because in the real world data is almost always non-linear  imagine i told you to separate here the green points from the red points but all you could  use is a single straight line you might think this is easy with multiple lines or curved lines  but you can only use a single straight line and that's what using a neural network with a linear  activation function would be like that makes the problem really hard because no matter how deep the  neural network is you'll only be able to produce a single line decision boundary and you're  only able to separate your space with one line now using non-linear activation functions  allows your neural network to approximate arbitrarily complex functions and that's what  makes neural networks extraordinarily powerful let's understand this with a simple example so  that we can build up our intuition even further imagine i give you this trained network now with  weights on the left hand side 3 and negative 2. this network only has two inputs x1 and x2  if we want to get the output of it we simply do the same story as i said before first take  a dot product of our inputs with our weights add the bias and apply a non-linearity  but let's take a look at what's inside of that non-linearity it's simply a  weighted combination of our inputs and the in the form of a two-dimensional line  because in this case we only have two inputs so if we want to compute this output it's the  same story as before we take a dot product of x and w we add our bias and apply our non-linearity  what about what's inside of this nonlinearity g well this is just a 2d line in fact since it's  just a two dimensional line we can even plot it in two-dimensional space this is called  the feature space the input space in this case the feature space and the input  space are equal because we only have one neuron so in this plot let me describe what you're seeing  so on the two axes you're seeing our two inputs so on one axis is x1 one of the inputs on the other  axis is x2 our other input and we can plot the line here our decision boundary of this trained  neural network that i gave you as a line in this space now this line corresponds to actually all  of the decisions that this neural network can make because if i give you a new data point for example  here i'm giving you negative 1 2. 
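As a quick numerical check of this example — the weights 3 and −2 are the ones just described, while the bias of 1 is inferred from the arithmetic the walkthrough below arrives at, so treat it as an assumption — a few lines reproduce the same number:

```python
import numpy as np

w0, w = 1.0, np.array([3.0, -2.0])   # bias and weights of the trained perceptron
x = np.array([-1.0, 2.0])            # the new data point (x1, x2) = (-1, 2)

z = w0 + np.dot(w, x)                # 1 + 3*(-1) + (-2)*2 = -6
y = 1.0 / (1.0 + np.exp(-z))         # sigmoid nonlinearity
print(z, y)                          # -6.0 and roughly 0.002
```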
this point lies somewhere in this space specifically at x one  equal to negative one and x two equal to two that's just a point in the space i want you to  compute its weighted combination and i i can actually follow the perceptron equation to get  the answer so here we can see that if we plug it into the perceptron equation we get 1 plus minus  3 minus 4 and the result would be minus 6. we plug that into our nonlinear activation function  g and we get a final output of 0.002 now in fact remember that the sigmoid function actually  divides the space into two parts of either because it outputs everything between zero and  one it's dividing it between a point at 0.5 and greater than 0.5 and less than 0.5 when the input  is less than 0 and greater than 0.5 that's when the input is positive we can illustrate the space  actually but this feature space when we're dealing with a small dimensional data like in this case  we only have two dimensions but soon we'll start to talk about problems where we have thousands or  millions or in some cases even billions of inpu of uh weights in our neural network and then  drawing these types of plots becomes extremely challenging and not really possible anymore but  at least when we're in this regime of small number of inputs and small number of weights we can make  these plots to really understand the entire space and for any new input that we obtain for example  an input right here we can see exactly that this point is going to be having an activation function  less than zero and its output will be less than 0.5 the magnitude of that actually is computed  by plugging it into the perceptron equation so we can't avoid that but we can immediately get an  answer on the decision boundary depending on which side of this hyperplane that we lie on when we  plug it in so now that we have an idea of how to build a perceptron let's start by building neural  networks and seeing how they all come together so let's revisit that diagram of the perceptron  that i showed you before if there's only a few things that you get from this class i really  want everyone to take away how a perceptron works and there's three steps remember them always  the dot product you take a dot product of your inputs and your weights you add a bias and you  apply your non-linearity there's three steps let's simplify this diagram a little bit let's  clean up some of the arrows and remove the bias and we can actually see now that every line here  has its own associated weight to it and i'll remove the bias term like i said for simplicity  note that z here is the result of that dot product plus bias before we apply the activation  function though g the final output though is is simply y which is equal to the activation  function of z which is our activation value now if we want to define a multi-output neural  network we can simply add another perceptron to this picture so instead of having one perceptron  now we have two perceptrons and two outputs each one is a normal perceptron exactly like we saw  before taking its inputs from each of the x i x ones through x m's taking the dot product adding a  bias and that's it now we have two outputs each of those perceptrons though will have a different set  of weights remember that we'll get back to that if we want it so actually one thing to keep  in mind here is because all the inputs are densely connected every input has a connection to  the weights of every perceptron these are often called dense layers or sometimes fully connected  layers now we're 
through this class you're going to get a lot of experience actually coding up  and practically creating some of these algorithms using a software toolbox called tensorflow  so now that we have the understanding of how a single perceptron works and how a dense  layer works this is a stack of perceptrons let's try and see how we can actually build up  a dense layer like this all the way from scratch to do that we can actually start by initializing  the two components of our dense layer which are the weights and the biases now that we  have these two parameters of our neural network of our dense layer we can actually define the  forward propagation of information just like we saw it and learn about already that forward  propagation of information is simply the dot product or the matrix multiplication of our  inputs with our weights at a bias that gives us our activation function here and then we  apply this non-linearity to compute the output now tensorflow has actually implemented  this dense layer for us so we don't need to do that from scratch instead we  can just call it like shown here so to create a dense layer with two outputs  we can specify this units equal to two now let's take a look at what's called a single  layered neural network this is one we have a single hidden layer between our inputs and our  outputs this layer is called the hidden layer because unlike an input layer and an output layer  the states of this hidden layer are typically unobserved they're hidden to some extent they're  not strictly enforced either and since we have this transformation now from the input layer to  the hidden layer and from the hidden layer to the output layer each of these layers are going to  have their own specified weight matrices we'll call w1 the weight matrices for the first layer  and w2 the weight matrix for the second layer if we take a zoomed in look at one of the neurons  in this hidden layer let's take for example z2 for example this is the exact same perceptron that we  saw before we can compute its output again using the exact same story taking all of its inputs x1  through xm applying a dot product with the weights adding a bias and that gives us z2 if we  look at a different neuron let's suppose z3 we'll get a different value here because the  weights leading to z3 are probably different than those leading to z2 now this picture looks a bit  messy so let's try and clean things up a bit more from now on i'll just use this symbol here to  denote what we call this dense layer or fully connected layers and here you can actually  see an example of how we can create this exact neural network again using tensorflow  with the predefined dense layer notation here we're creating a sequential model where  we can stack layers on top of each other first layer with n neurons and the second  layer with two neurons the output layer and if we want to create a deep neural network  all we have to do is keep stacking these layers to create more and more hierarchical models  ones where the final output is computed by going deeper and deeper into the network and  to implement this in tensorflow again it's very similar as we saw before again using the tf keras  sequential call we can stack each of these dense layers on top of each other each one specified  by the number of neurons in that dense layer n1 and 2 but with the last output layer fixed to  two outputs if that's how many outputs we have okay so that's awesome now we have an idea of not  only how to build up a neural network directly from a 
perceptron but how to compose them together  to form complex deep neural networks let's take a look at how we can actually apply them to a very  real problem that i believe all of you should care very deeply about here's a problem that we  want to build an ai system to learn to answer will i pass this class and we can start with  a simple two feature model one feature let's say is the number of lectures that you attend as  part of this class and the second feature is the number of hours that you spend working on your  final project you do have some training data from all of the past participants of success 191  and we can plot this data on this feature space like this the green points here actually indicate  students so each point is one student that has passed the class and the red points  are students that have failed the pass failed the class you can see their where they are  in this feature space depends on the actual number of hours that they attended the lecture the number  of lectures they attended and the number of hours they spent on the final project and then there's  you you spent you have attended four lectures and you have spent five hours on your  final project and you want to understand uh will you or how can you build a neural network  given everyone else in this class will you pass or fail uh this class based on the training data  that you see so let's do it we have now all of the uh to do this now so let's build a neural  network with two inputs x1 and x2 with x1 being the number of lectures that we attend x2 is the  number of hours you spend on your final project we'll have one hidden layer with three units and  we'll feed those into a final probability output by passing this class and we can see that  the probability that we pass is 0.1 or 10 that's not great but the reason is because  that this model uh was never actually trained it's basically just a a baby it's never seen any  data even though you have seen the data it hasn't seen any data and more importantly you haven't  told the model how to interpret this data it needs to learn about this problem first it knows nothing  about this class or final projects or any of that so one of the most important things to do  this is actually you have to tell the model when it's able when it is making bad predictions  in order for it to be able to correct itself now the loss of a neural network actually defines  exactly this it defines how wrong a prediction was so it takes as input the predicted outputs  and the ground truth outputs now if those two things are very far apart from each other  then the loss will be very large on the other hand the closer these two things are from each  other the smaller the loss and the more accurate the loss the model will be so we always  want to minimize the loss we want to incur that we want to predict something that's  as close as possible to the ground truth now let's assume we have not just the data  from one student but as we have in this case the data from many students we now care about  not just how the model did on predicting just one prediction but how it did on average  across all of these students this is what we call the empirical loss and it's  simply just the mean or the average of every loss from each individual  example or each individual student when training a neural network we  want to find a network that minimizes the empirical loss between our  predictions and the true outputs now if we look at the problem of binary  classification where the neural network like we want 
to do in this case is supposed to  answer either yes or no one or zero we can use what is called a soft max cross entropy loss now  the soft max cross entropy loss is actually built is actually written out here and it's  defined by actually what's called the cross entropy between two probability  distributions it measures how far apart the ground truth probability distribution is  from the predicted probability distribution let's suppose instead of predicting binary  outputs will i pass this class or will i not pass this class instead you want to predict the  final grade as a real number not a probability or as a percentage we want the the grade that you  will get in this class now in this case because the type of the output is different we also need  to use a different loss here because our outputs are no longer 0 1 but they can be any real number  they're just the grade that you're going to get on on the final class so for example here since this  is a continuous variable the grade we want to use what's called the mean squared error this measures  just the the squared error the squared difference between our ground truth and our predictions  again averaged over the entire data set okay great so now we've seen two loss functions  one for classification binary outputs as well as regression continuous outputs and the problem now  i think that we need to start asking ourselves is how can we take that loss function we've seen our  loss function we've seen our network now we have to actually understand how can we put those two  things together how can we use our loss function to train the weights of our neural network  such that it can actually learn that problem well what we want to do is actually  find the weights of the neural network that will minimize the loss of our  data set that essentially means that we want to find the ws in our neural network  that minimize j of w jfw is our empirical cost function that we saw in the previous slides that  average loss over each data point in the data set now remember that w capital w is simply  a collection of all of the weights in our neural network not just from one layer  but from every single layer so that's w0 from the zeroth layer to the first layer  to the second layer all concatenate into one in this optimization problem we want to optimize  all of the w's to minimize this empirical loss now remember our loss function is just a  simple function of our weights if we have only two weights we can actually plot this entire  lost landscape over this grid of weight so on the one axis on the bottom you can see weight number  one and the other one you can see weight zero there's only two weights in this neural network  very simple neural network so we can actually plot for every w0 and w1 what is the loss what is the  error that we'd expect to see and obtain from this neural network now the whole process of training a  neural network optimizing it is to find the lowest point in this lost landscape that will tell us our  optimal w0 and w1 now how can we do that the first thing we have to do is pick a point so let's pick  any w0 w1 starting from this point we can compute the gradient of the landscape at that point  now the gradient tells us the direction of highest or steepest ascent okay  so that tells us which way is up okay if we compute the gradient of our  loss with respect to our weights that's the derivative our gradient for the loss  with respect to the weights that tells us the direction of which way is up on that  lost landscape from where 
we stand right now. instead of going up though we want to find the lowest loss so let's take the negative of our gradient and take a small step in that direction and this moves us a little bit closer to the lowest point we then compute the gradient at this new point and repeat the process until we converge and we will converge to a local minimum we don't know if it will converge to the global minimum but at least in theory it should converge to a local minimum now we can summarize this algorithm which is also known as gradient descent as follows we start by initializing all of our weights randomly and then we loop until convergence from that initial point we compute the gradient which tells us which way is up so we take a small step in the opposite direction how small is determined by multiplying our gradient by this factor eta this factor is called the learning rate and we'll learn more about it later now again in tensorflow we can see the pseudocode of the gradient descent algorithm written out in code we randomize all of our weights which initializes our search our optimization process at some point in space and then we keep looping over and over again we compute the loss we compute the gradient and we take a small step of our weights in the direction of the negative gradient

but now let's take a look at this gradient term this is how we actually compute the gradient it tells us how the loss is changing with respect to a weight but i never actually told you how we compute it so let's talk about this process which is extremely important in training neural networks it's known as backpropagation so how does backpropagation work how do we compute this gradient let's start with a very simple neural network probably the simplest neural network in existence it has only one input one hidden neuron and one output computing the gradient of our loss j of w with respect to one of the weights in this case just w2 for example tells us how much a small change in w2 is going to affect our loss j so if we move w2 around by an infinitesimally small amount how will that affect our loss that's exactly what this gradient the derivative of j with respect to w2 tells us.
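Written out symbolically for reference while the chain rule is applied next, the toy network and the derivatives in question look roughly as follows; the bias terms and the hidden unit's own activation are omitted here purely to keep the notation light, which is an added simplification rather than something stated in the lecture:

```latex
% one input x, one hidden unit z_1 (weight w_1), one output \hat{y} (weight w_2)
z_1 = w_1 \, x, \qquad
\hat{y} = g\!\left(w_2 \, z_1\right), \qquad
J(W) = \mathcal{L}\!\left(\hat{y},\, y\right)

% the quantities gradient descent needs: how the loss changes with each weight
\frac{\partial J(W)}{\partial w_2}
\quad \text{and, one layer deeper,} \quad
\frac{\partial J(W)}{\partial w_1}
```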
so if we write out this derivative we can apply the chain rule to compute it what does that look like specifically we can decompose the derivative into the derivative of the loss with respect to the output multiplied by the derivative of the output with respect to w2 that is dJ/dw2 = (dJ/dŷ) · (dŷ/dw2) now if we want to compute not the derivative of our loss with respect to w2 but the derivative with respect to w1 we can do the same thing and apply the chain rule recursively the second factor expands one level further through z1 the activation of that first hidden unit giving dJ/dw1 = (dJ/dŷ) · (dŷ/dz1) · (dz1/dw1) so we back propagate this information starting from our loss all the way back through w2 and then recursively applying the chain rule again to get to w1 and this gives us the gradient at both w2 and w1 just to reiterate dJ/dw1 is telling us how a small change in that weight is going to affect our loss so if increasing the weight a small amount increases our loss then we will want to decrease the weight to decrease our loss that's what the gradient tells us which direction we need to step in order to decrease or increase our loss function

now we showed this here for just two weights because our toy network only has two weights but imagine we have a very deep neural network with many layers of hidden units we can just repeat this process of recursively applying the chain rule to determine how every single weight in the model needs to change to impact that loss really all of this boils down to recursively applying this chain rule formulation and that's the backpropagation algorithm in theory it sounds very simple it's just a basic extension of derivatives and the chain rule but now let's touch on some insights from training these networks in practice that make this process much more complicated and explain why using backpropagation as we saw it there is not always so easy

in practice training and optimizing neural networks can be extremely difficult and extremely computationally intensive here's a visualization of what the loss landscape of a real neural network can look like visualized in just two dimensions you can see that the loss is extremely non-convex meaning it has many many local minima and that can make using an algorithm like gradient descent very challenging because gradient descent will step toward the nearest local minimum and can get stuck there so finding the global minimum or at least a really good solution for your neural network can often be very sensitive to your hyperparameters such as where the optimizer starts in this loss landscape if it starts in a bad part of the landscape it can very easily get stuck in one of these local minima now recall the equation we talked about for gradient descent your next weight update is your current weights minus a small amount the learning rate multiplied by the gradient we have the minus sign because we want to step in the opposite direction of the gradient and we scale the gradient by a small number
called eta which is what we call the learning  rate how fast do we want to do the learning now it determines actually not just how fast  to do the learning that's maybe not the best way to say it but it tells us how large should  each step we take in practice be with regards to that gradient so the gradient tells us the  direction but it doesn't necessarily tell us the magnitude of the direction so eta can tell  us actually a scale of how much we want to trust that gradient and step in the direction of that  gradient in practice setting even eta this one parameters one number can be extremely difficult  and i want to give you a quick example of why so if you have a very non-convex loc or lost  landscape where you have local minima if you set the learning rate too low then the model  can get stuck in these local minima it can never escape them because it gets it actually does  optimize itself but it optimizes it to a very sm to a non-optimal minima and it can converge very  slowly as well on the other hand if we increase our learning rate too much then we can actually  overshoot our our minima and actually diverge and and lose control and basically uh explode the  training process completely one of the challenges is actually how to pre how to use stable learning  rates that are large enough to avoid the local minima but small enough so that they don't  diverge and convert or that they don't diverge completely so they're small enough to actually  converge to that global spot once they reach it so how can we actually set this learning  rate well one option which is actually somewhat popular in practice is to actually  just try a lot of different learning rates and that actually works it is a feasible approach  but let's see if we can do something a little bit smarter than that more intelligent what if we  could say instead how can we build an adaptive learning rate that actually looks at its lost  landscape and adapts itself to account for what it sees in the landscape there are actually  many types of optimizers that do exactly this this means that the learning rates are no longer  fixed they can increase or decrease depending on how large the gradient is in that location and how  fast we want and how fast we're actually learning and many other options that could be also with  regards to the size of the weights at that point the magnitudes etc in fact these have been widely  explored and published as part of tensorflow as well and during your labs we encourage each of  you to really try out each of these different types of optimizers and experiment with  their performance in different types of problems so that you can gain very important  intuition about when to use different types of optimizers are what their advantages are and  disadvantages in certain applications as well so let's try and put all of this together so  here we can see a full loop of using tensorflow to define your model on the first line define  your optimizer here you can replace this with any optimizer that you want here i'm just using  stochastic gradient descent like we saw before and feeding it through the model we loop  forever we're doing this forward prediction we predict using our model we compute the  loss with our prediction this is exactly the loss is telling us again how incorrect our  prediction is with respect to the ground truth y we compute the gradient of our loss with respect  to each of the weights in our neural network and finally we apply those gradients using our  optimizer to step and update our 
weights this is really taking everything that we've  learned in the class in the lecture so far and applying it into one one whole  piece of code written in tensorflow so i want to continue this talk and really talk  about tips for training these networks in practice now that we can focus on this very powerful  idea of batching your data into mini batches so before we saw it with gradient descent that  we have the following algorithm this gradient that we saw to compute using back propagation can  be actually very intensive to compute especially if it's computed over your entire training set so  this is a summation over every single data point in the entire data set in most real-life  applications it is simply not feasible to compute this on every single iteration in  your optimization loop alternatively let's consider a different variant of this algorithm  called stochastic gradient descent so instead of computing the gradient over our entire data  set let's just pick a single point compute the gradient of that single point with respect to the  weights and then update all of our weights based on that gradient so this has some advantages this  is very easy to compute because it's only using one data point now it's very fast but it's also  very noisy because it's only from one data point instead there's a middle ground instead of  computing this noisy gradient of a single point let's get a better estimate of our gradient by  using a batch of b data points so now let's pick a batch of b data points and we'll compute the  gradient estimation estimate simply as the average over this batch so since b here is usually not  that large on the order of tens or hundreds of samples this is much much faster to compute than  regular gradient descent and it's also much much more accurate than just purely stochastic gradient  descent that only uses a single example now this increases the gradient accuracy estimation which  also allows us to converge much more smoothly it also means that we can trust our gradient more  than in stochastic gradient descent so that we can actually increase our learning rate a bit more  as well mini-batching also leads to massively parallelizable computation we can split up the  batches on separate workers and separate machines and thus achieve even more parallelization and  speed increases on our gpus now the last topic i want to talk about is that of overfitting this  is also known as the problem of generalization and is one of the most fundamental problems in all  of machine learning and not just deep learning now overfitting like i said is critical to  understand so i really want to make sure that this is a clear concept in everyone's mind ideally  in machine learning we want to learn a model that accurately describes our test data not the  training data even though we're optimizing this model based on the training data what we really  want is for it to perform well on the test data so said differently we want  to build representations that can learn from our training data but  still generalize well to unseen test data now assume you want to build a line to describe  these points underfitting means that the model does simply not have enough capacity to  represent these points so no matter how good we try to fit this model it simply does not  have the capacity to represent this type of data on the far right hand side we can see the  extreme other extreme where here the model is too complex it has too many parameters  and it does not generalize well to new data in the middle 
though we can see what's called  the ideal fit it's not overfitting it's not underfitting but it has a medium number of  parameters and it's able to fit in a generalizable way to the output and is able to generalize well  to brand new data when it sees it at test time now to address this problem let's talk about regularization how can we make sure that our  models do not end up over fit because neural networks do have a ton of parameters how  can we enforce some form of regularization to them now what is regularization regularization  is a technique that constrains our optimization problems such that we can discourage these complex  models from actually being learned and overfit right so again why do we need it we need it so  that our model can generalize to this unseen data set and in neural networks we have many  techniques for actually imposing regularization onto the model one very common technique and very  simple to understand is called dropout this is one of the most popular forms of regularization  in deep learning and it's very simple let's revisit this picture of a neural network this is  a two layered neural network two hidden layers and in dropout during training all we simply  do is randomly set some of the activations here to zero with some probability so what we can do is  let's say we pick our probability to be 50 or 0.5 we can drop randomly for each of the activations  50 of those neurons this is extremely powerful as it lowers the capacity of our neural network so  that they have to learn to perform better on test sets because sometimes on training sets it just  simply cannot rely on some of those parameters so it has to be able to be resilient to  that kind of dropout it also means that they're easier to train because at least on every  forward passive iterations we're training only 50 of the weights and only 50 of the gradients so  that also cuts our uh gradient computation time down in by a factor of two so because now  we only have to compute half the number of neuron gradients now on every iteration we dropped  out on the previous iteration fifty percent of neurons but on the next iteration we're going  to drop out a different set of fifty 50 of the neurons a different set of neurons and this gives  the network it basically forces the network to learn how to take different pathways to get to its  answer and it can't rely on any one pathway too strongly and overfit to that pathway this is a way  to really force it to generalize to this new data the second regularization technique that  we'll talk about is this notion of early stopping and again here the idea is very basic  it's it's basically let's stop training once we realize that our our loss is increasing on a  held out validation or let's call it a test set so when we start training we all know the  definition of overfitting is when our model starts to perform worse on the test set so if we  set aside some of this training data to be quote unquote test data we can monitor how our network  is learning on this data and simply just stop before it has a chance to overfit so on the x-axis  you can see the number of training iterations and on the y-axis you can see the loss that we  get after training that number of iterations so as we continue to train in the beginning both  lines continue to decrease this is as we'd expect and this is excellent since it  means our model is getting stronger eventually though the network's testing  loss plateaus and starts to increase note that the training accuracy will always  
continue to go to go down as long as the network has the capacity to memorize the data and  this pattern continues for the rest of training so it's important here to actually focus on this  point here this is the point where we need to stop training and after this point assuming  that our test set is a valid representation of the true test set the accuracy of the model  will only get worse so we can stop training here take this model and this should be the model that  we actually use when we deploy into the real world anything any model taken from the left hand  side is going to be underfit it's not going to be utilizing the full capacity of the  network and anything taken from the right hand side is over fit and actually performing  worse than it needs to on that held out test set so i'll conclude this lecture by summarizing  three key points that we've covered so far we started about the fundamental building  blocks of neural networks the perceptron we learned about stacking and composing  these perceptrons together to form complex hierarchical neural networks and how to  mathematically optimize these models with back propagation and finally we address the  practical side of these models that you'll find useful for the labs today including adaptive  learning rates batching and regularization so thank you for attending the first  lecture in 6s191 thank you very much
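To make that summary concrete, here is one compact way the pieces from this lecture — a stack of dense layers, a loss, gradients from backpropagation, and a learning-rate-scaled update over mini-batches — could fit together in TensorFlow 2; this is a rough sketch rather than the lab code, and the layer sizes, toy data, batch size, and learning rate are illustrative assumptions:

```python
import tensorflow as tf

# a small fully connected network: two hidden dense layers and one output unit
model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # probability between 0 and 1
])

loss_fn = tf.keras.losses.BinaryCrossentropy()         # cross-entropy for a yes/no output
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)

# toy data standing in for a real dataset (e.g. lectures attended, project hours -> pass/fail)
x_train = tf.random.normal((256, 2))
y_train = tf.cast(tf.reduce_sum(x_train, axis=1, keepdims=True) > 0, tf.float32)

batch_size = 32
dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train)).shuffle(256).batch(batch_size)

for epoch in range(10):
    for x_batch, y_batch in dataset:
        with tf.GradientTape() as tape:
            y_pred = model(x_batch)                    # forward pass through the network
            loss = loss_fn(y_batch, y_pred)            # how wrong the predictions are
        # backpropagation: gradient of the loss with respect to every weight
        grads = tape.gradient(loss, model.trainable_variables)
        # gradient descent step: move each weight opposite its gradient, scaled by the learning rate
        optimizer.apply_gradients(zip(grads, model.trainable_variables))
    print(f"epoch {epoch}: loss = {float(loss):.4f}")
```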
MIT 6.S191 (2020): Generalizable Autonomy for Robot Manipulation
you I wanted to start with this cute little video  what I really study is algorithmic methods to make robot manipulation generalizable and and  why I really like this video is this is how I got inspired to work in robotics this is science  fiction and as a researcher you are always chasing science fiction and in trying to make some of  it reality and and really to think about this if you think of something like this if I were to  have a system like this in my home I would want it to do a variety of things maybe clean cook do  laundry perhaps help me sweep or other stuff and not only that I would probably want it to work  outside of my lap I would want it to work in a variety of settings maybe my home your home or  maybe in settings which are much more complex then you can show it perhaps in the door and and the  idea really is how do we enable such complexity and learning in such sort of general purpose  diversity of skills which interaction in the real world requires and this is where I argue that  a lot of my research agenda lies we are trying to build these systems physical agents particularly  that can really extend our ability and when I use extending augment it's both in cognitive and  physical sense but there's nothing new about that dr. Cox already talked about the the talk  of emergence of AI that happened in 1956 and soon after we were dreaming of robot assistants  in the kitchen this is the first industrial robot I don't know how many of you are aware of  this robot anybody in the room know the name interesting the robot is called unimate  it is as big as the kitchen but this is this is actually 50 years to date in the  time since I would argue people have done variety of tremendous stuff this is not my  work this is boss robotics but this is one particularly good example robot can walk  on ice lift heavy boxes and whatnot cut to this year in CES this is a video from Sony  doing pretty much the same thing this is a concept video of a robot cooking circa  20 2015 2 years after the original video what what is what is very striking is despite  the last 50 years when you put real robots in real world it doesn't really work work it turns  out doing long term planning real time perception in real robots is really hard so what gives  what gives from from 1950 1960 to today I argue that we need algorithms that can generalize to  unstructured settings that a robot is going to encounter both in terms of perception in terms  of dynamics perhaps task definition and I will only given you examples of kitchen but there's  nothing particularly about special about kitchen it this sort of lack of generalization happens in  all sorts of robots application from manufacturing to healthcare to person and service robotics  so this is where I argue that to tackle this problem what we really need is to inject some  sort of structured inductive bias and priors to achieve the generalization in simpler terms  you can really think about this that we need algorithms to learn from specifications of tasks  where specifications can be easy language video or let's say kinesthetic demonstrations and then  we need mechanisms where the system can self practice to generalize to new but similar scenes  but often imitation gets a bad rep imitation is just copying but it actually is not let's  let's think about this very simple example of of scooping some leaves in your yard if you have  a two-year-old looking at you trying to imitate they may just move them up around they're trying  to just basically try to get the 
motion right but nothing really happens so they get what you'd call movement skills as they grow up they can probably do a bit better they get some sort of planning they can do some sort of generalization and some of the tasks actually work and they even grow to a point where they understand that the concept of imitation is really not the motion but the semantics you need to do the task not always exactly the how of it the what matters so they may use a completely different set of tools to do the exact same task and this is precisely what we want in an algorithmic equivalent so today what I'm going to talk about is all three levels of this let's say imitation at the level of control planning and perception and how we can get this kind of generalization through structured priors and inductive biases

so let's start with control I started with these simple skills in the house let's take one of these skills and think about what we can do with it what we are really after is algorithms which would be general so I don't have to code up new algorithms for let's say sweeping versus cleaning or create a completely new setup for let's say cutting and this is precisely where learning-based algorithms come into play but one of the things that is very important let's take the example of cleaning cleaning is something that is very common in the everyday household you can argue wiping is a motion that is required across the board not very many people clean radios though but still the concept is generalization perhaps cleaning a harder stain would require you to push harder you have some sort of reward function where you wipe until it is clean it's just that the classifier of what counts as clean is not really given to you explicitly or maybe you know the context where if you're wiping glass you should not push too hard however you might describe it how do we build algorithms that can get this sort of generalization one way can be this recent wave of what you would call machine learning and reinforcement learning where you get this magical pipeline from image inputs to some sort of torque or action outputs and this has actually done very well in robotics we have seen some very interesting results for long-standing problems being able to open and close doors handling fluids or at least deformable media and this is actually surprising and very impressive but one of the things that stands out is that these methods are very very sample inefficient it may take days if not weeks to do fairly simple things with this kind of specification and even then these methods can be let's say iffy very unstable you change one thing and the whole thing comes shattering down and you have to start all over again now the alternative might be something more classical let's say look at controls you are given a robot model the robot model may be a specification of the dynamics it may include the environment if the task is complicated enough given something like this what would you do you would come up with some sort of task structure T I need to do this particular step maybe go to the table do the wiping and when it is wiped move on this has actually worked for a long time when you have particular tasks but there is a problem generalization is very hard because I need to create this tree for every task and perception is very hard too because I have to build perception for each particular task wiping this table may be very different from wiping the whiteboard because I need to build a classifier to detect when the board is clean

so one of the algorithms we started working on asks can we take advantage of a best-of-both-worlds argument the idea is that reinforcement learning or learning in general allows you to be general-purpose but is very sample inefficient on the contrary model-based methods allow you to rely on priors things you already know about the robot but require you to code up a lot of the task so what we thought is maybe the way to look at this problem is to break it up in a modular way where the action space of the learning is not the robot's joint space but the task space itself so we have a modular method where we learn a policy that takes in images and its output is not at the level of which joint angles to change but rather the pose and velocity of the end effector together with the gains or impedances how hard or how stiff the end effector needs to be why is this important it is important because it enables you to manage different stiffnesses in different stages of the task and this basically obviates the need to create a task structure tree the system can learn when it should be free when it needs to be stiff and in what dimensions it needs to be stiff now the policy is essentially outputting these stiffness parameters which can then be fed into a robot model that we already know the robot model can be nonlinear and rather complicated so why spend learning effort on it and this is the sort of best of both worlds where you use the model however much you can but you keep the environment which is the general part unmodeled

so what benefit does this give you what you see here is image input to the agent and this is the environment behavior we model this as a reinforcement learning problem with a fairly simple objective the objective up top is basically clean up all of the tiles and do not apply any forces that would damage the robot and what you really need to see is that we tested this against a bunch of different action spaces the action space being the prior here that you're using and the only thing you should take away is that at the bottom is image to torque and at the top is impedance control with the variable impedance provided directly through the policy and this is basically the difference between failure and success both in terms of sample efficiency and in the smoothness of control because you're using known mechanisms to do control at high frequency where you can safeguard the system without worrying about what the reinforcement learning algorithm can do to your system interestingly what you can do with this is because you have a decoupled system you can train purely in simulation the robot model in the loop can be a simulation model and you can replace it with a real robot model on the fly you do not need fine-tuning because again the policy outputs end-effector-space commands so you can swap in the real model and even though there might be finite differences in parameters at least in these cases we found that the policy generalizes very well without any sort of let's say
computational loss we did  not have to do simulation based randomization we did not have to do techniques on either  fine-tuning the policy when you do go to the real world so this is kind of zero short transfer  in this case which is pretty interesting basically identifying the right prior enables you to do this  generalization very efficiently and also gets you sample efficiency learning so moving on let's  let's talk about reinforcement learning again we always want reinforcement learning to do any  sort of interesting tasks which can do let's say image to control kind of tasks this is yet again  an example from Google but but often when you want to do very complicated things it is it can be  frustrating because the amount data required on realistic systems can actually be very big so when  tasks get slightly harder the enforcement learning starts to what you would call stutter so you you  want to do longer time longer term things which it can be very hard and interestingly something  that came out last year this is not my work but friends from a company actually showed that even  though reinforcement learning is doing fancy stuff you can code that up in about 20 minutes and you  still do better so what was interesting is that in these cases at least from a practical perspective  you could code up a simple solution much faster than a learn solution which basically gave us  the idea what is going on here what we really want to do is exploration in reinforcement  learning or in these sort of learn systems is very slow often in these cases you already  know part of the solution you know a lot about the problem as as a designer of the system the  system isn't actually working ab initio anyways so the question we were asking is how can human  intuition guide exploration how how can we do simple stuff which can help learning faster so the  intuition here was let's say you are given a task the task requires you some sort of reasoning it is  basically move the block to a particular point if you can reach the block you will move the block  directly if you cannot reach the block then you use the tool so we thought that what we can do  easily is instead of writing a policy it is very easy to write sub parts of the policy basically  specify what you know but don't specify the full solution so provide whatever you can but you don't  actually have to provide the full solution so this basically results in you get a bunch of teachers  which are essentially blackbox controllers most of them are some suboptimal in fact they may be  incomplete so in this case you can basically say that I can provide a teacher which only goes to a  particular point it doesn't know how to solve the task there is no notion or concept of how to  complete the task so you can start with these teachers and then the idea would be you want to  complete you want you still want a full policy that is both faster than the teachers in terms  of learning and a test time doesn't necessarily use the teachers because teacher may have access  to privileged information that a policy may not have but the idea even though it's simple the  specify is actually non-trivial think about this if you have teachers multiple of them some of the  teachers can actually be partial you might need to sequence them together maybe they are partial  and they are not even complete in the sense that there is no single sequence that even will  complete the task because there is no requirement we put in on on these teachers sometimes the  teachers may actually be 
Sometimes the teachers may even be contradictory — we did not say that they are all helpful; they can be adversarial. Independently, each one is useful because it provides information, but when you try to put them together, one might say "go back" and another "move forward," and you can keep using them without making progress on the task. So how do we do this? Let's review some basics in reinforcement learning — I believe you went through a lecture on reinforcement learning. This is an off-policy reinforcement learning algorithm called DDPG. What DDPG does is: you start with some state that the environment generates and you run a policy — the policy is, let's say, the current iterate of your system. When you act with the policy, the environment gives you the next state and the reward, and you put that tuple into a database called the experience replay buffer; this is a standard trick in modern deep reinforcement learning algorithms. Now what you do is sample mini-batches from this constantly updating database, in the same way you would in any deep learning algorithm, to compute two gradients. One gradient updates what you call the critic, which is a value function: it tells you the value of a state, and the value of a state can be thought of as how far the goal is from the current state. The other is the policy gradient — in this particular case something called the deterministic policy gradient, which is where the name comes from: deep deterministic policy gradient. So you have these two gradients, one to update the critic and one to update the policy, and you can apply them in an asynchronous manner, offline — offline in the sense that the data is not being generated by the same policy, so you can have the update process and the rollout process separated; that is why it's called off-policy. Now let's assume you have some RL agent — whether it's DDPG or not doesn't really matter. You have an environment, you get a state, and you can run the policy. But now the problem is that it's not only the policy: you also have a bunch of other teachers giving you advice, so they can all tell you what to do. You have to decide not only how the agent should behave, but also whether to trust a teacher or not. How do you do this? One way is to think about how bandit algorithms work: at any point in time I can think about the value of any of these teachers in a particular state. You can think of an outer loop running on top of the policy learning: if the problem I was solving was selecting which agent to use — my own or one of these teachers — then I just need to know which one will result in the best outcome in the current state. This formalism can be stated as learning a critic, or value function, that chooses which of these actors to pick. Simultaneously, as you run this, the behavioral policy executes that chosen agent, whether it's your own learned agent or one of the teachers. The trick is that regardless of who acts, the teacher or your agent, the data goes back into the replay buffer, and so my agent can still learn from that data. I am essentially running some teachers online, using those supervisors, and using the data to train my agent: whenever a supervisor is useful, the agent will learn from it, and if the supervisor is not useful, then standard reinforcement learning happens.
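(To make the DDPG update concrete, here is a minimal sketch in TensorFlow. It is not the authors' implementation: the network sizes, learning rates, and state/action dimensions are placeholder assumptions, and target networks and exploration noise — which a full DDPG implementation includes — are omitted for brevity.)

```python
import tensorflow as tf

obs_dim, act_dim, gamma = 8, 2, 0.99  # hypothetical dimensions and discount

actor = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(obs_dim,)),
    tf.keras.layers.Dense(act_dim, activation="tanh")])
critic = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(obs_dim + act_dim,)),
    tf.keras.layers.Dense(1)])
actor_opt = tf.keras.optimizers.Adam(1e-4)
critic_opt = tf.keras.optimizers.Adam(1e-3)

def ddpg_update(s, a, r, s_next):
    """One gradient step on a mini-batch sampled from the replay buffer.
    s: (batch, obs_dim), a: (batch, act_dim), r: (batch, 1), s_next: (batch, obs_dim)."""
    # Critic: regress Q(s, a) toward the one-step TD target r + gamma * Q(s', pi(s')).
    target_q = r + gamma * critic(tf.concat([s_next, actor(s_next)], axis=-1))
    with tf.GradientTape() as tape:
        q = critic(tf.concat([s, a], axis=-1))
        critic_loss = tf.reduce_mean(tf.square(q - tf.stop_gradient(target_q)))
    critic_opt.apply_gradients(
        zip(tape.gradient(critic_loss, critic.trainable_variables),
            critic.trainable_variables))
    # Actor: deterministic policy gradient -- adjust the policy so the critic
    # assigns higher value to the actions it outputs.
    with tf.GradientTape() as tape:
        actor_loss = -tf.reduce_mean(critic(tf.concat([s, actor(s)], axis=-1)))
    actor_opt.apply_gradients(
        zip(tape.gradient(actor_loss, actor.trainable_variables),
            actor.trainable_variables))
```

The teacher-selection idea layers on top of this without changing the update: whichever controller acts (the learned agent or a teacher), the resulting tuples land in the same replay buffer and feed the same off-policy update.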
So what does this result in? We go back to the same task and we provide four teachers; they look something like grasp, position, push, and pull, but we do not provide any mechanism for completing the full task. The first question we were asking is: if we give one full teacher, the method should at least be able to copy that teacher, and that's what we find — in terms of baselines we are basically on par, able to at least copy one teacher when the teacher is near-optimal. A more interesting thing happens when you get multiple teachers. With multiple teachers the problem gets a bit more complicated, because you have to decide which one to use, and if you use a suboptimal one you waste time; this is where we see that our method results in sample efficiency. Even more interestingly, if you provide teachers that are incomplete — I only provide teachers for part of the task, and the other part needs to be filled in — this is where all of the other methods kind of fail, but because you are essentially using reinforcement learning together with imitation, you still maintain the sample efficiency. Just taking a breather here: what we really learned from this line of work is that understanding domain-specific action representations — even though they're domain-specific, they're fairly general; manipulation is fairly general in that sense — and using weakly supervised systems, in this case weakly supervised, suboptimal teachers, provides enough structure to promote both sample efficiency in learning and generalization to variations of the task. So let's go back to that setup. We started with low-level skills; let's graduate to somewhat more complicated skills. We started with simple skills like grasping and pushing — what happens when you need to do sequential skills, things you need to reason about for a bit longer? We started by studying a problem which is fairly interesting: let's say you have a task to do, maybe it's sweeping or hammering, and you're given an object, but the identity of the object is not given to you; you are basically handed a random object, whether it's a pen or a bottle. How would you go about this? One way is to look at the task, look at the object, predict some sort of optimal grasp for the object, and then try to predict the optimal task policy. But I can argue that optimally grasping the hammer near its center of mass is suboptimal for both of these tasks. What is more interesting is that you only have to grab the hammer in a manner such that you will still succeed, not optimally, because the gold standard you're really after is task success, not grasping success — nobody grabs things just for the purpose of grabbing them. So how do we go about this problem? You have some input and some task, and the problem is that there can be many ways to grasp an object, and we still need to optimize for the policy. So there is a very large discrete search space where you are grasping objects in different ways; each of those ways results in a policy, and some of those policies will succeed while some will not. But the intuition, or at least the realization, that makes the problem computationally tractable is the fact that whenever the task succeeds, the grasp must have succeeded — but the other way around is not true.
You can grasp the object and still fail at the task, but you can never succeed at the task without grasping the object. This enables us to factorize the value function into two parts: a task-conditioned grasp model and an independent grasp model itself. This factorization lets us build a model with three loss terms: one, are you able to grasp the object; two, whenever you grasp the object, does the task succeed; and three, a standard policy gradient loss. This model can then be jointly trained in simulation, where you have a lot of simulated objects in a simulator trying to do the tasks, and where the reward function is sparse — you're only told whether you succeed at the task or not; there is no other reward signal. At test time, you get some object, a real object that was not seen during training, and you get an RGB-D image of it. You generate a lot of grasp samples, and the interesting part is that you are ranking the grasps based on the task: the ranking is generated by your belief of task success. Given this ranking you can pick a grasp and evaluate the task — you actually go out and do the task — and the errors from that are backpropagated into picking this ranking. The way this problem is set up, you can generalize to arbitrary new objects, because nothing about the object category is given to you. In this particular case you are seeing simulation results for the hammering task and the pushing task, and we evaluated against a couple of baselines: a very simple grasping baseline, then a two-stage pipeline where you optimally grasp the object and then optimally try to do the task, and then our method where you jointly optimize the whole system. What we find is that end-to-end optimization basically gets you more than double the performance. And there's nothing special about simulation here: because we are using depth images as input, we can go directly to the real world without any fine-tuning. In that case the robot is doing the same task but in the real world, and pretty much the same performance trend holds — we are still much more than double the performance of, let's say, a two-stage pipeline where you optimally grasp the object.
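(Here is a sketch of the ranking logic just described, with the two learned models stubbed out as callables — their architectures are not specified here, so the stubs, shapes, and names are placeholders, not the actual system.)

```python
import numpy as np

def rank_grasps_by_task(grasp_candidates, depth_image, task_id,
                        grasp_success_model, task_success_model):
    """Rank sampled grasps by predicted task success.

    Uses the factorization from above: P(task succeeds) =
    P(task succeeds | grasp succeeds) * P(grasp succeeds), since the task
    can never succeed without a successful grasp. Both models are assumed
    to be learned networks returning probabilities in [0, 1].
    """
    scores = []
    for g in grasp_candidates:
        p_grasp = grasp_success_model(depth_image, g)            # P(grasp ok)
        p_task = task_success_model(depth_image, g, task_id)     # P(task ok | grasp ok)
        scores.append(p_grasp * p_task)
    order = np.argsort(scores)[::-1]                             # best grasp first
    return [grasp_candidates[i] for i in order]

# Hypothetical usage with stubbed-out models and fake data:
rng = np.random.default_rng(0)
grasps = [rng.uniform(-1, 1, size=6) for _ in range(16)]   # e.g. 6-DoF grasp poses
depth = rng.uniform(0, 1, size=(128, 128))                 # fake depth image
stub = lambda *args: float(rng.uniform())                  # placeholder scores
best_grasp = rank_grasps_by_task(grasps, depth, task_id=0,
                                 grasp_success_model=stub,
                                 task_success_model=stub)[0]
```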
Moving forward in the same setup, we wanted to ask the question: can we do more interesting sequential tasks that require reasoning? As Dr. Clarke was mentioning earlier, can we do something that requires discrete and continuous planning simultaneously? Think of settings where the task is to roll the pin toward you, but if you keep rolling, the pin will roll off the table, hence you need a support, and you may have objects that are blocking the task — and there can be variants of this sort of setup. What it requires you to do is both discrete and continuous reasoning: the discrete reasoning is which object in the scene to push, and the continuous reasoning is how much to push it by, or what the actual mechanism of control is. So the question we are asking is: can a robot efficiently learn to perform these sorts of multi-step tasks under various physical and semantic constraints? These are the kinds of things people use to test, say, animal intelligence behaviors. We attempted to study this question in a simple manipulation setting where the robot is asked to move a particular object to a particular position, and the interesting thing is that there are constraints. In this setup the constraint can be that the object may only move along a given path, in this case along the gray tiles, and there can be other objects along the way. In the presence of an obstacle, multiple decisions need to be made: you cannot just push the can to the yellow square; you actually need to push the other object out of the way first, and only then can you do greedy decision making. So you have to think about this at different time scales. Now, you would argue that something like this can be done with a model-based approach: you learn a model of the dynamics of the system, roll the model out, and use it to come up with some sort of optimal action sequence. And one would argue that in recent times we have seen a number of papers showing that such a model can be learned in pure image space — you are basically doing some sort of pushing purely in image space. So the question we were asking is: since this is such a general solution, operating directly on vision, it seems natural that these sorts of models would handle it. We were really surprised that, even though the solution is fairly general — and there's nothing new needed from the perspective of the solution, it's basically learning a model and then doing optimal control — these particular classes of models do not scale to more complicated setups. You cannot ask these complicated questions requiring hybrid reasoning of these simple geometric models. The reason is that to learn a complicated model that can do long-term planning or long-term prediction, the amount of data you need scales super-linearly; to do something like this would require many, many robots and many, many months of data, and even then we do not know if it would work. On the contrary, the insight we had is that there is a hierarchical nature to this action space: there are long-term symbolic effects, rather than the raw action space, and then there is local motion. If you can learn both of these things simultaneously, then perhaps you can generalize to an action sequence that achieves this reasoning task. So what we propose is a latent variable model where you learn both long-term effects, what we call effect codes, and local motions.
What this does is essentially this: the long-term planner doesn't really tell you how to get to the airport; it only tells you what the milestones would be on the way to the airport. Once you have that, the local planner can tell you how to get to each of these milestones as you go along. Think of it like this: you can sample from a meta-dynamics model, which generates multiple trajectories, multiple ways to get to the airport, and you select one of those depending on your cost function. Given the sequence of subtasks, you can then generate actions in a way that gives you a distribution of actions for going forward, say from milestone to milestone, and you check the validity of any of those action sequences against a learned low-level dynamics model — you are basically asking, is the action sequence generated by this module going to be valid based on the data I have seen so far? — and then you can weight these action sequences based on cost functions. Essentially, you are training a model of the dynamics at multiple levels, but you are training all of this purely in simulation without any task labels; you are not actually trying to go to the airport, you are basically just pushing things around. The other thing is that you do not get labels for the milestones, which is equivalent to saying you don't get labels for the latent variables — the motion codes and effect codes are essentially latent — so you set this up as a variational inference problem. The modules you see at the bottom are used to infer the latent codes without explicit supervision.
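(A rough sketch of this two-level, sampling-based planning loop is below. The models are stubbed out as callables and the latent dimensions, sample counts, and selection-by-minimum-cost scheme are my own simplifying assumptions — a sketch of the idea, not the actual planner.)

```python
import numpy as np

def hierarchical_plan(state, goal, meta_dynamics, low_dynamics, action_decoder,
                      cost_fn, n_effect_samples=64, n_motion_samples=64):
    """Two-level sampling-based planner (sketch).

    meta_dynamics(state, effect_code) -> predicted subgoal ("milestone")
    action_decoder(state, subgoal, motion_code) -> candidate action sequence
    low_dynamics(state, actions) -> predicted resulting state
    All three are assumed to be learned models passed in as callables.
    """
    # High level: sample effect codes, keep the milestone with the lowest cost to the goal.
    effect_codes = np.random.randn(n_effect_samples, 8)
    subgoals = [meta_dynamics(state, z) for z in effect_codes]
    subgoal = min(subgoals, key=lambda sg: cost_fn(sg, goal))
    # Low level: sample motion codes, decode actions, and keep the sequence whose
    # predicted outcome (under the learned low-level dynamics) best reaches the milestone.
    motion_codes = np.random.randn(n_motion_samples, 8)
    candidates = [action_decoder(state, subgoal, z) for z in motion_codes]
    best_actions = min(candidates,
                       key=lambda acts: cost_fn(low_dynamics(state, acts), subgoal))
    return subgoal, best_actions

# Hypothetical usage with the learned models replaced by trivial stubs:
plan = hierarchical_plan(np.zeros(2), np.ones(2),
                         meta_dynamics=lambda s, z: s + z[:2] * 0.01,
                         low_dynamics=lambda s, a: s + 0.01,
                         action_decoder=lambda s, g, z: z,
                         cost_fn=lambda x, y: float(np.linalg.norm(x - y)))
```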
So the overall setup looks something like this: you have a robot, the robot's input image is parsed into object-centric representations, this representation is passed into the planner, the planner outputs a sequence of states that can be fed into the system, and you can loop through it. I gave you the example of a simple task of moving an object, but we did other tasks as well, where you are trying to move an object to a particular goal in a field of obstacles, or trying to clear a space where multiple objects all need to be pushed out. What we found, comparing this model with a bunch of baselines that used simpler models, is that having the more structured, hierarchical model works better, especially when you have a dense reward function; but when you have sparse reward functions — meaning you only get reward when the task completes, with no intermediate reward — the performance gap is even bigger. And again, the way this is set up, you can go to a real-world system without any fine-tuning and pretty much get the same performance; the trick, again, is that the input is depth images. Just to give you qualitative examples of how the system works: in this case the Jell-O box needs to go to the yellow bin and there are multiple obstacles along the way, and the system is doing discrete and continuous planning simultaneously without us having to design separate discrete and continuous models or any task-specific models. The interesting thing is that there is only a single model learned for all of these tasks; it is not separate per task. In yet another example, the system takes a longer path rather than pushing greedily straight through to get the bag of chips, and in this case the system figures out that it needs to create a path by pushing some other object out of the way. In both of these projects, what we learned is that the power of self-supervision in robotics is very strong: you can build compositional priors with latent variable models using pure self-supervision. Both in the hammering task and in this case, we had models trained on purely self-supervised data in simulated setups, and we were able to get real-world performance out of them. Moving on, the next thing I wanted to study was what happens when tasks grow a bit more complex. We leave the simple, say two-stage, tasks behind: what happens when tasks are graph-structured, when you actually have to reason about longer tasks, say a Towers of Hanoi problem? For these problems RL would clearly be much harder, and even imitation starts to fail, because specifying an imitation target for a very long multi-stage task, whether it's building Legos or block worlds, is actually very hard. What we really want is meta imitation learning. Meta imitation learning can be thought of as follows: you have an environment, the environment is bounded, but it can end up in many final states, each of which can be thought of as a task. You get many examples of reconfigurations of that environment — these can be thought of as example tasks seen in the training distribution — and at test time you are given a specification of one final task, most likely a new one, and you still need to be able to do it. How do we do these kinds of tasks with current solutions? The way we do this right now is to write programs. These programs enable you to reason about long-term tasks even at a very granular scale — this is how you would code up a robot to put two blocks on top of each other — but if you want to do the task slightly differently, you need to write a new program. This gave us the idea that perhaps, instead of doing reinforcement learning, we can pose this as a program induction problem, or neural program induction. It's essentially reducing a reinforcement learning problem, or decision-making problem, to a supervised learning problem, albeit in a very large space. You get an input video and a meta-learning model which takes the current state and outputs what the next program should be — not only the next program, but of course also the arguments that you need to pass using that API. It's essentially equivalent to asking: if you give the robot an API, can the system use that API by itself? What you need is a dataset of demos — video demonstrations — and, say, a planner that tells you which sub-programs were called in that execution, and the loss looks very much like a supervised learning loss, where you have a prediction and you compare it with the planner's ground truth.
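(The recursive program-execution idea is sketched below. To keep the control flow concrete, the learned predictor is replaced here by a hard-coded expansion table and a logging stub for the robot API; in the actual approach a neural network predicts the next sub-program and its arguments from the demo video and the current state, trained against planner traces. All names below are illustrative assumptions.)

```python
PRIMITIVES = {"move_to", "grip", "release"}      # assumed robot API calls

# Stand-in for the learned predictor: program -> ordered sub-programs.
EXPANSION = {
    "block_stacking": ["pick_and_place"],
    "pick_and_place": ["pick", "place"],
    "pick":  ["move_to", "grip"],
    "place": ["move_to", "release"],
}

def robot_api(primitive, args, state):
    """Placeholder for the real robot API; here it just logs the call."""
    print(f"API call: {primitive}({args})")
    return state

def execute(program, args, task_spec, state):
    """Recursively expand programs until an API-level primitive is reached."""
    if program in PRIMITIVES:
        return robot_api(program, args, state)
    for sub_program in EXPANSION[program]:       # a trained model would predict
        state = execute(sub_program, args, task_spec, state)  # these one by one
    return state

execute("block_stacking", args={"block": "A", "target": "B"},
        task_spec=None, state={})
```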
So what does this look like? You can really think of it in a high-level, neuro-symbolic manner. You start with something like block stacking; block stacking unpacks to pick-and-place; pick-and-place can unpack to pick; and once you have unpacked down to pick, the robot actually executes the API-level command. As the API-level command executes, the executor returns to pick, moves forward with the pick, and then actually does the pick itself. Once the pick is complete, pick-and-place moves forward to the place aspect: it goes on to grab the object and place it, in sequence, and once place is complete, the executor goes back up through pick-and-place to block stacking, and you can keep going. This was just an example of one pick-and-place, but it can continue to multiple blocks — we tested with over 40 of these examples (sorry the videos didn't play well). So what does this enable? You can now input the specification of the task through, say, a VR execution — what you see in the inset — and then the robot can look at that video and try to do the task. What is important to understand about this sequence of block executions is that the system is not just parsing the video — that would be easy — the system is actually creating a policy out of this sequence. One way to test this is with an adversarial human in the loop trying to break the model: if you have done the task halfway through and the world is stochastic — the world gets set back — the system should not keep doing the same thing it saw; it should be a reactive policy, so it is actually state-dependent. In terms of numbers, what we find is that a flat policy, or a deep-RL-style policy, does not work on test tasks, but this sort of neural task programming works very well. And it even works with vision: you have pure visual input, with no specification of where the objects are, so you get generalization with visual input without any particular domain-specific visual design. But again, none of this works perfectly — it would not be robotics if it did. What often fails is this: because we are using an API, if the API doesn't declare when a failure happens — say you're trying to grab something but the grab action did not succeed — the high-level planner does not know and just continues. So we went back to the model and asked what is actually causing it to fail, and we found that even though we used programs as the output and were able to inject structure there, the model itself was still a black box; it was basically an LSTM. We thought perhaps we could open the black box and actually put in a compositional prior. What does this compositional prior look like? Think of graph neural networks: a graph can capture the idea of executing the task in a planner, say a PDDL-style planner, where nodes are states, edges are actions, and you plan through them. This can still result in a two-stage model, where you learn the graph of the task itself rather than having a black-box LSTM predict it. But there is a problem: in these kinds of setups, the number of states can be combinatorial — millions, maybe — while the number of actions is finite. So the key to learning this graph in a neural sense was to realize that the graph should not be in that form but should instead be a conjugate graph. The conjugate graph flips the model: nodes are now actions, and edges are states. You can really think of it as the nodes telling you what to do and the edges being pre- and post-conditions.
How does this model work? You have an observation model which tells you, in any particular state, which action was executed; each action tells you what state you end up in, which in turn tells you what the next action to do would be. Because this graph is learned, you are basically getting the policy for free, and the training is very similar to the program induction, except you do not need full execution traces down to the lowest level of actions — weaker supervision is sufficient. What that gives us is stronger generalization, in both videos and states, with much less supervision: we have fewer data points, or weaker supervision, but we get better generalization. So the big-picture key insight, again, is that compositional priors, such as neural programs or neural task graphs, give us the modular structure needed to achieve one-shot generalization in these long-horizon sequential plans. In the one or two minutes that I have left, I want to leave you with this question. Often we look at robotics as sort of the ultimate challenge for AI, and we compare the performance of robotics with that of our colleagues in vision and language, where a lot of progress has been made. But if you notice, one of the things that happens is that the datasets get very small very quickly in robotics, while to do more interesting, more complicated tasks we still need datasets large enough to leverage the power of these algorithms. If you look at robotics in recent times, the datasets have been essentially minuscule — about 30 minutes of data, something that a single person could collect. This is a chart of large datasets in robotics, and it is not very old; this is from CoRL 2019. Just to compare with NLP and vision, we are about three orders of magnitude off. So we were asking: why is this the problem? The problem is that both vision and language have Mechanical Turk — they can get a lot of labels — but in robotics, labeling doesn't work; you actually have to show the robot what to do. So we spent a lot of time building a system that is very similar to Mechanical Turk; we call it RoboTurk. You can use essentially commodity devices, like a phone, to get large-scale datasets of actual demonstrations, full 3D demonstrations, and this now enables us to collect data both on real systems and in simulated systems at scale. You can be wherever you want and collect data from crowdsourced workers at very large scale. We ran some pilots and were able to collect hundreds of hours of data; just to give you a sense of how this compares, the prior benchmark was 13 hours of data collected in about seven months, and we were able to collect about 140 hours of data in six days. The next question would be: why is this data useful? We ran reinforcement learning, and what we find is that if you do pure RL with no demonstration data, even after three days of running on multiple machines you get no progress, but as you keep injecting data into the system, the performance keeps improving — so there is real value in collecting data. The take-home lesson is that more data, with structure and semantic supervision, can fuel robot learning on increasingly complex tasks, and scalable crowdsourcing methods such as RoboTurk really enable us to access this treasure trove.
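(The mechanism being described — demonstrations and agent experience flowing into the same buffer that the off-policy learner samples from — can be sketched very simply. The data format, class names, and sampling scheme below are assumptions for illustration, not the RoboTurk pipeline itself.)

```python
import random
from collections import deque

class ReplayBuffer:
    """A single buffer holding both demonstration and agent-generated experience."""
    def __init__(self, capacity=100_000):
        self.storage = deque(maxlen=capacity)

    def add(self, state, action, reward, next_state):
        self.storage.append((state, action, reward, next_state))

    def sample(self, batch_size):
        return random.sample(self.storage, min(batch_size, len(self.storage)))

def seed_with_demonstrations(buffer, demos):
    """Inject crowdsourced (e.g., teleoperated) trajectories so the off-policy
    learner can exploit them immediately, before its own rollouts are useful."""
    for trajectory in demos:
        for (s, a, r, s_next) in trajectory:
            buffer.add(s, a, r, s_next)

# Hypothetical usage with fake demonstration data:
buffer = ReplayBuffer()
fake_demos = [[((0.0,), (1.0,), 0.0, (0.1,)), ((0.1,), (1.0,), 1.0, (0.2,))]]
seed_with_demonstrations(buffer, fake_demos)
batch = buffer.sample(batch_size=2)   # mixed demo + agent data feeds the RL update
```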
Going back, what I really want to leave you with is this: we talked about a variety of methods at different levels of abstraction, from control to planning to perception, and then we talked about how to collect data. If there is one thing I want to leave you with today, it is that if you want to do learning in complex tasks and complex domains such as robotics, it is very important to understand the value of injecting structured priors and inductive biases into your models — generic models from deep learning that have worked elsewhere may or may not work for you; that is one. Two: the use of modular components and the modularization of your problem, where you combine domain-dependent expertise with data-driven learning, can enable you to build practical systems for much more diverse and complex applications. With that, I would like to thank you all for being such a patient audience, and I am happy to take questions.
MIT_6S191_Introduction_to_Deep_Learning
MIT_6S191_2020_Convolutional_Neural_Networks.txt
hi everyone and welcome back to MIT 6.S191. Today we're going to be talking about one of my favorite topics in this course, and that's how we can give machines a sense of vision. Now, vision, I think, is one of the most important senses that humans possess. Sighted people rely on vision every single day, for things like navigation and manipulation — how you can pick up objects, how you can recognize objects, recognize complex human emotions and behaviors — and I think it's very safe to say that vision is a huge part of human life. Today we're going to be learning about how deep learning can build powerful computer vision systems capable of solving extraordinarily complex tasks that maybe just 15 years ago would not even have been possible to solve. One example of how deep learning is transforming computer vision is facial recognition. On the top left you can see an icon of the human eye, which visually represents vision coming into a deep neural network in the form of images, pixels, or video, and on the output at the bottom you can see a depiction of a detected human face. But this could also be recognizing different human faces, or even emotions on the face, recognizing key facial features, and so on. Deep learning has transformed this field specifically because it means that the creator of this AI does not need to tailor the algorithm specifically toward facial detection; instead they can provide lots and lots of data to the algorithm and swap out this end piece — instead of facial detection, they can swap in many other detection or recognition types — and the neural network can try to learn to solve that task. So, for example, we can replace the facial detection task with the detection of diseased regions in the retina of the eye, and similar techniques could also be applied throughout healthcare toward the detection and classification of many different types of diseases in biology and so on. Another common example is in the context of self-driving cars, where we take an image as input and try to learn an autonomous control system for that car. This is entirely end-to-end: we have vision and pixels coming in as input and the actuation of the car coming out as output. This is radically different from how the vast majority of autonomous car companies operate — if you look at companies like Waymo and Tesla, this end-to-end approach is radically different. We'll talk more about this later on, but this is actually one of the autonomous vehicles that we build here as part of my lab at CSAIL, so that's why I'm bringing it up. Now that we've gotten a sense, at a very high level, of some of the computer vision tasks that we as humans solve every single day and that we can also train machines to solve, the next natural question to ask is: how can computers see? Specifically, how does a computer process an image or a video — how does it process the pixels coming from those images? Well, to a computer, images are just numbers. Suppose we have this picture here of Abraham Lincoln; it's made up of pixels. Each of these pixels, since it's a grayscale image, can be represented by a single number, and we can represent our image as a two-dimensional matrix of numbers, one for each pixel in the image. That's how a computer sees this image: it sees just a two-dimensional matrix of numbers. Now, if we have an RGB image — a color image instead of a grayscale image — we can simply represent it as three of these two-dimensional matrices stacked on top of each other: one for the red channel, one for the green channel, one for the blue channel, and that's RGB.
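(To make those representations concrete, here is a tiny sketch in NumPy; the image sizes and random values are arbitrary placeholders.)

```python
import numpy as np

# A grayscale image is just a 2D matrix of brightness values,
# e.g. a hypothetical 28x28 image with values in [0, 1].
grayscale = np.random.uniform(0.0, 1.0, size=(28, 28))
print(grayscale.shape)   # (28, 28): height x width

# An RGB color image stacks three such matrices, one per channel.
rgb = np.random.uniform(0.0, 1.0, size=(28, 28, 3))
print(rgb.shape)         # (28, 28, 3): height x width x channels
print(rgb[0, 0])         # the (R, G, B) values of the top-left pixel
```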
Now we have a way to represent images to computers, and we can think about what types of computer vision tasks this allows us to solve and what we can perform given this foundation. Two common types of machine learning that we actually saw in lectures 1 and 2 yesterday are classification and regression: in regression, our output takes a continuous value, and in classification, our output takes a class label. Let's first start with classification, and specifically the problem of image classification. We want to predict a single label for each image. For example, we have a bunch of US presidents here, and we want to build a classification pipeline to determine which president is in the image we're looking at, outputting the probability that this image is each of those US presidents. In order to correctly classify this image, our pipeline needs to be able to tell what is unique about a picture of Lincoln versus what is unique about a picture of Washington versus a picture of Obama; it needs to understand the unique differences among those classes — those features. Another way to think about this image classification pipeline, at a high level, is in terms of features that are characteristic of a particular class: classification is done by detecting these types of features in that class. If you detect enough features specific to a class, then you can probably say with pretty high confidence that you're looking at that class. One way to solve this problem is to leverage knowledge about your field, your domain knowledge. Suppose we're dealing with human faces: we can use our knowledge about human faces to say that if we want to detect faces, we can first detect noses, eyes, ears, and mouths, and once we have a detection pipeline for those, we can detect those features and then determine if we're looking at a human face or not. Now, there's a big problem with that approach, and it's that preliminary detection pipeline: how do we detect those noses, ears, and mouths? This hierarchy is kind of our bottleneck. And remember that these images are just three-dimensional arrays of numbers — actually, three-dimensional arrays of brightness values — and images can hold tons of variation: there are occlusions we have to deal with, variations in illumination, and even intra-class variation. When we're building our classification pipeline we need to be invariant to all of these variations within a class, while remaining sensitive to the variation between classes. Even though our pipeline could use the features that we as humans define, the manual extraction of those features is where this really breaks down. Due to the incredible variability of image data, the detection of these features is super difficult in practice, and manually defining and extracting these features can be extremely brittle. So how can we do better? That's really the question we want to tackle today. One way is to extract these visual features and detect their presence in the image simultaneously, and in a hierarchical fashion.
For that, we can use neural networks like we saw in lectures one and two, and our approach here is going to be to learn the visual features directly from data, and to learn a hierarchy of these features, so that we can reconstruct a representation of what makes up our final class label. Now that we have that foundation of how images work, we can move on to asking how we can learn those visual features, specifically with a certain type of operation in neural networks — neural networks will allow us to learn those features directly from visual data if we construct them cleverly and correctly. In lecture one we learned about fully connected, or dense, neural networks, where you can have multiple hidden layers and each hidden layer is densely connected to its previous layer — and densely connected, let me just remind you, means that every input is connected to every output in that layer. Let's say we wanted to use these densely connected networks for image classification. What that would mean is that we take our two-dimensional image — it has a two-dimensional spatial structure — collapse it down into a one-dimensional vector, and then feed that through our dense network, so every pixel in that one-dimensional vector feeds into the next layer. You should already appreciate that all of the two-dimensional structure in that image is completely gone: by collapsing the two-dimensional image into one dimension we've lost all of that very useful spatial structure in our image and all of the domain knowledge we could have used a priori. Additionally, we're going to have a ton of parameters in this network, because it's densely connected: we're connecting every single pixel in our input to every single neuron in our hidden layer. So this is not really feasible in practice, and instead we need to ask how we can build some spatial structure into neural networks, so we can be a little more clever in our learning process and tackle this specific type of input in a more reasonable and well-behaved way. We're also drawing on prior knowledge that we have — specifically, that spatial structure is super important in image data. To do this, let's first represent our two-dimensional image as an array of pixel values, just as it naturally is to start with. One way that we can keep and maintain that spatial structure is by connecting patches of the input to a single neuron in the hidden layer. So instead of connecting every input pixel of our input image to a single neuron in the hidden layer, as with dense neural networks, we're going to connect just a single patch — a very small patch — and notice here that only a region of that input image influences this single neuron at the hidden layer. To define connections across the entire input, we can apply the same principle of connecting patches in the input layer to single neurons in the subsequent layer. We do this by simply sliding that patch window across the input image, and in this case we're sliding it by two units each time. In this way we maintain all of that spatial structure, that spatial information, inherent to our image input. But we also remember that the final task we really want to do here, as I told you, is to learn visual features, and we can do this very simply by weighting the connections in the patches.
So for each of the patches, instead of just connecting them uniformly to our hidden layer, we're going to weight each of those pixels and apply a technique similar to what we saw in lab 1: we can basically take a weighted summation of all of the pixels in that patch, and that feeds into the next hidden unit in our hidden layer to detect a particular feature. In practice this operation is simply called convolution, which gives rise to the name convolutional neural network, which we'll get to later on. Let's think about this at a high level first, and suppose that we have a 4x4 filter, which means that we have 16 different weights. We are going to apply the same filter to 4x4 patches across the entire input image, and we'll use the result of that operation to define the state of the neurons in the next hidden layer. We basically shift this patch across the image — shifting it, for example, in units of two pixels each time to grab the next patch — and we repeat the convolution operation. That's how we can start to think about extracting features from our input. But you're probably wondering how this convolution operation actually relates to feature extraction. So far we've just defined the sliding operation, where we slide a patch over the input, but we haven't really talked about how that allows us to extract features from the image itself. So let's make this concrete by walking through an example. Suppose we want to classify X's from a set of black and white images — here black is represented by -1 and white is represented by 1. Now, to classify X's, clearly we're not going to be able to just compare these two matrices directly, because there's too much variation between examples of the class. We want to be invariant to certain types of deformation of the images — scale, shift, rotation — we want to be able to handle all of that, so we can't just compare the two as they are. Instead, we want our model to compare these images of X's piece by piece, or patch by patch, and the important pieces that it's looking for are the features. If our model can find rough feature matches across these two images, then we can say with pretty high confidence that they're probably coming from the same class: if they share a lot of the same visual features, then they're probably representing the same object. Now, each feature is like a mini-image — each of these patches is also a two-dimensional array of numbers — and we'll use these filters, let me call them now, to pick up on the features common to X's. In the case of X's, filters representing diagonal lines and crosses are probably the most important things to look for, and you can see those on the top row here. We can capture these features in terms of the arms and the main body of the X — the arms, the legs, and the body capture all of the features we show here — and note that the smaller matrices are the filters of weights: these are the actual values of the weights that correspond to that patch as we slide it across the image. Now all that's left to do is really just define the convolution operation itself, and tell you, when you slide that patch over the image, what operation takes that patch on top of the image and produces the next output at the hidden neuron layer. Convolution preserves the spatial structure between pixels by learning the image features in these small squares, or small patches, of the input data.
To do this, the entire computation is as follows. We first place the filter on top of our input image over a patch of the same size — here we're placing this filter on the top-left part of the image, in green, on the X — and we perform an element-wise multiplication: for every pixel of the image where the filter overlaps, we element-wise multiply by the corresponding pixel in the filter. The result, which you can see on the right, is a matrix of all ones, because there's perfect overlap between our filter and our image at that patch location. The only thing left to do is sum up all of those numbers, and when you sum them up you get nine, and that's the output at the next layer. Now let's go through one more example, a little more slowly, so you can appreciate what this convolution operation is intuitively telling us — that's mathematically how it's done, but let's see intuitively what it shows us. Suppose we want to compute the convolution of this 5x5 image, in green, with this 3x3 filter, or 3x3 patch. To do this we need to cover the entire image by sliding the filter over it, performing that element-wise multiplication, and adding up the output for each patch. This is what that looks like: first we place the yellow filter on the top-left corner, we element-wise multiply and add all of the outputs, we get four, and we place that four in the first entry of our output matrix — this is called the feature map. We can continue this: slide the 3x3 filter over the image, element-wise multiply, add up all the numbers, and place the next result in the next column, which is three, and we just keep repeating this operation over and over. And that's it: the feature map on the right reflects where in the image there is activation by this particular filter. Take a look at this filter — you can see it is an X, or a cross: it has ones on both diagonals. And in the image you can see that it's being maximally activated along the main diagonal, where the fours appear, so this is showing that there is maximum overlap between this filter and the image along that central diagonal. Now let's take a quick look at how different types of filters — changing the weights in your filter — can produce different feature maps, or different outputs. Simply by changing the weights in your filter, you can change what your filter is looking for, what it will activate on. Take, for example, this image of the woman Lenna on the left — that's the original image. If you slide different filters over this image you get different output feature maps: for example, you can sharpen the image with the filter shown in the second column, you can detect edges with the third column's filter, and you can detect even stronger edges with the fourth column's. These are ways that changing the weights in your filter can really impact the features you detect. So now I hope you can appreciate how convolution allows us to capitalize on spatial structure and use sets of weights to extract local features within images, and how, very easily, we can detect different features simply by changing our weights and using different filters.
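(Here is a minimal NumPy sketch of that sliding multiply-and-sum operation — stride 1, no padding, which is a simplification relative to what real frameworks offer. The example image and cross-shaped filter are illustrative values in the spirit of the example above.)

```python
import numpy as np

def convolve2d(image, kernel):
    """Slide `kernel` over `image`; at each location, element-wise multiply the
    overlapping patch with the kernel and sum the result (stride 1, no padding)."""
    ih, iw = image.shape
    kh, kw = kernel.shape
    oh, ow = ih - kh + 1, iw - kw + 1
    feature_map = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            patch = image[i:i + kh, j:j + kw]
            feature_map[i, j] = np.sum(patch * kernel)
    return feature_map

image = np.array([[1, 1, 1, 0, 0],
                  [0, 1, 1, 1, 0],
                  [0, 0, 1, 1, 1],
                  [0, 0, 1, 1, 0],
                  [0, 1, 1, 0, 0]], dtype=float)   # a 5x5 binary image
kernel = np.array([[1, 0, 1],
                   [0, 1, 0],
                   [1, 0, 1]], dtype=float)        # a 3x3 cross-shaped filter
print(convolve2d(image, kernel))                    # a 3x3 feature map
```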
Now, these concepts of preserving spatial information and spatial structure, while also doing local feature extraction using the convolution operation, are at the core of the neural networks we use for computer vision tasks. Now that we've gotten convolutions under our belt, we can start to think about how we can utilize them to build full convolutional neural networks for solving computer vision tasks, and these networks are very appropriately named convolutional neural networks because the backbone of them is the convolution operation. We'll take a look first at a CNN architecture designed for image classification tasks, and we'll see how the convolution operation can feed into spatial downsampling operations, so that we can build the full thing end to end. First, let's consider a very simple CNN for image classification. Here the goal is to learn features directly from data and to use these learned feature maps for classification of the images. There are three main parts to a CNN that I want to talk about now. The first part is the convolutions, which we talked about before; these are for extracting the features in your image, or in your previous layer in a more general sense. The second step is applying your non-linearity — and again, as we saw in lectures 1 and 2, non-linearities allow us to deal with non-linear data and introduce complexity into our learning pipeline so that we can solve more complex tasks. And finally, the third step, which is what I was alluding to before, is the pooling operation, which allows you to downsample the spatial resolution of your image and deal with multiple scales of that image, or multiple scales of the features within that image. The last point I want to make here is that the computation of class scores — let's suppose we're dealing with image classification — can be output using a dense layer at the end, after your convolutional layers, so you can have a dense layer which represents the probabilities of the image belonging to each class, and that can be your final output in this case. Now we'll go through each of these operations and break these ideas down a little further, so we can see the basic architecture of a CNN and how you can implement one as well. Going through this step by step, the first step is the convolution operation, and as before, this is the same story we've been going through: each neuron in the hidden layer will compute a weighted sum of its inputs from its patch, apply a bias, like in lectures one and two, and activate with a local non-linearity. What's special here is the local connectivity that I just want to keep stressing: each neuron in that hidden layer only sees a patch from the original input image, and that's what's really important. We can define the actual computation for a neuron in the hidden layer: its inputs are the neurons in the patch from the previous layer, we apply a matrix of weights — that's the filter, a 4x4 filter in this case — do an element-wise multiplication, add the results, apply a bias, and activate with the non-linearity, and that's it: that's our single neuron at the hidden layer, and we just keep repeating this by sliding the patch over the input. Remember that the element-wise multiplication and addition here is simply the convolution operation we talked about earlier — I'm not saying anything new except for the addition of that bias term before the non-linearity. So this defines how the neurons in convolutional layers are connected.
But within a single convolutional layer we can have multiple different filters, or multiple different features, that we might want to extract or detect. The output of a convolutional layer is therefore not a single image but rather a volume of images, one per filter that it applies. Here D, the depth, is the number of filters, or the number of features, that you want to detect in that image, and that's set by the human: when you define your network, you define, at every layer, how many features you want to detect at that layer. We can also think about the connections of a neuron in a convolutional neural network in terms of its receptive field — the locations in the input that that specific node is connected to. These parameters define the spatial arrangement of the output of the convolutional layer. To summarize, we can see how the connections of these convolutional layers are defined, and how the output of a convolutional layer is a volume defined by that depth, the number of filters we want to learn. With this information, that defines our single convolutional layer, and we're now well on our way to defining the full convolutional neural network. The remaining steps are kind of just icing on the cake at this point, and it starts with applying that non-linearity: on that volume we apply an element-wise non-linearity, and in this case I'm showing a rectified linear unit (ReLU) activation function. This is very similar in spirit to lectures 1 and 2, where we also applied non-linearities to deal with highly non-linear data. The ReLU activation function, which we haven't talked about yet, is just an activation function that takes as input any real number and essentially sets everything less than zero to zero, while anything greater than zero it keeps the same. Another way to think about it is that it makes sure the minimum of whatever you feed in is zero: if the input is greater than zero it doesn't touch it, and if it's less than zero it caps it at zero. The next key idea of convolutional neural networks is pooling, and that's how we can deal with different spatial resolutions and become invariant to spatial scale in our image. The pooling operation is used to reduce the dimensionality of our layers, and this can be done after any convolutional layer: you can apply a convolutional layer to your input image, apply a non-linearity, and then downsample using a pooling layer to get a different spatial resolution before applying your next convolutional layer, and repeat this process for many layers in a deep neural network. A common technique for pooling is called max pooling, and the idea is as follows: you slide another window, or patch, over your feature map, and for each patch you simply take the maximum value in that patch. Let's say we're dealing with 2x2 patches — in this case, the red patch you can see on the top right — we simply take the maximum value in that red patch, which is six, and the output is on the right-hand side: that six is the maximum from this 2x2 patch, and we repeat this over the entire image. This allows us to shrink the spatial dimensions of our image while still maintaining all of that spatial structure.
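(Both of these operations are small enough to sketch directly in NumPy; the example feature-map values below are arbitrary.)

```python
import numpy as np

def relu(x):
    """Element-wise ReLU: negative values go to zero, positive values pass through."""
    return np.maximum(x, 0.0)

def max_pool2d(x, pool=2):
    """Non-overlapping max pooling: take the max of each pool x pool patch."""
    h, w = x.shape
    h, w = h - h % pool, w - w % pool                      # drop any ragged border
    x = x[:h, :w].reshape(h // pool, pool, w // pool, pool)
    return x.max(axis=(1, 3))

feature_map = np.array([[ 1., -2.,  3.,  0.],
                        [ 4.,  5., -1.,  2.],
                        [-3.,  0.,  6.,  1.],
                        [ 2.,  1.,  0., -4.]])
activated = relu(feature_map)          # negatives clamped to zero
print(max_pool2d(activated))           # 2x2 output: the max of each 2x2 patch
```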
Actually, this is a great point, because I encourage all of you to think about what other ways you could perform a pooling operation — how else could you downsample these images? Max pooling is one way, where you take the maximum of these 2x2 patches, but there are a lot of other really clever ways as well, so it's interesting to think about other ways we could potentially perform this downsampling operation. Now, with all of this knowledge, we're ready to put these pieces together and build end-to-end networks. We have the three main steps I talked about before — convolution, local non-linearities, and pooling operations — and with CNNs we can layer these operations to learn a hierarchy of features that we want to detect, whether or not they're present in the image data. For a CNN built for image classification — I'm showing the first part of that CNN here on the left — we can break it down roughly into two parts. The first part is feature learning: that's where we want to extract and learn the features from our image data. This is simply applying the same idea I showed you before: we stack convolutions and non-linearities with pooling operations and repeat this throughout the depth of our network. The next step for our convolutional neural network is to take those extracted, learned features and classify our image. The ultimate goal here is not just to extract features — we want to extract features and then use them to make some classification, or some decision, based on our image. So we can feed these output features into a fully connected, or dense, layer, and that dense layer can output a probability distribution over the image's membership in different categories or classes. We do this using a function called softmax, which you actually already got some experience with in lab 1, whose output represents this categorical distribution. So now let's put this all together and code our first end-to-end convolutional neural network from scratch. We'll start by defining our feature-extraction head, which starts with a convolutional layer, here shown with 32 filters — 32 is the number of filters we want to learn inside this first convolutional layer. We downsample the spatial information using a max pooling operation, like I discussed earlier, and next we feed this into the next set of convolutional layers in our network: now, instead of 32 features, we're going to be extracting even more features, 64 of them. Finally, we can flatten all of the spatial features that we've learned into a vector and learn our final probability distribution over class membership, and that allows us to actually classify the image into one of the different classes we've defined.
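(Here is a sketch of that kind of model in TensorFlow/Keras. The layer counts match what was just described, but the kernel sizes, input resolution, number of classes, and optimizer choice are assumptions for illustration rather than the exact values on the slide.)

```python
import tensorflow as tf

n_classes = 10  # assumed number of output classes

model = tf.keras.Sequential([
    # Feature learning: convolution + ReLU, then spatial downsampling.
    tf.keras.layers.Conv2D(32, kernel_size=3, activation="relu",
                           input_shape=(64, 64, 3)),
    tf.keras.layers.MaxPool2D(pool_size=2),
    tf.keras.layers.Conv2D(64, kernel_size=3, activation="relu"),
    tf.keras.layers.MaxPool2D(pool_size=2),
    # Classification: flatten the learned feature maps and predict a
    # probability distribution over classes with softmax.
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```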
So far we've talked only about using CNNs for image classification tasks; in reality this architecture extends to many, many different types of tasks and applications. When we consider CNNs for classification, we saw that they have two main parts: first the feature learning part, and then the classification part, the second part of the pipeline. What makes a convolutional neural network so powerful is that you can take this feature-extraction part of the pipeline and, at the output, attach whatever kind of head you want: you can treat the convolutional feature extractor simply as that — a feature extractor — and then plug in whatever other type of neural network you want at its output. You can do detection by changing the output head, you can do semantic segmentation, where you want to predict a semantic class for every pixel in your image, and you can also do end-to-end robotic control, like we saw with the car driving before. So what's an example of this? We've seen a significant impact of computer vision in medicine and healthcare over the last couple of years. Just a couple of weeks ago, actually, a paper came out where deep learning models were applied to the analysis of mammograms for breast cancer detection. What was shown is that CNNs were able to significantly outperform expert radiologists at detecting breast cancer directly from these mammogram images. That's done by feeding the images through a convolutional feature extractor, outputting those learned features to dense layers, and then performing classification based on those dense layers. Instead of predicting a single number — breast cancer or no breast cancer — you could also imagine predicting, for every pixel in that image, the class of that pixel. Here we're showing a picture of two cows on the left; those are fed into a convolutional feature extractor and then upscaled through an inverse-convolutional decoder to predict, for every pixel in the image, the class of that pixel. You can see that the network is able to correctly classify that it sees two cows, in brown, whereas the grass is in green and the sky is in blue. This is basically detection, but not as a single yes/no answer over the whole image — is there a cow or not — but per pixel: what is the class of this pixel? This is a much harder problem, and the output is created using upsampling operations, so this is no longer a dense neural network at the end; instead we have inverse, or what are called transpose, convolutions, which scale our image data back up and allow us to predict images as outputs, not just single numbers or single probability distributions. Of course, this idea can very easily be applied to many other applications in healthcare as well, especially for segmenting various types of cancers — here we're showing brain tumors on the top, as well as parts of the blood that are infected with malaria on the bottom. Let's see one final example before ending this lecture, going back to the example of self-driving cars; the idea is again pretty similar. Let's say we want to learn a neural network to control a self-driving car and learn autonomous navigation. Specifically, we want our model to go from images of the road, maybe from a camera attached to the top of the car — think of the actual pixels coming from this camera that are fed to the neural network — and, in addition to the pixels coming from the camera, we also have images from a bird's-eye street view of where the car roughly is in the world. We can feed in both of those images; these are just two two-dimensional arrays of pixels, and they represent different things: one represents your perception of the world around you, and the other represents roughly where you are in the world globally. What we want to do with this is directly predict, or infer, a full distribution of possible control actions that the car could take at that instant — if it doesn't have any goal destination in mind, it could say, "I could take any of these three directions and steer in those directions," and that's what we want to predict with this network. One way to do this is to train your neural network to take as input these camera images coming from the car, pass each of them through convolutional encoders, or feature extractors, and then, now that you've learned features for each of those images, concatenate them all together, so you have a global set of features across all of your sensor data, and then learn your control outputs from those, on the right-hand side.
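(A sketch of a two-branch architecture of this flavor is below, in Keras. The input resolutions, layer sizes, and especially the output head — a simple softmax over discretized steering angles, standing in for the richer continuous distribution described here — are my own simplifying assumptions, not the actual model.)

```python
import tensorflow as tf

def conv_encoder(name):
    """A small convolutional feature extractor (sizes are placeholders)."""
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 5, strides=2, activation="relu"),
        tf.keras.layers.Conv2D(64, 3, strides=2, activation="relu"),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),
    ], name=name)

camera = tf.keras.Input(shape=(64, 128, 3), name="camera_pixels")
street_map = tf.keras.Input(shape=(64, 64, 3), name="birdseye_map")

# Separate encoders for perception (camera) and rough global location (map);
# their features are concatenated into one joint representation.
features = tf.keras.layers.Concatenate()(
    [conv_encoder("camera_enc")(camera), conv_encoder("map_enc")(street_map)])

# Output a distribution over possible steering commands.
hidden = tf.keras.layers.Dense(64, activation="relu")(features)
steering_dist = tf.keras.layers.Dense(21, activation="softmax")(hidden)

model = tf.keras.Model(inputs=[camera, street_map], outputs=steering_dist)
model.summary()
```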
so let's see one final example before ending this lecture here we're going back to the example of self-driving cars and the idea is pretty similar let's say we want to train a neural network for autonomous navigation specifically we want to use our model to go from images of the road maybe from a camera attached to the top of the car so you can think of the actual pixels coming from this camera being fed to the neural network and in addition to the pixels from the camera we also have an image of a bird's-eye street view of roughly where the car is in the world we can feed both of those images in they are just two two-dimensional arrays of pixels one represents your perception of the world around you and the other represents roughly where you are in the world globally and what we want to do with this is to directly predict or infer a full distribution of possible control actions that the car could take at that instant so if it doesn't have any goal destination in mind it could say that it could take any of these three directions and steer in those directions and that's what we want to predict with this network one way to do this is to train your neural network to take as input these camera images coming from the car pass each of them through convolutional encoders or feature extractors and now that you've learned features for each of those images you can concatenate them all together so you have a global set of features across all of your sensor data and then learn your control outputs from those on the right-hand side now again this is done entirely end to end we never told the car what a lane marker was what a road was how to turn right or left or what an intersection is we never gave it any of that information but it's able to learn all of this and extract those features from scratch just by watching a lot of human driving data and learn how to drive on its own so here's an example of how a human can enter the car and input a desired destination which you can see on the top right the red line indicates where we want the car to go on the map so think of this like Google Maps you plug in where you want to go and then the convolutional neural network will output the control commands given what it sees on the road to actually actuate the vehicle towards that destination note here that the vehicle is able to successfully navigate through those intersections even though it has never driven in this area before it has never seen these roads and we never even told it what an intersection was it learned all of this from data using convolutional neural networks now the impact of CNNs has been very wide reaching beyond the examples that I've given today and it has touched so many different fields of computer vision ranging across robotics medicine and many many other fields I'd like to conclude by taking a look at what we've covered in today's lecture we first considered the origins of computer vision how images are represented as brightness values to a computer and how convolution operations work in practice then we discussed the basic architecture building up from convolution operations to convolutional layers and then to convolutional neural networks and finally we talked about the extensions and applications of convolutional neural networks how we can visualize a little bit of their behavior and how we can actually actuate the real world with them either by interpreting parts of medical scans or by enabling robots to interact with humans in the real world and that's it for the CNN lecture on computer vision next up we'll hear from Ava on deep generative modeling thank you
MIT_6S191_Introduction_to_Deep_Learning
MIT_6S191_2022_Reinforcement_Learning.txt
okay hi everyone and welcome back to 6s191 today is a really exciting day because we'll learn about how we can marry two very long-standing fields reinforcement learning and recent advances in deep learning and the topic of today's lecture will be how we can combine those two fields into what's called deep reinforcement learning now this field is really amazing to me because it moves away from the paradigm that we've been seeing in this class so far and that paradigm is one where we have a machine learning or deep learning model that is trained on a fixed data set a data set that we go out and collect and label and then train our model on in rl or reinforcement learning the deep learning model is not going to be learning on some fixed static data set instead our algorithm is going to be placed in some dynamic environment and it's going to be able to explore and interact with that environment in different ways so that it can try out different actions and experiences and learn how to best accomplish its task in that environment without any form of human supervision fixed annotations or human guidance all that has to be defined is some objective that the algorithm should try to optimize for now this has huge obvious implications in many different fields ranging from robotics to self-driving cars and robot manipulation but also in this new and emerging field of gameplay and building strategies within games for solving and optimizing how an agent or a player in the game can try to beat other human players you can even imagine a combination of robotics and gameplay where you train robots to play against humans in the real world choosing from millions and millions of possibilities here's an example that you may have already seen in the past of a deepmind algorithm that was trained to play the game of starcraft and the algorithm's name was alphastar here you're seeing it competing against some of the top human players this was a huge endeavor by the algorithm's creators and a huge deal when it came out so let's just watch this video for a little bit "to be that good everything that we did was proper it was calculated and it was done well i thought i'm learning something it's much better than i expected it i would consider myself a good player" that was a professional starcraft player competing against the deep learning algorithm alphastar a player who came in at the beginning of the video extremely confident that they would not only win but win convincingly alphastar ended up defeating the human five to zero so this is really an extraordinary achievement and we keep seeing these types of achievements especially in the field of gameplay and strategy and i think the first thing i want to do as part of this lecture is take a step back and introduce how reinforcement learning and how the algorithm that you saw in the last slide was trained in the context of everything else we've learned in this course so we've seen a bunch of different types of models so far in this course and we've also seen several different types of training algorithms but how does reinforcement learning compare to
those algorithms so in the beginning of this class we saw what was called supervised learning this is a setting where we have a data set of x as our input and y as our output or the label for that input and our goal is to learn some functional mapping that goes from x to y so for example we could give a neural network this image of an apple and the goal of the network is to label this image and say this thing is an apple that's an example of a supervised learning problem if we collect a bunch of labeled images of apples we can train that type of model in a supervised way now in yesterday's lecture we also uncovered a new type of learning algorithm called unsupervised learning here it's different from supervised learning because now we only have access to our data x there are no labels and our only goal is to uncover some underlying structure within the data so here for example we can observe a bunch of different pictures of apples we don't need to say that they are apples and maybe there are other images of oranges we don't need to give those labels either we can just give all of the images to our algorithm and the goal of an unsupervised learning algorithm is simply to identify that this one picture of an apple is pretty similar to this other one it doesn't know that it's an apple it just knows that these things share similar features now in reinforcement learning we are given data only in the form of what are called state action pairs states are the observations that an agent or a player sees and actions are the behavior that the agent takes in those states the goal of reinforcement learning is to learn how to maximize some metric of reward or future rewards over many time steps into the future so in the apple example that we've been keeping at the bottom of the slide the agent doesn't know that this thing is an apple it just learns that it should eat this thing because when it eats it it gets to survive longer it's a healthy food that helps keep you alive it doesn't understand anything about what it is or even the fact that it's food but by eating the apple it received a reward over time it was able to become healthier and stay alive longer so it learns that that's an action it should take when it finds itself in a state presented with an apple now rl or reinforcement learning this third panel is going to be the focus of our lecture today so before we go any deeper i want to make sure that you understand all of the new terminology associated with the reinforcement learning field because a lot of the terminology here is very intuitive but it's a little different from what we've seen in the class so far so i want to walk through this step by step and make sure all of it is really clear from the foundation up the first and most important part of a reinforcement learning system is the agent an agent is anything that takes actions for example it could be a drone that's making a delivery or it could be super mario navigating through a video game the algorithm that you have is essentially your agent so for example in life you are the agent you live life and you take actions which makes you the agent the next main component of the
system is the environment the environment is simply the world in which the agent exists and takes actions the place where it moves and lives the agent can send commands to the environment in the form of actions so it can take steps in the environment with actions and we can say that capital a is the set of all possible actions that this agent could take in this environment an action is almost self-explanatory but it should be noted that agents often choose among a discrete set of actions so for example in this case we have an agent that can choose between moving forward backward left or right you could also imagine cases where there isn't a fixed number of actions maybe the action is represented using some continuous function so it's a continuous action space and we're going to talk about those types of problems as well in today's lecture but just for simplicity we can start by considering a finite space of actions now when the agent takes actions in an environment the environment will send back observations an observation is simply how the agent perceives its environment and you can see that the environment is sending back those observations in the form of what are called states a state is just the concrete and immediate situation that the agent finds itself in at this moment in time t so it takes some action at time t and it gets some state back at time t plus one and in addition to getting the state back we also get what's called a reward back from our environment a reward is simply feedback think of it as a number by which we can measure the success of an agent's actions so for example in a video game when mario touches a gold coin he wins points those are examples of rewards given to mario when he touches that coin now from any given state an agent sends outputs in the form of actions to the environment and the environment responds with those states and those rewards rewards can be immediate but there are also cases where rewards may be delayed you may not get your reward right away you may only see it later in the future and those delayed rewards still evaluate your agent's actions just in a delayed way now let's dig into this reward part a little bit more we can also define the total reward that the agent is going to see which is just the sum of all rewards that an agent gets after a certain time t so here capital r of t denotes this total reward or what's also called the return and it can be expanded to look like this it's the reward at time t plus the reward at time t plus one and so on all the way to infinity so this is the total reward that the agent is going to collect from here on out now it's often useful to consider not just this total sum of rewards but also what we call a discounted total reward or a discounted sum of future rewards here the discount factor is this gamma parameter so now instead of just adding up all of the rewards we multiply each one by a discount factor which is just a number that we choose and that number is chosen to dampen the effect of future rewards on the agent's choice of action now why would we want to do this the discounting factor is designed to make future rewards worth less than current rewards so it
enforces some form of short-term or greedy behavior in the agent and this is actually a very natural way to think about rewards so let's suppose i offered you a reward of five dollars for taking this class today or a reward of five dollars in five years from now you still take the class today but you get the reward either today or in five years which reward would you prefer they're both five dollars but you would prefer the reward today because you have some internal discounting factor for future rewards that makes them less valuable to you the discount factor is simply multiplied by future rewards as they are discovered by the agent as it moves through the environment and takes actions and like i said it effectively dampens those rewards' effect on the agent's choice of action now finally there's a very important function in reinforcement learning and it's going to be the main focus of the first half of today's lecture this function is called the q function so let's look at how we can define the q function from what we've learned in the previous slides and all of the terminology we've built up thus far the q function is simply a function that takes as input two things the current state that the agent is in and the action that it executes in that state so it takes as input the observation that the agent sees and the action it's going to take in response to that observation and the output of the q function is the expected total future sum of rewards that the agent can receive after that point given that it took that action in that particular state so if we took a really good action in this state our q function should return a very high expected total future reward if we took a bad action in this state our q function should reflect that and return a very poor or penalized future reward now here's the question suppose you are given this let's say magical q function don't worry for the moment about how you get it let's say i just hand you a black box function that you can feed two things into the state and the action and it gives you back this expected future return on rewards now if we are the agent in this environment how can we choose what action to take if we have access to this magical q function let me ask this question to all of you and i hope you can take a second to reflect on it you're given this function that takes a state and an action and evaluates how good that action is in your current state how can you use that function to determine what the best action is well ultimately what we need to create is a policy let's call it pi a policy is something that takes as input the state and the goal of the policy is to output the best possible action that could be executed in this state so think of a policy as just another function it takes as input your state and outputs what you should do in that state that's the ultimate goal that's what we want to compute we want to find what action to take now that we're in this state so how can we use our q function to create that policy function
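Written out compactly in standard notation, the quantities described above might look like this (a sketch of the usual definitions, matching the verbal description rather than copied from the slides):

```latex
% Total return collected from time t onward
R_t = \sum_{i=t}^{\infty} r_i = r_t + r_{t+1} + r_{t+2} + \cdots

% Discounted return, with discount factor 0 < \gamma < 1
R_t = \sum_{i=t}^{\infty} \gamma^{\,i-t}\, r_i = r_t + \gamma\, r_{t+1} + \gamma^{2} r_{t+2} + \cdots

% Q-function: expected total (discounted) future return from taking
% action a_t in state s_t
Q(s_t, a_t) = \mathbb{E}\left[\, R_t \,\middle|\, s_t, a_t \,\right]
```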
well one strategy and this is exactly the right strategy is to define your policy to choose the action that will maximize your q function that is the action that maximizes your future rewards how do you do that well you have some finite list or array of possible actions you can feed each action into your q function along with your current state and each time you feed in an action the q function tells you how good that action would be and we just want to find the maximum over all of those different future returns so by taking the argmax we identify the action that yields the greatest return on future rewards possible from this current state so we can define our optimal policy which we'll call pi star evaluated at state s to be the argmax over actions of the q value at this state that is the action that results in the maximum q value now in this lecture we're going to focus primarily on two forms of reinforcement learning algorithms that can broadly be divided into two categories the first is where we try to actually learn this q function and then use it in exactly the way i just described so assuming we have the q function we can solve the reinforcement learning problem just by taking the argmax over our q function but then the question is how do we get the q function previously i said i'd give it to you magically but in practice you'll need to learn it so how can we use reinforcement learning and deep learning to learn that q function that defines what we call a value learning algorithm the second class of algorithms are what we call policy learning algorithms because they try to directly learn the policy that governs the agent and then sample actions from that policy so you can think of policy learning as a much more direct way of modeling the problem instead of finding a q function and then maximizing it you directly find your policy function use the neural network to optimize or identify that policy function from data and then sample actions from it so first let's look at value learning build up our intuition and then in the second half of today's lecture we'll extend to policy learning let's start by digging deeper into the q function since the q function is the foundational basis of value learning the first thing i'll introduce is the atari breakout game which you can see here on the left the objective of this game is that you have this paddle on the bottom the paddle can move left or right and that's the agent you also have this ball that at this moment is coming towards the agent now the objective of the agent the paddle is to move in ways that reflect the ball and hit it back towards the top of the screen and every time the ball hits the top of the screen it breaks off some of those colored blocks at the top and that's why we call this game breakout because you're essentially trying to break out as many of those top pieces as possible you lose the game when the ball passes the paddle
and that's when the game is finished so you've got to keep hitting the ball up and up until you break out all of the blocks if you miss the ball you lose the game so the q function essentially tells us the expected total return that we can expect given a certain state and action pair and the first point i want to convey to all of you is that it can sometimes be extremely challenging even for humans to define what is a good state action pair and what is a bad one i'm going to show two examples here a and b these are two examples of states paired with actions you can see that in state a the action of the agent is to do nothing as the ball comes towards it and it will simply bounce the ball back towards the top of the board while in state b the ball is coming in towards the side and the agent is out of the ball's path right now but it can rush in at the last second and hit the ball at an angle between these two state action pairs which do you think will have the higher expected future return on rewards maybe enter your thoughts in the chat and let's see what you think i'll say that the first time i looked at this i was really drawn to state action pair a because it is a very conservative action to take and i thought this would be the state action pair with the higher return on rewards we can actually look at a policy that behaves in the manner of this agent so now i'm going to play a video on the left-hand side which shows this strategy where the agent is constantly trying to get under the ball and hit it back up towards the middle of the screen let's watch this agent in practice you can see it is successfully breaking off these colored blocks at the top of the screen so it is winning the game but it's doing so rather slowly it's breaking off all of the blocks in the middle because its strategy is conservative hitting the middle of the screen now let's consider strategy b from a different agent where the agent may even purposely move away from the ball just so it can come back and hit it at an angle what does this agent look like here's an example where you can see the agent is really targeting the edges of the screen why because the second it attacks the edges it's able to break off a ton of the blocks from the top of the screen just by sending the ball through a small channel it creates in the side of the screen so this is a very desirable policy for the model but it's not a very intuitive policy that humans would necessarily think of that you need to attack those edges to unlock this almost cheat-code behavior where you can start knocking out all of the blocks from the top of the screen so we can now see that if we know the q function then we can directly use it to determine the best action to take at any moment in time or in any state of our environment now the question is how can we train a neural network or a deep learning model to learn that q function we've already answered the second part of this problem given a q function how to take an action but now we need to answer the first part which is how do we even learn that q function in the first place
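Before that question gets answered, here is a minimal sketch of the half of the problem already covered: using a given Q function to pick an action by taking the argmax. The `q_network` name and the assumption that it maps a state to one Q value per discrete action are illustrative, not part of the lecture.

```python
import numpy as np

def select_action(q_network, state):
    """Greedy policy: pick the action with the largest predicted Q value.

    q_network is assumed to be any callable (e.g. a trained neural network)
    that maps a batch of states to a batch of Q-value vectors, one entry
    per discrete action.
    """
    q_values = q_network(state[np.newaxis, ...])   # shape: (1, num_actions)
    return int(np.argmax(q_values[0]))             # pi*(s) = argmax_a Q(s, a)
```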
well there's two different ways we could do this one approach is very similar to the formulation of the function itself the q function takes as input a state and an action and it outputs a value so we can define a neural network to do exactly the same thing a network with convolutional layers that takes the image input defining the state just the pixels of our game at this moment in time and simultaneously we feed in the action the agent should take in this state for example move towards the right and we train the network to output the corresponding q value that's one option for building this system with a deep neural network and i'll talk about the actual loss function a little later but first i want to share a second type of method and let you debate a little bit which one might be better or more efficient instead of inputting both the state and the action we input only the state and we learn to output the q value for every possible action so imagine again that we have a finite set of actions let's say there are k actions we can have our neural network output k different q values each one corresponding to taking action one through action k now this is often much more convenient and efficient than the option shown on the left-hand side why because when we want to evaluate the best action we only need to run our network once given a state we feed in our state extract all of the q values simultaneously and then just find the one with the maximum q value and that's the action we want to take let's say we find that q of s and a two is the highest of all k q values then action a two is the one we ultimately pick and execute in that state whereas with the formulation on the left-hand side we'd have to feed in each action independently so to find the best action we'd have to run the neural network k times and propagate the information forward k times now the point i want to get to is this question of how we can actually train this q value network and hopefully this is a question you've been posing in your minds thus far because we've talked about how to use the q value and how to structure your neural network in terms of inputs and outputs but we haven't yet talked about how to actually train that network so let's think about the best case scenario how an agent would perform in the ideal case what would happen if we took all of the best actions well this would mean that the target return would be maximized and this can serve as our ground truth to train the agent so to train the agent we will try to match this maximal target return to do this we first formulate our expected return as if we were going to take all of the best actions from this point onwards so we pick some action now and then after that point we pick all of the best actions so think optimistically
i'm going to take some action now and then all of the best actions in the future what would that look like it would be my initial reward at time t that i get right now by taking this current action so i take some action and my environment immediately tells me what my reward is i can hold that in memory as ground truth for my training data and then i select the action that maximizes the expected return for the next future state and of course we want to apply the discounting factor as well so this is our target now let's think about estimating the prediction value q of s a this is the q value given our current state and action pair the expected total return given our state and our action and that is exactly what our network is going to predict for every one of our different actions so how can we use these two terms to formulate a loss function that will train our neural network and provide an objective we can backpropagate through this is known as the q loss and this is how we can train these deep neural networks we predict some value by passing our state and action through the network that's the predicted value and our target value is obtained by observing what our reward actually was at time t we take the action and we get a tangible reward back from our environment that we can hold in memory and we combine that with the discounted expected total future return and that's our target value the loss function is then simply a matter of minimizing the divergence between these two we subtract them and take some norm over the difference like a mean squared error and that's our q loss so we train our predictions to match our ground truth targets as closely as possible
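As a rough sketch, the second network architecture (state in, one Q value per action out) together with this Q loss could be written as follows in TensorFlow/Keras. The input shape, layer sizes, and learning rate are assumptions for illustration, and practical deep Q learning adds machinery omitted here, such as experience replay and a separate target network.

```python
import tensorflow as tf

num_actions = 3  # e.g. left, stay, right in Breakout

# Q network: state in, one Q value per discrete action out
q_network = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 8, strides=4, activation='relu',
                           input_shape=(84, 84, 4)),
    tf.keras.layers.Conv2D(64, 4, strides=2, activation='relu'),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(256, activation='relu'),
    tf.keras.layers.Dense(num_actions),          # Q(s, a) for each action
])

optimizer = tf.keras.optimizers.Adam(1e-4)
gamma = 0.99  # discount factor

def q_learning_step(state, action, reward, next_state, done):
    """One gradient step on the Q loss for a batch of transitions."""
    # target = r + gamma * max_a' Q(s', a'); no bootstrap term if the episode ended
    next_q = q_network(next_state)
    target = reward + gamma * tf.reduce_max(next_q, axis=1) * (1.0 - done)

    with tf.GradientTape() as tape:
        q_values = q_network(state)                                # (batch, num_actions)
        action_mask = tf.one_hot(action, num_actions)
        predicted = tf.reduce_sum(q_values * action_mask, axis=1)  # Q(s, a) actually taken
        loss = tf.reduce_mean(tf.square(target - predicted))       # mean squared error

    grads = tape.gradient(loss, q_network.trainable_variables)
    optimizer.apply_gradients(zip(grads, q_network.trainable_variables))
    return loss
```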
great so now let's summarize this whole process because i've thrown a lot at you and see how it all comes together into a solid reinforcement learning algorithm that first tries to learn the q function first we have our deep neural network that takes as input our state at time t and outputs the q values for let's say three possible actions in this case there are three actions our breakout agent can take it can go left go right or stay in the middle and take no action so for each of these three actions there's one output each output corresponding to the q value or expected return of taking that action in this state so here for example the actions are left stay and right and the q values are 20 for left three for stay and zero for right and this makes sense because we can see that the ball is moving towards the left and the paddle is already a bit towards the right so the paddle is definitely going to need to move left otherwise it has no chance of hitting the ball back into play and continuing the game so our neural network outputs these three q values and we compute our policy the best action to take in this state by simply looking at the three q values and picking the highest in this case it's 20 which corresponds to the q value of action number one the action of going left and that's the action our agent should execute because it had the highest q value so the agent steps towards the left that action gets fed back to our environment and the environment responds with a new state that new state is again fed to our deep neural network and the whole process starts over now deepmind showed how these networks which are called deep q networks could be applied to solving a variety of different atari games not just breakout just by providing the state and oftentimes the state was in the form of pixels only on the left-hand side you can see how these pixels are provided to the network passed through a series of convolutional layers like we learned about yesterday extracting features from the images of the current state passing those to fully connected layers and then predicting the q value for every single possible action the agent could take at this moment in time so here the agent has a bunch of different actions it could execute all on the right side and the network outputs the q value for each of those possible actions now they tested this on many many games and showed that on over fifty percent of them just by applying this very intuitive algorithm where the agent steps through the environment tries out actions and maximizes its own reward this technique of reinforcement learning was able to surpass human level performance there were certain games shown on the right-hand side that were more challenging but given how simple clean and elegant this algorithm is it's still amazing to me that it works at all now there are several downsides to q learning and i want to talk about those because they motivate the next part of today's class the first downside is complexity in q learning we can only model scenarios where the action space is discrete and small the reason is that our network has to output a q value for each action so the number of outputs has to be fixed we can't have our neural network output a variable number of outputs and it also has to be relatively small we can't easily have extremely large or infinite action spaces and that's related to the other downside in this basic version of q learning we can't easily handle continuous action spaces there have been extensions of q learning that can now handle continuous action spaces but in the foundational version we typically can only handle a discrete fixed set of actions that the agent can take at any moment in time the other issue is flexibility our policy is determined deterministically from the q function we have some q function that our network outputs and we simply
pick the action that has the maximum q value so that's a deterministic operation and it cannot easily handle stochastic settings where our environment is itself stochastic and may produce different outcomes in the future so to address both of these issues we're going to consider in the second part of today's lecture a different class of reinforcement learning algorithms called policy gradient methods just as a recap we already saw what value learning was where we first try to learn the q function and then extract actions by maximizing that q function now in policy learning we're going to directly learn the policy that governs our action taking if we have that policy function a function that takes as input a state and outputs an action we can simply sample from that function given a state and it will return an action so let's first revisit the q function a q neural network takes as input a state and outputs the expected return you can expect to receive for each of the possible actions in policy learning we're not going to predict q values instead we directly optimize for pi of s our policy at state s which is the policy distribution you can think of it as directly governing how the agent should act when it finds itself in this state so the outputs here give us the desired action in a significantly more direct way the outputs now represent not an expected future reward but the probability that each action is a good action to take in this state it's a much more direct way of thinking about the problem for example what's the probability that this action will give us the maximum reward so let's say we train this policy model it takes a state as input and it outputs three different probabilities for example the probability that going left is the best action is ninety percent the probability for staying is ten percent and the probability for going right is zero we aggregate these into pi to define our policy function and then to compute the action we should take we sample from this policy function keep in mind that even though the sample we drew here is action a this is a probability distribution every time we sample from pi of s ninety percent of the time we'll get action a because our weight on that action is ninety percent but ten percent of the time we'll get a different action because this is a stochastic distribution and because it is a distribution all of the outputs of our neural network must add up to one in order for this to be a valid and well-formed probability distribution
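A minimal sketch of the discrete policy head just described: a softmax over three actions, with the action drawn by sampling from that categorical distribution rather than taking an argmax. The input shape and layer sizes are illustrative assumptions.

```python
import tensorflow as tf

num_actions = 3  # left, stay, right

# policy network: state in, probability for each discrete action out
policy_network = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, strides=2, activation='relu',
                           input_shape=(84, 84, 4)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(num_actions, activation='softmax'),  # pi(a | s), sums to 1
])

def sample_action(state):
    """Sample an action from pi(a | s) instead of deterministically picking one."""
    probs = policy_network(state[tf.newaxis, ...])          # shape: (1, num_actions)
    action = tf.random.categorical(tf.math.log(probs), 1)   # draw one sample
    return int(action[0, 0])
```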
now what are some advantages of this type of formulation in comparison to q learning well the first is that we can now handle continuous action spaces not just situations where our actions are fixed and predefined instead of saying my actions are go left go right or stay we can have a continuous spectrum of actions which is actually an infinite set of actions ranging from i want to go really really fast to the right to really really slowly to the right to really really slowly to the left or really really fast to the left a full spectrum of speeds along this axis so instead of asking which direction should i move which is a kind of classification problem now i want to ask how fast should i move and when we plot the probability over this action space the likelihood that any action will return positive rewards we can see that the distribution has some mass over the entire number line not just at a few discrete predefined categories now let's look at how we can actually model these continuous actions with policy gradient methods instead of outputting a probability for every possible action which isn't possible when there are infinitely many actions in a continuous space let's assume that our output is a set of parameters that define some continuous distribution for example instead of outputting the mass at an infinite number of places along the number line we can have our network output a mean and a variance that define a normal distribution describing how the mass is spread over the number line so for example in this image we can see the paddle needs to move towards the left and if we plot the density of the distribution that the network predicts it has mean negative one and variance 0.5 and we can see it places a lot of mass on the side corresponding to moving fast to the left when we sample from this distribution we might get an action saying we should travel left at a speed of 0.8 let's say meters per second or whatever the units may be and again every time we sample from this distribution we might see a slightly different answer because it is a probability distribution and just like before the same rules apply the mass underneath this entire density function has to integrate to one because it's a continuous space we now use integrals instead of discrete summations but the story is the same great so now let's look at how policy gradients work in a concrete example let's revisit the reinforcement learning training loop that we saw earlier in today's lecture and think about how we could train in this toy problem an autonomous vehicle to drive using reinforcement learning and policy gradient algorithms the agent here is a vehicle and its goal is to drive as long as possible without crashing its states or observations come from a camera and maybe other sensors like lidar and it can take actions in the form of the steering wheel angle it wants to execute so it decides the angle the steering wheel should be turned to and it receives a reward which is the distance traveled before it crashes so now let's look at how we can train a policy gradient network in the context of self-driving cars as a complete example and walk through it step by step
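Here is a rough sketch of such a continuous (Gaussian) policy head: the network outputs a mean and a standard deviation for the steering command, and an action is drawn by sampling from that normal distribution. The class name, layer sizes, and the single scalar action are all assumptions for illustration.

```python
import tensorflow as tf

class GaussianPolicy(tf.keras.Model):
    """Policy that parameterizes a normal distribution over one continuous action."""
    def __init__(self):
        super().__init__()
        self.encoder = tf.keras.Sequential([
            tf.keras.layers.Conv2D(32, 5, strides=2, activation='relu'),
            tf.keras.layers.Conv2D(64, 3, strides=2, activation='relu'),
            tf.keras.layers.Flatten(),
            tf.keras.layers.Dense(128, activation='relu'),
        ])
        self.mean_head = tf.keras.layers.Dense(1)      # mu
        self.log_std_head = tf.keras.layers.Dense(1)   # log sigma (unconstrained)

    def call(self, state):
        features = self.encoder(state)
        mu = self.mean_head(features)
        sigma = tf.exp(self.log_std_head(features))    # exponentiate so sigma > 0
        return mu, sigma

def sample_steering(policy, state):
    """Draw one steering command a ~ N(mu, sigma^2) for the current state."""
    mu, sigma = policy(state[tf.newaxis, ...])
    action = mu + sigma * tf.random.normal(shape=mu.shape)
    return float(action[0, 0])
```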
so first of all we start with our agent and we place it somewhere in the world in our environment we initialize it and we run a policy remember our policy is defined by a neural network that outputs the action to take at any moment in time and because it's not yet trained the policy is going to crash pretty early on but we run it until it crashes and when it crashes we record all of the states the actions and the rewards that it experienced at each point in time leading up to that crash this is our memory of what happened in the run that led up to the crash then we take the state action reward tuples that came close to the crash those are the ones that resulted in a bad outcome a low reward and for those actions we decrease the probability of them ever being executed again in those states so we'll try other actions in those places and for the actions near the beginning of the run far away from the crash and its penalty we increase the probability of those actions being repeated again in the future now the next time we repeat this process because we increased the good actions and decreased the bad actions we reinitialize the agent run until completion and update the policy again and we can keep doing this over and over and you'll see that eventually the agent learns to perform better and better because it keeps promoting all of the actions that resulted in high returns and demoting the actions that came very close to crashing so it doesn't repeat those again in the future and eventually it starts to follow the lanes without crashing and this is really incredible because we never taught this algorithm anything about lanes we never taught it anything about roads all we told it was when it crashed and when it survived all we gave it was a reward signal about survival and it was able to learn that in order to maximize its reward it should probably detect the lanes and detect the other cars and try to avoid those kinds of crashes now the remaining question is how we actually update our policy on every training iteration to accomplish this how do we decrease the probability of all of the bad actions and increase the probability of all of the good actions the part of the algorithm on the left-hand side that i'm talking about is steps four and five and to get at those let's look at the loss function for training policy gradients in practice and dissect what it's composed of the loss function is composed of two terms the first is a log-likelihood think of this as the probability or likelihood that the policy selects a given action from a given state this is the output of our policy so just to repeat it one more time it is the likelihood that our agent thinks it should execute action a given that it's in state s and we take the log so it's a log probability then we multiply this by the total discounted reward or return that was achieved at this time t now suppose we got a lot of return by taking an action that had a high log-likelihood this term will be weighted heavily in the loss and minimizing the loss will reinforce those actions because they resulted in very good returns on the other hand if the rewards were very low for a specific action those are bad rewards or penalties and that action should not be sampled again in the future so to minimize our loss we'd want to minimize the log probability of sampling that action again which is equivalent to reducing the probability mass at that action given that observation of the environment and when we plug this into our gradient descent algorithm to train the neural network we can see the policy gradient term highlighted in blue and this is where the method gets its name because the gradient we take is the gradient of our policy multiplied by the returns and that's where the connection comes from
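A minimal sketch of that update for a discrete policy network, assuming one recorded episode of states, actions, and rewards: the returns are discounted, and the loss is the negative log-likelihood of each chosen action weighted by its return, in the form just described. The function names and shapes are assumptions for illustration.

```python
import numpy as np
import tensorflow as tf

optimizer = tf.keras.optimizers.Adam(1e-4)
gamma = 0.95

def discounted_returns(rewards):
    """Compute R_t = sum_k gamma^k * r_{t+k} for every step of the episode."""
    returns = np.zeros(len(rewards), dtype=np.float32)
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        returns[t] = running
    return returns

def policy_gradient_step(policy_network, states, actions, rewards):
    """One update: raise the log-prob of actions in proportion to their returns."""
    returns = discounted_returns(rewards)
    with tf.GradientTape() as tape:
        probs = policy_network(states)                     # (T, num_actions)
        action_probs = tf.reduce_sum(
            probs * tf.one_hot(actions, probs.shape[-1]), axis=1)
        # loss = - mean over the episode of log pi(a_t | s_t) * R_t
        loss = -tf.reduce_mean(tf.math.log(action_probs + 1e-8) * returns)
    grads = tape.gradient(loss, policy_network.trainable_variables)
    optimizer.apply_gradients(zip(grads, policy_network.trainable_variables))
    return loss
```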
now as i conclude and wrap up this lecture i want to talk a little bit about how we can extend this framework of reinforcement learning into real life in the beginning we talked a lot about gameplay and how reinforcement learning has been shown to do amazing things in games where we have essentially a full observation of our environment but in the real world we don't have a full observation of our environment and oftentimes we don't even know what's going to happen next in a lot of games if i move my piece here there is a fixed number of possible outcomes that can come back to me so i have some understanding of my future even from a single state whereas in real life that's very much not the case now we can get around this somewhat by training in simulation but then the problem is that modern simulators often do not accurately depict the real world and because of that the policies do not really transfer to reality when you deploy them so one interesting point here is where does this whole training loop break down if we want to execute the algorithm on the left the key step that breaks everything is step number two if we try to do this in reality why because in reality we can't just let our vehicles drive on roads and crash over and over just so we can teach them how not to crash that's not a way we can train an autonomous vehicle to operate in reality and that's why simulators come into play one cool result that we created in our lab is a brand new photorealistic simulation engine for self-driving cars that is
entirely data-driven so it overcomes this sim-to-real gap where whatever we learn in simulation often cannot be transferred to reality now because this simulation is extremely photorealistic and entirely data driven the simulator is amenable to learning reinforcement learning policies it lets us use real data of the world to generate and synthesize brand new photorealistic data and that allows us to train reinforcement learning policies in simulation that can still be transferred and deployed into reality in fact we did exactly this and we showed that you can place agents within this simulator train them using policy gradients the exact same algorithm that you learned about in today's lecture with all of the training done entirely within the simulator which is called vista and then we took these policies and put them directly on board our full-scale autonomous vehicle on real roads and the policies transferred directly this represented the first time ever that a full-scale autonomous vehicle was trained using only reinforcement learning with no human supervision trained entirely in simulation and then deployed directly into reality so that's a really awesome result and in lab three all of you will have the ability to number one play around with the simulator and number two train your own agents using policy gradients or whatever reinforcement learning algorithm you would like within simulation and design your own autonomous vehicle controllers and as part of the prizes we'll invite the winners to put their policies on board the car so you can actually say that you trained an entire autonomous vehicle end-to-end using a single neural network put your neural network onto the car and had it drive on real roads i think that's a pretty awesome result that should motivate all of you so now we've covered the fundamentals behind value learning policy learning and policy gradient approaches and very briefly i'm going to talk about some very exciting advances and applications that we're seeing all over the world first we turn to the game of go where reinforcement learning agents were put to the test against human champions and achieved what at the time was an extremely exciting result just very briefly a quick introduction to the game of go it's played on a 19 by 19 grid which is extremely high dimensional in terms of gameplay between two players white and black and the objective of the game is to occupy more territory than your opponent the game of go is extremely complex in fact on the full 19 by 19 board there are a greater number of legal board positions than atoms in the entire universe about two times ten to the 170 positions now the objective here is to train an ai algorithm using reinforcement learning to master the game of go not only to beat the existing gold standard software but also to beat the current world champion google deepmind rose to this challenge a few years ago they developed a reinforcement learning based system that defeated the grand champion of go and the idea at its core was actually very clean and elegant first they trained a neural network to watch human expert go players and learn how to imitate
their behaviors then they used these pretrained networks to play against a reinforcement learning policy network which allowed that policy to go beyond what the human experts had done and achieve what's called superhuman performance and one of the tricks that really brought this algorithm to the next level was the use of an auxiliary network which took as input the state of the board and predicted how good that position was given that network the ai was able to almost hallucinate how it could reach these strong board positions and the steps it would have to take to reach them and use those essentially to guide its predicted values and finally a recently published extension of these approaches from about a year and a half ago called alphazero uses only self-play and generalizes to three famous board games chess shogi and go and in all of these examples the authors demonstrated that it was possible not only to learn to master the game but again to surpass human performance and the gold standards of the time so now just to wrap up today's lecture we talked first about the foundations of reinforcement learning what defines the reinforcement learning setting ranging from the agent to the environment and how they interact with each other then we talked about two different approaches for solving the reinforcement learning problem first q learning where we use a neural network to estimate the expected total future return on rewards and then policy gradients where we train a network to optimize our policy directly and how we can extend these methods to continuous action spaces for example in autonomous driving so in the next lecture we're going to hear from ava on the new and exciting recent advances in the deep learning literature and also some interesting limitations of what you've been learning as part of this class and hopefully that can spark some motivation for how you can build on everything you learned in this class and advance the field even further because there's still so much to be done so in a few minutes we'll hear from ava on that thank you very much
MIT_6S191_Introduction_to_Deep_Learning
MIT_6S191_2019_Recurrent_Neural_Networks.txt
hi everyone my name is Alvin and welcome to our second lecture on deep sequence modeling so in the first lecture Alexander talked about the essentials of neural networks and feed-forward models and now we're going to turn our attention to applying neural networks to problems which involve sequential processing of data and why these sorts of tasks require a different type of network architecture from what we've seen so far before we dive in I'd like to start off with a really simple example suppose we have this picture of a ball and we want to predict where it will travel to next without any prior information about the ball's history any guess on its next position will be exactly that just a guess however if in addition to the current location of the ball I also gave you a history of its previous locations now the problem becomes much easier and I think we can all agree that we have a pretty clear sense of where the ball is going to go next so this is a really really simple sequence modeling problem given this image this thought experiment of a ball's travel through space can we predict where it's going to go next but in reality the truth is that sequential data is all around us for example audio can be split up into a sequence of sound waves while text can be split up into a sequence of either characters or words and beyond these two ubiquitous examples there are many more cases in which sequential processing may be useful from analysis of medical signals like EKGs to predicting stock trends to processing genomic data and now that we've gotten a sense of what sequential data looks like I want to turn our attention to another simple problem to motivate the types of networks that we're going to use for this task and in this case suppose we have a language model where we're trying to train a neural network to predict the next word in a phrase or a sentence and suppose we have this sentence this morning I took my cat for a walk yes you heard and read that right this morning I took my cat for a walk and let's say we're given these words this morning I took my cat for a and we want to predict the next word in the sequence and since this is a class on deep learning we're going to try to build a deep neural network like a feed-forward network from our first lecture to do this and one problem that we're immediately going to run into is that our feed-forward network can only take a fixed length vector as its input and we have to specify the size of this input right at the start and you can imagine that this is going to be a problem for our task in general because sometimes we'll have seen five words sometimes seven words sometimes ten words and we want to be able to predict what comes next so fundamentally we need a way to handle variable length input and one way we can do this is to use this idea of a fixed window to force our input vector to be a certain length in this case two and this means that no matter where we're trying to make our prediction we just take the previous two words and try to predict the next word and we can represent these two words as a fixed length vector where we take a larger vector allocate some space for the first word some space for the second word and encode the identity of each word in that vector but this is problematic because of the fact that we're using this fixed window we're giving ourselves a limited history which means that we can't effectively model long term dependencies in our input data which is
important in sentences like this one, where we clearly need information from much earlier in the sentence to accurately predict the next word. If we were only looking at the past two, three, or even five words, we wouldn't be able to make this prediction, namely the word "French". So we need a way to integrate information from across the sentence but still represent the input as a fixed-length vector. Another way we could do this is by actually using the entire sequence but representing it as a set of counts. This representation is what's called a bag of words, where we have some vector, each slot in the vector represents a word, and the value in that slot is the number of times that word appears in the sentence. So we have a fixed-length vector over some vocabulary of words regardless of the length of the input sentence; only the counts are going to be different, and we can feed this into our feed-forward neural network to generate a prediction about the next word. You may have already realized that there's a big problem with this approach: in using counts we've completely abolished all sequence information. For example, two sentences with completely opposite semantic meanings can have the exact same bag-of-words representation, same words, same counts, so obviously this isn't going to work. Another idea could be to simply extend our first idea of a fixed window, thinking that by looking at more words we can get most of the context we need. So we can represent our sentence this way, just a longer fixed window, feed it into our feed-forward model, and make a prediction. If we were to feed this vector into a feed-forward neural network, each of these inputs, each 0 or 1 in the vector, would have a separate weight connecting it to the network. So if we repeatedly saw the words "this morning" at the beginning of the sentence, the neural network would learn that "this morning" represents a time or a setting, but if in another sentence "this morning" were to appear at the end, the network would have difficulty recognizing that it still means the same thing, because the parameters that see the end of the vector have never seen that phrase before and the parameters from the beginning haven't been shared across the sequence. At a higher level, what this means is that what we learn about the sequence at one point is not going to transfer to anywhere else in the sequence if we use this representation. So hopefully by walking through this I've motivated why a traditional feed-forward neural network is not really well suited to handle sequential data, and this simple example further motivates a concrete set of design criteria that we need to keep in mind when building a neural network for sequence modeling problems: specifically, our network needs to be able to handle variable-length sequences, track long-term dependencies in the data, maintain information about the sequence order, and share the parameters it learns across the entirety of the sequence. Today we're going to talk about using recurrent neural networks, or RNNs, as a general framework for sequential processing and sequence modeling problems, so let's go through the general principle behind RNNs and explore how they're a fundamentally different architecture from what we saw in the first lecture.
This is an abstraction of our standard feed-forward neural network. In this architecture data propagates in one direction, from input to output, and we already motivated why a network like this can't really handle sequential data. RNNs, in contrast, are really well suited for handling cases where we have a sequence of inputs rather than a single input, and they're great for problems like this one, in which a sequence of data is propagated through the model to give a single output. For example, you can imagine training a model that takes as input a sequence of words and outputs the sentiment associated with that phrase or sentence. Alternatively, instead of returning a single output, we could also train a model where we take in a sequence of inputs, propagate them through our recurrent neural network, and then return an output at each time step in the sequence; an example of this would be text or music generation, and you'll get a chance to explore this type of model later on in the lab. Beyond these two examples there are a whole host of other recurrent neural network architectures for sequential processing, and they've been applied to a range of problems. So what fundamentally is a recurrent neural network? To reiterate, in our standard vanilla feed-forward neural network we're going from input to output in one direction, and this fundamentally can't maintain information about sequential data. RNNs, on the other hand, are networks that have loops in them, which allow information to persist. So in this diagram our RNN takes as input this vector x of t, outputs a value like a prediction y hat of t, but also makes a computation to update an internal state, which we call h of t, and then passes this information about its state from this step of the network to the next. We call these networks with loops in them recurrent because information is being passed internally from one time step to the next. So what's going on under the hood, how is information being passed? RNNs use a simple recurrence relation in order to process sequential data: specifically, they maintain this internal state h of t, and at each time step we apply a function parametrized by a set of weights W to update this state based on both the previous state h of t minus 1 and the current input x of t. The important thing to know here is that the same function and the same set of parameters are used at every time step, and this addresses that important design criterion from earlier of why it's useful to share parameters in the context of sequence modeling. To be more specific, the RNN computation includes both a state update as well as the output. Given our input vector, we apply some function to update the hidden state, and as we saw in the first lecture, this function is a standard neural net operation that consists of multiplication by a weight matrix and applying a non-linearity. But in this case, since we have both the input vector x of t as well as the previous state h of t minus 1 as inputs to our function, we have two weight matrices, and we can then apply our non-linearity to the sum of these two terms. Finally, we generate an output at a given time step, which is a transformed version of our internal state that follows from a multiplication by a separate weight matrix. So far we've seen RNNs depicted as having these loops that feed back in on themselves; another way of thinking about the RNN is in terms of unrolling this loop across time.
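To make that recurrence relation concrete, here is a minimal sketch in NumPy of a single RNN step, assuming tanh as the non-linearity; the weight names and sizes are illustrative assumptions, not the lab's actual code.

```python
import numpy as np

hidden_size, input_size, output_size = 16, 8, 4
rng = np.random.default_rng(0)
W_xh = rng.normal(scale=0.1, size=(hidden_size, input_size))   # input  -> hidden
W_hh = rng.normal(scale=0.1, size=(hidden_size, hidden_size))  # hidden -> hidden
W_hy = rng.normal(scale=0.1, size=(output_size, hidden_size))  # hidden -> output

def rnn_step(x_t, h_prev):
    """One application of the recurrence: h_t = tanh(W_hh h_{t-1} + W_xh x_t)."""
    h_t = np.tanh(W_hh @ h_prev + W_xh @ x_t)
    y_t = W_hy @ h_t      # the output is a transformed version of the hidden state
    return y_t, h_t

# The same weights are reused at every time step of a toy 5-step sequence:
h = np.zeros(hidden_size)
for x_t in rng.normal(size=(5, input_size)):
    y, h = rnn_step(x_t, h)
```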
If we do this, we can think of the RNN as multiple copies of the same network, where each copy passes a message on to its descendant, and continuing this scheme throughout time you can easily see that RNNs have this chain-like structure, which really highlights how and why they're so well suited for processing sequential data. In this representation we can make our weight matrices explicit: the weights that transform the inputs to the hidden state, the weights that transform the previous hidden state to the next hidden state, and finally the weights that transform the hidden state to the output. It's important once again to note that we are using the same weight matrices at every time step. From these outputs we can compute a loss at each time step, and this completes what is called our forward pass through the network. Finally, to define the total loss, we simply sum the losses from all the individual time steps, and since our total loss consists of these individual contributions over time, training the network will also have to involve some time component. So in terms of actually training RNNs, how can we do this? The algorithm that's used is an extension of the backpropagation idea that Alexander introduced in the first lecture, and it's called backpropagation through time. To remind you, let's think back to how we trained feed-forward models: given our inputs, we first make a forward pass through the network going from input to output, and then backpropagate gradients back through the network, taking the derivative of the loss with respect to each parameter and tweaking our parameters in order to minimize the loss. For RNNs, our forward pass consists of going forward across time, updating the hidden state based on the input and the previous state, generating an output at each time step, computing a loss at each time step, and then finally summing these individual losses to get the total loss. What this means is that instead of backpropagating errors through a single feed-forward network at a single time step, in RNNs errors are backpropagated at each individual time step and then across time steps, all the way from where we are currently to the very beginning of the sequence, and this is the reason why it's called backpropagation through time: all the errors are flowing back in time to the beginning of our data sequence. If we take a closer look at how gradients flow across this chain of repeating modules, you can see that between these time steps, in doing this backpropagation, we have this factor W_hh, which is a matrix, and this means that in each step we have to perform a matrix multiplication that involves this weight matrix. Furthermore, each hidden state update results from a nonlinear activation, and what this means is that in computing the gradient in an RNN, the derivative of the loss with respect to our initial state h0, we have to make many matrix multiplications that involve the weight matrix as well as repeated use of the derivative of the activation function. Why might this be problematic? Well, if many of these values are greater than one, what we can encounter is what's called the exploding gradient problem, where our gradients become extremely large and we can't do any optimization. To combat this, one thing that's done in practice is called gradient clipping, which basically means you scale back your gradients when they become too large.
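As a rough sketch of what clipping by norm might look like, assuming the gradients have already been computed by backpropagation through time (the threshold of 5.0 is an arbitrary illustrative choice):

```python
import numpy as np

def clip_by_norm(grads, max_norm=5.0):
    """Scale the whole list of gradients down if their global norm is too large."""
    total_norm = np.sqrt(sum(np.sum(g ** 2) for g in grads))
    if total_norm > max_norm:
        scale = max_norm / total_norm
        grads = [g * scale for g in grads]   # direction is preserved, magnitude is capped
    return grads
```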
This is a really good practical option, especially when you have a network that's not too complicated and doesn't have many parameters. On the flip side we can also have the opposite problem: if our matrix values are too small, we can encounter what's called the vanishing gradient problem, and it's really the motivating factor behind the most widely used RNN architectures. Today we're going to address three ways in which we can alleviate the vanishing gradient problem: by changing the activation function that's used, by being clever about how we initialize the weights in our network, and finally by fundamentally changing the RNN architecture to combat this. Before we go into that, let's take a step back and try to establish some more intuition as to why vanishing gradients are such a big problem. Imagine you have a number and you keep multiplying that number by something between zero and one; that number is going to keep shrinking and shrinking, and eventually it's going to vanish. When this happens to gradients, it means it's going to be harder and harder to propagate errors further back into the past, because the gradients are going to become smaller and smaller, and this means that during training we'll end up biasing our network to capture short-term dependencies. That may not always be a problem; sometimes we only need to consider very recent information to perform our task of interest. To make this concrete, let's go back to our example from the beginning of the lecture, a language model where we're trying to predict the next word in a phrase. If we're trying to predict the last word in the phrase "the clouds are in the blank", it's pretty obvious what the next word is going to be, and there's not much of a gap between the relevant information, like the word "clouds", and the place where the prediction is needed, so a standard RNN can use the past information to make the prediction. But there can be other cases where more context is necessary, like in this example: more recent information suggests that the next word is most likely the name of a language, but to identify which language, we need information from further back, the context of France. In many cases the gap between what's relevant and the point where that information is needed can become really large, and as that gap grows, standard RNNs become increasingly unable to connect the information, and that's all because of the vanishing gradient problem. So how can we alleviate this? The first trick is pretty simple: we can change the activation function the network uses. Specifically, both the tanh and sigmoid activation functions have derivatives less than one pretty much everywhere; in contrast, if we use a ReLU activation function, the derivative is one whenever x is greater than zero, and so this helps prevent the value of the derivative from shrinking our gradients, but it's only true when x is greater than zero. Another trick is to be smart in terms of how we initialize the parameters in our network: by initializing our weights to the identity matrix we can help prevent them from shrinking to zero too rapidly during backpropagation. The final and most robust solution is to use a more complex type of recurrent unit that can more effectively track long-term dependencies by controlling what information is passed through and what's used to update the cell state. Specifically, we'll use what we call gated cells, and there are many types of these gated cells that exist.
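Here is a small toy illustration, not from the lecture, of the shrinking-number intuition above: backpropagating through many steps multiplies the error signal by the recurrent weight matrix and the activation derivative again and again, and with small values the signal that reaches the early time steps all but disappears.

```python
import numpy as np

rng = np.random.default_rng(0)
W_hh = rng.normal(scale=0.1, size=(16, 16))   # small random recurrent weights
grad = np.ones(16)                            # error signal arriving at the last step

for t in range(50):
    tanh_deriv = 0.5                          # stand-in for 1 - tanh(h)^2, always < 1
    grad = (W_hh.T @ grad) * tanh_deriv       # one step of backpropagation through time

print(np.linalg.norm(grad))                   # essentially zero after 50 steps
# Initializing W_hh closer to the identity keeps this repeated product from
# collapsing as quickly, which is the second trick mentioned above.
```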
Today we'll focus on one type of gated cell called the long short-term memory network, or LSTM for short, which is really good at learning long-term dependencies and overcoming this vanishing gradient problem. LSTMs are basically the gold standard when it comes to building RNNs in practice, and they're very widely used by the deep learning community. To understand what makes LSTMs special, let's think back to the general structure of an RNN. All recurrent neural networks have this form of a series of repeating modules, the RNN being unrolled across time, and in a standard RNN the repeating module contains one computation node, in this case a tanh layer. LSTMs also have this chain-like structure, but the repeating module is slightly more complex; don't get too frightened by what these flow diagrams mean, we'll walk through it step by step. The key idea here is that the repeating unit in an LSTM contains different interacting layers that control the flow of information. The first key idea behind LSTMs is that they maintain an internal cell state, which we'll denote c of t, in addition to the standard RNN state h of t, and this cell state runs throughout the chain of repeating modules. As you can see, there are only a couple of simple linear interactions, a pointwise multiplication and an addition, that update the value of c of t, and this means that it's really easy for information to flow along relatively unchanged. The second key idea is that LSTMs use structures called gates to add or remove information to the cell state, and a gate consists of a sigmoid neural net layer followed by a pointwise multiplication. Let's take a moment to think about what these gates are doing. The sigmoid function is special because it forces the input to the gate to be between 0 and 1, and intuitively you can think of this as capturing how much of the input should be passed through the gate: if it's 0, don't pass any of that information through; if it's 1, pass all the information through. So this regulates the flow of information through the LSTM. Now you're probably wondering, okay, these lines look really complicated, how do these LSTMs actually work? Thinking of the LSTM operations at a high level, it boils down to three key steps. The first step in the LSTM is to decide what information is going to be thrown away from the prior cell state: forget irrelevant history. The next step is to take both the prior information as well as the current input, process this information in some way, and then selectively update the cell state. And our final step is to return an output, and for this LSTMs are going to use an output gate to return a transformed version of the cell state. Now that we have a sense of these three key LSTM operations, forget, update, output, let's walk through each step by step to get a concrete understanding of how these computations work. Even though we're going to walk through this, I really want you to keep in mind the high-level concept of each operation, because that's what's important in terms of establishing the intuition behind how LSTMs work. We'll again go back to our language model example that we've been using throughout this lecture, where we're trying to predict the next word in a sequence. Our first task is to identify what past information is relevant and what's irrelevant, and we achieve this using a sigmoid layer called the forget gate.
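Before walking through the specific gates, here is a minimal sketch of that generic gating operation, a sigmoid layer followed by a pointwise multiplication; the weight shapes are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gate(W, b, inputs, candidate):
    g = sigmoid(W @ inputs + b)   # each entry lands in [0, 1]: how much to let through
    return g * candidate          # pointwise multiplication regulates the flow
```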
The forget gate f of t is parametrized by a set of weights and biases just like any neural network layer, and this layer looks at the previous information h of t minus 1 as well as the input x of t, then outputs a number between zero and one, between completely forgetting that information and completely keeping it, and passes along this decision. In our language model example, the cell state might have included some information about the gender pronoun of a subject in a sentence, so you can imagine updating the LSTM to forget the gender pronoun of a sentence's past subject once it encounters a new subject in that sentence. Our second step is to decide what new information is going to be stored in our updated cell state and to actually execute that update. There are two parts to this: the first is a sigmoid layer, which you can think of as gating the input, which identifies what values we should update; secondly, we have a tanh layer that generates a new vector of candidate values that could be added to the state. In our language model, we may want to add the gender of a new subject in order to replace the gender of the old subject. Now we can actually update our old cell state c of t minus 1 into the new cell state c of t; our previous two steps decided what we should do, now it's about actually executing it. To perform this update, we first multiply our old cell state c of t minus 1 by our forget gate f of t, which forgets what we decided to forget. We can then add our set of new candidate values, scaled by the input gate, to selectively update each state value. In our language model example, this means that we're dropping information about the old subject's gender and then adding the new information. Finally, we just need to decide what we're going to output and actually output it. What we are going to output, h of t, is a filtered version of the internal state that we've been maintaining and updating all along. So again we use a sigmoid layer to gate what we're going to output, and we put our recently updated cell state c of t through a tanh layer and then multiply this by the output of the sigmoid gate; essentially this amounts to transforming the updated cell state using that tanh and then gating it. In our language model, for example, you may want to output information that relates to a verb if we've just seen a new subject in the sentence. So this gives us a sense of the internal workings of the LSTM, but if there's one thing that you take away, it's those three high-level operations of the LSTM: forget old information, update the cell state, and output a filtered version. To really appreciate how the LSTM helps overcome the vanishing gradient problem, let's consider the relationship between c of t and c of t minus 1. When we backpropagate from c of t, our current cell state, to c of t minus 1, what you'll notice is that we only have to perform an elementwise multiplication and an addition; in doing this backpropagation there's no matrix multiplication involved, and that's entirely because we're maintaining this separate cell state c of t apart from h of t, and that c of t is only involved in really simple computations.
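Putting the three operations together, a single LSTM step might look roughly like the following sketch; the weight names and the choice to concatenate h of t minus 1 with x of t are illustrative assumptions rather than the lab's actual code.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W_f, W_i, W_c, W_o, b_f, b_i, b_c, b_o):
    z = np.concatenate([h_prev, x_t])     # the gates look at h_{t-1} and x_t together
    f_t = sigmoid(W_f @ z + b_f)          # forget gate: what to throw away
    i_t = sigmoid(W_i @ z + b_i)          # input gate: what to update
    c_tilde = np.tanh(W_c @ z + b_c)      # candidate values for the cell state
    c_t = f_t * c_prev + i_t * c_tilde    # cell state update uses only pointwise ops
    o_t = sigmoid(W_o @ z + b_o)          # output gate
    h_t = o_t * np.tanh(c_t)              # output a filtered version of the cell state
    return h_t, c_t

# Toy usage with random weights, just to show the shapes involved:
n_h, n_x = 16, 8
rng = np.random.default_rng(0)
W_f, W_i, W_c, W_o = (rng.normal(scale=0.1, size=(n_h, n_h + n_x)) for _ in range(4))
b_f = b_i = b_c = b_o = np.zeros(n_h)
h, c = lstm_step(rng.normal(size=n_x), np.zeros(n_h), np.zeros(n_h),
                 W_f, W_i, W_c, W_o, b_f, b_i, b_c, b_o)
```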
So when you link up these repeating LSTM units in a chain, what you'll see is that you get this completely uninterrupted gradient flow, unlike in a standard RNN where you have to do repeated matrix multiplications, and this is really great for training purposes and for overcoming the vanishing gradient problem. To recap the key ideas behind LSTMs: we maintain a separate cell state from what's outputted; we use gates to control the flow of information, first forgetting what's irrelevant, selectively updating the cell state based on both the past history and the current input, and then outputting some filtered version of what we just computed; and this maintenance of a separate cell state allows for simple backpropagation with uninterrupted gradient flow. Now that we've gone through the fundamental workings of RNNs, backpropagation through time, the vanishing and exploding gradient problems, and the LSTM architecture, I'd like to close by considering three concrete examples of how to use RNNs. Let's first imagine we're trying to train an RNN to predict the next musical note and to use this model to generate brand new musical sequences. You can imagine inputting a sequence of notes and producing an output at each time step, where the output at each time step is what we think is the next note in the sequence, and if you train a model like this you can actually use it to generate brand new music that's never been heard before. So for example [Music] you get the idea. This sounds like classical music, but in reality it was generated by a recurrent neural network that was trained on piano pieces from Chopin and, after the training process, was asked, okay, now generate some new music based on what you've learned. You can hear that it sounds extremely realistic; you may not have been able to tell that this was music generated by a machine unless maybe you're an expert piano aficionado. You'll actually get some practice building a model to do exactly this in today's lab, where you'll be training an RNN to generate brand new Irish folk music that has never been heard before. As another cool example, where we're going from an input sequence to just a single output, we can train an RNN to take as input the words in a sentence and output the sentiment or the feeling of that particular sentence, either positive or negative. For example, if we were to train a model like this on a set of tweets, we could train our RNN to predict that this wonderful first tweet about our class, 6.S191, has a really positive sentiment, which hopefully you agree with, but that this other tweet about the weather is actually negative. The final example I'll briefly talk about is one of the most powerful and widely used applications of RNNs in industry, and it's the backbone of Google's Translate algorithm: machine translation, where you input a sentence in one language and train an RNN to output a sentence in a new language. This is done by having an encoder that encodes the original sentence into a state vector and a decoder which decodes that state vector into the new language. But there's a big problem in the approach as depicted here, and that's the fact that the entire original sentence needs to be encoded into a single vector that's passed from the encoder to the decoder, and this is a huge bottleneck when you're considering large bodies of text that you're trying to translate. Researchers actually devised a clever way to get
around this problem which is this idea of attention and the basic idea here is that instead of the decoder only having access to the final encoded state and now has access to each of these states after each of the steps in the original sentence and the actual weighting of these vectors from encoder to decoder is learned by the network during training and this this technique is called attention because when the network learns this waiting its placing its attention on different parts of the input sequence and in this sense you can think of it as actually capturing a sort of memory of the original sentence so hopefully you've gotten a sense of how our ends work and why they're so powerful for sequential processing we've discussed why they're so well-suited for sequential modeling tasks seen how to define their operation using this recurrence relation how to train them using back propagation through time and also looked at how gated cells can let us model long-term dependencies and finally we discussed three concrete applications of RN ends and so this concludes right the lecture portion of our first day of six s-191 and we're really excited now to transition to the lab portion which as I mentioned is going to mostly focus on training and RNN to generate brand new music and so the the lab is going to be broken down into two parts but sorry but before I go into the specifics of getting started with the labs I'd like to take a few minutes pause for those of you who you know plan to stick around for the labs we're happy to have you we're also happy to address any questions that you may have about either lecture one or lecture two here at the front and we'll just take a 5-minute break or so to orient ourselves and get set up for the lab thank you [Applause]
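Coming back to the attention weighting described just above, here is a rough sketch of the idea, assuming a simple dot-product score between the decoder state and each encoder state (the lecture does not specify the exact scoring function used).

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attend(decoder_state, encoder_states):
    """Weight every encoder time step instead of relying on the final state only."""
    scores = encoder_states @ decoder_state   # one score per encoder time step
    weights = softmax(scores)                 # the learned weighting sums to 1
    context = weights @ encoder_states        # weighted combination of all states
    return context, weights
```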
MIT_6S191_Introduction_to_Deep_Learning
MIT_6S191_2020_Reinforcement_Learning.txt
you now I think this field is really incredible because at its core it moves away from this paradigm that we've seen so far in this class in the first three or four lectures actually so so far in this class we've been using deep learning on fixed datasets and we've really been caring about our performance on that fixed dataset but now we're moving away from that and we're thinking about scenarios where our deep learning model is its age its own self and it can act in an environment and when it takes actions in that environment it's exploring the environment learning how to solve some tasks and we really get to explore these type of dynamics scenarios where you have a autonomous agent potentially working in the real world with humans or in a simulated environment and you get to see how we can build agents that learn to solve these tasks without any human supervision in some cases or any guidance at all so they learn to solve the tasks entirely from scratch without any data set just by interacting with their environment now this has huge obvious implications in fields like robotics where you have self-driving cars and also manipulation so having hands that can grasp different objects in the environment but it also impacts the world of gameplay and specifically strategy and planning and you can imagine that if you combine these two worlds robotics and gameplay you can also create some pretty cool applications where you have a robot playing against the human in real life [Music] okay so this is a little bit dramatized and the robot here is not actually using deep reinforcement learning I'd like to say that first so this is actually entirely choreographed for a TV ad but I do hope that it gives you a sense of what this marriage of having autonomous agents interact in the real world and the potential implications of having efficient learning of the autonomous controllers that define the actions of those autonomous agents so actually let's first take a step back and look at how deep reinforcement learning fits into this whole paradigm of what we've seen in this class so far so so far what we've explored in the first three lectures actually has been what's called supervised learning and that's where we have a data set of our data X and our labels Y and what we've tried to do in the first three lectures really is learn a neural network or learn a model that takes us input the data X and learns to predict the labels Y so not an example of this is if I show you this picture of an apple or we want to train our model to predict that this is an apple it's a classification problem next we discussed in the fourth lecture the topic of unsupervised learning and in this realm we only have access to data as there are no labels at all and the goal of this problem is that we just want to find structure in the data so in this case we might see an example of two types of apples and we don't know that these are apples per se because there's no labels here but we need to understand that there's some structure underlying structure within these apples and we can identify that yes these two things are the same even if we don't know that they're specifically apples now finally in reinforcement learning we're going to be given data in the form of what are called state action pairs so States are the observations or the inputs to the system and the actions are the actions well that the agent wants to take in that environment now the goal of the agent in this world is just to maximize its own rewards or to take actions 
that result in rewards and in as many rewards as possible so now in the Apple example we can see again we don't know that this thing is an apple but our agent might have learned that overtime if it eats through eat an apple it counts as food and might survive longer in this world so it learns to eat this thing if it sees it so again today in this class our focus is going to be just on this third realm of reinforcement learning and seeing how we can build deep neural networks that can solve these problems as well and before I go any further I want to start by building up some key vocabulary for all of you just because in reinforcement learning a lot of the vocabulary is a little bit different than in supervised or unsupervised learning so I think it's really important that we go back to the foundations and really define some important vocabulary that's going to be really crucial before we get to building up to the more complicated stuff later in this lecture so it's really important that if any of this doesn't make sense in these next couple slides you stop me and make sure you ask questions so first we're gonna start with the agent the agent is like the central part of the reinforcement learning algorithm it is the neural network in this case the agent is the thing that takes the actions in real life you are the agents each of you if you're trying to learn a controller for a drone to make a delivery the drone is the agent the next one is the environment the environment is simply the world in which the agent operates or acts so in real life again the world is your environment now the agent can send commands to the environment in the form of what are called actions now in many cases we simplify this a little bit and say that the agent can pick from a finite set of actions that it can execute in that world so for example we might say that the agent can move forward backwards left or right within that world and at every moment in time the agent can send one of those actions to the environment and in return the environment will send back observations to that agent so for example the agent might say that okay I want to move forward one step the the environment is going to send back in observation in the form of the state and a state is a concrete or immediate situation that the action that the agent finds itself in so again for example the state might be the actual vision or the scene that the agency is around it it could be in form of an image or a video maybe sound whatever you can imagine it's just the data that the agencies in return and again this loop continues it the agency is that observation or that state and it takes a new action in return and we continue this loop now the goal of reinforcement learning is that the agent wants to try to maximize its own reward in this environment so at every step the agent is also getting back a reward from that environment now the reward is simply just a feedback measure of success or failure every time the agent acts and you don't have to get a reward every time you act but your reward might be delayed you might only get one reward at the very end of your episode so you might live a long time and then at the end of your life get a reward or not so it doesn't have to be like every moment in time you're getting a reward these rewards effectively you can think about them as just evaluating all of the agents actions so from them you can get a sense of how well the agent is doing in that environment and that's what we want to try and maximize now we can 
look at the total reward as just the summation of all of the individual rewards over time. If you start at some time t, we can define capital R of t as the sum of all of the rewards from that point on into the future, and simply expanding the summation out, you can see it's little r of t, the reward at this time step right now after taking this action, plus all of the rewards into the future, potentially on an infinite time horizon. Now, often it's very useful to consider not just the sum of all rewards but what's called the discounted sum of rewards, which is obtained by multiplying each of the rewards by a discount factor, and the reason you do this is so that future rewards don't count quite as much as a current reward. Let me give a concrete example: if I could offer you five dollars today or five dollars in ten years, it's still a reward of five dollars, but you'd take the one today, and that's because mentally you're discounting over time; that five dollars is not worth as much to you because it's coming so far into the future, so you'd prefer rewards that come as quickly as possible. Again, showing this discounted total reward expanded out from a summation, you can see that at each time point we're multiplying the reward at that time by the discount factor, which is typically between 0 and 1. Okay, so now that we've defined all these terms, there's one very important function in reinforcement learning called the Q function that we now need to define. Let's go back a step and remember how this total discounted reward, or what's also called the return, is defined: that's again just taking the current reward at time t and then adding on all future rewards, each multiplied by its discount factor. Now, the Q function takes as input the current state of the agent and also the action that the agent executes at that time, and it returns the expected total discounted return that the agent could expect from that point in time. Let's think about what this means: it's telling us that if the agent is in state s and it takes an action a, the total amount of discounted reward it could obtain by taking that action in that state is the result of that Q function, and that's all the Q function is telling you. A higher Q value tells us that we're taking an action that's more desirable in that state; a lower Q value tells us that we've made an undesirable action in that state, so we always want to try to take actions that maximize our Q value. Okay, so now the question is, if we take this magical Q function and we have an agent that has access to this oracle of the Q function, so assume that I give you the Q function for now and I place that agent in an environment, how can that agent use the Q function to take actions in the environment? Let me actually ask you this as a question: if I give you this Q function and you're the agent and all you see is the state, how would you use that Q function to take your next action? Exactly, yeah. You would feed in all of the possible actions that you could execute at that time and evaluate your Q function; your Q function will tell you that for some actions you have a very high Q value and for other actions you have a very low Q value, and you
pick the action that gives you the highest Q value and that's the one that you execute at that time so let's actually go through this so ultimately what we want is to take actions in the environment the function that will take us input a state or an observation and predict or evaluate that to an action is called the policy denoted here as PI of s and the strategy that we always want to take is just to maximize our Q value so PI of s is simply going to be the Arg max over our actions of that Q function so we're going to evaluate our Q function over all possible actions and then just pick the action that maximizes this Q function that's our policy now in this lecture we are going to focus on two classes of reinforcement learning algorithms and the two categories that we're going to primarily focus on first are cases where we want our deep neural network to learn the Q function so now we're actually not given the Q function as ground truth or as an Oracle but we want to learn that q function directly and the second class of algorithms is we're sorry so when you take we learned that q function and we use that q function to define our policy the second class of functions second class of algorithms is going to directly try and learn that policy without the intermediate q function to start with okay so first we'll focus on value learning which again just to reiterate is where we want the deep neural network to learn that q function and then we'll use that learn Q function to determine our policy through the same way that I did before okay so let's start digging little bit deeper into that q-function so you can get a little more intuition on how it works and what it really means and to do that I'd like to introduce this breakout game which you can see on the left the idea of the breakout game is that you have you are the paddle you're on the bottom you can move left or right or stay in the middle or don't move it all rather and you also have this ball that's coming towards you and your objective in the game is to move left and right so that you hit that ball you bounces off your paddle and it tries to hit as many of the colored blocks on top as possible every time you hit a colored block on top you break off that block hence the name of the game is called breakout the objective of the game is to break out all of the blocks or break out as many of the blocks as possible before that ball passes your paddle and yeah so the ball bounces off your peddling you try and break off as many colored blocks as possible the cue function basically tells us the expected total return that we can expect at any state given a certain action that we take at that state and the point I'd like to make here is that estimating or guessing what the Q value is is not always that intuitive in practice so for example if I show you these two actions that are two states and action pairs that this agent could take and I ask you which one of these probably has a higher Q value or said differently which one of these will give you a higher total return in the future so the sum of all of those rewards in the future from this action and state forward how many of you would say state action pair a okay how many of you would say state action pair B okay so you guys think that this is a more desirable action to take in that state state B or a scenario B okay so first let's go through these and see the two policies working in practice so we'll start with a let me first describe what I think a is gonna be acting like so a is a pretty 
conservative policy it's not gonna move when it sees that ball coming straight toward it which means that it's probably going to be aiming that ball somewhere towards the middle of the or uniformly across the top of the board and it's gonna be breaking off color or colored blocks across the entire top of the board right so this is what that looks like it's making progress it's killing off the blocks it's doing a pretty good job it's not losing I'd say it's doing a pretty good job but it doesn't really dominate the game okay so now it's good to be so what B is doing is actually moving out of the way of the ball just so that it can come back towards the ball and hit the ball on its corner so like ball ricochets off at an extreme angle and tries to hit the color blocks at a super EXTREME angle now what that means well actually let me ask why might this be a desirable policy mm-hmm yeah so if you catch someone's on this really extreme edges what might happen is that you might actually be able to sneak your ball up into a corner and start killing off all of the balls on the top or all of the blocks on the top rather so let's see this policy in action so you can see it's really hitting at some extreme angles and eventually it it breaches a corner on the left and it starts to kill off all the blocks on the top it gets a huge amount of reward from this so this is just an example of how it's not always intuitive to me when I first saw this I thought a was going to be the safer action to take it would be the one that gives me de Morgan's like more of a turn but it turns out that there are some unintuitive actions that reinforcement learning agents can learn to really I don't know if I would call it cheating the environment but really doing things that we as humans would not find intuitive okay so the way we can do this practically with deep learning is we can have a deep neural network which takes as input a state which could in this case is just the pixels coming from that game at that instant and also some representation of the action that we want to take in this case maybe go right move the paddle to the right it takes both of those two things as input and it returns the Q value just a single number of what the neural network believes the Q value of that state action pair is now that's fine you can do it like this there's one minor problem with doing it like this and that's if you want to create your policy you want to try out all of the different possible actions that you could execute at that time which means that you're gonna have to run this network n times at every time instant where n is the number of actions that you could take so every time you'd have to execute this network many times just to see which way to go the alternative is that you could have one network that output or takes as input that state but now it has learned to output all of the different Q values for all of the different actions so now here we have to just execute this once before we propagate once and we can see that it gives us back the Q value for every single action we look at all of those Q values we pick the one that's maximum and take the action that corresponds now that we've set up this network how do we train it to actually output the true Q value at a particular instance or the Q function over many different states now what we want to do is to maximize the target return right and that will train the agent so this would mean that the target return is going to be maximized over some infinite time horizon and 
this can serve as the ground truth to train that agent. So we can basically roll out the agent, see how it did in the future, and based on the rewards we see it got, use that as the ground truth. Now I'm going to define this in two parts. First is the target Q value, which is the real value that we got by just rolling out the episode of the agent inside this simulator or environment. The target Q value is composed of the reward that we got at this time by taking this action, plus the return from taking the best action at every future time. So we take the best action now and we take the best action at every future time as well; assuming we do that, we can just look at our data, see what the rewards were, add them all up, and discount appropriately, and that's our target Q value. The predicted Q value is obviously just the output from the network. Now that we have a target and a prediction, we can train this whole network end to end by subtracting the two and taking the squared difference; that's our loss function, a mean squared error between the target Q value and the predicted Q value from the network. Great, so let's summarize this really quickly and see how it all fits together in our Atari game. We have a state, which comes in as pixels, and you can see on the left-hand side that it gets fed into a neural network. In this case of Atari, our neural network is going to output three numbers, the Q value for each of the possible actions: it can go left, it can go right, or it can stay and not do anything, and each of those Q values is a numerical value that the neural network will predict. Now again, how do we pick what action to take given this Q function? We can just take the argmax of those Q values and see, okay, if I go left I'm going to have an expected return of 20, which means I'm probably going to break off 20 colored blocks in the future; if I stay in the center maybe I can only break off a total of three blocks in the future; if I go right I'm going to miss that ball and the game is going to be over, so I'm going to have a return of zero. So I'm going to take the action that maximizes my total return, which in this case is left. That action is then fed back into the Atari game, the next frame comes, and this whole process loops again. Now, DeepMind actually showed how these networks, which are called deep Q networks, could be applied to solve a whole variety of Atari games, providing the state as input through pixels, so just raw pixels as the input state, and showing how they could learn the Q function for all of the possible actions just by interacting with the environment. In fact, they showed that on many different Atari games they were able to achieve superhuman performance on over 50% of them using just this very simple technique that I presented to you today. It's actually amazing that this technique worked so well, because to be honest it is so simple, it's very elegant in some sense, and still it's able to achieve superhuman performance, which means that it beat the human benchmark on over 50% of these Atari games.
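Going back to the loss just described, here is a minimal sketch of the Q-loss for a single transition, written in the usual bootstrapped form where the best future return is estimated by the maximum predicted Q value at the next state; q_network, gamma, and the transition variables are illustrative assumptions, not the lecture's exact implementation.

```python
import numpy as np

def q_loss(q_network, state, action, reward, next_state, done, gamma=0.99):
    """Squared error between the target Q value and the network's prediction."""
    q_pred = q_network(state)[action]                       # predicted Q(s, a)
    q_next = 0.0 if done else np.max(q_network(next_state)) # best future Q estimate
    q_target = reward + gamma * q_next                      # target from the rollout
    return (q_target - q_pred) ** 2

# The policy then just takes the argmax over the predicted Q values:
# best_action = np.argmax(q_network(state))
```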
So now that we've seen the magic of Q-learning, I'd like to touch on some of its downsides that we haven't seen so far. The main downside of Q-learning is that it doesn't do too well in complex action scenarios, where you have a lot of actions, a large action space, or a continuous action space, which would correspond to an infinite number of actions; you can't effectively model or parameterize the problem to deal with continuous action spaces. There are ways you can tweak it, but at its core what I've presented today is not amenable to continuous action spaces; it's really well suited for small action spaces where you have a small number of possible, discrete actions, so a finite number of possible actions at every given time. Its policy is also deterministic, because you're always picking the action that maximizes your Q function, and this can be challenging specifically when you're dealing with stochastic environments like we talked about before. So Q-value learning is really well suited for deterministic environments and discrete action spaces, and we'll see how we can move beyond it to something like a policy gradient method, which allows us to deal with continuous action spaces and potentially stochastic environments. Next up we'll learn about policy learning to get around some of these problems, so that we can also deal with continuous action spaces and stochastic, or probabilistic, environments. Again, just to reiterate, because we've gone through this many times and I want to keep drilling it in: in Q networks you take as input the state, you predict Q values for each of your possible actions, and then your final answer, your policy, is determined by just taking the argmax of that Q function and taking the action that maximizes it. The differentiation with policy gradient methods is that we're still going to take as input the state at that time, but we're not going to output the Q function, we're directly going to output the policy of the network, or rather, let me say that differently, we're going to output a probability distribution over the space of all actions given that state. This is the probability that taking that action is going to result in the highest Q value; it's not saying what Q value am I going to get, it's just saying this is the probability that this action will give me the highest Q value. So it's a much more direct formulation: we're not going through this intermediate Q function, we're just directly saying let's optimize that policy. Once we have that probability distribution, we can see how our policy executes very naturally. The probability distribution may say that taking a left will result in the maximum Q value with probability 0.9, staying in the center with probability 0.1, and going to the right is a bad action, you should not do that because you're definitely not going to get any return. With that probability distribution, that is your policy, and you can then take an action simply by sampling from the distribution; if you draw a sample from that probability distribution, it tells you exactly the action you should take. So if I sample from this probability distribution here, I might see that the action I select is a1, going left.
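A minimal sketch of sampling an action from such a discrete policy output, using the example probabilities above; the action encoding is an illustrative assumption.

```python
import numpy as np

def sample_action(action_probs, rng=np.random.default_rng()):
    actions = np.arange(len(action_probs))      # e.g. 0 = left, 1 = stay, 2 = right
    return rng.choice(actions, p=action_probs)  # the probabilities must sum to 1

probs = np.array([0.9, 0.1, 0.0])               # the example distribution above
action = sample_action(probs)                   # roughly 90% of samples pick action 0
```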
But since the distribution is probabilistic, if I sample again it could tell me a2, because a2 also has a probability of 0.1. On average, though, I might see that 90 percent of my samples are a1 and 10 percent are a2, but at any point in time, if I want to take an action, all I do is sample from that probability distribution and act accordingly. Note again that since this is a probability distribution, it follows all of the typical probability distribution properties, so its total mass must add up to 1. Now, right off the bat, does anyone see any advantages of this formulation, of why we might care about directly modeling the policy instead of modeling the Q function and then using that to deduce a policy? If you formulate the problem like this, your output is a probability distribution, like you said, and what that means is that we're not really constrained to dealing only with categorical action spaces; we can parameterize this probability distribution however we'd like, and in fact we could make it continuous pretty easily. Let's take an example of what that might look like. This is the discrete action space: we have three possible actions, left, right, or stay in the center, and a discrete action space is going to have all of its mass on these three points; the masses sum to one, but they're concentrated on three points. A continuous action space in this setting, instead of asking what direction should I move, might ask how fast should I move in whatever direction, so to the right of the axis is faster and faster to the right, it's a speed now, and to the left of the axis is faster and faster to the left. You could say I want to move to the left with speed 0.5 meters per second, or 1.5 meters per second, or whatever real number you want; it's a continuous action space here. Now, when we plot the probability density function of this policy, we might see that the probability of taking an action given a state has mass over the entire number line, not just on those three points, because now we can take any of the possible actions along this number line, not just a few specific categories. So how might we do that with policy gradient networks? That's really the interesting question here, and what we can do is assume that our output follows a Gaussian distribution, which we can parameterize with a mean and a variance. At every point in time, our network is going to predict the mean and the variance of that distribution, so it's actually outputting a mean number and a variance number. Now let's suppose that the mean and variance are minus 1 and 0.5, so it's saying that the center of that distribution is minus 1 meters per second, or moving one meter per second to the left; all of the mass is centered at minus 1 with a variance of 0.5. Again, if we want to take an action with this probability distribution, or this policy, we can simply sample from the distribution, and in this case we might see that we sample a velocity of minus 0.8, which corresponds to a speed of 0.8 to the left. And again, same idea as before: now that it's continuous, if we take an integral over this probability distribution, it has to add up to 1.
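A minimal sketch of sampling from that continuous Gaussian policy, where the mean and variance would come from the network's output; the numbers here are just the example values above.

```python
import numpy as np

def sample_continuous_action(mean, variance, rng=np.random.default_rng()):
    std = np.sqrt(variance)
    return rng.normal(loc=mean, scale=std)   # draw a velocity from the Gaussian policy

velocity = sample_continuous_action(mean=-1.0, variance=0.5)   # e.g. roughly -0.8
```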
Okay, so that's a lot of material; let's cover how policy gradients work in a concrete example now and walk through it. Let's first go back to the original reinforcement learning loop: we have the agent and the environment, the agent sends actions to the environment, and the environment sends observations back to the agent. Let's think about how we could use this paradigm, combined with policy gradients, to train a very intuitive example: let's train a self-driving car. The agent in this case is the vehicle; its state is whatever sensory information it receives, maybe from a camera attached to the vehicle; the action it can take, to stay simple, is just the steering wheel angle it should execute at that time, which is a continuous variable that can take any angle within some bounded set; and finally the reward, let's say, is the distance traveled before we crash. The training algorithm for policy gradients is a little bit different from the training algorithm for deep Q networks, so let's go through it step by step in this example. To train our self-driving car, we first initialize the agent, which is the self-driving car: we start the agent in the center of the road and we run a policy until termination. That's the policy we ran in the beginning; it didn't do too well, it crashed pretty early on, but we can train it. What we're going to do is record all of the states, all of the actions, and all of the rewards at every single point in time during that entire trajectory. Given all of these state-action-reward pairs, we first look right before the crash and say that all of these actions, because they happened right before this undesirable event, should be penalized, so we decrease the probability of selecting those actions again in the future. And we look at actions that were taken farther away from that undesirable event, with higher rewards, and increase the probability of those actions, because they resulted in more desirable events: the car stayed alive longer when it took those actions, and when it crashed it didn't stay alive, so we decrease the probability of selecting the actions near the crash again in the future. Now that we've tried this once, through one training iteration, we can try it again: we reinitialize the agent, we run a policy until termination, and we do the same thing again, decrease the probability of actions closer to the crash, increase the probability of actions farther from the crash, and just keep repeating this over and over until the agent starts to perform better and better, drive farther and farther, and accumulate more and more reward, until eventually it starts to follow the lanes without crashing. This is really awesome because we never taught it anything about what lane markers are, it's just seeing images of the road, and we never taught it anything about how to avoid crashes; it learned this from sparse rewards. The remaining question here is how we can actually do these two steps: how can we decrease the probability of actions that were undesirable and increase the probability of actions that were desirable? Everything else, conceptually at least, is pretty clear, I hope; the question is how we improve our policy over time. To do that, let's first look at the loss function for training policy gradients and then dissect it to understand a little bit about why it works. The loss consists of two terms. The first term is the log likelihood of selecting the action given the state that you were in, so it tells us how likely the action that you selected was. The second term is the total discounted return that you received by taking that action, which is really what you want to maximize. So let's say the agent, or the car, got a lot of return for an action that had a very high log likelihood, so it was very likely to be selected and it got a lot of reward from that action: that's a large number multiplied by a large number, which gives another large number; then you add the negative in front of this loss function, so it becomes an extremely negative number. Remember that neural networks try to minimize their loss, so that's great, we're in a very desirable place, a pretty good minimum, and we're not going to touch that probability at all. Let's take another example, where the reward is very low for an action, so R of t is very small, and let's assume that the probability of selecting the action we took was very high, so we took an action that we were very confident in taking but got a very low return for it. What happens? That's a small number multiplied by the log likelihood, so the product is small, and after multiplying by the negative in front, the total loss is comparatively large. So on the next training iteration we're going to try to minimize that loss, either by trying out different actions that may result in higher return, or by moving some of the probability away from taking that same action again in the future, because we just saw that it didn't have a good return for us. When we plug this loss into the gradient descent algorithm that we saw in lecture 1 to train our neural network, we can actually see the policy gradient itself highlighted here in blue, and that's why this is called policy gradients: you're taking the gradient of this policy function scaled by the return, and that's where the method really gets its name.
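A minimal sketch of what that loss might look like for a single recorded rollout, assuming the per-step log probabilities of the chosen actions and the per-step rewards have been stored; the discount factor value is an illustrative choice.

```python
import numpy as np

def discounted_returns(rewards, gamma=0.99):
    """Total discounted return R_t computed from every step t to the end of the rollout."""
    returns, running = np.zeros(len(rewards)), 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        returns[t] = running
    return returns

def policy_gradient_loss(log_probs, rewards, gamma=0.99):
    returns = discounted_returns(rewards, gamma)
    # Minimizing -log pi(a_t | s_t) * R_t increases the probability of actions that
    # led to high return and decreases it for actions that led to low return.
    return -np.sum(np.asarray(log_probs) * returns)
```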
So how do we actually improve the policy over time — what is that update step? To answer that, let's look at the loss function for training policy gradients and then dissect why it works. The loss consists of two terms: the first is the log likelihood of selecting the action given the state you were in, which tells us how likely the selected action was; the second is the total discounted return you received by taking that action — that's really what you want to maximize. Say the car got a lot of return for an action that also had a very high log likelihood, so it was very likely to be selected and it received a lot of reward for it. Loosely speaking, that's a large number multiplied by a large number, which gives another large number; add the negative sign in front of the loss and it becomes an extremely negative number. Remember that neural networks try to minimize their loss, so this is great — we're in a very desirable place, a pretty good minimum, and we're not going to touch that probability at all. Now take another example where the return is very low — R of t is very small — but the probability of selecting the action we took was very high, so we were very confident in an action that gave us very little return. What happens? The product of those two terms is now a small number; with the negative in front, the total loss ends up comparatively large. On the next training iteration we'll try to minimize that loss, either by trying out different actions that may result in higher return, or by removing some of the probability of taking that same action again in the future, since we just saw it didn't give us a good return. And when we plug this loss into the gradient descent algorithm we saw in lecture 1 to train the network, the policy gradient itself is the term highlighted here in blue: we're taking the gradient of the log of the policy, scaled by the return, and that's where this method gets its name. A minimal version of the update step — the placeholder from the earlier sketch — is shown below.
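Here is one way that update could look, written for a discrete-action policy for simplicity (for the continuous Gaussian policy, the cross-entropy term would be replaced by the Gaussian log-density of the sampled action); again, this is an illustrative sketch and the network shape is an assumption, not the lab code:

import tensorflow as tf

num_actions = 3                                   # e.g., left / stay / right
policy = tf.keras.Sequential([                    # a stand-in policy network returning action logits
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(num_actions)
])
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)

def update_policy(states, actions, discounted_returns):
    """One gradient step on loss = -mean_t [ log pi(a_t | s_t) * R_t ]."""
    with tf.GradientTape() as tape:
        logits = policy(tf.convert_to_tensor(states, dtype=tf.float32))
        # negative log-likelihood of the actions that were actually taken
        neg_logprob = tf.nn.sparse_softmax_cross_entropy_with_logits(
            labels=tf.convert_to_tensor(actions, dtype=tf.int32), logits=logits)
        loss = tf.reduce_mean(
            neg_logprob * tf.convert_to_tensor(discounted_returns, dtype=tf.float32))
    grads = tape.gradient(loss, policy.trainable_variables)
    optimizer.apply_gradients(zip(grads, policy.trainable_variables))
    return loss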
Now I want to talk a little about how we can extend this to perform reinforcement learning in real life. So far I've really only shown you examples of reinforcement learning in games or in this simple toy example with the car. What do you think is the shortcoming of this training algorithm? There's a real reason we haven't seen as much success with reinforcement learning in the real world as we have with the other fields covered in this class, and it's because one of these steps has a severe limitation when you try to run it in the real world. The big limitation is that you can't actually run many of these policies in real life, especially in safety-critical domains. Think about self-driving cars: I said "run until termination," and I don't think you want to do that on every single training iteration — and this isn't just at the end; it's every single step of your gradient descent algorithm, millions of steps. Not a desirable outcome. We can get around this, though, by training in simulation before deploying in the real world. The problem is that most modern simulators are not well suited to the kind of photorealistic simulation that would support transferring a policy from the simulator to the real world at deployment time. One really cool result that we created in my lab — some of the TAs worked on it, so you can ask them if you have questions — is a brand new type of photorealistic simulation engine for self-driving cars that is entirely data-driven. The simulator is called VISTA, and it allows us to use real data of the world to simulate virtual agents: virtual reinforcement learning agents that can travel within these synthesized environments. The results are incredibly photorealistic, and they let us train agents in reinforcement learning environments entirely in simulation so that they can be deployed in the real world without any transfer step. In fact, that's exactly what we did: we placed agents inside our simulator, trained them using policy gradient algorithms — exactly what we talked about today — and then took those trained policies and put them on board our full-scale autonomous vehicle without changing anything. They learned to drive in the real world just as they had learned to drive in the simulator. On the left-hand side you can actually see us sitting in the vehicle, but it's completely autonomous — it's executing a policy that was trained with reinforcement learning entirely within the simulation engine. This actually represented the first time a full-scale autonomous vehicle was trained using only reinforcement learning and successfully deployed in the real world, so it was a really exciting result for us. Now that we've covered some fundamentals of policy learning, and also value learning with Q functions, what are some of the exciting applications that have sparked this field? For that, we turn to the game of Go. In Go, an autonomous agent was trained to compete against humans — specifically against human champions — and achieved what at the time was a very exciting result. First, some very quick background on the game, since many of you may not be familiar with it: Go is played on a 19-by-19 grid between two players who play white and black pieces, and the objective is to occupy as much territory on the board as possible. The strategy behind Go is incredibly complex, because there are more possible board positions in Go than there are atoms in the universe. The objective for our AI is to learn this enormous state space and learn not only how to beat other autonomous agents but how to beat the existing gold standard: professional human Go players. Google DeepMind rose to this challenge a few years ago and developed a reinforcement learning pipeline that defeated champion players, and the idea at its core was actually quite simple — I'd like to go through it in a couple of steps in the last few minutes here. First, they trained a neural network to watch humans play Go: they collected a large dataset of games from expert human players and trained their network to imitate those experts' moves and behaviors.
They then used these pretrained networks, built from the expert Go players, to play against their own reinforcement learning agents, and the reinforcement learning policy network allowed the policy to go beyond imitating humans and actually reach superhuman capabilities. The other trick that really made all of this possible was the use of an auxiliary neural network: in addition to the policy network, which takes the state as input and predicts an action, there was an auxiliary network that takes the state as input and predicts how good that state is. You can think of this as similar in spirit to the Q network — for any particular state of the board, it tells you how valuable that state is. Given this network, the AI could essentially hallucinate different possible trajectories of actions it might take in the future, see where each one ends up, use the value network to score how good each of those resulting board positions is, and use that to decide how the agent should act next. Finally, a recently published extension of these approaches, called AlphaZero, used self-play all the way through. The previous example started from a buildup of expert data to imitate, and that imitation formed the foundation of the algorithm; AlphaZero starts from zero — from scratch — and uses entirely self-play from the beginning. They showed results on chess, Go, and other games as well, demonstrating that you can use self-play all the way through, with no need to pretrain on human experts, and instead optimize these networks entirely from scratch. Okay, so finally, to summarize today's lecture: we saw how deep reinforcement learning can be used to train agents in environments through this learning loop in which the agent interacts with the environment; we covered a lot of foundations of reinforcement learning problems; we learned about Q-learning, where agents try to optimize the total expected return of their actions into the future; and finally we saw how policy gradient algorithms are trained, and how we can directly optimize the policy without going through the Q function at all. In today's lab you'll get experience training some of these reinforcement learning agents in the game of Pong, and also on some simpler examples that you can debug and play around with. Next we'll hear from Ava, who is going to give a very wide-ranging and exciting lecture — actually one of my favorites — on the limitations of deep learning, covering the approaches we've been talking about as well as some really exciting new advances in this field and where it's moving in the future. I hope you enjoy that as well. Thank you. [Applause]
MIT_6S191_Introduction_to_Deep_Learning
MIT_6S191_2018_Beyond_Deep_Learning_LearningReasoning.txt
good morning everyone as Elvis said I'm the director of IBM Research Cambridge it's literally just a few blocks down the road I've worked probably 20 years and IBM research primarily out of our New York lab but I moved here just three months ago to really start up this new AI lab refocus and significantly grow the Cambridge lab that we've we've already started I intentionally chose a somewhat provocative title to my talk today the reason I wanted the the beyond deep learning it's not necessarily to say that you know all of these deep learning techniques are going to be you know obsolete that's definitely not what I'm trying to say but I am trying to say that you know although there's a lot of exciting things that we can do with deep learning today there's also a frontier you know a space that we can't do very well and so I hope to today talk to you about you know kind of what is an area of a boundary that we we're not able to break through at this time that I think is is critical for machine intelligence for artificial general intelligence so I'm hoping that today I can set that up for you hopefully you know motivate additional people to come in and study this because I really believe that it's a it's a critical area where where additional breakthroughs are needed I'd like to first introduce IBM Research I don't know how many of you actually know about IBM research some of you may have heard of us because of the Jeopardy challenge so in 2011 we created a computer that was able to beat the chest I mean the jeopardy champions at that game very handily as as a matter of fact some people don't realize that our research division is it's quite significant so we have 3,000 people worldwide 12 labs that are doing this research researchers pursue the same types of accolades that you the top university professors go after so nobody laureates touring awards National Academy of Sciences National Academy of technologies so we pursue a very you know rigorous you know set of research in the areas that we focus on oh and although I put the the Jeopardy challenge up there you're only as good as your your most recent results right so I also wanted to make sure that I talked a little bit about things that we've done more recently so in 2017 as an example we created the first 50 cubit quantum computer most people are kind of you know I expect to actually see some people announcing some other companies announcing simulators for 50 on the order of 50 cubits and the difference here is that we're talking about an actual 50 cubit quantum computer so we we also have the simulators but we have the real quantum computers I think what's also unique about what we're doing is that we are also making our quantum computing capabilities available through the cloud so that people can kind of login and they can experiment and learn about what quantum computing is it's a very exciting program for us we also in 2017 we were able to show near linear scale out in terms of you know cafe deep learning models on on our servers we were able to show algorithms that were able to exploit the quantum advantage so the idea is that if you in order to actually get the speed ups on quantum computers you need to be able to map problems into a form where you can get that acceleration and when you get that acceleration then you're talking about exponential accelerations over traditional computers that people are using today so this particular result was an algorithm that's able to basically map small molecules models of small molecules on to 
the actual quantum computing system so that we could demonstrate the ability to find the lowest energy state so and and get that exponential speed-up from that you know the thing is that in 2017 we rename number one leading corporation for scientific in you know scientific research at AI that corporate Institute so that's that's pretty exciting for us I also wanted to just tell you a little bit about the MIT Watson MIT IBM Watson AI lab it's obviously a very exciting announcement for us so in September of 2017 we announced that we were building this 240 million dollar joint effort with MIT to pursue fundamental advances in AI the core areas are listed here so when I say fundamental advances in AI that it's really again recognizing what we can and can't do with AI today and then trying to create new algorithms to go beyond that so examples of problems just very quickly that we're interested in in terms of the the new AI lab one area is that you know learning causal structure from data is a very challenging problem we're looking at how we can use data that was captured due to CRISPR mediations our CRISPR gene in activations which basically has very large set of interventional data where they're observing the whole genome and we're going to try to learn a causal structure you know from those those interventions so that's that's one example another example in physics of AI basically what we're talking about there is the ability to have AI help quantum computing and quantum computers and quantum computers accelerate AI algorithms so we're looking at problems for example machine learning algorithms that would help us to manage the state of the quantum computer also looking at for example the ability to knowing what we know about quantum computers knowing that you know we're gonna be in the small numbers of hundreds of qubits for some time that the memory bandwidth between traditional computers and the quantum kut'rs can be relative small which of the machine learning algorithms will we be able to map on to those systems in order to get that exponential speed-up so those are just some of the examples of things that we're studying also we we feel that right now there are two industries that are really ripe for AI disruption one is healthcare life sciences the reason they're ripe for disruption is because that community has invested a lot to create what we refer to as structured knowledge so you know gene ontology you know sno-med clinical terms all of the structured knowledge that we can combine with observational data and create new algorithms and insecurity the reason why cybersecurity the reason why that one is ripe for disruption is because if everybody is advancing these AI algorithms and people start to try to use those AI algorithms to attack our systems it's very important that we also use AI algorithms to try to figure out how they're going to do that and how to defend against that so through some of the examples the last one shared prosperity is about how do we get non-discrimination on bias morals into the algorithms not just training on for you know scale-out and accuracy and these sorts of things all right we we already had our first announcement in terms of the new MIT IBM Watson AI lab so at nips we announced that we are releasing a 1 million video data set the idea for those of you are familiar and you've probably learned a lot in this class about how people used image net to make new breakthroughs in terms of deep learning just that volume of labeled data meant that people could 
go in and run experiments and train networks that they could never do before so we've created this this million video dataset 3-second videos were chosen for a specific reason the lead for this project o deliver is has great expertise not only in computer science but also in cognitive science and so it's expected that you know three seconds is roughly the order of times that it takes humans to recognize certain actions as well so we're we're kind of sticking with that time Computers aren't the machine learning algorithms that you were learning about today aren't able to do this well you know what you primarily learned about so far was how to segment images how to find objects within images how to classify those those objects but not actions and the reason why we also think that actions are important is because they are composable right so we want to be able to learn not just elemental actions from these videos but then to start to think about how you recognize and attack compositions of actions and procedures that people are performing because then we can use that to start to teach the computers to also perform those procedures or to help humans to be able to perform those procedures better okay now what I want to do is maybe take a break from kind of the set up and get into the more technical part of the of the talk today so as I was saying earlier you know what we've seen recently in deep learning is is truly all inspiring I mean in terms of the number of breakthroughs over over the last ten years and especially over the last five years it is very exciting you know breakthroughs in terms of you know being able to for certain tasks you know beat human error rates in terms of you know visual recognition and speech speech recognition and so on but my position is that there are still huge breakthroughs that are required to try to get to machine intelligence so some examples of challenges that the systems of today aren't able to do so one is that many many of the scenarios in order to get the performance that actually is is usable requires labeled data it requires training data where you've actually labeled the the objects and the images and so on while the systems are getting better in terms of doing unsupervised learning because of the vast amount of data that's available on the web the problem is that we at IBM care about AI for businesses right and if you think of AI for businesses there's not that much deep domain data that we're able to find try it so if you think about the medical field very deep expressive relations that are required to be understood acquires a lot of labeled data you know if you think of you know an airline manufacturer and all of the the manuals they may have and they'd like to try to be able to answer questions from those and be able to reason and help humans understand how to conduct procedures within those there's not enough data out there on those fields in terms of relationships and entities and so on for us to train so it requires a lot of labeling humans actually going through and finding the important entities and relationships and so on so an important area that we need to be able to break through is first of all why why why do these machines why do these networks require so much data label data and can we can we address that can we make the algorithms better the second thing is that you've probably realized that you you train up the algorithms and then they're able to perform some tasks the tasks are getting more and more sophisticated self-driving cars 
and so on — but you're still not training these networks so that they can perform many different tasks. Even more importantly, you learn a model, and even though reinforcement learning may be part of the training, that's not lifelong learning: it's not just turning the cars out onto the road and enabling them to continue to learn, aggregate information, and fold it into a representation so that they can adapt to non-stationary environments — environments that change over time. Another issue is that when we train these networks you can get them down to a certain error rate, but how do you keep improving the accuracy from there? Even when the error rate isn't that bad, today's algorithms don't do well at this. And the last area is that it's really important that we create algorithms that interact with humans — that can explain how they came to a particular decision or classification, whatever the case may be. So we need to think about how we can build machines that can learn, that can listen and interact with humans, and that can explain their decisions. A big part of this is what I — and many in the community — refer to as learning plus reasoning, meaning that we want to be able to reason over representations that are learned, preferably representations that are learned in an unsupervised manner. The first step in doing that is making language itself computational. We're able to think about words as symbols, think about what those words mean and what their properties are, and then reason about them; but all the algorithms you've been learning about expect their input to arrive as numerical information, preferably real-valued information. How do you go from text to real-valued information that can be fed into these algorithms and computed on? One of the first areas here is word embeddings — and I italicize "word" because, as you'll see, we started out with word embeddings, but there are now phrase embeddings, document embeddings, and much more; let's start with what we mean by word embeddings. The point is to represent a word as a real-valued vector that captures what that word means in terms of other words: the dimensions of the vector — the features — are essentially other words, and you can assign different weights reflecting how strongly the word relates to each of those other words. The difficult part is how you learn a representation that achieves this objective — that makes the vector really represent the word and makes it comparable to other words in a way the machine can compute on. The idea in the earliest work was to learn embeddings such that the vectors give you an understanding of the similarity between a given word and the other words in your dictionary. The first model here was the skip-gram model: given a particular word, how well can I predict the words around it — the words one hop away, two hops away, three hops away? What they do is train a set of vectors that minimize the prediction loss of predicting the surrounding words
from that center word. This was a very big change for the community: previously, approaches had mostly been count-based — people would count the number of co-occurrences and use those counts to form the weights. Here, instead, a neural network — really a fairly shallow one — is used to maximize the log probability of predicting the words around a given word. And what you get from that is what this figure shows: the ability to place word symbols into a vector space such that similar words end up close together, and relationships between words move in similar directions. In the figure, the countries on the left side are proximal to one another, the cities are proximal to one another, and the vectors going from each country to its city all point in essentially the same direction. This kind of vector space gives us the ability to compute on real-valued vectors and learn more from them. A very first, simple thing you can do is find related items: given a symbol — Italy, for example — you can find other things that are similar or related to it, such as other countries or cities within Italy. The first work listed here introduced this kind of distributed representation; the second paper showed the significant jump in accuracy you get from prediction-based representations compared to count-based ones — I've marked authors who are or were at IBM in orange; and the last one is fastText, which Facebook recently released and which lets you very easily create your own embeddings. So the first question was how to create these embeddings, and that early work went after one specific objective — optimizing for similarity — but as you'll see in the rest of the talk, there are many other constraints you might want to place on the vector space and on how things are represented in it, so that you can accomplish particular tasks with it. Below is a small sketch of the kind of computation this vector space enables — nearest neighbors and analogies — assuming the vectors have already been trained.
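In this sketch the embeddings dictionary is assumed to already hold pretrained word vectors (for example from word2vec or fastText — the training itself is omitted), and the particular words are just illustrations:

import numpy as np

# Computing on word embeddings: nearest neighbors and analogies.
# `embeddings` is assumed to be a dict mapping lower-cased words to pretrained vectors.

def cosine(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-8)

def most_similar(query_vec, embeddings, exclude=(), k=5):
    scores = [(w, cosine(query_vec, vec)) for w, vec in embeddings.items() if w not in exclude]
    return sorted(scores, key=lambda x: -x[1])[:k]

def neighbors(word, embeddings, k=5):
    # words close to "italy" should be other countries or Italian cities
    return most_similar(embeddings[word], embeddings, exclude={word}, k=k)

def analogy(a, b, c, embeddings, k=1):
    # relationships move in similar directions: vec(b) - vec(a) + vec(c)
    # e.g., analogy("italy", "rome", "france") should rank "paris" highly
    target = embeddings[b] - embeddings[a] + embeddings[c]
    return most_similar(target, embeddings, exclude={a, b, c}, k=k)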
Prior to these kinds of representations, the ideal for representing knowledge and language was knowledge bases — structured knowledge. The original idea was: I have to have well-defined entities, well-defined relationships between those entities, and rules that give me information about categories of relationships or categories of entities. And that's great — humans are able to do that; they can lay out an entire domain, describing molecules and the relationships between molecules, or genes and the relationships between genes and their targets. Humans can do that well; machines can't, and that's the problem. The second figure makes the point: even though you can find a lot of structured information on the web — Wikipedia, Wikidata, and Freebase are some examples — it's far from complete. This is a 2013 statistic, but for the people in Wikipedia or Freebase we're missing about 80 percent of their education history and over 90 percent of their employment history. So even though it seems like a lot of information, it's really sparse, and with today's algorithms it's very difficult to automatically populate knowledge bases in the form that we understand and feel we can apply our logic to. One of the first results in going from symbolic knowledge to sub-symbolic knowledge — the vectors I was talking about earlier — asked: could we rebuild our knowledge bases on top of these sub-symbolic vectors? If we could, we would be able to learn from much more data; we could learn these representations and fill in some of the information that's missing from our knowledge bases. The first part of this work took information from Freebase and other sources and used text to build out embeddings: you can find relationships, or find entities that sit in similar regions of the space and realize there may be a relationship between them, and start to populate relationships that are missing from the knowledge base. We were able to use this principle — embeddings plus knowledge bases — to grow the knowledge bases we have. The first study here shows how we took information about genes, diseases, and drugs from available ontologies and learned vectors across that structured space so that we could predict relationships that weren't in the knowledge base. That's important because the way people get that information today is by running wet-lab experiments to find out whether there is a relationship — whether something up-regulates something else — and that's very expensive. If we can use this knowledge to make predictions, we can give other scientists places to look: it looks like there might be an interaction here; maybe you could try that. The second set of results is more recent: the Semantic Web community issued a challenge on improving automated knowledge-base construction, and the team used a combination of these word embeddings to search for and validate information drawn from structured and unstructured knowledge — this actually won first place in the 2017 Semantic Web challenge. Below is a small illustrative sketch of what link prediction over an embedded knowledge base can look like.
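The talk doesn't specify which embedding model was used for these knowledge-base results, so treat the following purely as an illustration of the general idea, using a TransE-style score in which a relation acts as a translation in vector space; the entity and relation names and the random vectors are stand-ins:

import numpy as np

rng = np.random.default_rng(0)
dim = 16
entities = ["gene:BRCA1", "disease:breast_cancer", "drug:olaparib"]   # illustrative placeholders
relations = ["associated_with", "treats"]
# In practice these vectors would be learned from the ontology and text; random here as stand-ins.
entity_vecs = {e: rng.normal(size=dim) for e in entities}
relation_vecs = {r: rng.normal(size=dim) for r in relations}

def triple_score(head, relation, tail):
    """Lower score => the triple (head, relation, tail) is predicted to be more plausible."""
    return np.linalg.norm(entity_vecs[head] + relation_vecs[relation] - entity_vecs[tail])

def predict_tails(head, relation, candidate_tails, k=5):
    """Rank candidate entities as possible missing links, e.g., new gene-disease associations."""
    ranked = sorted(candidate_tails, key=lambda t: triple_score(head, relation, t))
    return ranked[:k]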
So now we have an idea of how to take language, make it computational, and put it into a knowledge base so that we can aggregate it over time — but how do we get neural networks to use that? An example task that motivates this is question answering: you want to build up a knowledge base from everything you can possibly find, and then ask it questions and see if it can answer them. We say this requires memories because, if you think about the other tasks you've seen, once the network is trained you provide an input and get an output — you don't necessarily use long-term memories of relationships, entities, and so on. This is a challenge that was issued to the community: read some sentences and then, given a question, produce an answer. There are different stages of complexity in what's required: sometimes it's just finding the right sentence, sometimes it's putting multiple sentences together, sometimes it's chaining across time. What I want to focus on is some of the early work on creating a neural network that can access those knowledge bases and produce an answer from them. The expectation is that you build up the knowledge base from as much information as you can find, you train the system on the kinds of questions you'd like it to answer, and then when you hand it a question it produces an answer. The reason this is different from what people do today: when you program a computer, you tell it exactly which place in memory to access — you run a query on a database and say, give me all the rows where the first name is X and the last name is Y — and all of that is programmed. These networks instead learn how to access memory by looking at patterns of access to memory: not programmed, but trained. The neural net is the controller of how that memory is accessed in order to produce an answer. It is a supervised result: they train jointly on the inputs, the question that would be asked of them, and the desired output, and by providing many, many examples, the system learns so that when you provide a new set of information it can answer a question about it. It does this by taking a vector representation of the question q, mapping it onto the memory — the embeddings produced from all the sentences that were entered — and moving back and forth across that memory until it reaches enough confidence in an answer, which it then transfers to an output. The first version of this, on the left, couldn't handle many things — understanding temporal sequences, for example, or multi-hop reasoning — while the second, more recent version is a fully end-to-end trained controller of the network. While this is incredibly exciting, there are still many of the questions I showed earlier that it really can't answer, so this is definitely not a solved problem. But hopefully you can see how we're starting to go up against problems that we don't completely know how to solve, and starting to solve them — and instead of just creating a neural network, just an algorithm, we are creating machines: machines that have controllers and that have memory. A minimal sketch of this kind of learned memory lookup is shown below.
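This sketch is in the spirit of a single-hop end-to-end memory network; the tiny vocabulary, the toy story, and the random embedding matrices are all stand-ins — in the real system the matrices A, B, C, and W are trained jointly on story/question/answer examples, and multiple hops over memory are used:

import numpy as np

rng = np.random.default_rng(1)
vocab = ["mary", "went", "to", "the", "kitchen", "picked", "up", "apple", "where", "is"]
word_id = {w: i for i, w in enumerate(vocab)}
V, d = len(vocab), 16
A = rng.normal(scale=0.1, size=(V, d))   # memory (input) embedding
B = rng.normal(scale=0.1, size=(V, d))   # question embedding
C = rng.normal(scale=0.1, size=(V, d))   # memory (output) embedding
W = rng.normal(scale=0.1, size=(d, V))   # final answer projection

def embed(sentence, E):
    # bag-of-words embedding of a tokenized sentence
    return sum(E[word_id[w]] for w in sentence)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def answer(story, question):
    m = np.stack([embed(s, A) for s in story])   # one memory slot per sentence
    c = np.stack([embed(s, C) for s in story])   # output representation of each sentence
    u = embed(question, B)                       # internal state for the question
    p = softmax(m @ u)                           # learned attention over the memories
    o = p @ c                                    # retrieved memory summary
    logits = (u + o) @ W                         # predict an answer word
    return vocab[int(np.argmax(logits))], p

story = [["mary", "went", "to", "the", "kitchen"], ["mary", "picked", "up", "the", "apple"]]
prediction, attention = answer(story, ["where", "is", "the", "apple"])
# with untrained (random) matrices the prediction is meaningless; training shapes the attention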
and they're able to perform tasks that go well beyond just what you could do with the pure neural network algorithm they they leverage those neural network out algorithms throughout these are recurrent neural Nets LST ions and so on but the point is we're starting to try to put together machines from this if you if you'd like to learn more about this topic there's there's quite a bit of work in this so one is in addition to being able to answer those questions if we could better isolate what's the question right this is something that that humans have a problem with Jim somebody comes and they ask you a question and you kind of say well wait what what's the question your you're asking here so we've done work in terms of being able to have better the computers better understand what's the question really being asks we need systems that will help us to train these models so part of this work is to create a simulator that can take texts that are ambiguous and generate questions of certain forms that we can then use that to try to both train as well as test some of these systems another really interesting thing is that common-sense knowledge is basically you know they refer to it as what's in the white space between what you read right you read a text and a lot of times there's a lot of common sense knowledge you know knowing that you know you know this desk is made of wood and wood is hard and all of these sorts of things help you to understand a question you can't find that it's it's not on Wikipedia muster that common sense information is not not on Wikipedia it's not easy for us to learn from text because people don't state it this this third work here is okay can we take some of that common sense knowledge can we learn in vector space ways to represent information that's common sense that that white space and attach it to other information that that we're able to read from from the web and so on now some of the recent work as well as can we can we use neural nets to basically learn what a program is doing represent that and then be able to execute that program programs you know this right now people want to try to program program that takes a very sophisticated human skill to be able to probe probe a program and understand it and in fact humans don't do that very well but if we could train machines to do that and obviously that's a very powerful thing to do we're also a paper that was just published a couple a couple of months ago in December at nib so I thought was extremely interesting basically they're learning how to constrain vector representations such that they can in new rules from those and that they can basically create proofs right so when you the reason a proof is important is basically that's the beginnings of being able to explain an answer so when if you ask a question of the question from some of the other ones was you know it's it's the apples in the kitchen why I think you have a proof if you have the steps that you went through in terms of the knowledge base able to explain that out then suddenly people can interact with the the system and use the system not only to answer questions but to improve and lift what what humans the human knowledge as well learn learn from the computers right now the computers learn from us and just finally if you'd like to do more you know the research division is working on next-generation algorithms those come out through our Watson products we have a Watson developer cloud makes it very easy you know you can do things like you handed an 
image and it'll hand back information about what's in that image you you hand it text it'll tell you information about the sentiment of that you know label information within it and so on so we have what we believe are very easy to use you know algorithms that take many of these things that were I was talking about earlier and make them very easy for anyone to use and incorporate in into their programs I think that's it for me [Applause]
MIT_6S191_Introduction_to_Deep_Learning
MIT_6S191_AI_Bias_and_Fairness.txt
hi everyone welcome to our second hot topic lecture in 6s191 where we're going to learn about algorithmic bias and fairness recently this topic is emerging as a truly pervasive issue in modern deep learning and ai more generally and it's something that can occur at all stages of the ai pipeline from data collection all the way to model interpretation in this lecture we'll not only learn about what algorithmic bias is and how it may arise but we will also explore some new and exciting methodological advances where we can start to think about how we can build machines capable of identifying and to some degree actually mitigating these biases the concept of algorithmic bias points to this observation that neural networks and ai systems more broadly are susceptible to significant biases such that these biases can lead to very real and detrimental societal consequences indeed today more than ever we are already seeing this manifesting in society from everything from facial recognition to medical decision making to voice recognition and on top of this algorithmic bias can actually perpetuate existing social and cultural biases such as racial and gender biases now we're coming to appreciate and recognize that algorithmic bias in deep learning is a truly pervasive and severe issue and from this we really need strategies on all levels to actually combat this problem to start first we have to understand what exactly does algorithmic bias actually mean so let's consider this image what do you see in this image how would you describe it well the first thing you may say to describe this image is watermelon what if i tell you to look closer and describe it in more detail okay maybe you'll say watermelon slices or watermelon with seeds or other descriptors like juicy watermelon layers of watermelon watermelon slices next to each other but as you were thinking about this to yourself i wonder how many of you thought to describe this image as red watermelon if you're anything like me most likely you did not now let's consider this new image what is in this image here now you're probably much more likely to place a yellow descriptor when describing this watermelon your top answer is probably going to be yellow watermelon and then with slices with seeds juicy etc etc but why is this the case and why did we not say red watermelon when we saw this original image well when we see an image like this our tendency is to just think of it as watermelon rather than red watermelon and that's because of our own biases for example due to geography that make us used to seeing watermelon that look like this and have this red color as this represents the prototypical watermelon flesh that i expect to see but perhaps if you're from another part of the world where the yellow watermelon originated from you could have a different prototypical sense of what color watermelons should be this points to this broader fact about how we as humans go about perceiving and making sense of the world in all aspects of life we tend to label and categorize things as a way of imposing order to simplify and make sense of the world and as a result what this means is that for everything there's generally going to be some typical representation what we can think of as a prototype and based on the frequencies of what each of us observe our tendency is going to be to point out things that don't fit what we as individuals consider to be the norm those things that are atypical to us for example the yellow watermelon for me and critically biases and 
stereotypes can arise when particular labels which may not necessarily be the minority label can confound our decision making whether that's human-driven or suggested by an algorithm and in this lecture we're going to focus on sources of algorithmic bias and discuss some emerging approaches to try to combat it to do that let's first consider how exactly bias can and does manifest in deep learning and ai one of the most prevalent examples of bias in deep learning that we see today is in facial detection and recently there have been a couple of review analyses that have actually evaluated the performance of commercial facial detection and classification systems across different social demographics for example in an analysis of gender classifiers this review showed that commercial pipelines performed significantly worse on faces of darker females relative to other demographic groups and another analysis which considered facial recognition algorithms again found that error rates were highest on female faces of color this notion of algorithmic bias can manifest in a myriad of different ways so as another example let's consider the problem of image classification generally and let's say we have a trained cnn and this image on the left here which shows a prototypical example of a bride in some north american and european countries now in recent analysis when this particular image of a bride was passed into a cnn that was trained on a open source large scale image data set the predicted data class labels that were outputted by the cnn were perhaps unsurprisingly things like bride dress wedding ceremony women as expected however when this image which is a prototypical example of a bride in other parts of the world such as in south asia was passed into that very same cnn now the predicted classroom labels did not in fact reflect the ground truth label of bride clothing event costume art as you can see nothing here about a bride or a wedding or even a human being so clearly this is a very very significant problem and this is not at all the desired or expected behavior for something that deep learning we may think of has solved quote-unquote image classification and indeed the similar behavior as what i showed here was also observed in another setting for object recognition when again this image of spices which was taken from a home in north america was passed in to a cnn trained to do object detection object detection and recognition the labels for the detected objects in this image were as expected seasoning spice spice rack ingredient as we'd expect now again for this image of spices shown now on the left which was in fact taken from a home in the philippines when that image was fed into that very same cnn once again the predicted labels did not reflect the ground truth label that this image was an image of spices again um pointing to something really really concerning going on now what was really interesting about this analysis was they asked okay not only do we observe this bias but what could be the actual drivers and the and the reasons for this bias and it turned out from this analysis that the accuracy of the object recognition model actually correlated with the income of the homes where the test images were taken and generated this points to a clear bias in these algorithms favoring data from from homes of higher incomes versus those from lower incomes why could this be what could be the source for this bias well it turns out that the data that was used to train such a model the vast 
majority of it was taken from the united states canada and western europe but in reality this distribution does not at all match the distribution of the world's population given that the bulk of the world's population is in east and south asia so here i think this is a really telling and powerful example because it can show it shows how bias can be perpetuated and exist on multiple levels in a deep learning or ai pipeline and this particular analyses sort of started to uncover and unearth some of those biases and indeed as i mentioned bias can truly poison all stages of the ai development and life cycle beginning with the data where imbalances with respect to class labels or even features can result in unwanted biases to the model itself to the actual training and deployment pipeline which can reinforce and perpetuate biases to evaluation and the types of analyses that are and should be done to evaluate fairness and performance across various demographics and subgroups and finally in our human interpretation of the results and the outcomes and the decisions from these ai systems where we ourselves can inject human error and impose our own biases that distort the meaning and interpretation of such results so in today's lecture we're going to explore this problem of algorithmic bias both in terms of first different manifestations and sources of this bias and we'll then move to discuss different strategies to mitigate each of these biases and to ultimately work towards improving uh fairness of ai algorithms and by no means is this is is this a solved problem in fact the setup and the motivation behind this lecture is to introduce these topics so we can begin to think about how we can continue to advance this field forward all right so let's start by thinking about some common types of biopsies that can manifest in deep learning systems i think we can broadly categorize these as being data driven or interpretation driven on the data-driven side we can often face problems where data are selected such that proper randomization is not achieved or particular types of data or features in the data are represented more or less frequently relative to others and also instances in which the data that's available to us as users does not reflect the real world likelihood of particular instances occurring all of these as you'll see and appreciate are very very intertwined and very related interpretation driven biases refer more to issues in how human interpretation of results can perpetuate some of these types of problems for example in with respect to falsely equating correlation and causation trying to draw more general conclusions about the performance or the generalization of a ai system even in the face of very limited test data and finally in actually favoring or trusting decisions from an algorithm over that of a human and we're going to um by by no means is this this survey of common biases that can exist an exhaustive list it's simply meant to get you thinking about different ways and different types of biases that can manifest so today we're going to touch on several of these types of biases and i'd first like to begin by considering interpretation driven issues of correlation fallacy and over generalization all right so let's suppose we have this plot that as you can see shows trends in two variables over time and as you as you notice these the data from these two variables are tracking very well together and let's say and it turns out that in fact these black points show the number of computer 
science phds awarded in the united states and we could easily imagine building a machine learning pipeline that can use these data to predict the number of computer science doctorates that would be awarded in a given year and specifically we could use the red variable because it's it seems to correlate very well with the number of cs doctorates to try to as our input to our machine learning model to try to predict the black variable and ultimately what we would want to do is you know train on a particular data set from a particular time frame and test in the current time frame 2021 or further beyond to try to predict the number of computer science phds that are going to be awarded well it turns out that this red variable is actually the total revenue generated by arcades in a given year and it was a variable that correlated with the number of computer science doctorates over this particular time frame but in truth it was obscuring some underlying causal factor that was what was ultimately driving the observed trend in the number of computer science doctorates and this is an instance of the correlation fallacy and the correlation fallacy can actually result in bias because a model trained on data like this generated revenue generated by arcades as input computer science doctorates as the output could very very easily break down because it doesn't actually capture the fundamental driving force that's leading to this observed trend in the variable that we're ultimately trying to predict so correlation fallacy is not just about correlation not equating to causation it can also generate and perpetuate bias when wrongfully or incorrectly used let's also consider the assumption of over generalization so let's suppose we want to train a cnn on some images of mugs from some curated internal data set and take our resulting model and deploy in the real world to try to predict and identify mugs well mug instances in the real world are likely to not be very similar to instances on which the model was trained and the over-generalization assumption and bias means that or reflects the fact that our model could perform very well on select manifestations of mugs those that are similar to the training examples it's seen but actually fail and show performance poor performance on mugs that were represented less significantly in data although we expect it to generalize and this phenomena is can be often thought of and described as distribution shift and it can truly bias networks to have worse performance on examples that it has not encountered before one recent strategy that was recently proposed to try to mitigate this source of bias is to start with the data set and try to construct an improved data set that already account for potential distribution shifts and this is done for example by specifying example sets of say images for training and then shifting with respect to a particular variable to construct the test data set so for example in this instance the distribution shift that occurs between the train and the test is with respect to the time and the geographic region of the images here or in the instance of medical images this could mean sourcing data from different hospitals for each of train validation and test and as greater awareness of this issue of distribution shift um is brought to light data sets like this could actually help try to tame and tune back the generalization bias that can occur because they inherently impose this necessity of already testing your model on a distribution shifted 
series of examples. All right, so that hopefully gives you a sense of interpretation-driven biases and why they can be problematic. Next we're going to turn most of our attention to what are, in my opinion, some of the most pervasive sources and forms of bias in deep learning: those driven by class or feature imbalances present in the data. First, let's consider how class imbalance can lead to bias. Say the plot on the left shows the real-world distribution of the data we're trying to model, with respect to some set of classes, and suppose that in the data actually available to us the frequency of these classes is completely different from what occurs in the real world. What will the resulting effect be on the model's accuracy across these classes? Will the model's accuracy reflect the real-world distribution of the data? No — instead, the model's accuracy ends up biased by the data it has seen, specifically toward greater accuracy on the more frequently occurring classes, and this is definitely not desired. What we ultimately want is a model whose performance is unbiased with respect to these classes: the accuracies across the classes should be about the same. If our goal is to train a model that exhibits fair performance across all these classes, we first need to understand why class imbalance is fundamentally problematic for training. To get at the root of the problem, consider a simple binary classification task: we have a data space, and our task is to build a classifier that sees points somewhere in this space and classifies them as orange or blue. We begin the learning pipeline by randomly initializing the classifier, which divides up the space. Now suppose data points start being fed into the model, but our dataset is class-imbalanced such that for every one orange point the model sees, it sees twenty blue points. The process of learning, as you know from gradient descent, makes incremental updates to the classifier on the basis of the data it has observed: after seeing these blue points, the decision boundary shifts to account for those particular observations. More data comes in — again all blue points, because of the one-to-twenty imbalance — and the decision boundary moves accordingly. So far the random samples we've seen reflect the underlying class imbalance. Now suppose we see an orange data point for the first time: the decision boundary shifts toward the orange point to account for this new observation — but remember, it's only one orange point, and for every orange point we'll see twenty blue ones. In the end, the classifier's decision boundary ends up occupying more of the blue space, since it has seen far more blue samples, and it will be biased toward the majority class. A small synthetic sketch of this effect is shown below.
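Here is that sketch: a plain logistic-regression classifier trained by gradient descent on synthetic 2-D data with a roughly one-to-twenty class ratio. The exact numbers are arbitrary, but the qualitative effect — much better accuracy on the majority class than on the rare class — is the point:

import numpy as np

rng = np.random.default_rng(0)
n_minority, n_majority = 50, 1000                      # ~1 : 20 class ratio
X_min = rng.normal(loc=[+1.0, +1.0], scale=1.0, size=(n_minority, 2))   # the rare "orange" class
X_maj = rng.normal(loc=[-1.0, -1.0], scale=1.0, size=(n_majority, 2))   # the common "blue" class
X = np.vstack([X_min, X_maj])
y = np.concatenate([np.ones(n_minority), np.zeros(n_majority)])

w, b = np.zeros(2), 0.0
for _ in range(2000):                                  # logistic regression via gradient descent
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.1 * (X.T @ (p - y) / len(y))
    b -= 0.1 * np.mean(p - y)

pred = (1.0 / (1.0 + np.exp(-(X @ w + b)))) > 0.5
print("accuracy on majority class:", np.mean(pred[y == 0] == 0))   # high
print("accuracy on minority class:", np.mean(pred[y == 1] == 1))   # noticeably lower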
This is a very, very simplified example of how learning can end up skewed due to stark class imbalance, and I assure you that class imbalance is an extremely common problem that you will almost certainly encounter when curating and processing real-world data. One setting in which class imbalance is particularly relevant is medicine and healthcare, because the incidence of many diseases, such as cancer, is actually relatively rare when you look at the general population. To understand why that's problematic for training, imagine we want to build a deep learning model to detect the presence of cancer from medical images such as MRI scans, and suppose we're working with glioblastoma, the most aggressive and deadliest brain tumor, which is also very rare, occurring at an incidence of approximately 3 out of every 100,000 individuals. Our task is to train a CNN to detect glioblastoma from MRI scans of the brain, and suppose the class incidence in our dataset reflects the real-world incidence of the disease: in a dataset of 100,000 brain scans, only three actually show tumors. What would be the consequences of training the model this way? Remember that a classification model is ultimately trained to optimize its classification accuracy, so this model could simply fall back to predicting "healthy" all the time: doing so would reach 99.997 percent accuracy, even when it predicted healthy on scans that showed a tumor, because that's the rate at which "healthy" occurs in its dataset. Obviously this is extremely problematic, because the whole point of building the pipeline was to detect tumors when they arise. So how can we mitigate this? Let's discuss two very common approaches for achieving class balance during learning, returning to our simple classification problem where we randomly initialize the classifier dividing the data space. The first approach is to select and feed in batches that are class-balanced — that is, data batches with a one-to-one class ratio. During learning the classifier now sees equal representation of the classes and moves the decision boundary accordingly; the next batch is again class-balanced, the boundary is updated again, and the end result is quite a reasonable decision boundary that divides the space roughly equally, because the data the model has seen is much more informative than what it would have seen with starkly imbalanced data. In practice, balanced batch selection is an extremely important technique for alleviating this issue. The second approach is to weight the likelihood of individual data points being selected for training according to the inverse of the frequency at which their class occurs in the dataset: classes that are more frequent get lower weights, classes that are less frequent get higher weights, and the end result is a class-balanced dataset in which the different classes ultimately contribute equally to the model's learning process. A short sketch of both of these approaches is shown below.
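Here is a short sketch of both approaches for a generic labeled dataset (X, y); either one can replace uniform random batch selection in an ordinary training loop:

import numpy as np

def balanced_batch(X, y, batch_size):
    """Approach 1 - class-balanced batches: sample an equal number of examples per class."""
    classes = np.unique(y)
    per_class = batch_size // len(classes)
    idx = np.concatenate([
        np.random.choice(np.where(y == c)[0], size=per_class, replace=True)
        for c in classes
    ])
    np.random.shuffle(idx)
    return X[idx], y[idx]

def inverse_frequency_weights(y):
    """Approach 2 - example weighting: selection probability proportional to 1 / class frequency."""
    classes, counts = np.unique(y, return_counts=True)
    freq = dict(zip(classes, counts / len(y)))
    weights = np.array([1.0 / freq[label] for label in y])
    return weights / weights.sum()          # normalized sampling probabilities

# usage sketch: draw a batch according to the reweighted probabilities
# probs = inverse_frequency_weights(y)
# idx = np.random.choice(len(y), size=32, p=probs)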
Another way we can visualize this reweighting idea is by using the size of these data points to reflect their probability of selection during training, and what example weighting means is that we can increase the probability that rare classes will be selected during training and decrease the probability that common classes will be selected. So far we have focused on the issue of class imbalance and discussed these two approaches to mitigate it. What if our classes are balanced, could there still be biases and imbalances within each class? Absolutely. To get at this, let's consider the problem where we're trying to train a facial detection system, and let's say we have an equal number of images of faces and non-faces that we can use to train the model. Still, there could be hidden biases lurking within each class, which in fact may be even harder to identify and even more dangerous, and this could reflect a lack of diversity in the within-class feature space, that is to say, the underlying latent space of this data. Continuing with the facial detection example, one example of such a feature may be the hair color of the individuals whose images are in our face-class data, and it turns out that in the real world the ground-truth distribution of hair color is about 75 to 80 percent of the world's population having black hair, eighteen to twenty percent having brown hair, two to five percent having blonde hair, and approximately two percent having red hair. However, some gold-standard data sets that are commonly used for image classification and face detection do not reflect this distribution at all, in that they over-represent brown and blonde hair and under-represent black hair, and of course, in contrast to this, a perfectly balanced data set would have equal representation for these four hair colors. I'll say here that this is a deliberate oversimplification of the problem, and in truth all features, including hair color, will exist on a spectrum, a smooth manifold in data space, and so ideally what we'd ultimately like is a way to capture more subtlety about how these features are distributed across the data manifold and use that knowledge to actively de-bias our deep learning model. But for the purpose of this example, let's continue with the simplified view, and let's suppose we take this gold-standard data set and use it to train a CNN for facial detection. What could end up occurring at test time is that our model ends up being biased with respect to its performance across these different hair color demographics, and indeed, as I introduced in the beginning of this lecture, these exact same types of biases manifest quite strongly in large-scale commercial-grade facial detection and classification systems. Together, I think these considerations raise the critical question of how we can actually identify potential biases which may be hidden and not overtly obvious, like skin tone or hair color, and how we can actually integrate this information into the learning pipeline, and, going a step beyond this, how learning pipelines and techniques can actually use this information to mitigate these biases once they are identified. These two questions introduce an emerging area of research within deep learning, and that's this idea of using machine learning techniques to actually improve the fairness of these systems, and I think this can be done in two principal ways. The first is this idea of bias mitigation, and in this case we're given some biased model, data set, or learning
pipeline and here we want to try to apply a machine learning technique that is designed to remove aspects of the signal that are contributing to unwanted bias and the outcome is that this bias is effectively mitigated reduced along the particular axis from which we remove the signal resulting in a model with improved fairness we can also consider techniques that rather than trying to remove signal try to add back signal for greater inclusion of underrepresented regions of the data space or of particular demographics to ultimately increase the degree to which the model sees particular slices of the data and in general this idea of using learning to improve fairness and improve equitability is an area of research which i hope will continue to grow and advance in the future years as these as these problems gain more traction all right so to discuss and understand how learning techniques can actually mitigate bias and improve fairness we first need to set up a few definitions and metrics about how we can actually formally evaluate the bias or fairness of a machine or deep learning model so for the sake of these examples we'll consider the setting of supervised learning specifically classification a classifier should ultimately produce the same output decision across some series of sensitive characteristics or features given what it should be predicting therefore moving from this we can define that a classifier is biased if its decision changes after it is exposed to particular sensitive characteristics or feature inputs which means it is fair with respect to particular variable z if the classifier's output is the same whether we condition on that variable or not so for example if we have a single binary variable z the likelihood of the prediction being correct should be the same whether or not z equals 0 or z equals 1. 
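A small sketch of checking that definition empirically: compute accuracy separately for each value of a binary sensitive attribute z and compare. All of the arrays below are hypothetical placeholders for real predictions and annotations.

```python
import numpy as np

def disaggregated_accuracy(y_true, y_pred, z):
    """Accuracy computed separately for each value of a sensitive attribute z."""
    return {value: np.mean(y_pred[z == value] == y_true[z == value])
            for value in np.unique(z)}

# Hypothetical predictions and a binary sensitive attribute
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])
z      = np.array([0, 0, 0, 0, 1, 1, 1, 1])

per_group = disaggregated_accuracy(y_true, y_pred, z)
gap = abs(per_group[0] - per_group[1])   # a large gap suggests bias with respect to z
print(per_group, gap)
```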
So this gives a framework with which we can think about how to evaluate the bias of a supervised classifier. We can take this a step further to actually define performance metrics and evaluation analyses to determine these degrees of bias and fairness. One thing that's commonly done is to measure performance across different subgroups or demographics that we are interested in; this is called disaggregated evaluation. So, let's say we're working with colored shapes, this could be with respect to the color feature keeping shape constant, or the shape feature keeping color constant. We can also look at the performance at the intersections of different subgroups or demographics, which in our shape and color example would mean simultaneously considering both color and shape and comparing performance on blue circles against performance on orange squares, and so on and so forth. Together, now that we've defined what a fair classifier would look like and also some ways we can actually evaluate the bias of a classification system, we have the framework in place to discuss some recent works that actually used deep learning approaches to mitigate bias in the context of supervised classification. The first approach uses a multi-task learning setup and adversarial training. The way this framework works is that we, the human users, need to start by specifying an attribute z that we seek to de-bias against, and the learning problem is such that we're going to train a model to jointly predict an output y as well as the value of this attribute z. So, given a particular input x, this is going to be passed into the network via embedding and hidden layers, and at the output the network will have two heads, each corresponding to one of the prediction tasks: the first being the prediction of the target label y, and the second being the prediction of the value of the sensitive attribute that we're trying to de-bias against, and our goal is to try to remove any confounding effect of the sensitive attribute on the outcome of the task prediction decision. This effect removal is done by imposing an adversarial objective into training, specifically by negating the gradient from the attribute prediction head during backpropagation, and the effect of this is to remove any confounding effect that that attribute prediction has on the task prediction. When this model was proposed, it was first applied to a language modeling problem where the sensitive attribute that was specified was gender, and the task of interest was this problem of analogy completion, where the goal is to predict the word that is likely to fill an analogy, for example, he is to she as doctor is to blank. When a biased model was tested on this particular analogy, the top predictions it returned were things like nurse, nanny, fiancee, which clearly suggested a potential gender bias. However, a de-biased model employing this multi-task approach, with specification of gender as the attribute, was more likely to return words like pediatrician or physician, examples or synonyms for doctor, which suggested some degree of mitigation of the gender bias. However, one of the primary limitations of this approach is this requirement for us, the human user, to specify the attribute to de-bias against, and this can be limiting in two ways: first, because there could be hidden and unknown biases that are not necessarily apparent from the outset, and ultimately we want to also de-bias against these.
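The gradient-negation mechanic described above can be sketched with a custom gradient. This is a rough TensorFlow sketch under stated assumptions: the layer sizes, the two heads, and the equal weighting of the two losses are illustrative choices, not the exact setup of the referenced work.

```python
import tensorflow as tf

@tf.custom_gradient
def grad_reverse(x):
    """Identity in the forward pass; negates the gradient in the backward pass."""
    def grad(dy):
        return -dy
    return tf.identity(x), grad

# Hypothetical shared trunk and two prediction heads (layer sizes are placeholders)
trunk = tf.keras.Sequential([tf.keras.layers.Dense(64, activation="relu")])
task_head = tf.keras.layers.Dense(1, activation="sigmoid")   # predicts the target label y
attr_head = tf.keras.layers.Dense(1, activation="sigmoid")   # predicts the sensitive attribute z
bce = tf.keras.losses.BinaryCrossentropy()

def joint_loss(x, y, z):
    h = trunk(x)
    task_loss = bce(y, task_head(h))
    # The attribute head is trained to predict z as usual, but because its input passes
    # through grad_reverse, the shared trunk receives a negated gradient and is pushed
    # to make z harder to predict, removing its confounding effect on the task.
    attr_loss = bce(z, attr_head(grad_reverse(h)))
    return task_loss + attr_loss

# A training step would compute joint_loss under tf.GradientTape and apply the
# gradients to trunk, task_head, and attr_head as usual.
```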
Furthermore, by specifying what the sensitive attribute is, we humans could be inadvertently propagating our own biases by way of telling the model what we think it is biased against. So ultimately what we want and desire is an automated system that can try to identify and uncover potential biases in the data without any annotation or specification, and indeed this is a perfect use case for generative models, specifically those that can learn and uncover the underlying latent variables in a data set. In the example of facial detection, if we're given a data set with many, many different faces, we may not know what the exact distribution of particular latent variables in this data set is going to be, and there could be imbalances with respect to these different variables, for example face pose or skin tone, that could end up resulting in unwanted biases in our downstream model. As you may have seen in working through lab 2, using generative models we can actually learn these latent variables and use this information to automatically uncover underrepresented and over-represented features and regions of the latent landscape, and use this information to mitigate some of these biases. We can achieve this by using a variational autoencoder structure, and in recent work we showed that, based on this VAE network architecture, we can learn the underlying latent structure of a data set in a completely unbiased, unsupervised manner, for example picking up, in the case of face images, particular latent variables such as orientation, which were once again never specified to the model; it picked up and learned this as a particular latent variable by looking at a lot of different examples of faces and recognizing that this was an important factor. From this learned latent structure we can then estimate the distributions of each of these learned latent variables, which means the distribution of values that these latent variables can take, and certain instances are going to be over-represented. So, for example, if our data set has many images of faces of a certain skin tone, those are going to be over-represented, and thus the likelihood of selecting a particular image that has this particular skin tone during training will be unfairly high, which could result in unwanted biases in favor of these types of faces. Conversely, faces with rare features like shadows, darker skin, glasses, or hats may be under-represented in the data, and thus the likelihood of selecting instances with these features to actually train the model will be low, resulting in unwanted bias. From this uncovering of the distribution of latent structure, we showed that this model can actually adaptively adjust the sampling probabilities of individual data instances to re-weight them during the training process itself, such that these latent distributions and this resampling approach can be used to adaptively generate a more fair and more representative data set for training. To dig more into the math behind how this resampling operation works: the key point of this approach is that the latent space distribution is approximated by a joint histogram over the individual latent variables. Specifically, we estimate individual histograms for each of the individual latent variables, and for the purpose of this approximation assume that these latent variables are independent, such that we can take their product to arrive at an estimate of the joint distribution across the whole latent space.
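A rough NumPy sketch of this histogram-based approximation, and of the resampling weights it induces, is below; the weighting formula is described in more detail in the next passage. The number of bins, the alpha value, and the exact form of the weighting are assumptions for illustration rather than the paper's precise formulation.

```python
import numpy as np

def resampling_probs(latent_means, n_bins=10, alpha=0.01):
    """latent_means: (num_samples, latent_dim) array of per-example latent means."""
    n, d = latent_means.shape
    density = np.ones(n)
    for i in range(d):
        hist, edges = np.histogram(latent_means[:, i], bins=n_bins, density=True)
        bins = np.clip(np.digitize(latent_means[:, i], edges[1:-1]), 0, n_bins - 1)
        density *= hist[bins]            # independence assumption: product of per-variable marginals
    weights = 1.0 / (density + alpha)    # rare latent regions get larger weights; alpha tempers this
    return weights / weights.sum()       # normalized probabilities for sampling training examples

# Hypothetical latent means produced by an encoder; sample a training batch accordingly
z_mu = np.random.randn(1000, 8)
p = resampling_probs(z_mu)
batch_idx = np.random.default_rng(0).choice(len(z_mu), size=32, p=p)
```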
Based on this estimated joint distribution, we can then define the adjusted probability for sampling a particular data point x during training, based on the latent space for that input instance x. Specifically, we define the probability of selecting that data point according to the inverse of the approximated joint distribution across the latent space, which is once again defined by each of these individual histograms, and furthermore weighted by a de-biasing parameter alpha which tunes the degree of de-biasing that is desired. Using this approach and applying it to facial detection, we showed that we could actually increase the probability of resampling for faces that had underrepresented features, and this manifested qualitatively when we inspected the top faces with the lowest and highest resampling probabilities, respectively. We then could deploy this approach to actually select batches during training itself, such that batches sampled with this learned de-biasing algorithm would be more diverse with respect to features such as skin tone, pose, and illumination. The power of this approach is that it conducts this resampling operation based on features that are automatically learned; there's no need for human annotation of what the attributes or biases should be, and thus it's more generalizable and also allows for de-biasing against multiple factors simultaneously. To evaluate how well this algorithm actually mitigated bias, we tested on a recent benchmark data set for the evaluation of facial detection systems that is balanced with respect to the male and female sexes as well as skin tone, and to determine the degree of bias present we evaluated performance across subgroups in this data set, grouped on the basis of the male/female annotation and the skin tone annotation. When we considered the performance first of the model without any de-biasing, so the supposedly biased model, we observed that it exhibited the lowest accuracy on dark males and the highest accuracy on light males, with around a 12 percent difference between the two. We then compared this accuracy to that of the de-biased models and found that with increasing de-biasing the accuracy actually increased overall, and in particular on subsets and subgroups such as dark males and dark females, and critically, the difference in accuracy between dark male and light male faces decreased substantially with the de-biased model, suggesting that this approach could actually significantly decrease categorical bias. To summarize, in today's lecture we've explored how different biases can arise in deep learning systems and how they manifest, and we also went beyond this to discuss some emerging strategies that actually use deep learning algorithms to mitigate some of these biases. Finally, I'd like to close by offering some perspectives on what I think are key considerations for moving towards improved fairness of AI. The first is what I like to call best practices, things that should become standard in the science and practice of AI, things like providing documentation and reporting on the publication of data sets as well as of models, summarizing things like training information, evaluation metrics, and model design, the goal of this being to improve the reproducibility and transparency of these data sets and models as they're used and deployed. The second class of steps I think need to be taken are new algorithmic solutions to actually detect and mitigate biases during all aspects of the learning pipeline, and today we considered two such approaches, but there's still so much work to be done in this
field to really build up to robust and scalable methods that can be seemly seamlessly integrated into existing and new ai pipelines to achieve improved fairness the third criterion i think will be improvements in terms of data set generation in terms of sourcing and representation as well as with respect to distribution shift and also formalized evaluations that can become standard practice to evaluate the fairness and potential bias of new models that are output above all i think what is going to be really critical is a sustained dialogue and collaboration between ai researchers and practitioners as well as end users politicians corporations ethicists so that there is increased awareness and understanding of potential societal consequences of algorithmic bias and furthermore discussion about new solutions that can mitigate these biases and promote inclusivity and fairness with that i'll conclude and i'd like to remind you that for those of you uh taking the course that the entries for the lab competitions are going to be due today at midnight eastern time please submit them on canvas and as always if you have any questions on this lecture the labs any other aspects of the course please come to the course gather town thank you for your attention
MIT_6S191_Introduction_to_Deep_Learning
MIT_6S191_Deep_Generative_Modeling.txt
[Music] so we are now arrived at one of my favorite lectures of the course I think they're all great but this one is particularly um one of my favorites and I think today it's also particularly Salient because we're going to be talking about this notion of generative model or generative Ai and today we're in this moment where this word of generative AI has maybe been propagating a lot and creating a lot of buzz but today we're actually going to dive in to learn the foundations of what this concept of generative modeling even means and what that core is is that we can build systems that not only look for patterns in data and to make decisions about that data but to actually generate brand new instances of data based on those learned patterns and it's an incredibly powerful idea and I think it gets really a heart at the heart of you know what we mean by intelligence and what we want these types of systems to be capable of doing so let's start with a little uh quiz of sorts I want you to take a look at these three faces and think about which one of these faces you believe is real is it face a all right maybe 5% of people face B slightly more maybe 7 10% face C also okay I'm I'm measuring these percents relative to the total population not relative to each other none so yes yes now everyone is getting more intelligent and I can't fool people as easily as in the past but the truth is yes they are all fake none of these faces is real none of these people actually exist in fact they were generated by a generative model called a style Gan and we're going to get to it to this um type of architecture and how it works later in the question in in the lecture sorry um but generative modeling to start from a basic like machine learning point of view when we talk about types of machine learning models or deep learning models that we can build we can make some classifications about what these models are actually doing in terms of the task the objective that they're trying to learn and very very broadly we can think of two classes of models the first class of models that we've been talking about so far are what we call supervised models and they perform supervised learning meaning that we have data that is of the form of um instances like images or sequences so on and each of those instances is associated with a label something that we're trying to learn a mapping between that data and that label and the goal with deep learning supervised deep learning models is to learn a neural network that maps that function this can be used for classification regression segmentation and so on now there's another whole class of problems out there in machine learning called unsupervised learning problems and that's going to be our focus in this lecture today and the concept is that with unsupervised learning our goal is not necessarily to learn a mapping from input X to Output y but to just look at X on its own and try to learn patterns and features that Define that data distribution learning the underlying and hidden structure to the data and this is not necessarily something that's exclusive to deep learning and neural networks there are other techniques in stats and machine learning that can perform this type of unsupervised analysis and the idea here is that with this concept of learning the underlying structure to data what we're ultimately trying to do is to build a neural network that captures something about the distribution of that data and learning a model that represents that distribution what that means and 
how that is realized is in two principal ways. The first is we can try to build models that perform what we call density estimation, meaning that these models are going to see a bunch of different samples of data and they're going to learn to fit a function that models the probability distribution associated with how those data fall in some space. The other thing that we can do is take this learned probability density and then say, okay, because we have this underlying probability distribution, what if I draw new samples from it, can I use this sort of sampling operation to actually generate new data instances? And this is the notion of sample generation. The common point to both these use cases of generative modeling is this question of how we can learn a very good model of the probability distribution that is similar to the true probability distribution of our real data. So it's a very powerful idea and a framework that's useful in a lot of different settings beyond just generating images of people's faces. In fact, we can actually use generative models to do more intelligent learning of the algorithm and the model itself, to use this information about what aspects of the probability distribution are more represented relative to others to actually train models that are more fair, less biased, with respect to how they perform on particular facets of the underlying data distribution. The concept here is that the model is only going to be as good as the data it sees, traditionally, but if we can use the information about what aspects of the data are more represented in the overall population, we can try to think about how we can create more fair and representative data sets based on these learned features, and this is something that you're going to get hands-on experience with in lab 2 of the class. Another use case for these probability distributions and learned generative models is in the context of outlier detection. Oftentimes we have settings where we want to be able to preemptively find failure modes. So in the self-driving car example that Alexander previously introduced, there are going to be some instances that could occur on the road that are very rare but really critical, let's say a person walks in front of the car or an animal crosses the road; you want the model to be robust, to be able to handle those instances. And so by modeling this probability distribution we can automatically detect: is this new observation an outlier, or is it falling squarely in the middle of the probability distribution, and then use this information adaptively to improve the model itself. So this is kind of a flavor of what types of applications these generative models can be useful for, and in today's lecture we're going to specifically dive deeply into two classes of models. They're both what we call latent variable models, and we'll look at the architectures, give you a sense of how they work, and ultimately try to convey the fundamentals and some of the math behind these types of modeling frameworks.
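As a toy illustration of density estimation, sample generation, and outlier scoring (deliberately far simpler than what a deep generative model actually learns), one can fit a single Gaussian to some observed data and score new points against it. Everything below is an assumed example.

```python
import numpy as np

# Fit a single Gaussian to some observed 2-D data (a deliberately simple density model)
rng = np.random.default_rng(0)
data = rng.normal(loc=[0.0, 0.0], scale=1.0, size=(5000, 2))
mu = data.mean(axis=0)
cov = np.cov(data, rowvar=False)
cov_inv = np.linalg.inv(cov)

def log_density(x):
    diff = x - mu
    return -0.5 * diff @ cov_inv @ diff   # Gaussian log-likelihood, up to a constant

# Sample generation: draw new points from the learned density
new_samples = rng.multivariate_normal(mu, cov, size=10)

# Outlier detection: a point far from the bulk of the distribution gets a much lower score
print(log_density(np.array([0.1, -0.2])))   # typical point, higher log-density
print(log_density(np.array([6.0, 6.0])))    # rare event, much lower log-density
```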
So before we get to that, I think it's really useful to have a core understanding of what we mean by a latent variable, and the example that I love to use to illustrate this is a story from Plato's work, the Republic, dating back many years to ancient Greece, known as the myth of the cave. In this myth there is a group of prisoners, and they're forced to stay in this cave and asked to just face a particular wall of the cave. The prisoners are imprisoned such that they can only observe things that are projected in front of them on the wall of this cave. So what they're seeing is shadows of objects that actually exist behind them; they're only seeing the shadow projection that appears in front of them on the cave wall. To the prisoners, these shadows are their reality, they're the only things that they're observing, but we know that in truth there's actually something behind them that is casting the shadow that the prisoners observe. And this is the notion of what is an observed variable, the shadows, and what is a latent variable, something that's hidden, something that's underlying and actually driving the structure of the problem. Of course this is an analogy, but it gets at this intuition that latent variables are these underlying elements of structure to data, the factors that are driving what it is that we're measuring and observing. So our goal with generative models and latent variable models is to try to learn, in an automated way, something about a distribution that hopefully captures these underlying drivers or factors that are resulting in the observed data that we see. To get a sense of how we can do this from a deep learning, neural network architecture framework, we're going to start by talking about a really simple generative model that we call an autoencoder, and the notion here is that we can take data and try to learn a lower-dimensional encoding, some set of features that hopefully represents that data faithfully. What I'm showing here is an example with an image, the digit 2, and the autoencoder is taking examples of this data and trying to learn a lower-dimensional feature space that represents it in a completely unlabeled way. So maybe let's first ask ourselves a question here: if we're trying to learn an encoding, what I'm denoting here as the variable z, why would we want this set of variables z to have a fairly low dimensionality? Yes, it lets us eliminate redundancies, exactly. Other ideas as well? Efficiency, exactly. So the concept here is that we want to eliminate redundancy while being efficient, to basically compress the data to a lower-dimensional encoding that hopefully captures a rich representation of that data. And so now the question is how we can actually train a model to learn that lower-dimensional encoding. The concept of the autoencoder is, well, okay, if we're going to start from high-dimensional data like an image and move down to low-dimensional data, maybe we can use reconstruction of the data as effectively a task, an unsupervised task, to teach the model to learn this encoding. So the concept here is that the autoencoder takes, let's say, an image, compresses it down to this lower-dimensional encoding space, and then performs a decoding back all the way up to the dimensionality of the original data space, and so effectively you're learning this lossy reconstruction between your input data and this predicted reconstruction that we're calling x hat. Now, because this reconstruction is imperfect relative to the original data, what we can do is train the network by comparing the input data x and the reconstruction x hat and just asking it, okay, learn to minimize the difference between the two. That's it.
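A minimal Keras-style sketch of this reconstruction-driven training is below. The layer sizes, the latent dimensionality, and the 28x28 image shape are placeholder assumptions for illustration.

```python
import tensorflow as tf

latent_dim = 16                                  # size of the bottleneck z (an assumption)

encoder = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(latent_dim),           # the compressed encoding z
])
decoder = tf.keras.Sequential([
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(28 * 28, activation="sigmoid"),
    tf.keras.layers.Reshape((28, 28)),
])

autoencoder = tf.keras.Sequential([encoder, decoder])
autoencoder.compile(optimizer="adam", loss="mse")   # reconstruction loss: compare x to x_hat

# Training uses the input as its own target; no labels y are involved:
# autoencoder.fit(x_train, x_train, epochs=10, batch_size=64)
```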
And this is done using that same type of loss and objective that we saw back in lecture one, something like a mean squared error, which with an image just means comparing pixel by pixel what the difference is between the original data x and the reconstruction x hat. So this is a very straightforward way, and importantly, notice that in this reconstruction operation our loss is only dependent on the input data x and the reconstruction x hat; there's no sense of y, no sense of labels here. The last step, which I'm just showing schematically, is taking those individual layers, like convolutional layers, abstracting them away in this diagram, and again showing you this concept of autoencoding: the input going down to a lower-dimensional latent space z and then decoding back out to the original data space via this reconstruction. This is a really powerful idea that I hope you appreciate for a moment here, because what this lower-dimensional latent space, the set of latent variables z, allows us to do is learn a feature set that we have not observed directly; it's entirely learned by the network, but our hypothesis is that it's an effective enough way to be able to reconstruct the data back out. So when it comes to this earlier question of what the dimensionality of this space z should be, it turns out a really good way to think about this is this notion of orthogonality and also of memory bottleneck and compression. Depending on how big you choose that latent space, that encoding, to be, the quality of your reconstructions and the quality of the generations you can get out is going to be different. You can imagine the smaller and smaller you go, you're forcing a larger memory bottleneck, which means that you're going to incur more loss and your reconstructions are going to be of poorer quality, so there's a trade-off between how good the samples you can get out are and how small you make that latent space. The core concept with this idea of autoencoding is that we're effectively forcing the network to learn this compressed latent representation, and we're doing this via this reconstruction loss, capturing as much information about the data as possible by having the network learn to decode back out, and this is where the name autoencoding kind of comes from; you can think of it as self-encoding or automatically encoding the data. Now, the same concept of autoencoding down to this bottleneck latent layer is the basis by which we can start introducing a little more power in our ability to actually generate new samples, and to do that we're going to explore this concept of variation and variational autoencoders. Traditionally, with what we just saw, this take x, reconstruct it out, is a very deterministic operation: this latent encoding z is just like any other neural network layer, meaning that once the network weights are finalized, no matter how many times you put this input two in, if those weights hold still you're going to get that same reconstruction out. It's a deterministic operation, and it's not super useful if you now want to generate new samples, because all the network has learned is to reconstruct. In order to get more variability and more sampling, what we need is to introduce some notion of randomness, some notion of an actual probability distribution, and the way that is done is via this idea of introducing stochasticity or sampling to the network itself, and this is the difference
between a variational autoencoder and a standard Auto encoder and the concept here is now in this bottleneck latent layer instead of learning those latent variables directly let's say we have means and standard deviations for each of the latent variables then now let us Define what a probability distribution for each of those latent variables what that means is now we've gone from a vector of latent variable Z to a vector of means mu and a vector of standard deviations Sigma and so this is effectively putting a probabilistic Twist and flavor on this idea of autoencoding meaning that we can now learn these latent variables defined by a mean defined by a standard deviation and hopefully use this to Now sample from those probability distributions to generate new data instances question great question so the question is do we assume that the distributions are normal short answer is yes we do and we're going to now transition exactly into explaining why that is a reasonable assumption to make so now to get a little bit further into this right now instead of a purely deterministic setup we have these two halves to this Auto vae autoencoder framework first we have what we call an encoder what's shown in green and second we have a decoder shown in purple right all the encoder is trying to do is take the data and compute a representation of a probability distribution of these latent variables given the data X the decoder does kind of the inverse now operating back out given those latent variables can we learn the distribution of the data X and these two networks uh the encoder and decoder modules are defined by their own sets of Weights Fe here represents the weights for the encoder component and Theta represents the weights for the uh decoder component the next step is to Define an actual loss function and objective that will allow us to learn this network uh and optimize the vae and to break that down we're going to look at two components to that there's first this reconstruction loss which is going to be very very similar if not exactly equal to the autoencoder loss that we introduced previously but now we're going to introduce a second term that gets at the probability aspect a little more so first let's remember that we're trying to optimize our loss with respect to the weights Fe and the weights Theta the first component right is that reconstruction with an image we can look at our input data X the Reconstruction X hat directly compare them pixelwise and compute this reconstruction error now we have to introduce a a another term that gets a little bit more interesting what we call this regulation regularization term and the reason for that is we need to make some assumption as the question asked about what this probability distribution actually looks like effectively what is done with regular regularization is we have to place a prior a guess a hypothesis on what what we expect this distribution of latent variables Z defined by mu and sigma to be like the regularization term is then saying okay now we're going to take that prior and we're going to compare how good the representation learned by the encoder is relative to that prior how close does it match our assumption about let's say a normal distribution and the concept here is that it's learning kind of the distance it's capturing the distance between the learn distribution the inferred latent distribution q and some prior we have some guess we have about what those latent variables should look like right now the question comes how do we 
choose that prior? Right, question: yeah, how do we choose that prior? So the most common choice, and what's done very frequently in practice, is to assume and place the prior of a normal distribution. What that means is it's a normal Gaussian, meaning that we assume each latent variable is centered with mean zero and has a standard deviation and variance of one, and the rationale here is that it's going to encourage the model to distribute the encodings of these latent variables roughly evenly around the centered latent space, and we can further penalize the network when it tries to cheat and diverge too far from this prior. I'm going to go into a more intuitive explanation of why this normal distribution prior makes sense, but first, very briefly, to touch on what the form of this regularization term actually looks like mathematically: there's this metric that we call the KL divergence, which is effectively a distance function that says, I have one distribution and I have a second distribution, how closely do they match? That's all this KL divergence is trying to compute. When we place the prior of a normal Gaussian, the KL divergence takes a pretty clean form, shown here; you don't need to concern yourself so much with the details, but you should know conceptually that we're trying to look at the distance and help regularize the network to minimize the distance between the prior and the learned latent distribution. So now, the question that's been raised a couple of times: how do we choose this normal prior, why do we choose it, and what does it effectively do? To break that down, let's consider what properties we want this technique of regularization to actually achieve. The first idea is that we want our latent space to be continuous, meaning that points that are close together in this latent space z should hopefully be similar, meaning that after you decode back out you get related content, related images, so let's say multiple different types of twos, for example. Secondly, when we go to use our VAE to actually sample, we want to have completeness, meaning that whenever we draw a sample from the latent space, hopefully we get something reasonable back out; we don't want samples to result in something nonsensical. With these two criteria in mind, the concept is that without regularization we end up sacrificing both these properties: things that are close in the latent space may not be similarly decoded, meaning there's a lack of continuity. So for example, if we have this green point here and this orange point in the latent space, one may end up decoding back a square with one point and a triangle with another point, and yet they're similar and close in the latent space, so objects that are actually dissimilar are close in the latent space, and we don't want that; rather, we want things that are similar to be close in the latent space. If we impose some form of regularization, what this amounts to is that now points that end up close in the latent space are similar when they're decoded, and they're decoded meaningfully; we're going to get roughly triangle shapes out for both of these two points that are located very close to each other in the latent space. That gives us a sense of why regularization can be useful.
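For reference, the "clean form" of the KL term against a unit Gaussian prior is a standard result, and the full VAE objective then combines it with the reconstruction term. Here it is as a small NumPy sketch; the beta weight is an optional knob on the regularization term (weighting it more heavily comes up again shortly as the beta-VAE idea).

```python
import numpy as np

def vae_loss(x, x_hat, mu, log_var, beta=1.0):
    # Reconstruction term: how well the decoder reproduces the input (pixel-wise error)
    reconstruction = np.mean((x - x_hat) ** 2)
    # Regularization term: closed-form KL divergence between N(mu, sigma^2) and N(0, 1),
    # summed over the latent dimensions (log_var = log sigma^2)
    kl = 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var)
    return reconstruction + beta * kl   # beta > 1 weights regularization more heavily
```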
So why does the normal prior help us? It's because of the fact that, without this regularization imposing the normal prior, there is not so much structure in terms of what these distributions of the latent variables actually look like, meaning that you could get very small variances that lead to pointed distributions in the latent space, meaning you're going to get different means, and when you try to sample you're going to get discontinuities; and furthermore, on the inverse, if you have very large variances, then this kind of destroys any sense of difference, when you have things that are just covering the latent space very broadly. So again, illustrating now visually: if we get very pointed distributions from not regularizing, having small variance and different means, this could lead to discontinuities that we don't want. The high-level intuition with the normal prior is that it imposes this regularization, to be roughly centered with mean zero and standard deviation of one, to try to encourage the network to satisfy these desired properties of continuity and completeness. Okay, any questions on that? I know that was kind of a hard intuition to wrap your minds around. Yes: is there a fundamental relationship between the weights in the decoding and what's learned in the encoding? So the question is, is there a fundamental relationship between weights in the decoder and weights in the encoder? The answer is yes, and that is because the network is learned completely end to end, meaning we don't train the encoder separately and the decoder separately, we learn them all at once, so they're fundamentally linked, and that pattern of linkage is what is actually learned by the model in training. Are the latent variables completely abstract, is there anything physical, do we even look at the hidden variables? Absolutely, so the question is how we can interpret the latent variables. The answer is we can interpret them, and they're not necessarily always abstract; in fact, that's kind of the whole point of doing this encoding and this representation-learning type of approach: you want to try to extract some notion of structure in those latent variables, but do it in a data-driven way. And so I'll show a technique for how we can do that. One way you can start to interpret the latent variables is to hold all but one fixed, and then effectively perturb, change, the value of that latent variable, holding all the other latent variables constant, decode, reconstruct the input based on that perturbation, and then examine the instances you get out, how they are changing, and use that to back-interpret what that latent variable was capturing. I'll show an example of this, and it's also a hands-on example in the software lab. I know there are a couple of questions, but I'm going to keep progressing. To the first question, our next point and section of this lecture gets at this notion of how we actually train this model end to end to learn these sets of weights. We have this loss function comprised of the reconstruction term and the regularization term, and our goal is to learn these sets of weights completely end to end using this loss function. There's a problem, though, and a little bit of a nuance here, in that by introducing this notion of sampling and the mean and standard deviation terms here, it's not immediately clear, we don't have a direct solution for, how we can backpropagate gradients when we want to capture something probabilistic.
And so with VAEs, they employ this really clever trick that effectively diverts the sampling operation, reparameterizing the latent variable equation slightly so that we can train the model end to end. Rather than saying that our latent variable z is directly a sample of a normal distribution defined by mu and sigma squared, we say let's offset all the randomness, all that sampling, to this other variable epsilon, and say our latent variables are now the sum of a fixed vector of means mu and a fixed vector of standard deviations sigma, where sigma is scaled by some random constant that is drawn from a prior normal distribution and is effectively going to capture the stochasticity that we want out of our sampling operation: fixed mean, fixed standard deviation vector. As a result, this effectively diverts the sampling operation away from the latent variable z, where we cannot backpropagate through; rather, we can learn a fixed set of means and a fixed set of standard deviations and then divert the stochasticity to this variable epsilon. This allows us to backpropagate gradients through and directly learn the latent variables via this mean and standard deviation vector. So this is called the reparameterization trick of a VAE, and it's ultimately the solution that allows the network to be trained end to end. Now, when it comes to having a trained VAE, a trained model, and trying to understand this question of what the latent variables actually represent, what we can do is keep all variables except for one fixed and slowly perform a perturbation, changing the value of that single latent variable incrementally and using the decoder component to decode back out to our original data space. In this example with the face, you're seeing that come to life, where the face is being reconstructed following a perturbation of a single latent variable that, hopefully you can see, is capturing the head pose, the tilt of the face in the image. What this gets at is that the different dimensions, the different latent variables, are trying to encode different features that are hopefully interpretable. Ideally, with this idea of orthogonality and feature learning, you want those latent variables to be as orthogonal, as uncorrelated, to each other as possible, because then you're maximizing the amount of information that your model is learning across this small set of latent variables, and this is the notion of disentanglement. It's a very common question in training these VAE-type models how you can disentangle distinct latent variables, something like head pose versus smile in the example shown here. It turns out in practice there's a pretty straightforward solution to encourage disentanglement when you learn a VAE, and the solution is to basically weight the reconstruction term relative to the regularization term and tune the relative strength of these two components of the loss to encourage disentanglement. There's a slight variant of a VAE called a beta-VAE that just says, okay, if we put a weight on the regularization term that's greater than one, it can constrain the latent bottleneck to be encouraged to disentangle these distinct variables, such that now you can get more interpretable and better disentangled features with greater weights on the regularization term. There are more details in the paper that go into the math of why this works, but the concept is that by imposing stricter regularization you can now try to disentangle these variables more strongly.
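A small NumPy sketch of the reparameterized sampling and of the perturbation analysis just described is below; `decoder` stands in for a trained decoder network, and the sweep range is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_latent(mu, log_var):
    # Reparameterization: z = mu + sigma * epsilon, with all randomness pushed into epsilon
    epsilon = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * epsilon

def traverse(decoder, z, dim, values=np.linspace(-3, 3, 7)):
    # Latent traversal: hold every latent dimension fixed except one, sweep its value,
    # and decode each perturbed vector to see what that dimension controls.
    outputs = []
    for v in values:
        z_perturbed = z.copy()
        z_perturbed[dim] = v
        outputs.append(decoder(z_perturbed))   # decoder is assumed to be a trained network
    return outputs
```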
Okay, so to summarize this part of the talk and to wrap up this concept of latent variable models: you're going to get some hands-on experience in the software lab playing with these types of models in the context of computer vision and facial detection systems, and see how we can interpret these types of features and use those features to actually create better models that are going to be more fair and more unbiased. Finally, to close, the summary of the key points and concepts behind VAEs is that, fundamentally, this notion of latent variable models is effectively trying to compress data into a low-dimensional encoding that we can use to both generate new samples and also understand the features underlying that data. This framework allows for completely unsupervised reconstruction, we don't need labels associated with the data; we can use this notion of reparameterization to train the model end to end; we can interpret the latent variables via this perturbation analysis; and we can now use the learned latent variables to sample from that space and generate new examples. So with VAEs, the core is that this latent variable space is effectively an estimate, a representation, of that probability distribution that I mentioned at the beginning. Sometimes, though, we want to prioritize the ability to generate very faithful samples while sacrificing our ability to learn those features in a more interpretable or probabilistic way, and so there's another complementary class of models, called generative adversarial networks, or GANs, that are more focused on this question of how we can just generate samples that are going to be very good, sacrificing the ability to decode and interpret a set of latent variables. The idea behind GANs is, rather than trying to explicitly model these probability densities, let's just learn a transformation from something very, very simple, say completely random noise, and now learn a model that can transform that distribution of noise to the real data distribution, such that we can then use that learned model to generate samples that hopefully fall somewhere in that real data distribution. GANs initially introduced a lot of excitement about this idea of generation from completely random noise, which seems kind of wild when you think about it that way. The intuition behind GANs is pretty clever: the concept is that we're going to put two components of the network, what we call a generator and what we call a discriminator, in competition with each other; they're going to act as adversaries. We're going to first have a generator component that looks at completely random noise and attempts to sample from this noise and, let's say, upscale it to the real data distribution, and at first its generations are going to be pretty bad, they're not going to be very good. Then we're going to have another model, a discriminator, come in and look at those generated instances, compare them to real data, and say, is this real or is this fake? By linking these two together and training them jointly, the concept is, can we induce the generator to create better and better examples that will soon be good enough to fool the discriminator into not being able to tell what's fake and what's real? They're effectively warring, competing with each other, in this framework. And so to break down how this works, one of my absolute favorite illustrations, not only in this class but ever, is as follows.
I think it gives hopefully a strong intuition behind how GANs work. So we're going to consider points along this line. We have our generator here on the right; it's going to start from completely random noise and try to create an imitation of data, so let's say these orange points are those fake data samples drawn from noise. Now the discriminator is going to look at these points, and it's also going to see instances of real data, and we're going to train the discriminator to produce a probability that the data it sees are real or fake. In the beginning, it's not trained, so its predictions are not going to be so good, but then let's say we train it, we show it some more examples, and it starts increasing the probabilities of what is real and decreasing the probabilities of what is fake. We continue this, training a normal classifier model, a discriminator model, such that, using these real instances and these fake instances, the discriminator is perfect: it can perfectly tell us what's fake and what's real. Now the generator comes back and sees where those real data instances lie, and the objective that we supply to the generator is: move your samples closer and closer to these real instances. So it can start doing that, generating new samples that are, hopefully, if our objective is good, closer and closer to the real data. Now the discriminator sees these new samples, receives these new points, and its decision is not going to be as strong, not as clear, but again it's estimating the probability that each of these points is real, learning to decrease the probability of the fake points and increase the probability of the real points. Once again we repeat this process. One last time, the generator starts moving those fake points closer and closer to the real data, such that the fake data are almost overlapping with the real data, and so if the discriminator were now to come back and look at these samples on the right, it's not going to be able to tell what is real and what is fake, and that would indicate that the generator has succeeded in learning an objective to generate samples that mimic the real data distribution. So this is the summary and the concept of the framework behind the setup of a GAN, where you have a generator component, this network G, that is trying to synthesize fake data instances to fool the discriminator, while conversely the discriminator is trying to identify the synthesized fake instances from examples of both real and fake. In training, the objective, the loss function, that's actually set up for a GAN network composes these two competing objectives for the generator and the discriminator, and if we succeed in training the GAN overall, we will have reached an optimum where the generator is able to perfectly reproduce the true data distribution and the discriminator has no power at all in classifying these instances. It turns out that in practice this is pretty hard to do; GANs are notoriously very difficult to train, and in the years since, there have been a lot of works in the field that propose better loss functions and other tricks to stabilize the training of GANs, but in practice it's quite a difficult class of models to train. So there are things that we can do to make GANs better, and there are also things that we can do to introduce new types of models that are going to be more robust and easier and more stable to train than this framework with GANs.
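The lecture does not write the loss out explicitly at this point, but the two competing objectives are commonly expressed with binary cross-entropy; a schematic sketch is below, with G and D as placeholder networks and the training details (optimizers, alternating updates) omitted.

```python
import numpy as np

def discriminator_loss(d_real, d_fake):
    # d_real, d_fake: discriminator outputs (probability of "real") on real and generated data.
    # The discriminator wants d_real close to 1 and d_fake close to 0.
    return -np.mean(np.log(d_real + 1e-8) + np.log(1.0 - d_fake + 1e-8))

def generator_loss(d_fake):
    # The generator wants the discriminator to be fooled, i.e. d_fake close to 1.
    return -np.mean(np.log(d_fake + 1e-8))

# One conceptual training step (G and D stand in for the generator and discriminator):
#   z      = np.random.randn(batch_size, noise_dim)     # sample random noise
#   x_fake = G(z)                                        # generator's imitation of data
#   d_loss = discriminator_loss(D(x_real), D(x_fake))    # update D: tell real from fake
#   g_loss = generator_loss(D(x_fake))                   # update G: fool D
```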
So at the end, let's say we've trained our GAN model, now what do we do with it? Well, to actually use the GAN to generate new instances, all we have to do is go back to the generator component of the model and start with points in that noise distribution, pass them through the generator, and sample to create new data instances, and here sample just means picking different points from the initial distribution of random noise. So the concept here is that the GAN is effectively learning a transformation of a distribution, moving from Gaussian noise, pure random noise, to this learned target distribution. What is really cool is that we can think about how to traverse the space of this learned distribution, meaning that different samples from Gaussian noise are going to end up in different places in the data world, and the GAN is learning a mapping that allows us to do this transformation. It turns out that you can actually traverse and interpolate in the starting space of Gaussian noise and generate samples that can then be traversed across the target data distribution, and the results of this are pretty cool, because you can apply the same type of initial perturbation, starting from your noise sample, to then produce progressions of similar images across the target distribution. So GANs are really a very neat framework to think about, and they are indeed the architecture that was used to generate some of those types of face images that I showed on slide one. And there are different advances that have been proposed from a modeling perspective that allow better results to come from GANs. One neat thing that has been done, that I'll spend a little bit of time on, is thinking about how we can better control the generative process of a GAN: let's say we want to condition on some different types of information and generate samples accordingly. What we can do is not supply only the random noise as the starting point, but also look at other forms of conditioning factors that can then guide the generations from a GAN. One core idea here is this notion of paired translation, meaning that we can consider pairs of inputs, for example a scene and a corresponding segmentation map, and now train the discriminator to operate not over single instances but over pairs, and say, okay, what's a true pair of an image and a segmentation map versus a fake pair of an image and a segmentation map. In practice, what this allows us to do is these conversions of paired translation, where the generator can take in an input of, let's say, a segmentation of a scene and generate the output of the real-world view of the scene, or take a street view map and generate kind of a grid view, and different instances of this, day to night, black and white to color, and so on and so forth, of this paired-translation type of generation. A very common way that this is employed is in looking at maps, translating between, let's say, a map view and an aerial view of an image, and vice versa.
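A tiny sketch of the "discriminator over pairs" idea just described: the conditioning image and a candidate output are stacked along the channel axis, so the discriminator judges the pair rather than a single image. The shapes below are placeholders.

```python
import numpy as np

def pair_input(condition_image, output_image):
    # Stack the conditioning image (e.g. a segmentation map) with the candidate output
    # along the channel axis, so the discriminator classifies the *pair* as real or fake.
    return np.concatenate([condition_image, output_image], axis=-1)

# Hypothetical shapes: a 3-channel segmentation map and a 3-channel rendered scene
segmentation = np.zeros((128, 128, 3))
real_scene   = np.zeros((128, 128, 3))
fake_scene   = np.zeros((128, 128, 3))        # produced by the generator from the segmentation

real_pair = pair_input(segmentation, real_scene)   # discriminator target: "real pair"
fake_pair = pair_input(segmentation, fake_scene)   # discriminator target: "fake pair"
print(real_pair.shape)                             # (128, 128, 6)
```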
A very related idea is this notion of: what if we don't have just linked pairs, but we have two related data distributions, and again we want to learn some relationship and transformation that links those two domains? An architecture that was proposed to do this is what is called a CycleGAN, which says, let's say we don't have explicit pairs, but rather we have a bunch of instances in one domain and a bunch of instances in another domain, not paired; how can we learn a relationship mapping these two distributions together? The concept of the CycleGAN is that we can basically impose a cyclic loss, linking the discriminator and the generator in one domain to those of another. In this example that I'm showing, what ends up being accomplished in practice is that you can do these functions where, let's say, you have images of horses and images of zebras, and the model shown here is a CycleGAN that has learned a mapping from the horse domain to the zebra domain and is now generating those instances accordingly. Notice in this example that there's not only the transformation of the skin of the horse to stripes, but also the grass is looking different, it's less green, and so it's an entire transformation of the image distribution itself. Again, I think this concept of domain translation, domain transformation, gets at the notion that GANs are very effective ways to basically learn transformations of data distributions: we can go from learning a transformation from Gaussian noise to our target data distribution, or, with CycleGANs, from one data space X to a target data space Y and back. Earlier, in the prior lecture, there were some questions about how we actually generated the audio example; that was done with a model that operates on images of spectrograms, which are the conversion of an audio waveform to a spectrogram image, and effectively we built a CycleGAN model that can transform speech from one domain to speech of another domain. Following this conversion of the audio waveforms to these spectrogram images, the CycleGAN was trained to perform this conversion, and specifically what we did was we took recordings of Alexander's voice, Alexander speaking, converted them to spectrogram images, built our CycleGAN model, took recordings of Obama's voice, did the same thing, and then learned the CycleGAN model to perform this domain transformation between audio of Alexander's voice and audio of Obama's voice. So today we've talked about these two classes of deep generative models, specifically focusing on latent variable models that can learn these lower-dimensional representations and encodings of the data, and then this concept of putting these adversarial networks together to be able to generate new data instances. VAEs and GANs are really the two deep learning frameworks that brought generative modeling and generative AI to life a few years back, and since then the field has really expanded and taken off, and there is a particular class of models that have led to quite tremendous advances in generative capabilities, not just in images but in many other domains. This is an image instance from one such model, from a class of models called diffusion models, which are very closely related to VAEs in the concept and how they're built, but they have this ability to generate new instances with very high fidelity and also be guided and conditioned on different forms of input, like text or other types of modalities, to really be able to impose more control over the generative process itself. Tomorrow we're going to talk about how these models work and talk about uses of them not only in images but in other domains as well. So with that, I'm going to close out: you'll be getting some hands-on experience with these latent variable models in the context of computer vision and in the context of facial
detection in particular thank you [Applause]
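To make the sampling and latent-space interpolation described above concrete, here is a minimal sketch. The `generator` below is a tiny stand-in model so the example runs; in practice it would be the trained GAN generator, and the latent dimension and image size are illustrative assumptions, not values from the lecture or lab.

```python
import tensorflow as tf

latent_dim = 100  # assumed dimensionality of the noise distribution

# Stand-in generator (noise -> tiny 8x8 "image"); a real one would be the
# trained GAN generator loaded from disk.
generator = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(latent_dim,)),
    tf.keras.layers.Dense(8 * 8, activation="tanh"),
    tf.keras.layers.Reshape((8, 8, 1)),
])

# 1) Sample new instances: draw points from Gaussian noise and pass them
#    through the generator.
z = tf.random.normal(shape=(16, latent_dim))
samples = generator(z, training=False)            # 16 synthetic images

# 2) Traverse the learned distribution: linearly interpolate between two
#    noise vectors and decode each intermediate point. Nearby points in
#    noise space map to similar images in data space.
z_start = tf.random.normal(shape=(1, latent_dim))
z_end = tf.random.normal(shape=(1, latent_dim))
alphas = tf.reshape(tf.linspace(0.0, 1.0, 10), (-1, 1))
z_path = (1.0 - alphas) * z_start + alphas * z_end  # 10 points along the path
interpolated = generator(z_path, training=False)    # a smooth image transition
```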
MIT_6S191_Introduction_to_Deep_Learning
MIT_6S191_Recurrent_Neural_Networks_Transformers_and_Attention.txt
[Music] okay so maybe if the those at the top can take their seats and we can get started my name is Ava and this is lecture two of 6s one91 thank you John thank you [Applause] everyone it should be a good time it's a pack time so today in this in this portion of the class we're going to talk about problems that we call sequence modeling problems and in the first lecture with Alexander we built up really about what deep learning is what are the essentials of neural networks what is a feedforward model and basically how do we train a neural network from scratch using gradient descent and so now we're going to turn our attention to a class of problems that involve sequential data or sequential processing of data and we're going to talk about how we can now build neural networks that are well suited to tackle these types of problems and we're going to do that step by step starting from the intuition and building up our Concepts and our knowledge from there starting back right where we left off with perceptrons and feedforward models so to do that I'd like to First motivate what we even mean when we talk about something like sequence modeling or sequential data so let's start with it super simple example let's say we have this image of a ball and it's moving somewhere in this 2D space and your task is to predict where this ball is going to travel to next now if I give you no prior information about the history of the ball its motion how it's moving so on your guess on its next pos position is probably going to be nothing but a random guess however now right if I give you an ADD addition to the current position of the ball information about where that ball was in the past the problem becomes much easier it's more constrained and we can come up with a pretty good prediction of where this ball is most likely to travel to next I love this example because while it's a you know visual of a ball moving in a 2d space right this is gets at the heart of what we mean when we're talking about sequential data or sequential modeling and the truth is that Beyond this sequential data is really all around us right my voice as I'm speaking to you the audio waveform is sequential data that could be split up into chunks and sequences of sound waves and processed as such similarly language as we express and communicate in the written form in text is very naturally modeled as a sequence of either characters individual letters in this alphabet or words or chunks that we could break up the text into smaller components and think about these chunks one by one in sequence beyond that it's everywhere right from everything from medical readings like EKGs to financial markets and stock prices and how they change and evolve over time to actually biological sequences like DNA or protein sequences that are representing and encoding the of of life and Far Far Beyond right so I think it goes without saying that this is a very rich and very diverse type of data and class of problems that we can work with here so when we think about now how we can build up from this to answer specific neural network and deep learning modeling questions we can go back to the problem Alexander introduced in the first lecture where we have a simple task binary classification and am I going to pass this class we have some single input and we're trying to generate a single output a classification based on that with sequence modeling we can now handle sequences of uh data that are sequences meaning that we can have words in sentences in a large body of 
text and we may want to reason about those sequences of words for example by taking in a sentence and saying okay is this a positive emotion a positive sentiment associated with that sentence or is it something different we can also think about how we can generate sequences based on other forms of data let's say we have an image and we want to caption it with language this is also can be thought of as a sequence modeling problem we're now given a single input we're trying to produce a sequential output and finally we can also consider tasks where we have sequence in sequence out let's say you want to translate speech or text between two different languages this is very naturally thought of as a many to many or a translation type problem that's ubiquitous in a lot of natural language translation types of Frameworks and so here right again emphasizing the diversity and richness of the types of problems that we can consider when we think about sequence so let's get to the heart of from a modeling perspective and from a neural network perspective how we can start to build models that can handle these types of problems and this is something that I personally kind of had a really hard time wrapping my head around of initially when I got started with machine learning how do we take something where we're mapping input to output and build off that to think about sequences and deal with this kind of time time nature to sequence modeling problems I think it really helps to again start from the fundamentals and build up intuition which is a consistent theme throughout this course so that's exactly what we're going to do we're going to go step by step and hopefully walk away with understanding of of the models for this type of problem okay so this is the exact same diagram that Alexander just showed right the perceptron we defined it where we have a set of inputs X1 through XM and our perceptron neuron our single neuron is operating on those to produce an output by taking these its weight Matrix doing this linear combination applying a nonlinear activation function and then generating the output we also saw how we can now stack perceptrons on top of each other to create what we call a layer where now we can take an input compute on it by this layer of neurons and then generate an output as a result here though still we don't have a real notion of sequence or of time what I'm showing you is just a static single input single output that we can now think about collapsing down the neurons in this layer to a simpler diagram right where I've just taken those neurons and simplified it into this green block and in this input output put mapping we can think of it as an input at a particular time step just one time step T and our neural network is trying to learn uh a mapping in between input and output at that time step okay now I've been saying okay sequence data it's data over time what if we just took this very same model and applied it over and over again to all the individual time steps in a data point what would happen then all I've done here is I've taken that same diagram I've just flipped it 90° it's now vertical where we have an input Vector of numbers our neural network is Computing on it and we're generating an output let's say we we have some sequential data and we don't just have a single time step anymore we have multiple individual time steps we start from x0 our first time step in our sequence and what we could do is we could now take that same model and apply it stepwise step by step to the 
other slices the other time steps in the sequence what could be a potential issue here that could arise from treating our sequential data in this kind of isolated step-by-step view yes so I heard some some comments back that inherently right there's this dependence in the sequence but this diagram is completely missing that right there's no link between time Step Zero time step two indeed right in this in this setting we're just treating the time steps in isolation but I think we can all hopefully appreciate that at output at a later time step we wanted to depend on the input and the observations we saw prior right so by treating these in ation we're completely missing out on this inherent structure to the data and the patterns that we're trying to learn so the key idea here is what if now we can build our neural network to try to explicitly model that relation that time step H time step to time step relation and one idea is let's just take this model and Link the computation between the time steps together and we can do this mathematically by introducing a variable that we call H and H of T stands for this notion of a state of the neural network and what that means is that state is actually learned and computed by the neuron and the neurons in this layer and is then passed on and propagated time step by time step to time step and iteratively and sequentially updated and so what you can see here now as we're starting to build out this modeling diagram is we're able to now produce a relationship where the output at a time step T now depends on both the input at that time step as well as the state from the prior time step that was just passed forward and so this is a really powerful idea right again this is an abstraction of that we can capture in the neural network this notion of State capturing something about the sequence and we're iteratively updating it as we make observations in this time in this sequence data and so this idea of passing the state forwards through time is the basis of what we call a recurrent cell or neurons with recurrence and what that means is that the function and the computation of the neuron is a product of both the current input and this past memory of previous time steps and that's reflected in this variable of the state and so on the right on the right hand side of this slide what you're seeing is basically that model that neural network model unrolled or unwrapped across these individual time steps but importantly right it's just one model that still has this relation back to itself okay so this is kind of the the Mind warpy part where you think about how do we unroll and visualize and reason about this operating over these individual time steps or having this recurrence relation with respect to itself so this is the core idea this notion of recurrence of a neural network architecture that we call RNN recurrent networks and rnn's are really the found one of the foundational Frameworks for sequence modeling problems so we're going to go through and build up a little more details and a little more of the math behind rnns now that we have this intuition about the state update and about the recurrence relation okay so our next step all we're going to do is just formalize this this thinking a little bit more the key idea that we talked about is that we have the state H oft and it's updated at each time step as we're processing the sequence that update is captured in what we call this recurrence relation and this is a standard neural network operation just like we 
saw in lecture one. All we're doing is taking the cell state variable h of t and learning a set of weights W, and that set of weights W is going to be a function of both the input at a particular time step and the information that was passed on from the prior time step in this variable h of t. What is really important to keep in mind is that for a particular RNN layer we have the same set of weight parameters, which are simply updated as the model is learned: same function, same set of weights; the difference is just that we're processing the data time step by time step. We can also think of this from another angle, in terms of how we can actually implement an RNN. We begin by initializing the hidden state and initializing an input sentence, broken up into the individual words that we want this RNN to process. To make updates to the hidden state of that RNN, all we're going to do is iterate through each of the individual words, the individual time steps in the sentence, and update the hidden state and generate an output prediction as a function of the current word and the hidden state. Then, at the very end, we can take that updated hidden state and generate the prediction for what word comes next at the end of the sentence. So this is the idea of how the RNN includes both a state update and an output that we can generate per time step.
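Here is a runnable sketch of the word-by-word loop just described, using the built-in `SimpleRNNCell`. The toy "sentence" of random word vectors and the tiny output layer are illustrative stand-ins, not the lecture's actual code.

```python
import tensorflow as tf

# A minimal next-step prediction loop over a sequence, one time step at a time.
rnn_cell = tf.keras.layers.SimpleRNNCell(units=64)
output_layer = tf.keras.layers.Dense(10)    # e.g. scores over a tiny vocabulary

sentence = [tf.random.normal((1, 8)) for _ in range(6)]  # 6 "words", 8-dim vectors
state = [tf.zeros((1, 64))]                               # initialize hidden state h_0

prediction = None
for x_t in sentence:                      # process the sequence step by step
    h_t, state = rnn_cell(x_t, state)     # new state from current input and prior state
    prediction = output_layer(h_t)        # output prediction at this time step

# After the loop, `prediction` is the model's guess for what comes next.
```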
To walk through this component: we have this input vector x of t, and we can use a mathematical description, based on the nonlinear activation function and a set of neural network weights, to update the hidden state h of t. While this may seem complicated, it is really very similar to what we saw before: all we're doing is learning one weight matrix for updating the hidden state and one for transforming the input; we multiply those by their respective inputs, add them together, apply a nonlinearity, and use the result to update the actual state variable h of t. Finally, we can output an actual prediction at that time step as a function of that updated internal state h of t: the RNN has updated its state, we apply another weight matrix, and we generate an output prediction accordingly. Question from the audience: can you use different nonlinear functions instead of the tanh, and if so, how do you build intuition about which one to choose? Yes, absolutely. The question is how we choose the activation function besides tanh. You can indeed use different activation functions; we'll get a little later in the lecture into how we build that intuition, and we'll also see examples of slightly more complicated versions of RNNs that have multiple different activation functions within one layer of the RNN, which is another strategy that can be used. So this is the idea of updating the internal state and generating the output prediction, and as we started to see, we can depict this either using this loop or by unrolling the state of the RNN over the individual time steps, which can be a little more intuitive: you have an input at a particular time step, and you can visualize how that input and output prediction occur at each individual time step in your sequence. Making the weight matrices explicit, we can see that this ultimately leads to both updates to the hidden state and predictions of the output, and furthermore it re-emphasizes that it is the same weight matrix for the input-to-hidden-state transformation and the same weight matrix for the hidden-state-to-output transformation that are effectively being reused and re-applied across these time steps. Now, this gives us a sense of how we can go forward through the RNN to compute predictions. To actually learn the weights of this RNN, we have to compute a loss and use the technique of backpropagation to learn how to adjust our weights based on how we've computed that loss. Because we now have this way of computing things time step by time step, what we can simply do is take the individual loss from each time step, sum them all together, and get a total value of the loss across the whole sequence. Question from the audience: how does this progression differ from setting the bias? A bias is something that comes in separately from the x of that particular time step; is this different from serving as a bias? Yes: what I'm talking about here is specifically how the learned weights are updated as a function of learning the model, and how the weight matrix itself is applied to, let's say, the input and transforms it. In this visualization and in the equations we showed, we abstracted away the bias term, but the important thing to keep in mind is that the matrix multiplication is a function of the learned weight matrix multiplied against the input or the hidden state. Okay, so similarly, here is a little more detail on the inner workings of how we can implement an RNN layer from scratch using code in TensorFlow. As we introduced, the RNN itself is a layer, a neural network layer, and what we start by doing is initializing the three sets of weight matrices that are key to the RNN computation; that's what's done in the first block of code, where we see that initialization, and we also initialize the hidden state. The next thing we have to do to build up an RNN from scratch is to define how we actually make a prediction, a forward pass, a call to the model, and what that amounts to is taking the hidden state update equation and translating it into Python code that reflects the application of the weight matrices, the application of the nonlinearity, and the computation of the output as a transformation of that. Finally, at each time step, both the updated hidden state and the predicted output can be returned by the call function of the RNN. This gives you a sense of the inner workings and the computation translated to code, but in the end, TensorFlow and other machine learning frameworks abstract a lot of this away, such that you can simply define the dimensionality of the RNN you want and use built-in functions and built-in layers to define it in code. So again, this flexibility that we get from thinking about sequences allows us to consider different types of problems and different settings in which sequence modeling becomes important. We can look at settings where we're processing individual time steps across the sequence and generating just one output at the very end; maybe that's a classification of the emotion associated with a particular sentence. We can also think about taking a single input and generating outputs at individual time steps, and finally doing the translation of sequence input to sequence output. You'll get hands-on practice implementing and developing a neural network for this type of problem in today's lab, the first lab of the course.
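Here is a minimal sketch of the kind of from-scratch RNN layer just described: three weight matrices, a tanh state update, and a call that returns both the output and the updated hidden state. The class name, variable names, and dimensions are illustrative rather than the lecture's exact code.

```python
import tensorflow as tf

class SimpleRNNCellFromScratch(tf.keras.layers.Layer):
    """Minimal RNN cell: h_t = tanh(W_hh h_{t-1} + W_xh x_t), y_t = W_hy h_t."""

    def __init__(self, rnn_units, input_dim, output_dim):
        super().__init__()
        # the three weight matrices described in the lecture
        self.W_xh = self.add_weight(shape=(input_dim, rnn_units), initializer="glorot_uniform")
        self.W_hh = self.add_weight(shape=(rnn_units, rnn_units), initializer="glorot_uniform")
        self.W_hy = self.add_weight(shape=(rnn_units, output_dim), initializer="glorot_uniform")
        self.rnn_units = rnn_units

    def init_state(self, batch_size):
        # hidden state starts at zero
        return tf.zeros((batch_size, self.rnn_units))

    def call(self, x_t, h_prev):
        # state update: combine current input and previous state, apply tanh
        h_t = tf.math.tanh(tf.matmul(x_t, self.W_xh) + tf.matmul(h_prev, self.W_hh))
        # output prediction as a transformation of the updated state
        y_t = tf.matmul(h_t, self.W_hy)
        return y_t, h_t

# One time step of usage:
cell = SimpleRNNCellFromScratch(rnn_units=64, input_dim=8, output_dim=10)
h = cell.init_state(batch_size=1)
y, h = cell(tf.random.normal((1, 8)), h)

# In practice, the built-in layer abstracts all of this away:
# rnn = tf.keras.layers.SimpleRNN(64)
```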
So, building up from this: we've talked about how an RNN works and what the underlying framework is, but ultimately, when we think about sequence modeling problems, we can also think about what unique aspects we need a neural network to effectively capture in order to handle this data well. We can all appreciate that sequences are not all the same length: a sentence may have five words, it may have a hundred words, and we want the flexibility in our model to handle both cases. We need to be able to maintain a sense of memory, to track the dependencies that occur in the sequence: things that appear very early on may have importance later on, and we want our model to reflect that and pick up on it. The sequence inherently has order, and we need to preserve that. And we need to learn a conserved set of parameters that are used across the sequence and updated. RNNs give us the ability to do all of these things; they're better at some aspects of this than others, and we'll get into a little bit of why that is, but the important thing to keep in mind as we go through the rest of the lecture is what we're actually asking our neural network to be able to do in practice, in terms of the capabilities it has. So let's now get into more specifics about a very typical sequence modeling problem that you're going to encounter, and it's the following: given a stretch of words, we want to be able to predict the next word that comes after that stretch. Let's make this very concrete. Suppose we have the sentence "this morning I took my cat for a walk." Our task could be as follows: given the first words in this sentence, we want to predict the word that follows, "walk." How can we actually do this? Before we think about building our RNN, the very first thing we need is a way to actually represent this text, this language, to the neural network. Remember, neural networks are just numerical operators; their underlying computation is just math implemented in code, and they don't really have a notion of what a word is. We need a way to represent words numerically so that the network can compute on them and understand them; they can't interpret words, but they can interpret and operate on numerical inputs. So there's a big question in this field of sequence modeling and natural language: how do we actually encode language in a way that is understandable and makes sense for a neural network to operate on numerically? This gets at the idea of what we call an embedding, which means we want to be able to transform an input in some different modality, like language, into a numerical vector of a particular size that we can then give as input to our neural network model and operate on. With language, there are different ways we can think about building this embedding. One very simple way: let's say we have a vast vocabulary, a set of all the different and unique words in English, for example. We can take those unique words and map each one to a number, an index, such that each distinct word in the vocabulary has a distinct index. Then we construct vectors whose length is the size of the vocabulary and indicate, with a binary one or zero, whether or not that vector represents that word or some other word. This is the idea of what we call a one-hot embedding or a one-hot encoding.
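As a minimal sketch of this indexing and one-hot construction; the tiny vocabulary here is made up purely for illustration.

```python
import tensorflow as tf

# A toy vocabulary: each distinct word gets a distinct index.
vocab = ["a", "cat", "for", "i", "morning", "my", "this", "took", "walk"]
word_to_index = {word: i for i, word in enumerate(vocab)}

# A one-hot vector has length equal to the vocabulary size, with a single 1
# at the index of the word it represents.
def one_hot(word):
    return tf.one_hot(word_to_index[word], depth=len(vocab))

print(one_hot("cat"))   # [0, 1, 0, 0, 0, 0, 0, 0, 0]
```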
This one-hot encoding is a very simple but very powerful way to represent language in a numerical form such that we can operate on it with a neural network. Another option is to do something a little fancier and try to learn a numerical vector that maps words, or other components of our language, into some sort of space, where the idea is that things that are related to each other in language should be numerically similar, close to each other in this space, and things that are very different should be numerically dissimilar and far away in this space. This too is a very powerful concept: learning an embedding and then taking those learned vectors forward to a downstream neural network. So this solves a big problem about how we actually encode language. The next thing, in terms of how we tackle this sequence modeling problem, is that we need a way to handle sequences of differing length: a sentence of four words, a sentence of six words; the network needs to be able to handle that. The issue that comes with handling variable sequence lengths is that, as your sequences get longer and longer, your network needs the ability to capture information from early in the sequence, process it, and incorporate it into the output perhaps much later in the sequence. This is the idea of a long-term dependency, or of memory in the network, and it is another fundamental problem in sequence modeling that you'll encounter in practice. The other aspect we'll touch on briefly is the intuition behind order: the whole point of a sequence is that things appearing in a defined order capture something meaningful, so even if we have the same set of words, if we flip around the order, the network's representation and modeling of that should be different and should capture that dependence on order. All this is to say that this example of natural language, and the question of next-word prediction, highlights why this is a challenging problem for a neural network to learn and model, and how we can keep that in the back of our minds as we actually implement, test, and build these algorithms and models in practice. One quick question: with a large embedding, how do you know what dimension of space you're supposed to use to group things together? This is a fantastic question about how large we set that embedding space. You can envision that, as the number of distinct things in your vocabulary increases, you may first think a larger space would be useful, but it's not true that strictly increasing the dimensionality of the embedding space leads to a better embedding. The reason is that it gets sparser the bigger you go, and effectively you're just making a lookup table that's more or less closer to a one-hot encoding, so you're defeating the purpose of learning the embedding in the first place. The idea is to strike a balance: a dimensionality large enough that you have the capacity to map all the diversity and richness in the data, but small enough that it's efficient and the embedding actually gives you an efficient bottleneck representation. That's a design choice, and there are works that show what an effective embedding space for language is, for example, but that's the balance that we keep in mind.
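For contrast with the one-hot sketch above, here is a minimal example of a learned embedding using the built-in Keras `Embedding` layer. The vocabulary size and embedding dimension are illustrative assumptions.

```python
import tensorflow as tf

vocab_size = 10000      # number of distinct words (assumed)
embedding_dim = 64      # size of each learned word vector: a design choice, as discussed

# Maps each integer word index to a dense, trainable 64-dimensional vector.
embedding = tf.keras.layers.Embedding(input_dim=vocab_size, output_dim=embedding_dim)

word_indices = tf.constant([[7, 42, 3, 99]])   # a sentence as word indices
word_vectors = embedding(word_indices)          # shape: (1, 4, 64)

# These vectors are learned during training, so related words can end up
# close together in the embedding space.
```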
I'm going to keep going for the sake of time, and then we'll have time for questions at the end. Okay, so that gives us RNNs, how they work, and where we are with these sequence modeling problems. Now we're going to dive into how we actually train the RNN, using that same algorithm of backpropagation that Alexander introduced. If you recall, in a standard feedforward network the operation is as follows: we take our inputs, we compute on them in the forward pass to generate an output, and when we backpropagate, when we try to update the weights based on the loss, we go backwards and backpropagate the gradients through the network, back towards the input, to adjust these parameters and minimize the loss. The whole concept is that we have our loss objective and we're trying to shift the parameters of the model, the weights of the model, to minimize that objective. With RNNs there's now a wrinkle, because we have this loss that's computed time step by time step as we do this sequential computation and then added up at the very end to get a total loss. What that means is that when we make our backward pass, we have to backpropagate the gradients per time step and then finally across all the time steps, from the end all the way back to the beginning of the sequence. This is the idea of backpropagation through time, because the errors are additionally backpropagated along this time axis, back to the beginning of the data sequence. Now, you can maybe see why this can get a little bit hairy. If we take a closer look at how this computation actually works, backprop through time means that as we go stepwise, time step by time step, we have to do this repeated computation involving the weight matrix: weight matrix, weight matrix, weight matrix, and so on. The reason this can be very problematic is that if those values are very large, and you multiply or take derivatives with respect to those values in a repeated fashion, you can get gradients that grow excessively large and uncontrollably, that explode, such that learning is no longer really tractable. One thing that's done in practice is to effectively cut these gradients back, scale them down, so that we can learn effectively.
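As a minimal sketch of summing the per-time-step losses and scaling the gradients down before the update; the tiny model and random data here stand in for a real training setup.

```python
import tensorflow as tf

# Toy setup so the example runs end to end.
model = tf.keras.Sequential([
    tf.keras.layers.SimpleRNN(32, return_sequences=True, input_shape=(None, 8)),
    tf.keras.layers.Dense(1),
])
loss_fn = tf.keras.losses.MeanSquaredError()
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)

inputs = tf.random.normal((4, 20, 8))    # (batch, time steps, features)
targets = tf.random.normal((4, 20, 1))   # a target at every time step

with tf.GradientTape() as tape:
    predictions = model(inputs)
    loss = loss_fn(targets, predictions)  # averages the per-time-step losses

grads = tape.gradient(loss, model.trainable_variables)
# Clip the global norm of the gradients so they cannot explode.
grads, _ = tf.clip_by_global_norm(grads, clip_norm=1.0)
optimizer.apply_gradients(zip(grads, model.trainable_variables))

# Equivalently, Keras optimizers accept clipping directly:
# optimizer = tf.keras.optimizers.Adam(1e-3, clipnorm=1.0)
```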
You can also have the opposite problem: if your values start out very small and you have these repeated matrix multiplications, your values can shrink very quickly and become diminishingly small. This is also quite bad, and there are strategies we can employ in practice to try to mitigate it as well. The reason this notion of diminishing or vanishing gradients is a very real problem for learning an effective model is that it undermines our ability to model long-term dependencies. As you grow your sequence length, you need a larger memory capacity to track these longer-term dependencies, but if your sequence is very long and your gradients keep vanishing, you lose all ability, as you go further out in time, to learn something useful and keep track of those dependencies within the model. What that means is that the network's capacity to model those dependencies is reduced or destroyed. So we need real strategies to mitigate this in the RNN framework, because of its inherent sequential processing of the data. In practice, going back to one of the earlier questions about how we select activation functions, one very common thing done in RNNs is to choose the activation functions wisely, to mitigate this shrinking-gradient problem by using activation functions whose derivatives are either zero or one, namely the ReLU activation function. Another strategy is to initialize the weights, the actual first values of the weight matrices, smartly, to get them to a good starting point such that once we start making updates we're less likely to run into this vanishing gradient problem as we do those repeated matrix multiplications. The final idea, and the most robust one in practice, is to build a more robust neural network layer, a more robust recurrent cell itself, and this is the concept of what we call gating: effectively introducing additional computations within the recurrent cell so that it can selectively keep, or selectively remove and forget, aspects of the information being input into the recurrent unit. We're not going to go into detail about how this notion of gating works mathematically, for the sake of time and focus, but the important thing I want to convey is that there is a very common architecture called the LSTM, or long short-term memory network, that employs this notion of gating to be more robust than a standard RNN in tracking these long-term dependencies. The core idea to take away from this notion of gating is, again, that we're thinking about how information is updated numerically within the recurrent unit. What LSTMs do is very similar to how the RNN on its own functions: they maintain a variable, a cell state; the difference is that the cell state is updated using some additional layers of computation that selectively forget some information and selectively keep other information. This is the intuition behind how the different components within an LSTM interact with each other to give a more intelligent update to the cell state that better preserves the core information that's necessary. The other thing I'll note is that although I'm speaking about this operation of forgetting or keeping at a very high level and in an abstract way, what I want you to keep in mind is that this is all learned as a function of actual weight matrices that are defined as part of these neural network units; all of this is our way of abstracting and reasoning about the mathematical operations at the core of a network or a model like this. Okay, so to close out our discussion of RNNs, we're going to touch very briefly on some of the applications where we've seen them employed and where they are commonly used. One is music generation, and this is what you're actually going to get hands-on practice with in the software labs: building a recurrent neural network from scratch and using it to generate new songs. The example I'll play is a demo from a few years ago of a music piece generated by a recurrent neural network based architecture that was trained on classical music and then asked to produce a portion of a piece that was
famously unfinished by the composer France Schubert who died before he could complete this um famous unfinished Symphony and so this was the output of the neural network that uh was asked to now compose two new movements based on um the prior prior true movements let's see if [Music] it goes on but you can you can appreciate the quality of that and I will also like to briefly highlight that on Thursday we're going to have an awesome guest lecture that's going to take this idea of Music generation to a whole new level so stay tuned I'll just give a a teaser and a preview for that more to come we also introduced uce again this problem of you know sequence classification something like assigning a sentiment to a input sentence and again right we can think of this as a classification problem where now we reason and operate over the sequence data but we're ultimately trying to produce a probability associated with that sequence whether a sentence is positive or negative for example so this gives you right two flavors music generation sequence to sequence generation and also classification that we can think about with using recurrent models but you know we've talked about right these design criteria of what we actually want any neural network model to do when handling sequential data it doesn't mean the answer has to be an RNN in fact RNN have some really fundamental limitations because of the very fact that they're operating in this time step by time step manner the first is that to encode really really long sequences the memory capacity is effectively bottlenecking our ability to do that and what that means is that information in very long sequences can be lost by imposing a bottleneck in the size of that hidden state that the RNN is actually trying to learn furthermore because we have to look at each slice in that sequence one by one it can be really computationally slow and intensive to do this for when things get longer and longer and as we talked about with respect to long-term dependencies Vanishing gradients the memory capacity of a standard RNN is simply not that much for being able to track uh sequence data effectively so let's let's break this down these problems down a little further right thinking back to our high level goal of sequence modeling we want to take our input broken down time step by time step and basically learn some neural network features based on that and use those features to now generate series of outputs rnm say okay we're going to do this by linking the information time step to time step via this state update and Via this idea of recurrence but as we saw right there these core limitations to that iterative computation that iterative update indeed though if we think about what we really want what we want is we now want to no longer be constrained to thinking about things time step by time step so long as we have a continuous stream of information we want our model to be able to handle this we want the computation to be efficient we want to be able to have this long memory capacity to handle those dependencies and uh and Rich information so what if we eliminated this need to process the information sequentially time step by time step get away with recurrence entirely how could we learn a neural network in this setting a naive approach that we could take is we say okay well you know we have sequence data but what if we all mush it all together smash it together and concatenate it into a single Vector feed it into the model calculate some features and then use that 
to generate output well this may seem like a good first try but yes while we've eliminated this recurrence we've completely eliminated the notion of sequence in the data we've restricted our scalability because we've said okay we are going to put everything together into a single input we've eliminated order and again as a result of that we've lost this memory capacity the core idea that came about about 5 years ago when thinking about now how can we build a more effective architecture for sequence modeling problems was rather than thinking about things time step by time step in isolation let's take a sequence for what it is and learn a neural network model that can tell us what parts of that sequence are the actually important parts what is conveying important information that the network should be capturing and learning and this is the core idea of attention which is a very very powerful mechanism to monitor uh neural networks that are used used in sequential processing tasks so to Prelude to what is to come and also to uh couple lectures down the line I'm sure everyone in this room hopefully has raise your hand if you've heard of GPT or chat GPT or Bert hopefully everyone who knows what that t stands for Transformer right the Transformer is a type of neural network architecture that is built on attention as its foundational mechanism right so in the remainder of this lecture you're going to get a sense of how attention Works what it's doing and why it's such a powerful building block for these big architectures like Transformers and I think attention is a beautiful concept it's really elegant and intuitive and hopefully we can convey that in what follows okay so the core nugget the core intuition is this idea of let's attend to and extract the most important parts of an input and what we'll specifically be focusing on is what we call Self attention attending to the input parts of the input itself let's look at this image of the hero Iron Man right how can we figure out what's important in this image a super naive way super naive would be just scan pixel to pixel and look at each one right and then be able to say okay this is important this is not so on but our brains are immediately able to look at this and pick out yes Iron Man is important we can focus in and attend to that that's the intuition identifying which parts of an input to attend to and pulling out the Associated feature that has this High attention score this is really similar to how we think about searching searching from across a database or searching across an input to pull out those important parts so let's say now you have a search problem you came to this class with the question how can I learn more about neural Network's deep learning AI one thing you may do in addition to coming to this class is to go to to the internet go to YouTube and say let's try to find something that's going to help me in this search right so now we're searching across a giant database how can we find and attend to what's the relevant video in helping us with our search problem well you start by supplying an ask a query deep learning and now that query has to be compared to what we have in our database titles of different videos that exist let's call these keys and now our operation is to take that query and our brains are matching what my query is closest to right is it this this first uh video of beautiful elegant sea turtles in coral reefs how similar is my query to that not similar is it similar to this second key value key uh key entity 
the lecture 20 2020 introduction to deep learning yes is it similar to this last key no so we're Computing this effective attention mask of this metric of how similar our query is to each of these Keys now we want to be able to actually pull the relevant information from that extract some value from that match and this is the return of the value that has this highest notion of this intuition of attention this is a metaphor right an analogy with this problem of search but it can conveys these three key components of the attention mechanism and how it actually operates mathematically so let's break that down and let's now go back to our sequence model problem of looking at a sentence in natural language and trying to model that our goal with a neural network that employs self atttention is to look at this input and identify and attend to the features that are most important what we do is first right we're not going to I said we're not going to handle this sequence time step by time step but we still need a way to process and preserve information about the position and the order what is done in self attention and in Transformers is an operation that we call a position aware encoding or a positional encoding we're not going to go into the details of this mathematically but the idea is that we can learn an embedding that preserves information about the relative positions of the components of the sequence and they're neat and elegant um math solutions that allow us to do this very effectively but all you need to know for the purpose of this class is we take our input and do this computation that gives us a position aware embedding now we take that positional embedding and compute a way of that query that key and that value for our search operation remember our task here is to pull out what in that input is relevant to the our search how we do this is the message of this class overall by learning neural network layers and in intention and in Transformers that same positional embedding is replicated three times across three separate neural network linear layers that are used to compute the values of the query of the key and the value the these three sets of matrices these are the three uh The three principal components of that search operation that I introduce with the YouTube analogy query key and value now we do that same exact task of computing the similarity to be able to compute this attention score what is done is we take the query Matrix the key Matrix and Define a way to compute how similar they are remember right these are numerical matrices in some space and so intuitively you can think of them as vectors in the high dimensional space when two vectors are in that same space we can look and measure mathematically how close they are to each other using a metric uh by Computing the dot product of those two vectors otherwise known as the cosine similarity and that reflects how similar these query and key uh matrices are in this space that gives us a way once we employ some scaling to actually compute a metric of this attention weight and so now thinking about what this operation is actually doing remember right this whole point of query and key computation is to find the features and the components of the input that are important this self attention and so what is done is if we take the let's say visualizing the words in a sentence we can compute this self attention score this attention waiting that now allows us to interpret what is the relative relationship of those words in that sentence 
with respect to how they relate to each other, and that's all by virtue of the fact that we're learning this operation directly over the input and attending to parts of the input itself. We can then squish that similarity to be between zero and one using an operation known as a softmax, and this gives us concrete weights that constitute these attention scores. The final step in the whole self-attention pipeline is to take that attention weighting and our value matrix, multiply them together, and actually extract features from this operation. So it's a really elegant idea: taking the input itself and these three interacting components of query, key, and value to not only identify what's important but actually extract relevant features based on those attention scores. Let's put it all together, step by step. The overall goal is to identify and attend to the most important features. We take our positional encoding, which captures some notion of order and position; we extract the query, key, and value matrices using learned linear layers; we compute the metric of similarity using the cosine similarity, computed through the dot product; we scale it and apply a softmax to put it between zero and one, constituting our attention weights; and finally we multiply that weighting with the value matrix and use this to extract the features, relative to the input itself, that have these high attention scores. All of this put together forms what we call a single self-attention head.
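Here is a minimal sketch of a single self-attention head following the steps just summarized. The dimensions and layer names are illustrative, and the input `x` is assumed to already have positional encoding applied.

```python
import tensorflow as tf

seq_len, d_model = 8, 64                       # illustrative sizes
x = tf.random.normal((1, seq_len, d_model))    # positionally encoded input (assumed)

# Three learned linear layers produce the query, key, and value matrices.
W_q = tf.keras.layers.Dense(d_model, use_bias=False)
W_k = tf.keras.layers.Dense(d_model, use_bias=False)
W_v = tf.keras.layers.Dense(d_model, use_bias=False)

q, k, v = W_q(x), W_k(x), W_v(x)

# Similarity between queries and keys via dot product, scaled by sqrt(d).
scores = tf.matmul(q, k, transpose_b=True) / tf.math.sqrt(float(d_model))

# Softmax squashes the scores into attention weights between 0 and 1.
attention_weights = tf.nn.softmax(scores, axis=-1)   # (1, seq_len, seq_len)

# Weight the values by the attention scores to extract the attended features.
output = tf.matmul(attention_weights, v)              # (1, seq_len, d_model)
```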
The beauty of this is that you now have a hierarchy: you can put multiple attention heads together to design a larger neural network like a Transformer. The idea is that this attention mechanism is really the foundational building block of the Transformer architecture, and the reason this architecture is so powerful is that we can parallelize and stack these attention heads together to attend to different features, different components of the input, that are important. So we may have, let's say, one attention mask that's attending to Iron Man in the image, and that's the output of the first attention head, but we could have other attention heads that are picking up on other relevant features and other components of this complex space. Hopefully you've now got an understanding of the inner workings of this mechanism, its intuition, and the elegance of this operation. Attention is now really the basis of the Transformer architecture, applied to many different domains and many different settings. Perhaps the most prominent and notable is in natural language, with models like GPT, which is the basis of a tool like ChatGPT, and we'll actually get hands-on experience building and fine-tuning large language models in the final lab of the course, and go into more of the details of these architectures later on as well. It doesn't stop there, though: because of this natural notion of what language and sequence are, the idea of attention and of a Transformer extends far beyond human text and written language. We can model sequences in biology, like DNA or protein sequences, using these same principles and these same architectural structures, to reason about biology in a very complex way and do things like accurately predict the three-dimensional shape of a protein based solely on sequence information. Finally, Transformers and the notion of attention have been applied to things that are not intuitively sequence data or language at all, even to tasks like computer vision, with architectures known as Vision Transformers that again employ this same notion of self-attention. So, to close up and summarize: this was a whirlwind sprint through what sequence modeling is; how RNNs are a good first starting point for sequence modeling tasks, using this notion of time-step processing and recurrence; how we can train them using backpropagation through time; how we can deploy RNNs and other types of sequence models for a variety of tasks, music generation and beyond; how we can go beyond recurrence to learn self-attention mechanisms that model sequences without having to go time step by time step; and finally, how self-attention can form the basis for very powerful, very large architectures like large language models. That concludes the lecture portion for today. Thank you so much for your attention and for bearing with us through this sprint and this boot camp of a week. With that, I'll close and say that we now have open time to talk amongst each other, talk with the TAs and the instructors about your questions, get started with the labs, and get started implementing. Thank you so much.
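As a complement to the single-head sketch earlier, the stacking of attention heads described above is available as a built-in Keras layer; a minimal usage example with illustrative dimensions:

```python
import tensorflow as tf

# Multi-head self-attention: several attention heads computed in parallel,
# each with its own learned query/key/value projections.
mha = tf.keras.layers.MultiHeadAttention(num_heads=8, key_dim=64)

x = tf.random.normal((1, 10, 512))         # (batch, sequence length, model dim)
attended = mha(query=x, value=x, key=x)     # self-attention: q, k, v all come from x
print(attended.shape)                       # (1, 10, 512)
```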
MIT_6S191_Introduction_to_Deep_Learning
MIT_6S191_2020_Machine_Learning_for_Scent.txt
you hey everybody first of all thank you for inviting me thank you for organizing all this this seems like a really really cool what's it called No so j-term is Harvard what's this called IP okay cool so this I'm sure there's many different courses you could choose from it's really cool that you were able to choose this one so I'm going to tell you a bit about some of the work that I'm doing now in the past I've done kind of machine learning for biology have done machine learning systems I'm kind of pure methodology stuff and this is a bit of a weird project but it's my favorite project I've done in my life so far I've waited a long time to do it it's still really early so this isn't like a collection of you know great works and great conferences at this point this is like kind of fresh off the presses so any feedback actually is is definitely welcome so kind of venturing into unknown territories and I should mention this is the work of many people I'm very proud to represent them here but this is definitely not a solo effort at all even though I'm the only person here so my name is Alex I'm a research scientist at Google research there's a when you work at a mega Corp there's interesting levels of kind of organization of things so if you hear kind of these words I'll kind of explain what the different tiers mean so Google research is all the researchers at Google Google brain is the sub team that's existed for some time that focuses on deep learning and then I lead a team inside of Google brain that focuses on machine learning for olfaction and Google research is big when I joined it didn't appreciate just how big it was so there's 3,500 researchers and engineers across 18 offices I think is actually out of date in 11 countries and the large scale mandate is to make machines intelligent and improve people's lives and that could mean a lot of different things and our approach generally and the number one bullet here is kind of my favorite and this is where I spend most of my time is doing foundational research right so we're kind of in at least in my opinion another golden era of industrial research kind of like you know Bell Labs and Xerox PARC like those those eras now we have really wonderful thriving industrial research labs today and I feel really fortunate to be able to kind of work with people who are in this environment we also build tools to enable research and democratize artificial intelligence and machine learning tensorflow like we've got the shirts up there I don't have that shirt so that's kind of a collector's item I guess we you know open-source tools to help people I'll be able to use and deploy machine learning in their own endeavors and then also internally enabling Google products with artificial intelligence and that was kind of one of the original or one of the activities that Google brain has done a lot historically is to collaborate with teams across Google to to add artificial intelligence and here's some logos of some of the products that AI and ml has impacted you can see you know YouTube on there you can see search ads Drive Android and a lot of these have some things in common which is Google knows a lot about what the world looks like and a lot about what the world sounds like but this is kind of where it gets a little bit sci-fi this is where I step in it doesn't know a lot about what the world smells like and tastes like and that might seem silly but there's a lot of restaurants out there a lot of menu items there's a lot of natural gas leaks there's a lot 
of sewage leaks there's a lot of things that you might want to smell or avoid and further than that in terms of you know building like a Google Maps for what the world tastes and smells like there's control systems that were you might actually want to know what something smells like like a motor is burned out right or you might want to know what something tastes like if there's a contaminant in some giant you know shipment of orange juice or something like that so we haven't digitized the sense of smell it seems a little bit silly that we might want to do that but that was perhaps something that seemed silly for vision before the talkies right before movies and before audition when the Kronecker or the phonograph came about so those those were weird things to think about for those sensory modalities in the 1800s and in the nineteen hundreds but right now it seems like digitizing sense and flavor are the silly things to think about but nonetheless we persevere and work on this so we're starting from the very very beginning with the simplest problem and I'll describe that in a second but first simple faction facts so my training is actually in olfactory neuroscience and I kind of through a secured this route ended up in machine learning and so since I have you captive here I want to teach you a little bit about how the olfactory system works so this is if you took somebody's face you sliced it in half and the interesting do I have a pointer here great so there's a big hole in the center of your face and that's where when you breathe in through your nose air is going through there most of the air that goes into your head is not smelled most of it just goes right to your lungs there's a little bit at the top there which is called it's just a patch of tissue it's like five or ten millimeters square seems a little bit more than that but it's very small and that's the only part of your head that I can actually smell and the way it's structured is nerves or axons from the nerve fibers from the olfactory bulb actually poke through your skull and they innervate the olfactory epithelium and it's it's one of only two parts of your brain that leaves your skull and contacts the environment the other ones the pituitary gland that one dips into your bloodstream which is kind of cheating and there's three words that kind of sometimes get used in the same sentence a taste scent and flavor so taste lives on your tongue flavor is a collaboration between your nose and your tongue right so what happens when you eat something is you not you masticate it you chew it up and that releases a bunch of vapors and there's a chimney effect where the heat of those vapors and of your own body shoots it back up your nose it's called retro days or olfaction have you noticed if you've had a cold things taste kind of more bland that's because your sense of smell is not working as much and is not participating in flavor so little factoids there for you before you get to the machine learning part so there's three colors envision RGB and there's three cones or cell types photoreceptor types in your eye that roughly correspond to RGB there's 400 types in the nose and we know a lot less about each one of these that we do about the photoreceptors we don't know what the receptors actually look like they've never been crystallized so we can't like build deterministic models of how they work and in in mice there's actually a thousand of these and there's two thousand in elephants and maybe elephants smell better this could be an 
evolutionary byproduct but they don't you we actually don't know but it's another fun party fact they're also they comprise an enormous amount of your genome so it's two percent of your genome which is of the protein coding genome which is an immense expense so like for something that we actually pay comparatively little attention to in our daily lives it's actually an enormous part of our makeup right so worth paying attention to and we don't really know which receptors respond to which ligand so basically we don't know enough about the sense itself to moderate model it deterministically which is kind of a hint like maybe we should actually skip over modeling the sense and model the direct perception that people have this is my other favorite view of the nose instead of cutting you know sagittal II like this this is a coronal section this outline is the Airways of your turbinates I think this is a really beautiful structure I think it's under top the curly bits that are big down here this is just where kind of air is flowing it's it's humidified and then this little bit up top is where you actually smells so the upper and lower turbinates and this is what I used to study in mice this is a mouse upper and lower turbinates you notice it's a lot more curly the more that smell is important to an organism the curlier this gets meaning the higher surface area there is inside of this particular sensory organ you should there's actually some cool papers for this in otters it's like this times a hundred it's really incredible if you go look at other and there's this notion that that smell is highly personalized and there's no logic to it like vision and audition we've got like fast Fourier transforms and we've got you know good board but we've got a lot of theory around how vision and hearing are structure and the idea that that is kind of pervasive is sent to somehow wishy-washy and people do smell different things it is somewhat personal but people also see different things right so who similar the this is black and blue to me but I'm sure it's white and gold to some people who is it white and gold for who is it black and blue - right so maybe vision isn't as reliable as we thought doesn't maintain itself on top of the pedestal I actually cannot unsee this as white and gold let you resolve that between your neighbors so there are examples of specific dimorphism x' in the sense of smell that can be traced to individual nucleotide difference --is right for single molecules which is pretty incredible though there's genetic underpinnings to the dye morphisms that we perceive and smell and they are common there's a lot there's a lot more snips that look like likely dye morphisms I just haven't been tested but you know 5% of the world is colorblind and 15% of the world has selective hearing loss right so let's give the sense of smell a little bit more credit and let's be aware that we each see the world hear the world and smell the world a little bit differently it doesn't mean there's not a logic to it that doesn't mean that the brain evolutionarily speaking has adapted to tracking patterns out there in order to transduce them into useful useful behaviors that help us survive so that's a bit of background on olfaction what we're doing is starting with the absolute simplest problem right so when I mentioned foundational research I really meant it so this is gonna look quite basic so this is the problem you got a molecule on the left this is um vanillin right so this is the main flavor and olfactory 
component of the vanilla bean. The vanilla flower is white, so we think vanilla soap should be white, but the bean is actually a nice dark black when it's dried, and if you've ever seen vanilla extract, that's the color. Real vanilla is actually incredibly expensive and is the subject of a lot of criminal activity, because the beans can be easily stolen and then sold at incredible markups. That's the case for a lot of natural products in the flavor and fragrance space; there are a lot of interesting articles you can google to find out about that. So part of the goal here is that if we can design better olfactory molecules, we can actually reduce crime and strife. The problem is: we've got a molecule, let's predict what it smells like. Sounds simple enough, but there are a lot of different ways to describe what something smells like. You can describe it with a sentence: it smells sweet with a hint of vanilla and notes of creamy and a back note of chocolate, which just sounds funny for vanillin, but that's indeed the case. What we'd like to work with is a multi-label problem, where you've got some finite taxonomy of what something could smell like and then only some of those labels are activated, here creamy, sweet, vanilla, chocolate. So that's what we like to work with. Why is this hard? Why is this something that you haven't heard is already solved? This is Lyral. You've all probably smelled this molecule, you've probably had it on your skin; this is the dryer-sheet molecule, the fresh laundry smell. It's a very commercially successful molecule that is now being declared illegal in Europe because it's an allergen in some cases; in the U.S. we don't really care about that as far as I can tell, we always have different standards I suppose. The four main characteristics are muguet, which is another word for lily of the valley, that's the flower of the dryer-sheet smell, then fresh, floral, sweet. Here are some different molecules that smell about the same, and they look about the same: for this one we just clipped a carbon off of this chain, and for this one we just attached it to the functional group on the end. So why is this so hard? The main scaffold here is the same. But this molecule looks very different and smells the same; you can have wild structural differences for many different odor classes and still maintain the same odor percept. And this one is just a difference of where the electrons are in this ring, which it turns out structurally stiffens things, but even in static 3D representations, and certainly in the graph representation here, these look pretty close, and yet you've rendered something very commercially valuable into something useless. So we built a benchmark dataset, and I'll describe where it's from and what its characteristics are. We took two sources, basically perfume materials catalogs, like the Sears catalog for things that perfumers might want to buy. We've got about 5,000 molecules in this data set, and they include things that will make it into fine fragrances or into fried chicken flavoring or all kinds of different stuff, and on average there are four to five labels per molecule. Here's an example: this is vanillin, and it's actually labeled twice in the data set, and there's some consistency there.
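To make that multi-label setup concrete, here is a minimal sketch of how a descriptor taxonomy like the one just described can be turned into training targets. The molecules, SMILES strings, and descriptor lists below are made-up placeholders, not the actual catalog data; only the general shape of the problem, a finite vocabulary of odor descriptors with a handful active per molecule, comes from the talk.

```python
from sklearn.preprocessing import MultiLabelBinarizer

# Hypothetical examples: each molecule (here identified by a SMILES string)
# carries a small set of odor descriptors drawn from a finite taxonomy.
data = [
    ("O=Cc1ccc(O)c(OC)c1", ["sweet", "vanilla", "creamy", "chocolate"]),  # vanillin
    ("CC(=O)OCC",          ["fruity", "sweet"]),                          # placeholder
    ("CCS",                ["sulfurous", "garlic"]),                      # placeholder
]

smiles = [s for s, _ in data]
labels = [l for _, l in data]

# Turn the variable-length descriptor lists into a fixed-width binary matrix,
# one column per descriptor in the taxonomy. This is the multi-label target
# a model would be trained to predict from molecular structure.
mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(labels)

print(mlb.classes_)  # the descriptor vocabulary
print(Y)             # rows: molecules, columns: 0/1 per descriptor
```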
That's another question you might have: how good are people at labeling what these odors actually smell like? And the answer is that you and me, probably everybody in the room, most people are bad at it. That's because we grew up, or at least I did, with a Crayola crayon box that helped me do color-word associations, but I didn't have anything to help me do odor-word associations. You can learn this; it's harder to learn as an adult but you definitely can. I took my team to get perfume training and it's incredibly difficult, but it is a craft and it can be practiced. Amongst experts that are trained on the taxonomy, people end up being quite consistent; we have some human data that I don't have in this talk that indicates that's the case. There is a lot of bias in this data set, a skew, because these are perfumery materials, so we have over-representation of things that smell nice: lots of fruity, green, sweet, floral, woody. You don't see a lot of solvent, bread, or radish perfumes, so we have fewer of those molecules, which I guess is to be expected. The reason there's a spike at 30 is that we imposed a hard cutoff there; we just don't want too little representation of any particular odor class. And this next figure, there's no modeling here, it's just a picture, so let me break it down. We have a hundred and thirty-eight odor descriptors and they're arrayed on the rows and columns, so each row and each column has the same indexing system, an odor ID, and each i,j entry is the frequency with which those two odor descriptors show up in the same molecule. So if I had a lot of molecules that smell like both pineapple and vanilla (it doesn't happen, but if I did) then the pineapple-vanilla entry would be yellow. What shows up, just in the data, is structure that reflects common sense. Clean things show up together: pine, lemon, mint. Toasted stuff like cocoa and coffee often co-occurs in molecules at the mono-molecular level; those are correlated. Savory stuff like onion and beef has low co-occurrence with popcorn; you don't want beef popcorn, generally. Maybe actually that'd be good, I have no idea. Dairy, cheese, milk, stuff like that. So there's a lot of structure in this data set, and historically what people have done is treat the prediction problem of going from a molecule's structure to its odor one odor at a time, and what this indicates is that there's a ton of correlation structure here that you should exploit; you should do it all at once, hint: deep learning. What people did in the past is basically use pen and paper and intuition. This is Kraft's vetiver rule. I presented this slide once and took a little bit of a swipe at how simplistic this is, and Kraft was sitting right there, which was a bit of an issue; we had a good back and forth and he was like, yeah, we did this a long time ago and we can do better. But the essence of this is you observe patterns with your brain, your occipital cortex, you write down what they are, you basically train on your test set, and you publish a paper; that seems to be the trend in the classic structure-odor relationship literature. There are examples of these rules being used to produce new molecules, so ones that are generalizing; the ones that I've been able to find in searching the literature are in the upper right-hand corner here, but generally these rules are really hard to code up because there's some kind of fudge factor; it's not quite algorithmic.
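As a small illustration of the descriptor co-occurrence picture described a moment ago, here is one way such a matrix could be computed from multi-label annotations. The descriptor names and molecules are invented for illustration; only the idea, an i,j entry counting how often two descriptors appear on the same molecule, is from the talk.

```python
import numpy as np

# Hypothetical per-molecule descriptor lists (stand-ins, not the real catalog data).
molecule_labels = [
    ["pine", "lemon", "mint"],
    ["cocoa", "coffee", "sweet"],
    ["onion", "beef"],
    ["sweet", "vanilla", "creamy"],
    ["lemon", "sweet"],
]

# Build the descriptor vocabulary and an index for each descriptor.
vocab = sorted({d for labels in molecule_labels for d in labels})
index = {d: i for i, d in enumerate(vocab)}

# cooc[i, j] = number of molecules labeled with both descriptor i and descriptor j.
cooc = np.zeros((len(vocab), len(vocab)), dtype=int)
for labels in molecule_labels:
    for a in labels:
        for b in labels:
            cooc[index[a], index[b]] += 1

print(vocab)
print(cooc)  # the diagonal is each descriptor's overall frequency
```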
What people do now, the kind of incumbent approach, is to take a molecule and treat it as a bag of subgraphs. Let me explain what that means. You go through the molecule, think of it as a graph, and pick a radius. At radius zero you just ask what atoms are in this molecule; at radius one you ask what atom-atom pairs there are, are there carbon-carbons, carbon-sulfurs, carbon-oxygens. You go through this comprehensively up to a radius that you choose and you hash that into a bit vector, and that's your representation of the molecule. It's kind of a bag-of-words or bag-of-fragments representation. The modern incarnation of this is called Morgan fingerprints, so if you ever hear Morgan fingerprints or circular fingerprints, this is effectively the strategy that's going on. You typically put a random forest on top of that and make a prediction, and you can predict all kinds of things: toxicity, solubility, whether or not it's going to be a good material for photovoltaic cells or even a good battery. The reason this is the baseline is because it works really well and it's really simple to implement; once you've got your molecules loaded in, you can start making predictions in two lines of code with RDKit and scikit-learn, and it's a strong baseline, so it's kind of hard to beat sometimes. So you'd expect the next slide to be "we did deep learning at it", but let's take a step back first. In the course so far you've certainly touched on feed-forward neural networks and their most famous instantiation, the convolutional neural network. The trick there is you've got a bunch of images and human-labeled ground truth for what's in each image; you pass in the pixels, and through successive layers of mathematical transformations you progressively generalize what's in that image until you've distilled down its essence, then you make a decision, yes cat, no cat. And when you see some new image that was not in your training set, you hope that if your model is well trained you're able to take this unseen image and predict yes dog, no dog, even if it's a challenging situation. This has been applied in all kinds of different domains; neural rendering is one that I did not know a lot about, and that is super cool stuff. Pixels as input, predict what's in them. Audio as input: the trick there is to turn it into an image, so you calculate the spectrogram of the audio and then do a convnet on that, or an LSTM on the time slices, and then you transcribe the speech that's inside of that spectrogram. Text-to-text for translation, image captioning; all of these tasks have something in common, which is that, as was alluded to in the previous talk, they all have a regularly shaped input. The pixels are a rectangular grid, which is very friendly to the whole history of the statistical techniques we have. Text is very regular, a string of characters from a finite-size alphabet; you can think of that as a rectangle, a 1 wherever there's a character and a 0 for the unused part of the alphabet, and then the next character, and so on. A molecule is hard to put into a rectangle. You can imagine taking a picture of it and then trying to predict on that, but if you rotate it, it's still the same molecule and you've probably broken your prediction; that's really data-inefficient.
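Going back to the fingerprint-plus-random-forest baseline mentioned a moment ago, here is roughly what that looks like with RDKit and scikit-learn. This is a minimal sketch, not the actual benchmark code; the SMILES strings and labels are placeholders, and the fingerprint settings (radius 2, 2048 bits) are common defaults rather than anything stated in the talk.

```python
import numpy as np
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem
from sklearn.ensemble import RandomForestClassifier

# Placeholder molecules (SMILES) and multi-label odor targets.
smiles = ["O=Cc1ccc(O)c(OC)c1", "CC(=O)OCC", "CCS", "CCO"]
Y = np.array([
    [1, 1, 0],   # e.g. sweet, vanilla
    [1, 0, 0],   # e.g. sweet
    [0, 0, 1],   # e.g. sulfurous
    [0, 1, 0],
])

def morgan_bits(smi, radius=2, n_bits=2048):
    """Hash the bag of circular subgraphs up to `radius` into a fixed-length bit vector."""
    mol = Chem.MolFromSmiles(smi)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, radius, nBits=n_bits)
    arr = np.zeros((n_bits,), dtype=np.int8)
    DataStructs.ConvertToNumpyArray(fp, arr)
    return arr

X = np.stack([morgan_bits(s) for s in smiles])

# A random forest on top of the fingerprints; it handles multi-label targets directly.
model = RandomForestClassifier(n_estimators=100).fit(X, Y)
print(model.predict(X[:1]))
```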
These things are most naturally represented as graphs, and kind of like meshes, that's actually not very natural to give to classical machine learning techniques. But what's happened in the past three, four, five years is an increasing maturity of techniques that are broadly referred to as graph neural networks, and the idea is to not try to fudge the graph into being something that it's not, but to actually take the graph as input and make predictions on top of it. This has opened up a lot of different application areas. I'll talk about chemistry today, where we're predicting a property of a whole graph, but this has also been useful for protein-protein interaction networks, where you actually care about the nodes, or you care about the edges, will there be an interaction; for social network graphs, where people are nodes and friendships are edges and you might want to predict whether a friendship potentially exists and what its likelihood is; and for citation networks as well. Interactions between anything are naturally phrased as graphs, and so that's what we use. Let me show you how it works in practice for chemistry. First you've got a molecule, and it's nice to say "make it a graph", but what exactly does that mean? The way we do this is to give each node in the graph a vector representation, so you want to load information into the graph about what's at each atom. You might take the hydrogen count, the charge, the degree, the atom type (that might be a one-hot vector), you concatenate them, and you place that into the node of the graph. Then comes the GNN part, which is a message passing process: for every atom, you go grab its neighbors, you grab the vector representations of those neighbors, and you can sum them or concatenate them, you basically get to choose what that function is. You pass that to a neural network, it gets transformed (you're going to learn what that neural network is based on some loss), and then you put that new vector representation back in the same place you got it from. That's one round of message passing. The number of times you do this is what we call the number of layers in the graph neural network; sometimes you parameterize the neural network differently at different layers, sometimes you share the weights, but that's the essence of it. What happens is that by round five or six or so for this molecule, the node at the far end (a node now, no longer an atom, because information has been mixed) actually has information from the other end of the molecule, so over successive rounds of message passing you can aggregate a lot of information. And if you want to make predictions, it seems a little bit silly, but you just take the vector in every node once you're done with this and you sum them; you can take the average, you can take the max, you can hyperparameter-tune that for your application, but for whole-graph predictions the sum works really well in practice, and there are some reasons for that that have been discussed in the literature. That gives you a new vector, your graph representation; pass that to a neural network and you can make whatever prediction you like, in our case the multi-label problem of what does it smell like. And how well can we predict? Really well.
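Here is a minimal numpy sketch of the message passing scheme just described: sum the neighbors' vectors, transform them with a learned weight matrix, repeat for a few rounds, then sum-pool the node vectors and apply a readout for the multi-label prediction. The dimensions, weights, and the tiny example graph are all made up; a real implementation would learn the weights by backpropagation, typically inside a GNN framework.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy molecular graph: 4 nodes (atoms), adjacency matrix A, node features H
# (e.g. atom-type one-hots concatenated with charge, degree, hydrogen count).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 0],
              [0, 1, 0, 0]], dtype=float)
H = rng.normal(size=(4, 8))            # 4 nodes, 8 features each

# Randomly initialized weights standing in for learned parameters.
W_msg = [rng.normal(size=(8, 8)) * 0.1 for _ in range(3)]   # one per message-passing round
W_out = rng.normal(size=(8, 5)) * 0.1                       # readout to 5 odor descriptors

def relu(x):
    return np.maximum(x, 0.0)

# Message passing: each node sums its neighbors' vectors (plus its own), passes the
# result through a transformation, and writes the new vector back in place.
for W in W_msg:
    messages = A @ H + H          # aggregate neighbors + self
    H = relu(messages @ W)        # transform and update every node at once

# Whole-graph prediction: sum-pool the node vectors, then a readout with a
# sigmoid per odor descriptor (multi-label).
graph_vec = H.sum(axis=0)
logits = graph_vec @ W_out
probs = 1.0 / (1.0 + np.exp(-logits))
print(probs)   # predicted probability for each of the 5 descriptors
```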
On the x-axis is the performance of the strongest baseline model that we could come up with, which is a random forest on count-based fingerprints. When I said it's a bag of fragments, you can either say 0 or 1, is the fragment present, but it's better to count the number of those sub-fragments that are present; this is the strongest cheminformatic baseline that we know of. On the y-axis is the performance of our graph neural network, and you can see we're better than the baseline for almost everything. What's interesting is the ones we're not better at: bitter, musk, medicinal. In talking with experts in the field, the consensus is that the best way to predict those is to use molecular weight, and it's actually surprisingly difficult to get these graph neural networks to learn something global like molecular weight, so we've got some tricks we're working on to basically side-load graph-level information and hopefully improve performance across the board. So this is the hardest benchmark that we know about in structure-odor relationship prediction and we're pretty good at it, but we'd actually like to understand what our neural network has learned about odor, because we're not just making these predictions for the sake of beating a benchmark; we actually want to understand how odor is structured and use that representation to build other technologies. In the penultimate layer of the neural network stack on top of the GNN there's an embedding; you've talked about embeddings in this course, okay, cool. The embedding is the general representation of some input that the neural network has learned, and it's the last thing it uses to actually make a decision. Let me show you what our embeddings look like. The first two dimensions here are the first two principal components of a 63-dimensional vector; this is not t-SNE or anything like that, so you can think of this as a two-dimensional shadow of a 63-dimensional object. Each dot here is a molecule, and each molecule has a smell. If you pick one odor, like musk, we can draw a little boundary around where we find most of the musk molecules and color them, and we've got other odors as well, and it seems like they're sorting themselves out very nicely. We know they have to sort themselves out nicely because we classify well, so it's kind of tautologically true. But what's interesting is we also have these macro labels, like floral. We have many flowers in our data set, so you might wonder, where is floral in relation to rose or lily or muguet? It turns out that floral is this macro class and inside of it are all the flowers; there's kind of a fractal structure to this embedding space. We didn't tell it about that; it just learned naturally that there's an arrangement to how odor is arrayed. There's the meaty cluster, which conveniently looks like a T-bone steak if you squint your eyes; my favorite is the alcoholic cluster, because it looks like a bottle. I'm never retraining this network, because that's definitely not going to be true next time. This is an indication that something is being learned about odor, there's a structure happening here, and it's amazing this comes out in PCA; at least in the stuff that I've worked with, it almost never happens that a linear dimensionality reduction technique reveals real structure about the task.
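To show what looking at that embedding involves, here is a small sketch of projecting penultimate-layer embeddings onto their first two principal components with scikit-learn and pulling out one odor label. The `embeddings` array and the label mask are random placeholders standing in for the model's 63-dimensional activations; the real pipeline would extract them from the trained GNN.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Stand-ins for penultimate-layer embeddings of ~5,000 molecules (63-d in the talk)
# and a boolean mask for one odor label such as "musk".
embeddings = rng.normal(size=(5000, 63))
is_musk = rng.random(5000) < 0.05

# Project onto the first two principal components: a 2-D "shadow" of the 63-D space.
coords = PCA(n_components=2).fit_transform(embeddings)

musk_coords = coords[is_musk]
print("musk cluster centroid:", musk_coords.mean(axis=0))

# In practice you would scatter-plot `coords`, coloring points by odor label,
# to see whether molecules with the same label clump together.
```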
We view this embedding itself as an object of study, and we're in the beginning stages of trying to understand what it is, what it's useful for, what it does. We view it a little bit as the first draft, v0.01, of an RGB for odor, an odor space, an odor codec. Color spaces are really important in vision; without them we wouldn't really be able to have our cameras talk to our computers talk to our display devices. We need something like that if we're going to digitize the sense of smell, we need a theory or a structure, and because odor is something that doesn't exist in outer space, it's not universally true, it's uniquely planet Earth, it's uniquely human, taking a data-driven approach might not be that unreasonable; it might not be something that Newton or Goethe could have come across themselves through first principles, since evolution had a lot to do with it. So that's the embedding space; it's a global picture of how odor is structured through the eyes of a fancy neural network. But what about local structure? I could maybe wave my hands and tell you that nearby molecules smell similar because there are little clumps of stuff, but we can actually go and test that, and this is also the task that R&D chemists in flavor and fragrance engage in. They say, here's my target molecule, it's being taken off the market or it's too expensive, find me stuff nearby, and let's see what its properties are; maybe it's less of an allergen, maybe it's cheaper, maybe it's easier to make. Let's first use structure, nearest-neighbor lookups using those bag-of-fragments representations I showed you. The cheminformatics name for this distance is Tanimoto distance, it's Jaccard on the bit-based Morgan fingerprints, and this is the standard way to do lookups in chemistry. Let's start with this dihydro compound: if you look at the structural nearest neighbors, it gets little sub-fragments right, little pieces of the molecule all match, but almost none of them actually smell like the target molecule. Now if you use our GCN features instead, cosine distance in our embedding space, what you get is a lot of molecules that have the same kind of convex hull; they look really similar and they also smell similar. We showed this to a fragrance R&D chemist and she said, oh, those are all bioisosteres, and I said, that's awesome... wait, what's a bioisostere? I had no idea what that was. Bioisosteres are lateral moves in chemical space that maintain biological activity, the little things you can do to a molecule that make it look almost the same, but it's now a different structure, and they don't mess with its biological activity. To her eye, and again I'm not an expert in this specifically, these were all lateral moves in chemical space that she would have come up with, except for this one; she said, that's really interesting, I wouldn't have thought of that. And my colleague said the highest praise you can ever get from a chemist is "huh, I wouldn't have thought of that", so that's great, we've hit the ceiling. So I've shown you we can predict well relative to baselines, and the embedding space has some really interesting structure both globally and locally. The question now is whether this is all only true inside of this bubble, inside of this task that we designed, using a data set that we curated.
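Here is a small sketch of the two kinds of lookup compared above: Tanimoto (Jaccard) similarity on Morgan bit fingerprints via RDKit versus cosine similarity in a learned embedding space. The query and candidate molecules and the embedding vectors are placeholders; the point is just the two distance computations.

```python
import numpy as np
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

# Placeholder query and candidate molecules.
query = Chem.MolFromSmiles("CC(C)CCCC(C)(O)C")      # an arbitrary small alcohol
candidates = [Chem.MolFromSmiles(s) for s in ["CC(C)CCCC(C)C", "CCO", "c1ccccc1O"]]

def fingerprint(mol):
    return AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048)

# Structure-based lookup: Tanimoto similarity on Morgan bit fingerprints.
q_fp = fingerprint(query)
tanimoto = [DataStructs.TanimotoSimilarity(q_fp, fingerprint(m)) for m in candidates]
print("Tanimoto similarities:", tanimoto)

# Embedding-based lookup: cosine similarity between learned GNN embeddings
# (random vectors here, standing in for the model's penultimate-layer outputs).
rng = np.random.default_rng(0)
q_emb = rng.normal(size=63)
cand_embs = rng.normal(size=(3, 63))

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print("cosine similarities:", [cosine(q_emb, e) for e in cand_embs])
```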
It's generally a really good test to move out of your data set: will this model generalize to new, adjacent tasks? You've talked about transfer learning in this course, and domain adaptation. One of the big tricks in industrial machine learning is transfer learning: train a big model on ImageNet, then freeze it, take off the top layer that has the decisions for what the image classes are, so you've got dog and cat but maybe you want to predict house and car, you take off the dog-and-cat layer, you put on the house-and-car layer, and you just train that last layer. It's called fine-tuning or transfer learning, they're kind of related, and it works extremely well in images. But there are really no examples of this working in a convincing way, to my eye, in chemistry, so the question is, do we expect this to work at all? I like this progression in time: there was an xkcd cartoon which goes, all right, when a user takes a photo we should check if they're in a national park, easy, GPS lookup, and then we want to check if the photo is of a bird, and the response, I think this was around 2011, is I'll need a research team and five years. For fun, a team at Flickr later made exactly this by fine-tuning an ImageNet model: is this a bird, or is this a park? So there was a large technological leap in between those two points. This really, really works in images, but it's unclear if it works in general on graphs, or specifically in chemistry. What we did is we took our embeddings, we froze them, and we added a random forest on top, or logistic regression, I don't remember exactly which models. There are two main data sets that are the benchmarks in odor, the DREAM olfaction challenge and the Dravnieks data set; they're both interesting, they both have challenges, they're not very large, but that's the standard in the field. We now have state of the art on both of these through transfer learning to these tasks. This actually really surprised us, and it really encouraged us that we've learned something fundamental about how humans smell molecular structures. So the remaining question to me is: this is all really great, you've got a great neural network, but I occasionally have to convince chemists to make some of these molecules, and a question that often comes up is, why should I make this molecule, what about it makes it smell like vanilla or popcorn or cinnamon? So we'd like to try to open up the innards of the neural network, or at least expose what the model is attending to when it's making decisions. We set up a really simple positive control and built some methodology around attribution, and I'll show you what we did. The first thing was to set up a simple task: predict if there's a benzene in a molecule, where a benzene is a six-atom ring with three double bonds. This task is trivial, but there are a lot of ways to cheat on it if there are statistical anomalies; if benzene co-occurred with chlorine, you might just look at the chlorine and predict from that. So we wanted to make sure that our attributions weren't cheating, and we built an attribution method (this is something that's being submitted right now), and what should come out, and what does come out indeed, is a lot of weight on the benzenes and no weight elsewhere.
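The attribution method itself is not described in the talk (it was still under submission), so as an illustration only, here is one generic occlusion-style attribution applied to the toy GNN from the earlier sketch: mask each atom's features and measure how the prediction changes. This is a stand-in technique, not the authors' method, and all the weights and features are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Reuse the toy GNN setup: adjacency A, node features H0, fixed (stand-in) weights.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 0],
              [0, 1, 0, 0]], dtype=float)
H0 = rng.normal(size=(4, 8))
W_msg = [rng.normal(size=(8, 8)) * 0.1 for _ in range(3)]
w_out = rng.normal(size=8) * 0.1

def predict(H):
    """Tiny GNN forward pass: message passing, sum-pool, single-logit readout."""
    for W in W_msg:
        H = np.maximum((A @ H + H) @ W, 0.0)
    return float(H.sum(axis=0) @ w_out)

base = predict(H0)

# Occlusion-style attribution: zero out one atom's features at a time and record
# how much the prediction changes. Atoms whose removal changes the output most
# get the most "weight" in the attribution map.
attributions = []
for i in range(H0.shape[0]):
    H_masked = H0.copy()
    H_masked[i] = 0.0
    attributions.append(base - predict(H_masked))

print(attributions)
```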
We've verified that this is the case across lots of different molecules: when there's not a benzene there's no weight, when there is a benzene there's weight on the benzene, and sometimes some leakage; we've improved on this, and this isn't even our current best at this point. So this means we can go look at the actual odors that are in our dataset, like garlic. Garlic is actually really easy to predict: you count the number of sulfurs, and if there are a lot it's going to smell really bad, like rotten eggs, sulfurous or garlicky. So this is a bit of a sanity check. You'll notice that this molecule down here has a sulfur; these types of molecules show up in beer a lot, they're responsible for a lot of the hop aroma in beers. This one has a grapefruit characteristic or something like that, and these sulfurs are the sulfurs that eventually contribute to the skunked smell or taste of beer, because these molecules can oxidize, the sulfurs can then leave the molecule and contribute to a new odor which is less pleasant. The other part of the molecule doesn't contribute to that sulfuric smell; it contributes to the grapefruit smell. Fatty: this one we thought was going to be easy, and when we showed it to flavor and fragrance experts they were a little bit astounded that we could predict it as well as we do; apparently having a big long fatty chain is not sufficient for something smelling fatty. It turns out that a class of molecules called terpenes, which is like the main flavor component in marijuana, has incredible diversity, and we kind of have an olfactory fovea on molecules like this: one-carbon differences can take something from coconut to pineapple, and we have incredible acuity here, perhaps because they're really prevalent in plants and things that are safe to eat, so we might have an over-representation of sensitivity to molecules of this kind. Total speculation; I'm on video, I probably shouldn't have said that. And then vanilla: this is a commercially really interesting class, and the attributions here have been validated by the intuitions of R&D flavor chemists. I can't explain that intuition to you because I don't have it, but I got a lot of this. So that's the best we've got at this point. We haven't done a formal evaluation of how useful these attributions are, but to me this is a tool in the toolbox of trust building. It's not enough to build a machine learning model; if you want to do something with it, you have to solve the sociological and cultural problem of getting it into the hands of people who will use it, and that is often more challenging than building the model itself. Data cleaning is the hardest thing, and then convincing people to use the thing you built is the second hardest thing; the fancy machine learning stuff is neat, but you can learn it pretty quickly, you're all extremely smart, and it will not end up being the hardest thing. And then winey: we don't know what's going on here, but this is something we're investigating in collaboration with experts. So that's the state of things. This is really early research; we're exploring what it means to digitize the sense of smell, and we're starting with the simplest possible task, which is why does a molecule smell the way that it does. We're using graph neural networks to do this, which is a really fun, new, emerging area of machine learning and deep learning.
There's a really interesting and interpretable embedding space that we're looking into that could be used as a codec for electronic noses or for scent delivery devices, and we've got state of the art on the existing benchmarks, which is a nice validation that we're doing a good job on the modeling. But there's a lot more to do. We really want to test this, so we want to see if it actually works in human beings that are not part of our historical data set, ones that have really never been part of our evaluation process, and that's something we're thinking about right now. You also never smell molecules alone; that's very rare, and it's actually hard to do. Even if you order single molecules, the contaminants that come with them can be a bit of a challenge. So thinking about this in the context of mixtures is a challenge: what should that representation be, is it a weighted set of graphs, is it like a meta-graph of graphs? I don't actually know how to represent mixtures in an effective way in machine learning models, and that's something we're thinking about. And then the dataset that we have is what we were able to pull together; it's not the ideal data set. It's definitely gotten us off the ground, but there is no ImageNet of scent. Then again, there wasn't an ImageNet of vision for a long, long time either, and we want to get a head start on this, so this is something that we're thinking about and investing in as well. And again, I'm super fortunate to work with incredible people, and I just want to call them all out here: Ben, Bryan, Kari, Emily and Jennifer are absolutely amazing, fantastic people; they did the actual real work and I feel fortunate to be able to represent them here. So thanks again for having me, and any questions, I'm happy to answer. [Applause]
MIT_6S191_Introduction_to_Deep_Learning
MIT_6S191_2020_Neurosymbolic_AI.txt
you thanks so much for inviting me this is a great this is a great class and a great program and I'm really excited to see deep learning front and center as part of IEP I understand you've covered kind of the basics of deep learning and I'm going to tell you today about something that's a little bit of a mash up on top of sort of standard deep learning kind of going beyond deep learning but before I start I just want to say something about this word artificial intelligence because you know I had artificial intelligence in the last slide if you look at my business card it actually says artificial intelligence in two separate places on that card and I actually also have a confession I'm a recovering academic so I was a professor at Harvard for ten years I just joined IBM about two years ago this is my first real job my mom is very proud of me that I finally got out of school and I will say as an academic researcher working on AI we hated this term 2017 and before we would do anything we could not to say these words you know we'd say machine learning or we'd say deep learning be more specific but 2018 and beyond for whatever reason we've all given up and we're all calling it a I you know we're calling it AI Google's calling it AI academics are calling it AI but when I when I got to IBM they had done something that I really appreciated and it helps kind of frame the discussion it will frame the discussions about what I'm gonna tell you about research wise in a minute that's just to do something very simple since are part of something that IBM does called the global technology outlook which is like a annual process where we vision for that company for the corporation what the future lies holds ahead and they did something very simple just to put adjectives in front of AI just to distinguish what we're talking about when we're talking about different things so this will be relevant to where I where we want to push relative to where we are today with deep learning to where we want to go and that's to distinguish what we have today as narrow AI so it's not to say it's not powerful or disruptive but just to say that it's limited in important ways and also to kind of distinguish it from general AI which is the stuff that the public and the press likes to talk about sometimes when IBM research which if you don't know where a 4,300 person global research organization they have six Nobel Prizes been around for 75 years when IBM research tried to decide when this was going to happen when generally I was gonna happen we said 2050 and Beyond and basically when you when you ask scientists something and they tell you 2015 beyond when it's coming that means we have no idea but it's no time soon but in the middle of this notion of broad iya and that's really what what we're we're here today about and what the lab Iran is about and you know just to unpack this one level deeper you know we have on one hand we have have general AI this is this idea of you know broadly autonomous systems that can decide what they do on their own this is the kind of thing that Elon Musk described as summoning the demon so congratulations everyone you're helping to summon the demon according to Johann Musk or you know slightly more level-headed people like Stephen Hawking warning that artificial intelligence could end and mankind yeah this is kind of you know maybe we need to worry about this in the future but actually what I'll argue in just a minute is that we're actually in quite a bit more limited space right now and really if 
this this broad AI that we really need to focus on so how do we build systems that are multi task and multi domain they can take knowledge from one place apply it in another that can incorporate lots of different kinds of data not just you know images or video but images video tax structured data unstructured data it's distributed it runs in the cloud but also runs an edge devices and it's explainable so we can understand what these systems do and this is basically then the the roadmap for everything that my lab does so we're we're asking what are the barriers we need to break down to bring in this era where we can apply AI to all the different kinds of problems that we need to apply to so things like explain ability we need to have systems that aren't just black boxes but we can look inside and understand why they make decisions when they make a right decision we know why it made that decision when they make a wrong decision we have the ability to get reach in and figure out how we would do bug that system one interesting thing about the AI revolution and you know back in the day people said that software was gonna eat the world and these days Jennsen Wang the CEO of Nvidia is on record saying that AI is going to eat software I think increasingly that's true we're gonna have data-driven software systems that are based on technology like deep learning but the terrifying thing about that is we don't really yet have debuggers it's very hard in many cases to figure out why systems aren't working so this is something that's really holding ái today security I'll tell you a little bit about the kind of weird world of security we live in now with AI where AI systems can be hacked in interesting ways we close those gaps to be able to really realize the full potential of AI systems need to be fair and I'm biased you know we that's both the thing that's good for the world but it's also the case that in many regulated industries like the kinds of companies that I've VM serves like banks they're regulated such that the government insists that their systems be provably fair we need to be able to look inside see and understand that the decisions the system will make will be fair and unbiased and then on a practical level you know I think the real battleground going forward for deep learning it for AI in general as much as people talk about big data actually the most interesting battlegrounds that we see across many different industries all have to do with small data so how do we work with very small amounts of data you know it turns out if you look across all the businesses that make the world run heavy industries health care financial services most of the problems that those companies faced and that we face in the world in general don't have enormous annotated carefully curated data sets to go with them so if we're gonna be able to use AI broadly and tackle all of these you know hard problems that we want to solve we need to be able to learn how to do more with less data so part of that has to do with things like transfer learning learning to transfer from one domain to another so learn in one domain and then use that knowledge somewhere else but increasingly and this is what I'm going to tell you about today there's this notion of reasoning so how do we not only extract the structure of the data we're looking at the data domain we're interested in but then also be able to logically and fluidly reason about that data and then finally just to close it out just give you a little bit of a pitch about what 
the lab is and what we do there's also a piece about infrastructure that we think is really important so if you track energy usage from computing year-over-year by some estimates by the year 2040 if we keep increasing our energy usage due to computing will exceed the power budget of the planet Earth there won't be enough solar radiation from the Sun not enough stuff we can dig up out of the earth and burn to fuel our computing habit and AI isn't helping the planet is not helping so many models that we train we'll take the equivalent energy of running a whole city for several days just for one model and that's obviously not gonna not going to last for a long time so we also do work both at the algorithmic level some of which I'll tell you about today but also at the at the physics level to ask can we build different kinds of computers so this is a member of a diagram of a member steff device this is an analog computer which we think we can get power consumption and for deep learning workloads down by maybe a factor of 100 or even a thousand and we're also working in quantum computing IBM as you may know is one of the leaders in quantum we have some of the biggest quantum computers that are available today and we're asking how that all interacts with AI so you know when IBM when we set up through this challenge of how do we make AI broadly applicable to all the kinds of problems that we'd like to apply AI to just as a small plug for the lab since we're here we decided we didn't want to do it alone and we chose a partner and a particularly chose MIT and actually you know the idea being that this is one of like the the last you know last standing industrial research labs of the bell lab era IBM Research together with MIT which obviously needs no introduction because we're here right now and we're partnering around AI and just give you a little bit of historical context it actually turns out that IBM and MIT have been together since the beginning of AI literally since the term artificial intelligence was coined way back in 1956 so right when the very first computers were being developed Nathaniel Rochester who's the the gentleman right there who developed the IBM 701 which is one of the first practical computers got together with MIT professors like Emma like John McCarthy and dreamed up this future of AI and it's actually really fascinating I encourage you all to go and find the the proposal for this workshop because a lot of the language including neural network language is all here like bigger they got a lot of the words right they were just a little bit off on the timescale you know you know baby like seven decades off but but really interesting and you know the partnership here the idea here is that we're combining the long horizon time horizon that MIT brings to the creation of knowledge you know you know maybe a hundred year time horizons you know departments of everything from chemistry biology economics and physics together with IBM where we have a lot of those same departments because we're such a big research organization but to bring those together with industry problems to bring data to the table so we can do the kind of research we want to do and to bring the compute resources along as well so this is what the lab is and what we do were we were founded with a quarter billion dollar investment over ten years from from IBM and we have 50 projects currently more than 50 projects currently running over a hundred and 50 research across researchers across MIT and IBM and there are 
opportunities for undergraduates for graduate students to be involved in these projects so if you're interested in the things I show you today we'd love to have you join our team either on the MIT side or on the IBM side and we're basically drawing from all of the different departments of MIT and even though we've only been running for about a year and a half we have over a hundred publications and top academic conferences and journals we had 17 papers in Europe's just a few months ago just to give you a sense of that everything is up and running so you know this is this is the evolution this is where we're going so why why do I say that today's AI is narrow why would I say that because clearly AI is powerful you know in particular you know in 2015 Forbes said you know that deep learning machine intelligence would eat the world and you know I think it's safe to say that the progress you know since 2012 or so has been incredibly rapid so this was a paper that really for me as a researcher was working in computer vision really convinced me that something dramatic was happening so this was a paper from Andre Carpathia now leads Tesla's AI program together with Faye Faye Lee who created the image net data set and they built a system that you probably had study a little bit in this course where they can take an image and produce a beautiful natural language caption so it takes an input like this and it produces a caption like a man in a black shirt is playing a guitar or you taking this image can you get a construction worker in an orange safety vest he's working on on the road when I started studying computer vision and AI and machine learning I wasn't sure we were gonna actually achieve this even in my career or perhaps even in my lifetime like it was it was it seems like such science-fiction it's so commonplace now that we have systems that can do that so it's hard to overstate how important deep learning has been in the progress of AI and machine learning you know meanwhile there are very few games left that humans are better than machines app everything from you know jeopardy which IBM did way back in 2011 to go with alphago from deep mind group at Carnegie Mellon created a system that could beat the world champion in poker and recently my own company created a system called Project debater that can actually carry on a pretty credible natural language debate with a debate champion so if you like your computers to argue with you we can we can do that for you now and even domains like art which we would have thought maybe would have been privileged domains for humanity like surely machines can't create art right but you know that's not the case so even way back in 2015 which now feels like a long time ago Mattias Becca's group in at the moxa plunk in tubing and created a system of with style transfer where you could go from a photograph of your own and then re-render it in the style of any artist you like this is a very simple style transfer model that leveraged the internal representation of a convolutional neural network up to what we have today which is again just astonishing how fast progress is moving these this is a these are the outputs from a system called big game which came from deep mind and all four of these images are all of things that don't exist in the real world so this dog not a real dog you put it in a random vector into the game and into the begin and it generates whole cloth this this beautiful high-resolution dog or this bubble or this Cup and this is actually getting 
to be a problem now because now we have this notion of deep fakes we're getting so good creating fake images that now we we were having to come up with actual countermeasures and that's actually one thing we're working on in in the laboratory I run it at IBM where we're trying to find Gantt add oats you know antidotes countermeasures against Gans as we move forward so clearly the progress is impressive so why am I saying that AI is still narrow today well does anyone know what this is anyone have any ideas yeah good job but you're wrong it turns out it's a teddy bear so if you ask a state-of-the-art imagenet trained CNN and you'll often see these CNN's described as being superhuman in their accuracy has anyone heard that before like they'll say object recognition is a solved problem these image net trained CNN's can do better than humans if you've ever actually done looked at image that carefully the reason that's true is because image net required it has huge numbers of categories of dogs so you basically need to be a dog show judge to be able to outperform a human at image net but this is starting to you know illustrate a problem this image is real so this is a piece of art and in the Museum of Modern Art in New York by Meret Oppenheim called the DNA on for a luncheon in fur a little bit unsettling image but we not like who thought it was a teddy bear right like like the most untidy bear like image ever right why did why did the CNN think this was a teddy bear soft and fluffy it's round it's got fur what kinds of things in the training set would be round and furry teddy bears you know it's a little bit of a garbage in garbage out kind of scenario this is in many ways you know people talk about corner cases or edge cases those rare things that happen but are different from the distribution you've trained on previously and and this is a great example of such a such a thing so this is starting to show that even though deep learning systems we have today are amazing and they are amazing there's you know there's room for improvement this on missing here and actually if we dig a little bit deeper which you know a variety of researchers have done so this is from Alan eul's group at Johns Hopkins even in cases where you know the objects are the standard objects that the system knows how to detect its supposedly superhuman you know sort of levels if we take this guitar and we put it on top of this monkey in the jungle a couple of funny things happen one is it thinks the guitar is a bird they might have an idea why that is yeah I hear pieces of the answer all around so it's it's colorful right it's a color that you would expect a tropical bird things that are in the jungle in distribution would tend to be colorful tropical birds interestingly because you put the guitar in front of the monkey now the monkeys a person and you know again you know monkeys don't play guitars in the training set and that's that's clearly messing with the results so even though we have no trouble at all telling that this is that these objects are a guitar and a monkey this the state-of-the-art systems are falling down and then even this is this captioning example which I highlighted as being you know an amazing success for deep learning and it is an amazing success for deep learning when you poke a little bit harder which you know Josh Tenenbaum and Sam Gershman and Brendan like and Tamar Holman did you find things like this so this image is captioned as a man riding a motorcycle on the beach this next one is an airplane 
is parked on the tarmac at an airport and this one next one is a group of people standing on top of a beach which is correct so score one for the AI but what you can see is there's a strong sense in which the system doesn't really understand what it's looking at and that leads to mistakes and that leads to sort of you know you know missing the point you know in many cases and again this is this has to do with the fact that these systems are trained on the data and largely they're constrained by what data they've seen before and things that are out-of-sample these edge cases these corner cases tend to perform poorly now the success of deep learning you know I think it's safe to say it's you know two things happened you know deep learning as you already know is a rebrand of a technology called artificial neural networks it dates all the way back to that Dartmouth conference and at least to the 80s you know a lot of the fundamental math back prop was worked out in the 80s when we went through decades of time where it was disreputable to study neural networks and I lived through that era where you would try and publish a paper about neural networks and people would tell you that everyone knows that neural networks don't work but what happened was the amount of data that was available grew enormously so we digitalized the world we got digital cameras now we're all carrying I'm carrying like four cameras on me right now and we took a lot of images and then the compute caught up as well and particularly graphics process units graphics processing unit CPUs came available and it turned out they were even better for doing deep learning than they were for doing graphics and really the seminal moment that really flipped the switch and made deep only take off was the collection of this data set called image net which Bailey collected and it's basically millions of carefully curated images with categories associated with them now you need to have data sets of this scale to make deep learning work so if you're working on projects now and you're training neural networks you'll know that you need to have thousands to millions of images to be able to train a network and have it perform well that's in stark contrast to how we work so does anyone know what this object is just quick raise of your hands okay a few people not so many but even though you've never seen this object before a single training example you're now all experts in this object just one training example so I can show you to you and ask is that object present in this image I think we all agree yes I can ask you questions like how many are in this image and I think we'd all agree there are two and I can even ask you is it present in this image and I think you'd all agree yeah but it's weird right so so not only can you recognize the object from a single training example not thousands not millions one you can reason about it now in contexts where it's just weird right and that's why you can tell that it's a fur-covered sauce or cup and spoon and not a teddy bear could you have this ability to reason out a sample and that's really a remarkable ability that we'd love to have because when you get past imagery you have past digital images there are very few data sets that have this kind of scale that imagenet has but even image net turns out you know if there's something else wrong with it so does anyone notice anything about these chairs in the image these were all taken from image net from the chairs category does anyone notice anything you know 
unusual about these all facing the camera their own canonical views they're all more or less centered in the case where there's multiple chairs that kind of like almost like a texture of chairs right so these are very unusual images actually I mean we look at them and we think these are normal images of chairs but these are actually very carefully posed and crafted images and one of the projects that we've been working on together with MIT across the MIT IBM lab was this is a project that was led by Boris Katz and Andre barboo together with our own Dan Gouffran they asked okay well what if we collected a data set where that wasn't true so where we didn't have carefully perfectly centered objects and what they did is they enlisted a whole bunch of Mechanical Turk on Amazon Mechanical Turk and they told them take a hammer take it into your bedroom put it on your bed and here's a smartphone app and please put it in this bounding box so so basically you would have to go and these people get instructions you know take your chair we want you to put it in the living room and we want you to put it on its side and put in that bounding box or we want you to take a knife out of your kitchen put it in the bathroom and make it fit in that bounding box or you know take that that bottle and put it on a chair on this orientation so they went through and they just collected a huge amount of this data so it corrected 50 thousand of these images from 300 object classes that overlap with imagenet and then they asked the Mechanical Turk occurs to go to four different rooms with those things so remember everyone talks about how imagenet is state-of-the-art in object categorization and that's a solve problem but it turns out when you take these images of these objects that are not in the right place humans can perform at well over ninety five percent accuracy on this task but but the AIC cnn's that were previously performing you know at state-of-the-art levels drop all the way down forty to forty five percent down in their performance so there's a very real sense in which as amazing as deep loading is and i i'm gonna keep saying this deep learning is amazing but some of the gains in the you know the sort of declarations of victory are a little bit overstated and they all circle around this idea of small data of corner cases edge cases and being able to reason about situations that are a little bit out of the ordinary alright and of course the last piece you know that's that's concerning for anyone who's trying to deploy neural networks in the real world is that they're weirdly vulnerable to hacking so i don't know if you guys covered adversarial examples in this class yet but here's an example targeting that same captioning system so we're you know the captioning system can see this picture of a stop sign and produced this beautiful natural language caption a red stop sign sitting on the side of a road our own pin you chen who's an expert in this area at IBM created this image which is a very subtle perturbation of the original and you can get that to now say a brown teddy bear lying on top of the bed with high confidence so this is a case again where there's something divergent between how we perceive images and understand what the content of an image is and how these end and trained neural networks do the same and you know this kind of this you know this image the the perturbations of the the pixels in this image we're done in such a way that they'd be small so that you couldn't see them so they're 
specifically hidden from us but it turns out that you don't have to actually have access to the digital image you can also do real-world in the wild adversarial attacks and this is one that was it was kind of fun some some folks in my lab in my group decided to be fun to have a t-shirt that was adversarial so you took a person detector and so this is like you know it will detect a person you could imagine like an AI powered surveillance system if you were to intrude in the building you might wanna have a person detector that could detect a person and warn somebody hey there's a person in your building it doesn't belong but what they did is they created this shirt so this shirts very carefully crafted it's a very loud ugly shirt you could we have it in the lab if you want to come over anytime and try it on you're welcome to but this shirt basically makes you invisible to AI so this is CJ who's who's who's our wizard adversarial examples he's not wearing the shirt so you can see the person detectors detecting him just fine Tron foo is wearing the shirt therefore he is invisible he is camouflaged and you can see even as you walk around even if the shirt is folded and bent and wrinkled it makes you invisible so weird right like that like if anything for us this ugly looking shirt makes you even more visible so there's something just something weird about how deep learning seems to work relative to how weird we work but there are also problems where even under the best of conditions no adversarial perturbation you could have as much training day as you like where deep learning still struggles and these are really interesting for us because these are cases where no matter how much data you have deep learning just doesn't cut it for some reason and and we want to know why so problems like this if you ask the question so I give you a picture and ask question how many blocks are on the right of the three-level tower or well the block tower fall if the top block is removed or are there more animals than trees or what is the shape of the object closest to the large cylinder these are all questions that even a child could answer I mean provided they understand language and you know and read and stuff you know it's very easy for us to work on these things but it turns out the deep loading systems irrespective of how much training data you give struggle so that's you know case where they know there's smoke and you kind of want to know where's the fire actually the answer we think or one of the things we're exploring is the idea that maybe the answer lies all the way back in 1956 so this is a picture of that Dartmouth workshop back in 1956 and back in this time period neural networks were already you know had already been sort of born we were thinking about neural networks but we were also thinking about another kind of AI back then and that kind of AI is is interesting a different hasn't really enjoyed a resurgence the way that neural networks have but just to step back for a moment this is what you've been studying neural network basically is a nonlinear function approximator you take an input in and you get some output that you want out and it learns the weights of the network through training with data so this is what an apple looks like to a neural network you know you put an apple picture in and you light up a you know a unit that says there's probably an apple in that scene there's another kind of AI that's been around since the beginning called symbolic AI and this is from a book by Marvin Minsky 
Basically, this field of what we call good old-fashioned AI, or symbolic AI, has been around since the very beginning, and it just hasn't yet enjoyed the resurgence that neural networks did. One of the central theses we're exploring is that, just the same way neural networks were waiting for compute and data to come along to make them really work, symbolic AI has also been waiting — and what it's been waiting for is neural networks. So now that neural networks work, can we go back to some of these ideas from symbolic AI and do something different? The work I'm going to tell you about is a collaboration as part of the MIT-IBM Watson AI Lab: Chuang Gan, one of the researchers in my group, together with Josh Tenenbaum, and particularly Jiajun Wu, who's now an assistant professor at Stanford, along with some others. What they're asking is: can we mix the ideas of neural networks together with the ideas from symbolic AI and do something that's more than the sum of its parts? The picture I showed you earlier is actually from a dataset called CLEVR, which was basically created to illustrate this problem — that irrespective of how much training data you have, a very simple kind of question-answering task, where you have to answer questions like "are there an equal number of large things and metal spheres?", seems to be hard. The dataset was created to illustrate the problem. If you tackle it the way you're supposed to with neural networks and deep learning — and perhaps you've learned this over the course of this IAP class, that the best way to train a system is end-to-end: start from what you have, end with what you need, and don't get in the way in the middle, just let the neural network do its thing — the problem is that when you build end-to-end neural networks and train them to go from these inputs to these outputs for datasets like this, it just doesn't work well at all. The reason is that the concepts — things like colors, shapes, and objects — and the portions of reasoning — like counting objects or reasoning about the relationships between them — are fundamentally entangled inside the representation of the neural network. Not only does that cause problems where it's very difficult to cover the entire distribution without getting caught by corner cases, it also means it's hard to transfer to other kinds of tasks, like image captioning or instance retrieval. So fundamentally, this end-to-end approach just doesn't seem to work very well here — and yes, I'm already telling you something that probably goes against the advice you've gotten so far. But step back and ask how we actually solve a visual reasoning problem like "are there an equal number of large things and metal spheres?"
When we tackle this problem, first we read the question and see that it's asking something about large things, so we use our visual system to find the large things. Then we read the question again, see that it's asking about metal spheres, and use our visual system to find the metal spheres. And then, critically, we do a symbolic operation — an equality operation — where we decide whether these are an equal number, and we say yes. That's what we do. If you unpack that: yes, it involves visual perception, and CNNs are a great candidate for that; yes, it involves question understanding — natural language processing — and RNNs are a great tool for that; but critically, it also has this component of logical reasoning, where you can very flexibly apply operations in a compositional way. So what the team did is basically the neuro-symbolic "hello world" — hello world is the first program you write in most programming languages, and this is kind of the simplest example we could tackle. Here is the diagram of the full system, and don't worry, I'm going to unpack all of it. Because neural networks are good at vision, we use a CNN for the vision part — but instead of going straight to an answer, it's used to de-render the scene. Rendering goes from a symbolic representation to an image; de-rendering goes from the image back to some kind of structured, symbolic representation. So we take the image apart using the neural network. For the question, you'd be crazy not to use something like an LSTM to parse the language — but instead of going from the language straight to an answer or a label, the language is parsed into a program, a symbolic program, which is then executed on the structured representation. To walk you through it: we have a vision part and a language part. We parse the scene using a CNN — we find the objects in the scene and build a table that says what the objects are, what their properties are, and where they are. Then we do semantic parsing on the language part; the goal is to go from natural language, with all of its vagaries and messiness, to a program — to learn how to turn the question into a series of symbolic operations. Then we run that symbolic program on the structured symbolic information and get an answer: if the question is asking about something red, we start by filtering on red, and then we query the shape of the object we've just filtered. This is basic program execution. Critically, the whole system is trained jointly with reinforcement learning, so the neural network that does vision and the neural network that translates language into a program fundamentally learn something different by virtue of being part of a hybrid symbolic system: the system gets a reward or it doesn't, and that signal is propagated based on whether or not the symbolic executor got the right answer. We use reinforcement learning, of course, because you can't differentiate through the symbolic part. But fundamentally, this isn't just a matter of bolting neural networks onto a symbolic reasoner; it's training them jointly, so that the neural networks learn how to extract the symbols — to produce the right symbolic representations — through experience and learning on the data.
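To make the "execute a program on a structured scene" step concrete, here is a minimal sketch of a symbolic executor running over a de-rendered scene table. The scene entries and the operation names (filter, query, count, equal) are simplified illustrations of the idea, not the actual NS-VQA implementation; in the real system the table would come from the CNN de-renderer and the program from the LSTM parser.

```python
# Minimal sketch of symbolic program execution over a structured scene.
# Here both the scene table and the programs are hard-coded for illustration.

scene = [
    {"shape": "cube",     "color": "red",  "material": "rubber", "size": "large"},
    {"shape": "sphere",   "color": "gray", "material": "metal",  "size": "small"},
    {"shape": "cylinder", "color": "blue", "material": "metal",  "size": "large"},
]

def filter_objs(objs, attr, value):
    """Keep only the objects whose attribute matches the value."""
    return [o for o in objs if o[attr] == value]

def query(objs, attr):
    """Return the requested attribute of a single filtered object."""
    assert len(objs) == 1, "query expects exactly one object"
    return objs[0][attr]

def count(objs):
    return len(objs)

def equal(a, b):
    return a == b

# "What is the shape of the red object?"  ->  filter(color=red) -> query(shape)
print(query(filter_objs(scene, "color", "red"), "shape"))   # cube

# "Are there an equal number of large things and metal spheres?"
n_large = count(filter_objs(scene, "size", "large"))
metal_spheres = filter_objs(filter_objs(scene, "material", "metal"), "shape", "sphere")
print(equal(n_large, count(metal_spheres)))                 # False: 2 large things, 1 metal sphere
```

Because the intermediate trace (filter, then query) is explicit, you can debug it the way you would debug ordinary code — which is exactly the explainability point made next.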
This does a couple of really interesting things. First, remember that the CLEVR dataset was created because it illustrated a problem with end-to-end learning — but it turns out that with just a dash of symbolic execution, you can now be effectively perfect on CLEVR. So CLEVR is now solved, and this was an oral spotlight paper at NeurIPS, because it's a big deal: a previously unsolved problem, now solved. That's good — but more interestingly, remember I said the biggest problem with deploying neural networks in the real world is that we rarely have big data; we usually have pretty small data, so anything that improves the sample efficiency of these methods is really valuable. Here is a plot of the number of training examples the system is given against its accuracy. The neuro-symbolic system is up here in blue, and I'll point out several things. One, it's always better. Two, if you look down here, the end-to-end-trained systems require close to a million examples and still get only okay results — not perfect, but okay. The neuro-symbolic system, again with just a dash of symbolic reasoning mixed in, can do better than most of the end-to-end systems with just one percent of the data, and with just ten percent of the data — one tenth — it performs at effectively perfect accuracy. So: a drastically lower data requirement and drastically higher sample efficiency. The last piece: remember I said explainability is super important. People won't actually use an AI system unless they can look inside and understand why a decision was made; otherwise they won't trust it. Because this system has a symbolic choke point in the middle — the question is parsed into a series of symbolic operations — we can debug the system the same way you would debug a traditionally coded system. You can see exactly what it did: it filtered on cyan, it filtered on metal — was that the right thing? You can understand why it made the decision it made, and if it made the wrong decision, you now have some guidance on what to do next. That was a paper from this team in 2018, neuro-symbolic VQA, and since then there's basically been a parade of papers making this more and more sophisticated. There was a paper at ICLR 2019 called the Neuro-Symbolic Concept Learner that relaxed the requirement that the concepts be pre-coded. Another paper that just came out at NeurIPS last month, the Neuro-Symbolic Meta-Concept Learner, autonomously learns new concepts and can use meta-concepts to do better. And we're now getting this to work not just on these toy images but on real-world images, which is obviously important. We think this is a really interesting and profitable direction going forward. Here's the Neuro-Symbolic Concept Learner: what's happened is that the concepts are relaxed into concept embeddings, so when you look at an object with the CNN, you can embed it into a space of, say, color, and then compare that color to stored concept embeddings. That means you can learn new concepts dynamically, from context — you don't need to be told in advance that green is a color, you can figure that out and learn it. This matters because the world is full of new concepts that we're constantly encountering, and that was the next innovation on the system.
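Here is a rough sketch of that "concept embedding" idea: an object's visual attribute is embedded as a vector and compared against stored concept vectors by similarity, so a new concept can be added simply by storing another vector. The vectors and concept names below are toy values of my own choosing, not the learner's actual embeddings or training procedure.

```python
import numpy as np

# Toy concept embeddings for the "color" space (made-up 3-D vectors).
concept_embeddings = {
    "red":   np.array([1.0, 0.1, 0.0]),
    "green": np.array([0.0, 1.0, 0.1]),
    "blue":  np.array([0.1, 0.0, 1.0]),
}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def classify_color(obj_embedding):
    """Return the stored color concept most similar to the object's embedding."""
    return max(concept_embeddings,
               key=lambda c: cosine(obj_embedding, concept_embeddings[c]))

# In the real system a CNN would produce this embedding; here it is hand-picked.
print(classify_color(np.array([0.05, 0.9, 0.2])))          # green

# Learning a new concept dynamically is just storing another vector.
concept_embeddings["cyan"] = np.array([0.0, 0.7, 0.7])
print(classify_color(np.array([0.0, 0.6, 0.65])))          # cyan
```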
Also, remember I said that one of the magical things about symbolic representations is that we can leverage lots of different kinds of knowledge — in fact, we can leverage meta-relationships between concepts. We know there are synonyms, for instance, which led to the paper presented just last month, the Neuro-Symbolic Meta-Concept Learner, where you can have a notion of "is red the same kind of concept as green?" or "is cube a synonym of block?" That lets you do things like this: in the regular mode, I create a representation of the object and a symbolic program and do the regular thing, but critically, I can now also use the relationships I know about synonyms and concept equivalences to meta-verify my answers. I can take advantage of the fact that if I know there's an airplane, I also know there's a plane, because plane and airplane are synonyms; if I'm asked whether there's any kind of kid and I know there's a child, the answer is yes, because child and kid are synonyms. So you can see how we can get more and more sophisticated with our symbolic reasoning and do more and more — and, of course, it works well. We're also extending beyond static images: since CLEVR is now beaten, we're releasing a new dataset called CLEVRER — essentially CLEVR in video, a somewhat tortured acronym — which looks at the relationships between objects over time and at counterfactuals: what would happen if this block weren't there? So we can keep expanding to more and more sophisticated environments as we go. I'll also say that this notion of symbolic program execution isn't the only idea from symbolic AI that we can bring together with neural networks. We're also looking at the field of planning: a field of symbolic AI where you start from an initial state and use an action plan to arrive at some target state, which is really good for solving problems like the Tower of Hanoi, which you may have encountered, or these kinds of slider puzzles, where you need to produce a series of operations to achieve a certain end state — like making the picture come out the right shape. Another area of projects we're working on mixes these ideas with neural networks, so that we don't have to rely only on static symbolic representations: we use binary, discrete autoencoders and actually plan in the latent space of the autoencoder. Obviously these are topics that would be a whole talk unto themselves; I just want to give you a flavor of the fact that this idea of mashing up neural networks and symbolic AI has a lot of range, there's a lot of room to grow and explore, and there are lots of ideas in symbolic AI that we can now bring together with neural networks — and every time we do, we seem to find that good things happen.
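As a flavor of the classical planning idea mentioned above — searching from an initial state to a goal state through a space of actions — here is a minimal breadth-first planner, illustrated on a 3-disk Tower of Hanoi. The state encoding and the puzzle choice are my own illustration; the lab's latent-space planners are of course far more sophisticated.

```python
from collections import deque

# Minimal breadth-first symbolic planner, illustrated on 3-disk Tower of Hanoi.
# A state is a tuple of three tuples: the disks (largest..smallest) on each peg.

start = ((3, 2, 1), (), ())
goal  = ((), (), (3, 2, 1))

def moves(state):
    """Yield (action, next_state) for every legal single-disk move."""
    for i, src in enumerate(state):
        if not src:
            continue
        disk = src[-1]                       # only the top disk can move
        for j, dst in enumerate(state):
            if i != j and (not dst or dst[-1] > disk):
                nxt = list(state)
                nxt[i] = src[:-1]
                nxt[j] = dst + (disk,)
                yield (f"move disk {disk}: peg {i} -> peg {j}", tuple(nxt))

def plan(start, goal):
    """Breadth-first search returns a shortest sequence of actions."""
    frontier, seen = deque([(start, [])]), {start}
    while frontier:
        state, actions = frontier.popleft()
        if state == goal:
            return actions
        for action, nxt in moves(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, actions + [action]))

for step in plan(start, goal):
    print(step)    # the optimal 7-move plan for 3 disks
```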
So with that I'll stop, and just leave you with one picture in your mind: these two venerable traditions of AI. I think we're coming to a place where we can bring the symbolic ideas out of the closet and dust them off, and in many ways the power of neural networks solves many of their problems — they complement each other's strengths and weaknesses in really important and useful ways. With that, thank you all for your attention, and if you have any questions I'm very happy to take them. [Applause]
MIT_6S191_Introduction_to_Deep_Learning
MIT_6S191_2019_Convolutional_Neural_Networks.txt
all right, so let's get started. Thank you all for coming to day two of 6.S191 — we're really excited to have you for these two lectures and the lab today, which are largely going to be focused on deep learning for computer vision. To motivate this, I think we can all agree that vision is one of the most important human senses: sighted people rely on vision for everything from navigating the physical world, to recognizing and manipulating objects, to interpreting facial expressions and understanding emotion. I think it's safe to say that for all or many of us, vision is a huge part of our lives, and that's largely thanks to the power of evolution. Evolutionary biologists trace the origins of vision back 540 million years, to the Cambrian explosion, and the reason vision seems so easy for us as humans is that we have 540 million years of data that evolution has effectively trained on. If you compare that to other capabilities, like bipedal movement and language, the difference is quite significant. Starting in the 1960s there was a surge of interest in both the neural basis of vision and how to systematically characterize visual processing, and this led computer scientists to begin wondering how findings in neuroscience could be applied to achieve artificial computer vision. It all started with a series of seminal experiments by two neuroscientists, David Hubel and Torsten Wiesel, who were working at Harvard at the time, looking at processing in the visual cortex of cats. What they were able to demonstrate was that there are neural mechanisms for spatially invariant pattern recognition, that certain neurons in the visual cortex respond very specifically to particular patterns and regions of visual stimuli, and furthermore that there is an exquisite hierarchy of neural layers within the visual cortex. These concepts have transformed both neuroscience and artificial intelligence alike, and today we're going to learn how to use deep learning to build powerful computer vision systems that have been shown to be capable of extraordinarily complex computer vision tasks. Now that we've covered, at a very high level, why this is important and how our brains may process visual information, we can turn our attention to what computers see: how does a computer process an image? Well, to a computer, images are just numbers. Suppose we have this picture of Abraham Lincoln: it's made up of pixels, and since this is a grayscale image, each pixel is just a single number, so we can represent the image as a 2D matrix of numbers, one for each pixel. That is how a computer sees this image. Likewise, if we have an RGB color image rather than grayscale, we can represent it with a 3D array, where we now have a 2D matrix for each of the channels R, G, and B. Now that we have a way to represent images to computers, we can think about what types of computer vision tasks we can perform. Two very common types of tasks in machine learning broadly are regression and classification: in regression our output takes a continuous value, while in classification our output takes a single class label. For example, consider the task of image classification — we want to predict a single label for some image. Say we have a bunch of images of US presidents, and we want to build a classification pipeline to tell us which president is in an image,
outputting the probability that that image is of a particular President and so you can imagine right that in order to cry classified these images our pipeline needs to be able to tell what is unique about a picture of Lincoln versus a picture of Washington versus a picture of Obama and another way to think about this problem at a high level is in terms of the features that are characteristic of a particular class and classification it can then be thought of as involving detection of the features in a given image and sort of deciding okay well if the feature is for a particular class are present in an image we can then predict that that image is of that class with a higher probability and so if we're building a image classification pipeline our model needs to know what those features are and it needs to be able to detect those features in an image in order to generate this prediction one way to solve this problem is to leverage our knowledge about a particular field say those of human faces and use our prior knowledge to define those features ourselves and so a classification pipeline would then try to detect these manually defined features and images and use the results of some sort of detection algorithm to do the classification but there's a big problem with this approach and if you remember images are just 3d arrays of effectively brightness values and they can have lots and lots and lots of variation such as occlusion variations in illumination and intraclass variation and if we want to build a robust pipeline for doing this classification task our model has to be invariant to these variations while still being sensitive to the differences that define the individual classes even though our pipeline could use these features that we the human define where this manual extraction will break down is actually in the detection task and that's again due to the incredible variability in visual data because of this the detection of these features is actually really difficult in practice because your detection algorithm would need to withstand each of these different variations so how can we do better we want a way to both extract features and detect their presence in images automatically in a hierarchical fashion and again right you came to a class on deep learning we we we hypothesize right that we could use a neural network based approach to learn visual features directly from data without any you know manual definition and to learn a hierarchy of these features to construct a representation of the image that's internal to the network for example if we wanted to be able to classify images of faces maybe we could learn how to detect low-level features like edges and dark spots mid level features like eyes ears and noses and then high level features that actually resemble facial structure and we'll see how neural networks will allow us to directly learn these visual features from visual data if we construct them cleverly going back in lecture 1 right we learned about fully connected architectures where you can have multiple hidden layers and where each neuron in a given layer is connected to every single neuron in the subsequent layer and let's say that we wanted to use a fully connected neural network for image classification in this case our 2d input image is transformed into a vector of pixel values and this vector is then fed into the network where each neuron in the hidden in the first hidden layer is connected to all neurons in the input layer and here hopefully you can appreciate that by 
squashing our 2d our 2d matrix into this 1d vector and defining these fully connected connections all spatial information is completely lost furthermore in in defining the network in this way we end up having men many different parameters right you need a different weight parameter for every single neural connection in your network because it's fully connected and this means that training in network like this on a task like image classification becomes infeasible in practice importantly right visual data has this really rich spatial structure how can we leverage this to inform the architecture of the network that we design to do this let's represent our 2d input image as an array of pixel values like I mentioned before and one way we can immediately use the spatial structure that's inherent to this input is to connect patches of the input to neurons in the hidden layer another way of thinking about this is that each neuron in a hidden layer only sees a particular region of what the input to that layer is and this not only reduces the number of weights in our model but also allows us to leverage the fact that in an image pixels that are spatially close to each other are probably somehow related and so I'd like you to really notice how the only region how only a region of the input layer influences this particular neuron and we can define connections across the whole input by applying the same principle of connecting patches in the input layer to neurons in the subsequent subsequent layer and we do this by actually sliding the patch window across the input image in this case we're sliding it by two units and in doing this we take into account the spatial structure that's inherent to the input but remember right that our ultimate task is to learn visual features and the way we achieve this is by weighting these connections between the patch and the neuron and the neuron in the next layer so as to detect particular features so this this principle is is called we think of what you can think of it as is applying a filter essentially a set of weights to extract some sort of local features that are present in your input image and we can apply multiple different filters to extract different types of features and furthermore we can spatially share the parameters of each of these filters across the input so that features that matter in one part of the image will still matter elsewhere in the image in practice this amounts to this patchy operation that's called convolution and if we first think about this at a high level suppose we have a four by four filter which means we have 16 different weights right and we're going to apply this same filter to four by four patches in the input and use the result of that filter operation to define the state of the neuron that the patch is connected to right then we're going to shift our filter over by a certain width like two pixels grab the next patch and apply that filtering operation again and this is how we can start to think about convolution at a really high level but you're probably wondering how does this actually work what am I talking about when I keep saying oh features extract visual features how does this convolution operation allow us to do this so let's make this concrete by walking through a couple of examples suppose we want to classify X's from a set of black and white images of letters where black is equal to minus 1 and white is represented by a value of 1 to classify it's it's really not possible to simply compare the two matrices to see if 
they're equal, because we want to be able to classify an X as an X even if it's transformed, rotated, reflected, deformed, and so on. Instead, we want our model to compare images of an X piece by piece, and the important pieces it learns to look for are the features. If our model can find rough feature matches in roughly the same relative positions in two different images, it gets a much better sense of the similarity between different examples of X's. Each feature is like a mini-image — a small two-dimensional array of values — and we can use filters to pick up on the features that are common to X's. In the case of X's, filters that pick up on diagonal lines and a crossing capture what's important about an X, and we can probably capture these features in the arms and center of any image of an X. Notice that these smaller matrices are the filters of weights we'll actually use to detect the corresponding features in the input image. All that's left is to define an operation that picks up where these features appear in the image, and that operation is convolution. Convolution preserves the spatial relationship between pixels by learning image features in small squares of the input. To do this, we simply perform an element-wise multiplication between the filter weight matrix and a patch of the input image of the same dimensions, which in this case results in a 3×3 matrix. Here, all the entries in that matrix are 1, because everything is black and white (either -1 or 1) and there is a perfect correspondence between our filter matrix and the patch of the input image we multiplied it against: this is our filter, this is our patch, they directly correspond, and the result follows. Finally, if we add up all the elements of this matrix, we get the result of convolving this 3×3 filter with that particular region of the input: the number 9. Let's consider another example to drive this home even further. Suppose we want to compute the convolution of a 5×5 representation of an image with a 3×3 filter. To do this, we need to cover the entirety of the input image by sliding the filter over it, performing the element-wise multiplication at each step and adding up the outputs. First we start in the upper-left corner, multiply the filter by the values of the input image, add the result, and end up with the value 4 — the first entry in our output matrix, which we can call the feature map. We next slide the 3×3 filter over by one to grab the next patch and repeat the element-wise multiplication and addition, which gives us our second entry, 3. We continue this process until we have covered the entire input image, progressively sliding our filter, doing the element-wise multiplication patch by patch, adding the result, and filling out our feature map. And that's it — that's convolution. This feature map is a toy example, but in practice you can imagine that the feature map reflects where in the input image the filter was activated, because higher values represent a stronger activation — a closer match to the feature. That is really the bare-bones mechanism of the convolution operation.
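Here is a minimal NumPy sketch of exactly this sliding-window computation (stride 1, no padding). The 5×5 image and 3×3 filter values below are placeholders rather than the specific example from the slides, although they happen to produce a feature map whose first two entries are 4 and 3, matching the walkthrough above.

```python
import numpy as np

def conv2d(image, kernel):
    """'Valid' convolution (really cross-correlation, as in most deep learning
    libraries): slide the kernel over the image with stride 1, take the
    element-wise product with each patch, and sum it into the feature map."""
    H, W = image.shape
    kH, kW = kernel.shape
    out = np.zeros((H - kH + 1, W - kW + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = image[i:i + kH, j:j + kW]
            out[i, j] = np.sum(patch * kernel)
    return out

image = np.array([
    [1, 1, 1, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 0],
    [0, 1, 1, 0, 0],
], dtype=float)

kernel = np.array([
    [1, 0, 1],
    [0, 1, 0],
    [1, 0, 1],
], dtype=float)

print(conv2d(image, kernel))   # a 3x3 feature map; top-left entries are 4 and 3
```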
To see how powerful this is, consider that different weight filters can be used to produce distinct feature maps. This is a very famous test picture of a woman called Lenna, and as you can see in these three examples, we've taken three different filters, applied each one to the same input image, and generated three very different outputs. Simply by changing the weights of the filters, we can detect and extract different features — such as edges — that are present in the input image. This is really powerful.
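To make the "different filters extract different features" point concrete, here is a small demonstration with two hand-designed, Sobel-style edge filters applied to a synthetic image. The image and filter values are illustrative choices of mine, not the slide's Lenna example.

```python
import numpy as np

def conv2d(image, kernel):
    # Same sliding-window operation as in the previous sketch.
    H, W = image.shape; kH, kW = kernel.shape
    return np.array([[np.sum(image[i:i + kH, j:j + kW] * kernel)
                      for j in range(W - kW + 1)]
                     for i in range(H - kH + 1)])

# A tiny synthetic image: dark on the left half, bright on the right half.
image = np.zeros((6, 6))
image[:, 3:] = 1.0

# One filter responds to vertical edges, the other to horizontal edges.
vertical_edge = np.array([[-1, 0, 1],
                          [-2, 0, 2],
                          [-1, 0, 1]], dtype=float)
horizontal_edge = vertical_edge.T

print(conv2d(image, vertical_edge))    # strong response along the vertical boundary
print(conv2d(image, horizontal_edge))  # zero everywhere: there are no horizontal edges
```

Same input, different weights, very different feature maps — which is exactly what a CNN exploits when it learns those weights from data instead of having us design them by hand.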
So hopefully you can now appreciate how convolution allows us to capitalize on the spatial structure inherent in visual data and use sets of weights to extract local features, and how easily we can detect different features simply by applying different filters. These concepts — preserving spatial structure and local feature extraction using convolutions — are at the core of the neural networks used for computer vision tasks, which are called convolutional neural networks, or CNNs. With this bare-bones mechanism under our belt, we can think about how to use it to build neural networks for computer vision tasks. Let's first consider a CNN designed for image classification. The goal is to learn features directly from the image data — that is, to learn the weights of those filters — and to use the learned feature maps for classification. There are three main operations in a CNN. The first is convolution, which we just went through: applying filters to generate feature maps. The second is applying a non-linearity, the same concept from lecture one. The third key idea is pooling, which is effectively a down-sampling operation that reduces the size of a feature map. Finally, computing class scores and actually outputting a prediction for the class of an image is achieved by a fully connected layer at the end of the network; in training, we learn the weights of the filters used in the network as well as the weights of that fully connected layer. We'll go through each of these to break down the basic CNN architecture. First, as we've already seen, consider the convolution operation. As before, each neuron in a hidden layer computes a weighted sum of its inputs, applies a bias, and activates with a non-linearity — exactly the same concept as lecture one. What's special here is the local connectivity: each neuron in a hidden layer sees only a patch of what comes before it. This relation defines how neurons in convolutional layers are connected, and it boils down to applying a window of weights, computing the linear combination of those weights against the input, applying a bias, and then activating with a nonlinear activation function. Within a single convolutional layer, we can also have many different filters, each learning a different set of weights so it can extract a different feature. As a result, the output of a convolution operation is a volume: its height and width are spatial dimensions that depend on the input layer (for a 40×40 input image, roughly 40×40), on the size of the filter, and on the stride with which we slide the filter over the input, while its depth is defined by the number of different filters we use. The last key thing I'd like you to keep in mind is the notion of the receptive field: a neuron in a downstream layer is connected only to a particular location in its respective input layer, and that location is termed its receptive field. That, at a high level, is how the convolution operation works within convolutional layers. The next step is applying a non-linearity to the output of a convolutional layer — exactly the same concept as in the first lecture. We do this because image data is highly nonlinear, and in CNNs it's very common practice to apply a non-linearity after every convolution operation, that is, after each convolutional layer. The most common activation function used is the ReLU, which is essentially a pixel-by-pixel operation that replaces all negative values coming out of a convolution with zero; you can think of this as a threshold that discards negative detections of the associated feature. The final key operation in CNNs is pooling, which is used to reduce dimensionality and to preserve spatial invariance. A common technique, shown in this example, is max pooling, and it's exactly what it sounds like: you simply take the maximum value in each patch — in this case a 2×2 patch applied with a stride of 2 over the array — so the dimensionality is reduced from one layer to the next. I encourage you to think about other ways we could perform this kind of down-sampling operation. These are the three key operations of a CNN, and we're now ready to put them together to actually construct our network. The key is that we can layer these operations hierarchically, and by layering them this way, the network can be trained to learn a hierarchy of features present in the image data. A CNN for image classification can be broken down into two parts. The first is the feature-learning pipeline, where we learn features in our input images through these convolutional and pooling layers. The second is that the high-level features output by those layers are passed to fully connected layers that actually perform the classification task, effectively outputting a probability distribution over the image's membership in a set of possible classes. A common way this is achieved is with the softmax function, whose output represents a categorical probability distribution over the set of classes you're interested in.
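Putting the three operations together, here is a hedged sketch of what such a classification CNN might look like in Keras. The layer sizes, filter counts, input shape, and 10-class output are arbitrary choices for illustration, not the specific architecture from the slides.

```python
import tensorflow as tf

# A small CNN: (conv -> ReLU -> max-pool) blocks for feature learning,
# followed by fully connected layers and a softmax for classification.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, kernel_size=3, activation="relu",
                           input_shape=(28, 28, 1)),   # convolution + non-linearity
    tf.keras.layers.MaxPool2D(pool_size=2),            # 2x2 max pooling, stride 2
    tf.keras.layers.Conv2D(64, kernel_size=3, activation="relu"),
    tf.keras.layers.MaxPool2D(pool_size=2),
    tf.keras.layers.Flatten(),                         # hand the learned features to the classifier
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),   # categorical probability distribution
])

# Cross-entropy loss on the softmax output, optimized with backpropagation.
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

Calling model.fit on labeled images would then learn both the filter weights and the fully connected weights, which is exactly the training step described next.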
The final key piece is how we train this, and it's the same idea introduced in lecture one: backpropagation. The important thing to keep in mind is what it is we're learning. When we train a CNN, we learn the weights of those convolutional filters — at another level of abstraction, you can think of this as learning which features the network should detect — and, in addition, we learn the weights of the fully connected layers if we're performing a classification task. Since our output in this case is a probability distribution, we can use the cross-entropy loss introduced in lecture one and optimize via backpropagation. Arguably the most famous examples of CNNs for classification — maybe of CNNs in general — are those trained and tested on the famous ImageNet dataset. ImageNet is a massive dataset with over 14 million images across more than 20,000 different categories; it was created and curated by a lab at Stanford and has become a very widely used dataset across machine learning. Just to take an example, ImageNet contains 1,409 different pictures of bananas, and even better than the size of the dataset, I really appreciated its description of what a banana is — succinctly, an "elongated crescent-shaped yellow fruit with soft sweet flesh" — which both gives a pretty good description of a banana and speaks to its obvious deliciousness. The creators of ImageNet also created a set of visual recognition challenges on their dataset, most notably the ImageNet classification task, which is really simple: produce a list of the object categories present in images, across a ground-truth set of 1,000 different categories. In this competition, they measured the accuracy of the submitted models in terms of the rate at which a model failed to include the correct label in its top five predictions for a particular image. The results of this classification challenge are pretty astonishing. In 2012, a CNN won the challenge for the first time — the famous AlexNet — and since then, neural networks have completely dominated the competition, with the state-of-the-art error decreasing year after year and surpassing human error in 2015 with the famous ResNet architecture, which had 152 convolutional layers in its design. Along with the improved accuracy, the number of layers in these networks has steadily increased — take that as you will, but there is something to be said about building deeper and deeper networks to achieve higher and higher accuracy. So far we've only talked about classification, but in truth the CNN is an extremely flexible architecture that has been shown to be really powerful for a number of different applications. When we considered a CNN for classification, I showed a general pipeline schematic with two parts: the feature-learning part and the classification part. What makes a convolutional neural network a convolutional neural network is really the feature-learning portion; after that, we can change the second part to suit the application we desire. That second portion will look different for different image-classification domains, and we can also introduce new architectures for different types of tasks, such as object detection, segmentation, and image captioning. I'd like to consider three applications of CNNs beyond image classification: semantic segmentation, where the task is to assign each pixel in the image an object class to produce a segmentation of the image; object detection, where we want to detect instances of specific objects in the image; and image captioning, where the task is to generate a language description of the image that captures its semantic meaning.
First, let's talk about semantic segmentation, with an architecture called fully convolutional networks, or FCNs. Here, the network takes an input of arbitrary size and produces an output of corresponding size in which each pixel has been assigned an object class, which we can then visualize as a segmentation. As before, we have a series of convolutional layers arranged hierarchically to create a learned hierarchy of features, but we then supplement these down-sampling operations with up-sampling operations that increase the resolution of the output of the feature-learning layers, and we can combine the outputs of the up-sampling layers with those of the down-sampling layers to actually produce a segmentation. One application of this kind of architecture is the real-time segmentation of driving scenes. This was a result from a couple of years ago, where the authors used an encoder-decoder-like structure — down-sampling layers followed by up-sampling layers — to produce these segmentations, and last year this was roughly the state-of-the-art performance you could achieve with an architecture like this for semantic segmentation. Deep learning is moving extremely fast, and the new state of the art in semantic segmentation is what you see here: it's actually from the same authors as that previous result, but with an improved architecture in which a single network is trained to do three tasks simultaneously — semantic segmentation, shown here, depth estimation, and instance segmentation, which means identifying different instances of the same object type. As you can see in the upper-right corner, these segmentation results are pretty astonishing and have significantly improved in crispness compared to the previous result. Another way CNNs have been extended is for object detection, where the task is to learn features that characterize particular regions of the input image and then classify those regions as belonging to particular object classes. One pipeline for doing this is an architecture called R-CNN, and it's pretty straightforward: given an input image, the algorithm extracts a set of region proposals bottom-up, computes features for these proposals using convolutional layers, and then classifies each region proposal. There have been many different approaches to computing and estimating these region proposals — step two of the pipeline — and this has resulted in a number of extensions of this general principle. The final application I'd like to consider is image captioning. Suppose we're given this image of a cat riding a skateboard. In classification, our task could be to output the class label for this particular image — cat — and as we saw, this is done by feeding the image through a set of convolutional layers to extract features and then a set of fully connected layers to generate a prediction. In image captioning, what we want to do is generate a sentence that describes the semantic content of the image. If we take that same CNN from before and, instead of fully connected layers at the end, attach an RNN, we can use the convolutional layers to extract and encode visual features, feed them into a recurrent neural network — which we learned about yesterday — and then generate a sentence that describes the semantic content
that's present in that image and the reason we can do this is that the output of these convolutional layers gives us a fixed length encoding that initializes our that we can use to initialize an RNN and train it on this captioning task so these are three very concrete fundamental applications of of CNN's but to take it a step further in terms of sort of the depth and breadth of impact that these sort of architectures have had across a variety of fields I'd like to first appreciate the fact that these advances would not have been possible without the curation and availability of large well annotated image datasets and this is what has been fundamental to really rapidly accelerating the progress in the development of convolutional neural networks and so some really famous examples of image data sets are shown here amnesty in today's lab image net which I already mentioned and the places data set which is out of MIT of different scenes and landscapes and as I as I sort of alluded to the impact of these sorts of approaches has been extremely far-reaching and and deep no pun intended and one area that convolutional neural networks have been have made a really big impact is in face detection and recognition software and this is you know every time you pick up your phone this your your phone is running these sorts of algorithms to pick up you know your face and your friends face and this type of software is pretty much everywhere from social media to security and in today's lab you'll have the chance to build a CNN based architecture for facial detection and you'll actually take this a step further by exploring how these how these models can be potentially biased based on the nature of the training data that they use another application area that has led to a lot of excitement is in autonomous vehicles and self-driving cars so the man in this video was a guest lecturer that we had last year and he was really fun and dynamic and this is work from Nvidia where they have this pipeline where they take a single image from a camera on the car feed it into a CNN that directly outputs a single number which is a predicted steering wheel angle and Beyond self-driving cars NVIDIA has a really large-scale research effort that's focused on computer vision and on Friday we'll hear from the leader of Nvidia's entire computer vision team and he'll talk about some of the latest and greatest research that they're doing there finally there's been a pretty significant impact of of these types of architectures in medicine and healthcare where deep learning models are being applied to the analysis of a whole host of types of medical images so this is a paper from Nature Medicine from just a few weeks ago where it was a multi Institute team and they presented a CNN that uses a pretty standard architecture to identify rare genetic conditions from analysis of just a picture of a child's face and in their paper they report that their model can actually outperform physicians when Tess on a set of images that are would be relevant to a clinical scenario and one reason that work like this is really exciting is because it presents sort of another standard approach standardized approach to identifying and diagnosing in this case genetic disorders and you can imagine that this could be combined with already existing clinical tests to improve classification or subtyping alright so to summarize what we've covered in today's lecture we first considered sort of the origins of the computer vision problem and how we can represent 
images as arrays of brightness values and what convolutions are and how they work and we then discussed the basic architecture of convolutional neural networks and kind of went in depth on how cnn's can be used for classification and finally we talked a bit about extensions and the applications of the basic CNN architecture and why they have been so impactful over the past several years so I'm happy to take questions at the end of the at the end of the lecture portion you feel free feel free to come to the front to speak to Alexander or myself so with that I'm going to hand it off to him for the second lecture of today [Applause]
MIT_6S191_Introduction_to_Deep_Learning
MIT_6S191_2018_Sequence_Modeling_with_Neural_Networks.txt
hi everybody, my name is Harini, and I'm going to be talking about how to use neural networks to model sequences. In the previous lecture you saw how you could use a neural network to model a dataset of many examples; the difference with sequences is that each example consists of multiple data points, there can be a variable number of these data points per example, and the data points can depend on each other in complicated ways. A sequence could be something like a sentence — "this morning I took the dog for a walk" — which is one example but consists of multiple words, and the words depend on each other. Another example would be something like a medical record: one medical record is one example, but it consists of many measurements. Another would be a speech waveform, where one waveform is one example but again consists of many, many measurements. You've probably encountered sequence modeling tasks in your everyday life, especially if you've used things like Google Translate, Alexa, or Siri: tasks like machine translation and question answering are all sequence modeling tasks, and the state of the art in these tasks is mostly deep-learning based. Another interesting example I saw recently was the self-parking car by Audi: when you think about it, parking is also a sequence modeling task, because parking is just a sequence of movements and the next movement depends on all the previous movements. You can watch the rest of this video online. Okay, so: a sequence modeling problem. I'm going to walk through one to motivate why we need a different framework specifically for modeling sequences and what we should be looking for in that framework. The problem is predicting the next word: given these words, we want to predict what comes next. The first problem we run into is that machine learning models that are not explicitly designed to deal with sequences take as input a fixed-length vector. Think back to the feed-forward neural network from the first lecture that Alexander introduced: we have to specify the size of the input right at the outset; we can't sometimes feed in a vector of length ten and other times feed in a vector of length twenty — it has to be a fixed length. This is an issue for sequences, because sometimes we might have seen ten words and want to predict the next word, and sometimes we might have seen four. So we have to get that variable-length input into a fixed-length vector. One simple way to do this would be to just cut the sequence off: take a fixed window, forcing the vector to be fixed length by only considering the previous two words no matter where we're making the prediction — take the previous two words and try to predict the next one. We can represent these two words as a fixed-length vector by creating a larger vector and allocating space in it for the first word and for the second word. Now we have a fixed-length vector no matter which two words we use, and we can feed it into a machine learning model — a feed-forward neural network, a logistic regression, or any other model — and try to make a prediction. One thing you might notice is that by using this fixed window, we're giving ourselves a very limited history: we're trying to predict the word "walk" having only seen the words "for" and "a," which is almost impossible. Put differently, it's really hard to model long-term dependencies. To see this clearly, consider the word
in sorry consider the sentence in France I had a great time and I learned some of the blank language where we're trying to predict the word in the blank I knew it was French but that's because I looked very far back at the word France that appeared in the beginning of the sentence if we were only looking at the past two words or the past three words or even the past five words it would be really hard to guess the word in that blank so we don't want to limit ourselves so much we want to ideally use all of the information that we have in the sequence but we also need a fixed length vector so one way we could do this is by using the entire sequence but representing it as a set of counts in language this representation is also known as a bag of words all this is is a vector in which each slot represents a word and the number in that slot represents the number of times that that word occurs in the sentence so here the second slot represents the word this and there's a 1 because this appears once in the sentence now we have a fixed length vector no matter how many words we have the vector will always be the same size the counts will just be different we can feed this into a machine learning model and try to make a prediction the problem you may be noticing here is that we're losing all of the sequential information these counts don't preserve any order that we had in the sequence to see why this is really bad considered these two sentences the food was good not bad at all versus the food was bad not good at all these are completely opposite sentences but their bag of words representation would be exactly the same because they contain the same set of words so by representing our sentences counts we're losing all of the sequential information which is really important because we're trying to model sequences ok so what do we know now we want to preserve order in the sequence but we also don't want to cut it off to to a very short length you might be saying well why don't we just use a really big fixed window before we were having issues because we were just using a fixed window of size 2 what if we extended that to be a fixed window of size 7 and we think that by looking at 7 words we can get most of the context that we need well yeah ok we can do that now we have another fixed length vector just like before it's bigger but it's still fixed length we have allocated space for each of the 7 words we can feed this into a model and try to make a prediction the problem here is that and consider this in the scenario where we're feeding this input vector into a feed-forward neural network each of those inputs each of those ones and zeros has a separate weight connecting it to the network if we see the words this morning at the beginning of the sentence very very commonly the network will learn that this morning represents a time or a setting if this morning then appears at the end of the sentence we'll have a lot of trouble recognizing that because the weights at the end of the vector never saw that phrase before and the weights from the beginning of the vector and not being shared with the end in other words things we learn about the sequence won't transfer if they appear at different points in the sequence were not sharing any parameters all right so you kind of see all the problems that arise with sequences now and why we need a different framework specifically we want to be able to deal with variable length sequences we want to maintain sequence order so we can keep all about sequential information we 
want to keep track of longer-term dependencies rather than cutting the sequence off too short, and we want to be able to share parameters across the sequence so we don't have to relearn things at different positions in the sequence. Because this is a class about deep learning, I'm going to talk about how to address these problems with neural networks — but know that time-series and sequential modeling is a very active field in machine learning, and there are lots of other machine learning methods that have been developed to deal with these problems. For now, I'll talk about recurrent neural networks. A recurrent neural network is architected in the same way as a normal neural network: we have some inputs, some hidden layers, and some outputs. The only difference is that each hidden unit is doing a slightly different function, so let's take a look at one hidden unit to see exactly what it's doing. A recurrent hidden unit computes a function of an input and of its own previous output. Its own previous output is also known as the cell state, and in the diagram it's denoted by s, with the subscript indicating the time step. At the very first time step, t = 0, the recurrent unit computes a function of the input at t = 0 and of its initial state; at the next time step it computes a function of the new input and its previous cell state. If you look at the function at the bottom — the function for computing s2 — you'll see it's really similar to what a unit in a feed-forward network computes; the only difference is that we're adding an additional term to incorporate the unit's own previous state. A common way of viewing recurrent neural networks is by unfolding them across time: this is the same hidden unit at different points in time, and at every point in time it takes as input its own previous state and the new input at that time step. One thing to notice is that throughout the sequence we're using the same weight matrices, W and U. This solves our problem of parameter sharing: we don't have new parameters for every point in the sequence, so once we learn something, it can apply at any point in the sequence. This also helps us deal with variable-length sequences, because we're not pre-specifying the length of the sequence and we don't have separate parameters for every position — in some cases we can unroll this RNN to four time steps, in other cases to ten. A final thing to notice is that s_n, the cell state at time n, can contain information from all of the past time steps: each cell state is a function of the previous cell state, which is itself a function of the previous cell state, and so on. This addresses our issue of long-term dependencies, because at a time step very far in the future, the cell state encompasses information about all of the previous cell states. So now you understand what a recurrent neural network is — and just to clarify, I've shown you one hidden unit in the previous slide, but in a full network you would have many, many of those hidden units, and even many layers of many hidden units.
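Here is a minimal NumPy sketch of the recurrent update just described: the new cell state is a function of the current input and the previous state, with the same weight matrices reused at every time step. Following the lecture's naming, W multiplies the input and U multiplies the previous cell state; the dimensions and the tanh non-linearity are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
input_dim, hidden_dim = 4, 3

# The same parameters are shared across all time steps.
W = rng.normal(size=(hidden_dim, input_dim))    # applied to the input x_t
U = rng.normal(size=(hidden_dim, hidden_dim))   # applied to the previous state s_{t-1}

def rnn_step(x_t, s_prev):
    """s_t = tanh(W x_t + U s_{t-1}): the new cell state depends on the
    current input and on the unit's own previous output."""
    return np.tanh(W @ x_t + U @ s_prev)

s = np.zeros(hidden_dim)                                     # initial cell state s_0
for t, x_t in enumerate(rng.normal(size=(5, input_dim))):    # a toy 5-step input sequence
    s = rnn_step(x_t, s)
    print(f"t={t}  s_t={np.round(s, 3)}")
```

Because the loop just keeps folding each new input into the same state vector, a sequence of any length maps to a fixed-size representation without adding any new parameters.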
As a reminder, in back propagation we want to find the parameters that minimize some loss function. The way that we do this is by first taking the derivative of the loss with respect to each of the parameters, and then shifting the parameters in the opposite direction in order to try and minimize the loss; this process is called gradient descent. One difference with RNNs is that we have many time steps, so we can produce an output at every time step, and because we have an output at every time step, we can have a loss at every time step rather than just one single loss at the end. The way that we deal with this is pretty simple: the total loss is just the sum of the losses at every time step, and similarly the total gradient is just the sum of the gradients at every time step. So we can try this out by walking through this gradient computation for a single parameter, W. W is the weight matrix that we're multiplying by our inputs. We know that the total gradient, so the derivative of the loss with respect to W, will be the sum of the gradients at every time step, so for now we can focus on a single time step, knowing that at the end we would do this for each of the time steps and then sum them up to get the total gradient. So let's take time step two. We can solve this gradient using the chain rule: the derivative of the loss with respect to W is the derivative of the loss with respect to the output, times the derivative of the output with respect to the cell state at time two, times the derivative of the cell state with respect to W. This seems fine, but let's take a closer look at this last term. You'll notice that s2 also depends on s1, and s1 also depends on W, so we can't just leave that last term as a constant; we actually have to expand it out farther. Okay, so how do we expand this out farther? What we really want to know is how exactly the cell state at time step 2 depends on W. Well, it depends directly on W, because W feeds right in. We also saw that s2 depends on s1, which depends on W, and you can also see that s2 depends on s0, which also depends on W. In other words, and here I'm just writing the sum from the previous slide in summation form, you can see that the last two terms are basically summing the contributions of W at previous time steps to the error at time step t. This is key to how we model longer term dependencies: this gradient is how we shift our parameters, and our parameters define our network, so by shifting our parameters such that they include contributions to the error from past time steps, they're shifted to model longer term dependencies. And here I'm just writing it as a general sum, not just for time step two. Okay, so this is basically the process of back propagation through time: you would do this for every parameter in your network and then use that in the process of gradient descent. In practice, RNNs are a bit difficult to train, so I want to go through why that is and some ways that we can address these issues. Let's go back to this summation. As a reminder, this is the derivative of the loss with respect to W, and this is what we would use to shift our parameters W; the last two terms are considering the error of W at all of the previous time steps. Let's take a look at this one term: this is the derivative of the cell state at time step two with respect to each of the previous cell states. You might notice that this itself is also a chain rule, because s2 depends on s1 and s1 depends on s0, so we can expand this out farther. This is just the derivative of s2 with respect to s0, but what if we were looking at a time step very far in the future, like time step n? That term would expand into a product of n terms.
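Written out symbolically, the quantities being described look roughly like this (a sketch using the notation from the slides, with y_t the output and s_t the cell state at time t):

```latex
\frac{\partial L}{\partial W} = \sum_{t} \frac{\partial L_t}{\partial W},
\qquad
\frac{\partial L_2}{\partial W}
  = \sum_{k=0}^{2}
    \frac{\partial L_2}{\partial y_2}\,
    \frac{\partial y_2}{\partial s_2}\,
    \frac{\partial s_2}{\partial s_k}\,
    \frac{\partial s_k}{\partial W},
\qquad
\frac{\partial s_n}{\partial s_0}
  = \prod_{k=1}^{n} \frac{\partial s_k}{\partial s_{k-1}}.
```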
Okay, you might be thinking, so what? Well, notice that as the gap between time steps gets bigger and bigger, this product in the gradient gets longer and longer, and if we look at each of these terms, they all take the same form: the derivative of a cell state with respect to the previous cell state. That term can be written like this, and the actual formula isn't that important; just notice that it's a product of two kinds of terms, W's and f primes. The W's are our weight matrices; these are sampled mostly from a standard normal distribution, so most of the terms will be less than one. f prime is the derivative of our activation function; if we use an activation function such as the hyperbolic tangent or a sigmoid, f prime will always be less than one. In other words, we're multiplying a lot of small numbers together in this product. Okay, so what does this mean? Recall that this product is how we're adding the gradient from past time steps to the gradient at a future time step. What's happening, then, is that errors due to further and further back time steps have increasingly smaller gradients, because that product for further back time steps will be longer, and since the numbers are all decimals, it will become increasingly smaller. What this ends up meaning, at a high level, is that our parameters will become biased to capture shorter term dependencies; the errors that arise from further and further back time steps will be harder and harder to propagate into the gradient at future time steps. Recall the example that I showed at the beginning: the whole point of using recurrent neural networks is that we wanted to model long term dependencies, but if our parameters are biased to capture short term dependencies, then even if they see the whole sequence, the parameters will become biased to predict things based mostly on the past couple of words. Okay, so now I'm going to go through a couple of methods that are used to address this issue in practice and that work pretty well. The first one is the choice of activation function. You saw that one of the terms making that product really small was the f prime term; f prime is the derivative of whatever activation function we choose to use. Here I've plotted the derivatives of some common activation functions, and you can see that the derivative of hyperbolic tangent and sigmoid is always less than one; in fact, for sigmoid it's always less than 0.25. If instead we choose to use an activation function like ReLU, the derivative is always 1 above zero, so that will at least prevent the f prime terms from shrinking the gradient. Another solution would be how we initialize our weights. If we initialize the weights from a normal distribution, they'll be mostly less than 1 and they'll immediately shrink the gradients; if instead we initialize the weights to something like the identity matrix, it'll at least prevent that W term from shrinking the product, at least at the beginning. The next solution is very different: it involves actually adding a lot more complexity to the network, using a more complex type of cell called a gated cell. Rather than each node just being that simple RNN unit that I showed at the beginning, we'll replace it with a much more complicated cell.
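Before getting into gated cells, here is a tiny numerical sketch of the effect just described; the factor value is just an example of one weight term times one activation derivative, both typically less than one:

```python
# Each factor stands in for one d(s_k)/d(s_{k-1}) term, which is roughly
# a weight term times an activation derivative (both typically < 1).
factor = 0.9 * 0.25          # e.g. a smallish weight times a sigmoid derivative

for gap in [1, 5, 10, 50]:
    contribution = factor ** gap   # the product grows longer as the gap grows
    print(f"gap of {gap:2d} time steps -> gradient factor ~ {contribution:.2e}")

# The contribution from 50 steps back is astronomically smaller than from
# 1 step back, so the parameters end up biased toward short-term dependencies.
```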
A very common gated cell is something called an LSTM, or long short-term memory. Like its name implies, LSTM cells are able to keep memory within the cell state unchanged for many time steps, and this allows them to effectively model longer-term dependencies. I'm going to go through a very high-level overview of how LSTMs work, but if you're interested, feel free to email me or ask me afterwards and I can direct you to some more resources to read about LSTMs in a lot more detail. All right, so LSTMs basically have a three-step process. The first step is to forget irrelevant parts of the cell state; for example, if we're modeling a sentence and we see a new subject, we might want to forget things about the old subject, because we know that future words will be conjugated according to the new subject. The next step is an update step: here is where we actually update the cell state to reflect the information from the new input. In this example, like I said, if we've just seen a new subject, this is where we actually update the cell state with the gender, or with whether the new subject is plural or singular. Finally, we want to output certain parts of the cell state: if we've just seen a subject, we have an idea that the next word might be a verb, so we'll output information relevant to predicting a verb, like the tense. Each of these three steps is implemented using a set of logic gates, and the logic gates are implemented using sigmoid functions. To give you some intuition on why LSTMs help with the vanishing gradient problem: first, the forget gate, the first step, can equivalently be called the remember gate, because there you're choosing what to forget and what to keep in the cell state. The forget gate can choose to keep information in the cell state for many, many time steps; there's no activation function or anything else shrinking that information. The second thing is that the cell state is separate from what's outputted. This is not true of normal recurrent units: like I showed you before, in a simple recurrent unit the cell state is the same thing as what that cell outputs. An LSTM has a separate cell state, and it only needs to output information relevant to the prediction at that time step. Because of this, it can keep information in the cell state which might not be relevant at this time step but might be relevant at a much later time step, so we can keep that information without being penalized for it. Finally, I didn't indicate this explicitly in the diagram, but the update step happens through an additive function, not a multiplicative one, so when we take the derivative there's not a huge expansion. So now I just want to move on to going over some possible tasks. The first task is classification: here we want to classify tweets as positive, negative, or neutral, and this task is also known as sentiment analysis. The way that we would design a recurrent neural network to do this is actually not by having an output at every time step; we only want one output for the entire sequence. So we'll take in the entire sequence, the entire tweet, one word at a time, and at the very end we'll produce an output, which will actually be a probability distribution over possible classes, where our classes in this case would be positive, negative, or neutral. Note that the only information producing the output at the end is the final cell state, so that final cell state has to summarize all of the information from the entire sequence.
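As a sketch of what this many-to-one setup might look like in code, here is one way you could write it with Keras; the vocabulary size, layer dimensions, and three-class output are just illustrative choices, not the exact lab architecture:

```python
import tensorflow as tf

# A many-to-one sentiment classifier: the LSTM reads the whole tweet,
# and only its final state is used to produce a single prediction.
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=10000, output_dim=64),  # word ids -> vectors
    tf.keras.layers.LSTM(128),                                  # returns only the final state
    tf.keras.layers.Dense(3, activation="softmax"),             # positive / negative / neutral
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```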
So we can imagine, if we have very complicated tweets, or, well, I don't know if that's possible, but very complicated paragraphs or sentences, we might want to create a bigger network with more hidden states to allow that last state to be more expressive. The next task would be something like music generation, and I'll see if this will play so you can hear it... Okay, so that was music generated by an RNN, which is pretty cool and something you're actually also going to do in the lab today. An RNN can produce music because music is just a sequence, and the way that you would construct a recurrent neural network to do this would be, at every time point, taking in a note and producing the most likely next note given the notes that you've seen so far; so here you would produce an output at every time step. The final task is machine translation. Machine translation is interesting because it's actually two recurrent neural networks side by side. The first is an encoder: the encoder takes as input a sentence in a source language, like English. It's then followed by a decoder, which produces the same sentence in a target language, like French. Notice in this architecture that the only information passed from the encoder to the decoder is the final cell state, and the idea is that that final state should be a kind of summary of the entire encoder sentence; given that summary, the decoder should be able to figure out what the encoder sentence was about and then produce the same sentence in a different language. You can imagine, though, that maybe this is possible for a really simple sentence like "the dog eats"; maybe we can encode that in the final cell state. But if we had a much more complicated sentence, or a much longer sentence, it would be very difficult to summarize the whole thing in that one cell state. So what's typically done in practice for machine translation is something called attention. With attention, rather than just taking in the final cell state, the decoder at each time step takes in a weighted sum of all of the previous cell states. So in this case, when we're trying to produce the first word, we'll take in a weighted sum of all of the encoder states; most of the weight will probably be on the first state, because that's what would be most relevant to producing the first word. Then when we produce the second word, most of the weight will probably be on the second cell state, but we might have some on the first and the third, to try and get an idea of the tense or the gender of the noun, and the same thing for all of the cell states. The way that you implement this is just by including those weights in the weighted sum as additional parameters that you train using back propagation, just like everything else. Okay, so I hope that you now have an idea of why we need a different framework to model sequences and how recurrent neural networks can solve some of the issues that we saw at the beginning, as well as an idea of how to train them and how to address some of the vanishing gradient problems. I've talked a lot about language, but you can imagine using these exact same recurrent neural networks for modeling time series or waveforms, or doing other interesting sequence prediction tasks like predicting stock market trends or summarizing books or articles, and maybe you'll consider some sequence modeling tasks for your final project. So thank you [Applause]
MIT_6S191_Introduction_to_Deep_Learning
MIT_6S191_Deep_CPCFG_for_Information_Extraction.txt
thank you. so i lead AI at EY globally, and today we're going to talk to you about some work we've done in information extraction, specifically deep CPCFG, so hopefully we'll introduce some concepts that you may not have come across before. before we get started, maybe a couple of disclaimers: the views expressed in this presentation are mine and Freddy's, they're not necessarily those of our employer, and i'll let you read those other disclaimers. AI is very important for Ernst & Young. many of you will be familiar with the firm: we are a global organization of more than 300,000 people in more than 150 countries, and we provide a range of services to our clients that include assurance, consulting, strategy and transformations, and tax. both in the context of the services that we deliver and in the context of our clients' own transformations, AI is incredibly important. we see a huge disruption happening for many industries, including our own, driven by AI, and because of that we're making significant investments in this area. we've established a global network of AI research and development labs around the world, as you see in various parts of the world, and we also have a significant number of global AI delivery centers. there's huge energy and passion for AI within EY, so we have a community of more than four and a half thousand members internally, we maintain meaningful relationships with academic institutions including, for example, MIT, and we of course engage heavily with policy makers, regulators, legislators and NGOs around issues related to AI. some areas of particular focus for us: the first is document intelligence, which we'll talk about today, and this is really the idea of using AI to read and interpret business documents. the key phrase there is business documents: we're not necessarily talking about emails or web posts or social media posts or product descriptions, we're talking more about things like contracts, lease contracts, revenue contracts, employment agreements; we're talking about legislation and regulation; we're talking about documents like invoices and purchase orders and proofs of delivery. so there's a very wide range of these kinds of documents, hundreds, thousands, maybe tens of thousands of different types, and we see them in more than 100 languages, in more than 100 countries, in tens of different industries and industry segments. at EY we have built some, i think, really compelling technology in this space: we've built some products that are deployed now in more than 85 countries and used by thousands of engagements, and this space is sufficiently important to us that we helped to co-organize the first ever workshop on document intelligence at NeurIPS in 2019, and of course we publish and patent in this area. two other areas that are important to us, which will not be the subject of this talk but that i thought i would just allude to: the first is transaction intelligence. the idea here is that we see many transactions that our clients execute, and we review those transactions for tax purposes and also for audit purposes, and we'll process hundreds of billions of transactions a year. it's this enormous amount of transaction data, and there's huge opportunity for machine learning and AI to help analyze those transactions, to help determine for example tax or accounting treatments, but also to do things like identify anomalies or unusual behavior or potentially identify fraud. another area that's very important for us is trusted AI, so given
the role that we play in the financial ecosystem we are a provider of trust to the financial markets it's important that we help our clients and ourselves and the ecosystem at large build trust around ai and we engage heavily with academics ngos and regulators and legislators in order to achieve this but the purpose of this talk is really to talk about the purpose of this talk is really to talk about document intelligence and specifically we're going to talk about information extraction from what we call semi-structured documents things like tax forms you see the document on the screen in front of you here this is a tax form there's going to be information in this form that that is contained in these boxes and so we'll need to pull that information out these forms tend to be relatively straightforward because it's consistently located in these positions but you see to read this information or to extract this information it's not just a matter of reading text we also have to take layout information into account and there are some complexities even on this document right and there's a description of property here this is a list and so we don't know how many entries might be in this list might be one there might be two there might be more of course this becomes more complicated when these documents for example are handwritten or when they're scanned or when they're more variable like for example uh a check so many of you may have personalized checks and those checks are widely varied in terms of their background in terms of their layout typically they're handwritten and typically when we see them they're scanned often scanned at pretty poor quality and so pulling out the information there can be a challenge and again this is driven largely by the variability we have documents like invoices the invoice here is very very simple and but you'll note the key thing to know here is that there are line items and there are multiple line items each of these corresponds to an individual transaction under that invoice and there may be zero line items or there may be ten or hundreds or thousands or even in some cases tens of thousands of line items in an invoice invoices are challenging because a large enterprise might have hundreds or thousands or even tens of thousands of vendors and each one of those vendors will have a different format for their invoice and so if you try to hand code rules to extract information from these documents it tends to fail and so machine learning approaches are designed really to deal with that variation in these document types and this complex information extraction challenge let me just talk about a couple of other examples here and you'll see a couple of other problems this receipt on the left hand side is pretty typical right it's a scan document clearly it's been crumpled or creased a little bit the the information that's been entered is offset by maybe half an inch and so this customer id is not lined up with the customer id tag here and so that creates additional challenges and this document on the right hand side is a pretty typical invoice and you'll see the quality of the scan is relatively poor it contains a number of line items here and the layout of this is quite different than the invoice we saw on the previous slide or on the receipt on the left hand side here so there's lots of variability for the duration of the talk we're going to refer to this document which is a receipt and we're going to use this to illustrate our approach so our goal here is to extract 
key information from this receipt and so let's talk a little bit about the kinds of information we want to extract so the first kind is what we call header fields and this includes for example the date of that receipt right when when were these transactions executed it includes a receipt id um or an invoice id and it might include this total amount and these pieces of information are very important for accounting purposes make sure that we have paid the receipts and paid the invoices that we should they're important for tax purposes is this expense and taxable or not taxable have we paid the appropriate sales tax and so we do care about pulling this information out we refer to these as header fields because often they do appear at the top of the document and usually it's at the top or the bottom and but they're information that appear typically once right there's one total amount for an invoice or receipt there aren't multiple values for that and so there's a you know a fairly obvious way that we could apply deep learning to this problem we can take this document and run it through optical character recognition and optical character recognition and services or vendors have gotten pretty good and so they can produce essentially bounding boxes around tokens so they produce a number of bounding boxes and each boundary box contains a token and some of these tokens relate to the information we want to extract so this five dollars and ten cents is the total and so what we could do is we could apply deep learning to classify these bounding boxes right we can use the input to that deep learning could be this whole image it could be some context of that bounding box but we can use it to classify these bounding boxes and that can work reasonably well for this header information but there are some challenges so here for example there is this dollar amount five dollars and 10 cents there's also 4.50 60 cents 60 cents over here 1.50 if we independently classify all of those we may get multiple of them being tagged as the receiptal and how do we disambiguate those so this problem of disambiguation is fundamental and and what often happens in these systems is that there is post-processing that encodes heuristics or rules that are human engineered to resolve these ambiguities and that becomes a huge source of brittleness and this huge maintenance headache over time and we'll say more about that later the other kind of challenge we see here are fields like this vendor address so this vendor address contains multiple tokens and so we need to classify multiple tokens as belonging to this vendor address and then we have the challenges to which of those tokens actually belong to the vendor address how many of them are there and what order do we read them in to recover this address so while a straightforward machine learning approach can achieve some value it still re leaves many many problems to be resolved that are typically resolved with this hand-engineered post-processing this becomes even more challenging for line items and throughout the talk we'll emphasize line items because this is where many of the significant significant challenges arise so here we have two line items they both correspond to transactions for buying postage stamps maybe they're different kinds of postage stamp and each one will have a description posted stamps this one has a transaction number associated with the two it will have a total amount for that transaction might have a quantity you might have a unit price right so there's 
multiple pieces of information that we want to extract so now we need to identify where that information is we need to identify how many line items there are we need to identify which line items this information is associated with so is this 60 cents associated with this first line item or this second one and we as humans can read this and computers obviously have a much harder time especially given the variability there are thousands of different ways in which this information might be organized so this is the fundamental challenge so these are the documents we want to to read and on the other side of this are is the system of record data right typically this information will be pulled out typically by human beings and entered into some database this illustrates some kind of database schema if this was a relational database we might have two tables the first table contains the header information and the second table contains all of the line items so this is the kind of data that we might have in a system of record this is both the information we might want to extract but also uh the information that's available to us for training this system for the purposes of this talk it's going to be more appropriate to think of this uh as a document type schema think of it as json for example where we have the header information now the first three fields and this is not exactly json schema but it's meant to look like that so it's these first three fields have some kind of type information and then we have a number of line items and the number of line items isn't specified it may be zero and maybe more than one and then each one of those has has its own information so our challenge then is to extract this kind of information from those documents and the training data we have available is raw documents and this kind of information and so i want to take a little aside for a second and talk about our philosophy of deep learning and you know many people think about deep learning simply as large deep networks we have a slightly different philosophy and if you think how classical machine learning systems were built the first thing that we would do is decompose the problem into sub pieces and those sub pieces in this case might include for example something to classify bounding boxes it might include something to identify tables or extract rows and columns of tables each one of them then becomes its own machine learning problem and in order to solve that machine learning problem we have to define some learning objectives and we have to find some data and then we have to train that model and so this creates some challenges right this data does not necessarily naturally exist right we don't for these documents necessarily have annotated bounding boxes that tell us where the information is in the document it doesn't tell us which bounty boxes correspond to the information we want to extract and so in order to train in this classical approach we would have to create that data we also have to define these objectives and there may be a mismatch between the objectives we define for one piece of this problem and another and that creates friction and error propagation as we start to integrate these pieces together and then finally typically these systems have lots of post-processing at the end that is bespoke to the specific document type and is highly engineered so what happens is these systems are very very brittle if we change anything about the system we have to change many things about the system if you want to 
take the system and apply it to a new problem we typically have to re-engineer that post-processing and for us where we have thousands of different types of document documents in hundreds or a hundred plus languages and you know we simply cannot apply engineering effort to every single one of these problems we have to be able to apply exactly the same approach exactly the same software to every single one of them and really to me this is the core value of deep learning as a philosophy is it's about end-to-end training we have this idea that we can train the whole system end-to-end based on the problem we're trying to solve and the data we fundamentally have the natural data that we have available so again we begin by decomposing the problem into sub-pieces but we build a deep network a network component that corresponds to each of those subproblems and then we compose those networks into one large network that we train end to end and this is great because the integration problem appears once when we design this network it's easy to maintain the data acquisition problem goes away because we're designing this as an end-to-end approach to model the natural data that exists for the problem and of course there are some challenges in terms of how we design these networks and really uh that's the key challenge that arises in this case is how do we build these building blocks and how do we compose them and so we're going to talk about how we do this in this case it's about composing deep network components to solve the problem end to end so here what we do is we treat this problem as a parsing problem we take in the documents and we're going to parse them in two dimensions and this is where some of the key innovations are on parsing in two dimensions where we disambiguate the different parses using deep networks and so the deep network is going to tell us of all the potential parse trees for one of these documents which is the most probable or the most that matches the data the best and then we're going to simply read off from that parse tree the system of record data right no post processing we just literally read the parse tree and we read off that json data as output so again just uh uh we so we have these documents on the left-hand side right these input documents we run them through ocr to get the boundary boxes that contain tokens sets the input to the system and then the output of the system is this json record that describes the information we have extracted from the document right it describes the information we extracted it doesn't describe the layout of the document it just describes the information we have extracted okay so that's the fundamental approach and the machinery that we're going to use here uh is context-free grammars and context-free grammars you know anytime you want to parse you have to have a grammar to parse against context for grammars for those of you with a computer science background are really the workhorse of computer science they're the basis for many programming languages and and they're they're nice because they're relatively easy to parse we won't get into the technicalities of a context for grammar i think that the key thing to know here is that they consist of rules rules have a left-hand side and a right-hand side and the way we think about this is we can take the left-hand side and think about think of it as being made up of or composed of the right-hand side so a line item is composed of a description and a total amount the descript description can be 
simply a single token, or it can be a sequence of descriptions; a description can be multiple tokens, and the way we encode that in this kind of grammar is in this recursive fashion. okay, so now we're going to apply this grammar to parse this kind of JSON, and we do have to augment the grammar a little bit to capture everything we care about, but still this grammar is very simple: it's a small, simple grammar that really captures the schema of the information we want to extract. so now let's talk about how we parse a simple line item. we have a simple line item: it's postage stamps, we have three stamps, each at a dollar fifty, for a total of 4.50. the first thing we do is, for each of these tokens, we identify a rule where that token appears on the right-hand side and we replace it with the left-hand side. so if we look at this "postage" token, we replace it by D for description; we could have replaced it by T for total amount, or C for count, or P for price. in this case i happen to know that D is the right choice, so i'm going to use that for illustration purposes, but the key observation is that there is some ambiguity here: it's not clear which of these substitutions is the right one to do, and again this is where the deep learning is going to come in to resolve that ambiguity. so the first stage of parsing is that everywhere we see the right-hand side of a rule, we substitute the left-hand side of the rule, resolving ambiguity as we go. the next step is that, by construction and for technical reasons, these grammars always have either a single token on the right-hand side or two symbols on the right-hand side, and since we've dealt with all the tokens, we're now dealing with these pairs of symbols. so we have to identify pairs of symbols that we substitute, again, with the left-hand side: so here description description is substituted with a description, and likewise count and price are substituted with U. okay, we just repeat this process and get a full parse tree, where the final symbol is a line item; that tells us this whole thing is a line item made up of a description, a count, a price and a total amount. okay, so as i said, there's some ambiguity, and one place for this ambiguity is the three and the dollar fifty: how do we know that this is in fact a count and a price? this could just as easily have been a description and a description. so resolving this ambiguity is hard, but this is the opportunity for learning; this is where the learning comes in, which can learn that typically a dollar fifty is probably not part of the description, it probably relates to some other information we want to extract, and that's what we want the learning to learn. the way we do this is we associate every rule with a score; each rule has a score, and we try to use rules that have high scores so that we produce, in the end, a parse tree with a high total score. so what we're actually going to do is model these scores with a deep network: for every rule we're going to have a deep network corresponding to that rule, which will give the score for that rule.
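to make the grammar side of this concrete, here is a rough sketch of what a small grammar of this kind might look like for the receipt example; the symbol names mirror the ones used in the walkthrough, but this is illustrative, not the exact grammar from the talk:

```
D        -> token          # a token that is part of a description
C        -> token          # a count
P        -> token          # a unit price
T        -> token          # a total amount
D        -> D D            # descriptions can span multiple tokens (recursive)
U        -> C P            # a count followed by a price
Q        -> D U            # a description plus its count/price
LineItem -> Q T            # description, count, price, and total amount
```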
okay, now let me illustrate that on one simple example. here we have "a dollar fifty", and this could be a description, a total amount, a count or a price, and we might intuitively think, well, this should be biased towards total amount or price because it's a monetary value. the way we're going to resolve this is to apply the deep networks corresponding to each of these rules: there are four deep networks, we're going to apply them, and each will return a score, and we expect that over time they will learn so that the network for total amount will have a higher score and the network for price will have a higher score. so that's fundamentally the idea for this bottom set of rules, the token-based rules. for the more involved rules, where we have two symbols on the right-hand side, we have a similar question about resolving ambiguity, and there are two, i think, important insights here. the first is that we do have ambiguity as to how we tag these first tokens: we could do C P or we could do D P, but we quickly see that there is no rule that has D P on the right-hand side, and so the grammar itself helps to correct this error; because there is no rule that would allow this parse, the grammar will correct that error. so the grammar allows us to impose some constraints on the kind of information these documents contain; it allows us to encode some prior knowledge of the problem, and that's really important and valuable. the second kind of ambiguity is where a substitution is allowed but maybe it's not the right answer: in this case, C P could be replaced by a U, and we're going to evaluate the model for that rule based on the left-hand side of the tree and the right-hand side of the tree, so this network has two inputs, the left subtree and the right subtree, and likewise for the rule that tries to model this as description description. each of these models will have a score, which will help us disambiguate between these two choices. so the question then arises: how do we score a full tree? i'm going to introduce a little notation for this, hopefully not too much. the idea is we're going to call the full parse tree T, and we're going to denote the score for that tree by c(T), and i'm going to abuse notation a little bit and re-parameterize c as having three parameters: the first is the symbol at the root of the tree, and the other two are the span of the input covered by that tree; in this case it's the whole sentence, so we use the indices 0 to n. the same notation works for a subtree, where again the first term is the symbol at the root of that subtree and we have the indices of the span; here it's not the whole sentence, it just goes from i to j. okay, so this is the score for a subtree, and we're going to define that, again, in terms of the deep network corresponding to the rule at the root of that subtree, but that subtree is also made up of other subtrees, so we're going to have terms that correspond to the left-hand side and the right-hand side of the tree. so these scores are defined recursively in terms of the trees that make them up. and i do see there's a question here, which says: by taking into account the grammar constraints, does this help the neural network learn and be more sample efficient? yes, exactly; the point is that we know things about the problem that allow the network to be more efficient, because we've applied that prior knowledge, and that's really helpful in complex problems like this.
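in symbols, the recursion just described might be written roughly as follows; this is a sketch using the notation above, where phi_r denotes the deep network that scores rule r (applied to the left and right subtrees, as described in the talk) and k is the split point between them:

```latex
c(A, i, j) \;=\; \phi_{A \rightarrow B\,C}\big(\text{left subtree},\ \text{right subtree}\big)
              \;+\; c(B, i, k) \;+\; c(C, k, j),
\qquad
c(A, i, i{+}1) \;=\; \phi_{A \rightarrow \text{token}}(\text{token}_i).
```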
okay, so as i said, we've defined the score for the entire tree in terms of these three terms, and it's key to note that what we actually want is for this to be the best possible score that could result in D at the root. so, in fact, we're going to model this score as the max over all possible rules that end with a D and over all possible ways to parse the left tree and the right tree, and this might seem challenging, but actually we can apply dynamic programming fairly straightforwardly to find this in an efficient manner. okay, so we've defined a scoring mechanism for parse trees for these line items, and what happens is that we're then able to choose among all possible parse trees using that score; the one with the highest score is the one that we consider the most likely, the most likely one is the one that contains the information we care about, and then we just read that information off the parse tree. so we're now going to train the deep network in order to select those most likely, or most probable, parse trees. just a reminder: the first term, as i've mentioned, is a deep network, there's a deep network for every single one of these rules, and then we have these recursive terms that, again, are defined in terms of these deep networks. so if we unroll this recursion, we will build a large network composed out of the networks for each of these individual rules, and we'll build that large network for each and every document that we're trying to parse. so the deep network is this dynamic object that has been composed out of solutions to these sub-problems, in order to identify which is the correct parse tree. so how do we train that? it's fairly straightforward: we use an idea from structured prediction. the idea in structured prediction is that we have some structure, in this case the parse tree, and we want to maximize the score of good parse trees and minimize the score of all other parse trees. so what this loss function, this objective, is trying to do is maximize the score of the correct parse trees, the ones that we see in our training data, and minimize the scores of all other parse trees, the ones that we don't see in our training data, and this can be optimized using back propagation and gradient descent, or any of the machinery that we have from deep learning. so this now becomes a classic deep learning optimization problem, and the result of this is an end-to-end system for parsing these documents.
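one common way to write an objective of this kind is a margin-style structured loss, sketched below; this is an illustration of the idea rather than the exact objective from the paper, where T* is the ground-truth parse for a document x and T ranges over all candidate parses allowed by the grammar:

```latex
\mathcal{L} \;=\; \sum_{(x,\,T^{*})}
    \Big[\; \max_{T \in \mathcal{T}(x)} c(T) \;-\; c(T^{*}) \;\Big]
```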
now, what i haven't talked about is the fact that these documents are actually in two dimensions; so far i've just focused on one-dimensional data, so now i'll hand over to Freddy, and he will talk about how we deal with the two-dimensional nature of this data. Freddy: well, thank you, Nigel. okay, so this is the portion of the receipt that we were showing earlier on, and we are going to focus on the lines within this blue boundary region over here. moving along, what we do is apply OCR to the receipt, and we get the bounding boxes that represent each token. on the left side is the line items as shown in the receipt; on the right side is the annotation that we are trying to parse, and we're trying to get the parse to match these annotations. what you will see is that there are many boxes over here that we consider irrelevant, because they wouldn't be shown in the annotations and they wouldn't be part of the information that we want to extract. so let's call these extra boxes, and to simplify the parsing i'm going to remove them; after going through the 2D parsing, we're going to come back and see how we handle these extra boxes. before talking about 2D parsing, let me just motivate why we need to parse in 2D. what you see over here is the 2D layout of the first line item, and if we were to take this line item and reduce it to a 1D sequence, what happens is that the description, which was originally contiguous in the 2D layout, is no longer contiguous in the 1D representation. you can see that the yellow region is contiguous in the 2D layout but no longer contiguous in the sequence, and that's because there's this blue region over here, the 60 cents, which has truncated the description. so while we could add additional rules to handle this situation, that typically wouldn't generalize to all cases; a better solution is actually to parse in 2D, and parsing in 2D is more consistent with how we humans interpret documents in general. so we begin with the tokens, and as Nigel mentioned, we put them through the deep network, and the deep network gives us what it thinks each token represents; in this case we get the classes for the tokens, and now we can begin to merge them. beginning with the token in the top-left corner, the word "postage": we know that "postage" is a description, and the most logical choice is to combine it with the token that is nearest to it, which is "stamps", so we can parse in the horizontal direction, and we can do that because there's a rule in the grammar that says two description boxes can merge to form one description. now, the next thing to do is that we can either parse to the right, as you can see over here, or we can parse in the vertical direction; so which one do we choose? if we do the parse horizontally, like what we showed over here, this works because there is a rule in the grammar that says a line can be simply the total amount, but what happens is that all the other boxes are left dangling and wouldn't belong to any line item, and in fact this wouldn't be an ideal parse, because it wouldn't match the annotations that we have. so an alternative is to parse in the vertical direction, as shown over here. what's important to note is that we do not hard-code the direction of the parse; instead, the deep network tells us which is the more probable parse. in this case, we can combine them because we know that "postage stamps" is a description, and the deep network has told us that this string of numbers over here is also a description, so we can join them to be a description again. the next thing to do is to look at the tokens "1" over here and "60 cents": we know that 1 is a count and 60 cents is a price, and we can join them because we have a rule in the grammar that says you can join a count and a price to get a simple U. then we can join a description and a simple U to get a simple Q. finally, let's not forget that we still have a token over there: we know that token is a total amount, and finally we can join horizontally, and as a result we get the whole line item over here. so, moving along: early on we simplified the parsing problem by removing all these extra boxes, but what if we put them back? if we put them back, it complicates the parsing; they're not in the annotations, and i didn't show it earlier, so what do we do?
okay, so early on we already know that "postage" and "stamps" are descriptions, and we can join them to become a description again. then there are these extra words over here, the transaction number words: what do we do about them? we introduce a new rule in the grammar, and the new rule says we allow a token to be noise; so noise becomes a class that the deep network can possibly return to us. if we know that "transaction" and "number" can be classified as noise, then the next thing to do is join the two noise tokens together to get one noise token, because we have introduced a rule in the grammar that allows us to do that. next, we're going to add another rule that says a description can be surrounded by some noise; in this case i've added a rule over here, and the exclamation mark here represents a permutation on this rule. what this means is that we're not going to put a constraint on how these right-hand-side symbols can appear: noise can come before the description, or the description can come before the noise. in the example shown over here, the noise comes before the description, and we can combine a noise and a description together to get another description, the simple D over here. moving along, i can combine two descriptions and i get a description. so you can see that this is how we handle irrelevant tokens in the documents. continuing with this logic, we eventually end up with the right trees for the line items in the document, and this is what we get, matching the correct information. okay, so finally i would like to talk about some experimental results. well, our firm is mostly focused on invoices, and most of these invoices tend to be confidential documents, so while we believe there could be other labs and other companies also working on the same problem, it's really very hard to find a public data set to compare our results with. fortunately, there is this interesting piece of work from Clova AI, which is a lab within a company in South Korea called Naver Corp; they also look at the problem of line items, and to their credit they have released a data set of receipts that they have written about, and their paper is on arXiv as a preprint. the main difference between our approach and theirs is that they require every bounding box within the receipt to be annotated, which means for every bounding box you go in and say this bounding box belongs to this class, and every bounding box needs to have its associated coordinates. in our case, all we do is rely on the system-of-record data in the form of JSON, which doesn't have the bounding box coordinates; so effectively we are training with less information, and based on the results we achieve pretty comparable performance. as far as possible, we tried to implement the metrics as close as possible to what they described. so i guess with that, i can hand back to Nigel. Nigel: great, thanks, Freddy; let me see if i have control back... okay, i think i do. okay, so a number of people helped us with this work, and i want to acknowledge that help. please do get in touch: we do lots of interesting work at EY all around the globe, and we are hiring, so please reach out to us at ai@ey.com. we also referenced some other work during the talk, lots of really interesting papers here, definitely worth having a look at. and with that, there were a
couple of questions in the chat that i thought were really great, so maybe let me try and answer them here, because i think they also help to clarify the content. there was one question, which is: can we let the AI model learn the best grammar to parse, rather than defining the grammar constraints? it's a really good question, but actually the grammar comes from the problem; the grammar is intrinsic to the problem itself. the grammar can be automatically produced from the schema of the data we want to extract, so it's natural for the problem, it's not something we have to invent, it comes from the problem itself. there was another question about whether it is possible to share networks across the rules, and again, really good question. i think there are a few ways to think about this. number one is that each of these rules has its own network, and we share the weights across every application of those rules, whether a rule is applied multiple times in a single parse tree or across multiple parse trees from multiple documents. the other is that oftentimes we will leverage things like a language encoder, BERT for example, to provide an embedding for each of the tokens, and so there are lots of shared parameters there. so there are many ways in which these parameters are shared, and it ends up being possible to produce relatively small networks to solve even really complicated problems like this. there was a question as to whether the 2D parsing is done greedily, and again, really good question. the algorithm for parsing CFGs leverages dynamic programming, so it's not a greedy algorithm; it actually produces the highest-scoring parse tree, and it does that in an efficient manner. naively, that algorithm looks like it would be exponential, but with the application of dynamic programming i believe it's n cubed. and then there's a question: do the rules make any attempt to evaluate the tokenized data, for example the total actually equaling price times count, when evaluating the likelihood of a tree? again, really good question; we have not done that yet, but it's something we do have in mind to do, because that's a really useful constraint: it's something we know about the problem, that a line item total tends to equal the unit count times the unit price, and so that constraint should be really valuable in helping with a problem like this. and then a final question: are the grammar rules generalizable to different document types? again, these grammar rules are fundamental, or natural, to the problem; they correspond to the schema of the information we want to extract, so that notion of generalizability of the grammar between document types is less important. so thank you, i'm happy to answer other questions; hand it back to you, alex
MIT_6S191_Introduction_to_Deep_Learning
MIT_6S191_2023_Convolutional_Neural_Networks.txt
Hi everyone and welcome back  to Intro to Deep Learning! We had a really awesome kickoff day  yesterday so we're looking to keep that same momentum all throughout  the week and starting with today. Today we're really excited to be talking  about actually one of my favorite topics in this course which is how we can build computers  that can achieve the sense of sight and vision. Now I believe that sight and  specifically like I said vision is one of the most important  human senses that we all have. In fact sighted people rely on vision  quite a lot in our day-to-day lives from everything from walking around navigating the world interacting and sensing other  emotions in our colleagues and peers. And today we're going to learn about how we  can use deep learning and machine learning to build powerful vision systems that can both see and predict what is where by  only looking at raw visual inputs. I like to think of that phrase as a  very concise and sweet definition of what it really means to achieve vision but at its core vision is actually so much more  than just understanding what is where. It also goes much deeper takes the scene for  example we can build computer vision systems that can identify of course all of the objects  in this environment starting first with the yellow taxi or the van parked on the side of  the road but we also need to understand each of these objects at a much deeper level not just  where they are but actually predicting the future predicting what may happen in the scene next for  example that the yellow taxi is more likely to be moving and dynamic into the future because  it's in the middle of the lane compared to the white van which is parked on the side of the road  even though you're just looking at a single image your brain can infer all of these very subtle cues  and it goes all the way to the pedestrians on the road and even these even more subtle cues in the  traffic lights and the rest of the scene as well now accounting for all of these details in the  scene is an extraordinary challenge but we as humans do this so seamlessly within a split second  I probably put that frame up on the slide and all of you within a split stepping could reason  about many of those subtle details without me even pointing them out but the question of today's  class is how we can build machine learning and deep learning algorithms that can achieve that  same type and subtle understanding of our world and deep learning in particular is really leading  this revolution of computer vision and achieving sight of computers for example allowing robots  to keep pick up on these key visual cues in their environment critical for really navigating  the world together with us as humans these algorithms that you're going to learn about today  have become so mainstreamed in fact that they're fitting on all of your smartphones and your  pockets processing every single image that you take enhancing those images detecting faces and  so on and so forth and we're seeing some exciting advances ranging all the way from biology and  Medicine which we'll talk about a bit later today to autonomous driving and accessibility as well  and like I said deep learning has taken this field as a whole by storm in the over the past decade  or so because of its ability critically like we were talking about yesterday its ability to learn  directly from raw data and those raw image inputs in what it sees in its environment and learn  explicitly how to perform like we talked about yesterday what is called 
feature extraction of those images in the environment. One example of that is through facial detection and recognition, which all of you are going to get practice with in today's and tomorrow's labs as part of the grand final competition of this class. Another really go-to example of computer vision is in autonomous driving and self-driving vehicles, where we can take an image as input, or maybe potentially a video as input, multiple images, and process all of that data so that we can train a car to learn how to steer the wheel, command a throttle, or actuate a braking command. This entire control system, the steering, the throttle, the braking of a car, can be executed end to end by taking as input the images and the sensing modalities of the vehicle and learning how to predict those actuation commands. Now, this end-to-end approach, having a single neural network do all of this, is actually radically different from the approach of the vast majority of autonomous vehicle companies; if you look at Waymo, for example, that's a radically different approach, but we'll talk about those approaches in today's class. In fact, this is one of the vehicles that we've been building at MIT in my lab in CSAIL, just a few floors above this room, and we'll share some of the details of this incredible work. But of course it doesn't stop with autonomous driving: the same algorithms that you'll learn about in today's class can be extended all the way to impact healthcare and medical decision making, and even to accessibility applications, where we're seeing computer vision algorithms helping the visually impaired. For example, in this project researchers built deep-learning-enabled devices that could detect trails so that visually impaired runners could be provided audible feedback, so that they too could navigate when they go out for runs. And like I said, we often take many of the tasks that we're going to talk about in today's lecture for granted, because we do them so seamlessly in our day-to-day lives, but the question of today's class is going to be, at its core, how we can build a computer to do these same types of incredible things that all of us take for granted day to day. Specifically, we'll start with this question of how a computer really sees, and even more specifically, how a computer processes an image. If we think of sight as coming to computers through images, then how can a computer even start to process those images? Well, to a computer, images are just numbers. Suppose for example we have a picture here of Abraham Lincoln. This picture is made up of what are called pixels; every pixel is just a dot in this image, and since this is a grayscale image, each of these pixels is just a single number. So we can represent our image as a two-dimensional matrix of numbers, and because, like I said, this is a grayscale image, every pixel corresponds to just one number at that matrix location. Now assume, for example, that we didn't have a grayscale image but a color image; that would be an RGB image, so now every pixel is composed not just of one number but of three numbers. You can think of that as a 3D matrix instead of a 2D matrix, where you almost have three two-dimensional matrices stacked on top of each other. So now, with this basis of numerical representations of images, we can start to think about what types of computer vision algorithms we can build that can take these images as input, and what tasks they can perform.
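Here's a minimal sketch of those two representations in code; the image sizes and the random pixel values are just placeholders:

```python
import numpy as np

# A grayscale image: a 2D matrix of pixel intensities, one number per pixel.
grayscale = np.random.randint(0, 256, size=(1080, 1080), dtype=np.uint8)

# A color (RGB) image: three stacked 2D matrices, one per color channel,
# so every pixel is described by three numbers.
rgb = np.random.randint(0, 256, size=(1080, 1080, 3), dtype=np.uint8)

print(grayscale.shape)  # (1080, 1080)
print(rgb.shape)        # (1080, 1080, 3)
```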
vision algorithms we can build that can take these numerical representations as input, and what kinds of tasks they can perform. The first thing I want to talk about is what kinds of tasks we even want to train these systems to complete with images, and broadly speaking there are two categories of tasks. We touched on this a little bit in yesterday's lecture, but to be a bit more concrete in today's lecture: those two tasks are either classification or regression. In regression your prediction takes a continuous value, any real number on the number line, while in classification your prediction takes one of, let's say, k or n different discrete classes. So let's consider first the task of image classification. In this task we want to predict a single label for every image, and that label is going to be one of n possible labels. For example, say we have a bunch of images of U.S. presidents and we want to build a classification pipeline to tell us which president is in the particular image you see on the screen. The goal of our model in this case is to output a probability score, the probability of this image containing each of these different presidents, and the maximum score is ultimately the one we infer to be the correct president in the image. In order to correctly perform this task and correctly classify these images, our computer vision pipeline needs the ability to tell us what is unique about this particular image of Abraham Lincoln, for example, versus a different picture of George Washington, versus a different picture of Obama. Another way to think about this whole problem of image classification at a high level is in terms of features, which you can think of almost as patterns in your data, or characteristics of a particular class. Classification is then simply done by detecting these different patterns in your data and identifying when certain patterns occur over others. For example, if the features of a particular class are present in an image, then you might infer that the image is of that class. If you want to detect cars, you might look for patterns in your data like wheels, license plates, or headlights, and if those things are present in your image, then you can say with fairly high confidence that your image is of a car versus one of the other categories. So if we're building a computer vision pipeline, we have two main steps to consider: first, we need to know what features or patterns we're looking for in our data, and second, we need to detect those patterns; once we detect them, we can infer which class we're in. Now, one way to solve this is to leverage knowledge about our particular field. If we know something about our domain, for example about human faces, we can use that knowledge to define our features: we know faces are made up of eyes, noses, and ears, and we can define what each of those components looks like when defining our features. But there's a big problem with this approach. Remember that images are just these three-dimensional arrays of numbers, and they can have a lot of variation even within the same type of object. These variations can include really anything ranging from
occlusions  to variations in lighting rotations translations into a class variation and the problem here  is that our classification pipeline needs the ability to handle and be invariant to all of these  different types of variations while still being sensitive to all of the inter-class variations the  variations that occur between different classes now even though our pipeline could use features  that we as humans you know Define manually Define based on some of our prior knowledge the problem  really breaks down in that these features become very non-robust when considering all of these  vast amounts of different variations that images take in the real world so in practice like I said  your algorithms need to be able to withstand all of those different types of variations and then  the natural question is that how can we build a computer vision algorithm to do that and still  maintain that level of robustness what we want is a way to extract features that can both detect  those features right those patterns in the data and do so in a hierarchical fashion right so going  all the way from the ground up from the pixel level to something with semantic meaning like  for example the eyes or the noses in a human face now we learned in the last class that we can  use neural networks exactly for this type of problem right neural networks are capable  of learning features directly from data and learn most importantly a hierarchical  set of features building on top of previous features that it's learned to build  more and more complex set of features now we're going to see exactly how neural networks  can do this in the image domain as part of this lecture but specifically neural networks will  allow us to learn these visual features from visual data if we construct them cleverly and the  key Point here is that actually the models and the architectures that we learned about in yesterday's  lecture and so far in this course we'll see how they're actually not suitable or extensible to  today's uh you know problem domain of images and how we can build and construct neural networks  a bit more cleverly to overcome those issues so maybe let's start by revisiting what we talked  about in lecture one which was where we learned about fully connected networks now these were  networks that you know have multiple hidden layers and each neuron in a given hidden layer is  connected to every neuron in its prior layer right so it receives all of the previous layers inputs  as a function of these fully connected layers now let's say that we want to directly without any  modifications use a fully connected Network like we learned about in lecture one with an image  processing pipeline so directly taking an image and feeding it to a fully connected Network could  we do something like that actually in this case we could the way we would have to do it is remember  that because our image is a two-dimensional array the first thing that we would have to do is  collapse that to a one-dimensional sequence of numbers right because it's a fully connected  network is not taking in a two-dimensional array it's taking in a one-dimensional sequence so  the first thing that we have to do is flatten that two-dimensional array to a vector of  pixel values and feed that to our Network in this case every neuron in our first layer  is connected to all neurons in that input layer right so in that original image flattened down we  feed all of those pixels to the first layer and here you should already appreciate the very  important 
notion that every single piece of spatial information that really defined our image, that makes an image an image, is totally lost before we've even started the problem, because by flattening that two-dimensional image into a one-dimensional array we've completely destroyed all notion of spatial structure. In addition, we have an enormous number of parameters, because this system is fully connected. Take for example a very, very small image of just 100 by 100 pixels, which is incredibly small by today's standards: that's 10,000 neurons just in the first layer, which would be connected to, let's say, 10,000 neurons in the second layer, so the number of parameters you'll have in that one layer alone is 10,000 squared. That's highly inefficient, and you can imagine what happens if you want to scale this network to even a reasonably sized image that we have to deal with today. So this is not feasible in practice. Instead, we need to ask ourselves how we can build in and maintain some of that spatial structure that's so unique about images, here in our input and, most importantly, in our model. To do this, let's represent our 2D image in its original form, as a two-dimensional array of numbers. One way that we can use the spatial structure inherent to our input is to connect patches of the input to neurons in the hidden layer. For example, each neuron in the hidden layer that you see here only sees, or responds to, a certain patch of neurons in the previous layer. You can also think of this as a receptive field: what a single neuron in the next layer can attend to in the previous layer. It's not the entire image, but rather a small receptive field from the previous layer. Notice here how a region of the input layer, which you can see on the left-hand side, influences that single neuron on the right-hand side. That's just one neuron in the next layer, but of course you can imagine defining these connections across the whole input: each patch on the input corresponds to a single neuron output in the next layer. We can apply the same principle of connecting patches across the entire image to single neurons in the subsequent layer, and we do this by essentially sliding that patch pixel by pixel across the input image, producing, in effect, another image on our output layer. In this way we preserve all of that very key and rich spatial information inherent to our input. But remember, the ultimate task here is not only to preserve spatial information; we want to learn features, learn those patterns, so that we can detect and classify these images. We can do this by weighting, by learning the weights of the connections between the patches of our input, in order to detect what those certain features are. Let me give a practical example. In practice, this patching and sliding operation that I'm describing is actually a mathematical operation formally known as convolution. We'll first think about this at a high level, supposing that we have what's called a four-by-four pixel patch, which you can see represented as a red box on the left-hand side, and let's suppose, since we have a 4x4 patch, this is going to
consist of 16 different weights in this  layer we're going to apply this same four by four let's call this not a patch anymore let's use the  terminology filter we'll apply the same 4x4 filter in the input and use the result of that operation  to define the state of the neuron in the next layer right and now we're going to shift our  filter by let's say two pixels to the right and that's going to define the next neuron in  the adjacent location in the future layer right and we keep doing this and you can see that on  the right hand side you're sliding over not only the input image but you're also sliding  over the output neurons in the secondary layer and this is how we can start to think  about convolution at a very very high level but you're probably wondering right not just how  the convolution operation works but I think the main thing here to really narrow down on is how  convolution allows us to learn these features these patterns in the data that we were talking  about because ultimately that's our final goal that's our real goal for this class is to extract  those patterns so let's make this very concrete by walking through maybe a concrete example  right so suppose for example we want to build a convolutional algorithm to detect or classify an X  in an image right this is the letter X in an image and we hear for Simplicity let's just say we have  only black and white images right so every pixel in this image will be represented by either  a zero or a one for Simplicity there's no grayscale in this image right and actually here  so we're representing black as negative one and white as positive one so to classify we simply  cannot you know compare the left hand side to the right hand side right because these are both  X's but you can see that because the one on the right hand side is slightly rotated to some degree  it's not going to directly align with the X on the left hand side even though it is an X we want to  detect x's in both of these image so we need to think about how we can detect those features that  define an x a bit more cleverly so let's see how we can use convolutions to do that so in this  case for example instead we want our model to compare images of this x piece by piece or patch  by patch right and the important patches that we look for are exactly these features that  will Define our X so if our model can find these rough feature patches roughly in the same  positions in our input then we can determine or we can infer that these two images are of the  same type or the same letter right it can get a lot better than simply measuring the similarity  between these two images because we're operating at the patch level so think of each patch  almost like a miniature image right a small two-dimensional array of values and we can use  filters to pick up on when these small patches or small images occur so in the case of x's these  filters may represent semantic things for example the diagonal lines or the crossings that capture  all of the important characteristics of the X so we'll probably capture these features in the  arms and the center of our letter right in any image of an X regardless of how that image is you  know translated or rotated or so on and note that even in these smaller matrices right these  are filters of weights right these are also just numerical values of each pixel in these mini  patches is simply just a numerical value they're also images in some effect right and all that's  really left in this problem and in this idea that we're discussing is 
to define that operation that can take these miniature patches and detect when those patches occur in your image and when they don't, and that brings us right back to this notion of convolution. Convolution is exactly the operation that solves this problem: it preserves all of the spatial information in our input by learning image features in those smaller square regions of the input data. To give another concrete example, to perform this operation we need to do an element-wise multiplication between the filter matrix, one of those miniature patches of weights, and a patch of our input image. So think of two patches: you have the weight matrix patch, the thing you want to detect, which you can see on the top left here, and you have the patch of the input image you're comparing it against, and the question is how similar these two patches are. In this example the result is a three-by-three matrix, because we're doing an element-wise multiplication between two small three-by-three matrices, and in this case every element of the resulting matrix is a one, because at every location the filter and the image patch match perfectly. The last step is to sum up the result of that element-wise multiplication, and the result here is 9: everything was a one and it's a three-by-three matrix, so the sum is nine. Now let's consider one more example. We have this image in green and we want to detect this filter in yellow; suppose we want to compute the convolution of this five-by-five image with this three-by-three filter. To do this we need to cover the entirety of the image by sliding the filter over it piece by piece and computing the convolution of the filter at every position. We do that through the same mechanism: at every location we compute an element-wise multiplication of the filter with that patch of the image, add up all of the resulting entries, and pass that value to the next layer. So let's walk through it. We start in the upper left-hand corner, place our filter over that corner of the image, element-wise multiply, add up the results, and we get 4; that 4 is placed into the next layer, which again is another image, determined as the result of our convolution operation. We then slide the filter over to the next location, which provides the next value, and we keep repeating this process over and over until we've covered the entire image and, as a result, completely filled out our output feature map. The feature map is basically what you can think of as how closely aligned our filter is with every location in our input image. To make this sliding multiply-and-sum mechanism concrete, below is a small sketch of it written out in code.
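This snippet is not from the lecture slides; it's just a minimal illustration of the convolution operation described above, assuming a stride of one and no padding, and the particular image and filter values are made up for the example.

```python
import numpy as np

def convolve2d(image, kernel):
    """Slide a small filter over an image: at each location, element-wise
    multiply the patch with the filter and sum the result (stride 1, no padding)."""
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    feature_map = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            patch = image[i:i + kh, j:j + kw]
            feature_map[i, j] = np.sum(patch * kernel)
    return feature_map

# a toy 5x5 binary image and a 3x3 filter, like the walkthrough above
image = np.array([[1, 1, 1, 0, 0],
                  [0, 1, 1, 1, 0],
                  [0, 0, 1, 1, 1],
                  [0, 0, 1, 1, 0],
                  [0, 1, 1, 0, 0]])
kernel = np.array([[1, 0, 1],
                   [0, 1, 0],
                   [1, 0, 1]])

print(convolve2d(image, kernel))  # a 3x3 feature map; the top-left entry is 4
```

Note that a 5x5 image convolved with a 3x3 filter at stride one produces a 3x3 feature map, which matches the walkthrough above.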
Now that we've gone through the mechanism that defines this operation of convolution, let's see how different filters can be used to detect different types of patterns in our data. For example, let's take this picture of a woman's face and the output of applying three different filters to it. You can see the exact filters, all three-by-three, on the bottom right-hand corner of each corresponding face, and by applying these three different filters you can see how we achieve drastically different results. Simply by changing the weights in these three-by-three matrices, you can see the variety of features we can detect: we can design filters that sharpen an image, making the edges sharper; we can design filters that extract edges; and we can do stronger edge detection by again modifying the weights in those filters. So I hope all of you can now appreciate, number one, the power of these filtering operations and how we can define them mathematically in the form of small patch-based matrices that we slide over an image. These concepts are so powerful because they preserve the spatial information of our original input while still performing feature extraction. And instead of defining those filters by hand like on the previous slide, what if we tried to learn them? Remember that those filters are proxies for important patterns in our data, so our neural network could try to learn the elements of those small patch filters as weights in the network, and learning those would essentially equate to learning the patterns that define one class versus another. Now that we have this operation and this understanding under our belt, we can take it one step further: we can take this single convolution operation and start to think about how we can build entire convolutional layers out of it, so that we can start to imagine convolutional networks. What you ultimately create by stacking convolutional layers is what's called a CNN, a convolutional neural network, and that's going to be the core architecture of today's class. So let's consider a very simple CNN designed for image classification. The task here, again, is to learn features directly from the raw data and use these learned features for classification toward some object recognition task that we want to perform. There are three main operations in a CNN, and we'll go through them step by step here before going deeper into each of them in the remainder of this class. The first is convolution, which we've already seen a lot of in today's class: convolutions are used to generate feature maps, taking as input both the previous image and a filter we want to detect, and outputting a feature map of how that filter relates to the original image. The second step, just like yesterday, is applying a nonlinearity to the result of these feature maps, which injects nonlinear activations into our network and allows it to deal with nonlinear data. The third step is pooling, which is essentially a downsampling operation that lets our network deal with larger and larger images by progressively shrinking their spatial size so that our filters can progressively grow in receptive field. And finally, we feed all of the resulting features into a neural network to infer the class scores.
By the time we get to that fully connected layer, remember, we've already extracted our features, and you can essentially think of the data as no longer being a two-dimensional image; we can now use the methods that we learned about in lecture one to directly take those learned features that the network has detected and, based on whether or not they were detected, infer what class we're in. So now let's go through each of these operations one by one in a bit more detail and see how we can build up this very basic CNN architecture. First, let's go back and consider one more time the convolution operation that is central, a core piece, of the CNN. As before, each neuron in this hidden layer is computed as a weighted sum of its inputs, applying a bias and activating with a nonlinearity; that should sound very similar to lecture one in yesterday's class, except that now, for that first step, instead of just doing a dot product with our weights, we apply a convolution with our weights, which is simply that element-wise multiplication and addition together with the sliding operation. What's really special here, and what I really want to stress, is the local connectivity: every single neuron in this hidden layer only sees a certain patch of inputs in its previous layer. If I point at just this one neuron in the output layer, that neuron only sees the inputs in this red square; it doesn't see any of the other inputs in the rest of the image, and that's really important for being able to scale these models to very large images. You can also imagine that as you go deeper and deeper into your network, each subsequent layer effectively attends to a larger patch, which includes information not only from this red square but from a much larger region of the original image. Now let's define the actual computation for a neuron in a hidden layer. Its inputs are those neurons that fall within its patch in the previous layer; we apply this matrix of weights, here denoted as a 4x4 filter that you can see on the left-hand side, do an element-wise multiplication, add the outputs, apply a bias, and apply the nonlinearity. Those are the core steps in really all of the neural networks you're learning about this week. And remember that this element-wise multiplication and addition, together with the sliding operation, is called convolution, and that's the basis of these layers. So that defines how neurons in convolutional layers are connected and how they're mathematically formulated, but within a single convolutional layer it's also really important to understand that a single layer can actually try to detect multiple sets of filters. Maybe you want to detect multiple features in one image, not just one: if you're detecting faces, you don't only want to detect eyes, you want to detect noses, mouths, and ears as well, since all of those are critical patterns that define a face and can help you classify it. So what we need to think of is a convolution operation that can output a volume of different images, where every slice of this volume effectively corresponds to a different filter, a different pattern or feature, identified in our original input. As a small illustration of that idea, the sketch below shows a single convolutional layer producing such a volume of feature maps.
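This is a rough sketch rather than code from the lecture; the filter count, kernel size, stride, and input size are arbitrary example choices used only to show how one layer produces a stack of feature maps.

```python
import tensorflow as tf

# a hypothetical batch of one 100x100 grayscale image: (batch, height, width, channels)
images = tf.random.normal((1, 100, 100, 1))

# one convolutional layer that learns 12 different 4x4 filters;
# each filter produces its own 2D feature map, so the output is a volume
conv_layer = tf.keras.layers.Conv2D(filters=12, kernel_size=4,
                                    strides=2, activation='relu')

output_volume = conv_layer(images)
print(output_volume.shape)  # (1, 49, 49, 12): twelve stacked feature maps
```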
Now, think of the connections in these neurons once again in terms of their receptive field: the locations within the input that a given node is connected to in the previous layer. These parameters really define what I like to think of as the spatial arrangement of information as it propagates throughout the network, and throughout the convolutional layers in particular. So, to summarize what we've seen, how connections in these types of networks are defined and how the output of a convolutional layer is a volume, we're well on our way to really understanding convolutional neural networks. What we just covered is really the main component of CNNs: the convolution operation that defines these convolutional layers. The remaining steps are very critical as well, but I want to pause for a second and make sure that everyone's on the same page with the convolution operation and the definition of convolutional layers. Awesome, okay. So the next step here is to take those resulting feature maps that our convolutional layers extract and apply a nonlinearity to the output volume of the convolutional layer. As we discussed in the first lecture, applying these nonlinearities is really critical because it allows us to deal with nonlinear data, and because image data in particular is extremely nonlinear, that's a critical component of what makes convolutional neural networks actually work in practice. For convolutional neural networks in particular, the activation function that is really common is the ReLU activation function, which we talked a little bit about in lectures one and two yesterday. You can see it on the right-hand side: think of this function as a pixel-by-pixel operation that replaces all negative values with zero and keeps all positive values the same. It's the identity function when a value is positive, and when a value is negative it squashes it back up to zero, so think of it almost as a thresholding function: anything less than zero comes back up to zero. Negative values here indicate basically a negative detection in the convolution, which you may want to treat as simply no detection, and you can think of that as an intuitive reason why the ReLU activation function is so popular in convolutional neural networks. The other reason ReLU activations are popular is that they are extremely easy and computationally efficient to compute, and their gradients are very cleanly defined; they're constant except for the single piecewise nonlinearity, which makes them very practical for these domains. Now, the next key operation in a CNN is that of pooling. Pooling is an operation that, at its core, serves one purpose: to reduce the dimensionality of the image progressively as you go deeper and deeper through your convolutional layers. One way to reason about this is that when you decrease the spatial dimensionality of your features, you're effectively increasing the receptive field of your filters, because every filter that you slide over a smaller image is capturing a larger region of the original input. A very common technique for pooling is what's called max pooling, and below is a small sketch of what that downsampling operation looks like on a toy feature map.
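This is a small, illustrative example, not from the lecture slides: 2x2 max pooling with stride 2 applied to a made-up 4x4 feature map, using TensorFlow's built-in pooling layer.

```python
import tensorflow as tf

# a toy 4x4 feature map (batch and channel dimensions added for the layer API)
feature_map = tf.constant([[1., 3., 2., 1.],
                           [4., 6., 1., 0.],
                           [2., 1., 8., 5.],
                           [0., 3., 4., 7.]])
x = tf.reshape(feature_map, (1, 4, 4, 1))

# max pooling with a 2x2 window and stride 2 keeps only the largest
# activation in each patch, halving the spatial dimensions
pooled = tf.keras.layers.MaxPool2D(pool_size=2, strides=2)(x)
print(tf.reshape(pooled, (2, 2)))
# [[6. 2.]
#  [3. 8.]]
```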
Max pooling is exactly what it sounds like: it again operates with small patches that slide over an image, but instead of the convolution operation, each patch simply takes the maximum value at that location. Think of it as activating and propagating forward only the maximum value that comes from each patch. I encourage all of you to brainstorm other ways we could perform pooling; there are many common alternatives, for example mean or average pooling, where instead of taking the maximum you collapse the average of all of the pixels in the patch into the single resulting value. These are the key operations of convolutional neural networks at their core, and now we're ready to really put them together and construct a CNN all the way from the ground up. With CNNs we can layer these operations one after the other, starting first with convolutions, then nonlinearities, and then pooling, and repeating these over and over again to learn hierarchies of features. That's exactly how we obtain pictures like the ones we started yesterday's lecture with: learning hierarchical decompositions of features by progressively stacking filters on top of each other, where each filter can build on all of the previous filters that have been learned. So a CNN built for image classification can really be broken down into two parts: first is the feature learning pipeline, in which we learn the features we want to detect, and the second part is actually detecting those features and doing the classification. The goal of the convolutional and pooling layers in the first part of the model is to output the high-level features extracted from our input, and the next step is to actually use those features and detect their presence in order to classify the image. We can feed those output features into the fully connected layers that we learned about in lecture one, because by this point they're just a one-dimensional array of features, and we can use those to determine what class we're in. We do this by using a function called softmax; you can think of a softmax function as a normalizing function whose output represents a categorical probability distribution. Another way to think of this: if you have an array of numbers that could take any real values, softmax collapses that array into a probability distribution, which has two properties, namely that all of its values have to sum to one, and every value has to be between zero and one. Maintaining those two properties is exactly what the softmax operation does: as you can see in its equation, it effectively makes everything positive by exponentiating, and then it normalizes the results across one another, which maintains the two properties I just mentioned. Great, so let's put all of this together and see how we could program our first convolutional neural network end to end, entirely from scratch. We start by defining our feature extraction head, which begins with a convolutional layer with 32 filters; a minimal sketch of the full model in TensorFlow is shown below.
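The sketch below assembles the model just described in TensorFlow; the 28x28x1 input shape, the 3x3 kernels, and the 128-unit dense layer are assumptions made for illustration rather than the exact values from the lecture slides.

```python
import tensorflow as tf

def build_cnn_classifier(num_classes=10):
    model = tf.keras.Sequential([
        # feature learning: convolution + relu, then downsample with pooling
        tf.keras.layers.Conv2D(32, kernel_size=3, activation='relu',
                               input_shape=(28, 28, 1)),
        tf.keras.layers.MaxPool2D(pool_size=2),
        # a second convolutional block that learns 64 features
        tf.keras.layers.Conv2D(64, kernel_size=3, activation='relu'),
        tf.keras.layers.MaxPool2D(pool_size=2),
        # classification head: flatten the feature volume and use dense layers
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation='relu'),
        # softmax outputs a probability distribution over the classes
        tf.keras.layers.Dense(num_classes, activation='softmax'),
    ])
    return model

model = build_cnn_classifier()
model.summary()
```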
The result of this first layer is to learn not one filter, not one pattern in our image, but 32 different patterns. Those 32 results are then passed to a pooling layer and on to the next set of convolutional operations, which now contains 64 features, so we keep progressively growing and expanding the set of patterns that we're identifying in this image. Next, we flatten the resulting features that we've identified and feed them through the dense, fully connected layers that we learned about in lecture one. These allow us to predict, let's say, 10 final classes, if we have 10 possible classes in our image, and the softmax at the output produces the probability distribution across those 10 classes. So far we've talked about how we can use CNNs to perform image classification tasks, but one thing I really want to stress, especially toward the end of today's class, is that this same architecture and the same building blocks we've talked about so far are extensible: they extend to so many different applications and model types that we can imagine. When we considered the CNN for classification, we saw that it really had two parts, the first part being feature extraction, learning what features to look for, and the second part being the classification, the detection of those features. What makes a convolutional neural network really powerful is exactly the observation that the feature learning part, this first part of the network, is extremely flexible. You can take that first part of the network, chop off what comes after it, and attach many different heads to what comes after: the goal of the first part is to extract the features, and what you do with those features is entirely up to you, but you can still leverage the flexibility and the power of the first part to learn all of those core features. So, for example, that second portion could perform image classification across different domains, or we could introduce new architectures that take those features and perform tasks like segmentation or image captioning, like we saw in yesterday's lecture. In the case of classification, just to tie up the classification story, there's significant impact in domains like healthcare and medical decision making, where deep learning models are being applied to the analysis of medical scans across a whole host of different medical imagery. Now, classification gives us a discrete prediction of what our image contains, but we can go much deeper into this problem. For example, imagine that we're not only trying to identify that this image is an image of a taxi, but we also want our neural network to identify and draw a specific bounding box over the location of the taxi. This is really a two-phase problem: number one, we need to draw a box, and number two, we need to classify what's in that box. So it's both a regression problem (where is the box, a continuous quantity) and a classification problem (what is in that box). That's a much, much harder problem than what we've covered so far in the lecture today, because potentially there are many objects in our
scene  not just one object right so we have to account for this fact that maybe our scene could contain  arbitrarily many objects now our Network needs to be flexible to that degree right it needs to be  able to infer a dynamic number of objects in the scene and if the scene is only of a taxi then it  should only output you know that one bounding box but on the other hand if the image has many  objects right potentially even of different classes we need a model that can draw a bounding  box for each of these different examples as well as associate their predicted classification  labels to each one independently now this is actually quite complicated in practice because  those boxes can be anywhere in the image right there's no constraints on where the boxes can be  and they can also be of different sizes they can be also different ratios right some can be tall  some can be wide let's consider a very naive way of doing this first let's take our image and  start by placing a random box somewhere on that image for example we just pick a random location a  random size we'll place a box right there this box like I said has a random location random size then  we can take that box and only feed that random box through our convolutional neural network which is  trained to do classification just classification and this neural network can detect well number one  is there a class of object in that box or not and if so what class is it and then what we can do is  we could just keep repeating this process over and over again for all of these random boxes in our  image you know many many instances of random boxes we keep sampling a new box feed it through our  convolutional neural network and ask this question what was in the box if there was something in  there then what what is it right and we keep moving on until we kind of have exhausted all of  the the boxes in the image but the problem here is that there are just way too many potential  inputs that we would have to deal with this would be totally impractical to run in a real-time  system for example with today's compute it results in way too many scales especially for the types  of resolutions of images that we deal with today so instead of picking random boxes let's  try and use a very simple heuristic right to identify maybe some places with lots  of variability in the image where there is high likelihood of having an object might  be present right these might have meaningful insights or meaningful objects that could be  available in our image and we can use those to basically just feed in those High attention  locations to our convolutional neural network and then we can basically speed up that first  part of the pipeline a lot because now we're not just picking random boxes maybe we use  like some simple heuristic to identify where interesting parts of the image might be but still  this is actually very slow in practice we have to feed in each region independently to the model  and plus it's very brittle because ultimately the part of the model that is looking at where  potential objects might be is detached from the part that's doing the detection of those objects  ideally we want one model that is able to both you know figure out where to attend to  and do that classification afterwards so there have been many variants that have been  proposed in this field of object detection but I want to just for the purpose of today's class  introduce you to one of the most popular ones now this is a point or this is a model called rcnn  or faster rcnn 
which actually attempts to learn not only how to classify these boxes, but also how to propose where those boxes might be in the first place, so that the model learns where to look and what to feed into the downstream network. This means we can feed the image into what are called region proposal networks, whose goal is to propose certain regions of the image that you should attend to, and then feed just those regions into the downstream CNNs. The goal here is to directly learn to extract all of the key regions and process them through the later part of the model: each of these regions is processed with its own feature extractors, and a classifier is then used to aggregate them and perform the object detection. The beautiful thing about this is that it requires only a single pass through the network, so it's extraordinarily fast, it can easily run in real time, it's very commonly used in many industry applications, and it can even run on your smartphone. So, in classification we saw how to predict a single object per image, and in object detection how to infer potentially multiple objects with bounding boxes in your image. There's one more type of task I want to point out, called segmentation. Segmentation is the task of classification, but now done at every single pixel: it takes the idea of object detection with bounding boxes to the extreme. Instead of drawing boxes, we're not even going to consider boxes; we're going to learn how to classify every single pixel in the image, which is a huge number of classifications. On the left-hand side you feed in the original RGB image, and the goal on the right-hand side is to learn, for every pixel on the left, the class of that pixel. So, in contrast to determining boxes over our image, we're now looking at every pixel in isolation, and you can see, for example, that the pixels of the cow are clearly differentiated from the pixels of the sky or the pixels of the grass; that's a key, critical component of semantic segmentation networks. The output here is created by again using convolutional operations followed by pooling operations, which form an encoder, on the left-hand side, that learns the features from the RGB image and puts them into a space from which the network can reconstruct a new space of semantic labels. You can imagine a downscaling followed by a progressive upscaling into that semantic space, but when you do the upscaling, you of course can't keep pooling the information down; you need to invert those operations. So instead of convolutions with pooling, you now do convolutions with what is essentially reverse pooling, with upsampling or transposed convolutions that grow your feature maps at every layer. On the bottom here there's an example of code that defines these upsampling layers; you can combine them with convolutional layers and build fully convolutional networks that accomplish this type of task. Of course, this can be applied in many other applications as well, especially healthcare, for example segmenting out cancerous regions or identifying the parts of a blood sample that are infected with malaria. To make the encoder-decoder idea a bit more tangible, a rough sketch of such a fully convolutional segmentation model is shown below.
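This is only an illustrative sketch, not the lecture's actual code: the 128x128 input, the layer sizes, and the three example classes are assumptions chosen to keep the shapes easy to follow.

```python
import tensorflow as tf

num_classes = 3  # e.g. cow, grass, sky; purely illustrative

model = tf.keras.Sequential([
    # encoder: learn features while shrinking the spatial resolution
    tf.keras.layers.Conv2D(32, 3, padding='same', activation='relu',
                           input_shape=(128, 128, 3)),
    tf.keras.layers.MaxPool2D(2),
    tf.keras.layers.Conv2D(64, 3, padding='same', activation='relu'),
    tf.keras.layers.MaxPool2D(2),
    # decoder: "reverse" the pooling by upsampling with transposed convolutions
    tf.keras.layers.Conv2DTranspose(64, 3, strides=2, padding='same',
                                    activation='relu'),
    tf.keras.layers.Conv2DTranspose(32, 3, strides=2, padding='same',
                                    activation='relu'),
    # one score per class at every pixel, normalized with softmax
    tf.keras.layers.Conv2D(num_classes, 1, activation='softmax'),
])

print(model.output_shape)  # (None, 128, 128, 3): a class distribution per pixel
```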
One final example here is that of self-driving cars. Let's say we want to build a neural network for autonomous navigation, specifically a model that takes as input an image as well as some very coarse maps of where it thinks it is; think of this as essentially a screenshot of Google Maps fed to the neural network, the GPS location on the map. We want it to directly infer not a classification or a semantic segmentation of the scene, but the actuation: how to drive and steer this car into the future. This is a full probability distribution over the entire space of control commands, a very large continuous probability space, and the question is how we can build a neural network to learn this function. The key point I'm stressing with all of these different architectures is that they all use the exact same encoder. We haven't changed anything when going from classification to detection to semantic segmentation and now to here; all of them use the same underlying building blocks of convolutions, nonlinearities, and pooling. The only difference is what we do after we perform the feature extraction, how we take those features and learn our ultimate task. In the case of probabilistic control commands, for example, we take those learned features and learn how to predict the parameters of a full continuous probability distribution, like you can see on the right-hand side, along with the deterministic control toward our desired destination. And again, like we talked about at the very beginning of this class, this model, which goes directly from images all the way to the steering wheel angles of the car, is a single model learned entirely end to end. We never told the car, for example, what a lane marker is, or the rules of the road; it was able to observe a lot of human driving data, extract the patterns and features that make a good human driver different from a bad human driver, and learn how to imitate those same types of actions, so that without any human intervention or human-imposed rules, it can simply watch all of this data and learn how to drive entirely from scratch. A human, for example, can enter the car, input a desired destination, and this end-to-end CNN will actuate the control commands to bring them to their destination. I'll conclude today's lecture by saying that we've touched on a few applications of CNNs today, but the applications are enormous, far beyond these examples that I've provided. They all tie back to this core concept of feature extraction and detection: after you do that feature extraction, you can really crop off the rest of your network and apply it to many different heads for many different tasks and applications that you might care about. We've touched on a few today, but there are really so many across different domains. And with that I'll conclude; very shortly we'll be talking about generative modeling, which is a really central part of today's and this week's lecture series, and after that, later on, we'll have the software lab, which I'm excited for all of you to start participating in. And with that we can take a short five
minute break and continue  the lectures from there thank you [Applause]
MIT_6S191_Introduction_to_Deep_Learning
MIT_Introduction_to_Deep_Learning_2022_6S191.txt
Okay, good afternoon everyone, and thank you all for joining today. I'm super excited to welcome you all to MIT 6.S191, Introduction to Deep Learning. My name is Alexander Amini and I'm going to be your instructor this year, along with Ava Soleimani. Now, 6.S191 is a really fun and fast-paced class, and for those of you who are not really familiar, I'll start by giving you a bit of background on what deep learning is and what this class is all about, because we're going to cover a ton of material in today's class, and this class is only one week in total. In just that one week you're going to learn about the foundations of this really remarkable field of deep learning and get hands-on experience and practical knowledge through the software labs using TensorFlow. I like to tell people that 6.S191 is like a one-week boot camp in deep learning, because of the amount of information that you're going to learn over the course of this one week. I'll start by asking a very simple question: what is deep learning? Instead of giving you some boring technical answer about what deep learning is, the power of deep learning, and why this class is so amazing, I'll start by showing you a video of someone else doing that instead. So let's take a look at this first: "Hi everybody, and welcome to MIT 6.S191, the official introductory course on deep learning taught here at MIT. Deep learning is revolutionizing so many fields, from robotics to medicine and everything in between. You'll learn the fundamentals of this field and how you can build some of these incredible algorithms. In fact, this entire speech and video are not real and were created using deep learning and artificial intelligence, and in this class you'll learn how. It has been an honor to speak with you today, and I hope you enjoy the course." So, in case you couldn't tell, that video was actually not real at all; that was not real video or real audio, and in fact the audio you heard was even purposely degraded further by us, to make it look and sound less real and to avoid any potential misuse. This is really a testament to the power of deep learning to create such high-quality, highly realistic videos and models for generating them. Even with that purposely degraded audio, we always get a ton of really exciting feedback from our students on that intro, and on how excited they are to learn about the techniques and algorithms that drive forward that type of progress. And the progress in deep learning really is remarkable, especially in the past few years. The ability of deep learning to generate very realistic data extends far beyond generating realistic videos of people like you saw in this example: now we can use deep learning to generate full simulated environments of the real world. Here are a bunch of examples of fully simulated virtual worlds generated using real data and powered by deep learning and computer vision. This simulator is fully data-driven, as we call it, and within these virtual worlds you can actually place virtual simulated cars for training autonomous vehicles, for example. This simulator was actually designed here at MIT, and when we created it we showed the first occurrence of using a technique called end-to-end training with reinforcement learning, training an autonomous vehicle entirely in simulation using reinforcement learning and having
that vehicle controller deployed directly onto the real world on real roads on a full-scale autonomous car now we're actually releasing this simulator open source this week so all of you as students in 191 will have first access to not only use this type of simulator as part of your software labs and generate these types of environments but also to train your own autonomous controllers to drive in these types of environments that can be directly transferred to the real world and in fact in software lab three you'll get the ability to do exactly this and this is super exciting addition to success one nine this year because all of you as students will be able to actually enter this competition where you can propose or submit your best deep learning models to drive in these simulated environments and the winners will actually be invited and given the opportunity to deploy their models on board a full-scale self-driving car in the real world so we're really excited about this and i'll talk more about that in the software lab section so now hopefully all of you are super excited about what this class will teach you so hopefully let's start now by taking a step back and answering or defining some of these terminologies that you've probably been hearing a lot about so i'll start with the word intelligence intelligence is the ability to process information take as input a bunch of information and make some informed future decision or prediction so the field of artificial intelligence is simply the ability for computers to do that to take as input a bunch of information and use that information to inform some future situations or decision making now machine learning is a subset of ai or artificial intelligence specifically focused on teaching a computer or teaching an algorithm how to learn from experiences how to learn from data without being explicitly programmed how to process that input information now deep learning is simply a subset of machine learning as a whole specifically focused on the use of neural networks which you're going to learn about in this class to automatically extract useful features and patterns in the raw data and use those patterns or features to inform the learning tasks so to inform those decisions you're going to try to first learn the features and learn the inputs that determine how to complete that task and that's really what this class is all about it's how we can teach algorithms teach computers how to learn a task directly from raw data so just be giving a data set of a bunch of examples how can we teach a computer to also complete that task like the like we see in the data set now this course is split between technical lectures and software labs and we'll have several new updates in this year in this year's edition of the class especially in some of the later lectures in this first lecture we'll cover the foundations of deep learning and neural networks starting with the building blocks of of neural networks which is just a single neuron and finally we'll conclude with some really exciting guest lectures were and student projects from all of you and as part of the final prize competition that you'll be eligible to win a bunch of exciting prizes and awards so for those of you who are taking this class for credit you'll have two options to fulfill your credit requirement the first option is a project proposal where you'll get to work either individually or in groups of up to four people and develop some cool new deep learning idea doing so will make you eligible for 
some of these uh awesome sponsored prizes now we realize that one week is a super short and condensed amount of time to make any tangible code progress on a deep learning progress so what we're actually going to be judging you here on is not your results but other rather the novelty of your ideas and the ability that we believe that you could actually execute these ideas in practice given the the state of the art today now on the last day of class we'll give you all a three-minute presentation where your group can present your idea and uh win an award potentially and there's actually an art i think to presenting an idea in such a short amount of time that we're also going to be kind of judging you on to see how quickly and effectively you can convey those ideas now the second option to fill your grade requirement is just to write a one-page essay on a review of any deep learning paper and this will be due on the last thursday of the class now in addition to the final project prizes we'll also be awarding prizes for the top lab submissions for each of the three labs and like i mentioned before this year we're also holding a special prize for lab 3 where students will be able to deploy their results onto a full-scale self-driving car in the real world for support in this class please post all of your questions to piazza check out the course website for announcements the course canvas also for announcements and digital recordings of the lectures and labs will be available on canvas shortly after each of the each of the classes so this course has an incredible team that you can reach out to if you ever have any questions either through canvas or through the email list at the bottom of the slide feel free to reach out and we really want to give a huge shout out and thanks to all of our sponsors who without this who without their support this class would not be possible this is our fifth year teaching the class and we're super excited to be back again and teaching such a remarkable field and exciting content so now let's start with some of the exciting stuff now that we've covered all of the logistics of the class right so let's start by asking ourselves a question why do we care about this and why did all of you sign up to take this class why do you care about deep learning well traditional machine learning algorithms typically operate by defining a set of rules or features in the environment in the data right so usually these are hand engineered right so a human will look at the data and try to extract some hand engineered features from the data now in deep learning we're actually trying to do something a little bit different the key idea of deep learning is that these features are going to be learned directly from the data itself in a hierarchical manner so this means that given a data set let's say a task to detect faces for example can we train a deep learning model to take as input a face and start to detect the face by first detecting edges for example very low level features building up those edges to build eyes and noses and mouths and then building up some of those smaller components of faces into larger facial structure features so as you go deeper and deeper into a neural network architecture you'll actually see its ability to capture these types of hierarchical features and that's the goal of deep learning compared to machine learning is actually the ability to learn and extract these features to perform machine learning on them now actually the fundamental building blocks of deep 
learning and their underlying algorithms have actually existed for decades so why are we studying this now well for one data has become much more prevalent so data is really the driving power of a lot of these algorithms and today we're living in the world of big data where we have more data than ever before now second these models and these algorithms neural networks are extremely and massively parallelizable they can benefit tremendously from and they have benefited tremendously from modern advances in gpu architectures that we have experienced over the past decade right and these these advances these types of gpu architecture simply did not exist when we think about when these algorithms were detected in and created excuse me in for example the neuron the idea for the foundational neuron was created in almost 1960. so when you think back to 1960 we simply did not have the compute that we have today and finally due to amazing open source toolboxes like tensorflow we're able to actually build and deploy these algorithms and these models have become extremely streamlined so let's start with the fundamental building block of a neural network and that is just a single neuron now the idea of a single neuron or let's call this a perceptron is actually extremely intuitive let's start by defining how a single neuron takes as input information and it outputs a prediction okay so just looking at its forward pass it's forward prediction call from inputs on the left to outputs on the right so we define a set of inputs let's call them x1 to xm now each of these numbers on the left in the blue circles are multiplied by their corresponding weight and then added all together we take this single number that comes out of this edition and pass it through a nonlinear activation function we call this the activation function and we'll see why in a few slides and the output of that function is going to give us our our prediction y well this is actually not entirely correct i forgot one piece of detail here we also have a bias term which here i'm calling w0 sometimes you also see it as the letter b and the bias term allows us to shift the input to our activation function to the left or to the right now on the right side here you can actually see this diagram on the left illustrated and written out in mathematical equation form as a single equation and we can actually rewrite this equation using linear algebra in terms of vectors and dot products so let's do that here now we're going to collapse x1 to xm into a single vector called capital x and capital w will denote the vector of the corresponding weights w1 to wm the output here is obtained by taking their dot product adding a bias and applying this non-linearity and that's our output y so now you might be wondering the only missing piece here is what is this activation function right well i said it's a nonlinear function but what does that actually mean here's an example of one common function that people use as an activation function on the bottom right this is called the sigmoid function and it's defined mathematically above its plot here in fact there are many different types of nonlinear activation functions used in neural networks here are some common ones and throughout this entire presentation you'll also see what these tensorflow code blocks on the bottom part of the screen just to briefly illustrate how you can take the concepts the technical concepts that you're learning as part of this lecture and extend it into practical software right so these 
tensorflow code blocks are going to be extremely helpful for some of your software labs to kind of show the connection and bridge the connection between the foundation set up for the lectures and the practical side with the labs now the sigmoid activation function which you can see on the left hand side is popular like i said largely because it's the it's one of the few functions in deep learning that outputs values between zero and one right so this makes it extremely suitable for modeling things like probabilities because probabilities are also existing in the range between zero and one so if we want the output of probability we can simply pass it through a sigmoid function and that will give us something that resembles the probability that we can use to train with now in modern deep learning neural networks it's also very common to use what's called the relu function and you can see an example of this on the right and this is extremely popular it's a piecewise function with a single non-linearity at x equals 0. now i hope all of you are kind of asking this question to yourselves why do you even need activation functions what's the point what's the importance of an activation function why can't we just directly pass our linear combination of their inputs with our weights through to the output well the point of an activation function is to introduce a non-linearity into our system now imagine i told you to separate the green points from the red points and that's the thing that you want to train and you only have access to one line it's an it's not non-linear so you only have access to a line how can you do this well it's an extremely hard problem then right and in fact if you can only use a linear activation function in your network no matter how many neurons you have or how deep is the network you will only be able to produce a result that is one line because when you add a line to a line you still get a line output non-linearities allow us to approximate arbitrarily complex functions and that's what makes neural networks extremely powerful let's understand this with a simple example so imagine i give you a trained network now here i'm giving you the weights and the weights w are on the top right so w0 is going to be set to 1 that's our bias and the w vector the weights of our input dimension is going to be a vector with the values 3 and negative 2. 
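as a quick aside, here is a minimal sketch of those three perceptron operations (dot product, add a bias, apply a nonlinearity) written in TensorFlow with the example weights just given; the helper function and the specific input point are illustrative choices, not the official lab code:

```python
import tensorflow as tf

# example weights from the lecture: bias w0 = 1, weight vector W = [3, -2]
w0 = tf.constant(1.0)
W = tf.constant([3.0, -2.0])

def perceptron(x, W, w0, activation=tf.math.sigmoid):
    """Single perceptron: dot product, add bias, apply a nonlinearity."""
    z = tf.reduce_sum(W * x) + w0     # weighted sum of the inputs plus the bias
    return activation(z)              # sigmoid squashes the output into (0, 1)

# a sample two-dimensional input (the lecture works through x = (-1, 2) by hand next)
x = tf.constant([-1.0, 2.0])
print(float(perceptron(x, W, w0)))    # ~0.0025, since z = 1 + 3*(-1) + (-2)*2 = -6
```

swapping in tf.nn.relu for the activation argument gives the ReLU variant mentioned above.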
this network only has two inputs right x1 and x2 and if we want to get the output of it we simply do the same steps as before and i want to keep drilling in this message to get the output all we have to do is take our inputs multiply them by our corresponding weights w add the bias and apply a non-linearity it's that simple but let's take a look at what's actually inside that non-linearity when i do that multiplication and addition what comes out it's simply a weighted combination of the inputs in the form of a 2d line right so we take our inputs x transpose multiply it as a dot product with our weights add a bias and if we look at what's inside these parentheses here what is getting passed to g this is simply a two-dimensional line because we have two inputs x1 and x2 so we can actually plot this line in feature space or input space we'll call it because along the x-axis is x1 and along the y-axis is x2 and we can plot the decision boundary as we call it of the input to this activation function this is actually the line that defines our perceptron neuron now if i give you a new data point let's say x equals negative 1 2 we can plot this data point in this two-dimensional space and we can also see where it falls with respect to that line now if i want to compute its weighted combination i simply follow the perceptron equation to get 1 minus 3 minus 4 which equals minus 6. and when i put that into a sigmoid activation function we get a final output of approximately 0.002 now why is that the case so assume we have this input negative 1 and 2 and this is just going through the math again with negative 1 and 2. we pass that through our equations and we get this output from g let's dive in a little bit more to this feature graph well remember if the sigmoid function is defined in the standard way it's actually outputting values between 0 and 1 and the middle is actually at 0.5 right so anything on the left hand side of this feature space of this line is going to correspond to the activation z being less than 0 and the output being less than 0.5 and on the other side is the opposite that's corresponding to our activation z being greater than 0 and our output y being greater than 0.5 right so this is just following all of the sigmoid math but illustrating it in pictorial form and schematics and in practice neural networks don't have just two weights w1 w2 they're composed of millions and millions of weights in practice so you can't really draw these types of plots for the types of neural networks that you'll be creating but this is to give you an example of a single neuron with a very small number of weights and we can actually visualize these types of things to gain some more intuition about what's going on under the hood so now that we have an idea about the perceptron let's start by building neural networks from this foundational building block and seeing how all of this story starts to come together so let's revisit our previous diagram of the perceptron if there's a few things i want you to take away from this lecture today i want it to be this thing here so i want you to remember how a perceptron works and i want you to remember three steps the first step is dot product your inputs with your weights dot product add a bias and apply a non-linearity and that defines your entire perceptron forward propagation all the way down into these three operations now let's simplify the diagram a
little bit now that we got the foundations down i'll remove all of the weight labels so now it's assumed that every line every arrow has a corresponding weight associated to it now i'll remove the bias term for simplicity as well here you can see right here and note that z the result of our dot product plus our bias is before we apply the non-linearity right so g of z is our output our prediction of the perceptron our final output is simply our activation function g taking as input that state z if we want to define a multi-output neural network so now we don't have one output y let's say we have two outputs y one and y two we simply add another perceptron to this diagram now we have two outputs each one is a normal perceptron just like we saw before each one is taking inputs from x1 to xm from the x's multiplying them by the weights and they have two different sets of weights because they're two different neurons right they're two different perceptrons they're going to add their own biases and then they're going to apply the activation function so you'll get two different outputs because the weights are different for each of these neurons if we want to define let's say this entire system from scratch now using tensorflow we can do this very very simply just by following the operations that i outlined in the previous slide so our neuron let's start by a single dense layer a dense layer just corresponds to a layer of these neurons so not just one neuron or two neurons but an arbitrary number let's say n neurons in our dense layer we're going to have two sets of variables one is the weight vector and one is the bias so we can define both of these types of variables and weights as part of our layer the next step is to find what is the forward pass right and remember we talked about the operations that defined this forward pass of a perceptron and of a dense layer now it's composed of the steps that we talked about first we compute matrix multiplication of our inputs with our weight matrix our weight vector so inputs multiplied by w add the bias plus b and feed it through our activation function here i'm choosing a sigmoid activation function and then we return the output and that defines a dense layer of a neural network now we have this dense layer we can implement it from scratch like we see in the previous slide but we're pretty lucky because tensorflow has already implemented this dense layer for us so we don't have to do that and write that additional code instead let's just call it here we can see an example of calling a dense layer with the number of output units set equal to 2. 
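to make that concrete, here is one possible way to write out the dense layer just described, a weight matrix, a bias vector, a matrix multiplication plus bias, and a sigmoid, shown next to the built-in layer; the class name and initializers are placeholders rather than the exact lab code:

```python
import tensorflow as tf

class MyDenseLayer(tf.keras.layers.Layer):
    """A dense layer written out by hand: output = sigmoid(x W + b)."""
    def __init__(self, input_dim, output_dim):
        super().__init__()
        # the two sets of variables described above: the weight matrix and the bias
        self.W = self.add_weight(shape=(input_dim, output_dim), initializer="random_normal")
        self.b = self.add_weight(shape=(1, output_dim), initializer="zeros")

    def call(self, inputs):
        z = tf.matmul(inputs, self.W) + self.b   # matrix multiply the inputs, add the bias
        return tf.math.sigmoid(z)                # feed through the activation function

# the built-in equivalent: a fully connected layer with 2 output units
layer = tf.keras.layers.Dense(units=2, activation="sigmoid")
```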
now let's dive a little bit deeper and see how we can make a full single layered neural network not just a single layer but also an output layer as well this is called a single hidden layer neural network and we call this a hidden layer because these states in the middle these red states are not directly observable or enforceable like the inputs which we feed into the model and the outputs which we know we want to predict right so since we now have this transformation from the inputs to the hidden layer and from the hidden layer to the output layer we need two sets of weight matrices w1 for the input layer and w2 for the output layer now if we look at a single unit in this hidden layer let's take this second unit for example z2 it's just the same perceptron that we've been seeing over and over in this lecture already so we saw before that it's obtaining its output by taking its inputs the x's multiplying them via the dot product adding a bias and that gives us z2 if we took a different hidden node like z3 for example it would have a different output value just because the weights leading to z3 are probably going to be different than the weights leading to z2 and we basically initialize them to be different so we have diversity in the neurons now this picture looks a little bit messy so let me clean it up a little bit more and from now on i'll just use this symbol in the middle to denote what we're calling a dense layer it's called dense because every input is connected to every output in a fully connected way so sometimes you also call this a fully connected layer to define this fully connected network or dense network in tensorflow you can simply stack your dense layers one after another in what's called a sequential model a sequential model is something that feeds your inputs sequentially from inputs to outputs so here we have two layers the hidden layer first defined with n hidden units and our output layer with two output units and if we want to create a deep neural network it's the same thing we just keep stacking these hidden layers on top of each other in a sequential model and we can create more and more hierarchical networks and this network for example is one where the final output in purple is actually computed by going deeper and deeper into the layers of this network and if we want to create a deep neural network in software all we need to do is stack those software blocks over and over and create more hierarchical models okay so this is awesome now we have an idea and we've seen an example of how we can take a very simple and intuitive mechanism of a single neuron a single perceptron and build that all up into the form of layers and complete complex neural networks let's take a look at how we can apply them in a very real and practical problem that maybe some of you have thought about before coming to today's class now here's the problem that i want to train an ai to solve if i was a student in this class so will i pass this class that's the problem that we're going to ask our machine or a deep learning algorithm to answer for us and to do that let's start by defining some input features to the ai model one feature let's use to learn from is the number of lectures that you attend as part of this course and the second feature is the number of hours that you're going to spend
developing your final project and we can collect a bunch of data because this is our fifth year teaching this amazing class we can collect a bunch of data from past years on how previous students performed here so each dot corresponds to a student who took this class we can plot each student in this two-dimensional feature space where on the x-axis is the number of lectures they attended and on the y-axis is the number of hours that they spent on the final project the green points are the students who passed and the red points are those who failed and then there's you you lie right here at the point four five so you've attended four lectures and you've spent five hours on your final project you want to build now a neural network to determine given everyone else's standing in the class will i pass or fail this class now let's do it so we have these two inputs one is four one is five these are your inputs and we're going to feed these into a single layered neural network with three hidden units and we'll see that when we feed it through we get a predicted probability of you passing this class of 0.1 or 10 percent so that's pretty bad because well you're not going to fail the class you're actually going to succeed so the actual value here is going to be one you do pass the class so why did the network get this answer incorrect well to start with the network was never trained right all we did was just initialize the network it has no idea what 6.S191 is what it takes for a student to pass or fail a class or what these inputs four and five mean right so it has no idea it's never been trained it's basically like a baby that's never seen anything before and you're feeding some random data to it and we have no reason to expect that it's going to get this answer correctly that's because we never told it how to train itself how to update itself so that it can learn how to predict such an outcome such a task of passing or failing a class now to do this we have to actually define to the network what it means to get a wrong prediction or what it means to incur some error now the closer our prediction is to our actual value the lower this error or loss function will be and the farther apart they are the more error we will incur now let's assume we have data not just from one student but from many students now we care about how the model did on average across all of the students in our data set and this is called the empirical loss function it's simply the mean of all of the individual loss functions over our data set and when training a network to solve this problem we want to minimize the empirical loss so we want to minimize the loss that the network incurs on the data set that it has access to between our predictions and our outputs so if we look at the problem of binary classification for example passing or failing a class we can use a loss function called for example the softmax cross-entropy loss and we'll go into more detail and you'll get some experience implementing this loss function as part of your software labs but i'll just give it as a quick aside right now as part of this slide now let's suppose instead of predicting pass or fail a binary classification output let's suppose i want to predict a numeric output for example the grade that i'm going to get in this class now that's going to be any real number now we might want to
use a different loss function because we're not doing a classification problem anymore now we might want to use something like a mean squared error loss function or maybe something else that takes as input continuous real valued numbers okay so now that we have this loss function we're able to tell our network when it makes a mistake now we've got to put that together with the actual model that we defined in the last part to actually see how we can train our model to update and optimize itself given that error function so how can it minimize the error given a data set so remember the objective here is that we want to identify a set of weights let's call them w star that will give us the minimum loss function on average throughout this entire data set that's the gold standard of what we want to accomplish here in training a neural network right so the whole goal of this class really is how can we identify w star how can we train the weights all of the weights in our network such that the loss that we get as an output is as small as it can possibly be right so that means that we want to find the w's that minimize j of w so that's our empirical loss our average empirical loss remember that w is just a group of all of the w's from every layer in the model right so we just concatenate them all together and we want to find the weights that give us the lowest loss and remember that our loss function is just a function right that takes as input all of our weights so given some set of weights our loss function will output a single value right that's the error if we only have two weights for example we might have a loss function that looks like this we can actually plot the loss function because it's relatively low dimensional we can visualize it right so on the horizontal x and y axes we're having the two weights w0 and w1 and on the vertical axis we're having the loss so higher loss is worse and we want to find the weights w0 and w1 that will bring us to the lowest part of this loss landscape so how do we do that this is a process called optimization and we're going to start by picking an initial w0 and w1 start anywhere you want on this graph and we're going to compute the gradient remember our loss function is simply a mathematical function so we can compute the derivatives and compute the gradients of this function and the gradient tells us the direction that we need to go to maximize j of w to maximize our loss so let's take a small step now in the opposite direction right because we want to find the lowest loss for a given set of weights so we're going to step in the opposite direction of our gradient and we're going to keep repeating this process we're going to compute gradients again at the new point and keep stepping and stepping and stepping until we converge to a local minimum eventually the gradients will converge and we'll stop at the bottom it may not be the global bottom but we'll find some bottom of our loss landscape so we can summarize this whole algorithm known as gradient descent using the gradients to descend into our loss function in pseudocode so here's the algorithm written out as pseudocode we're going to start by initializing weights randomly and we're going to repeat two steps until we converge so first we're going to compute our gradients and then we're going to step in the opposite direction a small step in the opposite direction of our gradients to update our weights
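as a toy illustration of that loop, here is a sketch using TensorFlow's automatic differentiation on a made-up two-weight loss landscape; the quadratic loss and the specific numbers are invented purely for illustration and are not the class's actual loss:

```python
import tensorflow as tf

# a made-up loss landscape over two weights, just for illustration
def loss_fn(w):
    return (w[0] - 3.0) ** 2 + (w[1] + 1.0) ** 2

w = tf.Variable([0.0, 0.0])   # initialize the weights (randomly, in practice)
eta = 0.1                     # learning rate: the size of each step

for step in range(100):
    with tf.GradientTape() as tape:
        loss = loss_fn(w)
    grad = tape.gradient(loss, w)   # compute dJ/dw automatically
    w.assign_sub(eta * grad)        # take a small step opposite the gradient

print(w.numpy())   # approaches the minimum of this landscape at (3, -1)
```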
right now the amount that we step here eta the greek character next to our gradients determines the magnitude of the step that we take in the direction of our gradients and we're going to talk about that later that's a very important part of this problem but before i do that i just want to show you also kind of the analog of this algorithm written out in tensorflow again which may be helpful for your software labs right so this whole algorithm can be replicated using automatic differentiation using platforms like tensorflow so with tensorflow you can actually randomly initialize your weights and you can compute the gradients and do these differentiations automatically so it will actually take care of the definitions of all of these gradients using automatic differentiation and it will return the gradients that you can directly use to step with and optimize and train your weights but now let's take a look at this term here the gradient so i mentioned to you that tensorflow and your software packages will compute this for you but how does it actually do that i think it's important for you to understand how the gradient is computed for every single weight in your neural network so this is actually a process called back propagation in deep learning and neural networks and we'll start with a very simple network and this is probably the simplest network in existence because it only contains one hidden neuron right so it's the smallest possible neural network now the goal here is that we're going to try and do back propagation manually ourselves by hand so we're going to try and compute the gradient of our loss j of w with respect to our weight w2 for example this tells us how much a small change in w2 will affect our loss function right so if i change and perturb w2 a little bit how does my error change as a result so if we write this out as a derivative we start by applying the chain rule backwards from the loss function through the output okay so we start with the loss function here and we specifically decompose dj/dw2 into two terms we're going to decompose that into dj/dy multiplied by dy/dw2 right so we're just applying the chain rule to decompose the left hand side into two gradients that we do have access to now this is possible because y is only dependent on the previous layer now let's suppose we want to compute the gradients of the weight before w2 which in this case is w1 well now we've replaced w2 with w1 on the left hand side and then we need to apply the chain rule one more time recursively right so we take this equation again and we need to apply the chain rule to the right hand side on the red highlighted portion and split that part into two parts again so now we propagate our gradient our old gradient through the hidden unit all the way back to the weight that we're interested in which in this case is w1 right so remember again this is called back propagation and we repeat this process for every single weight in our neural network and if we repeat this process of propagating gradients all the way back to the input then we can determine how every single weight in our neural network needs to change in order to decrease our loss on the next iteration so then we can apply those small little changes so that our loss is a little bit better on the next trial and that's the backpropagation algorithm in theory it's a very simple algorithm just compute the gradients and
step in the opposite direction of your gradient but now let's touch on some insights from training these networks in practice which is very different than the simple example that i gave before so optimizing neural networks in practice can be extremely difficult it does not look like the loss function landscape that i gave you before in practice it might look something like this where your lost landscape is super convex uh super non-convex and very complex right so here's an example of the paper that came out a year ago where authors tried to actually visualize what deep learn deep neural network architecture landscapes actually look like and recall this update equation that we defined during gradient descent i didn't talk much about this parameter i alluded to it it's called the learning rates and in practice it determines a lot about how much step we take and how much trust we take in our gradients so if we set our learning rate to be very slow then we're model we're having a model that may get stuck in local minima right because we're only taking small steps towards our gradient so we're going to converge very slowly we may even get stuck if it's too small if the learning rate is too large we might follow the gradient again but we might overshoot and actually diverge and our training may kind of explode and it's not a stable training process so in reality we want to use learning rates that are neither not small not too small not too large to avoid these local minima and still converge right so we want to kind of use medium-sized learning rates and what medium means is totally arbitrary you're going to see that later on just kind of skip over these local minima and and still find global or hopefully more global optimums in our lost landscape so how do we actually find our learning rate well you set this as the define as a definition of your learning algorithm so you have to actually input your learning rate and one way to do it is you could try a bunch of different learning rates and see which one works the best that's actually a very common technique in practice even though it sounds very unsatisfying another idea is maybe we could do something a little bit smarter and use what are called adaptive learning rates so these are learning rates that can kind of observe its landscape and adapt itself to kind of tackle some of these challenges and maybe escape some local minima or speed up when it's on a on a local minima so this means that the learning rate because it's adaptive it may increase or decrease depending on how large our gradient is and how fast we're learning or many other options right so in fact these have been widely explored in deep learning literature and heavily published on as part of also software packages like tensorflow as well so during your labs we encourage you to try out some of these different types of of uh optimizers and algorithms and how they they can actually adapt their own learning rates to stabilize training much better now let's put all of this together now that we've learned how to create the model how to define the loss function and how to actually perform back propagation using an optimization algorithm and it looks like this so we define our model on the top we define our optimizer here you can try out a bunch of different of the tensorflow optimizers we feed the output of our model grab its gradient and apply its gradient to the optimizer so we can update our weight so in the next iteration we're having a better prediction now i want to continue to 
talk about tips for training these networks in practice very briefly towards the end of this lecture and because this is a very powerful idea of batching your data into mini batches to stabilize your training even further and to do this let's first revisit our gradient descent algorithm the gradient is actually very very computationally expensive to compute because it's computed as a summation over your entire data set now imagine your data set is huge right it's not going to be feasible in many real life problems to compute on every training iteration let's define a new gradient function that instead of computing it on the entire data set it just computes it on a single random example from our data set so this is going to be a very noisy estimate of our gradient right so just from one example we can compute an estimate it's not going to be the true gradient but an estimate and this is much easier to compute because it's it's very small so just one data point is used to compute it but it's also very noisy and stochastic since it was used also with this one example right so what's the middle ground instead of computing it from the whole data set and instead of computing it from just one example let's pick a random set of a small subset of b examples we'll call this a batch of examples and we'll feed this batch through our model and compute the gradient with respect to this batch this gives us a much better estimate in practice than using a single gradient it's still an estimate because it's not the full data set but still it's much more computationally attractive for computers to do this on a small batch usually we're talking about batches of maybe 32 or up to 100 sometimes people use larger with larger neural networks and larger gpus but even using something smaller like 32 can have a drastic improvement on your performance now the increase in gradient accuracy estimation actually allows us to converge much quicker in practice so it allows us to more smoothly and accurately estimate our gradients and ultimately that leads to faster training and more parallelizable computation because over each of the elements in our batch we can kind of parallelize the gradients and then take the average of all of the gradients now this last topic i want to address is that of overfitting this is also a problem that is very very general to all of machine learning not just deep learning but especially in deep learning which is why i want to talk about it in today's lecture it's a fundamental problem and challenge of machine learning and ideally in machine learning we're given a data set like these red dots and we want to learn a model like the blue line that can approximate our data right said differently we want to build models that learn representations of our data that can generalize to new data so assume we want to build this line to fit our red dots we can do this by using a single linear line on the left hand side but this is not going to really well capture all of the intricacies of our red points and of our data or we can go on the other far extreme and overfit we can really capture all the details but this one on the far right is not going to generalize to a new data point that it sees from a test set for example ideally we want to wind up with something in the middle that is still small enough to maintain some of those generalization capabilities and large enough to capture the overall trends so to address this problem we can employ what's called a technique called regularization regularization is 
simply a method that you can introduce into your training to discourage complex models so to encourage these more simple types of models to be learned and as we've seen before it's actually critical and crucial for our models to be able to generalize past our training data right so we can fit our models to our training data and actually we can minimize our loss to almost zero in most cases but that's not what we really care about we always want to train on a training set but then have that model be deployed and generalize to a test set which we don't have access to so the most popular regularization technique for deep learning is a very simple idea called dropout and let's revisit this picture of a neural network that we started with in the beginning of this class in dropout during training what we're going to do is randomly drop and set some of the activations in this neural network in the hidden layer to zero with some probability let's say we drop out 50 percent of the neurons we randomly pick 50 percent of the neurons that means that their activations are all set to zero and we force the network to not rely on those neurons too much so this forces the model to kind of identify different types of pathways through the network on this iteration we pick some random 50 percent to drop out and on the next iteration we may pick a different random percent and this is going to encourage these different pathways and encourage the network to identify different forms of processing its information to accomplish its decision making capabilities another regularization technique is a technique called early stopping now the idea here is that we all know the definition of overfitting it's when our model starts to have very bad performance on our test set we don't have a test set but we can kind of create an example test set using our training set so we can split up our training set into two parts one that we'll use for training and one that we will not show to the training algorithm but we can use to start to identify when we start to overfit a little bit so on the x-axis we can actually see training iterations and as we start to train we can see that both the training loss and the testing loss go down and they keep going down until they start to diverge the training loss keeps decreasing but the testing loss starts to increase and this pattern of divergence actually continues for the rest of training and what we want to do here is actually identify the place where the testing loss is minimized and that's going to be the model that we're going to use and that's going to be the best kind of model in terms of generalization that we can use for deployment so when we actually have a brand new test data set that's going to be the model that we're going to use so we're going to employ this technique called early stopping to identify it and as we can see anything that kind of falls on the left side of this line are models that are underfitting and anything on the right side of this line are going to be models that are considered to be overfit right because this divergence has occurred now i'll conclude this lecture by first summarizing the three main points that we've covered so far so first we learned about the fundamental building blocks of neural networks the perceptron a single neuron we learned about stacking and composing these types of neurons together to form layers and full networks and then finally we learned about how to actually complete the whole puzzle and train these neural networks end to end using some loss
function and using gradient descent and back propagation so in the next lecture we'll hear from ava on a very exciting topic taking a step forward and actually doing deep sequence modeling so not just one input but now a series a sequence of inputs over time using rnns and also a really new and exciting type of model called the transformer and attention mechanism so let's resume the class in about five minutes once we have a chance for ava to just get set up and bring up her presentation so thank you very much
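as a closing recap of the pieces covered in this lecture, model, loss, optimizer, mini-batching, and dropout, here is one possible end-to-end training sketch in TensorFlow; the toy data set, the layer sizes, the binary cross-entropy loss, and the hyperparameters are all placeholder choices and not the actual software-lab code:

```python
import tensorflow as tf

# placeholder data: 2 input features and a binary pass/fail label (illustrative only)
x_train = tf.random.normal((1000, 2))
y_train = tf.cast(tf.reduce_sum(x_train, axis=1) > 0, tf.float32)[:, tf.newaxis]

# a small sequential model with dropout for regularization
model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dropout(0.5),                  # randomly zero 50% of activations during training
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

loss_fn = tf.keras.losses.BinaryCrossentropy()             # cross-entropy loss for pass/fail
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-2)   # an adaptive learning rate optimizer

# mini-batches: estimate the gradient on small random batches instead of the full data set
dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train)).shuffle(1000).batch(32)

for epoch in range(5):
    for x_batch, y_batch in dataset:
        with tf.GradientTape() as tape:
            y_pred = model(x_batch, training=True)   # training=True turns dropout on
            loss = loss_fn(y_batch, y_pred)
        grads = tape.gradient(loss, model.trainable_variables)          # backpropagation
        optimizer.apply_gradients(zip(grads, model.trainable_variables))
    print(f"epoch {epoch}: last batch loss = {loss.numpy():.3f}")
```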
MIT_6S191_Introduction_to_Deep_Learning
MIT_6S191_Google_Generative_AI_for_Media.txt
[Music] all right okay let's settle in and get started here so we have two amazing sets of guest lectures today that I'm super excited about and it is my pleasure to introduce them both today so first we'll start off with Dr Doug Eck who's the senior research director at Google DeepMind and there Doug is overseeing all of Google DeepMind's efforts in generative AI for media including image generation music generation and more and he has done really seminal work in developing music recommendation and music generation systems um and comes from a storied history and training and pedigree in sequence modeling so I think this should be a really exciting and fun talk and please join me in welcoming Doug thanks thank you very much Ava yeah you guys are I just saw the other Doug Doug Blank and so we're both state school boys from Indiana University we played a little bit of Ultimate Frisbee together and he just dropped up here and says I'm giving the talk after you so you have the intersection of Doug and IU and Ultimate in front of you all right uh yeah my name's Doug Eck and uh I'm going to give you a very opinionated and historical overview of generative media and one reason I'm going to give an opinionated and historical overview is that quite honestly probably most of you know as much or more about things like Midjourney and what's happening right now and I say that not only because you're MIT students or MIT faculty or Turing Award winners but because the uh you know like I occasionally do things for Google where you know I'm speaking with like some company C-suite or something like that and 5 years ago or six years ago you'd go give this talk and they wouldn't care about AI they wouldn't understand it at all and you know the slide that would excite them is the picture of the data center and they're like oh have you seen the data centers it's like no we're doing research in machine learning like you know we're not doing that you know now you give these talks and they're like oh well you know blah blah blah RLHF and I'm like they just mentioned RLHF and they're like a you know CTO of a company so like it's in the air now um every you know we're all concerned and all thinking about the positives and negatives of large language models how they impact society how they impact the economy and of course as researchers how we can improve them make them better make them safer um I wanted to talk a little bit about how we got here uh with a focus on um on generative models for images and a little bit of video um and then um my main area my passion area which is music and um what you won't see here are lots of equations um it's on you to read the papers um many of you have and much of this will be just review for you so we're going to talk about uh generative AI and by this I don't mean um Michael Jordan style uh generative AI though that's part of it so we're not necessarily talking about modeling the prior and the posterior in a Bayesian sense but really generating media generating useful ideally useful controllable media um and doing something interesting and valuable with it and I'll talk about you know I'm outlining here four families of generative AI text to text text to image text to video and text to audio I'm going to talk about in some sense a little bit about all of them but mostly focus on the image audio and video work um part of that has to do with how we're organized in
Google DeepMind with the Gemini team working really hard on language models and my team largely uses the Gemini models uh to do things with media and so it's just the focus that I'm bringing to the table here my own personal focus so part of this how we got here the obligatory Moore's law-like slide um is just we've got more and more and more compute um this one's price performance so it's really not about per se processing speed or flops it's about cost but you can see that we're following an exponential curve this is obviously a log scale plot and it's a nice straight line up and to the right and this is part of what's driving the success and the quality of generative models across the board um I really like this plot I apologize that it ends in 2022 um it's from a brief history of AI and this was put out under a CC a Creative Commons license so I'm happy to use it this is actually log scale flops and you're looking at um the models going up and to the right there let's zoom in a little bit on some recent models it's actually quite surprising how much more compute has come to play in training these models look at AlexNet a big breakthrough model uh that happened in Google Brain um 470 petaflops versus say GPT-3 with 314 million petaflops and Minerva with 2.7 billion petaflops so part of the story here is processing power and we shouldn't discount that uh but there's also a bunch of innovation that happened uh that goes along with this processing power that I'm much more excited about for the record I told you this was an opinionated talk I'm personally not very excited by um the scaling laws um because they show you that it's either exponential or it's like a high power function um so if someone comes to you selling solar panels and they say if you buy exponentially more solar panels from me you know I'll give you linearly more power you should think twice before you buy those solar panels right it's not a good deal um and so let's start with images and let's talk about where we've come from and where we are I'm going to dive into some of these models and just talk about them a little bit um but you can see uh on the upper left is Ian Goodfellow's GAN generative adversarial networks and on the bottom um this graph stops at 2022 with a model that happens to come from our group Imagen um that's simply because this particular graph stopped obviously we have Midjourney we have other models now that are driving the state-of-the-art as well and I want like especially those of you um who weren't doing research at this point in time um really take a look at that image this image and the work that came from these GANs blew our minds how many of you were doing research when the GAN stuff happened in 2015 how many of you were at the NeurIPS tutorial where Ian got in a fight with Jurgen a few of you oh nobody really wow I'm getting old anyway this drove machine learning at least the generative modeling part of machine learning uh generated so much interest but by today's standards this just isn't that impressive of an image right um yet it was 2015 it wasn't that long ago pretty quickly we saw some nice improvements on GANs using deep convolution here Alec Radford Luke Metz and Soumith um all of them have made really great names for themselves as well this was trained on um pictures of bedrooms from a real estate data set and I remember being blown away by these and saying oh my God look at the upper right
hand corner focus on the upper right hand corner it looks like a bed and it kind of looks like a window and maybe it has straight walls at right angles the fact that this model this convnet could generate images that had this structure at the time was mind-blowing these were album covers from the same paper and I work in music and so I thought wow you know this one looks like it's heavy metal and we actually were playing around with kind of matching the album covers with music some of them have a heavy metal vibe um etc etc a lot of fun and then you know quality starts going up pretty quickly right this was StyleGAN 2018 I think this is when uh generative imagery started to hit the press uh what was that website this is not a face this is yeah um and these are um you know quite impressive though I would point out many people maybe it's just those of us that work in the field I think you can kind of recognize that these are fake now you know you can kind of feel it um some of them are pretty obvious like the um uh well I won't pick on any of these fake people but I think some of them look a little weird you know so the artifacts are there um you know and uh over time we become more and more resilient to the fakery if you will but very very impressive work and then 2021 DALL-E comes out of OpenAI um this is from their paper uh it is uh something like the prompt was like walnuts cracked open right and uh the impressive bit here um is not the quality of the image it's that you can type a prompt like walnuts cracked open and you get it and don't underestimate the power of that how important that is you could try to take a StyleGAN and if you train it on a diverse data set you might have pictures of walnuts being cracked open but you won't find it or it could take you years to find it you browse around the latent space that's all you can do right so the real breakthrough that made these models useful was not the quality of the images it was the ability to follow human intent and I'll come back to that point over and over again it's the ability to control these models with a prompt that makes them of perhaps lasting value but certainly of transitory value over the last few years because we spent a lot of time playing with image generation models and then I pulled something from 2022 Imagen from our group um again uh somewhat laziness I thought about grabbing some Midjourney slides and I didn't know if I could clear them and blah blah blah and you know you can all go look at them on your own and they're amazing right um but you know here we are here's the punchline slide the punchline slide is um 2015 to 2022 that's seven years right in terms of visible progress in machine learning I'm hardpressed to find another seven years that has brought this kind of hill climbing improvement on some basic ideas and I think as uh those of you that are students right now and you're launching your careers take stock of that that means that a whole bunch of doors that were previously closed are now open and we likely don't know what those doors are or where they lead and I find it absolutely inspiring to be in this position where um you know I'm still actively working in the space and this kind of opportunity is here let's pivot now to text uh Alex Graves with whom I worked for several years uh with Jurgen Schmidhuber in Switzerland we were all working on LSTM I was a postdoc and Alex was a PhD student
he stuck with LSTM and started in the early aughts did I say that right the aughts yeah um doing text generation and maybe I'm easily impressed but I thought this was really cool at the time it's trained on Wikipedia 'for some range of start rail years as dealing with 161 or 18950 million usd2 in covert all carbonate functions' I was like it kind of smells like language it's not quite random right it's like got a vibe to it um and you could see actually I could see that there's a hill to climb here right he'd sort of cracked a code with these LSTM experiments in my mind because I was working on this earlier and we couldn't even get here and so once you see this kind of progress starting I think it becomes quite compelling um 2013 to 2019 six years later the famous GPT-2 that came from the Transformer I worked closely with the Transformer folks while they were still at Google very uh really impressed by that paper now here's a prompt written by a human that talks about a herd of unicorns we look four or five paragraphs down in the text and wow it's still talking about unicorns like at the time that's just also shockingly impressive and I think this example comparing Alex's work with recurrent neural networks and this Transformer approach this makes the point right if attention is good for any one thing it is that it can trivially attend to something in the deep past if it needs to and use it it doesn't need to use a recurrent memory to try to bring forward in time this dynamic state and then unpack that state it can just as easily look at stuff in the distant past as now I'll come back to that um but I want to dive into this is sort of a two or three-part talk hopefully leaving time for questions I want to go a little bit deeper into some of the image generation technologies um for those of you that are looking for a really deep dive you're not going to get that here today um but that's okay too there's lots of time to read papers so I'm going to dive into two models that came from our group um my reason for focusing on these models is that one of them on the left Imagen is based on diffusion and one of them on the right Parti is an LLM all the way through and I think it's instructive to see how both of these approaches can work quite well and have relative weaknesses and relative strengths and I think a lot of work in building usable fast controllable image generation models is actually going to come from pulling from numerous families like this and using them when they make sense so let's start with Imagen um this landed just in time for NeurIPS in 2020 2020 is that right no 2022 I should have that date memorized you know what I'm embarrassed 2022 okay um and so what this model is doing is taking advantage of diffusion a process that nature has known about since the beginning of time that was brought into machine learning probably the first person to do this was Jascha Sohl-Dickstein uh in a paper when he was still at Stanford uh using diffusion for training models um he worked with us for about 8 years at Google reported to me until recently and is now at Anthropic I'm really thrilled for him you know I think he's going to do great work there um so what's this diffusion process if you haven't looked at it in your coursework um it's a really cool trick
the idea is we're going to learn some sort of mathematical destructive process that slowly makes noise from structure so we have a puppy dog on the left that's structure and we have noise on the right some Gaussian noise on the right and we're going to learn some way to gradually turn the puppy dog into noise so the puppy dog disappears and the noise takes over of course the crucial trick for those of you that know the model is that this is done in a fashion that's invertible and because it's invertible we can run it backwards and running it backwards is quite cool because now we can go from noise to puppy dogs in multiple steps and then furthermore we can collapse these multiple steps the underlying process would be you know hundreds or thousands of tiny steps we can collapse that by learning that invertible mapping with a neural network into one two three four or five steps and this gives us this wonderful back and forth process it also has the property of being non-causal so there's no autoregressive component meaning it parallelizes quite well and it's also quite good at picking up global structure in images so it's quite well suited for image generation and specifically Imagen which for a short time was at the state-of-the-art it was the best model in the world um you know things progress we're all still working on these things um takes text as an input and outputs 1024 x 1024 pixels by going through three steps um one to make a very low resolution image 64 x 64 think Ian Goodfellow's generated face and actually I think that might have been 28 by 28 um and then continually upsampling with more and more parameters and so it looks like this um we have a prompt from a user a photo of a group of teddy bears eating pizza in Times Square and um that gets run through some sort of pre-trained model no learning is happening on the language model it's off the shelf it gives us some vector that vector is a bunch of numbers that represent that string and then that embedding that vector is used to condition the noise to start generating an image and then that embedding is kept there and the low res image replaces noise and is used to create a higher res image which replaces noise and is used to create a higher res image the embedding is always there pinning the model down so that it generates what you want and critically and very interestingly for those of you that like to build cool things you're not really required to use the same embedding each time there's no reason why you have to use the same embedding each time and it turns out that's one of the great ways to use diffusion to do style transfer you can start with an embedding that's talking about a photo of a group of teddy bears and in the last phase you can say oh screw that I want this to look like Picasso and you put a Picasso embedding in there and it will obey it to some extent and it will thus style transfer the result which I think is kind of cool um so that's one family of models another family of models that I wanted to talk about is autoregressive models all the way through so we're not going to use diffusion we're instead going to output a series of tokens that become an image um a particular kind of decoder that is driven directly by tokens so you have a Transformer encoder and a Transformer decoder um you inject the text into the process a dignified beaver wearing
glasses etc by the way there's this whole era like it's going to be of a time when everybody was kind of worried about the impact of these models where they're like choosing the safest and stupidest prompts in the world right like the dignified beaver or like the astronaut on a horse in outer space or you know like the turtles I don't know it's just it's of an era like I think we're going to remember that oh no one wanted to like really get in trouble by generating cool things they want to generate you know animals wearing suits um and uh so we have fundamentally the same outcome here we have an image that's conditioned on text but we have a very different way of generating the actual pixels in this case the decoding is being handled by a language model in the case of Imagen the decoding is being handled by a diffusion model and I think kind of beyond that it doesn't really matter um what's critical for me here and where I think the revolution has happened um my personal opinion it's in that little arrow that's driving the text into the model it's the fact that there's human control it's the second time I focused on that you know this model can be driven by text strings and other conditioning methods and uh the images will reflect that intent nice scaling slide I really like the Parti paper I'm not on the paper so I'm not like patting myself on the back um they did a nice study of just what happens to these images as you add more and more parameters not hard work but a nice investigation and so I really like the um the bottom one a green sign that says very deep learning and is at the edge of the Grand Canyon um if you have 20 billion parameters you're going to spell deep learning correctly very deep learning it's got it if you have uh 350 million parameters on the left it's just garbage it's useless right this is exactly the same model it's the same code okay so this is just increasing capacity and you see um just the flavor of the images changes pretty dramatically by the way this makes it very hard as we move into large scale models it makes it really hard to compare models because even relatively small differences in scale can yield very different you know outputs and as scientists we really want to understand what methods work best we don't necessarily want to understand what method plus scale works best I mean we might if we have a budget but um you know I think deep down inside we really want the right algorithms in play here okay let's shift from images to video um and I'm going to tell a similar story for video again these are um the same slightly out of date slides and in fact there's been some really nice recent video work to come out of um Runway and also uh Midjourney so it's a really hot and active space we remain very active in this space in Google DeepMind um here you're seeing some images from Imagen which again is diffusion based and then versus a model called Phenaki which is very similar to Parti in the sense that it's a language model all the way through um and I would comment that even in these slides you can get a kind of idea of sort of the natural biases of some of these models the way the diffusion process works it's moving through I'll do a slide on this but it's basically 3D diffusion so you're diffusing over time as well and you tend to get really smooth movements so a lot of the images that you see that look like kind of a
let's shift from images to video um and I'm going to tell a similar story for video again these are the same slightly out of date slides and in fact there's been some really nice recent video work to come out of Runway and also Midjourney so it's a really hot and active space we remain very active in this space in Google DeepMind um here you're seeing some images from Imagen which is again diffusion based and then versus a model called Phenaki which is very similar to Parti in the sense that it's a language model all the way through um and I would comment that even in these slides you can get a kind of idea of the natural biases of some of these models the way the diffusion process works it's moving through I'll do a slide on this but it's basically 3D diffusion so you're diffusing over time as well and you tend to get really smooth movements so a lot of the images that you see look like kind of a picture that you've animated for a couple of seconds diffusion does that very very well and in fact the folks that are working on this are like oh that looks like diffusion even if we don't know where the model came from it's kind of got a feel to it the Phenaki stuff the language modeling stuff seems to be more able to follow complex prompts and give rise to more complicated movements and be more dynamic at the expense of the quality of an individual frame now these are all conditioned on how big was the model did you get something wrong you know probably you can do something similar with both but there's a kind of natural bias towards smooth movements with diffusion and an ability to fuse multiple prompts with language modeling um this is the Imagen stack basically this is just showing you um this is number salad but in all of the rows two and three you're seeing three numbers um it's the number of frames times width times height so it's basically defining a column of images that you're making and gradually upsampling both in frame rate and in size that's all I want to say about Imagen right now um Phenaki um there will be a quiz on this slide memorize it all please um is basically showing you that there's a bunch of like fancy encoding happening that's generating tokens able to capture small patches of image and then you know serialize that and that's your language model and then the ability to train and decode into video but it's basically Transformers all the way down um and these open-ended text prompts from the Transformer can change over time um it can generate video that can be multiple minutes long because it's autoregressive you can just keep going and um the emphasis really here is on novel combinations of concepts that don't exist in the training set so the research goals here were not oh can I use language models and make something that looks like diffusion the research goals were more can I tell a longer story um and this was one of the videos that we released I thought was pretty cool um it's a balloon stuck in the branches of a redwood tree these are the actual prompts camera pans from the tree with a single balloon to the zoo entrance camera pans to the zoo entrance camera moves very quickly into the zoo first-person view of flying inside a beautiful garden the head of a giraffe emerges from the side giraffe walks towards a tree zoom into the giraffe's mouth zoom into the giraffe's mouth thank you etc etc I should get a good David Attenborough reading of this um to give you an idea of why this matters uh at least the diffusion models that we've trained natively like I can't say it's not possible there were some prompts that were surprisingly hard my favorite prompt was um a baby rolls a red ball across the floor and the ball turns into a bird and flies away and Phenaki is just able to do that okay and the diffusion models we had would generate the ball rolling across the floor generate a cut and then go to a bird flying in the sky right so now part of that might have just been how it was trained so you know if I had someone from the Imagen team they could raise their hand vigorously and say yeah but you're not being fair we could have done that if we had different training data don't get me wrong you can do it but there's kind of a natural smoother progression that seems to be possible with these language models um finally in the video
space I wanted to throw in a slide about DreamFusion um because I think it's cool and I also think it points to some real challenges in video right now um we've been making experimental videos internally and trying to tell real stories so try to tell like a 10-minute story by doing multiple cuts and do a script and have some fun and some of the challenges you know there's no secrets here they're pretty predictable it's really hard to generate the same room twice if all you're doing is using prompts right so you have like a 5-second scene that's set in this auditorium and like oh no I want another view and oh all right it's an auditorium with red chairs and blah blah blah because you don't really have any way to bring the same room into multiple cuts that's true also for characters and so DreamFusion gives us one direction forward um this is fundamentally having Imagen generate multiple views of the same object training it to do that and then using NeRF to render those views um so it's quite simple but it does give rise to real 3D volumes that can be imported into video games into your favorite 3D modeling software and is both playful but I think also points to a fusion between not just NeRF and Imagen which was the fusion they're talking about here but really video making where you have control over the spaces that you're working in and also the characters that you're using okay okay uh a quick aside this is meant to be a very quick aside then we'll get on to my love which is music uh a little robotics um I don't like these okay there we go so this is coming from Google robotics um and they started applying language models to the task of robotics and the task would be something like um hey robot I knocked my can of Coke over will you go get me another Coke okay and this is home robotics that could help people with certain movement disabilities etc right and it turns out that's really hard in robotics um those of you anybody here do robotics this is MIT yeah so none of you were at NPS at the time but you're all doing robotics this is perfect okay um it's possible it's possible because like universities like University of Montreal invested in deep learning early not Yale and Harvard and you know etc it's meant for a laugh come on give me a break I'm just joking I was at University of Montreal for a while um okay so there are really four directions of research but I want to focus on the first one SayCan um SayCan simply takes a phrase like I spilled my drink can you help and divides and conquers it into a bunch of prompts so this is like classic you could use GPT for this you could use Bard for this you could use whatever you want and then that gets merged with this list of affordances that the robot can handle and you just take the ones that you can merge and it actually worked really well it had great lift
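A toy version of that divide-and-merge idea — with entirely made-up candidate steps and scores, purely to illustrate the structure and not the actual SayCan system — might look like this:

```python
# Toy illustration: a language model scores candidate next steps for
# usefulness toward the instruction, the robot scores the same steps for
# feasibility (its affordances), and you pick the step that is good on both.
# All numbers below are invented for illustration.

instruction = "I spilled my drink, can you help?"

candidate_steps = ["find a sponge", "pick up the sponge", "go to the beach",
                   "bring a new can of coke", "do a backflip"]

def llm_usefulness(step, instruction):
    made_up = {"find a sponge": 0.8, "pick up the sponge": 0.6,
               "go to the beach": 0.1, "bring a new can of coke": 0.7,
               "do a backflip": 0.05}
    return made_up[step]

def robot_affordance(step):
    made_up = {"find a sponge": 0.9, "pick up the sponge": 0.9,
               "go to the beach": 0.0, "bring a new can of coke": 0.8,
               "do a backflip": 0.0}
    return made_up[step]

best = max(candidate_steps,
           key=lambda s: llm_usefulness(s, instruction) * robot_affordance(s))
print(best)  # the step that is both useful and actually doable by the robot
```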
and then they went a little bit further somewhat naively and just said well why don't we just treat robot control as a language modeling problem let's create a language model for robot control and then we're just going to train the model to generate the appropriate robot control signals from the command and it actually worked quite well um this is not a talk for me to give right now what struck me here and the reason I brought it up was not to show off Vincent Vanhoucke's robotic stuff uh at Google um but to point out that this is just another example of taking human intent and using a language model to translate that human intent into another so the way that I read this is that the real power of large language models is not reasoning it's not AGI it's that these models speak to us in our language and they're fundamentally doing translation but the kind of translation they're doing must be expanded beyond translating from one human language to another this is translation from human language to robotese and image generation is translation from the intent make me a picture of a beaver wearing a stupid suit into whatever language you can build around the generation of images and so on and I think that's for me that's where the revolution lies and I think to the extent that we can understand that we will really be driven in the right direction with respect to what to do with language models okay on to music and audio now this video is private oh you're not that's not true I'm still logged in as me um is that for real you may not get to hear this which is um it's probably okay um so this is some of the music that I generated in 2002 which now is sadly more than 20 years ago um in Jürgen Schmidhuber's lab using LSTM and uh the reason that we were looking at music was that music repeats and music has nested structure and uh in the world of sequence learning that's kind of an interesting property you know like even the simplest music even kids music the kind of music that like a four-year-old might like get will probably repeat and if you come from the sequence learning background of recurrent neural networks and that family of models that's actually really hard you start to think your brain goes a little crazy and you start to tell yourself oh we need stacks oh no it's context sensitive oh no we can't learn it with a neural network you know these sorts of you know now heretical ideas um and so this ended up being a very interesting direction of work that I've carried on I don't think I can play this which is a little weird um because I know aha sign in all right verify it to you okay this is how weird we are with security like signs me out during a talk um welcome to Google and my corp laptop and let's watch a bunch of slides go by really fast it'll be fun and this is where we are I'm going to review all right everybody gets sick to their stomach a little bit here robotics boom boom boom close your eyes my wife's an architect and we're doing a plan for a house and she's like I'm going to move around the plan I'll close your eyes because it makes you carsick she's zooming in and out too much all right thank God you're going to get to hear this this is really good guys um 2002 I spent so much time writing the code to do this work and training this model um it's an LSTM network very small on the order of like 20 25 you know cells very small trying to learn enough structure to be able to generate some [Music] music this is just MIDI right this is generating note names and they're being rendered so um yes it's impressive I know I know it got me a job with Yoshua Bengio at University of Montreal true story as faculty um this actual paper from meeting him in Switzerland um so you know that worked out for me um but really you know it was pretty primitive
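For reference, a next-note model of that general shape looks something like this in modern tooling — the vocabulary, layer sizes, and data below are placeholders for illustration, not the original 2002 setup:

```python
# A minimal sketch of a small LSTM trained to predict the next MIDI note
# given the previous ones; trained with cross-entropy over the note vocabulary.
import numpy as np
import tensorflow as tf

VOCAB = 128   # MIDI pitches 0..127
SEQ_LEN = 32

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(VOCAB, 16),
    tf.keras.layers.LSTM(25),                       # "on the order of 25 cells"
    tf.keras.layers.Dense(VOCAB, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# toy data: random note sequences; the target is the note that follows each window
notes = np.random.randint(0, VOCAB, size=(100, SEQ_LEN + 1))
model.fit(notes[:, :-1], notes[:, -1], epochs=1, verbose=0)
```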
um in fact these chord changes were fixed so like you know that's providing all this structure um yeah let's not talk further about that let's just keep moving a little bit further but I wanted to show it to you because it was movement there's another person who did good work in the space because I think it was good work I'm being self-deprecating it was just where we were at the time um a guy named Mike Mozer who does a bunch of work with memory he's in cog sci in Denver and now at Google um he was also working on using neural networks to do music for the same reason he's like I was obsessed with memory music is about memory and he called his model Concert and my favorite line because I read his paper when I was doing my dissertation it's got the most beautiful line he said this neural network generates music only its mother could love it's so good and so true and so you know okay so we're moving forward in time now we're in 2016 we're still working with MIDI data and a paper that I thought was a really nice paper by Natasha Jaques who's from the MIT Media Lab um and she's now at Washington um we did some stuff with what looks a little bit like RLHF we didn't call it that but we're going to now take a neural network and we're going to train it on some external reward that's a nondifferentiable reward and in our case it's not RLHF there's no human feedback the reward is from a series of basically what we know about the problem and the paper also looked at chemistry generating molecules in string format but what we said was you know we know a lot about music how could we show a neural network some of the rules of composition kind of naively just to see what would happen and so you know it turns out that we pulled a textbook of counterpoint from the 18th century and mostly if those of you that know music you usually think of counterpoint as like how the chords go together in Bach right there's also these rules of counterpoint over time like if you begin uh don't repeat notes over and over again resolve large leaps which means if you move away from the tonic like in the key of C the note C is the tonic if you jump way up jump back stupid rules but it turns out if you optimize a model to try to follow those rules by simply giving it scalar reward you know I'll give you a positive number if you do this and also you need an evaluation function that can evaluate your output easy you actually get really different output so this is the baseline RNN at the time we didn't have the memory to use Q learning at scale so these are really small neural networks so they're fit but this is just kind of what the model would do without any Q without any tutorial learning it's like [Music] this and that example is good the model learned to play sparsely it learned to just play a few notes if you listen to a lot of examples it's kind of um it's just wandery kind of in-key music with the learning in place my read is that it actually just starts sounding what I would just call catchy which is maybe not a surprise like you follow the rules you get catchy oops we get [Music] catchy and when we rated these listeners really liked the tutorial learning and uh we just moved on um interesting little paper I want to show you so there we are
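A toy version of that rule-based reward — with made-up weights and rules, and only a placeholder for the model's own log-likelihood term, so a sketch of the idea rather than the actual paper's reward — could look like this:

```python
# Score a melody (a list of MIDI pitches) with simple composition rules:
# stay in key, don't repeat the same note over and over, resolve large leaps.
C_MAJOR = {0, 2, 4, 5, 7, 9, 11}   # pitch classes in the key of C

def rule_reward(melody):
    r = 0.0
    for i, note in enumerate(melody):
        if note % 12 in C_MAJOR:
            r += 0.5                                   # reward staying in key
        if i > 0 and note == melody[i - 1]:
            r -= 1.0                                   # penalize repeated notes
        if i > 1:
            leap = melody[i - 1] - melody[i - 2]
            if abs(leap) > 7 and (note - melody[i - 1]) * leap < 0:
                r += 1.0                               # reward resolving a large leap
    return r

def total_reward(melody, rnn_log_likelihood, c=0.5):
    # blend "sound like the training data" with "follow the rules";
    # rnn_log_likelihood would come from the pretrained note RNN
    return rnn_log_likelihood + c * rule_reward(melody)

print(rule_reward([60, 62, 64, 76, 74, 74]))  # tiny usage example
```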
um and then about the same time 2017 to 2018 uh we started to look at variational autoencoders and music that can move smoothly from one space to another um in the interest of time I'm going to skip this demo but you can see visually the model is able to move smoothly through a latent space but this latent space is now measures of music and you're able to move from one place to another it actually sounds quite nice and there's some nice musical moments here where it moves nicely from one to another um and then here then I think this paper I'm quite proud of this is Music Transformer um the first author on this is Anna Huang um and she's coming here uh as faculty and uh this was really the first paper to use Transformer in this space and there was a previous paper that we did so this is part of a project that I started in 2015 called Magenta which is a project that's about using generative models to make music and art mostly music and we wanted to start a project that would help us explore that space um that project has the benefit of at least being there early I don't know if we were successful but we were there right in 2015 we're there like saying this stuff matters and trying to do it in a relatively safe space so um language models are kind of intrinsically dangerous because they generate language and language is about all sorts of ideas our music models might bore you um but they're probably not going to lead immediately to Godwin's law and some sort of strange you know horrible outcome um anyway this was trained using a Transformer model the data of interest um data is everything we know that we figured out how to take a solo piano performance solo I don't mean one note but just you know one person playing the piano and find the note onsets and their velocities their loudness with you know like 10 millisecond timing and we ran it on millions of pieces of music in YouTube the ones that we're allowed to look at um and there we were able to generate lots and lots of MIDI files that had natural expressive timing and we put that out in open source so even though we had access to the YouTube stuff we put it out there still and then we train this Music Transformer model on it and the second trick is that um we're getting this like 10 millisecond level timing you know like the little bit of a lilt in a waltz that kind of feel right and um a couple people on the team Ian Simon is the main person figured out that you know you don't sample time regularly if you want 10 millisecond timing you don't generate an event 100 times a second you use events to control the clock so we had some softmax values that were saying how much do you advance the clock before you make your next note and then you generate a note and you change the loudness and so now if you play a slow major scale you're generating dozens of events not tens of thousands of events that makes it much easier
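Here is a small sketch of that event-based encoding — the token names and the up-to-one-second time shifts are illustrative (the real encoding also quantizes time into roughly 10 ms bins and includes NOTE_OFF events, omitted here for brevity):

```python
# Instead of sampling the performance 100 times a second, emit events;
# TIME_SHIFT events advance the clock, VELOCITY events change the loudness,
# NOTE_ON events play a note.

def encode_performance(notes):
    """notes: list of (onset_seconds, pitch, velocity) tuples."""
    events, clock = [], 0.0
    for onset, pitch, velocity in sorted(notes):
        gap_ms = int(round((onset - clock) * 1000))
        while gap_ms > 0:                       # advance the clock in up-to-1s chunks
            step = min(gap_ms, 1000)
            events.append(("TIME_SHIFT", step))
            gap_ms -= step
        events.append(("VELOCITY", velocity))   # change the loudness
        events.append(("NOTE_ON", pitch))       # then play the note
        clock = onset
    return events

# a slow scale produces dozens of events, not thousands of fixed time steps
print(encode_performance([(0.0, 60, 80), (0.5, 62, 78), (1.0, 64, 80)]))
```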
the other reason I want to show you this is I think it'll teach you something about Transformers you all know everything there is to know about Transformers because it's all we talk about in machine learning these days right just about um but someone on this was Monica Jesu uh and Anna had this idea of showing the attention mechanism using little colored arcs so you're going to hear a piece of music generated by Music Transformer um it's just MIDI so it's just note events the piano sounds are being generated by a piece of software but you're going to be able to see what the model is attending to there are eight attention heads in this particular model each one's colored and I want you to pay attention to when the music repeats and you can see visually in this timeline this is the music notes unfolding from time zero to time end left to right you see those little short bursts those are fast runs and they actually are repetitions of a theme and what you'll see is that the model is locking into the previous version of that theme and repeating it with modification um so give this a look I think it's quite interesting to see [Music] there's that repetition [Music] [Applause] [Music] [Applause] and also I stopped in the middle this is good because there's a little bit of repetition happening here this is what's happening in Transformer models presumably for language as well this just has such strict repetition that it's easy to see it's attending to the important chunks of information in the past it can attend into the distant past as easily as into the recent past there's no need to carry along a recurrent memory um for whatever reason if you look at the very left hand side here we noticed in every one of these that we visualized that the Transformer was naturally assigning one attention head to keep track of time zero I can't explain why it could be a weird side effect of our implementation but it's quite interesting it kind of gives it one bit of like absolute time like how far into the future have I gone right um and so you know there's your I think quite cool visualization of what attention is up to um moving on so I want to fast forward a bit now we're living in the age of audio lots of amazing work happening in audio um sound like Donald Trump lots of amazing work happening in audio the best the best audio so sad how did I even let him get into my brain um a project that we've published and we also have tools out that you can play with called MusicLM which is based upon I think a really cool idea of you know an audio codec that's a language model so SoundStream this all came from Google research so you have an encoder decoder codec that will basically take any piece of audio and turn it into a sequence of tokens those tokens represent tiny slices of audio and then a trained decoder can take the token stream causally autoregressively and then render it as audio and the quality is actually quite good um one of the wins here was having a GAN-like real versus fake discriminator loss that helps clean things up and then um using that for music then conditioning it with something that looks a little bit like the CLIP embeddings that are used for image generation so you have an embedding that can generate a vector that aligns rock song with distorted guitar in this case with the audio and so now you have a way to control things and um so here's a little example of this model just continuing on autoregressively on a prompt so just think of this like any other prompt but it's music instead of language so the dotted line is where it becomes generated [Music] now you can probably hear that it sounds a bit compressed maybe not the highest bit rate but one of the nice things about a model like MusicLM with SoundStream is that we know we can scale this you know you basically just increase your vocabulary size to cover finer and finer grain structure of audio and you just turn the crank and it just works
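Structurally, the pipeline described above looks roughly like this — every function here is a stand-in stub for illustration, not the real SoundStream or MusicLM API:

```python
# A neural codec turns audio into discrete tokens, a Transformer continues the
# token stream autoregressively while conditioned on a joint text-audio
# embedding of the prompt, and the codec's decoder renders tokens back to audio.

def codec_encode(waveform):          # audio -> discrete tokens (stand-in)
    return [int(abs(x) * 100) % 1024 for x in waveform]

def codec_decode(tokens):            # tokens -> audio (stand-in)
    return [t / 1024.0 for t in tokens]

def text_embedding(prompt):          # e.g. "rock song with distorted guitar"
    return [float(len(w)) for w in prompt.split()]

def continue_tokens(cond, prompt_tokens, n_new):
    # stand-in for the autoregressive Transformer over audio tokens;
    # `cond` would steer the real model, it is unused in this stub
    return prompt_tokens + [(prompt_tokens[-1] + i) % 1024 for i in range(n_new)]

def generate_music(prompt_audio, text_prompt, n_new=500):
    cond = text_embedding(text_prompt)
    tokens = continue_tokens(cond, codec_encode(prompt_audio), n_new)
    return codec_decode(tokens)

audio = generate_music([0.1, -0.2, 0.3], "rock song with distorted guitar")
```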
so we know we can get to 44 kHz stereo and play with all of that it's a question of model size um so we really want this expressivity um the other cool model that I wanted to mention that I think really is cool is um remember I told you that you can condition on different things and in clever ways um in this case the training data is source separated meaning we pull out the drums and the bass and etc into separate lines this can be just done with off-the-shelf software or you can learn it um and then we condition on one of them so we pull the vocals out and then we condition the generation of the other streams with the vocals right that means you don't have to give the model the vocals that it was trained on you can give it your own vocals so here's one this is an amateur vocalist with a little bit of reverb singing something I think it's an amateur vocalist not a professional vocalist oh how do I know that we're all still the same and SingSong will then provide the [Music] accompaniment sorry I want to keep moving um so we've played with this model with lots of musicians and it's quite addictive and I think this is where I see us going with the space not just improving the research but actually building out cool like new ways for people to express themselves okay um couple of thoughts about control and safety and then we're done um this is a picture of an avalanche and it's my personal analogy for what the future of work might look like and I think it's quite serious so I make a lot of jokes but I take this part of the talk really seriously um we could see pretty big changes coming in how work is structured based upon the automation that we're getting from large language models and this also hits the arts it hits the creative fields and the analogy that I came up with when I talk for example I think this analogy is a little simple for the MIT group but you're all MIT so you're really smart um but it's really great for talking to creatives and it's like an avalanche in the following way what's happening right now if there's an avalanche in the mountains on Wednesday if you're hiking in those mountains on Tuesday you're fine right and really if you're hiking in those mountains on Thursday you're probably okay too you know just look out for all of the debris but if you're in those mountains on Wednesday look out and the point being made here is if you're mid-career or late-career right now in these areas that are going to be disrupted you have to look out and history shows that these technical revolutions come at a regular cadence and they're constantly creating this kind of rethinking of our economy and so you know the reason I use the analogy of avalanches is that the avalanche survival recommendations are not terrible advice in this case it's kind of corny I admit but you know if you're caught in an avalanche you're supposed to kind of swim try to stay above the snow at all costs just keep your head above water or above the snow because once that avalanche stops it's going to freeze like concrete so you don't just passively ride out an avalanche if you want to live through it and the reason I mentioned that is I think even if people aren't natively curious in this technology if you're in the creative space just learn about it just lean into it as much as you can because what's going to happen is if this technology is of any use it's going to
give rise to new forms of art new forms of thinking about things and there's no reason for you necessarily to get caught in that um I liked this photo that I found on Wikipedia it's of a woman an actress named Norma Talmadge and um she was unable to adapt to what we call talkies that was the term for movies with sound she was very expressive she was very able to work in silent film and she was unable to use her voice correctly such that it made sense with microphones and so her career was really impacted by this and we don't look back now and say oh how horrible is it that we have sound in films but actually at a point in time that was very disruptive to someone's life and so you know I want to both show how serious this is but also show how it's not necessarily long-term bad um also in terms of safety obviously um we have a huge responsibility to avoid things like the recent claims um by Stanford researchers that LAION-5B um includes child sexual abuse imagery uh completely unacceptable we have to work through that and understand better ways to red team and make sure data is clean and also you know I'm not trying to pick on the competition um Midjourney's clearly training on you know commercial movies and I think that's questionable and we have to really think about a way to control rights control who the stakeholders are someone animated this adventure film or did the CGI for it and you know if I did that work I think I could be a little bit annoyed that like I mean it's not an exact copy but it's really close right so we have to figure out a way to build a marketplace around creative content um there are precedents for this one of my favorites is sampling of audio in music uh initially hip-hop um Biz Markie famously got in trouble for doing this and the labels just tried to sue it out of existence eventually peace was made there's a marketplace for it you can license samples you can do sampling and the people that generated the samples will get paid and I think that's the outcome we would be looking for here if we can understand you know attribution correctly and be honest about where things are going um we do take this seriously at Google and um I always highlight that um you know you should expect from a company of our size and Microsoft that we have to take this stuff seriously right um we're very heavily incentivized to do it if you want to set altruism aside or just that hey we're good people but you know it's the brand we need the brand to be trusted we need trust um and so we're working really hard without any like haha we're going to get away with this we're working really hard to build products that are safe and to be respectful to our stakeholders um this includes for example like it's not just I'm going to write a list of seven things this is a graph of how MusicLM actually tries to generate audio you can look at MusicLM online it's called Audio FX is how it was branded or Music FX um I don't know branding people change the names of things um and we actually filter the query that you type in for hate speech um only if it gets through that query system is audio generated then we try to remove infringing outputs etc etc and we're always listening to user feedback along the way uh we have some papers finally this is really my closer but is it
art um the answer is yes but I can say it longer um you know I think that this is deeply what we're trying to address with Magenta um I'd really encourage you to go look at Magenta g.co/magenta um most of what's there is our blog our blog's gone a little quiet but we're going to pour more energy into it in 2024 um it's basically looking at machine learning as a tool in the creative process we're not trying to have machines make art we're trying to have machines help you make art and I strongly believe it doesn't need to make it easier so um I love these two quotes creativity is more than just being different anybody can play weird that's easy what's hard is to be as simple as Bach making the complicated simple awesomely simple that's creativity Charles Mingus Edgar Degas even better painting is easy when you don't know how but very difficult when you do I think that's deeply true so what we should be asking is you know are there people that are taking AI and actually doing something hard novel and of artistic interest and the answer I think is yes um I spent a lot of time looking at what this year's generation of artists is doing um I really like the work of Josh Rose um you can find him on Medium or you can also find him on Instagram and he has a beautiful essay about boring is the new interesting and he's expressly trying to generate boring images using image generation but like they're so beautiful and they tell a story it's calming it's meditative so nothing is happening in these images by design and he talks quite nicely about how AI helps his process I also really like the work of Beth Frey um she is doing a bunch of what I would call hitting at the uncanny valley like kind of creepily human and non-human at the same time and it's a lot about body imagery and as a woman her experience with the image of her body and how she's playing with food and her body image in ways that I think are really both creepy but also beautiful and expressive and so I'm hopeful for the future I'm excited about where we are and um I'm not going to read this quote but as we move to questions this is Brian Eno you should read that and um I'll stop there [Applause] I fear I ran a little long apologies no problem at all thank you so much Doug um okay so we have some time for some questions and then we'll transition towards the next talk for the second half of the class so any questions first yes I'll bring the mic around thank you so much for the presentation um I have two questions so first is why did you leave Montreal to go to Google like why that transition from being a professor to your role now and the second is I think something in this class that we really talked about is how having more data is really what is driving a lot of AI right now how long do you think that's going to last like what is the next big I guess technical revolution um is it forever just going to be like more and more data in your opinion if I told you that no um so the first answer why did I leave University of Montreal um the shallow answer is the weather which is not it's part of the answer um but I actually I got tenure and I was on my tenure break and I just really wanted to do something new I had been a postdoc for three years and faculty for seven years and that's a decade and you add four years of grad school and that's 14 years and you know I had the chance to come to
Google and be a research scientist so I didn't stop doing research my h-index went up um and so I didn't stop caring about research I just had the chance to do something different and I loved being a professor and I love working at Google so for me it's a win-win I don't have a strong value judgment either way but I was just ready for a change and then the second one is uh it's really not more data it's data quality and understanding how to align the data sets that you need to solve the problem you want to solve so if you look at one of the things that's kind of cool about Midjourney you know I'm mentioning them because they're in the Zeitgeist right now like their images are so cinematic it's almost like Midjourney has its own style and we're following that style and so you think okay like the data if you want to live in the space and now the models are actually changing our tastes right then you know we have to be thinking about how to train models that have that similar look and feel and that's where the art of getting the right data the right kinds of annotations um it's everything but I don't think it's more data I used to think it was like just give me I'll take a lot of noisy data over a small amount of clean data for some definition of small I don't believe that anymore I think clean well thought out you know appropriate data is important I hope you know like I've got a bunch of people working on my team myself included you know we don't want to spend the rest of our lives scaling Transformer models um you know um and I'm hoping to see more breakthroughs you know we've had RLHF we had pre-training you know there's got to be more out there I mean our team put out Transformer but please please beat it you know like do something new so we can have some fun with something new and I you know I don't think we're done innovating in the space of computer science or in the space of AI machine learning thank you so much um I have a question who is the owner of the copyright of the music which was generated by Magenta uh the person who made the music so we explicitly yeah we explicitly exercise no copyright and the data that we trained on we're saying is safe to train on so it's yours good question yeah um I really appreciated the commitment to making AI safe actually like in the interest of humanity and I was curious what it looks like to try and develop a tool or like some platform where you're the first people in the field like you invent Transformers how do you anticipate the safety risks that come with them things like that when you're at the cutting edge this was a tough question for me um the big difference here is that these things actually work like most of what I've done for my 30 years doing generative AI this stuff is so bad it's never going to hurt anybody right because it's just bad right and then you're left like wait actually this generates really realistic text right how do I keep people from being fooled by this um on the music side we have YouTube so on our side we've been thinking about safety and taking down infringing content so we have a bunch of tooling to rely on but really I think the answer is um we should just constantly be red teaming ourselves and be humble and just be aware that we could get this wrong and we should be thinking really hard about what we launch and you know because we only
need to screw things up for three months to really cause damage right so I wish I had an oh we've got this but you know as we're moving really fast right now it makes me a little nervous hi oh hi um thank you this is about uh the use of art as training data for models um I think for a lot of artists that have been publishing their work online for years you know you look 10 20 years ago and the Zeitgeist sort of said get your work out there publish it for free get as many eyes on it as you can and now a lot of that art is being used to train these models so for these artists that find themselves in a position where they feel and not artists that have a copyright like Disney but the small artists that feel that their work has been used what would you say to them is there a way to put that genie back in the bottle or yes um I think that's another really great and very serious question thanks for asking it um Rachel Metz the uh where is she now Bloomberg um she's a really great tech journalist um she interviewed a woman from Oregon who was an artist and the tech article was like artists are infuriated by AI and you actually listen to what this woman is saying and she's looking at this output it was known that her data is in LAION in the data set and she's saying wow you know like this is really good but it looks like my work doesn't it and like that's not someone who's enraged that's someone who's in shock and sad and emotionally affected by this so it's incredibly serious you know I think the genie's out of the bottle right now but I think we can put it back in and I think that there has been damage done and it would be irresponsible to say that there hasn't been damage done you know the people that make the art should be paid for the art now as we move forward um we're moving towards having something that looks like robots.txt which is the text file that's placed at the top level of every website and tells crawlers what they can and can't look at we will start to see evolving standards for what you might call ai.txt where you can say you know you need to be able to put your art online and you need to be able to say hey this is not for training AI models right and you also may have folks that want to be in an ecosystem so my goal will be twofold I think we need an ai.txt that's like robots.txt
and then I also we deeply deeply deeply need an ecosystem where people can make money off of this like you know you pay artists that helps like y'all got to eat right um and I think um even though our YouTube ecosystem isn't perfect like I certainly don't go to parties saying woohoo I work with Google and so we have YouTube because people get like the YouTube creator space is challenging sometimes but the fact is the biggest complaint about the YouTube creator space is when you get removed from it and that's kind of a good sign of a marketplace that kind of works and so you know you have this possibility of getting famous you have the possibility of making some money I would love for us and it doesn't have to be Google just someone we need to develop some sort of marketplace where you can put your art out there and explicitly say I want AI generated outcomes from my art but I want to be paid for it and it's voluntary I can be in it or I can be out of it um I think it's possible by the way I think it really drives research in attribution and in ways in which we can mix in the stock style of artists at inference time quickly and efficiently um there's a lot of work to be done so I think that that's how we'd put the genie back in the bottle it's not going to be just by removing everybody's data it's going to be by building marketplaces that allow you to have very explicit control over how and when you want your data to be used and I think frankly there's a lot of money to be made there for someone I think those marketplaces will evolve um and once they evolve I think that's going to give us a path forward we'll still have the equivalent of BitTorrent there's no reason you have to obey robots.txt
but what happens is if the marketplace is big enough and there's enough money moving through it that artists can survive then what you do is you just kind of put more and more energy into making that marketplace better and better and that's largely what happened with streaming music services they're not perfect but they're better than BitTorrent right at least they're monetizing it's just you got to figure out the monetization scheme I know that was a long answer but I feel strongly about that so I hope that was helpful awesome okay let's do two more questions thank you I was sorry I was interested um when you were talking about how it's hard to compare models um because like the number of parameters will impact the end quality so I was wondering like how do you compare those models so we're working on internally um we do this externally as well but always having a model card so the first thing to do is just to pay attention to how many parameters are there we find especially if we work even internally with product groups you know we have internal competition at Google multiple groups are training text to image models of course they are and then it's like the product groups won't notice that this one was trained on 10 times as much data as this one and so we're moving towards naming models explicitly with the number of parameters and the training data they were trained on that's thing one it's just sort of explicitly saying that matters the data matters if you're going to compare apples to apples you have to have roughly the same number of parameters and you can measure that in compute as opposed to parameters because what's a parameter you know roughly the same amount of compute is used to do training and inference on these models and then what's the data set that you used then you're getting closer and I think a lot of it's just like good bookkeeping um like beyond that it's all human evals and if you actually start using these models you'll have product goals you'll have some reason you want to use this model and that also drives um how and when you make a decision of what to use awesome okay last question hi um how do you view this coming avalanche that you talked about impacting software developers and how specifically can current software developers best prepare that's question one the question two is are Google and other big tech companies accounting for this in their current hiring practices understanding that they might need fewer developers to accomplish the same productive output yeah so how is software development going to be impacted um I'm pretty impressed with the code generation that we see right now right um I certainly see that um whether it's ChatGPT or Bard we're seeing people go there rather than Stack Overflow and other places like that um you know I had a task that I sorry I'm an old guy I didn't use any chatbot to solve my last coding problem I just sort of hacked away at it but um you know I think it will yeah I think it will make coding less labor intensive and I think it will long-term change what it means to be a coder so that you'll have less reward for being able to just hack out tens of thousands of lines of code in a year um if I were to pick an industry where it would really sweep through first with my limited experience it is the video game industry um I did a year of consulting I was actually there full-time for
a while at Ubisoft in Montreal it was like a summer and then turned into part of my sabbatical and like at least at that point in time so this has been about a decade maybe some of you are in the video game industry it's the same like it is incredibly labor intensive and incredibly templated and repetitive C++ code right it's just like and it burns them out like those people with their wrist braces and they're going I'm serious like it's really like the code mines the video game industry you know in that sense right I mean I think there the avalanche you know you'll lose a bunch of labor that needs to get done um my hope is and my belief is that we won't lose what we love about coding right we're just solving hard problems and what we'll do is we'll do whatever you know you have a Xerox machine that solves one problem for you so the people that used to do the typing in the typing pool do something else you know we used to have typing pools right um so yeah I do think that there has to be a constriction in the demand for kind of labor-based code you know it would be irresponsible to say otherwise but I don't think the other way to think about it is like we also may have just created a whole new sector of the economy around AI and talk about a non-zero sum game if what we do is we close down some options to do especially very labor intensive coding like the kind of video game market that existed for Assassin's Creed or for Doom but we open up this whole other world where we're chaining together ideas and models and frameworks and we're solving really complicated problems for people and creating business value then the people that can do that call them programmers call them whatever you want they're going to be the kind of people that are in this room that understand the underlying technology and um you know I think those jobs will spring up might even be more oh and the second one is because of this new technology I you know I don't know um you know Google's had some limited layoffs um over the last 12 months um first it would be inappropriate for me to comment um but not just because like oh weasel weasel weasel and I don't want to talk about Google but like I don't really know the answer I'm not a C-suite business person companies of the size of Google or Microsoft we're constantly contracting and expanding what I see is that the AI market is booming and you know this market around like the kind of problems that you people that are taking this course are trying to solve is just booming like there's never been growth like I've seen so you have to offset what might be like adjusting and reprioritizing um with a genuine concern about y'all are doing AI you're going to get jobs okay you're going to be fine right right that's really my answer um listen there's another Doug from Indiana University who should have some time to talk what are your music examples huh coming soon coming soon thank you all very much for your attention and thanks for the questions [Applause]
MIT_6S191_Introduction_to_Deep_Learning
MIT_6S191_Evidential_Deep_Learning_and_Uncertainty.txt
hi everyone and welcome back to today's lecture in the first six lectures of 6s191 we got a taste of some of the foundational deep learning algorithms and models in the next two lectures we're going to actually be diving into a lot more detail focusing specifically on two critically important hot topic areas in modern deep learning research that have really impacted everything that we have been learning so far in this class now these two hot topic lectures are going to focus first on probabilistic modeling with deep neural networks for uncertainty estimation in this lecture as well as algorithmic bias and fairness in the next lecture now today we'll learn about a very powerful new technique called evidential deep learning for learning when we can trust the output of the neural networks that we're training and interpret when they're not confident or when they are uncertain in their predictions now this whole field of uncertainty estimation in deep learning is super important today more than ever as we're seeing these deep learning models like we've been learning about so far in this class start to move outside of the lab and into reality interacting with or at the very least impacting the lives of humans around them traditional deep learning models tend to propagate biases from training and are often susceptible to failures on brand new out-of-distribution data instead we need models that can reliably and quickly estimate uncertainty in the data that they are seeing as well as the outputs that they're predicting in this lecture we'll actually start to learn about evidential deep learning for uncertainty estimation of neural networks so training our model not only just to make a prediction and predict an answer but also to understand how much evidence it has in that prediction how much we should trust its answer a big reason and actually cause for uncertainty estimation for uncertainty in deep learning is due to the very large gap in how neural networks are trained in practice and how they're evaluated in deployment in general when we're training machine learning models we make the following assumption that our training set is drawn from the same distribution as our test set but in reality this is rarely true for example when we look at many of the state-of-the-art state-of-the-art algorithms that we've learned about so far they are almost all trained on extremely clean pre-processed data sets often with minimal occlusions minimal noise or ambiguity in the real world though we are faced with so many edge cases of our model that our model is going to be completely unequipped to handle for example if we're training this classifier to identify dogs and we train it on these clean images of dogs on the left hand side it will yield very poor performance when we take it into the real world and we start showing it dogs in brand new positions dogs in upside down configurations or even this parachuting dog coming through the sky or if we take this driving model trained on clean urban streets and then we take it out into the real world and starts to see all kinds of strange unexpected edge cases in reality now the point of today's lecture is to build models that are not only super accurate and have high performance but also to build quantitative estimation techniques into our learning pipeline such that our model will be able to tell us when it doesn't know the right answer now there's this famous quote here by george box that i've adapted a bit to convey this now in the end all models are wrong but 
some that know when they can be trusted are actually going to be useful in reality now very rarely do we actually need a model to achieve a perfect 100% accuracy but even if our accuracy is slightly lower if we can understand when we can trust the output of that model then we have something extremely powerful now the problem of knowing when we don't know something turns out to be extremely difficult though and this is true even for humans there are so many tasks that we believe we can achieve even though we don't really know the right answer and are more likely to fail than to succeed now this picture probably doesn't even need an explanation for anyone who has driven in a new city before we had things like Google Maps in our phones there's often this tendency to refuse to accept that you might be lost and ask for help even though we may truly be lost in a location that we've never been in before so today we're going to learn about how we can teach neural networks to predict probability distributions instead of purely deterministic point outputs and how formulating our neural networks like this can allow us to model one type of uncertainty but not all types and specifically we'll then discuss how it does not capture one very important form of uncertainty which is the uncertainty in the prediction itself finally we'll see how we can learn neural representations of uncertainty using evidential deep learning which will allow us to actually capture this other type of uncertainty and estimate it quickly while also scaling to some very high dimensional learning problems and high dimensional output problems let's start and discuss what i mean when i say that we want to learn uncertainties using neural networks what this term of probabilistic learning means and how it ties into everything that we've been seeing so far in this class now in the case of supervised learning problems there have been two main sources of data input to our model the first is the data itself we've been calling this x in the past and it's what we actually feed into the model given our data we want to predict some target here denoted as y this could be a discrete class that x belongs to it could also be any real number that we want to forecast given our data either way we're going to be given a data set of both x and y pairs and have been focused so far in this class at least in the case of supervised learning on learning a mapping to go from x to y and predict this expected value of y given our inputs x this is exactly how we've been training deterministic supervised neural networks in the past classes now the problem here is that if we only model the expectation of our target the expectation of y then we only have a point estimate of our prediction but we lack any understanding of how spread out or uncertain this prediction is and this is exactly what we mean when we talk about estimating the uncertainty of our model instead of predicting an answer on average the expectation of y we want to also estimate the variance of our predicted target y this gives us a deeper and more probabilistic sense of the output and you might be thinking when i'm saying this that this sounds very similar to what we have already seen in this class and well you would be completely right because in the very first lecture of this course we already had gotten a big sense of training neural networks to output
full distributions for the case of classification specifically so we saw an example where we could feed in an image to a neural network and that image needed to be classified into either being a cat or being a dog for example now each output here is a probability that it belongs to that category and both of these probabilities sum to one or must sum to one since it's a probability distribution our output is a probability distribution now this is definitely one example of how neural networks can be trained to output a probability distribution in this case a distribution over discrete class categories but let's actually dive into this a bit further and really dissect it and see how we're able to accomplish this well first we had to use this special activation function if you recall this was called the softmax activation function we had to use this activation function to satisfy two constraints on our output first was that each of the probability outputs had to be greater than zero and second that we needed to make sure that the sum of all of our class probabilities was normalized to one now given an output of class probabilities emerging from this softmax activation function we could then define this special loss that allowed us to optimize our distribution learning we could do this by minimizing what we called the negative log likelihood of our predicted distribution to match the ground truth category distribution now this is also called the cross entropy loss which is something that all of you should be very familiar with now as you've implemented it and used it in all three of your software labs let's make some of this even more formal on why we made some of these choices with our activation function and our loss function well it really all boils down to this assumption that we made before we even started learning and that was that we assumed that our target class labels y are drawn from some likelihood function in this case a categorical distribution defined by distributional parameters p the distributional parameters here actually define our distribution and our likelihood over that predicted label specifically the probability of our answer being in the i class is exactly equal to the ith probability parameter now similarly we also saw how we could do this for the case of continuous class targets as well in this case where we aren't learning a class probability but instead a probability distribution over the entire real number line like the classification domain this can be applied to any supervised learning problem we saw when we saw it in the previous lectures we were focusing in the case of reinforcement learning where we want to predict the steering wheel angle that a vehicle should take given a raw image pixel image of the scene now since the support of this output is continuous and infinite we cannot just output raw probabilities like we did in the case of classification because this would require an infinite number of outputs coming from our network instead though we can output the parameters of our distribution namely the mean and the standard deviation or the variance of that distribution and this defines our probability density function our mean is unbounded so we don't need to constrain it at all on the other hand though our standard deviation sigma must be strictly positive so for that we can use an exponential activation function to enforce that constraint and again in the classification domain we are similar to the classification domain we can optimize these networks by 
and again how did we get to this point well we made this assumption about our labels we assumed that our labels were drawn from a normal distribution or a gaussian distribution with parameters mu and sigma squared our mean and our variance which we wanted to train our model to predict to output now i think this is actually really amazing because we don't have any ground truth variables in our data set for ground truth means and ground truth variances all we have are ground truth y's our labels but we use this formulation and this loss function to learn not a point estimate of our label but a distribution a full gaussian distribution describing that data likelihood now we can summarize the details of these likelihood estimation problems using neural networks for both the discrete classification domain as well as the continuous regression domain now fundamentally the two of these domains differ in terms of the type of target that they're fitting to in the classification domain the targets can be one of a fixed set of classes one to k in the regression domain our target can be any real number now before getting started we assumed that our labels were drawn from some underlying likelihood function in the case of classification again they were being drawn from a categorical distribution whereas in the case of regression we assumed that they were being drawn from a normal distribution now each of these likelihood functions is defined by a set of distributional parameters in the case of categorical we have these probabilities that define our categorical distribution and in the case of a normal or gaussian distribution for regression we had a mean and a variance to ensure that these were valid probability distributions we had to apply the relevant constraints to our parameters by way of cleverly constructed activation functions and then finally for both of these like we previously saw we can optimize these entire systems using the negative log likelihood loss this allowed us to learn the parameters of a distribution over our labels while we do have access to the probability and the variance here it is critically important for us to remember something and that is that the probability or likelihood let's call it that we get by modeling the problem like we have seen so far in this lecture should never be mistaken for the confidence of our model these probabilities that we obtain are absolutely not the same as confidence what we think of as confidence at least and here's why let's go back to this example of classification where we feed in an image it can be either cats or dogs to this neural network and we have our neural network predict what is the probability that this image is of a cat or what probability that it is of a dog because if we feed in an image of let's say a cat our model should be able to identify assuming it's been trained to identify some of the key features specific to cats and say okay this is very likely a cat because i see some cat-like features in this image and likewise if i fed in a dog input then we should be more confident in predicting a high probability that this image is of a dog but what happens if we feed in this image of both a cat and a dog together in one image our model will identify some of the features corresponding to the successful detection of a cat as well as a successful detection of a dog simultaneously within the same
image and it will be relatively split on its decision now this is not to say that we are not confident about this answer we could be very confident in this answer because we did detect both cat features and dog features in this image it simply means that there's some ambiguity in our input data leading to this uncertainty that we see in our output we can be confident in our prediction even though our answer we are answering with a probability or a percentage of 50 percent cat and 50 dog because we're actually because we we're actually training on images that share these types of features but what about in cases where we are looking at images where we did not train on these types of features so for example if we take that same neural network but now we feed in this image of a boat something completely new unlike anything that we have seen during training the model still has to output two things the probability that this image is of a cat and the probability that this image is of a dog and we know that because this is a probability distribution trained with the soft max activation function we know that this is a categorical distribution both of these probabilities probability of cat plus probability of dog must sum to one no matter what thus the output likelihoods in this scenario will be severely unreliable if our input is unlike anything that we have ever seen during training and we call this being out of distribution or out of the training distribution and we can see that in this case our outputs or of these two probabilities are unreasonable here they should not be trusted now this is really to highlight to you that when we say uncertainty estimation there are different types of uncertainty that we should be concerned about while training neural networks like this captures probabilities they do not capture the uncertainty in the predictions themselves but rather the uncertainty in the data this brings into question what are the different types of uncertainty that exist and how can we learn them using different algorithmic techniques in deep learning and what this all boils down to is the different ways that we can try to estimate when the model doesn't know or when it is uncertain of its answer or prediction we saw that there are different types of these uncertainties and this is true even in our daily lives i think a good way to think about this is through this two by two matrix of knowns and unknowns and i'll give a quick example just illustrating this very briefly so imagine you are in an airport for a flight you have some known knowns for example that there will be some flights taking off from that airport that's a known known you're very confident and you're very certain that that will happen there are also things like known unknowns things that we know uh we know there are things that we simply cannot predict for example we may not know when our flight the exact time that our flight is going to take off that's something that we cannot predict maybe because it could get delayed or it's just a it's just something that we don't have total control over and can possibly change then there are unknown knowns things that others know but you don't know a good example of that would be someone else's scheduled departure time their scheduled flight time you know that someone else knows their scheduled departure time but to you that's an unknown known and finally there are unknown unknowns these are completely unexpected or unforeseeable events a good example of this would be a meteor crashing 
into the runway now this is an emerging and exciting field of research in fundamental machine learning understanding how we can build algorithms to robustly and efficiently model and quantify uncertainties of these deep learning models and this is really tough actually because these models have millions and millions billions and now even trillions of parameters uh and understanding and introspecting them inspecting their insides uh to estimate understanding when they are going to not know the correct answer is definitely not a straightforward problem now people do not typically train neural networks to account for these types of uncertainties so when you train on some data for example here you can see the observations in black we can train a neural network to make some predictions in blue and the predictions align with our observations inside this region where we had training data but outside of this region we can see that our predictions start to fail a lot and estimating the uncertainty in situations like this comes in two forms that we're going to be talking about today the first form is epistemic uncertainty which models the uncertainty in the underlying predictive process this is when the model just does not know the correct answer and it's not confident in its answer the other form of uncertainty is aliatoric uncertainty this is uncertainty in the data itself think of this as statistical or sensory noise this is known as this is known as irreducible uncertainty so since no matter how much data that you collect you will have some underlying noise in the collection process it is inherent in the data itself the only way to reduce alliatoric uncertainty is to change your sensor and get more accurate data now we care a lot about both of these forms of uncertainty both alliatoric uncertainty and epistemic uncertainty and again just to recap the differences between these two forms of uncertainty we have aliatoric on the left-hand side this focuses on the statistical uncertainty of our data it describes how confident we are in our data itself it's highest when our data is noisy and it cannot be reduced by adding more data on the other hand we have epistemic uncertainty this is much more challenging to estimate than aliatoric uncertainty and there are some emerging approaches that seek to determine epistemic uncertainty now the key is that epistemic uncertainty reflects the model's confidence in the prediction we can use these estimates to begin to understand when the model cannot provide a reliable answer and when it's missing some training data to provide that answer unlike aliatoric uncertainty epistemic uncertainty can be reduced by adding more data and improving that confidence of the model so while alliatoric uncertainty can be learned directly using neural networks using likelihood estimation techniques like we learned about earlier in this lecture today epistemic uncertainty is very challenging to estimate and this is because epistemic uncertainty reflects the uncertainty inherent to the model's predictive process itself with a standard neural network a deterministic neural network we can't obtain a sense of this uncertainty because the network is deterministic with a given set of weights passing in one input to the model multiple times will yield the same output over and over again it's going to always have the same fixed output no matter how many times we feed in that same input but what we can do instead of having a deterministic neural network where each weight is a deterministic 
number a single number for each weight we can represent every single weight by a probability distribution so in this case we're going to actually model each of these weights by a distribution such that when we pass in an input to our neural network we sample a point from this distribution for every single weight in the neural network this means that every time we feed in an input to the model we're going to get out a slightly different output depending on the samples of our weights that we realized that time that we fed it through the model these models are called bayesian neural networks and they model distributions likelihood functions over the network weights themselves so instead of modeling a single number for every weight bayesian neural networks try to capture a full distribution over every single weight and then use this distribution to actually learn the uncertainty of our model the epistemic uncertainty of our model now we can formulate this epistemic uncertainty and this learning of bayesian neural networks as follows while deterministic neural networks learn this fixed set of weights w bayesian neural networks learn a posterior distribution over the weights this is a probability of our weights given our input data and our labels x and y now these are called bayesian neural networks because they formulate this posterior probability of our weights given our data using bayes rule they actually write it out like this using bayes rule however in practice this posterior is intractable to compute analytically it's not possible for any real or non-toy examples which means that we have to resort to what are called sampling techniques to approximate and try to estimate this posterior so we can approximate this posterior through sampling where the idea is to make multiple stochastic evaluations through our model each using a different sample of our weights now this can be done many different ways one way to do this approximation is using a technique that we've learned about in class already called dropout typically dropout is used during training but now we're talking about using dropout during testing time to obtain these multiple samples through our network so the way this works is that nodes or neurons are dropped out or not dropped out based on the value of a bernoulli random variable with some probability p that we define as part of our dropout procedure each time we feed our same input through the model depending on which nodes are dropped in and out we're going to get a slightly different output and each one of those outputs is going to represent a different sample through our network now alternatively we can sample a different way using an ensemble of independently trained models each of these will learn a unique set of weights after seeing a unique set of training data or potentially the same training data just shown in a different order or sequence in both of these cases though what we're doing here is very similar we're drawing a set of t samples of our weights and using these to compute t forward passes using either dropout or t models in an ensemble this allows us to formulate two terms one is the expectation of our prediction y as well as the variance of our prediction over the course of these forward passes
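as a rough sketch of the sampling procedure just described here is what monte carlo dropout can look like in tensorflow where dropout is kept active at test time and the spread of the t stochastic predictions is used as the epistemic estimate the architecture dropout rate and number of samples are illustrative assumptions

```python
import tensorflow as tf

# minimal sketch of monte carlo dropout for epistemic uncertainty -- the
# architecture and sample count here are illustrative assumptions, not the
# exact model from the lecture
model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.2),   # bernoulli dropout with probability p = 0.2
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(1),       # e.g. a predicted steering angle
])

def mc_dropout_predict(model, x, t=20):
    # keep dropout active at test time (training=True) so each of the t
    # forward passes uses a different random subset of weights
    samples = tf.stack([model(x, training=True) for _ in range(t)], axis=0)
    mean = tf.reduce_mean(samples, axis=0)                 # expectation of the prediction
    epistemic = tf.math.reduce_variance(samples, axis=0)   # spread across the samples
    return mean, epistemic
```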
so if our variance over these predictions is very large if we take t stochastic forward passes through the model and none of our outputs really agree with each other this is a great indicator that our model has a high epistemic uncertainty and in fact it's not only an indicator it is the epistemic uncertainty the variance of our prediction over these t stochastic forward passes while these sampling based approaches are very commonly used techniques for epistemic uncertainty estimation they do have a few very notable downsides and limitations first as you may have realized from the fact that they draw multiple samples to approximate this weight distribution this means that they require running the model multiple times t times just to obtain their predictions for ensembles it's even worse than this because you have to initialize and train multiple independent models which is extremely computationally costly then you have to store them in memory and repeatedly run them as well relatedly this imposes a memory constraint due to having to actually keep all of these models in parallel on your computer and together this means that these sampling approaches are not very efficient which is a significant limitation for applications where uncertainty estimation needs to be made in real time on edge devices for example on robotics or other mobile devices and the last point i'd like to make is that bayesian approximation methods such as dropout tend to produce typically overconfident uncertainty estimates which may be problematic in safety critical domains where we really need calibrated uncertainty estimates and would really prefer to consider ourselves uncertain rather than confident we don't want to assume that we are confident that we know what we're doing when we don't really know what we're doing but what are sampling approaches really trying to do when they try to approximate this uncertainty let's look at this in a bit more detail and to answer this question let's suppose we're working within the context again of self-driving cars since this is a very nice and intuitive example we have a model that's looking to predict the steering wheel angle of the car given this raw image that it sees on the left it's predicting two things a mean and a variance the mean here is the angle that the wheel should be and the variance is the aleatoric uncertainty the data uncertainty if you remember now consider the epistemic uncertainty this is the model uncertainty that we said was really difficult to capture as we've seen with sampling based approaches we can compute the epistemic uncertainty using an ensemble of many independently trained instances of this model so we could take one model and obtain its estimates of mu and sigma squared and we can plot for this given image where that model believes mu and sigma squared should be on this two-dimensional graph on the right on the x-axis is mu and on the y-axis is sigma squared and we can place exactly where that network believes the output should be in this two-dimensional space and we can repeat this for several different models each model we take its output and we can plot it in this space and over time if we do this for a bunch of models we can start to estimate the uncertainty by taking the variance across these predictions to get a metric of uncertainty intuitively if we get a huge variance over our mus said differently if the answers are very spread out this means that our model is not confident on the other
hand if our answers are very close to each other this is a great indicator that we are very confident because even across all these different models that we independently trained each one is getting a very similar answer that's a good indicator that we are confident in that answer indeed this is the epistemic or the model uncertainty in fact these estimates these these estimates that are coming from individually trained models are actually being drawn from some underlying distribution now we're starting to see this distribution shape up and the more samples that we take from that distribution the more it will start to appear like this background distribution that we're seeing capturing this distribution though instead of sampling if we could just capture this distribution directly this could allow us to better and more fully understand the model uncertainty so what if instead of drawing samples from this distribution approximating it we just tried to learn the parameters of this distribution directly now this approach this is the approach that has been taken by an emerging series of uncertainty quantification methods called evidential deep learning which consider learning as an evidence acquisition process evidential deep learning tries to enable direct estimation of both alliatoric uncertainty as well as epistemic uncertainties by trying to learn what what we call these higher order distributions over the individual likelihood parameters in this case over the parameters of mu and sigma we try to learn a distribution on top of them so to understand this consider how different types of degrees of uncertainties may manifest in these evidential distributions when we have very low uncertainty like we can see on the left hand side the spread along our mean or mu and our sigma squares is going to be very small we're going to have very concentrated density right in a specific point this indicates that we have low uncertainty or high confidence when we have something like high aleatoric uncertainty on the other hand we might see high values of sigma squared which we can actually represent by an increase along the y axis the sigma squared axis of this plot here represented in the middle finally if we have high epistemic uncertainty we'll have very high variability in the actual values of mu that are being returned so you can see this along spread along the mu axis now evidential distributions allow us to capture each of these modes and the goal is to train a neural network now to learn these types of evidential distributions so let's take a look at how we can do evidential learning specifically in the case of regression first now we call these distributions over the likelihood parameter evidential distributions like i've been referring to them now the key thing to keep in mind when trying to wrap your head around these evidential distributions is that they represent a distribution over distributions if you sample from an evidential distribution you will get back a brand new distribution over your data how we can formulate this is is uh using or how can we actually formulate these evidential distributions well first let's consider like i said the case of continuous learning problems like regression we assume like we saw earlier in the class that our target labels y are going to be drawn from some normal distribution with parameters distribution parameters mu and sigma squared this is exactly like we saw earlier in the class no different the key here is that instead of before when we assumed that mu 
and sigma were known things that our network could predict now let's say that we don't know mu and sigma and we want to probabilistically estimate those as well we can formalize this by actually placing priors over each of these parameters each of these distribution parameters so we assume that our distribution parameters here mu and sigma squared are not known and let's place priors over each of these and try to probabilistically estimate them so we can draw mu from a normal parameterized as follows and we can draw sigma squared our variance from an inverse gamma parameterized as follows here using these new hyperparameters of this evidential distribution now what this means is that mu and sigma squared are now being drawn from this normal inverse gamma which is the joint of these two priors the normal inverse gamma distribution is going to be parameterized by a different set of parameters gamma nu alpha and beta now this is our evidential distribution or what we call our evidential prior it's a distribution this normal inverse gamma defined by our model parameters such that when we sample from it when we sample from our evidential distribution we're actually getting back individual realizations of mu and sigma squared these are their own individual gaussians defining this original distribution on the top line over our data itself y so we call these evidential distributions since they have greater density in areas where there is more evidence in support of a given likelihood distribution realization so on the left hand side or in the middle here you can see an example of this evidential distribution one type of this evidential distribution with this normal inverse gamma prior over mu and sigma squared both parameters of the gaussian distribution which is placed over our data our likelihood function but you can see on the top left hand corner now different realizations of mu and sigma over this space of this evidential distribution then correspond to distinct realizations of our likelihood function which describes the distribution of our target values y so if we sample from any point in this evidential distribution we're going to receive back a mu and a sigma squared that defines its own gaussian that we can see on the right hand side we can also consider the analog of evidential learning in the case of classification once again keep in mind that if you sample from an evidential distribution you get a brand new distribution over the data so for classification our target labels y are over a discrete set of classes of k classes to be specific we assumed that our class labels were drawn from a likelihood function of the categorical form parameterized by some probabilities now in this case we can probabilistically estimate those distribution parameters p using what is called a dirichlet prior the dirichlet prior is itself parameterized by a set of concentration parameters here called alpha again per class so there are k alpha parameters in this dirichlet distribution and when we sample from this dirichlet we're going to receive realizations of our distributional parameters the probabilities defining our categorical likelihood function again it's this hierarchy of distributions let's take a look at a simple example of what this evidential distribution in the case of classification looks like here we have three possible classes in this case the probability mass of our dirichlet distribution will live entirely on this triangular simplex
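a quick way to see this hierarchy of distributions concretely is to sample from a dirichlet yourself each individual sample is itself a complete categorical distribution over the classes the concentration values below are arbitrary and just for illustration

```python
import numpy as np

# tiny illustration of the "distribution over distributions" idea: each draw
# from a dirichlet is itself a full categorical distribution over 3 classes
# (the alpha values here are arbitrary, just for illustration)
alpha = np.array([2.0, 2.0, 2.0])
samples = np.random.dirichlet(alpha, size=5)
print(samples)              # each row lives on the simplex ...
print(samples.sum(axis=1))  # ... so each row sums to one
```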
sampling from any point within this triangle corresponds to sampling a brand new categorical probability distribution for example let's say we sample from the center of this simplex this will correspond to equal probabilities over the three classes so for those who have not seen a simplex plot before imagine the corners of the triangle represent perfect prediction of one of the classes so sampling from the middle of the simplex corresponds to equal probabilities of each of the three classes while being at one of the corners like i said corresponds to all of the mass being on one of those classes and zero mass being on the other two and anywhere else in this simplex corresponds to a sampled categorical distribution that is defined by these class probabilities that have to sum to one the color inside of this triangle this gradient of blues that you can see provides one example of how this mass can be distributed throughout the simplex where the categorical distributions can be sampled more frequently so this is one example representing that the majority of the mass is placed in the center of the simplex but the whole power of evidential learning is that we're going to try to learn this distribution so our network is going to try to predict what this distribution is for any given input so this distribution can change and the way we're going to sample our categorical likelihood functions will as a result also change so to summarize here is a breakdown of evidential distributions for both regression and classification in the regression case the targets take continuous values from the real numbers we assumed here that the targets are drawn from a normal distribution parameterized by mu and sigma and then we had in turn a higher order evidential distribution over these likelihood parameters according to this normal inverse gamma distribution in the case of classification the targets were shown here over a set of k independent classes here we assumed that our likelihood of observing a particular class label y was drawn from a categorical distribution of class probabilities p and this p was drawn from a higher order evidential dirichlet distribution parameterized by alphas now here's also a quick interesting side note you may be asking yourself why we chose this specific evidential distribution in each of these cases why did we pick the dirichlet distribution why did we pick the normal inverse gamma distribution there are many distributions out there that we could have picked over our likelihoods but we chose these very special forms because these are known as what are called conjugate priors picking them to be of this form makes analytically computing our loss tractable specifically if our prior or evidential distribution p of theta is of the same family as our likelihood p of y given theta then we can analytically compute this integral highlighted in yellow as part of our loss during training which makes this whole process feasible so with this formulation of these evidential distributions now let's consider concretely how we can build and train models to learn these evidential distributions and use them to estimate uncertainty what is key here is that the network is trained to actually output the parameters of these higher order evidential distributions
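as a hedged sketch of what such a network head can look like here is one common construction from the evidential classification literature where the network outputs a non-negative evidence value per class and the dirichlet concentrations are the evidence plus one the sizes names and the specific softplus constraint are assumptions for illustration and not necessarily the exact setup used in the lecture

```python
import tensorflow as tf

# minimal sketch of an evidential classification head -- following a common
# formulation in the evidential deep learning literature where the network
# outputs non-negative "evidence" per class; sizes and names are illustrative
num_classes = 10
model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(num_classes, activation="softplus"),  # evidence >= 0
])

def dirichlet_outputs(x):
    evidence = model(x)
    alpha = evidence + 1.0                         # dirichlet concentration parameters
    strength = tf.reduce_sum(alpha, axis=-1, keepdims=True)
    expected_probs = alpha / strength              # mean of the dirichlet
    uncertainty = num_classes / strength           # mass left over for "i don't know"
    return alpha, expected_probs, uncertainty
```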
so in the case of regression we're predicting gamma nu alpha and beta and in the case of classification we're predicting a vector of k alphas where k is the number of classes that we have and once we have these parameters we can directly formulate estimates for each of the aleatoric and epistemic uncertainties and that is determined by the resulting distributions over these likelihood parameters we optimize these distributions by incorporating both maximization of our model fit and minimization of wrong evidence into the objective for regression this is manifested as follows where our maximization captures the likelihood of seeing our data given the likelihood parameters which are controlled by the evidential prior parameters here denoted as m the minimization of incorrect evidence is captured in this regularization term on the right hand side by minimizing it we seek to lower the incorrect evidence in instances where the model is making errors so think of the left hand side as really fitting our evidential distributions to our data and the right hand side as inflating the uncertainty when we see that we get some incorrect evidence and start to make some errors in training we can evaluate this method on some simple toy learning problems where we're given some data points and some regions in our scene here where we seek to predict the target value or in some cases the class label in regression we have this case where we try to fit to our data set where we have data in the middle white region but we don't have data on the two edge regions and we can see that our evidential distributions are able to inflate the uncertainty in the regions where we are out of distribution which is exactly what we want to see that we're able to recognize that those predictions where we don't have data should not be trusted similarly in the case of classification operating on the mnist data set we can generate out of distribution examples by synthetically rotating the handwritten digits so here along the bottom you can actually see an example of one digit being rotated from left to right and you can see the uncertainty of our evidential distribution increasing in this out of distribution regime where the one no longer even closely resembles a one but the uncertainty really drops down on the two end points where the one comes back into the shape of a true one evidential learning can also be applied in much more complex high dimensional learning applications as well recently it's been demonstrated that we can train neural networks to output thousands of evidential distributions simultaneously while learning to quantify pixel-wise uncertainty of monocular depth estimators given only raw rgb inputs this is a regression problem since the predicted depth of each pixel is a real number it's not a class but in the case of classification evidential deep learning has also been applied to perform uncertainty aware semantic segmentation of raw lidar point clouds also extremely high dimensional where for every point in the point cloud we must predict which object or what type of class it belongs to evidential deep learning allows us to not only classify each of these points as an object but also recognize which of the objects in the scene are ones that the model doesn't know the answer to so evidential deep learning really gives us the ability to express a form of i don't know when the model sees something in its input that it doesn't know how to predict confidently and it's able to let the user know when its prediction should not be trusted
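pulling the regression pieces together before the wrap up here is a sketch of an evidential regression head and its loss following the deep evidential regression formulation the network outputs gamma nu alpha and beta the loss combines the normal inverse gamma negative log likelihood with a regularizer that penalizes evidence on errors and the aleatoric and epistemic uncertainties come directly from the predicted parameters the constraints the regularization weight and the layer sizes are illustrative assumptions

```python
import tensorflow as tf
import numpy as np

# minimal sketch of a deep evidential regression head and loss, following the
# formulation described above -- the exact constraints (softplus, +1 on alpha)
# and the regularization weight lam are illustrative assumptions
class EvidentialHead(tf.keras.Model):
    def __init__(self, hidden=64):
        super().__init__()
        self.trunk = tf.keras.layers.Dense(hidden, activation="relu")
        self.out = tf.keras.layers.Dense(4)  # raw (gamma, nu, alpha, beta)

    def call(self, x):
        g, n, a, b = tf.split(self.out(self.trunk(x)), 4, axis=-1)
        gamma = g                                   # predicted mean, unbounded
        nu = tf.nn.softplus(n)                      # nu > 0
        alpha = tf.nn.softplus(a) + 1.0             # alpha > 1
        beta = tf.nn.softplus(b)                    # beta > 0
        return gamma, nu, alpha, beta

def evidential_loss(y, gamma, nu, alpha, beta, lam=0.01):
    # negative log likelihood of y under the normal-inverse-gamma evidential prior
    omega = 2.0 * beta * (1.0 + nu)
    nll = (0.5 * tf.math.log(np.pi / nu)
           - alpha * tf.math.log(omega)
           + (alpha + 0.5) * tf.math.log(nu * (y - gamma) ** 2 + omega)
           + tf.math.lgamma(alpha) - tf.math.lgamma(alpha + 0.5))
    # penalize evidence (2*nu + alpha) on examples where the prediction is wrong
    reg = tf.abs(y - gamma) * (2.0 * nu + alpha)
    return tf.reduce_mean(nll + lam * reg)

def uncertainties(nu, alpha, beta):
    aleatoric = beta / (alpha - 1.0)          # expected data noise
    epistemic = beta / (nu * (alpha - 1.0))   # model uncertainty
    return aleatoric, epistemic
```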
now to start wrapping up i wanted to provide a brief comparison of all of the different types of approaches to uncertainty estimation that we've learned about today and how evidential neural networks fit into these there were three techniques that we touched on today starting first with vanilla likelihood estimation over our data then moving to bayesian neural networks and finally exploring evidential neural networks each of these methods really has its own differences strengths and advantages at the highest level of difference we saw that fundamentally each of these methods placed probabilistic priors over different aspects of the pipeline over the data in the case of the likelihood estimators that we saw very early on in the lecture over the weights in the case of bayesian neural networks and over the likelihood function itself in the case of evidential neural networks unlike bayesian neural networks though evidential neural networks are very fast and very memory efficient since they don't require any sampling to estimate their uncertainty and even though both methods capture a form of epistemic uncertainty this is one huge advantage it means that you don't need to train an ensemble of models you can train just one model and you only need to run it once for every single input there's no sampling required so in summary in this lecture we got to dive deep into uncertainty estimation using neural networks this is a super important problem in modern machine learning as we really start to deploy our models into the real world we need to be able to quickly understand when we should trust them and more importantly when we should not we learned about some of the different forms of uncertainty and how these different methods can help us capture both uncertainty in the data as well as uncertainty in the model and finally we got some insight into how we can use evidential deep learning to learn fast and scalable calibrated representations of uncertainty using neural networks thank you for attending this lecture in the next lecture we're going to be going through another very impactful topic in today's world focusing on ai bias and fairness and seeing some strategies for mitigating adverse effects of these models so we look forward to seeing you for that lecture as well thank you
MIT_6S191_Introduction_to_Deep_Learning
MIT_6S191_2022_Deep_Generative_Modeling.txt
okay so welcome back hopefully you had a little bit of a break as we got set up so in this next lecture on deep generative modeling we're going to be talking about a very powerful concept building systems that not only look for patterns in existing data but can actually go a step beyond this to actually generate brand new data instances based on those learn patterns and this is an idea that's different from what we've been exploring so far in the first three lectures and this area of generative modeling is a particular field within deep learning that's enjoying a lot of success a lot of attention and interest right now and i'm eager to see how it continues to develop in the coming years okay let's get into it so to start take a look at these three images these three faces and i want you all to think about which face of these three you think is real unfortunately i can't see the chat right now as i'm lecturing but please mentally or submit your answers i think they may be coming in okay mentally think about it the punch line which i'll give to you right now for the sake of time is in fact that all three of these faces are in fact fake they were completely generated as you may or may not have guessed by a by a generative model trained on data sets of human faces so this shows the power and maybe inspire some caution about the impact that generative modeling could have in our world all right so to get into the technical bit so far in this course we've been primarily looking at what we call problems of supervised learning meaning that we're given data and we're given some labels for example a class label or a numerical value and we want to learn some function that maps the data to those labels and this is a course on deep learning so we've been largely concerned about building neural network models that can learn this functional mapping but at its core that function that is is performing this mapping could really be anything today we're going to step beyond this from the class of supervised learning problems to now consider problems in the domain of unsupervised learning and it's a brand new class of problems and here in this setting we're simply given data data x right and we're not necessarily given labels associated with each of those individual data instances and our goal now is to build a machine learning model that can take that raw data and learn what is the underlying structure the hidden and underlying structure that defines the distribution of this data so you may have seen some examples of of unsupervised learning in the setting of traditional machine learning for example clustering algorithms or principal component analysis for example these are all unsupervised methods but today we're going to get into using deep generative models as an example of unsupervised learning where our goal is to take some data examples data samples from our training set and those samples are going to be drawn from some general data distribution right our task is to learn a model that is capturing some representation of that distribution and we can do this in two main ways the first is what is called density estimation where we're given our samples our data samples right they're going to fall according to some probability distribution and our task is to learn an approximation of what the function of that probability distribution could be the second class of problems is in sample generation where now given some input samples right from again some data distribution we're going to try to learn a model of 
that data distribution and then use that process to actually now generate new instances new samples that hopefully fall in line with what the true data distribution is and in both of these cases our task overall is actually fundamentally the same we're trying to learn a probability distribution using our model and we're trying to match that probability distribution similar to the true distribution of the data and what makes this task difficult and interesting and complex is that often we're working with data types like images where the distribution is very high dimensional right it's not a simple you know normal distribution that we can predict with a known function and that's why using neural networks for this task is so powerful because we can learn these extraordinarily complex functional mappings and estimates of these high dimensional data distributions okay so why why care about generative models what could some applications be well first of all there because they're modeling this probability distribution they're actually capable of learning and uncovering what could be the underlying features in a data set in a completely unsupervised manner and where this could be useful is in applications where maybe we want to understand more about what data distributions look like in a setting where our model is being applied for some downstream task so for example in facial detection we could be given a data set of many many different faces and starting off we may not know the exact distribution of these faces with respect to features like skin tone or hair or illumination or occlusions so on and so forth and our training data that we use to build a model may actually be very homogeneous very uniform with respect to these features and we could want to be able to determine and uncover whether or not this actually is the case before we deploy a facial detection model in the real world and you'll see in today's lab and in this lecture how we can use generative models to not only uncover what the distribution of these underlying features may be but actually use this information to build more fair and representative data sets that can be used to train machine learning models that are unbiased and equitable with respect to these different underlying features another great example is in the case of outlier detection for example when in the context of autonomous driving and you want to detect rare events that may not be very well represented in the data but are very important for your model to be able to handle and effectively deal with when deployed and so generative models can be used to again estimate these probability distributions and identify those instances for example in the context of driving when pedestrian walks in or there's a really strange event like a deer walking onto the road or something like that and be able to effectively handle and deal with these outliers in the data today we're going to focus on two principal classes of generative models the first being inc auto encoders specifically auto encoders and variational auto encoders and the second being an architecture called generative adversarial networks or gans and both of these are what we like to call latent variable models and i just threw out this term of latent variable but i didn't actually tell you what a latent variable actually means and the example that i love to use to illustrate the concept of a latent variable comes from the story from the work of plato plato's republic and this story is known as the myth of the cave 
and in this legend there is a group of prisoners who are being held imprisoned and they're constrained as part of their punishment to face a wall and just stare at this wall observe it and the only things they can actually see are shadows of objects that are behind their heads right so these are their observations right they're not seeing the actual entities the physical objects that are casting these shadows in front of them and so to their perspective these shadows are the observed variables but in truth there are physical objects directly behind them that are casting these shadows onto the wall and so those objects here are like latent variables right they're the underlying variables that are governing some behavior but that we cannot directly observe we only see what's in front of us they're the true explanatory explanatory factors that are resulting in some behavior and our goal in generative modeling is to find ways to actually learn what these true explanatory factors these underlying latent variables can be using only observations right only given the observed data okay so we're going to start by discussing a very simple generative model that tries to do this and the idea behind this model called auto encoders is to build some encoding of the input and try to reconstruct an input directly and to take a look at the way that the auto encoder works it functions very similarly to some of the architectures that we've seen in the prior three lectures we take in as input raw data pass it through some series of deep neural network layers and now our our output is directly a low dimensional latent space a feature space which we call z and this is the this is the actual uh representation the actual variables that we're trying to predict in training this type of network so i encourage you to think about in considering this this type of architecture why we would care about trying to enforce a low dimensional set of variables z why is this important the fact is that we are able to effectively build a compression of the data by moving from the high dimensional input space to this lower dimensional latent space and we're able to get a very compact and hopefully meaningful representation of the input data okay so how can we actually do this right if our goal is to predict this vector z we don't have any sort of labels for what these variables z could actually be they're underlying they're hidden we can't directly observe them how can we train a network like this because we don't have training data what we can do is use our input data maximally to our advantage by complementing this encoding with a decoder network then now takes that latent representation that lower dimensional set of variables and goes up from it builds up from it to try to learn a reconstruction of the original input image and here the reconstructed output is what we call x-hat because it's a imperfect reconstruction of the original data and we can train this network end-to-end by looking at our reconstructed output looking at our input and simply trying to minimize the distance between them right taking the output taking the input subtracting them and squaring it and this is called a mean squared error between the the input and the reconstructed output and so in the case of images this is just the pixel by pixel difference between that reconstruction and our original input and no here right our loss function doesn't have any labels all we're doing is taking our input and taking the reconstructed output spread out spit out to us 
at the end of training by our network itself okay so we can we can simplify this plot a little bit by just abstracting away those individual neural layers and saying okay we have an encoder we have a decoder and we're trying to learn this reconstruction and this type of diagram where those layers are abstracted away is something that i'll use throughout the rest of this presentation and you'll probably also come across as you move forward with looking at these types of models further beyond this course to take a step back this idea of using this reconstruction is is a very very powerful idea in taking a step towards this idea of unsupervised learning we're effectively trying to capture these variables which could be very interesting without requiring any sort of labels to our data right and the fact is that because we're we're lowering the dimension dimensionality of our data into this compressed latent space the degree to which we perform this compression has a really big effect on how good our reconstructions actually turn out to be and as you may expect the smaller that bottleneck is the fewer latent variables we try to learn the poor quality of reconstruction we're going to get out right because effectively this is a form of compression and so this idea of the autoencoder is a powerful method a powerful first step for this idea of representation learning where we're trying to learn a com compressed representation of our input data without any sort of label from the start and in this way we're sort of building this automatic encoding of the data as well as self-encoding the the input data which is why this term of auto-encoder comes into play from this from this base bare bone auto encoder network we can now build a little bit more and introduce the concept of a variational autoencoder or vae which is more commonly used in actual generative modeling today to understand the difference right between the traditional auto encoder that i just introduced and what we'll see with the variational autoencoder let's take a closer look at the nature of this latent latent representation z so here with the traditional autoencoder given some input x if we pass it through after training we're always going to get the same input same output out right no matter how many times we pass in the same input one input one output that's because this encoding and decoding that we're learning is deterministic once the network is fully trained however in the case of a variational auto encoder and it and more generally we want to try to learn a better and smoother representation of the input data and actually generate new images that we weren't able to generate before with our auto encoder structure because it was purely deterministic and so vaes introduce an element of stochasticity of randomness to try to be able to now generate new images and also learn more smooth and more complete representations of the latent space and specifically what we do with a vae is we break down the latent space z into a mean and a standard deviation and the goal of the encoder portion of the network is to output a mean vector and a standard deviation vector which correspond to distributions of these latent variables z and so here as you can hopefully begin to appreciate we're now introducing some element of probability some element of randomness that will allow us to now generate new data and also build up a more meaningful and more informative latent space itself right the key that you'll see and the key here is that by introducing 
this notion of a probability distribution for each of those latent variables right each latent variable being defined by a mean standard deviation we will be able to sample from that latent distribution to now generate new data examples okay so now because we have introduced this element of of probability both our encoder and decoder architectures or our networks are going to be fundamentally probabilistic in their nature and what that means is that over the course of training the encoder is trying to infer a probability distribution of the latent space with respect to its input data while the decoder is trying to infer a new probability distribution over the input space given that same latent distribution and so when we train these networks we're going to learn two separate sets of weights one for the encoder which i'll denote by phi and one for the decoder which is going to be denoted by the variable theta and our loss function is now going to be a function of those weights phi and theta and what you'll see is that now our loss is no longer just constituted by the reconstruction term we've now introduced this new term which we'll call the regularization term and the idea behind the regularization term is that it's going to impose some notion of structure in this probability probabilistic space and we'll break it down step by step um in a few slides okay so just remember um that when we when we're after we define this loss and over the course of training as always we're trying to optimize the loss with respect to the weights of our network and the weights are going to iteratively being updated over the course of training the model to break down this loss term right the reconstruction loss is exactly is very related to as it was before with the auto encoder structure so in the case of images you can think about the pixel wise difference between your input and the reconstructed output what is more interesting and different here is the nature of this regularization term so we're going to discuss this in more detail what you can see is that we have this term d right and it's introducing something about a probability distribution q and something about a probability distribution p the first thing that i that i want you to know is that this term d is going to reflect a divergence a difference between these two probability probability distributions q of phi and p of z first let's look at the term q of phi of z given x this is the computation that our encoder is trying to learn it's a distribution of the latent space given the data x computed by the encoder and what we do in regularizing this network is place a prior p of z on that latent distribution and all a prior means is it's some initial hypothesis about what the distribution of these latent variables z could look like and what that means is it's going to help the network enforce some structure based on this prior such that the learned latent variables z roughly follow whatever we define this prior distribution to be and so when we introduce this regularization term d we're trying to prevent the network from going too wild or to overfitting on certain restricted parts of the latent space by imposing this enforcement that tries to effectively minimize the distance between our inferred latent distribution and some notion of of this prior and so in practice we'll see how this helps us smooth out the actual quality of the distributions we learn in the lane space what turns out to be a common choice for this prior right because i haven't told you 
anything about how we actually select this prior in the case of variational autoencoders a common choice for the prior is a normal gaussian distribution meaning that it is centered with a mean of 0 and has a standard deviation and variance of 1 and what this means in practice is that it encourages our encoder to try to place latent variables roughly evenly around the center of this latent space and distribute its encodings quite smoothly and now that we have defined the prior on the latent distribution we can actually make this divergence this regularization term explicit and with vaes what is commonly used is this function called the kullback-leibler divergence or kl divergence and all it is is a statistical way to measure the divergence the distance between two distributions so i want you to think about this term the kl divergence as a metric of distance between two probability distributions a lot of people myself included when introduced to vaes have a question about okay you've defined our prior to be a normal gaussian why it seems kind of arbitrary yes it's a very convenient function it's very commonly used but what effect does this actually have on how well our network regularizes so let's get some more intuition about this first i'd like you to think about what properties we actually want this regularization function to achieve the first is that we desire this notion of continuity meaning that if two points are close in the latent space they should probably relate to similar content that's semantically or functionally related to each other after we decode from that latent space secondly we want our latent space to be complete meaning that if we do some sampling we should get something that's reasonable and sensible and meaningful after we do the reconstruction so what could be the consequences of not meeting these two criteria in practice well if we do not have any regularization at all what this could lead to is that if two points are close in the latent space they may not end up being similarly decoded meaning that we don't have that notion of continuity and likewise if we have a point in latent space that cannot be meaningfully decoded meaning it just in this example doesn't really lead to a sensible shape then we don't have completeness our latent space is not very useful for us what regularization helps us achieve is these two criteria of continuity and completeness we want to realize points that are close in this lower dimensional space that can be meaningfully decoded and that reflect some notion of continuity and of actual relatedness after decoding okay so with this intuition now i'll show you how the normal prior can actually help us achieve this type of regularization again going back to our very simple example of colors and shapes simply encoding the latent variables according to a non-regularized probability distribution does not guarantee that we'll achieve both continuity and completeness specifically if we have variances these values sigma that are too small what this could result in is distributions that are too narrow too pointed so we don't have enough coverage of the latent space and furthermore if we say okay each latent variable should have a completely different mean and we don't impose any prior on them being centered at mean zero what this means is that we can have vast discontinuities in our latent space and so it's not meaningful to
traverse the latent space and try to find points that are similar and related imposing the normal prior alleviates both of these issues right by imposing the standard deviations to be one and trying to regularize the means to be zero we can ensure that our different latent variables have some degree of overlap that our distributions are not too narrow and that they have enough breadth and therefore encourage our latent space to be regularized and be more complete and smoother again reiterating that this is achieved by centering our means around zero and regularizing variances to be 1 note though that the greater the degree of regularization you impose on the network the more it can adversely affect the quality of your reconstruction and so there's always going to be a balance in practice between having a good reconstruction and having good regularization that helps you enforce this notion of a smooth and complete latent space by imposing this normal based regularization okay so with that right now we've taken a look at both the reconstruction component and the regularization component of our loss function and we've talked about how both the encoder and the decoder are inferring and computing a probability distribution over their respective learning tasks but one key step that we're missing i'll just go back a bit sorry that was my error okay one key step that we're missing is how we actually in practice can train this network end to end and what you may notice is that by introducing this mean and variance term by imposing this probabilistic structure to our latent space we introduce stochasticity this is effectively a sampling operation operating over a probability distribution defined by these mu and sigma terms and what that means is that during back propagation we can't effectively back propagate gradients through this layer because it's stochastic and so in order to train using back propagation we need to do something clever the breakthrough idea that solved this problem was to actually re-parameterize the sampling layer a little bit so that you divert the stochasticity away from these mu and sigma terms and then ultimately be able to train the network end to end so as we saw right this notion of a probability distribution over mu and sigma squared does not allow direct back propagation because of this stochastic nature what we do instead is reparametrize the value of z ever so slightly and the way we do that is by taking mu and taking sigma independently trying to learn fixed values of mu and fixed values of sigma and effectively diverting all the randomness the stochasticity to this value epsilon where now epsilon is what is actually being drawn from a normal distribution and what this means is that we can learn a fixed vector of means and a fixed vector of variances and scale those variances by this random constant such that we can still enforce learning over a probability distribution but divert the stochasticity away from those means and sigmas that we actually want to learn during training
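here is a minimal sketch of the sampling step and the loss we just walked through written in tensorflow the reparameterized sample is mu plus sigma times a random epsilon and the loss combines the pixel-wise reconstruction term with the closed form kl divergence to the unit gaussian prior the encoder and decoder themselves are omitted and the beta weight anticipates the beta vae variant discussed next all sizes and weights here are illustrative assumptions

```python
import tensorflow as tf

# minimal sketch of the vae sampling step and loss described above -- the
# encoder/decoder architectures are omitted and the latent size and beta
# weighting are illustrative assumptions
latent_dim = 2

def sample_z(mu, logvar):
    # reparameterization trick: z = mu + sigma * eps, with eps ~ N(0, 1), so the
    # randomness lives in eps and gradients can flow through mu and sigma
    eps = tf.random.normal(shape=tf.shape(mu))
    return mu + tf.exp(0.5 * logvar) * eps

def vae_loss(x, x_hat, mu, logvar, beta=1.0):
    # reconstruction term: pixel-wise difference between input and reconstruction
    recon = tf.reduce_mean(tf.reduce_sum(tf.square(x - x_hat), axis=-1))
    # closed-form kl divergence between N(mu, sigma^2) and the unit gaussian prior
    kl = -0.5 * tf.reduce_mean(
        tf.reduce_sum(1.0 + logvar - tf.square(mu) - tf.exp(logvar), axis=-1)
    )
    return recon + beta * kl   # beta > 1 gives the beta-vae variant mentioned below
```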
What is really cool about VAEs is that because we have this notion of probability, of distributions over the latent variables, we can sample from our latent space and actually perturb and tune the values of individual latent variables, keeping everything else fixed, and generate data samples that vary in a single feature, a single latent variable. You can see that really clearly in this example, where one latent variable is changed in the reconstructed outputs while all other variables are fixed, and the effect is to tilt the pose of the person's face as a result of that latent perturbation. The different latent variables that we learn over the course of training can effectively encode and pick up on different latent features that may be important in our data set, and ideally our goal is to maximize the information we pick up through these latent variables, so that one latent variable captures one feature and another captures a separate, uncorrelated feature — this is the idea of disentanglement. In this example, we have the head pose changing along the x-axis and something about the smile, the shape of the person's lips, changing along the y-axis. The way we can actually enforce this disentanglement in practice is fairly straightforward: if you take a look at the standard loss function for a VAE, we again have the reconstruction term and the regularization term, and with an architecture called the beta-VAE all we do is introduce a hyperparameter beta that effectively controls how strictly we regularize. It turns out that if you set beta to be greater than one, you encourage a more efficient latent encoding that promotes disentanglement, so that with a standard VAE, at beta equal to one, you see the head rotation change but the smile also changes along with it, whereas with a beta-VAE with a much higher value of beta — it's subtle, but hopefully you can appreciate it — the smile, the shape of the lips, stays relatively the same while only the head pose, the rotation of the head, changes as a function of the latent perturbation. Okay, so I introduced at the beginning a potential use case of generative models for building more fair and de-biased machine learning models for deployment, and what you will explore in today's lab is practicing this very idea. It turns out that by using a latent variable model like a VAE — because we train these networks in a completely unsupervised fashion — we can automatically pick up on the important underlying latent variables in a data set, and build estimates of the distributions of our data with respect to important features like skin tone, pose, illumination, head rotation, and so on. That allows us to go one step further and use these distributions over latent features to actively adjust and refine our data set during training, in order to create a more representative and unbiased data set that will result in a less biased model, and this is the idea
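(As a quick aside on the beta-VAE loss described above, here is a minimal sketch of how the beta weighting enters the total objective; it reuses the KL form from the earlier sketch and all names are illustrative, not the lab's implementation.)

```python
import numpy as np

def beta_vae_loss(x, x_hat, mu, log_var, beta=4.0):
    """Total loss = reconstruction + beta * KL regularization.

    beta = 1 recovers the standard VAE objective; beta > 1 weights the KL
    term more heavily, which is the beta-VAE recipe for encouraging
    disentangled latent variables.
    """
    reconstruction = np.mean((x - x_hat) ** 2)  # e.g. pixel-wise MSE
    kl = -0.5 * np.sum(1.0 + log_var - mu**2 - np.exp(log_var))
    return reconstruction + beta * kl
```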
that you're going to explore really in depth in today's lab okay so to summarize our key points on variational autoencoders they use a compressed representation of the world to return something that's interpretable in terms of the latent features they're picking up on they allow for completely unsupervised learning via this reconstruction they we employ the re-parametrization trick to actually train these architectures and to end via back propagation we can interpret latent variables using a perturbation function and also sample from our latent space to actively generate new data samples that have never been seen before okay that being said the key problem of variational autoencoders is a concern of density estimation trying to estimate the probability distributions of these latent variables z what if we want to ignore that or pay less attention to it and focus on the generation of high quality new samples as our output for that we're going to turn and transition to a new type of generative model called gans where the goal here is really we don't want to explicitly model the probability density or distribution of our data we want to care about this implicitly but use this information mostly to sample really really realistic and really new instances of data that match our input distribution right the problem here is that our input data is incredibly complex and it's very very difficult to go from something so complex and try to generate new realistic samples directly and so the key insight and really elegant idea behind gans is what if instead we start from something super simple random noise and use the power of neural networks to learn a transformation from this very simple distribution random noise to our target data distribution where we can now sample from and this is the really the key breakthrough idea of generative adversarial networks and the way that gans do this is by actually creating a overall generative model by having two individual neural networks that are effectively adversaries they're competing with each other and specifically we're going to explore how this architecture involves these two components a generator network which is functioning and drawing from a very simple data very simple input distribution purely random noise and it's trying to use that noise and transform it into an imitation of the real data and conversely we have this adversary network and discriminator which is going to take samples generated by the generator entirely fake samples and is going to predict whether those samples are real or whether they're fake and we're going to set up a competition between these two networks such that we can try to force the discriminator to classify real and fake data and to force the generator to produce better and better fake data to try to fool the discriminator and to show you how this works we're going to go through one of my absolute favorite illustrations of this class and build up the intuition behind gans so we're going to start really simply right we're going to have data that's just one-dimensional points online and we begin by feeding the generator completely a random noise from this one-dimensional space producing some fake data right the discriminator is then going to see these points together with some real examples and its task is going to be to try to output a probability that the data it sees are real or if they're fake and initially when it starts out right the discriminator is not trained at all so its predictions may not be very good but then over 
the course of training the idea is that we can build up the probability of what is real versus decreasing the probability of what is fake now now that we've trained our discriminator right until we've achieved this point where we get perfect separation between what is real and what is fake we can go back to the generator and the generator is now going to see some examples of real data and as a result of this it's going to start moving the fake examples closer to the real data increasingly moving them closer such that now the discriminator comes back receives these new points and it's going to estimate these probabilities that each point is real and then iteratively learn to decrease the probability of the fake points and now we can continue to adjust the probabilities until eventually we repeat again go back to the generator and one last time the generator is going to start moving these fake points closer to the real data and increasingly increasingly iteratively closer and closer such that these fake examples are almost identical following the distribution of the real data such that now at this point at the end of training it's going to be very very hard for the discriminator to effectively distinguish what is real what is fake while the generator is going to continue to try to improve the quality of its samples that it's generating in order to fool the discriminator so this is really the intuition behind how these two components of a gan are effectively competing with each other to try to maximize the quality of these uh fake instances that the gener generator is spitting out okay so now translating that intuition back to our architecture we have our generator network synthesizing fake data instances to try to fool the discriminator the discriminator is going to try to identify the synthesized instances the fake examples from the real data and the way we train gans is by formulating an objective a loss function that's known as an adversarial objective and overall our goal is for our generator to exactly reproduce the true data distribution that would be the optimum solution but of course in practice it's not it's very difficult to try to actually achieve this global optimum but we'll take a closer look at how this loss function works the loss function while at first glance right may look a little daunting and scary it actually boils down to concepts that we've already introduced we're first considering here the objective for the discriminator network d and the goal here is that we're trying to maximize the probability of the discriminator of identifying fake data here as fake and real data as real and this term comprising uh a loss over the fake data and the real data is effectively a cross-entropy loss between the true distribution and the distribution generated by the generator network and our goal as the discriminator is to maximize this objective overall conversely for our generator we still have the same overall component this cross-entropy type term within our loss but now we're trying to minimize this objective from the perspective of the generator and because the generator cannot directly access the true data distribution d of x its focus is on minimizing minimizing the distribution and loss term d of g of z which is effectively minimizing the probability that it's generated data is identified as fake so this is our goal for the generator and overall we can put this together to try to comprise the comprise the overall loss function the overall min max objective which has both the 
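(A rough sketch of these two competing objectives, in NumPy with illustrative names — `d_real` and `d_fake` are the discriminator's probability outputs on real and generated samples. This mirrors the standard GAN formulation being described, not any specific implementation from the lecture.)

```python
import numpy as np

def discriminator_loss(d_real, d_fake, eps=1e-8):
    """Discriminator maximizes log D(x) + log(1 - D(G(z))).

    Returned negated, so that minimizing this loss maximizes the objective.
    """
    return -np.mean(np.log(d_real + eps) + np.log(1.0 - d_fake + eps))

def generator_loss(d_fake, eps=1e-8):
    """Generator minimizes log(1 - D(G(z))), i.e. pushes D(G(z)) toward 1.

    (In practice the non-saturating form -log D(G(z)) is often used instead,
    for better gradients early in training.)
    """
    return np.mean(np.log(1.0 - d_fake + eps))
```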
term for the generator as well as the term for the discriminator now after we've trained our network our goal is to really use the generator network once it's fully trained focusing in on that and sample from it to create new data instances that have never been seen before and when we look at the train generator right the way it's synthesizing these new data instances is effectively going from a distribution of completely random gaussian noise and learning a function that maps a transformation from that gaussian noise towards a target data distribution right and this mapping this approximation is what the generator is learning over the course of training itself and so if we consider one point right in this distribution one point in the noise distribution is going to learn to one is going to lead to one point in the target distribution and similarly now if we consider an independent point that independent point is going to produce a new instance in the target distribution falling somewhere else on this data manifold and what is super super cool and interesting is that we can actually interpolate and traverse in the noise space to then interpolate and traverse in the target data distribution space and you can see the result of this inter interpolation this traversal in practice where in these examples we've transformed this image of a black goose or black swan on the left to a robin on the right simply by traversing this input data manifold to result in a traversal in the target data manifold and this idea of domain transformation and traversal in these complex data manifolds leads us to discuss and consider why gans are such a powerful architecture and how what the some examples of their generated data actually can look like and so one idea that has been very effective in the practice of building gans that can synthesize very realistic examples is this idea of progressive growing the idea here is to effectively add layers to each of the generator and discriminator as a function of training such that you can iteratively build up more and more detailed image generations as a result of the progression of training so you start you know with a very simple model and the outputs of as a result of this are going to have very low spatial resolution but if you iteratively add more and more network layers you can improve the quality and the spatial resolution of the generated images and this helps also speed up training and result in more stable training as well and so here are some examples of of a gan architecture using this progressive growing idea and you can see the realism the photorealism of these generated outputs another very interesting advancement was in this idea of style transfer this has been enabled by some fundamental architecture improvements in the network itself which we're not going to go into too detail about but the idea here is that we're going to actually be able to build up a progressive growing gan that can also transfer styles so features and effects from one series of images onto a series of target images and so you can see that example here where on one axis we have target images and on the other axis the horizontal axis is the style captures the style of image that we want to transfer onto our target and the result of such an architecture is really remarkable where you can see now the input target has effectively been transformed in the style of those source images that we want to draw features from and as you may have guessed the images that i showed you at the beginning 
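(To make the traversal idea concrete, here is a hypothetical sketch of walking a straight line between two noise vectors and decoding each point with a trained generator; the `generator` callable is an assumption standing in for any trained noise-to-image mapping.)

```python
import numpy as np

def interpolate_latents(z_start, z_end, generator, steps=8):
    """Walk a line in the noise space and decode each intermediate point.

    Nearby noise vectors should decode to semantically nearby images, which
    is what the goose-to-robin traversal illustrates.
    """
    ts = np.linspace(0.0, 1.0, steps)
    return [generator((1.0 - t) * z_start + t * z_end) for t in ts]
```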
of the lecture were generated by one of these types of gan architectures and these results are very very striking in terms of how realistic these images look you can also clearly extend it to other domains and other examples and i will note that while i've focused largely on image data in this lecture this general idea of generative modeling applies to other data modalities as well and in fact many of the more recent and exciting applications of generative models are in moving these types of architectures to new data modalities and new domains to formulate design problems for a variety of application areas okay so one final series of architecture improvements that i'll briefly touch on is this idea of trying to impose some more structure on the outputs itself and control a little bit better about what these outputs actually look like and the idea here is to actually impose some sort of conditioning factor a label that is additionally supplied to the gan network over the course of training to be able to impose generation in a more controlled manner one example application of this is in the instance of paired translation so here the network is considering new pairs of inputs for example a scene as well as a corresponding segmentation of that scene and the goal here is to try to train the discriminator accordingly to classify real or fake pairs of scenes and their corresponding segmentations and so this idea of pair translation can be extended to do things like moving from labels semantic labels of a scene to generating a an image of that scene that matches those labels going from an aerial view of a street to a map type output going from a label to a facade of a building day to night black and white to color edges of an image to to a filled out image and really the applications are very very wide and the results are quite impressive in being able to go back and forth and do this sort of pair translation operation for example in data from google street view shown here and i think this is a fun example which i'll briefly highlight this here is looking at um coloring from edges of of a sketch and in fact the data that were used to train the scan network were uh images of pokemon and these are results that the network was generating from simply looking at images of pokemon you can see that that training can actually extend to other types of artwork instances beyond the pokemon example shown here okay that just replaces it okay the final thing that i'm going to introduce and touch on when it comes to gan architectures is this cool idea of completely unpaired image to image translation and our goal here is to learn a transformation across domains with completely unpaired data and the architecture that was introduced a few years ago to do this is called cyclegan and the idea here is now we have two generators and two discriminators where they're effectively operating in their own data distributions and we're also learning a functional mapping to translate between these two corresponding data distributions and data manifolds and without going into i could explain and i'm happy to explain the details of of this architecture more extensively but for the sake of time i'll just highlight what the outputs of this of this architecture can look like where in this example the task is to translate from images of horses to images of zebras where you can effectively appreciate these various types of transformations that are occurring as this unpaired translation across domains is is occurring okay the reason 
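(Before wrapping up the CycleGAN idea just described, here is a minimal sketch of the cycle-consistency term that couples the two generators; `g_xy` and `g_yx` are hypothetical stand-ins for the two mapping networks, and the full objective also includes the two adversarial losses.)

```python
import numpy as np

def cycle_consistency_loss(x, y, g_xy, g_yx):
    """L1 cycle loss used alongside the adversarial losses in CycleGAN.

    g_xy maps domain X -> Y (e.g. horse -> zebra), g_yx maps Y -> X.
    Translating and then translating back should recover the original image,
    which is what ties the two unpaired domains together.
    """
    forward_cycle = np.mean(np.abs(g_yx(g_xy(x)) - x))   # x -> y -> x
    backward_cycle = np.mean(np.abs(g_xy(g_yx(y)) - y))  # y -> x -> y
    return forward_cycle + backward_cycle
```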
why I highlighted this example is that I think CycleGAN really captures this idea of GANs being very powerful distribution transformers: the original example we introduced was going from Gaussian noise to some target data manifold, and in CycleGAN the objective is something a little more complex — going from one data manifold to another target data manifold, for example the horse domain to the zebra domain. More broadly, it highlights that the neural network mapping we learn with this kind of generative model is effectively a very powerful distribution transformation. It turns out that CycleGANs can also extend to other modalities, as I alluded to, not just images: we can look at audio and sequence waveforms and transform speech by taking an audio waveform, converting it into a spectrogram, and then doing that same image-based domain translation learned by the CycleGAN to transform speech in one domain into speech in another domain. You may be thinking ahead, but this turns out to be the exact approach we used to synthesize the audio of Obama's voice that Alexander showed at the start of the first lecture: we used a CycleGAN architecture to take Alexander's audio, in his voice, convert it into a spectrogram, and then use the CycleGAN to translate that spectrogram from his audio domain to that of Obama. So to remind you, I'll just play this output: "Welcome to MIT 6.S191, the official introductory course on deep learning here at MIT." Again, with this sort of architecture the applications can be very broad and extend to use cases well beyond turning images of horses into images of zebras. All right, so that concludes the core of this lecture and the core technical lectures for today. In this section in particular we touched on two key classes of generative models: autoencoders and variational autoencoders, where we look to build up estimates of lower-dimensional probabilistic latent spaces, and secondly generative adversarial networks, where our goal is really to optimize a network to generate new data instances that closely mimic some target distribution. With that we'll conclude today's lectures, and just a reminder about the lab portion of the course, which follows immediately after this: we have the open office hour sessions in 10-250, where Alexander and I will be there in person as well as virtually in Gather Town. Two important reminders: I sent an announcement out this morning about picking up t-shirts and other related swag — we will have that in room 10-250, where we're moving next, so please be patient as we arrive. We'll make announcements about availability for the remaining days of the course; the short answer is yes, we'll also be available for picking up shirts on later days. And that's basically it for today's lectures — I hope to see many of you in office hours today, and if not, hopefully for the remainder of the week. Thank you so much
MIT_6S191_Introduction_to_Deep_Learning
MIT_6S191_2019_Image_Domain_Transfer_NVIDIA.txt
Thank you so much for the nice introduction, it's a pleasure to be here — I was actually a postdoc here a long time ago, from 2003 to 2006, so it's nice to come back and give a talk today. I'll talk today about image domain transfer, which I'll introduce in a minute. I'm part of NVIDIA Research; NVIDIA Research is a collection of researchers who work on a number of things, really all the way from circuits, at a very low level of hardware, all the way up to machine learning and computer vision. Our mission is to develop technology through research; we have about 140 researchers in 11 different time zones, and my team, the learning and perception team, is about 20 people, and our focus really is a lot of computer vision research — today I'll talk about some of it, and I've called it image domain transfer. One way to think about the tasks I'm looking at is: can we teach machines some form of imagination ability? And we'll see it's not really imagination, it's just learning by example, but still, can we do things like this: can we take an input image and ask a computer to translate it into what it might look like, say, on a rainy day? So given a sunny-day image, can we teach it to translate this image to look as if it were taken during a rainy day? We just need to apply some function f that takes this image and transfers it to a rainy day, and the question is how we can do this; I'll talk about a whole series of techniques that we use to do things along those lines. Now you might ask why I would want to do this in the first place. The example I showed is one I'll come back to, but there are other examples of translating one image to another with the goal of enhancing it or changing it in some particular way: it could go from low-res to high-res, for instance; if you have a blurry image, you might want to make it sharp; if you have a photograph, you might want to make it look like a painting; if you have a low dynamic range image, you might want to make it look like a high dynamic range image; if you have a synthetic image, you might make it look more real; or if you have a thermal image, maybe you want to add color to it. And then there are the sort of examples I just showed: if you have a summer image, maybe you want to make it a rainy-day image or a winter image; if you have a daytime image, can I make it a nighttime image, and so forth. So there are lots of applications of translating one image to a different domain that's related but somewhat different, and being NVIDIA, the ones we're actually most interested in are the last ones I presented — day to night, summer to winter, summer to rainy day. The reason is that if you can do this type of translation, you might not need to acquire as much training data for a particular use case. Say you want to train something for a self-driving car, and you've taken lots of footage in California, which is always sunny; now I want to train it, or maybe test it, on winter images, so either I go out and capture more data, or maybe I can take my existing data and make it look like it was taken in winter, or at night, or in rain, and so forth. So I'll talk about two main approaches, and I'll spend most of the time on the second one. One approach is that you take an example and, based on this example, you derive how an image should be translated into another image; this is usually called
nonparametric: we don't actually train anything, we just base the translation on an example and somehow apply that example to our image, so the translating function f takes an example as input and uses it to do the translation. Then I'll spend a lot of time on learning-based methods, where we have some training set and we learn this function f to go, say, from summer to winter. So how does the first one work? It's often referred to as style transfer, and there are actually lots of old methods for this, originally especially in graphics, but now there are quite a few methods in the computer vision community as well. The setup is usually as follows: you have some kind of content photo, usually a photograph, that you want to modify to look like a given style photo. So I can give you a style photo and a content photo, and now you want to apply this style to the content and make it look like this — it's still the same glass pyramid, but now it has the style of the architecture photograph on the left. In a way it's quite simple: we generate an output given the input and given the example, which is the style photo; the question is how we do it. There is a lot of work on artistic style transfer, which you might have seen, where the goal is to take a content image and make it look like a painting — here we apply The Starry Night and get an output of, I think, Tübingen that now looks like the painting. What we want to look at today is what we call photo style transfer, where the input is a photo and the style is also a photo, and we want to apply that photographic style to our content image. There's a slight difference between the two: in artistic style transfer it's okay, actually desired, to change some of the geometry in the scene — it's fine if the houses get bent a little bit, that's part of the style — whereas for photographic images that's not the case: you want to leave the geometry alone and only change the look. The work we've done recently, published at ECCV, is a two-step process: we model photographic style transfer as a composition of two functions, f1 and f2. The first function, which we call the stylization step, applies the style photo; it's neural-network based, and it does okay but leaves some artifacts. The second function, which we call the smoothing step, gets rid of some of the somewhat wobbly artifacts that you can see. All of this is similar, to a degree, to the neural style transfer work you might have seen. The main insight, or the main assumption that we make — and it turns out to hold — is that if you encode the input image with an encoder into some feature space (we just use an existing VGG-19 network, encode up to the fourth layer, and take the features that come out of it), then the covariance matrix of those features encodes the style. So if H_c are the features of the content image, H_c H_c^T is the covariance matrix, and we make the assumption that these covariances somehow encode the style. We can therefore take the features from both the content image and the style image, and try to enforce the covariances of the style features onto the content features so that they match.
It's actually fairly simple to do: we encode the image, then we apply a whitening step — really just a standard whitening step — then a coloring step, which essentially makes sure those covariances match the style, and then we decode again. So it's almost a normal encoder-decoder architecture, except for this whitening and coloring step in the middle, and out comes the style-transferred image. Similar to previous work, we also make use of semantic labels if they're available: if we know this part of the image is sky and this part of the style image is also sky, we map those together, to make sure we don't mix up roads and skies and so forth. So here's what you get: style on the left, content in the middle, and we apply the style to the content to get the output on the right. For the top image I think it looks all right; for the bottom image it does apply the style of this image to this content photo, however you can see artifacts in the clouds — it's a little wobbly. Overall the style is there, but the geometry has suffered a bit; it's more wobbly than it should be. So we apply the second step, which we call photo smoothing, and it's actually a fairly simple idea: we say that in the resulting image, the relationships between neighboring pixels should be similar to those in the original content image. It's okay to apply the style, but the relationship between neighboring pixels should remain the same. We impose this through an optimization, which actually has a closed-form solution, to really make sure that the similarity between neighboring pixels is the same in the image we're creating as in the original one. Once you apply this you get much better results: this is the original output again, and once we apply the smoothing step you get this — the wobbly artifacts disappear and the result is much more faithful to the original content photo, while still carrying the style that was applied. Let me show you a few more examples and comparisons. Again, content and style; ours is on the right, and in the middle are two previous methods, one geared towards artistic style transfer and one for photographic style transfer. They work okay, but you can see that the one geared towards artistic style transfer often creates artifacts — the geometry of the pyramid is not quite the same anymore — whereas with ours it remains the same. A few more examples: style applied to the content photo, compared to an older method and to one from, I think, last year, and then our result. Overall ours works fairly consistently: the style is usually applied well and there are hardly any artifacts — you can see a little bit here where it looks maybe not quite right, but overall it's very good. We also did a user study to see what people thought about the different methods I just showed, and we asked two questions: how well is the photo stylized, so how well do we replicate the style photo, and how photorealistic is the resulting image, which is basically asking how few artifacts there are.
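(Going back to the whitening and coloring step for a moment, here is a rough NumPy sketch of that transform on flattened VGG-style feature maps. The eigendecomposition-based formulation is one standard way to match covariances; the variable names and the small regularizer are illustrative assumptions, not the exact ECCV implementation.)

```python
import numpy as np

def whiten_color_transform(h_c, h_s, eps=1e-5):
    """Match the covariance of content features to that of style features.

    h_c, h_s: feature maps flattened to shape (channels, pixels), e.g. VGG
    activations of the content and style images. Returns recolored content
    features whose covariance approximates that of the style features.
    """
    # Center both feature sets.
    h_c = h_c - h_c.mean(axis=1, keepdims=True)
    h_s = h_s - h_s.mean(axis=1, keepdims=True)

    # Whitening: remove the content image's own feature correlations.
    cov_c = h_c @ h_c.T / (h_c.shape[1] - 1) + eps * np.eye(h_c.shape[0])
    w_c, E_c = np.linalg.eigh(cov_c)
    whitened = E_c @ np.diag(w_c ** -0.5) @ E_c.T @ h_c

    # Coloring: impose the style image's feature covariance.
    cov_s = h_s @ h_s.T / (h_s.shape[1] - 1) + eps * np.eye(h_s.shape[0])
    w_s, E_s = np.linalg.eigh(cov_s)
    return E_s @ np.diag(w_s ** 0.5) @ E_s.T @ whitened
```

A trained decoder would then map these transformed features back to an image, with the smoothing step applied on top.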
overall um ours was preferred in addition Dartmouth is actually fairly fast it's very efficient it's just really an encoder decoder which is fast to do what so this the first part I wanted to show this sort of more classical method in a sense that it's based on an example and you apply it to an image so it's similar to older methods except that you use a neural network in the middle to do some other work whereas all the work that I'm going to show from now on it's all based on training on a training set and learning this function and this function is represented as a neural network of some kind and in particular all the work that I'm going to show for non it's all based on on generative adversarial neural networks you've probably seen ganz at some point in your course that I'll go over it very very briefly so again it's the following so there's two components to again so you have a generator which takes some vector Z it's a latent vector Z and based on this vector it generates an output image and at the beginning it's just a neural network randomized with random sorry initialized with random weights and it'll output some random image all right then there's a discriminator which will check whether your output is realistic or not and the question is what does realistic mean well actually what it means is the discriminator is supposed to check whether the synthesized image is similar or the distribution of the synthesized images is similar to the distribution of your training data set so you might have a training data set of say faces as in this example here then your goal the goal for the generator is to generate images of faces that are similar to the one in a training set and has and with a similar distribution so that's its goal the discriminators goal is to figure out whether the generator synthesizes images that are similar to those in the training set or if they're different so for instance if you were generator accident they were to synthesize a car it would say no this is yeah it's not a face well it's not a realistic face or if you were to generate by accident a cartoon it should also say no it's not a realistic face because it doesn't look like my training set of realistic faces and he trained these two in a ping-pong fashion so both are neural networks but both are generator and the discriminated and neural networks and they're both trained together really well one of the other usually near ping pong fashion and surprisingly you can actually make this work right even though yeah needle generator or the discriminator actually knows anything at the beginning they still manage to converge to a reasonable solution and synthesize realistic images most of the Gantt work works in a supervised fashion meaning you have training samples at Roy training Paris right you have an input image say a shoe given oh sorry notice you give it an image given edges and the output might be a realistic-looking shoes right it might be a training pair so you know that you should go from this edge image to this particular shoe we'll also talk a bit about the unsupervised setting where I have two training sets but they're not in correspondence it might be one training set of data images they might one and the other training set might be nighttime images they might all be driving videos all right so they might they're related to training sets but they're not identical in a sense that for x1 I have a corresponding image x2 that was taken at night type from the same point of view with the same cars in the same 
place. It's difficult — actually impossible — to create data sets that are in full correspondence if you want to go from daytime to nighttime, because there is always something that moves. So in the unsupervised setting I just have two separate training sets; they're similar but not in direct correspondence. In addition, we also care about unimodal versus multimodal translation: what I just showed, the original GAN as I explained it, was a unimodal model, meaning that if you want to do translations, say from dogs to cats, it might only output one cat, whereas with a multimodal model you might be able to give it an additional style vector and go not just from one dog to one cat but to potentially different cats that have different looks. These are the different ways of classifying the work I'm going to present today, and there's lots of work in this space — this is just a list of some of the GAN papers that came out in the last year or two — and I'll talk about some of our work and put it in context. The first one I'd like to talk about is pix2pixHD, which we presented last year at CVPR; it's supervised and it's multimodal, and really the goal here was to train a conditional GAN that goes from a label map, which I showed at the very beginning, to an image. So given labels of the scene — the labels say this is road, this is sidewalk, this is a car, these are people, and so forth — I want to synthesize an image that looks realistic and corresponds to the labels I'm given. So now I need to learn this somehow — how do I learn this? We make use of GANs, with a slight difference from the way I explained them at the beginning: now the input is not just a latent vector z, but rather the label map, which is itself an image, so we want to condition the generator on the label image. The discriminator still does the same thing — its task is still to tell you whether the synthesized image looks like a real image — except it also sees the labels, so it can try to figure out whether the synthesized image actually corresponds to the labels and whether it's similar to other images in the training set. In addition, we add multi-modality through an extra step: once we finish training, we look at all the features underneath a particular label — all the roads, all the cars, and so forth — and we cluster the features that we see during training. For instance, for the road you might have seen a lot of standard paved tar roads, but you might also have seen cobblestone, and the hope is that by clustering you can discover these dominant modes — normal paved road versus cobblestone road. Once we have this we can actually use it for synthesis: we can give a cluster to the decoder and say it should be one or the other, so you can actually do multimodal synthesis.
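(A rough sketch of that clustering step, assuming we have per-instance average feature vectors collected under each semantic label; `KMeans` from scikit-learn is used purely for illustration — the names and number of clusters are assumptions, not the paper's exact recipe.)

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_label_features(instance_features, n_modes=10, seed=0):
    """Cluster averaged encoder features for one semantic class (e.g. 'road').

    instance_features: array of shape (num_instances, feature_dim), one row
    per region of that class seen during training. The cluster centers act as
    selectable 'modes' (paved vs. cobblestone, different car colors, ...)
    that can be fed to the decoder at synthesis time.
    """
    km = KMeans(n_clusters=n_modes, random_state=seed, n_init=10)
    km.fit(instance_features)
    return km.cluster_centers_

# At test time, picking a different center for the 'road' class steers the
# generator toward a different road appearance for the same label map.
```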
Then there are some other tricks we play to make sure it actually works well, and those are really important, because if you just use a standard GAN the quality of the results is usually okay but not great. So we do two things. First, we change the generator and train it in a coarse-to-fine fashion: we first train it to synthesize low-resolution images, and once we're good at those, we take the network and add layers around it, at the beginning and at the end, to make it synthesize higher-resolution images — and that really helps a lot. Furthermore, we have multi-scale discriminators: the discriminator doesn't just look at the high-resolution images but at images at different resolutions, which helps it to have a better global view of what's going on, so it's better at discriminating badly synthesized images from what they actually should look like. We also use a more robust objective: instead of just the GAN loss, we also add what people call a feature loss. Feature losses are often used when your task is to synthesize or create images of some kind — the idea is that you don't just compare your synthesized image to a particular other image with, say, an L1 norm between pixels, but rather do it in the feature space of a neural network. In this case we take the features of the discriminator and check how similar they are between the training images and the synthesized images, which also helps a lot; it's sometimes also called a VGG loss, and it's quite useful (a small sketch of this feature-matching idea follows at the end of this passage). When you do all of those things, you can do something like this: the label map is shown on the top, and then we go through different modalities — it's always the same structure we create, except we go through different pavements and cobblestone, or change the cars, go through different potential cars — and it always does something sensible. If you look carefully you can see it's not always great — like the white car, which, when it comes up again, looks slightly funky — but overall it's actually surprising that just from a data set of label maps and their corresponding images you can learn to synthesize realistic-looking, or mostly realistic-looking, images. You can now also change some of the labels in the background: here the sky becomes trees, or we change the trees to buildings, then re-synthesize and get a reasonable image out — it's really quite neat. You can also paint in parts that you want to be houses, or parts that you want to be a tree, so you can make modifications to your label map and the neural network will just synthesize an image according to it — it's sort of painting by numbers, and it works fairly well. Here's a comparison to some of the previous work: on the top left is a previous method called pix2pix, then CRN, a more classical method, on the top right, and ours at the bottom, in two versions, with and without the feature loss — it's hard to tell, but the one on the right is slightly better than the one on the left. So that was pix2pixHD. Then more recently we extended this to videos, which was published at NeurIPS late last year. Now the difference is that we don't want to just take a single frame as input and synthesize an image, but rather take a video of label maps and synthesize a video of realistic-looking scenes according to that video of label maps. Now you could say, why not just use what I presented, pix2pixHD, and apply it on a per-frame basis? You can do this, except it looks like this: each frame is all right, except temporally
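(Briefly, on the feature-matching loss mentioned above: a minimal sketch comparing real and synthesized images in the feature space of some network — the `feature_extractor` callable stands in for discriminator or VGG features and is an illustrative assumption.)

```python
import numpy as np

def feature_matching_loss(real_img, fake_img, feature_extractor):
    """Compare images in feature space instead of raw pixel space.

    feature_extractor returns a list of intermediate feature maps (e.g. from
    the discriminator or a pretrained VGG); the loss is the mean L1 distance
    between corresponding features of the real and synthesized image.
    """
    real_feats = feature_extractor(real_img)
    fake_feats = feature_extractor(fake_img)
    distances = [np.mean(np.abs(r - f)) for r, f in zip(real_feats, fake_feats)]
    return sum(distances) / len(distances)
```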
they're not coherent because we didn't ask you to be temporally coherent it doesn't know anything about previous images for next image this that follow so each image is okay but overall yeah it flickers a lot right so it's not quite what you want so we need to do something to make it too poorly coherent and it's it's fairly clear sorry Tara and it's a really an extension of pics to excite she says it's very similar in architecture so the first thing we do we built a sequential generator so now we say you don't just get a single map as input but what you get you know multiple frames of label Maps as input you also get past images to be synthesized to actually know what what's happened before based on these we created an intermediate output in addition we also compute optical flow between the previous images so we know how things have been moving we use the flow to warp to warp the previous image into what the next image could look like and then we combine our synthesized image with the protein with the warped image what it might look like into a final image and that helps I'm be obvious in network - yeah - combine this then the trick of using multiple scales for the discriminator we use the same thing again we do it not just spatially but also temporally so when we discriminate things we look at different spatial resolutions but also different temporal time frames we look at larger distances as well as smaller distances the training we still do progressive but we don't just do it low resolution first and high resolution but we do it low resolution to high resolution as well as temporally we do it sort of smaller frames smaller temporal time steps and then larger ones so we do it special temporally progressive and then we alternate between space and time when we train so if you combine all this you can now take this video sorry went too far take this video and turn it into a real video in a second there you go so you can see now it's temporally consistent it is still not perfect if you look carefully yeah you can see some things are a little funky - trees seem a little repetitive the cars are glowing a little bit but overall yeah it does let me play again well it does a decent job when this is on running on unseen label map so it hasn't seen this particular sequence of label maps of course right in this corner compared against to the previous method the pics of pics HD on the top right previous method bottom left that one was also not meant for temporal sequences so it also flickers and ours on the right it does exit the autor surprisingly decent job at synthesizing people okay now we can do the same thing for edges so in this case we trained it on edges and with yourself people since we have to style or this is this multi-modality where we look at you know what things usually look like on average so yeah the clustering figured out that's different skin tones missed different hair colors so now we can change skin tone and hair color so input is just the edges on the bottom left and we synthesize three different well the same person three times but with different hair and skin tone and here we've taken wages of someone dancing so those examples of her dancing and then yeah we extracted the pose and then from this post we animated this little colored figure and now we can take new poses and since and go from the colored figures figure to her dancing again and you can see yeah if again you look careful if you look careful it works well ish there's some places where yeah some poses maybe it's 
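(Stepping back to the flow-warping idea described a moment ago: a hypothetical sketch of how the warped previous frame and the freshly synthesized frame could be blended with a predicted mask — `warp` and the mask prediction are stand-ins, not the actual video-synthesis code.)

```python
import numpy as np

def compose_next_frame(prev_frame, flow, synthesized, blend_mask, warp):
    """Combine a flow-warped previous frame with a newly synthesized frame.

    warp(prev_frame, flow) re-uses content that is already visible in the
    previous frame; blend_mask (values in [0, 1], predicted by the network)
    decides per pixel whether to trust the warped pixels or the newly
    hallucinated ones, which is what keeps the output temporally coherent.
    """
    warped = warp(prev_frame, flow)
    return blend_mask * warped + (1.0 - blend_mask) * synthesized
```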
not seen during training so it doesn't quite know how to do right the background wobbles sometimes a little bit but you know it's it's a start also you notice that we use this colored figure a previous label maps to are all just uniformly colored right it just said yeah this is the label for trees this is the label for cars and so forth here we did it slightly differently net improved results so we have these different colors and they're basically UV coordinates like on the texture map or like if you texture a person and it helps to neural network to know what's where alright so you can figure out what what hands are it always knows which part of the hand you're looking at and it's same thing for all other body parts so that helps quite a bit this the first time we combine machine learning and computer graph is to do I can't turn off so this one was a NURBS so this is actually running in real time so we take as input a game engine that only outputs label maps and then we take these label maps and in real time we translate them using they're not neural network and we apply different styles and this is just one example just a short example that I found so it's really doing graphics using yeah neural network so all the textures and so forth everything comes from the neural network it only sees label maps that our game engine spits out so the game engine is actually not used to produce graphics it's only used to produce label apps and you can take it and produce like a game looking or something like a game right okay so this was all supervised learning so in this case we always had yeah input examples and what the output examples should look like so we knew it for this particular label map this particular output would be desirable then this work it's just a bit older we didn't use this we used we tried to work with it in the unsupervised case we're really we have these two different data sets yeah like the one on the Left no one on the right that are separate it's not no one-to-one correspondence between them they're just happened to be similar except one is a daytime and one is it so now how can we yeah given that there's no correspondents right how can we train a neural network to take a summer on in which taking it turn the day and output an image during the night that corresponds to it but it's at nighttime right so if we let's say for a moment all we do is we train again again and we tell it here my input images right based on this input image you have to synthesize an output image it looks similar to these other nighttime images and it's all we tell it then what might happen is the following yeah I'm gonna yeah the generator it's gonna generate things given the input image x1 but there that are similar to the distribution of the nighttime images however it doesn't mean that there will be a correspondence between the synthesized night time image and the input image right I didn't tell it that they need to look alike all I said is they need the night time in which images need to look like nighttime images but it didn't say it needs to look like a nighttime image but also like the original image right so what would happen is given the state time in which it'll just never would probably just ignore this daytime image because it doesn't really need it to produce a nighttime in which it's just gonna produce random nighttime images which is not what we want but what we want is the chute being correspondents so we make this assumption what you call the shared latent space so again I want to 
create an item which is actually responds to the HMI which was just the case here even though I've never seen different correspondents so it makes the assumption that there's some space in which I can describe the scene I'm currently in right so have if I take a point in this latent space Z I can somehow generate a daytime image as well as a nighttime image and this light in space is sufficient to describe either one of them but what I have then is you know different encoders ting can go from a daytime image and encode it into this latent space and they have different generators that can take a code from latent space and generate either daytime or nighttime image so yeah if we want to translate it then I can just use my echo to encode from a daytime to the late in space and I can use my generator to to encode a night time of it but in addition what we do is to fall we make for these different encoders and generators even though they're different ones we share weights near the latent space so the generator itself yeah generate something then encodes it too late in space and they the generator takes it from the latent space and encodes that sorry and creates an image from it and the layers that are near delayed in space we make them the same for all encoders and generators and that in itself is sufficient to make sure that there is some similarity between the input and the output so now what I can do is after shared space I can go between two different encoders and the way you train it there's there's one additional constraint that you know off and that is if i encode an image and they're decoded so I take a nighttime image i encoded with my encoder and then it decoded again with my nighttime encoder it should correspond to the original image right all I did is so went to the late in space I'm going back to the same domain I should be able to it yeah encoded correctly and the same thing for the daytime image and those constraints itself there sufficient to make this work so now here is input and output again trained in an unsupervised fashion but it still manages to figure out that cars during a day usually at night they have the taillights on disguise black they're rodas or illuminated near myself but in further way might not because my headlights might not go far enough it also learns that sometimes you can see here that some of them says lights yeah somewhere so it's almost just adds lights randomly in the sky it's learned that that's common I also learned that the cars usually all look black or dark at night whereas the taillights might be red or white so this is going day tonight I can do the same thing for winter to summer so in this case here we have training for driving videos of during the winter driving videos in the summer and it learns to go yeah really from winter to summer and it learns this on its own and I'll show you some funky thinks that it's learned so you'll you'll see what it learns that vertical structures are usually trees right but it doesn't really know that it's a difference between these electricity posts and trees so sometimes it puts leaves on those because it looks a bit like a tree right so it thinks well yeah the other things have leaves so maybe they should have leaves as well which is not a bad yeah not a bad thing to think here again you can see put leaves on them but it also learns that where there's snow in the winter right yeah I should remove it so you know usually puts some dirt color there or some you know some grass color which is quite interesting 
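(A rough sketch of the two constraints just described — within-domain reconstruction plus weight sharing near the latent space. The encoder/generator callables are illustrative assumptions, and the full objective of this kind of unsupervised translation model also includes adversarial terms.)

```python
import numpy as np

def reconstruction_losses(x_day, x_night, enc_day, enc_night, gen_day, gen_night):
    """Within-domain reconstruction: encode and decode back to the same domain.

    enc_* map an image into the shared latent space, gen_* map a latent code
    back to an image. Cross-domain translation is then simply
    gen_night(enc_day(x_day)); the layers of the encoders and generators
    closest to the latent space share weights, which is what ties the two
    domains together even without paired data.
    """
    loss_day = np.mean(np.abs(gen_day(enc_day(x_day)) - x_day))
    loss_night = np.mean(np.abs(gen_night(enc_night(x_night)) - x_night))
    return loss_day + loss_night
```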
so it derives all this on its own right nobody told it that you know these are trees and and snow disappears and it's armored it learned that this is what usually happens and this series goes from sunny to rainy again it's learned that the sky is usually more grave and it rains yeah usually and usually there's reflections on the road so it makes all roads slightly reflective I also learned that things are slightly more hazy when it's raining so I did that it also learned that which is actually interesting that near taillights there's often a red halo around them because of the mist all right so you can see makes this brighter but then it doesn't really know to distinguish red car from tail light so there's a red car and now become sir has just it's like glow around it because it also things it's like a tail light again overall given that it's only seen two separate data sets that are not in correspondence it's managed to figure out now that the road should be more you know more shiny or more reflective this guy should be more great and all of this was done overnight automatically right nobody told it that that's what it's going to be so this was a unimodal translation so you could only create one output all right not multiple different outputs then we have an extension of this work which again is unsupervised but it allows to output things in a multi-modal fashion so you can change the style of the output and it works very similar to before again we have an encoder that goes into some shared latent space but in and then you can come from this Leighton space and generate an out but in addition though we have two styl encoders where we automatically try and learn what the style looks like of something that's being coded and when it decoded I can give it a style and the decoder should adhere to the style a given so it could be things like yeah I'm translating st. 
leopards or lions, whatever it is, to cats: I can have different styles during generation, and the different styles correspond to different cats, but they're all cats in the same pose, just looking different. Overall it's very similar to before, except there's this additional style encoder that we use, and we play the same trick we played before: if I encode something with a style and decode it with that same style, it should look the same — and again, that's essentially sufficient to make this all work. The structure is very similar to before, except now we use a somewhat more advanced network with residual blocks and a content encoder, which we didn't have before, and for the style encoder we use a trick called AdaIN, adaptive instance normalization, where for each layer I enforce a particular mean and standard deviation — that's essentially sufficient to change the style, and it's been shown to work in a number of use cases (a small sketch of AdaIN is below). So what can I do with this? In this case we created a synthetic data set, so we actually know what the ground truth is: even though we learned without correspondences, we actually have them — we have input images of shoes, but only their edges, and we also have the ground-truth images for them, even though we trained in an unsupervised fashion. Now I can ask it: given this edge image of a shoe, give me example shoes — and I'll go through different styles, and it gives me shoes that all adhere to the edges but have different colors, and they're sensible; it's learned the sensible styles of shoes. [Answering a question:] Yes — it's not entirely obvious. If you ignore the style for a moment, there are really only two constraints. One is that if I encode an image and decode it again with the corresponding encoder and generator, I should get the original back: I have a daytime image and I want the same daytime image back, and for a nighttime image I should get the same nighttime image back — that's the main constraint. In addition, you have an encoder for daytime, an encoder for nighttime, a generator for daytime and a generator for nighttime; these are all separate neural networks, but they share weights near the latent space — the same, identical weights. So the beginnings of the encoders do something that's domain-specific to daytime or nighttime, but once they're near the latent space they have the same weights, which forces them into a space that's similar; at that point it's independent of whether it's daytime or nighttime, and the same for the generators — when they start generating things, it's independent of the domain, and only then do they have weights specific to daytime or nighttime. That's the reason the two domains end up in correspondence: we simply force those weights to be the same during training, and that's really the main insight. It's the same here again, except with the addition of the style encoders. So now I can create all kinds of shoes, and I can do the same thing for bags, and it learns the interesting styles.
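(To make the AdaIN idea concrete: a minimal NumPy sketch of adaptive instance normalization on a single feature map — normalize the content features per channel, then impose the style's per-channel mean and standard deviation. Shapes and names are illustrative.)

```python
import numpy as np

def adaptive_instance_norm(content_feat, style_mean, style_std, eps=1e-5):
    """AdaIN: re-scale content features to the style's channel statistics.

    content_feat: array of shape (channels, height, width).
    style_mean, style_std: per-channel statistics of shape (channels, 1, 1),
    e.g. predicted from a style code. Enforcing a particular mean and
    standard deviation per layer is what carries the 'style' here.
    """
    c_mean = content_feat.mean(axis=(1, 2), keepdims=True)
    c_std = content_feat.std(axis=(1, 2), keepdims=True)
    normalized = (content_feat - c_mean) / (c_std + eps)
    return style_std * normalized + style_mean
```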
So you can translate this image of the edges of a bag into various different bags. And now we can do the same thing that we did before — let's look at the bottom row: we can go from summer to winter or winter to summer, except now, because it's seen lots of examples — some with more snow and less snow, some with more leaves and fewer leaves — it has actually learned to use these different styles. So I can go from winter to summer, but also maybe to a bit more of a spring look with fewer leaves, and I can go from summer to winter with more or less snow — that's what it's learned in the mapping. We also did this from Cityscapes, which is realistic driving imagery, to synthetic driving images, and it's learned styles like — I think this is rain, there's snow, and there's near dusk. And the same thing here, going from synthetic to realistic images of city scenes, it's learned there's sort of cobblestone and different pavements and different looks.

We can do the same thing for applying photographic style, in a way. Here we've given it lots of summer and winter images of, you know, Yosemite, but some of them were taken at night, some at dawn, some when it was nice out, so it learns different styles. When you go from summer to winter, yes, they all have a little bit of snow, but some of them are during a bright day, some at sunset, and so forth; the same when you go from winter to summer — some were taken at night, some during the day — and it's learned these different styles. Now, the different styles are not disentangled, meaning I can't tell it to make it nighttime and add more snow, because it doesn't know about these concepts separately; it's one style code which encodes all the different aspects of the image, and you can't actually disentangle it here. There's other work you could apply for that, but in this case we didn't disentangle it.

This is an example where we translate from a dog video to a cat video, but since we have a style code we can go through different types of cats and then animate the dog again — and all it has seen is really videos or images of dogs and images of cats, and it's learned how to translate them. Here we go from house cat to big cat, but we interpolate in the latent space — sorry, we interpolate the style code — so it's the same cat you always see, or the same pose I should say, but different types of cats. Going from cat to dog, again it's always the same pose but different styles. What's interesting is it actually even learned that some animals, or some styles, are furrier, so they're bigger, and it actually grows them a little bit to accommodate the fur and shrinks them again, which is quite interesting.

Of course, once we have this we can also do what's called style transfer, meaning I want to apply a particular style given an example image, similar to the very first part of the talk. So here I have edges of potential shoes I want to synthesize, and I can say I want this style, or this style, or this style, and apply it to all kinds of different shoes — the same style applied to different shoes. The same thing can be done here, where we go from big cats to house cats, and I can say I want this particular type of cat, but in the pose of the big cat that's on the left.
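The house-cat-to-big-cat interpolation just described works by blending style codes while keeping the content code fixed. A minimal sketch of that idea, assuming trained content/style encoders and a decoder with the interfaces named here (the names and the linear blend are assumptions, not the paper's exact recipe):

```python
import torch

def interpolate_styles(content_img, img_a, img_b, enc_c, enc_s, dec, steps=5):
    """Keep the content (pose) fixed and sweep the style code from image A to B."""
    content = enc_c(content_img)           # pose / structure of the animal
    s_a, s_b = enc_s(img_a), enc_s(img_b)  # two style codes, e.g. two cat types
    outputs = []
    for t in torch.linspace(0.0, 1.0, steps):
        style = (1 - t) * s_a + t * s_b    # linear blend in style space
        outputs.append(dec(content, style))
    return outputs
```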
Again, this is all in an unsupervised fashion, but it learned what the correspondence should be — it learned that these poses are similar without being told explicitly. There's a comparison between different methods; in all fairness, not all of them are actually geared towards this particular use case. Ours worked fairly well, which is not surprising — we made it such that it would work well in this case — whereas some of the more traditional neural style transfer methods are not quite meant for it. This is actually the first work that I showed; it's not really meant for this case, so it doesn't work as well, because these methods are meant for just applying a style, not for changing a particular animal as we did in this case.

Okay, so this concludes my talk. I've presented a lot of methods for translating one image into another image in various ways — supervised, unsupervised, unimodal and multi-modal — and that's it, thank you. [Applause]
MIT_6S191_Introduction_to_Deep_Learning
MIT_6S191_Language_Models_and_New_Frontiers.txt
[Music] okay so this is going to be the final lecture given by Alexander or myself in the course and every year this year every year this lecture is my absolute favorite one to give because changes every year and it highlights kind of what are the greatest uh latest and greatest advances in the New Frontiers of deep learning and AI while also complementing that with discussion about what are some of the limitations that we should be mindful of when working with and developing this technology so before we get into the really fun stuff we're going to just do a little bit of housekeeping exciting housekeeping starting with the fact that we have a long-standing tradition of course t-shirts in this class and all of you all for your participation and being here are going to be um able to receive a t-shirt we're going to plan to deliver them and hand them out tomorrow after the end of the class so please be here in person to receive uh your T-shirt okay so kind of where we are standing right now in terms of the schedule and where we are at after this lecture tomorrow we're going to have two awesome and really exciting guest lectures more on that in a little a little bit and then on Friday we're going to be really focused on the final project pitch presentations and competition and so with the lab sessions today and tomorrow hopefully you will have time you've had you will have incubated your ideas for the final projects a little bit and are welcome to come discuss with Alexander and I discuss with other course instructors and get feedback on your ideas there are also a couple of upcoming deadlines with respect to signing up to be in the list for the uh project proposals as well as submitting your slides for the project proposals on Friday a little bit more on that too so you all have been working on these two software Labs with respect to the music generation lab and the computer vision facial detection lab the deadline to submit those labs your entries for entry into the lab competitions is tomorrow night the complete instructions details on how to do that are on the labs themselves as well as on the course syllabus we've received a lot of questions about the final project presentations and also what is required to get credit for the class if you signed up for credit so going to talk about that a little bit the first option for receiving credit is the proposal presentations and these are going to occur on Friday and the concept here is we're going to have kind of a shark tank style pitch uh competition for your ideas about new deep learning research projects Ventures anything of that sort uh that you want to share with the group and also present to us as judges so those are going to occur on Friday you're going to be held to 5 minutes strict and tonight you have to sign up with your intention and your group list to uh participate and again those instructions are on the syllabus can be a group of one or up to five we're going to have our instructors as well as a couple of our guest speakers who are going to be judging the final projects and we have a lot of availability for awesome prizes that you can win as part of that so please please please sign up every year this is an really fun part of the course if you're registered for credit and you don't want to or can't participate in The Proposal competition you have another option to receive credit which is a onepage written report of a recent deep learning or AI paper so here we ask you read the Deep learning paper you know form your opinion and 
your thoughts about it and summarize that in this one-page report and submit it again instructions are on the syllabus and that too is due Friday in both cases the class is graded past fail the competition is only in the sense of receiving prizes for the final uh project presentations tomorrow we are going to have two awesome sets of guest lectures from a couple of our course sponsors the first is going to be from uh Douglas eek who is a senior director at Google Deep Mind and he has done really really amazing work in music generation music recommendation systems and now leads all of Google deep mind's generative AI for media team so overseeing images audio uh music media and all different forms and he's super Dynamic super engaging and I think it will be a really fun lecture and then that will be immediately followed by a lecture from Comet ml from niik lascaris and Douglas blank so if you've been going through the software Labs you've hopefully played around with this tool and interface Comet ml which allows for automatic logging of your machine learning experiments and they're going to share valuable insights on machine learning in the real world what skills they feel that you know if you're in interested in going into ml industry jobs um would be valuable to develop there and some of the best practices for actually putting deep learning into practice in Industry again please attend live to get shorts and listen to these awesome guest speakers all right so now let's start with the technical content for this lecture so so far in introduction to deep learning we really talked about this idea of how deep learning and AI have been revolutionizing so many different fields different research areas from advances in autonomous vehicles and Robotics to biology medicine and health care to gameplay and reinforcement learning um and a whole range of other applications in our world today and so hopefully you've now got an appreciation for not only the areas in which deep learning has made some of these incredible impacts but also the fundamental workings of these models and the intuition and the math behind how they work and specifically we've primarily dealt with two classes of problems in this class the first where we are given data whether that's images sequence data data of other modalities and we seek to make a forward prediction or decision given that data we've also dealt with what we call inverse problems where we have some output at the end and we want to try to maybe generate data according to that and so in both these instances I think a really powerful framework by which we can think about neural networks is that of function approximators right they're learning to try to infer this mapping from data to Output or mapping of a probability distribution underlying that data but in both cases right this is a notion of function approximation or um probability estimation and to understand this in a little more detail I think it really helps to go back to one of the foundational theorems of deep learning in neural networks which dates back to 1989 and this is this theorem called the universal approximation theorem and it states quite simply that let's say you have a neural network it's a feedforward densely connected neural network and all it consist consists of is a single hidden layer you have some continuous function that you want to approximate with this network what the universal approximation theorem States is that if you have just one layer and you believe that any problem can simply be 
reduced to input mapping to output via a continuous function, then there exists some neural network that can learn that function to some precision and therefore solve that problem. Now, this may at first seem like a very powerful and perhaps surprising result — just one layer can approximate any function mapping input to output within some precision, within some error — but if we look a little further, there are a couple of caveats here. The first is that the universal approximation theorem places no restrictions on how large this layer is going to be: you could need an arbitrarily large number of units, of neurons, in this layer to get your approximation. Secondly, this theorem makes no guarantees on how well the model actually generalizes beyond the range of data it has seen — what its performance is going to be.

This time point of around 1989, when neural nets were starting to be theorized and the universal approximation theorem came out, also puts things into context within the broader landscape of the history of AI. We're in this moment right now where we have very powerful AI systems — which we're going to talk about later in this lecture — that are broadly useful across many different types of tasks. But that hype, that excitement, wasn't always there; in the history of this field there have been notable periods where people were very skeptical about the power of neural networks and deep learning. I think that helps to contextualize both the current moment and the past: we may be riding waves of great, even exponential, growth, but that's always contextualized by times when the power of these systems was really called into question. That's why, in the first part of this lecture, we want to focus on some of the limitations that come with neural networks and deep learning, because like any technology, it's important to be aware of those limitations so that we can best use these systems in practice.

The first example I always like to highlight is this notion of generalization: what do we mean when we talk about a neural network's capacity to generalize to new domains and new regimes? To get at that, I'm going to highlight an example from a past paper where they posed a really simple experiment. They took images from a large-scale image data set — we have ground truth labels for the classes of these images — but now, for every image, we roll a die with K sides, where K is the total number of possible classes in the data set, and we use this random roll to scramble the labels. The new labels we've randomly assigned no longer correspond to the truth in the image, and we can even have instances where two images of the same class — say these two images of a dog — are reassigned labels that are distinct. It's complete randomness, complete scrambling of the labels. Now let's say we use this scrambled data as a training set for a neural network model. The authors of this work did exactly that: they trained a neural network to fit a sample of data from this large image data set, ImageNet, ranging from the original ground truth labels to increasing amounts of randomness in the labels, all the way to completely random. As you may expect, as you increase the randomness in the labels, the accuracy on the test set — where you really care about the model's performance — drops off. But what was really interesting was what they saw when they looked at the accuracy on the training data: no matter how much you randomize the labels on the training set, you can eventually fit a neural network that gets very high, close to perfect, accuracy on it. What this illustrates is that a powerful neural network can perfectly fit completely random training data.
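A minimal sketch of that label-randomization experiment, heavily hedged: a tiny MLP on synthetic stand-in data rather than the ImageNet-scale setup used in the paper, just to illustrate that training accuracy on completely random labels can still be driven toward 100%.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
n, num_classes = 512, 10
x = torch.randn(n, 3 * 32 * 32)             # stand-in "images"
y = torch.randint(0, num_classes, (n,))     # completely random labels

model = nn.Sequential(nn.Linear(3 * 32 * 32, 512), nn.ReLU(),
                      nn.Linear(512, num_classes))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(2000):                    # enough steps to memorize
    loss = nn.functional.cross_entropy(model(x), y)
    opt.zero_grad(); loss.backward(); opt.step()

train_acc = (model(x).argmax(dim=1) == y).float().mean().item()
print(f"training accuracy on random labels: {train_acc:.2f}")  # tends toward 1.0
# Accuracy on fresh random data/labels would stay near chance (~1/num_classes).
```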
And again, this gets at the question of what function approximation the neural network is really learning. All the universal approximation theorem states is that neural networks are effectively very good at this task: we can always learn a high-likelihood estimate of our data, such that if we give the model a new data point within the regime of its training data — shown here in purple — the neural network will produce a maximum likelihood estimate of the output of the function it's trying to map. That's all well and good, but what if we now move beyond this regime of the training data, outside those bounds? Traditionally we have no guarantees on how the neural network will perform outside this regime. This gets at the question of how we actually understand and calibrate ourselves to the generalization capability of these networks — how do we know when a network does not know? That is a very fundamental question in deep learning that is still very much an open research question, and one that we are all thinking quite a lot about.

What is important for you in practice is that, with all the hype around deep learning, AI and ML, there's this idea that deep learning is the be-all and end-all cure for all our world's problems — an alchemy where you can take a black-box neural network, throw in your data, turn the crank, and have it spit out some magical solution. But there's a notion we like to call garbage in, garbage out: your model is only as good as the data it sees, meaning that if you feed trash in, very likely you're going to get trash out. For someone like myself, steeped in a very rich and problem-specific domain in biology and medicine, there's a lot of excitement around these ideas of deep learning and ML — for me as well — but it always comes back to what we're actually trying to measure, what we're actually trying to represent in these networks, and how well suited our data are to addressing those questions.

This idea of generalization, data bounds and uncertainty gets at one of the most pertinent failure modes of neural networks today. To highlight that: let's say we have this black-and-white image of a dog, we build a wonderful CNN, and we train our model to color this image in — to fill in the likely color given the black-and-white input — and this is what it spits out as a result. This is a real example. Anyone notice something curious going on with this output? Say it a little louder — the ear, the ear is green, yes. What else? The tongue. And the background, yeah. Some things are looking a little bit iffy.
So if the model has only ever seen images of dogs during training to colorize, and dogs are happy creatures that often have their tongue out, there can be biases due to these over-representations in the data that end up in the model we learn as a result. That's obviously not ideal for the robustness of a model like this — for doing reasonably well across new instances. I've shown this example of colorizing images of a dog, but the same concept has much bigger implications when we think about robustness to out-of-distribution instances — instances the model may not have seen during training. Very tragically, some years back there was a report of a driver who was killed following an accident with an autonomous vehicle operating in autonomous mode, and it turned out that the driver had actually reported multiple instances where the car was swerving towards a construction barrier — the barrier that ended up being the site of the crash. When they went in and investigated why this occurred, it turned out that in the Google Street View images used to train the model there was no presence of this construction barrier, which showed up some years later: the car, passing by in autopilot mode, was swerving or behaving erratically at exactly the spot that corresponded to a point of missingness in the training data.

This is fundamentally driven by the machine learning question we think about as uncertainty: how do we know when our network does not know, and how do we characterize its failure modes? This is particularly important for safety-critical applications of AI — autonomous vehicles and driving, medicine and health care, facial detection systems — and also more generally, in instances that may not necessarily be safety-critical but can still be characterized by imbalances in the data, sparsity in the data, or measurement noise from the sensor or instrument used to generate the data. So it's a very real problem to characterize these notions of uncertainty in neural network models and hopefully mitigate them.

A third failure mode that is always fun to consider, but has very real implications as well, is the notion of adversarial examples, or adversarial attacks. The intuition here — a mind-blowing concept initially — is that we can take a data example, say this image on the left, and apply an almost undetectable perturbation of noise such that the result is imperceptibly the same: to the human eye the image on the right looks basically identical to the image on the left. But when you input both of these into a standard neural network image classifier, the predicted class is completely different. This small perturbation has fooled the classifier, and it comes down to how the perturbation is actually constructed. Remember that in standard neural network training with backpropagation, our goal is to update the weights of the model using gradient descent: we have some loss function, and the model makes small shifts in its weights to decrease the loss and optimize the objective — how do we change our weights W to minimize the objective J, the loss, fixing the data X and the true label y? To construct adversarial perturbations we do the converse operation: we ask how a small change in the input increases the loss maximally, keeping the weights of the network fixed and the target label fixed. This lets us infer the difference between the changed input and the original input, and so construct that small perturbation — seemingly random to our eye, but actually adversarially constructed — that can be used to fool the network.
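In its simplest one-step form, the "increase the loss with respect to the input" construction just described is the fast gradient sign method. A minimal sketch, assuming some trained `model` and a correctly labeled input (this is one standard way to realize the idea, not necessarily the exact attack used in the lecture's figures):

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, eps=0.01):
    """One-step adversarial perturbation: move the input in the direction
    that increases the loss, keeping the weights and true label fixed."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()                      # gradient of the loss w.r.t. the input
    x_adv = x + eps * x.grad.sign()      # small, nearly imperceptible shift
    return x_adv.detach()
```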
It turns out that you can not only do this with 2D images and other data modalities, but actually create 3D physical objects in the real world that fool an image classification algorithm. This is a physically realized object constructed according to information from an adversarial attack, and it ends up completely fooling these discriminative image-based models. Finally, the last limitation I'll introduce is the concept of algorithmic bias: neural network models, and AI systems more broadly, due to these uncertainties and these differences in representation in their training data, can result in significant biases that play out in the real world — and hopefully, in exploring lab 2 of the course, you got some hands-on experience understanding this in the context of facial detection systems.

The examples I just highlighted are three or four out of a long list of limitations and open research questions that we are still thinking about and working on in the field. When thinking about these limitations, it's important to recognize their implications for the models and systems we have today, but also to think about the opportunities they present, from a research and development perspective, to create new solutions and advances that address some of these concerns. For the remainder of the talk we're going to transition towards that latter question, thinking about the new frontiers that have emerged in recent years and brought the power of deep learning and AI to a new level — specifically the ability to encode knowledge in different forms through general-purpose learning, and to use that to generate new examples, go beyond the data, and create very powerful generative systems.

So for the first part of this new-frontiers portion, we're going to pick back up where we left off yesterday with generative modeling and generative AI, and our motivation is that today we're truly at an inflection point in the capabilities we have with these generative systems; we're going to go further into that in two respects in the remainder of this lecture. Thinking back to yesterday, we introduced two classes of generative models, VAEs and GANs, and I briefly mentioned that these models have a few key limitations: they can be very unstable to train — we chatted about this a little in the context of GANs — they're susceptible to what we call mode collapse, meaning they get really good at producing the average instance in a data set, and it's very hard to get them to extrapolate beyond the training data they've seen and generate new out-of-distribution examples.
These issues get at some fundamental modeling challenges with respect to the novelty of the generations, the efficiency and stability of training, and how high-quality the generations really are. In recent years there has been a new modeling framework, very related to these concepts, that has taken the field of generative modeling and generative AI by storm, and that is the class of models we call diffusion models. If you're familiar with some of the text-to-image models you may have played with, like GLIDE or DALL-E, which take in a text caption and generate an image corresponding to that caption, the backbone generative architecture behind them is a diffusion model. So we're going to talk about how diffusion models work — the intuition, the core principles — and also a little about their applications and real-world use cases.

In lecture four we saw that the unifying component of VAEs and GANs is that they perform generation in a single shot: they take either random noise or a compression of the data from a latent space and decode, or generate, back out in a single step. The core principle behind a diffusion model is fundamentally different. Rather than doing this one-shot generation, diffusion models generate new samples iteratively, again leveraging the power of noise and stochasticity, but instead learning a process that repeatedly removes noise to refine the generation stepwise — and as a result they are able to produce very high-quality generations. This idea of iteratively denoising, iteratively removing noise, is at the core of diffusion models.

To break it down, the diffusion process has two key components. The first is what we call the forward process, or noising process: we take our training data — let's say images — and progressively add noise stepwise, slowly wiping out the details in the data until we have examples of pure random noise. The important thing to keep in mind is that we control this noising process: we control the function by which we add noise, and we control how many steps of noise we add — how long it takes to go from data to random noise at the end. Our task is then to reverse this noising. These noised examples now form training data for a model that can learn the reverse process: going from noise to data, gradually denoising stepwise until we get a clean data sample out.

Breaking it down further, underlying this whole process is that forward noising operation: given an image on the left, we sample a random noise pattern according to a function and then progressively add more and more noise, time step by time step, so that over these iterative time steps the result becomes increasingly noisy. What is really clever is that we've now effectively constructed training examples where we have our true original image, random noise at the end, and all the iterative steps in between, related to each other stepwise by that little residual of noise that was added. Yes, question — is there a function that defines the randomness? Yes, there is a defined function that determines the noising operation. When you're looking at continuous values, like pixels in an image, the standard thing to do is Gaussian noise. With data that's not continuous — discrete data, say language — the translation is not immediately obvious, because you can't add continuous noise like Gaussian noise; there are other ways to corrupt the data in the discrete case and still apply the same concept of stepwise corruption and learned un-corruption.

So now we've constructed these iterative examples, and our goal with the neural network — the backbone model at the core of our diffusion model — is to ask a simple question: given a noised image at a particular time, can we learn to estimate the image at the prior time step, which had a residual of less noise than what we have currently? Given an image at this time, estimate the image at the prior time step — that is the objective, the task we're going to train a neural network to learn. So looking at this example where I have two images differing by just one time step of the noising operation: any ideas how in practice we could form a loss function to train a neural network to do this denoising? Yes — compare — exactly: your idea was to compare the result at the end of the denoising with what was input, and this is essentially what is done. Initially, what was tried was to predict the image outright — you could try to learn the pixel values directly — but that turned out to be a pretty hard objective, and instead what's done in practice is to compare those and learn to predict just the residual of noise that was added. This is a pretty simple objective that enables stable training of these diffusion models in practice.

Yes — when we do the reverse process, do we know the distribution of the noise function or not? Great question: does the network know the distribution of the noise function? No — that's what it's trying to model and learn. We're asking it to learn a probability distribution over this data landscape that reflects these stepwise differences, and this is really powerful, because now you can condition on a particular time step and generate the prior time step. One more question, and then I'll continue — does the noise mean that your latent variables are not recognizing structure, and that larger values of certain latent variables correspond to larger structures? Another great question: do we still have a sense of structure in the distribution that is learned? You can still get an embedding, an encoding, of your data from a diffusion model, so it can function in practice like this kind of latent variable learning; the difference is that we're not imposing that strict lower-dimensional latent bottleneck. It still gives us a mechanism to get a good encoding of the data, but it's a different setup, in that our task is to model this residual difference as a way of learning the structure in the data. All right — so this reverse process is how you actually learn the weights of the backbone neural network architecture in a diffusion model.
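A heavily simplified, DDPM-style sketch of the noise-prediction objective just described, together with the iterative denoising loop used at generation time (covered next). The linear noise schedule and the generic `model(x_t, t)` noise predictor are assumptions for illustration, not the specific setup used in the lecture's examples.

```python
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)            # noise schedule (assumed linear)
alphas = 1.0 - betas
alpha_bar = torch.cumprod(alphas, dim=0)         # cumulative signal retention

def training_loss(model, x0):
    """Forward-noise a clean sample and train the model to predict the noise."""
    t = torch.randint(0, T, (x0.shape[0],))
    eps = torch.randn_like(x0)
    ab = alpha_bar[t].view(-1, *([1] * (x0.dim() - 1)))
    x_t = ab.sqrt() * x0 + (1 - ab).sqrt() * eps  # noised sample at step t
    return ((model(x_t, t) - eps) ** 2).mean()    # predict the added residual

@torch.no_grad()
def sample(model, shape):
    """Start from pure noise and iteratively denoise, one step at a time."""
    x = torch.randn(shape)
    for t in reversed(range(T)):
        eps_hat = model(x, torch.full((shape[0],), t, dtype=torch.long))
        coef = betas[t] / (1 - alpha_bar[t]).sqrt()
        x = (x - coef * eps_hat) / alphas[t].sqrt()
        if t > 0:
            x = x + betas[t].sqrt() * torch.randn_like(x)  # fresh stochasticity
    return x
```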
Now, when it comes time to generate a new sample, a brand-new data instance, you take your trained diffusion model, start with complete random noise, and perform this stepwise decoding operation to iteratively generate the sample step by step, repeating the denoising operation until eventually you can generate a new sample. I slightly misspoke there when I said the word "reconstruct," because this is different — and you may be asking: if I start with the same set of random noise and run it twice through the model, will I get different generations out? The beauty of diffusion is that the answer is yes, because you have stochasticity in each of these operations; the neural network has learned this reverse process such that you generate new samples stepwise and end up with new generations at the end. So putting it all together: diffusion models are trained to predict and infer this denoising process, and then we sample brand-new generations by taking the trained model and performing the iterative denoising to get back into the data space. The real power of this approach is that in starting from random noise you have maximum variability, but by learning that residual stepwise process you make the task more stable and more digestible for the model to learn, and as a result you get very high-fidelity, high-quality generations starting from something that captures a high degree of variability. What's really striking, as I alluded to, is that different instances of noise will produce very different generations as the end result.

And indeed, as we introduced in lecture one, diffusion models are what underlie many of the text-to-image generation models out there: these models let us take representations from data like language and use them to condition, or guide, the diffusion model's generations to produce instances based on these prompts — you've probably seen many examples of this, and here I'm highlighting just a few. So far we've really talked about images as the example use case to get our heads around the intuition behind these models and how they work. What about other modalities or applications? To me — and I'm very biased here — one of the most exciting spaces to think about the power of these generative models is biology, chemistry and molecular design, and in recent years we've found that diffusion in particular is an especially powerful framework for this. People have shown very promising results thinking not about pixels in 2D or 3D, but about the coordinates of atoms in space, and designing diffusion models that are very similar in concept but can generate instances of these position-wise coordinates to create new molecular entities. In some of my own research over the past couple of years, we've focused on developing new diffusion modeling frameworks for questions in biology, and in protein design in particular, where the motivation is that we're seeking to design new biomolecules — proteins — that can expand our therapeutic, chemical and functional capabilities. To give you a very brief primer on the concept: proteins are kind of the actuators of all of biology, and they're encoded by a sequence of chemical building blocks that we call amino acids.
The paradigm is that this sequence — this language of amino acids — dictates a protein's three-dimensional structure, and that structure dictates its function. What we've done is develop new diffusion-based models that can generate new protein structures: thinking about, say, the set of angles that represents a protein's configuration in 3D space, you can think of the random state as a completely unstructured, unfolded state of the protein, and we can build a diffusion model that looks at these data and learns to generate structured sets of angles that correspond to the backbone of a protein in 3D. The example in this video is a visualization of that denoising process over time, where the model makes iterative refinements to produce a backbone structure of a protein. Beyond this, we can think not only of structures in 3D, but of how to expand this to much larger-scale data sets on the scale of all of evolution. We've extended this idea to treat protein sequences as a language, where we take a data set of over 50 million unique protein sequences — from many different organisms, many different functions, all across the tree of life — and build a generative model that learns to generate new protein sequence instances, so that we can generate over this language of biology and realize new sequences that hopefully have strong functional capabilities in the real world. And it's not just myself and my colleagues and teammates thinking about this; this is really a huge wave in biology right now, where we've arrived at the point where some of these AI-generated protein designs have been validated experimentally, and people are starting to think about therapeutic applications — use cases like designing binders to the COVID spike protein, for example, that could in theory be deployed in a therapeutic context. All to say, this is a tremendously exciting time for generative models and their intersection with different fields.

In the final portion of this lecture, and to close out this course, we have a new set of content on what is hopefully a very exciting and pertinent topic, and that's large language models — a class of models that have taken the world by storm over the past year. I think we posed this question earlier, but pretty much all of us have interacted with models and tools like ChatGPT that allow us to interact with very powerful AI systems through natural language as our interface. So in the remaining portion of this lecture I want to break down this question of what large language models, or LLMs, are: how do they work, what are they good at, what are some of their limitations, and where do we go from here? Very simply, at the highest level, large language models are a class of neural networks that are very, very large — meaning they have extraordinarily high numbers of parameters and sets of weights — and they're trained on very, very large data sets, and it's this combination of huge data and huge scale that has given rise to the powerful capabilities of these models. So how does an LLM like GPT or ChatGPT fundamentally work? What is so mind-blowing about these models is that the way they're built and trained is a very intuitive concept — in fact one that we've already introduced and seen back on day one.
The concept is this: you take a big, general-purpose data set — unannotated, large amounts of text from the internet and other sources — and you split that text up into chunks, what we call tokens. A token is not necessarily a word — oftentimes it is — but there are different, clever ways to do that chunking that have been shown to work well for these types of models. You then have a model that is very large, on the order of hundreds of billions of parameters or more — meaning distinct weights — and the backbone of these models is very dominantly a Transformer architecture, which we talked about on day one of the class. How do you actually train this model? A model like GPT is trained on a pretty beautiful task and objective, which is the following: given a sequence of these tokens, predict the next token, and update the model's parameters — its weights — based on how good it is at that next-token prediction task. If you recall back to day one, lecture two, this is very similar to the next-word prediction task I talked about in the sequence modeling lecture.

Breaking it down stepwise, how does this next-token prediction work for LLMs like GPT? You start with your raw text data. The first step is the chunking procedure we call tokenization, and often those tokens are converted to an embedding — again a concept from day one — converting text into a numerical representation we can pass into our neural network. The large language model sees these tokens and is asked: given a set of tokens, what is the likely next token? You can think of this as effectively shifting the prediction over by a window of one — you can see that visualized here, where the predictions of the next tokens are shifted by one relative to the individual tokens in the input example. What the large language model then generates as its prediction is a distribution of probabilities over all possible tokens in its vocabulary: what are the likely next tokens, and what are the probability scores across that set of tokens? In practice we can use a function like a softmax, which forces the outputs to be values between zero and one, so that we're guaranteed a probability distribution over the tokens. To evaluate how good the prediction is, all we have to do is compare, via a pretty simple loss like a cross-entropy loss, the predicted probability distribution against the true next token. And that's it — it is such an elegant principle, next-token prediction — but the beauty is that by combining very large-scale data sets with this objective, the idea is that the model will learn a pretty good representation of the world if it sees text data from all different sources, all different facets of human-created content. Then, once you have your trained model in hand, you can use it at deployment, at inference, to generate new text based on what we call a prompt, or input — and the backbone that's running this is again that same next-token prediction task, which is what's actually being used to generate text when you interact with something like ChatGPT.
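A minimal sketch of the shift-by-one, next-token-prediction objective just described, assuming some `model` that maps token IDs to vocabulary-sized logits (a Transformer in practice; the interface here is an assumption for illustration):

```python
import torch
import torch.nn.functional as F

def next_token_loss(model, token_ids):
    """token_ids: (batch, seq_len) integer token IDs from the tokenizer."""
    inputs = token_ids[:, :-1]            # the model sees tokens 0..n-1
    targets = token_ids[:, 1:]            # and must predict tokens 1..n
    logits = model(inputs)                # (batch, seq_len-1, vocab_size)
    return F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                           targets.reshape(-1))

# At inference, the same objective drives generation: softmax the logits for the
# last position, pick or sample the next token, append it, and repeat.
```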
And so the capabilities that are feasible right here and right now with these large language models are truly significant: their ability to manipulate and master natural language, to do things like knowledge retrieval, to write and create new content, and to guide planning, all through the interface of natural language. That being said, there are key limitations that still exist with these LLMs — things they're not so good at today. Often they produce nonsensical generations, and so we still have the question of how robust the model is: can it convey to us when a generation is good and confident versus when it is more uncertain? It may produce generations in which it's confidently wrong, or that are completely ungrounded or nonsensical — what are often called hallucinations — and we want to detect these reliably to refine our interaction with the model. Furthermore, there's the big question of how, with a powerful system like this, humans can be trusted and instructed to set up reasonable guardrails around these models to mitigate potential adversarial attacks or jailbreaks that could derail the model and send it off the rails. And finally, in terms of fundamental numerical and mathematical capabilities with respect to logic, these models still demonstrate some key shortcomings. The theme here is that this high-level thinking process is still in its infancy in terms of how these models generalize and perform.

But one thing we do know is that interesting and curious things happen as these models get larger and as the data sets they're trained on get larger, and this gets at the notion of emergence: a property or ability of these large language models that is present as you scale bigger and bigger, but not present in something smaller. It turns out you can describe this precisely and mathematically by what we call scaling laws, where people have charted out a relationship between the size of the model — how many parameters it has — and how well it performs on cognitive types of tasks. So there's this question of whether scale is going to be all we need: is it true that we can scale these models to a very large point such that the capabilities become greater and greater? People have already started to chart this out with this notion of emergence, and made observations that as you scale the parameter count and the size of these models, the capabilities do seem to increase.

All told, we're getting at this very powerful idea, this notion of a foundation model or foundation AI system: broadly, a model that can be trained on these unannotated, large, general data sets but can be adapted to perform well and robustly on a wide range of tasks. Beyond the paradigm of a classifier model for classification and a regression model for regression, can we arrive at more general-purpose systems that function more as reasoning engines? This starts to get at really big questions: how can we interface with these models and AI systems most effectively, to use them as tools that work alongside us and our own creative process to help tackle the different problems and questions our society faces? And so with that I'll leave you with that thought and things to ponder on. It has been a pleasure to talk about this, and we're all very open and curious about these questions — I know I am — I hope you
are and looking forward to continuing the conversations uh in the remaining hour and also throughout the next couple days thank you
MIT_6S191_Introduction_to_Deep_Learning
MIT_6S191_2018_Deep_Learning_A_Personal_Perspective.txt
So a lot is being written and said about deep learning and the new AI wave, and not everything is true. Here's one of the things that's not true: the idea that machines program themselves these days, and all you have to do is put mediocre data in, turn the crank, and out come excellent results. But there is what I think of as true magic in deep learning, and here's what fascinates me most about this field: deep learning lets us solve problems we don't know how to program — problems we don't actually understand in sufficient detail to program a solution.

The first time I witnessed this was quite a while ago, around 1992. I was at ETH in Switzerland working on a reading machine for the blind, and we made good progress on printed text, but we got nowhere on handwritten text. So I was reading through paper after paper with promising titles, just to be disappointed time and again — most people back then worked on, for example, capital letters from A to F, did tiny variations to the bitmap, and then called it font-invariant. Then somebody pointed me to the work of Yann LeCun, and I looked into it and was very fascinated, because he was working with actual data from the US Postal Service — no cheating possible — he claimed to have excellent results, and his methods were comparatively simple. So we rushed to implement this — well, first we had to rush to actually collect data — and then trained the network, and it felt like magic: it could actually read and do this seemingly impossible task. A year later I was fortunate to join that very team at Bell Labs.

I've divided my presentation into two parts: in the first part I'll talk about the early deep learning work at Bell Labs, and in the second part about our current work at NVIDIA on self-driving cars. The period from 1985 to 1995 was a ten-year stretch that was incredibly productive: that team not only created convolutional networks, it also created support vector machines, SVMs, laid the foundation of important machine learning theory, and created several generations of neural network chips. This is the building in Holmdel, New Jersey — a gigantic building that housed about six thousand employees; 300 of those were in research and 30 of those were in machine learning. You probably recognize several of the names. Since as researchers we tended to show up late in the day, we always had to park way out here, which added about ten more minutes to your commute just to walk into the building and to your office. There are these two ponds here — can you imagine what they're for? They're not just landscaping: those are the heat exchangers for the air conditioning of the building, or they used to be; later they were replaced with cooling towers.

This is the very first data set that I worked with, collected among the researchers themselves. Later I got data from the US Postal Service, and then that grew into NIST and then MNIST, which is still very widely used today. A lot of the work was about what structure to put into the learning machines — what prior knowledge we equip them with to perform the task well. If you look at this example here: you need to build a classifier that can distinguish between the red and the green classes, and you wonder what class this X belongs to. If you take location — east, west, north, south — as your criterion to classify, then you would probably say the X belongs to green.
But if you understand that these points are actually real points on the surface of the earth, and it happens that the green ones are on water and the red ones are on land, then you can change your criterion and take the height above sea level instead, in which case the problem becomes trivial: this point is actually on land, so it belongs to the red class. Anybody see where that is? This is Manhattan, New York. So this is an example of how using prior knowledge actually helps classification tasks enormously. Of course, that led to the creation of convolutional networks. This is an old slide that we used to explain how they work: you have these convolutions that can learn to detect lines, vertical and horizontal, and then the next layer can detect edges, and so on — and of course the leap of faith was that we don't hand-design these features but actually let a numerical optimization algorithm find optimal solutions.

Here's an old video from Yann LeCun showing LeNet. This was on an old AT&T PC, I believe with a DSP accelerator in it — many of you may have seen the video, it's on YouTube. At that time many brilliant minds had tried so hard for years to solve this problem, and here comes this network and just does it, almost with ease. Then Yann gets creative — I'm quite sure these types of characters were not in the training set. Some other people from the lab: this is Tony Henderson, and Rich Howard, who was the lab director at the time.

So we came to look at learning as essentially two main things. One is to build prior knowledge into the architecture, or what Vladimir Vapnik calls structural risk minimization. The other is capacity control: you need to match the size of your network to the amount and diversity of the data that you have, and one really good tool to analyze that is learning curves. This is not the number of epochs here — each point on this chart is a fully trained network. What you do is measure the performance of your network on the training set and on the test set while you grow the amount of data you train the network with, so this axis is the number of examples used for training. If you have very few examples, the network can essentially memorize the training set, so the error on the training set is zero while the error on the test set is very big. Then, as you grow the data set, the error on the test set comes down, and at some point the network can no longer memorize the entire data set and the training set error starts to grow. Empirically it's been shown that these curves eventually meet, usually somewhere in the middle between the two curves, so you don't need to plot the entire curve to have a good idea of what to expect. And if you hit this point here, it becomes clear that it doesn't help you to add more data — you have to increase the capacity of your network. Here's a learning curve from real life that we did last week: this is predicting the steering angle of a human — how closely does the network steer like a human — and you see that around here we reached a point where adding new data wouldn't help us.
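A minimal sketch of the learning-curve procedure just described: train a fresh model for each training-set size and record training and test error. The `make_model` factory and the data tensors here are placeholders, not the check-reading or steering networks from the talk.

```python
import torch
import torch.nn as nn

def learning_curve(x_train, y_train, x_test, y_test, sizes, make_model, steps=500):
    """Each point uses a fully trained network on the first `n` training examples."""
    curve = []
    for n in sizes:
        model = make_model()
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        xs, ys = x_train[:n], y_train[:n]
        for _ in range(steps):
            loss = nn.functional.cross_entropy(model(xs), ys)
            opt.zero_grad(); loss.backward(); opt.step()
        train_err = (model(xs).argmax(1) != ys).float().mean().item()
        test_err = (model(x_test).argmax(1) != y_test).float().mean().item()
        curve.append((n, train_err, test_err))  # the two curves approach each other
    return curve
```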
Then in the 90s there was a sometimes heated debate about what the best algorithm for classifying characters was, and the manager of the group at the time, Larry Jackel, said: well, let's just find out and compare all of them. This was one of those ideal situations where you had several people competing with ideas, and everybody was convinced "my idea is the best." So we compared them all — and if I remember correctly, that's the reason I created the MNIST database; the M stands for modified. The result is somewhat interesting. K-nearest-neighbor — nobody believed that would be the winner, it was just there for reference — and a fully connected network, also there for reference. Then there is LeNet-1, which was optimized for the much smaller data set from the United States Postal Service, so it didn't do very well on this either; in fact it was even slightly worse than the fully connected network. And then these here were the actual competitors: this is a convolutional network, but this time optimized for the larger MNIST data set — I'll get back to boosting in a minute — and then different variations. K-nearest-neighbor, for example, has one big flaw: if you just take the Euclidean distance as your metric, as we did here, then if you shift two characters which are actually the same by a few pixels, the Euclidean distance becomes enormous, so it doesn't really measure how similar two characters are very well. Here the idea was: instead of computing the Euclidean distance on the pixel space, compute it in the uppermost feature map — and that boosted the performance quite a bit.

Tangent distance was another one of the ideas we talked about a lot at the time — another clever trick to increase the performance of k-nearest-neighbors. If you picture each character as a point in a very high-dimensional pixel space and you apply some perturbations to it without changing the actual character — you rotate it a little, you shift it, you grow the stroke a bit — that forms a surface around that point, and if you don't deviate too much from the original point you can model it as a plane. The tangent distance is simply the Euclidean distance between two such planes — a clever idea with a lot of prior knowledge about the problem built in, and it also gets very good performance. Optimal margin — that's essentially an SVM.

What's interesting is that they all have the exact same performance. The only one that actually improved the performance was boosting, which is a different type of approach: you train multiple networks — the second network on the mistakes of the first one, and a third one on the cases where the first two disagree — so you get multiple experts that specialize on different parts of the training set, and it's not too surprising that that increases performance. What Larry said in hindsight is that it's not too surprising they all performed the same, because everybody was doing the same type of approach: exploring the best capacity of their learning system by doing the learning curves. The one that stands out in my mind is the SVM, because it has no prior knowledge about the task built in at all, unlike all of the others. And the classification rate is not the only thing you should focus on, of course: there's also memory consumption, which is not on this chart, training time, and inference time. All of the memory-based k-nearest-neighbor types use very little training time, but then they use a lot of memory.
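A small sketch contrasting nearest-neighbor matching in raw pixel space with matching in a learned feature space, which is the trick described above. The optional `embed` function — for example the top feature map of a trained convnet — is assumed to exist and is not part of the original comparison code.

```python
import torch

def nearest_neighbor_label(query, train_x, train_y, embed=None):
    """1-NN classification with (squared) Euclidean distance, optionally in feature space."""
    if embed is not None:                       # e.g. top feature map of a convnet
        query, train_x = embed(query), embed(train_x)
    q = query.reshape(1, -1)
    xs = train_x.reshape(train_x.shape[0], -1)
    dists = ((xs - q) ** 2).sum(dim=1)          # distance to every training example
    return train_y[dists.argmin()]

# In raw pixel space, shifting a digit by a few pixels blows this distance up;
# in a learned feature space, the same digit stays close to its neighbors.
```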
Some of the lessons I took from that time: one that is still very relevant is to look at the data. I've seen so many people miss important cues, or confuse YUV with RGB, for example; that doesn't make your system fail completely, it just doesn't perform as well as it could. Second, debugging tools are critical. These neural networks have a nasty habit of masking bugs: they learn to adapt, they try their best to work around bugs. Validate your training data; again, that's related to "look at the data", because so many data sets I've seen actually have grave labeling mistakes. The work is experimental in nature, at least that's the state of things today, so in most cases you can't just copy-paste a recipe from somebody else and expect it to give you optimal results. And the last one: work with real data. Synthetic data is great for debugging and you can certainly explore certain trends, but to get a sense of where you really are, you have to work with real data. So what happened after '95? Actually, right around '95 AT&T released the first production version of convolutional networks for commercial check reading, and eventually 20% of all the checks written in the US were processed, read, by that system. To me that was the real sign of success: the technology was commercially viable. In that same year, actually, we were sitting together one morning discussing how we should celebrate this success, the commercial check reading, and the announcement came in that AT&T would break up into three parts. That was quite a shock. Then in 2002 there were mass layoffs and most people left, and some of the important folks from machine learning found themselves working together again, either at DARPA or for DARPA, and created several programs such as Learning Applied to Ground Robots and Learning Locomotion; one was even called Deep Learning. They also helped get the various DARPA challenges launched. Then around 2012 deep learning becomes really popular, and I guess it's triggered by the availability of data, the availability of compute power, and also ready commercial applications: the large-scale internet applications in speech recognition and image classification were ready at this point. I guess there was just not too much money in check reading for the economy to take notice back then. So this brings me to what we're doing today: very similar technology, applied in this case to self-driving cars. First, an overview of the entire NVIDIA self-driving stack. At the bottom we have the hardware that goes into the car; that's the Drive PX family of products, and the most recent is called Pegasus, which was just announced at CES. Then we have a real-time operating system layer, and on top of that is DriveWorks, what I'd call the middleware layer: it does sensor abstraction, all the logging tools, inter-process communication, and it has low-level computer vision libraries such as image transformations. On top of that is Drive AV, the application layer; this is where we put together all of the other technology to form actual applications. Some examples are here: this is sensor fusion from radar and camera, this is lidar point cloud processing, or a deep-learning-based object detector, so they detect the bus here or the pedestrians back there. This is deep-learning-based free-space detection, this is localization based on HD maps, and this here is perception-based path planning. That last one is actually what comes out of our group in New Jersey; this is the layer we contribute to.
And our goal is to solve the hard and unsolved problems, and I think there are a lot of those in self-driving cars. Think back to character recognition: if we can't even program handwritten character recognition, then it's very likely that dealing with other humans in all kinds of different situations is not going to be solvable by just programming. We concentrate on these hard problems for two reasons. One is that we want to provide algorithmic diversity; in a safety-critical system it's unlikely that you want to rely on any single algorithm. The second reason is that we can solve functionality that might not be solvable otherwise, like taking turns just based on perception without map data, or merging onto a busy highway where you have to negotiate with the other cars and squeeze yourself in. The other autonomous-driving labs in NVIDIA are in California, in Boulder, in Seattle, and in Europe. We're actually back in the same building, which is cool. The building was empty for over ten years but it's now being redeveloped for mixed use, so there are a number of different tech companies in there; NVIDIA is renting space around here, there are going to be restaurants and shops, and on the rooftop there are plans to build a hotel. The reason we're there is not so much sentimental: this is actually a very good environment for testing self-driving cars. We have plenty of space in the basement of the building to store and work on the cars, there are two tunnels here, and with those we can take the cars right out to these private roads on the campus; driving once around here is actually two miles. And right outside we have a very nice mix of highway, local roads, and residential roads. The location is also quite nice: we're about 45 miles from New York City, and there's a train connection and a ferry. The train is about 60 to 70 minutes, and the ferry is actually faster because it can take a shortcut, so it's around 40 minutes. So what is the typical case where rule-based systems struggle? Well, this one seems clear: we can detect these lane markings, localize the car within them, and then have a path planner and a controller and drive. But just about 100 meters further up on this road the situation looks like this, so it's much harder now: the double yellow lane marking is barely painted, and the center one is almost gone. You can still solve it, of course. You can build a detector for this curb or the fence, you may be able to reconstruct this yellow lane marking, then you can measure the distance here and decide that this is actually wider than one lane, it's probably two lanes, and I'm currently here, so I'm going to assume that there's a virtual lane here and I'm going to drive there. And then you have solved the problem for this scenario. But around the corner it's different again, so you have to start all over, and then again and again. So it seems a big advantage if we can build a system that can derive the domain knowledge from data, by just observing humans, not by somebody telling it what it should look for in this scene. This is one of the things that we built: if this is the lane here, and that's where the car currently is, then we predict where the human would drive, that is, what is the short-term trajectory a human would take given that image? Collecting the data is relatively simple: you just drive, and you can then derive this trajectory from the motion of the car, so the training labels you essentially collect for free, without additional manual labor, while you're collecting the data. The difference between what a human drove and what the network predicts, that's the error signal that we backpropagate.
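A rough sketch of what such a training setup could look like follows. This is not NVIDIA's actual code; the input size, layer sizes, and trajectory horizon are assumptions loosely inspired by published end-to-end steering work. A CNN takes a camera frame and regresses the short-horizon trajectory recovered from the car's own logged motion, and the mean squared error against what the human actually drove is what gets backpropagated.

```python
import tensorflow as tf

HORIZON = 10  # assumed: predict 10 (x, y) waypoints of the short-term trajectory

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(24, 5, strides=2, activation="relu",
                           input_shape=(66, 200, 3)),
    tf.keras.layers.Conv2D(36, 5, strides=2, activation="relu"),
    tf.keras.layers.Conv2D(48, 5, strides=2, activation="relu"),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(100, activation="relu"),
    tf.keras.layers.Dense(2 * HORIZON),      # flattened (x, y) waypoints
])

# frames:       (N, 66, 200, 3) camera images
# trajectories: (N, 2 * HORIZON) waypoints derived from the car's logged motion,
#               i.e. labels collected "for free" while driving.
model.compile(optimizer="adam", loss="mse")
# model.fit(frames, trajectories, epochs=10, batch_size=64)
```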
This is an example network architecture, from about two years ago; it still looks similar but it's growing, and we constantly adapt it to the current size of the training set. These are all roads, and with this approach it really doesn't matter whether the road looks horrible and grass-overgrown or is a nice highway, as long as there's data it can learn from. [Music] These roads are currently not used anymore on our campus, so they're just ideal for experimentation. [Music] The noises in the background: they're not acting, this is actually real. We struggled so, so much to get this to work for an important deadline, and it was just about 12 hours before the drop-dead deadline that this thing started working, so there was a big relief. This is a made-up construction site with some unpaved section in the back. People often ask: well, if you train this in California, can you drive in New Jersey? So we showed it here: we trained with California data only and are driving here on a New Jersey highway, and here at night, drizzling slightly, on an unpaved road in a nearby park. [Music] So we'll move on to something a little more practical: highway driving. Here we're driving, this is what the network predicts the human would drive, and you see that the lane markings here are lousy and in part completely missing, and that doesn't seem to impress the network: it predicts the correct trajectory. What you see here is that we show the trajectories predicted from the most recent 10 frames, correct them for the ego-motion of the car, and show them all on top of each other. This is a debugging tool: if the network is confident, it will be consistent with its predictions over a few frames and the noodles are nice and tight, and if it's not confident they'll spread out all over the place, predicting something different each frame. So here, for example, it decides not to take the exit. Again, this is all stuff that we haven't programmed; it just picked it up by observing humans. We're not limited to lane keeping; we can also learn lane changes. This is a video in which the lane changes are triggered manually: that's a colleague of ours driving, and he's just tipping the turn-signal lever, and that indicates to the car that it should now change lanes. We can also learn different lengths of lane changes, so we can tell the car to complete the lane change in 200 meters or 100 meters or 50 meters. That's actually quite exciting at highway speed, completing a lane change in 50 meters. We can also learn to take turns. [Music] This was particularly exciting because I'm not sure there's really any other way to do this. [Music] One of our internal goals is to be able to send the car on its own to get ice cream; this is the route from the office to the ice-cream place and back, and you see it finish here. [Music] [Applause] Yes, the entrance.
One of the things I mentioned before is that an important aspect of our work is to learn by observing humans, not by manually programming things into the system. Nevertheless, we want to understand what it actually learned. So here we analyzed what it's looking at: these two green regions are the regions the network is sensitive to, and you see that it learns to detect these lane markings, which is interesting because we never labeled a single lane marking during training. Or here, in a residential street, it learns to look at the cars, but if you look closely it ignores the trees. Or on our campus, where there are no clear lane markings, it figures out how to detect the road edges through other means. So this is a very important debugging tool for us. Here's a short video of a lane merge: while driving on the highway it's looking at both lane markings, but this one will disappear in a moment, and now, instead of looking at both sides of a doubly wide lane, it's actually just following this one until the right one comes close enough that it becomes a single lane again. It picked that up by observing; this is not us manually programming that into the system. At one point we needed a screenshot from the inside of the car, and there's a lot of construction going on on the campus currently, and this Bobcat comes out into our roadway. If you look here, the network prominently recognizes it, even though we're quite certain it has never seen this type of vehicle in training before. I wanted to close with some open challenges. To my mind there are tons of open challenges, but these are two of the big ones. One is dealing with ambiguous situations. If you imagine you want to learn to merge onto a highway and you have a car right next to you, then you can either speed up and go in front, or you can slow down and go behind, but you can't do the average. So how do we deal with that? How can we learn when there is more than one possible correct way of driving, but you only ever get one of these possible options in each example? The other one is learning from imperfect behavior. We have tons and tons of data, but currently we need to cut out all the imperfect driving: you don't want to teach the network to cut corners and such, and it does learn what you do, so it may make mistakes. If we can figure out how it can learn on its own what it actually should pay attention to and what is an outlier, then we would have a ton more data that we can work with. I'll finish with this video here. This was recorded recently, in December. We got new data from a new sensor set on a new car fleet, and the first 10 hours came in: all California data, in the sun, very little night, no rain. The first thing we usually do is train a network and go out and test to see if it meets our expectations; it's kind of an end-to-end test to make sure that nothing's wrong with this new data. That day it happened to snow, so we were curious to see what happens, and it actually performed surprisingly well. All this network has ever seen is California data, and it's now driving in the snow in New Jersey, or here at night, on a partly snow-covered road with lights from oncoming traffic. You can also see what it's looking at, and it looks at what seem to be the correct features on the road; here it even looks at the tire tread markings in the snow, even though it certainly hasn't seen that in the original training data. And that brings me to the end. Thank you for your attention. [Applause]
MIT_6S191_Introduction_to_Deep_Learning
MIT_6S191_2023_Robust_and_Trustworthy_Deep_Learning.txt
I'm really excited, especially for this lecture, which is a very special lecture on robust and trustworthy deep learning by one of the sponsors of this amazing course, Themis AI. As you'll see today, Themis AI is a startup locally based here in Cambridge, and our mission is to design, advance, and deploy the future of AI, trustworthy AI specifically. I'm especially excited about today's lecture because I co-founded Themis right here at MIT, right here in this very building; in fact, it all stemmed from the incredible scientific innovation and advances that we created just a few floors higher than where you're sitting today. Because of our background in cutting-edge scientific innovation stemming from MIT, Themis is rooted very deeply in science and, like I said, innovation. We really aim to advance the future of deep learning and AI, and much of our technology has grown from research that we've published at top-tier, peer-reviewed AI conferences around the world; our work has been covered by high-profile international media outlets. With this scientific innovation, Themis is tackling some of the biggest challenges in safety-critical AI that exist today, and really that stems from the fact that we want to take all of these amazing advances that you're learning as part of this course and actually achieve them in reality, as part of our daily lives. We're working together with leading global industry partners across many different disciplines, ranging from robotics and autonomy to health care and more, to develop a line of products that will guarantee safe and trustworthy AI, and we drive this with our technical engineering and machine learning team. Our focus is very much on engineering flexible and modular platforms to scale algorithms toward robust and trustworthy AI, and this enables deployment toward the grand challenges that our society faces with AI today; specifically, AI solutions today are not very trustworthy at all, even if they may be very high-performance on some of the tasks that we study as part of this course. So it's an incredibly exciting time for Themis specifically. We're VC-backed, our offices are right here in Cambridge, so we're local, and we have just closed a round of funding, so we're actively hiring the best and the brightest engineers, like all of you, to realize the future of safe and trustworthy AI. We hope that today's lecture inspires you to join us on this mission to build the future of AI. And with that, it's my great pleasure to introduce Sadhana. Sadhana is a machine learning scientist at Themis; she's also the lead TA of this course, Intro to Deep Learning, at MIT. Her research at Themis focuses specifically on how we can build very modular and flexible methods for what we call safe and trustworthy AI, and today she'll be teaching us more about the bias and uncertainty realms of AI algorithms, which are really two key, critical components toward achieving this mission, this vision, of safe and trustworthy deployment of AI all around us. So thank you, and please give a big warm round of applause for Sadhana. [Applause] I'm a machine learning scientist here at Themis AI and the lead TA of the course this year, and today I'm super excited to talk to you all about robust and trustworthy deep learning on behalf of Themis. So over the past decade we've seen some tremendous growth
in artificial intelligence across safety critical domains in the Spheres of autonomy and Robotics we now have models that can make critical decisions about things like self-driving at a second's notice and these are Paving the way for fully autonomous vehicles and robots and that's not where this stops in the Spheres of medicine and Healthcare robots are now equipped to conduct life-saving surgery we have algorithms that generate predictions for critical drugs that may cure diseases that we previously thought were incurable and we have models that can automatically diagnose diseases without intervention from any Health Care Professionals at all these advances are revolutionary and they have the potential to change Life as we know it today but there's another question that we need to ask which is where are these models in real life a lot of these Technologies were innovated five ten years ago but you and I don't see them in our daily lives so what is what's the Gap here between Innovation and deployment the reason why you and I can't go buy self-driving cars or robots don't typically assist in operating rooms is this these are some headlines about the failures of AI from the last few years alone in addition to these incredible Advantage advances we've also seen catastrophic failures and every single one of the safety critical domains I just mentioned these problems range from crashing autonomous vehicles to healthcare algorithms that don't actually work for everyone even though they're deployed out in the real world so everyone can use them now at a first glance this seems really demoralizing if these are all of the things wrong with artificial intelligence how are we ever going to achieve that vision of having our AI integrated into the fabric of our daily lives in terms of safety critical deployment but at them is this is exactly the type of problem that we solve we want to bring these advances to the real world and the way we do this is by innovating in the Spheres of safe and trustworthy artificial intelligence in order to bring the things that were developed in research labs around the world to customers like you and me and we do this by our core ideology is that we believe that all of the problems on this slide are underlaid by two key Notions the first is bias bias is what happens when machine learning models do better on some demographics than others this results in things like facial detection systems not being able to detect certain faces with high accuracy Siri not being able to recognize voices with accents or algorithms that are trained on imbalanced data sets so what the algorithm believes is a good solution doesn't actually work for everyone in the real world and the second notion that underlies a lot of these problems today is unmitigated and uncommunicated uncertainty this is when models don't know when they can or can't be trusted and this results in scenarios such as self-driving cars continuing to operate in environments when they're not 100 confident instead of giving control to users or robots being moving around in environments that they've never been in before and have high unfamiliarity with and a lot of the problems in modern day AI are the result of a combination of unmitigated bias and uncertainty so today in this lecture we're going to focus on investigating the root causes of all of these problems these two big challenges to robust deep learning we'll also talk about solutions for them that can improve the robustness and safety of all of these algorithms for 
everyone. And we'll start by talking about bias. Bias is a word that we've all heard outside the context of deep learning, but in the context of machine learning it can be quantified and mathematically defined. Today we'll talk about how to do this, about methods for mitigating this bias algorithmically, and about how Themis is innovating in these areas in order to bring new algorithms in this space to industries around the world. Afterwards we'll talk about uncertainty, that is, can we teach a model to tell us when it does or doesn't know the answer to its given task, and we'll talk about the ramifications of this for real-world AI. So what exactly does bias mean, and where is it present in the artificial intelligence life cycle? The most intuitive form of bias comes from data, and we have two main types of bias here. The first is sampling bias, which is when we over-sample from some regions of our input data distribution and under-sample from others. A good example of this is a lot of clinical data sets, which often contain fewer examples of diseased patients than healthy patients, because it's much easier to acquire data for healthy patients than for their diseased counterparts. In addition we also have selection bias at the data portion of the AI life cycle. Think about Apple's Siri voice recognition algorithm: this model is trained largely on flawless American English, but it's deployed across the real world to recognize voices with accents from all over the world. The distribution of the model's training data doesn't match the distribution of language in the real world, because American English is highly overrepresented as opposed to other demographics. But that's not where bias in data stops. These biases can be propagated through models' training cycles themselves, which is what we'll focus on in the second half of this lecture. And then once the model is actually deployed, which means it's put out into the real world and customers or users can actually get predictions from it, we may see further biases perpetuated that we haven't seen before. The first of these is distribution shift. Let's say I have a model that I trained on the past 20 years of data and then I deploy it into the real world in 2023. This model will probably do fine, because the input data distribution is quite similar to the training distribution. But what would happen to this model in 2033? It probably would not work as well, because the distribution the data is coming from would shift significantly across this decade, and if we don't continue to update our models with this input stream of data, we're going to have obsolete and incorrect predictions. And finally, after deployment, there is the evaluation aspect. Think back to the Apple Siri example we've been talking about: if the evaluation metric or the evaluation data set that Siri was evaluated on was also mostly comprised of American English, then to anybody this model will look like it does extremely well, right? It can recognize American English voices with extremely high accuracy, and therefore it is deployed into the real world. But what about its accuracy on subgroups, on accented voices, on people for whom English is not their first language? If we don't also test on subgroups in our evaluation metrics, we're going to face evaluation bias. So now let's talk about another real-world example of how bias can perpetuate throughout the artificial intelligence life cycle:
commercial facial detection systems are everywhere you actually played around with some of them in lab two when you trained your vae on a facial detection data set in addition to the lock screens on your cell phones um facial detection systems are also present in the automatic filters that your phone cameras apply whenever you try to take a picture and they're also used in criminal investigations these are three commercial facial detection systems that were deployed and um we'll analyze the biases that might have been present in all of them for in the in the next few minutes so the first thing you may notice is that there is a huge accuracy gap between two different demographics in this um plot this accuracy Gap can get up to 34 keep in mind that this facial detection is a binary classification task everything is either a face or it's not a face this means that a randomly initialized model would be expected to have an accuracy of 50 because it's going to randomly assign whether or not something is a face or not some of these facial detection classifiers do only barely better than random on these underrepresented um data on these underrepresented samples in this population so how did this happen why is there such a blatant Gap in accuracy between these different demographic groups and how did these ever these models ever get deployed in the first place what types of biases were present in these models so a lot of facial detection systems exhibit very clear selection bias this model was likely trained mostly on lighter skin faces and therefore learned those much more effectively than it learned to classify darker skin faces but that's not the only bias that was present the second bias that's often um very present in facial detection systems is evaluation bias because originally this data set that you see on the screen is not the data set that these models were evaluated on they were evaluated on one big bulk data set without any classification into subgroups at all and therefore you can imagine if the data set was also comprised mostly of lighter skin faces these accuracy metrics would be incredibly inflated and therefore would cause unnecessary confidence and we could deploy them into the real world in fact the biases in these models were only uncovered once an independent study actually constructed a data set that is specifically designed to uncover these sorts of biases by balancing across race and gender however there are other ways that data sets can be biased that we haven't yet talked about so so far we've assumed a pretty key assumption in our data set which is that the number of faces in our data is the exact same as the number of non-faces in our data but you can imagine especially if you're looking at things like security feeds this might not always be the case you might be faced with many more negative samples than positive samples in your data set in the most so what what's the problem here in the most extreme case we may assign the label non-face to every item in the data set because the model sees items that are labeled as faces so infrequently that it isn't able to learn um an accurate class boundary between the two SIM between the two classes so how can we mitigate this this is a really big problem and it's very common across a lot of different types of machine learning tasks and data sets and the first way that we can try to mitigate class imbalance is using sample re-weighting which is when instead of uniformly sampling from our data set at a rate um we instead sample at a 
rate that is inversely proportional to the incidence of a class in our data set. So in the previous example, if the number of faces was much lower than the number of non-faces in our data set, we would sample the faces with a higher probability than the negatives, so that the model sees both classes equally. The second way we can mitigate class imbalance is through loss re-weighting, which is when, instead of having every single mistake that the model makes contribute equally to our total loss function, we re-weight the samples such that samples from underrepresented classes contribute more to the loss. So instead of the model assigning every single input face as a negative, it will be highly penalized if it does so, because the loss on the faces contributes more to the total loss function than the loss on the negatives. And the final way that we can mitigate class imbalance is through batch selection, which is when we choose randomly from the classes so that every single batch has an equal number of data points per class.
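Here is a minimal sketch of those three mitigation strategies for a binary face / non-face task; the toy arrays and names are illustrative, not the lecture's actual dataset.

```python
import numpy as np

# Toy stand-ins: 1000 examples, roughly 10% positives ("faces"), 90% negatives.
x_train = np.random.rand(1000, 64, 64, 3)
y_train = (np.random.rand(1000) < 0.1).astype(int)
counts = np.bincount(y_train, minlength=2)

# 1) Sample re-weighting: draw each example with probability inversely
#    proportional to the frequency of its class.
sample_p = 1.0 / counts[y_train]
sample_p = sample_p / sample_p.sum()
idx = np.random.choice(len(y_train), size=len(y_train), p=sample_p)
x_resampled, y_resampled = x_train[idx], y_train[idx]

# 2) Loss re-weighting: mistakes on the rare class contribute more to the loss.
#    With Keras you could pass this dict via model.fit(..., class_weight=class_weight).
class_weight = {0: len(y_train) / (2.0 * counts[0]),
                1: len(y_train) / (2.0 * counts[1])}

# 3) Batch selection: build every batch with an equal number of examples per class.
def balanced_batch(batch_size=32):
    pos = np.random.choice(np.where(y_train == 1)[0], batch_size // 2)
    neg = np.random.choice(np.where(y_train == 0)[0], batch_size // 2)
    batch_idx = np.concatenate([pos, neg])
    np.random.shuffle(batch_idx)
    return x_train[batch_idx], y_train[batch_idx]
```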
So is everything solved? Clearly there are other forms of bias that exist even when the classes are completely balanced, because the thing we haven't thought about yet is latent features. If you remember from Lab 2 and the last lecture, latent features are the actual representation of an image according to the model. So far we've mitigated the problem of known underrepresented classes, but we haven't mitigated the problem of having a lot of variability within the same class. Let's say we have an equal number of faces and negative examples in our data set: what happens if the majority of the faces are from a certain demographic, or have a certain set of features? Can we still apply the techniques we just learned about? The answer is that we cannot, and the problem is that the bias present now is in our latent features. All of these images are labeled with the exact same label, so as far as the model knows, they're all just faces; we have no information about any of these features from the label alone. Therefore we can't apply any of the previous approaches that we used to mitigate class imbalance, because our classes are balanced, but we now have feature imbalance. However, we can adapt the previous methods to account for bias in latent features, which we'll do in just a few slides. So let's unpack this a little bit further. We have our potentially biased data set and we're trying to build and deploy a model that classifies the faces. In a traditional training pipeline, this is what that would look like: we would train our classifier and deploy it into the real world, but this pipeline doesn't de-bias our inputs in any way. So one thing we could do is label our biased features and then apply resampling. Let's say that in reality this data set was biased on hair color: most of the data set is made up of faces with blonde hair, with black hair and red hair underrepresented. If we knew this information, we could label the hair color of every single person in this data set and apply either sample re-weighting or loss re-weighting, just as we did previously. Wait, does anyone want to tell me what the problem is here? There are a couple of problems, and that's definitely one of them. The first is: how do we know that hair color is a biased feature in this data set? Unless we visually inspect every single sample, we're not going to know what the biased features are. And the second thing is exactly what you said: once we have our biased features, going through and annotating every image with that feature is an extremely labor-intensive task that is infeasible in the real world. So now the question is: what if we had a way to automatically learn latent features and use that learned feature representation to de-bias a model? What we want is a way to learn the features of this data set and then automatically determine the samples with the highest feature bias and the samples with the lowest feature bias. We've already learned a method of doing this: in the generative modeling lecture you learned about variational autoencoders, which are models that learn the latent features of a data set. As a recap, variational autoencoders work by probabilistically sampling from a learned latent space; they then decode the latent vector back into the original input space, measure the reconstruction loss between the inputs and the outputs, and continue to update their representation of the latent space. The reason we care so much about this latent space is that we want samples that are similar to each other in the input space to encode to latent vectors that are very close to each other in the latent space, and samples that are dissimilar in the input space to encode to latent vectors that are far from each other in the latent space. So now we'll walk through, step by step, a de-biasing algorithm that automatically uses the latent features learned by a variational autoencoder to under-sample and over-sample from regions of our data set. Before I start, I want to point out that this de-biasing model is actually the foundation of Themis's work. It comes out of a paper that we published a few years ago, which has been demonstrated to de-bias commercial facial detection algorithms, and it was so impactful that we decided to make it available and work with companies and industries; that's how Themis was started. So let's start by training a VAE on this data set. The z shown here in this diagram ends up being our latent space, and the latent space automatically captures features that are important for classification. Here's an example latent feature that this model captured: the facial position of an input face. Something that's really crucial here is that we never told the model to encode the facial position of a given face; it learned this automatically, because this feature is important for the model to develop a good representation of what a face actually is. Now that we have our latent structure, we can use it to calculate a distribution of the inputs across every latent variable, and we can estimate a probability distribution based on the features of every item in this data set. Essentially, this means we can calculate the probability that a certain combination of features appears in our data set, based on the latent space we just learned, and then over-sample sparser areas of this data set and under-sample denser areas. So let's say our distribution looks something like this (this is an oversimplification, but for visualization purposes): in the denser portions of this data set we would expect a homogeneous skin color, pose, and hair color and very good lighting, and in the
sparser portions of this data set we would expect to see diverse skin color, pose, and illumination. So now that we have this distribution, and we know which areas of it are dense and which are sparse, we want to under-sample the samples that fall in the denser areas of the distribution and over-sample the data points that fall in the sparser areas. For example, we would probably under-sample points with the very common skin colors, hair colors, and good lighting that are extremely prevalent in this data set, and over-sample the diverse images that we saw on the last slide, and this allows us to train in a fair and unbiased manner. To dig a little bit more into the math behind how this resampling works: this approach approximates the latent space via a joint histogram over the individual latent variables. We have a histogram for every latent variable z_i, and what the histogram essentially does is discretize the continuous distribution so that we can calculate probabilities more easily. Then we multiply the probabilities together across all of the latent distributions, and after that we have an approximation of the joint distribution of all of the samples in our latent space. Based on this, we can define the adjusted probability of sampling a particular data point as follows: the probability of selecting a data point x is based on the latent representation of x, such that it is the inverse of the joint approximated distribution. We have a parameter alpha here, which is a de-biasing parameter: as alpha increases, this probability tends to the uniform distribution, and as alpha decreases, we de-bias more strongly. This gives us the final weight of the sample in our data set, which we can calculate on the fly and use to adaptively resample while training. And once we apply this de-biasing, we get pretty remarkable results. This is the original graph that shows the accuracy gap between the darker male and the lighter male faces in this data set. Once we apply the de-biasing algorithm, where, as alpha gets smaller, we're de-biasing more and more, as we just talked about, this accuracy gap decreases significantly, and that's because we tend to over-sample samples with darker skin color, so the model learns them better and tends to do better on them. Keep this algorithm in mind, because you're going to need it for the Lab 3 competition, which I'll talk more about towards the end of this lecture.
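A sketch of how that adjusted sampling probability can be computed from the per-dimension latent histograms follows; it mirrors the idea you'll implement in Lab 3, though the exact smoothing and normalization used in the published algorithm may differ, and the names here are mine.

```python
import numpy as np

def debiasing_sample_probabilities(z, n_bins=10, alpha=0.01):
    """z: (N, latent_dim) latent codes of the training set from the trained VAE encoder."""
    n, latent_dim = z.shape
    density = np.ones(n)
    for i in range(latent_dim):
        hist, edges = np.histogram(z[:, i], bins=n_bins, density=True)
        bins = np.clip(np.digitize(z[:, i], edges[1:-1]), 0, n_bins - 1)
        density *= hist[bins]        # joint density approximated as a product of marginals
    p = 1.0 / (density + alpha)      # rare feature combinations get a large sampling weight;
    return p / p.sum()               # a larger alpha pushes the weights toward uniform

# During training, draw each batch according to these probabilities, e.g.
# probs = debiasing_sample_probabilities(z, alpha=0.001)
# batch_idx = np.random.choice(len(z), size=32, p=probs)
```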
So far we've been focusing mainly on facial recognition systems and a couple of other systems as canonical examples of bias; however, bias is actually far more widespread in machine learning. Consider the example of autonomous driving. Many data sets are comprised mainly of cars driving down straight and sunny roads, in really good weather conditions with very high visibility, and this is because the data for these algorithms is actually just collected by cars driving down roads. However, in some specific cases you're going to face adverse weather, bad visibility, and near-collision scenarios, and these are actually the samples that are most important for the model to learn, because they're the hardest samples and the samples where the model is most likely to fail. But in a traditional autonomous driving pipeline these samples often have extremely low representation. So this is an example where, using the unsupervised latent de-biasing we just talked about, we would be able to up-sample these important data points and under-sample the data points of driving down straight and sunny roads. Similarly, consider the example of large language models. An extremely famous paper from a couple of years ago showed that if you put terms that imply "female" or "women" into a large-language-model-powered job search engine, you're going to get roles such as artists or things in the humanities, but if you input similar terms for the male counterpart, things like "male", you'll end up with roles for scientists and engineers. So this type of bias also occurs regardless of the task at hand for a specific model. And finally, let's talk about healthcare recommendation algorithms, which tend to amplify racial biases: a paper from a couple of years ago showed that Black patients needed to be significantly sicker than their white counterparts to get the same level of care, and that's because of inherent bias in the data set of this model. In all of these examples we can use the above algorithmic bias mitigation method to try to solve these problems and more. So we just went through how to mitigate some forms of bias in artificial intelligence and where these solutions may be applied, and we talked about a foundational algorithm that Themis uses and that you all will also be developing today. For the next part of the lecture we'll focus on uncertainty, that is, when a model does not know the answer. We'll talk about why uncertainty is important, how we can estimate it, and the applications of uncertainty estimation. To start with: what is uncertainty, and why is it necessary to compute? Let's look at the following example. This is a binary classifier trained on images of cats and dogs; for every input it will output a probability distribution over these two classes. Now let's say I give this model an image of a horse. It's never seen a horse before, and the horse is clearly neither a cat nor a dog; however, the model has no choice but to output a probability distribution, because that's how it is structured. But what if, in addition to this prediction, we also received a confidence estimate? In this case the model should be able to say: I've never seen anything like this before and I have very low confidence in this prediction, so you as the user should not trust my prediction. That's the core idea behind uncertainty estimation. In the real world, uncertainty estimation is useful for scenarios like this: this is an example of a Tesla driving behind a horse-drawn buggy, which are very common in some parts of the United States. It has no idea what this horse-drawn buggy is; it first thinks it's a truck, then a car, then a person, and it continues to output predictions even though it is very clear that the model does not know what this image is. Now you might be asking: okay, so what's the big deal? It didn't recognize the horse-drawn buggy, but it seems to drive successfully anyway. However, the exact same problem that resulted in that video has also resulted in numerous autonomous car crashes. So let's go through why something like this might have happened. There are multiple different types of uncertainty in neural networks which may cause incidents like the ones we just saw, and we'll go through a simple example that illustrates the two main types of uncertainty we'll focus on in this lecture. Let's say I'm trying to estimate the curve y = x³ as part of a regression task. The input here, x,
is some real number, and we want the model to output f(x), which should ideally be x³. Right away you might notice that there are some issues in this data set; assume the red points in this image are your training samples. The boxed area of the image shows data points where we have really high noise: these points do not follow the curve y = x³, and in fact they don't really seem to follow any distribution at all. The model won't be able to compute outputs for points in this region accurately, because very similar inputs have extremely different outputs, which is the definition of data uncertainty. We also have regions in this data set where we have no data at all, so if we queried the model for a prediction in this region, we should not really expect an accurate result, because the model has never seen anything like it before. This is what is called model uncertainty: the model hasn't seen enough data points, or cannot estimate that area of the input distribution accurately enough, to output a correct prediction. So what would happen if I added the following blue training points to the areas of the data set with high model uncertainty? Do you think the model uncertainty would decrease? Raise your hand. Does anyone think it would not change? Okay, most of you were correct: model uncertainty can typically be reduced by adding data into any region, but especially into regions with high model uncertainty. And now, if we add blue data points into the noisy region, would anyone expect the data uncertainty to decrease? That's correct: data uncertainty is irreducible. In the real world, the noisy points in this image correspond to things like robot sensors. Let's say I have a robot with a sensor that is making measurements of depth; if the sensor has noise in it, there's no way I can add more data into the system to reduce that noise, unless I replace the sensor entirely. So now let's assign some names to the types of uncertainty we just talked about. The area of high data uncertainty is known as aleatoric uncertainty; it is irreducible, as we just mentioned, and it can be learned directly from data, which we'll talk about in a little bit. The green boxes that we talked about, which were model uncertainty, are known as epistemic uncertainty; this cannot be learned directly from the data, but it can be reduced by adding more data into those regions. Okay, so first let's go through aleatoric uncertainty. The goal of estimating aleatoric uncertainty is to learn a set of variances that correspond to the input. Keep in mind that we are not looking at a data distribution and estimating the variance ourselves as humans; we're training the model to do this task. Typically, when we train a model, we give it an input x and we expect an output ŷ, the prediction of the model. Now we also predict an additional σ²: we add another layer to our model, with the same output size, that predicts a variance for every output. The reason we do this is that we expect areas of our data set with high data uncertainty to have higher variance. The crucial thing to remember is that this variance is not constant; it depends on the value of x. We typically tend to think of variance as a single number that parameterizes an entire distribution; however, in this case we may have areas of our input distribution with really high variance and areas with very low variance, so the variance cannot be independent of the input, and it depends on x. So now that we have this model, with an extra layer attached to it that predicts a σ² in addition to ŷ, how do we train it? Our current loss function does not take variance into account at any point; this is the typical mean squared error loss function used to train regression models, and there's no way, training with this loss function, to learn whether the variance we're estimating is accurate. So in addition to adding another layer to estimate aleatoric uncertainty, we also have to change our loss function. The mean squared error actually corresponds to learning a Gaussian with mean ŷᵢ and constant variance, and we want to generalize this loss function to the case where the variance is not constant. The way we do this is by changing the loss function to the negative log-likelihood; we can think of this, for now, as a generalization of the mean squared error loss to non-constant variances. Now that we have a σ² term in our loss function, we can determine how accurately the σ and the ŷ that we're predicting parameterize the distribution of our inputs.
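Here is a minimal sketch of this setup for the y = x³ regression example: a small network with two heads, one for the mean and one for log σ²(x), trained with the Gaussian negative log-likelihood. The architecture and layer sizes are illustrative, not the lecture's exact model.

```python
import tensorflow as tf

inputs = tf.keras.Input(shape=(1,))
h = tf.keras.layers.Dense(64, activation="relu")(inputs)
h = tf.keras.layers.Dense(64, activation="relu")(h)
mu = tf.keras.layers.Dense(1)(h)            # predicted mean y_hat(x)
log_var = tf.keras.layers.Dense(1)(h)       # predict log(sigma^2(x)) for numerical stability
model = tf.keras.Model(inputs, [mu, log_var])

def gaussian_nll(y_true, mu_pred, log_var_pred):
    # NLL of y under N(mu, sigma^2):  0.5*log(sigma^2) + (y - mu)^2 / (2*sigma^2)
    return tf.reduce_mean(0.5 * log_var_pred +
                          0.5 * tf.square(y_true - mu_pred) * tf.exp(-log_var_pred))

optimizer = tf.keras.optimizers.Adam()

@tf.function
def train_step(x, y):
    with tf.GradientTape() as tape:
        mu_pred, log_var_pred = model(x, training=True)
        loss = gaussian_nll(y, mu_pred, log_var_pred)
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss

# High predicted sigma^2 flags regions like the noisy box in the y = x^3 example:
# aleatoric uncertainty that no amount of extra data will remove.
```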
So now that we know how to estimate aleatoric uncertainty, let's look at a real-world example. For this task we'll focus on semantic segmentation, which is when we label every pixel of an image with its corresponding class; we do this for scene understanding, and because it is more fine-grained than a typical object detection algorithm. The inputs come from a data set called Cityscapes and are RGB images of scenes; the labels are pixel-wise annotations of the entire image, indicating which class every pixel belongs to; and the outputs try to mimic the labels, so they're also predicted pixel-wise masks. Why would we expect this data set to have high natural aleatoric uncertainty, and which parts of it do you think would have it? Because labeling every single pixel of an image is such a labor-intensive task, and it's also very hard to do accurately, we would expect the boundaries between objects in the image to have high aleatoric uncertainty, and that's exactly what we see. If you train a model to predict aleatoric uncertainty on this data set, corners and boundaries have the highest aleatoric uncertainty, because even if your labels are one row or one column of pixels off, that introduces noise into the model. The model can still learn in the face of this noise, but it does exist and it can't be reduced. So now that we know about data uncertainty, or aleatoric uncertainty, let's move on to learning about epistemic uncertainty. As a recap, epistemic uncertainty can best be described as uncertainty in the model itself, and it is reducible by adding data to the model. With epistemic uncertainty, essentially what we're trying to ask is: is the model unconfident about a prediction? A really simple and very smart way to do this is the following: let's say I train the same network multiple times with random initializations and I call each one on the exact same input. So I give model one the input, and the blue X is the output of this model, and then I do the same thing again with model 2,
and then again with model 3, and again with model 4. These models all have the exact same hyperparameters and the exact same architecture, and they're trained in the same way; the only difference between them is that their weights are randomly initialized, so where they start from is different. The reason we can use this to determine epistemic uncertainty is that, with familiar inputs, we would expect our networks to all converge to around the same answer, and we should see very little variance in the outputs that we're predicting. However, if a model has never seen a specific input before, or that input is very hard to learn, all of these models should predict slightly different answers, and their variance should be higher than it would be for a familiar input. Creating an ensemble of networks is quite simple: you start by defining the number of ensemble members you want, you create them all the exact same way, and then you fit them all on the same training data. Afterwards, at inference time, we call every model in the ensemble on our specific input, and we can treat the new prediction as the average over all of the ensemble members, which usually results in a more robust and accurate prediction, and we can treat the uncertainty as the variance of all of these predictions. Again, remember that for familiar inputs, or inputs with low epistemic uncertainty, we should expect very little variance, and for a very unfamiliar input, something out of distribution, something the model hasn't seen before, we should have very high epistemic uncertainty, or variance. So what's the problem with this? Can anyone raise their hand and tell me what a problem with training an ensemble of networks is? Training an ensemble of networks is really compute-expensive: even if your model is not very large, training five or ten copies of it takes up compute and time, and that's just not really feasible for many tasks. However, the key insight from ensembles is that by introducing some source of randomness or stochasticity into our networks, we're able to estimate epistemic uncertainty. Another way of introducing stochasticity into networks is by using dropout layers. We've seen dropout layers as a method of reducing overfitting, because we randomly drop out different nodes in a layer while continuing to propagate information through the network, and that prevents models from memorizing data. In the case of epistemic uncertainty, we can add dropout layers after every single layer in our model, and in addition we can keep these dropout layers enabled at test time. Usually we don't keep dropout enabled at test time, because we don't want to lose any information about the network's learned weights at inference time; however, when we're estimating epistemic uncertainty, we do want to keep dropout enabled at test time, because that's how we introduce randomness at inference time as well. So what we do here is we have one model, the same model the entire way through, we add dropout layers with a specific probability, and then we run multiple forward passes, and at every forward pass different nodes in a layer get dropped out, so we have that measure of randomness and stochasticity.
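Both recipes, deep ensembles and Monte Carlo dropout, can be sketched in a few lines; the toy regression model below is illustrative, not the lecture's network.

```python
import numpy as np
import tensorflow as tf

def make_model(dropout_rate=0.1):
    return tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(1,)),
        tf.keras.layers.Dropout(dropout_rate),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dropout(dropout_rate),
        tf.keras.layers.Dense(1),
    ])

# (1) Deep ensemble: same architecture, different random initializations.
ensemble = [make_model() for _ in range(5)]
for m in ensemble:
    m.compile(optimizer="adam", loss="mse")
    # m.fit(x_train, y_train, epochs=50, verbose=0)   # same training data for every member

def ensemble_predict(x):
    preds = np.stack([m(x, training=False).numpy() for m in ensemble])
    return preds.mean(axis=0), preds.var(axis=0)       # prediction, epistemic uncertainty

# (2) Monte Carlo dropout: one model, but keep dropout active at test time
#     (training=True) and run T stochastic forward passes.
def mc_dropout_predict(model, x, T=20):
    preds = np.stack([model(x, training=True).numpy() for _ in range(T)])
    return preds.mean(axis=0), preds.var(axis=0)

# Familiar inputs: the passes agree (low variance). Unfamiliar inputs: they disagree
# (high variance), which is exactly the epistemic-uncertainty signal described above.
```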
So again, in order to implement this, we have one model, and when we're running our forward passes we simply run T forward passes, where T is usually a number like 20. We keep dropout enabled at test time, and then we use the mean of these samples as the new prediction and the variance of these samples as a measure of epistemic uncertainty. Both of the methods we just talked about involve sampling, and sampling is expensive: ensembling is very expensive, and even with a single model, if it's pretty large, adding dropout layers and calling 20 forward passes might also be pretty infeasible. At Themis we're dedicated to developing innovative methods of estimating epistemic uncertainty that don't rely on things like sampling, so that they're more generalizable and usable by more industries and people. One method that we've studied to estimate epistemic uncertainty is using generative modeling. We've talked about VAEs a couple of times now; let's say I trained a VAE on the exact same data set we were talking about earlier, which contains only dogs and cats. The latent space of this model would be comprised of features that relate to dogs and cats, and if I give it a prototypical dog, it should be able to generate a pretty good reconstruction of that dog, with pretty low reconstruction loss. Now, if I gave the same example of the horse to this VAE, the latent vector that this horse would be encoded to would be incomprehensible to the decoder of the network; the decoder wouldn't know how to project that latent vector back into the original input space, and therefore we should expect to see a much worse reconstruction here, with a reconstruction loss much higher than if we gave the model a familiar input, something it was used to seeing. So now let's move on to what I think is the most exciting method of estimating epistemic uncertainty that we'll talk about today. In both of the examples before, sampling is compute-intensive, but generative modeling can also be compute-intensive: if you don't actually need a variational autoencoder for your task, then you're training an entire decoder for no reason other than to estimate the epistemic uncertainty. So what if we had a method that did not rely on generative modeling or sampling in order to estimate the epistemic uncertainty? That's exactly what a method we've developed here at Themis does. We view learning as an evidence-based process. If you remember from earlier, when we were training the ensemble and calling multiple ensemble members on the same input, we received multiple predictions and we calculated their variance. The way we frame evidential learning is: what if we assume that those predictions were actually drawn from a distribution themselves? If we could estimate the parameters of this higher-order evidential distribution, we would be able to learn this variance, this measure of epistemic uncertainty, directly, without doing any sampling or generative modeling. And that's exactly what evidential uncertainty does. So now that we have many methods in our toolbox for estimating epistemic uncertainty, let's go back to our real-world example. The input is the same as before, an RGB image of some scene in a city, and the output again is a pixel-level mask of which class every pixel in the image belongs to. Which parts of the data set
would you expect to have high epistemic uncertainty in this example take a look at the output of the model itself the model does mostly well on semantic segmentation however it gets the sidewalk wrong um it assigns some of the sidewalk to the road and other parts of the sidewalk are labeled incorrectly and we can using epistemic uncertainty we can see why this is the areas of the sidewalk that are discolored have high levels of epistemic uncertainty maybe this is because the model has never seen an example of a sidewalk with multiple different colors in it before or maybe it hasn't been trained on examples with sidewalks generally either way epistemic uncertainty has isolated this specific area of the image as an area of high uncertainty so today we've gone through two major challenges for robust steep learning we've talked about bias which is what happens when models are skewed by sensitive feature inputs and uncertainty which is when we can measure a level of confidence of a certain model now we'll talk about how themus uses these Concepts to build products that transform models to make them more risk aware and how we're changing the AI landscape in terms of safe and trustworthy AI so at Themis we believe that uncertainty and bias mitigation unlock a um a host of new solutions to solving these problems with safe and responsible AI we can use bias and uncertainty to mitigate risk in every part of the AI life cycle let's start with labeling data today we talked about aliatoric uncertainty which is a method to detect mislabeled samples to highlight label noise and to generally maybe tell labelers to re relabel images or samples that they've gotten that may be wrong in the second part of this cycle we have analyzing the data before a model is even trained on any data we can analyze the bias that is present in this data set and tell the creators whether or not they should add more samples which demographics which areas of the data set are underrepresented in the current data set before we even train a model on them and then let's go to training the model once we're actually training a model if it's already been trained on a bias data set we can de-bias it adaptively during training using the methods that we talked about today and afterwards we can also verify or certify deployed machine learning models making sure that models that are actually out there are as safe and unbiased as they claim they are and the way we can do this is by leveraging epistemic uncertainty or bias in order to calculate the samples or data points that the model will do the worst on the model has the high the most trouble learning or data set samples that are the most underrepresented in a model's data set if we can test the model on these samples specifically the hardest samples for the model and it does well then we know that the model has probably been trained in a fair and unbiased manner that mitigates uncertainty and lastly we can think about we're developing a product at Themis called AI guardian and that's essentially a layer between the artificial intelligence algorithm and the user and the way this works is this is the type of algorithm that if you're driving an autonomous vehicle would say hey the model doesn't actually know what is happening in the world around it right now as the user you should take control of this autonomous vehicle and we can apply this to spheres outside autonomy as well so you'll notice that I skipped one part of the cycle I skipped the part about building the model and that's because 
That's because today we're going to focus a little bit on Themis AI's product called Capsa, a model-agnostic framework for risk estimation. Capsa is an open-source library, and you'll actually use it in your lab today; it transforms models so that they're risk-aware. This is a typical training pipeline, which you've seen many times in the course by now: we have our data, we have the model, they're fed into the training algorithm, and we get a trained model that outputs a prediction for every input. With Capsa, by adding a single line to any training workflow, we can turn this model into a risk-aware variant that calculates bias, uncertainty, and label noise for you. As you've heard by now, there are many methods of estimating uncertainty and bias, some better than others in different situations, and it's really hard to determine what kind of uncertainty you're trying to estimate and how to do so. Capsa takes care of this for you: by inserting one line into your training workflow, you get a risk-aware model that you can then analyze further. And this is the one line I've been talking about: after you build your model, you call one of the wrappers from Capsa's extensive library, and then, in addition to receiving predictions from your model, you also receive whatever bias or uncertainty metric you're trying to estimate (a rough sketch of this pattern is shown below). The way Capsa works is by wrapping models: for every uncertainty metric we want to estimate, it applies the minimal model modifications necessary while preserving the initial architecture and predictive capabilities. In the case of aleatoric uncertainty this could be adding a new layer; in the case of a variational autoencoder this could be creating and training the decoder and calculating the reconstruction loss on the fly. Here is an example of Capsa working on one of the data sets we talked about today, the cubic data set with added noise, and on another simple classification task; the reason I wanted to show this is that with Capsa we can obtain all of these uncertainty estimates with very little additional work. Using all of the products I just talked about, Themis is unlocking the key to deploying deep learning models safely across fields. We can now answer many of the questions the headlines were raising earlier: when should a human take control of an autonomous vehicle, and what types of data are underrepresented in commercial autonomous driving pipelines? We now have educated answers to these questions thanks to products Themis is developing. In spheres such as medicine and health care, we can answer questions such as: when is a model uncertain about a life-threatening diagnosis, and should that diagnosis be passed to a medical professional before the information is conveyed to a patient? Or, what types of patients might drug discovery algorithms be biased against? Today the application you'll focus on is facial detection. You'll use Capsa in today's lab to thoroughly analyze a common facial detection data set that we've perturbed in some ways for you to discover on your own, and we highly encourage you to compete in the competition described in the lab, which is about analyzing this data set and creating risk-aware models that mitigate bias and uncertainty in that training pipeline.
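As a rough illustration of the one-line wrapper pattern described above, here is a minimal sketch. The wrapper name and exact call signature are assumptions made for illustration, not a quote of the Capsa documentation; consult the lab and the library itself for the real API.

    import tensorflow as tf
    import capsa  # Themis AI's open-source risk-estimation library

    # an ordinary Keras regression model, exactly as you would normally build it
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1),
    ])

    # hypothetical wrapper call: turn the model into a risk-aware variant that
    # also returns an uncertainty estimate (wrapper name is an assumption)
    risk_model = capsa.MVEWrapper(model)

    risk_model.compile(optimizer="adam", loss="mse")
    # risk_model.fit(x_train, y_train, epochs=10)
    # prediction, risk = risk_model(x_test)   # prediction plus an uncertainty metric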
And so at Themis our goal is to design, advance, and deploy trustworthy AI across industries and around the world. We're passionate about scientific innovation, we release open-source tools like the ones you'll use today, and our products transform AI workflows and make artificial intelligence safer for everyone. We partner with industries around the globe, and we're hiring for the upcoming summer and for full-time roles, so if you're interested please send an email to careers themesai.io or apply by submitting your resume to the deep learning resume drop, and we'll see those resumes and get back to you. Thank you.
MIT_6S191_Introduction_to_Deep_Learning
MIT_6S191_2020_Recurrent_Neural_Networks.txt
So let's get started. My name is Ava, and I'm also a co-organizer and lecturer for 6.S191. In this second lecture we're going to dive in and extend the foundations Alexander described to discuss deep sequence modeling. In the first lecture we learned the essentials of neural networks and how we can stack perceptrons to build what are called feed-forward models; we'll now turn our attention to applying neural networks to problems that involve sequential processing of data, and we'll learn why these sorts of tasks require a fundamentally different type of network architecture from what we've seen so far. To begin and really motivate the need for processing sequential data, let's consider a super simple example. Suppose we have this picture of a ball and we want to predict where the ball will travel next. If I gave you no prior information about the trajectory of the ball or its history, any guess about its next position would be exactly that: a random guess. But if, in addition to its current location, I also gave you a bit of information about its previous locations, the problem becomes much easier, and I think we can all agree that we have a very clear sense of where this ball is likely to travel next. That sets up this idea of processing and handling sequential data, and in truth sequential data is all around us. For example, audio can be split into a sequence of sound waves, and text can be split into sequences of either characters or words. Beyond these two examples that we encounter every day, there are many more cases in which sequential processing is useful, from medical signals like EKGs, to projecting stock prices, to inferring and understanding genomic sequences. Now that we have a sense of what sequential data looks like, let's look at another simple problem: predicting the next word. Suppose we have a language model, where we have a sentence or phrase and we're trying to predict the word that comes next. For example, consider this sentence: "this morning I took my cat for a walk." And yes, you heard that right: this morning I took my cat for a walk. Let's say we're given the words "this morning I took my cat for a" and we want to predict the next word, "walk." Since we're here to learn about deep learning, let's say we want to build a deep neural network, like the feed-forward network from lecture one, to do exactly this. One problem we immediately run into is that a feed-forward network can only take a fixed-length vector as its input; we have to specify the size of that input right from the start. You can imagine why this is a problem for our task: sometimes the model will see a phrase with five words, sometimes seven, sometimes ten, so our model needs a way to handle variable-length inputs. One way to do this is to use a fixed window, which means we force our input to be just a certain length, in this case two words. Given these two words, we try to predict the next one, and that means that no matter where we're trying to make the next prediction, our model will only take in the previous two words as its input.
We also have to think about how to numerically represent this data. One way is to take a fixed-length vector and allocate some space in it for the first word and some space for the second word, encoding the identity of each word in those slots. But this is really problematic, because by using a fixed window of only two words we give ourselves a very limited history for predicting the next word, which means we cannot effectively model long-term dependencies. That matters in sentences like this one, where we clearly need information from much earlier in the sentence to accurately predict the next word; if we were only looking at the past two, three, or even five words, we wouldn't be able to make that prediction, which we all know is "French." So we need a way to integrate information from the whole sentence, start to finish, while still representing it as a fixed-length input vector. One way to do this is to use the entire sequence but represent it as a set of counts over a vocabulary. This representation is commonly called a bag of words: each slot in the input vector represents a word, and the value in that slot is the number of times the word appears in the sentence. Now we have a fixed-length vector regardless of the sentence, and what differs from sentence to sentence is how the counts change, and we can feed this into our model to generate a prediction. But there's another big problem: using just counts means we lose all sequential information and all information about the prior history. Two sentences with completely opposite semantic meanings can have exactly the same representation in this bag-of-words format, because they contain exactly the same words in a different order (the short sketch below makes this concrete). So obviously this isn't going to work either. Another idea could be to simply extend the fixed window, hoping that by looking at more words we get more of the context we need. We can represent the sentence this way, feed it into our model, and generate a prediction. The problem is that if we feed this vector into a feed-forward neural network, each of these inputs, "this morning took the cat," has a separate weight connecting it to the network. If we repeatedly see the words "this morning" at the beginning of the sentence, the network may learn that "this morning" represents a time or a setting; but if "this morning" suddenly appears later in that fixed-length vector, at the end of a sentence, the network may have difficulty recognizing what it means, because the parameters covering the end of the vector may never have seen the phrase before, and the parameters from the beginning of the sentence weren't shared across the sequence. This really motivates the need to track long-term dependencies and to have parameters that can be shared across the entirety of the sequence.
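To make the bag-of-words order problem above concrete, here is a tiny self-contained illustration; the two example sentences are my own, not the ones on the lecture slides.

    from collections import Counter

    sentence_a = "the food was good not bad at all"
    sentence_b = "the food was bad not good at all"

    # identical word counts, opposite meanings: all order information is gone
    bow_a = Counter(sentence_a.split())
    bow_b = Counter(sentence_b.split())
    print(bow_a == bow_b)  # True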
So I hope this simple example of predicting the next word in a sentence motivates a concrete set of design criteria to keep in mind for sequence modeling problems. Specifically, we need models that can handle variable-length input sequences, that can track long-term dependencies in the data, that can maintain information about the sequence's order, and that share parameters across the entirety of the sequence. Today we're going to explore how we can use a particular type of network architecture, called a recurrent neural network or RNN, as a general paradigm for sequential processing and sequence modeling problems. First I'll go through the key principles behind RNNs, how they're a fundamentally different type of architecture from what Alexander introduced, and how the RNN computation actually works. Before we do that, let's take one step back and consider the standard feed-forward neural network we discussed in the first lecture. In that architecture, data propagates in one direction, from input to output, and we already motivated why a network like this can't really handle sequential data. Recurrent neural networks are particularly well suited to handling cases where we have a sequence of inputs rather than a single input. They're great for problems in which a sequence of data is propagated to give a single output: for example, training a model that takes as input a sequence of words and outputs a sentiment or emotion associated with that sequence. We can also consider cases where, instead of returning a single output, we have a sequence of inputs and generate an output at each time step in the sequence; an example is text or music generation, and you'll get the chance to explore a model like this later in the lab. Beyond these two examples, there are many other settings in which recurrent neural networks have been applied to sequential processing. So I still haven't told you what, fundamentally, a recurrent neural network is. As I mentioned, and hopefully we've hammered this home, standard neural networks go from input to output in one direction and are not able to maintain information about previous events in a sequence of events. In contrast, recurrent neural networks, or RNNs, are networks that have loops in them, which allows information to persist over time. In this diagram, at some time step denoted by t, the RNN takes in as input x_t, and at that time step it computes a value y-hat_t, which is output as the prediction of the network. In addition to that output, it computes an internal state update h_t, and it passes this information about its internal state from this time step of the network to the next. We call these networks with loops recurrent because information is being passed from one time step to the next internally within the network. So what's going on under the hood, and how is information actually passed from time step to time step? It's done by using a simple recurrence relation to process the sequential data: RNNs maintain an internal state h_t, and at each time step they apply a function f, parameterized by a set of weights W, to update this state.
The key concept here is that the state update is based both on the previous state from the previous time step and on the current input the network is receiving, and the really important point is that it's the same function f_W and the same set of parameters that are used at every time step; it's this set of parameters that we actually learn during training. To get more intuition about RNNs in a more codified manner, let's step through the RNN algorithm to get a better sense of how these networks work. This pseudocode breaks it down into a few simple steps: we begin by initializing our RNN and its hidden state, and we denote a sentence for which we're interested in predicting the next word. The RNN computation simply consists of looping through the words in the sentence, and at each time step we feed both the current word and the previous hidden state into the network, which generates a prediction for the next word in the sequence and uses this information to update its hidden state. Finally, after we've looped through all the words in the sentence, our prediction for the missing word is simply the RNN's output at that final time step, after all the words have been fed through the model. As you've seen, this RNN computation includes both the internal state update and the formal output vector, so next we'll walk through how these computations actually occur; the concept is really similar to what Alexander introduced in lecture one. Given our input vector x_t, the RNN applies a function to update its hidden state. This function is a standard neural net operation, just like in the first lecture: multiplication by a weight matrix followed by a nonlinearity. The key difference is that since we feed in both the input vector x_t and the previous state as inputs to this function, we have two separate weight matrices, and we apply the nonlinearity to the sum of the two terms that result from multiplication by those weight matrices. Finally, our output y_t at a given time step is a modified, transformed version of the internal state, resulting from multiplication by yet another, separate weight matrix. That's how the RNN updates its hidden state and produces an output. So far we've seen a depiction of RNNs as having loops that feed back on themselves, but another way to represent this is by unrolling the loop over time. If we do that, we can think of an RNN as multiple copies of the same network, where each copy passes a message on to its descendant, and the message that's passed is the internal state h_t. Chaining the RNN modules together in this chain-like structure really highlights how and why RNNs are so well suited to processing sequential data. In this representation we can make the weight matrices explicit: the weights that transform the input to the hidden state, the weights that transform the previous hidden state to the current hidden state, and finally the weights that transform the hidden state to the output.
To remind you once again, it's important to note that we use the same weight matrices at every time step. From these outputs we can compute a loss at each time step, and computing the loss completes our forward pass, our forward propagation through the network. Finally, to define the total loss we simply sum the losses from all the individual time steps, and since the loss consists of these individual contributions over time, training the network will also have to somehow involve this time component. Now that you have a sense of how RNNs are constructed and how they function, we can walk through a simple example of how to implement an RNN from scratch in TensorFlow. The RNN is defined as a layer, and we can build it by inheriting from the Layer class that Alexander introduced in the first lecture. We initialize our weight matrices and initialize the hidden state of our RNN cell to zero, and the key step is defining the call function, which describes how we make a forward pass through the network given an input x. To break down this call function: first we update the hidden state according to the equation we saw earlier, taking the previous hidden state and the input x, multiplying them by the relevant weight matrices, summing them, and passing the result through a nonlinearity, like the tanh shown here; then the output is simply a transformed version of the hidden state; and at each time step we return both the current output and the updated hidden state. And just as Alexander showed that you can define a dense layer from scratch even though TensorFlow provides a built-in Dense layer, the same applies for RNNs: TensorFlow has conveniently implemented these types of RNN cells for us, in what's called the SimpleRNN layer, and you'll get some practice using these in the lab later on. A minimal sketch of such a from-scratch cell follows.
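This sketch follows the steps just described: weight matrices for input-to-hidden, hidden-to-hidden, and hidden-to-output, a zero-initialized hidden state, and a call function that applies a tanh update. The layer sizes and initializers are placeholder choices, not the exact code from the lab.

    import tensorflow as tf

    class MyRNNCell(tf.keras.layers.Layer):
        # a minimal from-scratch RNN cell, as described above
        def __init__(self, rnn_units, input_dim, output_dim):
            super().__init__()
            self.W_xh = self.add_weight(shape=(input_dim, rnn_units))   # input -> hidden
            self.W_hh = self.add_weight(shape=(rnn_units, rnn_units))   # hidden -> hidden
            self.W_hy = self.add_weight(shape=(rnn_units, output_dim))  # hidden -> output
            self.h = tf.zeros((1, rnn_units))                           # hidden state, initialized to zero

        def call(self, x):
            # update the hidden state from the previous state and the current input
            self.h = tf.math.tanh(tf.matmul(self.h, self.W_hh) + tf.matmul(x, self.W_xh))
            # the output is a transformed version of the hidden state
            y = tf.matmul(self.h, self.W_hy)
            return y, self.h

    cell = MyRNNCell(rnn_units=4, input_dim=3, output_dim=2)
    y, h = cell(tf.random.normal((1, 3)))
    # the built-in equivalent used in the lab: tf.keras.layers.SimpleRNN(rnn_units)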
All right, our next step is how we actually develop an algorithm for training RNNs, and that algorithm is called backpropagation through time. As a reminder, we train feed-forward models by first making a forward pass through the network from input to output, and then backpropagating the gradients back through the network, taking the derivative of the loss with respect to each weight parameter and adjusting the parameters to minimize the loss. In RNNs, as we walked through earlier, the forward pass also consists of going forward across time: updating the cell state based on the input and the previous state, generating an output at that time step, computing a loss at that time step, and finally summing the losses from the individual time steps to get the total loss. That means that instead of backpropagating errors through a single feed-forward network at a single time step, in RNNs errors are backpropagated at each individual time step and then across all time steps, all the way from where we currently are back to the beginning of the sequence. This is why it's called backpropagation through time: all the errors flow back in time to the beginning of the data sequence. If we take a closer look at how gradients actually flow through this chain of repeating modules, we see that between each time step we need to perform a matrix multiplication involving the weight matrix W, and remember that the cell update also involves a nonlinear activation function. That means that computing the gradient, the derivative of the loss with respect to the parameters all the way back to our initial state, requires many repeated multiplications by this weight matrix, as well as repeated use of the derivative of the activation function, and this can be problematic in one of two ways. If many of the values involved in these repeated multiplications, such as the weight matrix or the gradients themselves, are large, greater than one, we run into what's called the exploding gradient problem, where the gradients become extremely large and we cannot optimize. One way to mitigate this is gradient clipping, which essentially amounts to scaling back large gradients so that their values are smaller; a small sketch of what this looks like in practice is shown below. We can also have the opposite problem, where the weight values or gradients are too small: this is the vanishing gradient problem, where gradients become smaller and smaller as we make these repeated multiplications, until we can no longer train the network. This is a big and very real problem when it comes to training RNNs, and today we'll go through three ways to address it: choosing our activation function, initializing our weights cleverly, and designing our network architecture to handle it efficiently.
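Here is a minimal sketch of gradient clipping in a custom TensorFlow training step, rescaling the gradients by their global norm before they are applied; the threshold of 1.0, the toy model, and the random data are arbitrary choices for illustration.

    import tensorflow as tf

    model = tf.keras.Sequential([tf.keras.layers.SimpleRNN(16), tf.keras.layers.Dense(1)])
    optimizer = tf.keras.optimizers.Adam()
    loss_fn = tf.keras.losses.MeanSquaredError()

    def train_step(x, y):
        with tf.GradientTape() as tape:
            loss = loss_fn(y, model(x))
        grads = tape.gradient(loss, model.trainable_variables)
        grads, _ = tf.clip_by_global_norm(grads, clip_norm=1.0)  # scale back large gradients
        optimizer.apply_gradients(zip(grads, model.trainable_variables))
        return loss

    x = tf.random.normal((8, 5, 3))   # batch of 8 sequences, 5 time steps, 3 features
    y = tf.random.normal((8, 1))
    train_step(x, y)

    # Keras optimizers can also clip for you, e.g. tf.keras.optimizers.Adam(clipnorm=1.0)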
To provide further motivation and intuition for why the vanishing gradient issue is a problem, consider what happens when you keep multiplying a number by something between zero and one: as you do this repeatedly, the number keeps shrinking, and eventually it vanishes. When this happens to gradients, it becomes harder to propagate errors further back into the past, because the gradients become smaller and smaller, and this ends up biasing our network to capture more short-term dependencies. That may not always be a problem: sometimes we only need very recent information to perform the task of interest. To make this concrete, consider the language-model example from the beginning of the lecture, where we're trying to predict the next word. In one phrase, if we're trying to predict the last word, it's relatively obvious what the next word will be, and there's not much of a gap between the key relevant information, like the word "clouds," and the place where the prediction is needed, so an RNN can use relatively short-term dependencies to generate the prediction. But there are other cases where more context is necessary, like this example: the more recent information suggests that the next word is most likely the name of a language, but we need the context of "France," which appears much earlier in the sentence, to fill in the gap and identify which language is correct. As the gap between what is semantically important grows, standard RNNs become increasingly unable to connect the dots and link the relevant pieces of information together, and that's because of the vanishing gradient problem. So how can we alleviate this? The first trick is pretty simple: we can choose the activation function the network uses. Both the tanh and sigmoid activation functions have derivatives that are less than one (for tanh, the derivative reaches one only at x = 0). In contrast, if we use the ReLU activation function, the derivative is one whenever x is greater than zero, and this helps prevent the derivative f' from shrinking the gradients; keep in mind, though, that ReLU only has a gradient of one when x is greater than zero, and zero otherwise, which is another significant consideration. Another simple trick is to be smart about how we initialize the weights, the parameters of our network: it turns out that initializing the weights to the identity matrix helps prevent them from shrinking to zero too rapidly during backpropagation. The final and most robust solution is to use a slightly more complex recurrent unit that can more effectively track long-term dependencies in the data by controlling what information is passed through and what information is used to update its internal state. This is the concept of a gated cell. Many types of gated cells and architectures exist, and today we're going to focus on one type called the long short-term memory network, or LSTM, which is really well suited to learning long-term dependencies and overcoming the vanishing gradient problem. LSTMs work well on many different types of tasks and are extremely widely used by the deep learning community, so hopefully this gives you a good overview and a concrete sense of why they're so powerful. Okay, so to understand why LSTMs are special, let's think back to the general structure of an RNN. This depiction is slightly different from what I showed you before, but it reinforces the idea that all recurrent neural networks have the form of a series of repeating modules; the inner workings of the RNN are drawn as lines that depict how information flows within the RNN cell. If we break this down, in a standard RNN we have a single neural net layer, in this case a tanh layer, performing the computation: the cell state h_t is a function of the previous cell state h_{t-1} and the current input x_t, and at each time step we also generate an output y_t, which is a transformation of the internal RNN state. LSTMs also have this chain-like structure, but now the repeating module inside the cell is slightly more complex: it contains different interacting layers, and while I don't want you to get too bogged down in the details of those computations, the key point is that these layers interact to selectively control the flow of information within the cell. We'll walk through how these layers work and how they enable LSTMs to track and store information over many time steps.
The key building block behind the LSTM is a structure called a gate, which enables the LSTM to selectively add or remove information from its cell state. Gates consist of a neural net layer, like the sigmoid shown in yellow, followed by a pointwise multiplication, shown in red. If we take a moment to think about what a gate like this may be doing: the sigmoid function forces its input to be between zero and one, and intuitively you can think of this as capturing how much of the information passed through the gate should be retained, between nothing (zero) and everything (one); this is effectively gating the flow of information. LSTMs process information through four simple steps, and if there's anything you take away from this lecture I hope it's this: they first forget their irrelevant history; they then perform computation to store the relevant parts of new information; they use these two steps together to selectively update their internal state; and finally they generate an output. Forget, store, update, output. To walk through this a little more: the first step in the LSTM is to decide what information is going to be thrown away from the cell state, to forget irrelevant history, and that's a function of both the prior internal state h_{t-1} and the input x_t, because some of that information may not be important. Next, the LSTM decides what part of the new information is relevant and stores it into its cell state. It then takes the relevant parts of both the prior information and the current input and uses them to selectively update its cell state. Finally, it returns an output via what's known as the output gate, which controls what information encoded in the cell state is sent to the network as input at the next time step. To re-emphasize, I don't want you to get too stuck on the details of these computations in the mathematical sense; the intuition and key takeaway about LSTMs is the sequence in which they regulate information flow and storage: forgetting irrelevant history, storing what's new and important, using that to update the internal state, and generating an output. This hopefully gives you a sense of how LSTMs can selectively control and regulate the flow of information, but how does this actually help us train the network? An important property of LSTMs is that all of these gating and update mechanisms work together to create an internal cell state C that allows for the uninterrupted flow of gradients through time; you can think of this as a highway of cell states where gradients can flow uninterrupted, shown here in red, and this enables us to alleviate and mitigate the vanishing gradient problem seen with vanilla or standard RNNs. For reference, the standard gate equations are written out below.
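The lecture deliberately skips the algebra, but for completeness here is the textbook LSTM update for a single time step, written as a small sketch to make the forget / store / update / output story concrete; the stacked parameter layout and the toy usage at the bottom are my own choices, not the lecture's code.

    import tensorflow as tf

    def lstm_step(x_t, h_prev, c_prev, W, U, b):
        # W: (input_dim, 4*units), U: (units, 4*units), b: (4*units,)
        z = tf.matmul(x_t, W) + tf.matmul(h_prev, U) + b
        f, i, g, o = tf.split(z, num_or_size_splits=4, axis=-1)
        f = tf.sigmoid(f)             # forget gate: how much past state to discard
        i = tf.sigmoid(i)             # input gate: how much new information to store
        g = tf.math.tanh(g)           # candidate values for the cell state
        o = tf.sigmoid(o)             # output gate: how much of the state to reveal
        c_t = f * c_prev + i * g      # selectively update the cell state
        h_t = o * tf.math.tanh(c_t)   # output a filtered version of the state
        return h_t, c_t

    units, input_dim = 8, 3
    x = tf.random.normal((1, input_dim))
    h = tf.zeros((1, units)); c = tf.zeros((1, units))
    W = tf.random.normal((input_dim, 4 * units))
    U = tf.random.normal((units, 4 * units)); b = tf.zeros((4 * units,))
    h, c = lstm_step(x, h, c, W, U, b)
    # in practice you would simply use tf.keras.layers.LSTM(units)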
A question came up about forgetting irrelevant information, and this goes back to the question asked a little earlier. If you think back to the example of the language model where you're trying to predict the next word, there may be some words very early in the sequence that carry a lot of content and semantic meaning, for example "France," while other words are superfluous and don't carry much semantic meaning. Over the course of training, you want your LSTM to learn which bits of prior history carry more meaning and are important for the problem of predicting the next word, and to discard what is not relevant, to enable more robust training. Okay, so to review the key concepts behind LSTMs: when Alexander said this is a bootcamp course, he was not joking around, things go quickly, so I hope that by providing these periodic summaries we can distill the material into the key concepts and takeaways we want you to have at the end of each lecture, and ultimately at the end of the course. For LSTMs, let's break it down: LSTMs maintain a separate cell state, independent of what is outputted, and they use gates to control the flow of information by forgetting irrelevant history, storing relevant new information, selectively updating their cell state, and outputting a filtered version as the output. The key point in terms of training an LSTM is that maintaining this separate, independent cell state allows for efficient training through backpropagation through time. All right, now that we've gone through the fundamental workings of RNNs, the backpropagation-through-time algorithm, and a bit about the LSTM architecture in particular, I'd like to consider a few concrete examples of how RNNs can be used and some of the amazing applications they enable. Imagine we're trying to train an RNN that can predict the next musical note in a sequence of music, and to use this to generate brand-new musical sequences. You can think of this as inputting a sequence of notes, where the output of the RNN at each time step is a prediction of what it thinks the next note in the sequence will be. We can train this network, and after training we can sample from it to generate a brand-new musical sequence that has never been heard before. For example, here's a case where an RNN was trained on the music of my favorite composer, Chopin, and the sample I'll play for you was completely generated by this AI. [Music] That little sample sounds pretty realistic, right? You'll actually get some practice doing this in today's lab, where you'll train an RNN to generate brand-new Irish folk songs that have never been heard before, and there's a competition for who has the sweetest tunes at the end of the lab, so we encourage you to try your best. Okay, as another cool example, we can go from an input sequence to just a single output. An example of this is training an RNN that takes as input the words in a sentence and outputs the sentiment, the feeling or emotion of that sentence, which can be positive or negative. If we train a model like this on a bunch of tweets sourced from Twitter, we can train our RNN to predict that a tweet about our class, 6.S191, has a positive sentiment, but that another tweet wishing for cold weather and snow has a negative sentiment. Another example I'll mention quickly is one of the most powerful and widely used applications of RNNs in industry, and it's the backbone of Google Translate: machine translation, where you input a sequence in one language and the task is to train the RNN to output that sequence in a new language.
This is done with a dual structure: an encoder, which encodes the original sentence in its original language into a state vector, and a decoder, which takes that encoded representation as input and decodes it into the new language. There's a key problem with this approach, though: the entire content fed into the encoder has to be encoded into a single vector, and this can be a huge information bottleneck in practice, since you may have a large body of text to translate. To get around this problem, the researchers at Google were very clever and developed an extension of RNNs called attention. The idea is that instead of the decoder only having access to the final encoded state, it can access the states from all the time steps in the original sentence, and the weighting of the vectors that connect the encoder states to the decoder is learned by the network during training. It's called attention because when the network learns this weighting, it places its attention on different parts of the input sentence, and in this sense it very effectively captures a sort of memory access to the important information in the original sentence. With building blocks like attention and gated cells like LSTMs, RNNs have really taken off in recent years and are being used in the real world for many complex and impactful applications. For example, consider autonomous vehicles: at any moment in time, an autonomous vehicle like a self-driving car needs to understand not only where every object in its environment is, but also where those objects may move in the future. This is an example of a self-driving car from Waymo that encounters a cyclist on its right side, denoted in red; the cyclist is approaching a stopped vehicle, and the Waymo car recognizes that it's very likely the cyclist will cut into its lane. Before the cyclist begins to cut the car off, as you can see right there, the Waymo vehicle pulls back and slows down, allowing the cyclist to enter. Another example of deep sequence modeling is environmental modeling and climate pattern analysis and prediction: this is a visualization of the predicted patterns for different environmental markers, ranging from wind speed to humidity to levels of particulate matter in the air, and effectively predicting the future behavior of these markers could be key in projecting and planning for the climate impact of particular human interventions or actions. So in this lecture, hopefully you've gotten a sense of how RNNs work and why they are so powerful for processing sequential data. We discussed why and how we can use RNNs to perform sequence modeling tasks by defining a recurrence relation, how we can train RNNs, and how gated cells like LSTMs can help us model long-term dependencies, and finally we discussed some applications of RNNs, including music generation. That leads in very nicely to the next portion of today, which is our software lab. Today's lab is broken into two parts: first a crash course in TensorFlow that covers all the fundamentals, and then you'll move into actually training RNNs to perform music generation.
For those of you who will stay for the labs, the instructions are up here, and Alexander, myself, and all the TAs will be available to assist and field questions as you work through them. Thank you very much. [Applause]
MIT_6S191_Introduction_to_Deep_Learning
MIT_6S191_2020_Neural_Rendering.txt
Thanks a lot for having me here. Today I'm going to talk about neural rendering, and because rendering is such a massive topic, let me start with some clarifications. As far as this lecture goes, rendering can be both a forward process and an inverse process. Forward rendering computes an image from some 3D scene parameters, such as the shape of the objects, the color of the objects, the surface material, the light sources, and so on; forward rendering has been one of the major focuses of computer graphics for many years. The opposite of this problem is inverse rendering: given some images, we try to work out what 3D scene was used to produce that imagery. Inverse rendering is closely related to computer vision, with applications such as 3D reconstruction and motion capture, and forward and inverse rendering are increasingly related, because the high-level representation a vision system should produce looks a lot like the representations used in computer graphics. In this lecture I'm going to talk about how machine learning can be used to improve the solutions to both of these problems. But before we dive into neural networks, let's first take a very quick tour of the conventional methods. This is a toy example of ray tracing, a widely used forward rendering technique. Imagine you are inside a cave: the red bar is a light source and the grid is the image plane. Ray tracing works by shooting rays from an imaginary eye through every pixel in the image grid, and it tries to compute the color of the object that you can see through each ray. In this case the ray directly hits the light source, so we use the color of the light source to color the pixel. More often than not, however, a ray will hit some object surface before ever bouncing to the light source, and in that case we need to calculate the color of the surface. The color of the surface can be computed as an integral of the incident radiance, but this is very difficult to do analytically, so what people normally do is use Monte Carlo sampling, which generates random rays within the integration domain and then computes an average of their radiance as an approximation of the integral. We can also change the sampling function to approximate the surface material; this is how we can make a surface look more glossy or rough. Most of the time a ray will bounce multiple times before hitting the light source, and this quickly develops into a recursive problem that is very expensive to solve. Many other ray tracing techniques have been invented to deal with this, which I'm not going to cover here, but the general concern is that ray tracing with Monte Carlo sampling is super expensive, because of the very high estimator variance and the low convergence rate: for a complex scene like this you need hundreds of millions or maybe billions of rays to render. So the question we ask is whether machine learning can be used to speed up this process, and the answer, as we'll see later in the lecture, is yes. A toy sketch of the Monte Carlo shading estimate follows.
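Here is a small, self-contained sketch of the Monte Carlo shading estimate just described: average the incoming radiance over randomly sampled directions, weighted by the sampling probability. It is a toy (it omits the BRDF and cosine terms, and the light and sampler are stand-ins I made up), but it shows why the error only shrinks as the square root of the sample count.

    import math, random

    def sample_direction(normal):
        # stand-in sampler: uniform over the hemisphere above the surface (pdf = 1 / 2pi)
        while True:
            d = [random.uniform(-1, 1) for _ in range(3)]
            n = math.sqrt(sum(v * v for v in d))
            if 0 < n <= 1:
                d = [v / n for v in d]
                if sum(a * b for a, b in zip(d, normal)) > 0:
                    return d, 1.0 / (2.0 * math.pi)

    def incoming_radiance(point, direction):
        # stand-in for tracing a ray and returning the radiance it carries
        return max(direction[2], 0.0)  # pretend the light comes from above

    def shade(point, normal, num_samples=64):
        # Monte Carlo estimate of the shading integral: average of radiance / pdf
        total = 0.0
        for _ in range(num_samples):
            d, pdf = sample_direction(normal)
            total += incoming_radiance(point, d) / pdf
        return total / num_samples  # noisy for small num_samples, hence the denoising later

    print(shade(point=(0, 0, 0), normal=(0, 0, 1)))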
Before we get to that, let's quickly switch to the inverse problem for a moment. This is the classical shape-from-stereo problem, where you have two images of the same object and you are trying to work out its 3D shape. You do this by first finding corresponding features across the two images; then you can compute the camera motion between the two photos, which is usually parameterized by a rotation and a translation. With this camera motion we can work out the 3D locations of these features by triangulation, more cameras can be brought in to improve the result, and modern methods can scale up to work with thousands or even hundreds of thousands of photos, which is truly amazing. However, the output of such computer vision systems is often noisy and sparse, while in contrast computer graphics applications need very clean data with nice, razor-sharp details, so oftentimes humans have to step in and clean up the result, and sometimes we even need to handcraft things from scratch. Every time you hear the word "handcraft" nowadays, it's a strong signal that machine learning could step in and automate the process. So in the rest of the lecture I'm going to talk about how neural networks can be used as a sub-module and as an end-to-end pipeline for forward rendering, and also about how neural networks can be used as a differentiable renderer, which opens the door to many interesting inverse applications. Let's start with the forward rendering process. As we mentioned before, Monte Carlo sampling is very expensive: in this example, on the top left we have a noisy rendering with one sample ray per pixel, and the number of samples doubles from left to right and from top to bottom. As you can see, the result improves, but at the same time the computational cost also increases. I'm going to make a very quick analogy here. Most of you should be familiar with AlphaGo by now, which uses a policy network and a value network to speed up Monte Carlo tree search. For those who skipped some of the previous lectures: the value network takes an input board position and predicts a scalar value, the winning probability, and in essence it reduces the depth of the tree search; the policy network takes a board position as input and outputs a probability distribution over the next move, and in essence it reduces the breadth of the search. The analogy I'm trying to make is that we can also use a policy network and a value network to speed up Monte Carlo sampling for rendering. For example, we can use a value network to denoise a rendering with a low number of samples per pixel, trying to predict the correct pixel values from noisy input; as for the policy network, we can use a network to generate a useful policy that samples rays smartly, so the whole rendering converges faster. Let's first take a look at the value-based approach. This is our recent work on denoising Monte Carlo renderings. On the left we have a noisy input rendered at four samples per pixel; in the middle is the denoised result; on the right is the ground truth reference image, rendered with 32K samples per pixel, which takes about 90 minutes to render on a CPU. In contrast, the denoised result takes only about a second to run on a commodity GPU, so there's a very good trade-off between speed and quality. The whole network is trained end to end as an autoencoder with two losses: the first is an L1 loss on the VGG features of the output image, and the second is a GAN loss, whose job is obviously to try to retain the details in the output image; this is a side-by-side comparison between the results of training with and without the GAN loss (a rough sketch of such a combined loss is given below). Denoising natural images has been studied for a long time, but denoising Monte Carlo renderings has some very unique properties. The first is that we can separate the diffuse and specular components, run them through different paths of the network, and then merge the results together, and this tends to improve the result a lot.
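As a hedged sketch of the two-part denoiser loss described above, here is one way an L1 loss on VGG features plus an adversarial term could be written; the choice of VGG layer, the loss weighting, and the assumption that the discriminator outputs probabilities on suitably preprocessed images are all my own, not the paper's exact configuration.

    import tensorflow as tf

    vgg = tf.keras.applications.VGG19(include_top=False, weights="imagenet")
    feature_extractor = tf.keras.Model(vgg.input, vgg.get_layer("block3_conv3").output)

    def denoiser_loss(denoised, reference, discriminator, adv_weight=0.01):
        # L1 distance between VGG feature maps of the denoised and reference images
        feat_loss = tf.reduce_mean(tf.abs(feature_extractor(denoised) -
                                          feature_extractor(reference)))
        # adversarial term: reward the denoiser when the discriminator believes
        # its output is a real, high-sample-count rendering
        d_out = discriminator(denoised)
        adv_loss = tf.reduce_mean(
            tf.keras.losses.binary_crossentropy(tf.ones_like(d_out), d_out))
        return feat_loss + adv_weight * adv_loss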
Second, there are some very inexpensive by-products of the rendering pipeline that we can use to further improve the result, such as the albedo map, the normal map, and the depth. These by-products can be used as auxiliary features that give the denoiser context about what the noise should be conditioned on; however, how to feed these auxiliary features into the pipeline is still pretty much an open research question. The way we did it in this paper is something called element-wise biasing and scaling. Element-wise biasing takes the auxiliary features, runs them through a number of layers, and then adds the result element-wise to the input feature x, which is equivalent to feature concatenation. Element-wise scaling performs an element-wise multiplication between the auxiliary features and the input feature x. The argument for having both scaling and biasing is that they capture different aspects of the relationship between the two inputs: you can think of element-wise biasing as a sort of OR operator, which checks whether a feature is present in either of the two inputs, while element-wise scaling is an AND operator, which checks whether a feature is present in both. By combining them, the auxiliary features can be utilized in a better way (a small sketch of this conditioning appears below). Here are our results, taking noisy input rendered at four samples per pixel and comparing our method with alternative methods; in general our method has less noise and more detail. Now let's move on to the policy-based approach. I'm not going to cover the entire literature here, but I want to point you to a very recent work from Disney Research called neural importance sampling. The idea is that we want to find, for each location in the scene, a very good policy that helps us sample rays smartly and reduce the convergence time, and in practice the best possible policy is actually the incident radiance map at that point, because it literally tells you where the light comes from. So the question is whether we can generate this incident radiance map from some local surface properties through a neural network, and the answer is yes: just as we can nowadays generate images from random input noise, we can also train a generative network that produces the incident radiance map from local surface properties such as the location, the direction of the incoming ray, and the surface normal. The catch is that this mapping from local surface properties to the incident radiance map varies from scene to scene, so the learning has to be carried out online, during the rendering process: the network starts from essentially random policies and gradually learns the scene structure, so it is able to produce better and better policies. In the result, on the left is conventional ray tracing and on the right is ray tracing with neural importance sampling, which converges much faster, as you can see here. Okay, so far we have been talking about how neural networks can be used as a sub-module of forward rendering; next I'm going to talk about how we can use neural networks as an end-to-end pipeline. Remember that ray tracing starts by casting rays from the pixels into the 3D scene; this is a so-called image-centric approach, and it's actually quite difficult for a neural network to learn, because first of all it's recursive, and second we need to do sampling, which is very difficult to handle analytically.
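Here is a minimal sketch of the element-wise scaling and biasing idea described above, using the auxiliary features to produce a per-element scale and bias for the noisy-image features; the 1x1 convolutions, channel counts, and activation are placeholder choices rather than the paper's architecture.

    import tensorflow as tf

    def condition_features(x, aux, channels):
        # scale ("AND"-like) and bias ("OR"-like) the image features x using the
        # auxiliary features aux (e.g. albedo, normal, depth)
        scale = tf.keras.layers.Conv2D(channels, 1, activation="sigmoid")(aux)
        bias = tf.keras.layers.Conv2D(channels, 1)(aux)
        return x * scale + bias

    x = tf.random.normal((1, 64, 64, 32))    # features of the noisy rendering
    aux = tf.random.normal((1, 64, 64, 7))   # e.g. albedo (3) + normal (3) + depth (1)
    y = condition_features(x, aux, channels=32)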
In contrast, there's another way of doing rendering, called rasterization, which is object-centric: for every 3D point, you shoot a ray toward the image, and you only need to shoot one primary ray, so there's no recursion and you don't need to do any sampling. This turns out to be much easier for a neural network to learn. Rasterization contains two main steps. The first step is, for every 3D primitive, to project the primitive onto the 2D image plane and superimpose the projections on one another based on their distance to the image, so that the front-most surface is always the one visible in the final rendering. The next step is to compute the shading: basically, the pixel color is calculated by interpolating the color of the 3D primitives, such as the vertex colors. In general rasterization is faster than ray tracing, and as mentioned before it's easier for a neural network to learn, because it has no recursion or sampling process (a toy sketch of these two steps, using a depth buffer, is given below). That sounds great, but there's another catch, which is the input data format. Here are the mainstream 3D formats: depth maps, voxels, 3D point clouds, and meshes, and some of them are not very friendly to neural networks. Let's start with the depth map. This is probably the easiest one, because all you need to do is change the number of input channels of the first layer and then you can run your favorite neural networks with it; it's also very memory efficient, because today's accelerators are designed to run on images. Another reason the depth map is convenient is that we do not need to compute visibility, because everything in a depth map is already the front-most surface, so all we need to do is compute the shading. There are many works on rendering depth maps into images and the other way around, which I'm not going to talk about in this lecture. Let's move on to voxels. Voxels are also fairly friendly to neural networks, because the data is arranged in a grid structure; however, voxels are very memory-intensive, actually an order of magnitude more than image data, so conventional neural networks can only work with voxels of very low resolution. What makes voxels very interesting to us is the need to compute both visibility and shading, which is a very good opportunity to learn an end-to-end pipeline for neural rendering. So we built this end-to-end neural voxel renderer called RenderNet. It starts by transforming the input voxel into the camera coordinate frame. I'd like to quickly add here that such a 3D rigid-body transformation is something we actually do not want the network to learn, because it is very easy to do with a coordinate transformation but very hard for convolutions to perform; we'll come back to this later. Having transformed the input into the camera frame, the next step is to learn a neural voxel representation of the 3D shape: we pass the input voxel through a sequence of 3D convolutions, and the output neural voxel contains deep features that will be used for computing the shading and the visibility. Next comes visibility. One might be tempted to say, okay, we can use the standard depth-buffer algorithm here, but it turns out this is not so easy.
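For reference, here is a toy version of the classical projection plus depth-buffer (z-buffer) rasterization the speaker alludes to: project each 3D point onto the image plane, keep only the nearest point per pixel, and use its color as the shading. The pinhole camera, focal length, and random point cloud are placeholder choices.

    import numpy as np

    def rasterize_points(points, colors, H=64, W=64, f=60.0):
        image = np.zeros((H, W, 3))
        zbuf = np.full((H, W), np.inf)      # depth buffer: nearest depth seen per pixel
        for (x, y, z), c in zip(points, colors):
            if z <= 0:
                continue                    # behind the camera
            u = int(W / 2 + f * x / z)      # simple pinhole projection
            v = int(H / 2 + f * y / z)
            if 0 <= u < W and 0 <= v < H and z < zbuf[v, u]:
                zbuf[v, u] = z              # this point is now the front-most one
                image[v, u] = c             # "shading" here is just copying the vertex color
        return image

    points = np.random.uniform([-1, -1, 2], [1, 1, 4], size=(5000, 3))
    colors = np.random.uniform(0, 1, size=(5000, 3))
    img = rasterize_points(points, colors)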
When you apply these 3D convolutions, you distribute feature values throughout the entire voxel grid, so it's no longer clear which grid cells correspond to the front-most surface; at the same time, since every voxel now contains deep features, you have to integrate across all of these channels to compute the visibility. To deal with this problem we use something called a projection unit. The projection unit first takes the 4D tensor of the neural voxel and reshapes it into a 3D tensor by squeezing the last two dimensions, the depth dimension and the feature-channel dimension; then a learned multi-layer perceptron computes visibility from the squeezed last channel. At a higher level, you can think of this multi-layer perceptron as a network that learns to integrate visibility along the depth, across the features. The last step is to use a sequence of 2D up-convolutions to render the projected neural voxel into a picture, and we train this network end to end with a mean-squared pixel loss. Here are some results: the first row is the input voxels, the second row is the output of RenderNet. As you can see, RenderNet is able to learn how to compute visibility and shading. You can also train it to generate different rendering effects, such as contour maps, toon shading, and ambient occlusion. In terms of generalization performance, we can use a RenderNet trained on chair models to render unseen objects such as the bunny, and scenes with multiple objects. It can also handle data with corruption and low resolution: the first row renders an input shape that is heavily corrupted, and the second row renders an input shape whose resolution is 50% lower than the training resolution. RenderNet can also be used to render textured models. In this case we learn an additional texture network that encodes the input texture into neural texture voxels; these neural texture voxels are concatenated channel-wise with the neural shape voxel, and the concatenated voxel is fed into the network to render. Here are some results of rendering textured models: the first row is the input voxel, the second row is the ground-truth reference image, and the third row is our result from RenderNet. The ground-truth image obviously has sharper details, but in general RenderNet is able to capture the major facial features and compute the visibility and shading correctly. As a final experiment, we tried to mix and match the shape and the texture: in the first row, the images are rendered with the same shape but different textures; in the second row, they are rendered with the same texture but different shapes.
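Before moving on, here is a minimal sketch of the projection unit described above: collapse the depth and feature dimensions of the neural voxel and let a learned per-pixel MLP integrate visibility and features along each viewing ray. The layer size and voxel dimensions are placeholder choices, not RenderNet's actual configuration.

    import tensorflow as tf

    def projection_unit(neural_voxel, out_channels=32):
        # neural_voxel: (batch, depth, height, width, channels)
        b, d, h, w, c = neural_voxel.shape
        x = tf.transpose(neural_voxel, [0, 2, 3, 1, 4])       # (B, H, W, D, C)
        x = tf.reshape(x, (b, h, w, d * c))                   # squeeze depth and feature dims
        x = tf.keras.layers.Dense(out_channels, activation="relu")(x)  # per-pixel MLP
        return x                                              # 2D feature map, ready for 2D up-convolutions

    voxel = tf.random.normal((1, 32, 32, 32, 8))
    feat2d = projection_unit(voxel)   # shape (1, 32, 32, 32)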
Okay, now let's move on to 3D point clouds. Point clouds are actually not so friendly to neural networks: first of all, the data are not arranged on a grid structure, and in addition, depending on how you sample points on a surface, the number of points and the order of the points can also vary. I want to quickly point out a recent work called Neural Point-Based Graphics. Before we talk about it, let's first recall how conventional rasterization is done for 3D point clouds: for every point in the 3D scene we project the point onto the image as a small square, the size of the square being roughly inversely proportional to the distance between the point and the image, and we superimpose the projected squares on top of each other based on depth. We then color the squares using the RGB colors of the 3D points. However, if we do this there are a lot of holes in the result, and at the same time you can see a lot of blocky color patches. What Neural Point-Based Graphics does is replace the RGB color with a learned neural descriptor. This neural descriptor is an 8-dimensional vector associated with each input point; you can think of it as a deep feature that compensates for the sparsity of the point cloud. This is a visualization of the neural descriptors using their first three principal components. We start by randomly initializing these neural descriptors for each point in the scene and then optimize them for that particular scene; obviously you cannot use the descriptors of one scene to describe another scene, so this optimization has to be done in both the training and the testing stage, once for each scene. The authors then use an encoder-decoder rendering network to turn the projected neural descriptors into a photorealistic image. This rendering network is trained jointly with the optimization of the neural descriptors, but it can be reused at test time. These are some results, which I think are really amazing: the first row is the rendering with conventional RGB colors, and the second row is the rendering with the neural descriptors. As you can see there are no holes and the result is in general much sharper. The very cool thing about this method is that the neural descriptors are trained to be view-invariant, meaning that once they are optimized for a scene you can render that scene from different angles, which is really cool.

Last, we have mesh models, which are difficult for neural networks because of their graph-like representation. I just want to quickly point out two papers. The first one, on deferred neural rendering, uses a very similar idea to the neural point-based graphics we just talked about, but applies it to mesh models. The other paper is the Neural 3D Mesh Renderer, which is cool in that you can even do 3D style transfer with it; however, the neural network in this method is mainly used to change the vertex colors and positions rather than to do the rendering itself, but I put the reference here for people who are interested.

So far we have been talking about forward rendering; let's now move on to inverse rendering. Inverse rendering is the problem where, given some input image, we want to work out the 3D scene that was used to generate this image. The particular approach I'm going to talk about today is called differentiable rendering, and it works as follows. First we start from a target image; then we generate some kind of approximation of the 3D scene. This approximation does not have to be very good; as long as we can render it, we can compare the result with the target image, define some metric to measure the difference between the rendered image and the target image, and, if the rendering process is differentiable, back-propagate that loss to update the input model. If we do this iteratively, the hope is that the input will converge to something meaningful. The key point is that the forward rendering process has to be differentiable, so that we can compute gradients through it with back-propagation.
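As a rough sketch of that loop, here is what the iterative optimization could look like in PyTorch, assuming we already have some differentiable renderer; the function name differentiable_render and the idea of packing the whole scene into a single parameter tensor are simplifications for illustration.

import torch

def invert_image(target, scene_params, differentiable_render, steps=500, lr=1e-2):
    """Analysis-by-synthesis: iteratively update a rough scene estimate so that
    its rendering matches a target image. `differentiable_render` is any
    renderer built from differentiable operations (e.g., a neural renderer)."""
    scene_params = scene_params.clone().requires_grad_(True)
    opt = torch.optim.Adam([scene_params], lr=lr)

    for _ in range(steps):
        rendered = differentiable_render(scene_params)   # forward rendering
        loss = torch.mean((rendered - target) ** 2)      # image-space difference metric
        opt.zero_grad()
        loss.backward()                                   # gradients flow through the renderer
        opt.step()                                        # update the scene estimate
    return scene_params.detach()

In practice the scene parameters could be vertex positions, voxel occupancies, or the latent code of a generative model, and the image metric could be more sophisticated than a pixel-wise squared error.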
This is where we immediately see the value of neural networks, because modern neural networks are designed to be differentiable, designed to perform back-propagation, so we get the gradients for free. Another reason neural networks are helpful here is that, as you can imagine, this iterative optimization process is going to be very expensive, so what we can do with a neural network is learn a feed-forward process that approximates the iterative optimization. For example, we can learn an autoencoder that encodes the input image into some sort of latent representation, which enables some really interesting downstream tasks such as novel view synthesis. However, in order for that encoder to learn a useful representation, we need to use the correct inductive bias, and one inductive bias that I find very exciting is that learning can be a lot easier if we can separate the pose from the appearance.

I truly believe this is something humans do. This is my four-year-old son playing a shape puzzle: the task is to build a complex shape out of basic shape primitives such as triangles and squares. To do this he has to apply 3D rigid-body transformations to these primitive shapes in order to match what is required on the board, and it's amazing that humans can do this rather effortlessly. This, however, is something that neural networks struggle with. For example, 2D convolution, or convolution in general, is a local operation, so there is no way it can carry out this kind of global transformation. Fully connected layers might be able to do it, but at the cost of network capacity, because they have to memorize all the different configurations of the same object. So we asked the question: how about we just use a simple coordinate transformation to encode the pose of the object and separate the pose from the appearance, and will that make the learning easier?

We tried this idea in a model called HoloGAN, which learns a 3D representation from natural images without 3D supervision. By without 3D supervision I mean there is no 3D data and no ground-truth label for the pose of the object in the image during training; everything is learned purely from unlabeled 2D data. The cool thing about this idea is that the learning is driven by inductive bias rather than by supervision. Let's first take a look at how conventional generative networks work. Conventional generative networks generate images using 2D convolutions, with very few assumptions about the 3D world; for example, they condition the GAN by concatenating pose vectors or by applying feature-wise transformations to control the pose of the object in the generated images. Unless ground-truth pose labels are used during training, the pose has to be learned as a latent variable, which is hard to interpret, and at the same time using 2D convolutions to generate 3D motion creates artifacts in the results. In contrast, HoloGAN generates much better results by separating the pose from the appearance. These are some random faces generated by HoloGAN, and I'd like to emphasize again that no 3D data is used in the training process. The key point is that HoloGAN uses a neural voxel as its latent representation; to learn such neural voxels we use a 3D generator network, and the learned 3D voxels are rendered by the RenderNet we just talked about.
The 3D generator is basically an extension of StyleGAN into 3D. It has two inputs. The first is a learned constant 4D tensor, which acts as a sort of template for a particular class of objects; this tensor is run through a sequence of 3D convolutions to become the neural voxel representation. The second input is a random vector used as a controller, which is first transformed into the affine parameters of the adaptive instance normalization layers throughout the pipeline. The learned 3D voxel representation is then, as I said before, rendered by RenderNet. To train this network in an unsupervised way we use a discriminator network to classify the rendered images against real images. The key here is that it is crucially important that during training we apply random rigid-body transformations to this voxel representation: this is how the inductive bias is injected into the learning process, because the network is forced to learn a representation that is strong enough to render well under arbitrary poses. In fact, if we do not apply random transformations during training, the network is not able to learn.

Here are some results. As you can see, HoloGAN is fairly robust to view changes and also to complex backgrounds. One limitation of HoloGAN is that it can only learn poses that exist in the training dataset; for example, in the face dataset there is very little variation in the elevation direction, so the network cannot extrapolate. However, when enough variation is present in the training data, the network does learn it: for example, when we use ShapeNet to generate more poses for chairs, the network is able to do a 180-degree rotation in elevation. We also tried the network on some really challenging data, for example a bedroom dataset, which is very challenging because there is very strong appearance variation across the dataset; you can hardly find two bedrooms that look like each other from different views, so the pose signal in the dataset is very weak. Even so, the network is still able to generate something reasonable, which I find really interesting. Another surprise is that the network is able to further decompose the appearance into shape and texture. As a test, we feed in two different control vectors, one to the 3D part of the network and the other to the 2D part, and it turns out that the controller fed into the 3D part controls the shape while the controller fed into the 2D part controls the texture. In these results, every row uses the same texture controller but a different shape controller, and every column uses the same shape controller but a different texture controller. I think this is truly amazing, because it is reminiscent of the vertex shader and the fragment shader in a conventional graphics pipeline, where the vertex shader changes the geometry and the fragment shader does the coloring.
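Here is a highly simplified PyTorch sketch of the two ideas just described: a learned constant template modulated by adaptive instance normalization, and an explicit random rigid-body transform applied to the voxel features. It is an illustration under my own assumptions, not the authors' code; the class names, layer sizes, and single-block depth are all invented for the example.

import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaIN3d(nn.Module):
    """Adaptive instance norm: a latent code z sets the scale/shift of 3D features."""
    def __init__(self, channels, z_dim):
        super().__init__()
        self.norm = nn.InstanceNorm3d(channels)
        self.affine = nn.Linear(z_dim, 2 * channels)

    def forward(self, x, z):
        scale, shift = self.affine(z).chunk(2, dim=1)
        scale = scale[..., None, None, None]
        shift = shift[..., None, None, None]
        return self.norm(x) * (1 + scale) + shift

class Generator3D(nn.Module):
    def __init__(self, z_dim=128, c=64):
        super().__init__()
        self.template = nn.Parameter(torch.randn(1, c, 4, 4, 4))   # learned class template
        self.conv1 = nn.ConvTranspose3d(c, c, 4, stride=2, padding=1)
        self.adain1 = AdaIN3d(c, z_dim)

    def forward(self, z, theta):
        x = self.template.expand(z.shape[0], -1, -1, -1, -1)
        x = F.leaky_relu(self.adain1(self.conv1(x), z))             # style-modulated 3D features
        # inject the inductive bias: an explicit random rigid-body transform of the voxels,
        # with theta a (B, 3, 4) rotation/translation matrix sampled anew at each training step
        grid = F.affine_grid(theta, list(x.shape), align_corners=False)
        x = F.grid_sample(x, grid, align_corners=False)
        return x   # neural voxels, to be rendered by a RenderNet-style projection + 2D decoder

The rendered output of such a generator would then be scored by a discriminator against real photographs, which is what supplies the training signal in the absence of any 3D labels.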
Okay, I think it is a good time to move to the conclusions. At the beginning of this talk we asked whether neural networks can be helpful for forward rendering and for inverse rendering, and I think the answer is yes. We have seen neural networks used as a sub-module to speed up ray tracing, we have seen examples of both the value-based and the policy-based approach, and we have seen neural networks used as end-to-end systems for rendering 3D data. As far as the inverse problem goes, we have seen that a neural network can be used as a very powerful differentiable renderer that opens the door to many interesting downstream applications such as view synthesis, and the key point is that the network is able to use the correct inductive bias to learn a strong representation. Before I finish, I just want to say that this is still very much an open question and a very new research frontier, and there are a lot of opportunities. For example, there is still a huge gap between the quality of end-to-end neural rendering and conventional physically based rendering, and as far as I know there is still no good solution for a neural-network-based mesh renderer. In terms of the inverse problem, we have seen encouraging results on learning strong representations, but it remains to be seen which more effective inductive biases and network architectures can push this learning further. Before I end the talk I'd like to thank my colleagues and collaborators, who did an amazing job on all of these papers, especially those who did most of the work on RenderNet and HoloGAN, and also those behind the work on neural Monte Carlo denoising. With that, thank you.
MIT_6S191_Introduction_to_Deep_Learning
MIT_6S191_2023_Reinforcement_Learning.txt
foreign hi everyone welcome back today I think that these two lectures today are really exciting because they start to move Beyond you know a lot of what we've talked about in the class so far which is focusing a lot on really static data sets and specifically in today in this lecture right now I'm going to start to talk about how we can learn about this very long-standing field of how we can specifically marry two topics the first topic being reinforcement learning which has existed for many many decades together with a lot of the very recent advances in deep learning which you've already started learning about as part of this course now this marriage of these two Fields is actually really fascinating to me particularly because like I said it moves away from this whole Paradigm of uh or really this whole Paradigm that we've been exposed to in the class thus far and that Paradigm is really how we can build a deep learning model using some data set but that data set is typically fixed in in our world right we collect we go out and go collect that data set we deploy it on our machine learning or deep learning algorithm and then we can evaluate on a brand new data set but that is very different than the way things work in the real world in the real world you have your deep learning model actually deployed together with the data together out into reality exploring interacting with its environment and trying out a whole bunch of different actions and different things in that environment in order to be able to learn how to best perform any particular tasks that it may need to accomplish and typically you want to be able to do this without explicit human supervision right this is the key motivation of reinforcement learning you're going to try and learn through reinforcement making mistakes in your world and then collecting data on those mistakes to learn how to improve now this is obviously a huge field in or a huge Topic in the field of Robotics and autonomy you can think of self-driving cars and robot manipulation but also very recently we've started seeing incredible advances of deep reinforcement learning specifically also on the side of gameplay and strategy making as well so one really cool thing is that now you can even imagine right this this combination of Robotics together with gameplay right now training robots to play against us in the real world and I'll just play this very short video on Starcraft and deepmind perfect information and is played in real time it also requires long-term planning and the ability to choose what action to take from millions and millions of possibilities I'm hoping for a 5-0 not to lose any games but I think the realistic goal would be four and one in my favor I think he looks more confident than Taylor Taylor was quite nervous before the room was much more tense this time really didn't know what to expect playing Starcraft pretty much since he's five I wasn't expecting the AI to be that good everything that he did was proper it was calculated and it was done well I thought I'm learning something it's not very unexpected I would consider myself a good player right but I lost every single one of five games we're way ahead of why right so let's take maybe a start and take a step back first of all and think about how reinforcement learning fits into this whole Paradigm of all of the different topics that you've been exposed to in this class so far so as a whole I think that we've really covered two different types of learning in this course to date right up 
until now we've really started focusing in the beginning part of the lectures firstly on what we called supervised learning right supervised learning is in this domain where we're given data in the form of x's our inputs and our labels y right and our goal here is to learn a function or a neural network that can learn to predict why given our inputs X so for example if you consider this example of an apple right observing a bunch of images of apples we want to detect you know in the future if we see a new image of an apple to detect that this is indeed an apple now the second class of learning approaches that we've discovered yesterday in yesterday's lecture was that of unsupervised learning and in these algorithms you have only access to the data there's no notion of labels right this is what we learned about yesterday in these types of algorithms you're not trying to predict a label but you're trying to uncover some of the underlying structure what we were calling basically these latent variables these hidden features in your data so for example in this apple example right using unsupervised learning the analogous example would basically be to build a model that could understand and cluster certain certain parts of these images together and maybe it doesn't have to understand that necessarily this is an image of an apple but it needs to understand that you know this image of the red apple is similar it has the same latent features and same semantic meaning as this black and white outline sketch of the Apple now in today's lecture we're going to talk about yet another type of learning algorithms right in reinforcement learning we're going to be only given data in the form of what are called State action pairs right now states are observations right this is what the the agent let's call it the neural network is going to observe it's what it sees the actions are the behaviors that this agent takes in those particular States so the goal of reinforcement learning is to build an agent that can learn how to maximize what are called rewards right this is the third component that is specific to reinforcement learning and you want to maximize all of those rewards over many many time steps in the future so again in this apple example we might now see that the agent doesn't necessarily learn that okay this is an apple or it looks like these other apples now it has to learn to let's say eat the apple take an action eat that Apple because it has learned that eating that Apple makes it live longer or survived because it doesn't starve so in today right like I said we're going to be focusing exclusively on this third type of learning Paradigm which is reinforcement learning and before we go any further I just want to start by building up some very key terminology and like basically background for all of you so that we're all on the same page when we start discussing some of the more complex components of today's lecture so let's start by building up you know some of this terminology the first main piece of terminology is that of an agent right an agent is a a being basically that can take actions for example you can think of an agent as as a machine right that is let's say an autonomous drone that is making a delivery or for example in a game it could be Super Mario that's navigating inside of your video game the algorithm itself it's important to remember that the algorithm is the agent right we're trying to build an agent that can do these tasks and the algorithm is that agent so in life for example 
all of you are agents in life the environment is the other kind of contrary approach or the contrary perspective to the agent the environment is simply the world where that agent lives and where it operates right it where it exists and it moves around in the agent can send commands to that environment in the form of what are called actions right you can take actions in that environment and let's call first notation purposes let's say the possible set of all actions that it could take is let's say a set of capital a right now it should be noted that agents at any point in time could choose amongst this let's say list of possible actions but of course in some situations your action space does not necessarily need to be a finite space right maybe you could take actions in a continuous space for example when you're driving a car you're taking actions on a continuous angle space of what angle you want to steer that car it's not necessarily just going right or left or straight you may stare at any continuous degree observations is essentially how the environment responds back to the agent right the environment can tell the agent you know what it should be seeing based on those actions that it just took and it responds in the form of what is called a state a state is simply a concrete and immediate situation that the agent finds itself in at that particular moment now it's important to remember that unlike other types of learning that we've covered in this course reinforcement learning is a bit unique because it has one more component here in addition to these other components which is called the reward now the reward is a feedback by which we measure or we can try to measure the success of a particular agent in its environment so for example in a video game when Mario grabs a coin for example he wins points right so from a given State an agent can send out any form of actions to take some decisions and those actions may or may not result in rewards being collected and accumulated over time now it's also very important to remember that not all actions result in immediate rewards you may take some actions that will result in a reward in a delayed fashion maybe in a few time steps down the future or maybe in life maybe years you may take an action today that results in a reward many uh some time from now right and but essentially all of these try to effectively evaluate some way of measuring the success of a particular action that an agent takes so for example when we look at the total reward that an agent accumulates over the course of its lifetime we can simply sum up all of the rewards that an agent gets after a certain time T right so this capital r of T is the sum of all rewards from that point on into the future into Infinity and that can be expanded to look exactly like this it's reward at time t plus the reward time t plus one plus t plus two and so on and so forth often it's actually very useful for all of us to consider not only the sum of all of these rewards but instead What's called the discounted sum so you can see here I've added this gamma factor in front of all of the rewards and and that discounting factor is essentially multiplied by every future reward that the agent sees and it's discovered by the agent and the reason that we want to do this is actually this dampening factor is designed to make future rewards essentially worth less than rewards that we might see at this instant right at this moment right now now you can think of this as basically enforcing some kind of 
short-term uh a greediness in the algorithm right so for example if I offered you a reward of five dollars today or a reward of five dollars in 10 years from now I think all of you would prefer that five dollars today simply because we have that same discounting factor applied to this to this processing right we have that factor that that five dollars is not worth as much to us if it's given to us 10 years in the future and that's exactly how this is captured here as well mathematically this discounting factor is like multiple like I said multiplied at every single future award exponentially and it's important to understand that also typically this discounting factor is you know between zero and one there are some exceptional cases where maybe you want some Strange Behaviors and have a discounting factor greater than one but in general that's not something we're going to be talking about today now finally it's very important in reinforcement learning this special function called The Q function which ties in a lot of these different components that I've just shared with you all together now let's look at what this Q function is right so we already covered this R of T function right R of T is the discounted sum of rewards from time T all the way into the future into time Infinity but remember that this R of T right it's discounted number one and number two we're going to try and build a q function a function that captures the the maximum or the best action that we could take that will maximize this reward so let me say that one more time in a different way the Q function takes as input two different things the first is the state that you're currently in and the second is a possible action that you could execute in this particular state so here s of T is that state at time t a of T is that action that you may want to take at time T and the Q function of these two pieces is going to denote or capture what the expected total return would be of that agent if it took that action in that particular state now one thing that I think maybe we should all be asking ourselves now is this seems like a really powerful function right if you had access to this type of a function this Q function I think you could actually perform a lot of tasks right off the bat right so if you wanted to for example understand how to what actions to take in a particular State and let's suppose I gave you this magical Q function does anyone have any ideas of how you could transform that Q function to directly infer what action should be taken yep given a state you can look at your possible action space and pick the one that gives you the highest Q values exactly so that's exactly right so just to repeat that one more time the queue function tells us for any possible action right what is the expected reward for that action to be taken so if we wanted to take a specific action given in a specific State ultimately we need to you know figure out which action is the best action the way we do that from a q function is simply to pick the action that will maximize our future reward and we can simply try out number one if we have a discrete action space we can simply try out all possible actions compute their Q value for every single possible action based on the state that we currently find ourselves in and then we pick the action that is going to result in the highest Q value if we have a continuous action space Maybe we do something a bit more intelligent maybe following the gradients along this Q value curve and maximizing it as 
part of an optimization procedure but generally in this lecture what I want to focus on is actually how we can obtain this Q function to start with right I I kind of skipped a lot of steps in that last slide where I just said let's suppose I give you this magical Q function how can you determine what action to take but in reality we're not given that Q function we have to learn that Q function using deep learning and that's what today's lecture is going to be talking about primarily is first of all how can we construct and learn that Q function from data and then of course the final step is use that Q function to you know take some actions in the real world and broadly speaking there are two classes of reinforcement learning algorithms that we're going to briefly touch on as part of today's lecture the first class is what's going to be called value learning and that's exactly this process that we've just talked about value learning tries to estimate our Q function right so to find that Q function Q given our state and our action and then use that Q function to you know optimize for the best action to take given a particular state that we find our cell then the second class of algorithms which we'll touch on right at the end of today's lecture is kind of a different framing of the same approach but instead of first optimizing the Q function and finding the Q value and then using that Q function to optimize our actions what if we just try to directly optimize our policy which is what action to take based on a particular state that we find ourselves in if we do that if we can obtain this function right then we can directly sample from that policy distribution to obtain the optimal action we'll talk more details about that later in the lecture but first let's cover this first class of approaches which is Q learning approaches and we'll build up that intuition and that knowledge onto the second part of policy learning so maybe let's start by just digging a bit deeper into the Q function specifically just to start to understand you know how we could estimate this in the beginning so first let me introduce this game maybe some of you recognize this is the game of called Atari Breakout the the game here is essentially one where the agent is able to move left to right this paddle on the bottom left or right and the objective is to move it in a way that this ball that's coming down towards the bottom of the screen can be you know bounced off of your pedal reflected back up and essentially you want to break out right reflect that ball back up to the top of the screen towards the rainbow portion and keep breaking off every time you hit a pixel on the top of the screen you break off that pixel the objective of the game is to basically eliminate all of those rainbow pixels right so we want to keep hitting that ball against the top of the screen until you remove all the pixels now the Q function tells us you know the expected total return or the total reward that we can expect based on a given State an action pair that we may find ourselves in this game now the first point I want to make here is that sometimes even for us as humans to understand what the Q value should be is sometimes quite unintuitive right so here's one example let's say we find these two State action pairs right here is a and b two different options that we can be presented with in this game a the ball is coming straight down towards us that's our state our action is to do nothing and simply reflect that ball back up uh vertically up 
the second situation the state is basically that the ball is coming slightly at an angle we're not quite underneath it yet and we need to move towards it and actually hit that ball in a way that you know will will make it and not miss it hopefully right so hopefully that ball doesn't pass below us then the game would be over can you imagine you know which of these two options might have a higher Q value for the network which one would result in a greater reward for the neural network or for the agent so how many people believe a would result in a higher return okay how about B okay how about someone who picked B can you tell me why B agency you're actually doing something okay yeah how about more for a you only have like the maximum you can take off is like one because after you reflect your automatically coming back down but then be you can bounce around and there's more than one at least exactly and actually there's a very interesting thing so when I first saw this actually it's uh uh it was very unintuitive for me why a is actually working much worse than D but in general this very conservative action of B you're kind of exactly like you said the two answers were implying is that a is a very conservative action you're kind of only going up and down it will achieve a good reward it will solve the game right it's in fact it solves the game exactly like this right here you can see in general this action is going to be quite conservative it's just bouncing up hitting one point at a time from the top and breaking off very slowly the board that you can see here but in general you see the part of the board that's being broken off is towards the center of the board right not much on the edges of the board if you look at B now with B you're kind of having agency like one of the answers said you're coming towards the ball and what that implies is that you're sometimes going to actually hit the corner of your paddle and have a very extreme angle on your paddle and hit the sides of the board as well and it turns out that the algorithm the agent can actually learn that hitting the side of the board can have some kind of unexpected consequences that look like this so here you see it trying to enact that policy it's targeting the sides of the board but once it reaches a breakout on the side of the board it found this hack in the solution where now it's breaking off a ton of points so that was a kind of a trick that this neural network learned it was a way that it even moves away from the ball as it's coming down just so it could move back towards it just to hit it on the corner and execute on those those Corner parts of the board and break out a lot of pieces for free almost so now that we can see that sometimes obtaining the Q function can be a little bit unintuitive but the key Point here is that if we have the Q function we can directly use it to determine you know what is the best action that we can take in any given state that we find ourselves in so now the question naturally is how can we train a neural network that can indeed learn this Q function so the type of the neural network here naturally because we have a function that takes us input two things let's imagine our neural network will also take as input these two objects as well one object is going to be the state of the board you can think of this as simply the pixels that are on the screen describing that board so it's an image of the board at a particular time maybe you want to even provide two or three images to give it some sense of 
temporal information and some past history as well but all of that information can be combined together and provided to the network in the form of a state and in addition to that you may also want to provided some actions as well right so in this case the actions that a neural network or an agent could take in this game is to move to the right to the left to stay still right and those could be three different actions that could be provided and parametrized to the input of a neural network the goal here is to you know estimate the single number output that measures what is the expected value or the expected Q value of this neural network at this particular state State action pair now oftentimes what you'll see is that if you wanted to evaluate let's suppose a very large action space it's going to be very inefficient to try the approach on the left with the with a very large action space because what it would mean is that you'd have to run your neural network forward many different times one time for every single element of Your Action space so what if instead you only provided it an input of your state and as output you gave it let's say all n different Q values one Q value for every single possible action that way you only need to run your neural network once for the given state that you're in and then that neural network will tell you for all possible actions what's the maximum it simply then look at that output and pick the action that has the Chi's Q value now what would happen right so actually the question I want to pose here is really you know we want to train one of these two networks let's stick with the network on the right for Simplicity just since it's a much more efficient version of the network on the left and the question is you know how do we actually train that Network on the right and specifically I want all of you to think about really the best case scenario just to start with how an agent would perform ideally in a particular situation or what would happen right if an agent took all of the ideal actions at any given State this would mean that essentially the target return right the the predicted or the the value that we're trying to predict the target is going to always be maximized right and this can serve as essentially the ground truth to the agent now for example to do this we want to formulate a loss function that's going to essentially represent our expected return if we're able to take all of the best actions right so for example if we select an initial reward plus selecting some action in our action space that maximizes our expected return then for the next future State we need to apply that discounting factor and and recursively apply the same equation and that simply turns into our Target right now we can ask basically what does our neural network predict right so that's our Target and we recall from previous lectures if we have a Target value in this case our Q value is a continuous variable we have also a predicted variable that is going to come as part of the output of every single one of these potential actions that could be taken we can Define what's called a q loss which is essentially just a very simple mean squared error loss between these two continuous variables we minimize their distance over two over many many different iterations of buying our neural network in this environment observing actions and observing not only the actions but most importantly after the action is committed or executed we can see exactly the ground truth expected return right 
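To make the value-learning recipe concrete, here is a minimal PyTorch sketch of a Q-network with one output per action and the mean-squared-error Q-loss just described. The convolutional architecture and layer sizes are illustrative placeholders, not the actual network used in the DeepMind Atari work.

import torch
import torch.nn as nn

class QNetwork(nn.Module):
    """State (e.g. a stack of game frames) in, one Q-value per discrete action out
    (for Breakout: move left, stay, move right)."""
    def __init__(self, n_actions=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 8, stride=4), nn.ReLU(),
            nn.Conv2d(16, 32, 4, stride=2), nn.ReLU(),
            nn.Flatten(),
            nn.LazyLinear(128), nn.ReLU(),
            nn.Linear(128, n_actions),
        )

    def forward(self, state):
        return self.net(state)            # shape (batch, n_actions)

def q_loss(q_net, s, a, r, s_next, done, gamma=0.99):
    """Mean-squared error between the predicted Q(s, a) and the target
    r + gamma * max_a' Q(s', a'), with the target treated as fixed."""
    q_pred = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        q_next = q_net(s_next).max(dim=1).values
        target = r + gamma * (1.0 - done) * q_next
    return torch.mean((q_pred - target) ** 2)

Full deep Q-learning implementations add details the lecture does not dwell on, such as a replay buffer of past experience and a separate slowly updated target network, but the loss above is the core of the idea.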
so we have the ground truth labels to train and supervise this model directly from the actions that were executed as part of random selection for example now let me just stop right there and maybe summarize the whole process one more time and maybe a bit different terminology just to give everyone kind of a different perspective on this same problem so our deep neural network that we're trying to train looks like this right it takes us input a state is trying to Output n different numbers those n different numbers correspond to the Q value Associated to n different actions one Q value per action here the actions in Atari Breakout for example should be three actions we can either go left we can go right or we can do nothing we can stay where we are right so the next step from this we saw if we have this Q value output what we can do with it is we can make an action or we can even let me be more formal about it we can develop what's called a policy function a policy function is a function that given a state it determines what is the best action so that's different than the Q function right the Q function tells us given a state what is the best or what is the value the return of every action that we could take the policy function tells us one step more than that given it given a state what is the best action right so it's a very end-to-end way of thinking about you know the agent's decision making process based on what I see right now what is the action that I should take and we can determine that policy function directly from the Q function itself simply by maximizing and optimizing all of the different Q values for all of the different actions that we see here so for example here we can see that given this state the Q function has the results of these three different values has a q value of 20 if it goes to the left has a q value of 3 if it stays in the same place and it has a q value of zero it's going to basically die after this iteration if it moves to the right because you can see that the ball is coming to the left of it if it moves to the right the game is over right so it needs to move to the left in order to do that in order to continue the game and the Q value reflects that the optimal action here is simply going to be the maximum of these three Q values in this case it's going to be 20 and then the action is going to be the corresponding action that comes from that 20 which is moving left now we can send this action back to the environment in the form of the game to execute the next step right and as the agent moves through this environment it's going to be responded with not only by new pixels that come from the game but more importantly some reward signal now it's very important to remember that the reward signals in pong or sorry in in Atari Breakout are very sparse right you get a reward not necessarily based on the action that you take at this exact moment it usually takes a few time steps for that ball to travel back up to the top of the screen so usually your rewards will be quite delayed maybe at least by several time steps sometimes even more if you're bouncing off of the corners of the screen now one very uh popular or very famous approach that showed this was presented by deepmind Google deepmind several years ago where they showed that you could train a q value Network and you can see the input on the left hand side is simply the raw pixels coming from the screen all the way to the actions of a controller on the right hand side and you could train this one network for 
a variety of different tasks all across the Atari Breakout ecosystem of games and for each of these tasks the really fascinating thing that they showed was for this very simple algorithm which really relies on random choice of selection of actions and then you know learning from you know actions that don't do very well that you discourage them and trying to do actions that did perform well more frequently very simple algorithm but what they found was even with that type of algorithm they were able to surpass human level performance on over half of the game there were some games that you can see here were still below human level performance but as we'll see this was really like a such an exciting Advance because of the Simplicity of the algorithm and how you know clean the formulation of the training was you only needed a very little amount of prior knowledge to impose onto this neural network for it to be able to learn how to play these games you never had to teach any of the rules of the game right you only had to let it explore its environment play the game many many times against itself and learn directly from that data now there are several very important downsides of Q learning and hopefully these are going to motivate the second part of today's lecture which we'll talk about but the first one that I want to really convey to everyone here is that you learning is naturally um applicable to discrete action spaces right because you can think of this output space that we're providing it's kind of like one number per action that could be taken now if we have a continuous action space we have to think about clever ways to work around that in fact there are now more recently there are some solutions to achieve queue learning and continuous action spaces but for the most part Q learning is very well suited for discrete action spaces and we'll talk about ways of overcoming that with other approaches a bit later and the second component here is that the policy that we're learning right the Q function is giving rise to that policy which is the thing that we're actually using to determine what action to take given any state that policy is determined by you know deterministically optimizing that Q function we simply look at the results from the Q function and apply our or we we look at the results of the Q function and we pick the action that has the best or the highest Q value that is very dangerous in many cases because of the fact that it's always going to pick the best value for a given State there's no stochasticity in that pipeline so you can very frequently get caught in situations where you keep repeating the same actions and you don't learn to explore potentially different options that you may be thinking of so to address these very important challenges that's hopefully going to motivate now the next part of today's lecture which is going to be focused on policy learning which is a different class of reinforcement learning algorithms that are different than Q learning algorithms and like I said those are called policy gradient algorithms and policy gradient algorithms the main difference is that instead of trying to infer the policy from the Q function we're just going to build a neural Network that will directly learn that policy function from the data right so it kind of Skips one step and we'll see how we can train those Networks so before we get there let me just revisit one more time the queue function illustration that we're looking at right queue function we are trying to build a 
neural network that outputs these Q values, one value per action, and we determine the policy by looking over this set of Q values, picking the highest one, and taking its corresponding action. With policy networks, the idea is that instead of predicting the Q values themselves, we directly try to optimize the policy function. Here we're calling the policy function pi of s: pi is the policy and s is our state, so it's a function that takes as input only the state and directly outputs the action. The outputs give us the desired action to take in whatever state we find ourselves in, and they represent not exactly the value of each action but rather the probability that selecting that action is the most desirable thing to do. So you don't care about the exact numerical return that selecting an action gives rise to, but rather about the likelihood that selecting it will lead to the best outcome. In this Atari example, going left has a 90 percent probability of being the best action, staying in the center has a 10 percent probability, and going right has zero percent. Ideally, our neural network in this situation should go left 90 percent of the time, could still try staying where it is 10 percent of the time, and should never go right.

Note that this output is now a probability distribution, which is very different from a Q function. The Q values themselves have no such structure; they can take any real number. The policy network, in contrast, has a very structured output: all of the numbers in the output have to sum to one because it is a probability distribution, and that gives us a rigorous way to train this model that actually makes it a bit easier to train than Q functions as well. One other very important advantage of having an output that is a probability distribution ties back to the issue with Q functions and Q networks that we saw before, namely that Q functions are naturally suited to discrete action spaces. When we're looking at a policy network, we're outputting a distribution, and remember that distributions can also take continuous forms. In fact we've seen this in the last two lectures: in the generative lecture we saw how VAEs can be used to predict Gaussian distributions over their latent space, and in the last lecture we saw how we can learn to predict uncertainties, which are continuous probability distributions, from data. Just like that, we can use the same formulation to move beyond discrete action spaces, where one probability is associated with each possible action in a discrete set. We may instead have an action space which is not "should I go left, right, or stay in the center" but rather "how quickly should I move and in what direction", and that is a continuous variable as opposed to a discrete one.
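Here is a minimal PyTorch sketch of such a policy network for the discrete case; the layer sizes and the state_dim input are placeholders, and in the Atari setting the body would be convolutional rather than fully connected.

import torch
import torch.nn as nn

class PolicyNetwork(nn.Module):
    """State in, a probability distribution over the discrete actions out."""
    def __init__(self, state_dim, n_actions=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 128), nn.ReLU(),
            nn.Linear(128, n_actions),
        )

    def forward(self, state):
        logits = self.net(state)
        return torch.softmax(logits, dim=-1)   # outputs sum to 1 by construction

policy = PolicyNetwork(state_dim=16)
probs = policy(torch.randn(1, 16))             # something like [0.9, 0.1, 0.0] in the lecture's example
action = torch.distributions.Categorical(probs).sample()   # sampling gives built-in exploration

Sampling from the output distribution, rather than always taking the argmax, is what gives the policy its stochasticity; for continuous actions such as a speed or steering angle, the same network can instead output the parameters of a distribution, which is exactly where the lecture goes next.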
you could say that now the answer should look like this right moving very fast to the right versus very slow to the or excuse me very fast to the left versus very slow to the left has this continuous spectrum that we may want to model now when we plot this entire distribution of taking an action giving a state you can see basically a very simple illustration of that right here this this distribution has most of its mass over or sorry it has all of its mass over the entire real number line first of all it has most of its mass right in the optimal action space that we want to take so if we want to determine the best action to take we would simply take the mode of this distribution right the highest point that would be the speed at which we should move and the direction that we should move in if we wanted to also you know try out different things and explore our space we could sample from this distribution and still obtain some stochasticity now let's look at an example of how we can actually model these continuous distributions and actually we've already seen some examples of this in the previous two lectures like I mentioned but let's take a look specifically in the context of reinforcement learning and policy gradient learning so instead of predicting this probability of taking an action giving all possible states which in this case there is now an infinite number of because we're in the continuous domain we can't simply predict a single probability for every possible action because there is an infinite number of them so instead what if we parametrized our action Space by a distribution right so let's take for example the gaussian distribution to parametrize a gaussian distribution we only need two outputs right we need a mean and a variance given a mean and a variance we can actually have a probability mass and we can compute a probability over any possible action that we may want to take just from those two numbers so for example in this image here we may want to Output a gaussian that looks like this right its mean is centered at uh let's see negative 0.8 indicating that we should move basically left with a speed of 0.8 meters per second for example and again we can see that because this is a probability distribution because of the format of policy networks right we're enforcing that this is a probability distribution that means that the integral now of this of this outputs right by definition of it being a gaussian must also integrate to one okay great so now let's maybe take a look at how policy gradient networks can be trained and you know step through that process as well as we look at a very concrete example and maybe let's start by just revisiting this reinforcement learning Loop that we've started this class with now let's specifically consider the example of training an autonomous vehicle since I think that this is a particularly very intuitive example that we can walk through the agent here is the vehicle right the state could be obtained through many sensors that could be mounted on the vehicle itself so for example autonomous vehicles are typically equipped with sensors like cameras lidars Radars Etc all of these are giving observational inputs to the to the vehicle the action that we could take is a steering wheel angle this is not a discrete variable this is a continuous variable It's actually an angle that could take any real number and finally the reward in this very simplistic example is the distance that we travel before we crash okay so now let's take a look at how we 
could train a policy gradient neural network to solve this task of self-driving cars as a concrete example so we start by initializing our agents right remember that we have no training data right so we have to think about actually reinforcement learning is almost like a data acquisition plus learning pipeline combined together so the first part of that data acquisition pipeline is first to initialize our agent to go out and collect some data so we start our vehicle our agent and in the beginning of course it knows nothing about driving it's never it's been exposed to any of these rules of the environment or the observation before so it runs its policy which right now is untrained entirely until it terminates right until it goes outside of some bounds that we Define we measure basically the reward as the distance that we traveled before it terminated and we record all of the states all of the actions and the final reward that it obtained until that termination right this becomes our mini data set that we'll use for the first round of training let's take those data sets and now we'll do one step of training the first step of training that we'll do is to take excuse me to take the later half of our of our trajectory that our agent ran and decrease the probability of actions that resulted in low rewards now because the vehicle we know the vehicle terminated we can assume that all of the actions that occurred in the later half of this trajectory were probably not very good actions because they came very close to termination right so let's decrease the probability of all of those things happening again in the future and we'll take all of the things that happened in the beginning half of our training episode and we will increase their probabilities now again there's no reason why there shouldn't necessarily be a good action that we took in the first half of this trajectory and a bad action in the later half but it's simply because actions that are in the later half were closer to a failure and closer determination that we can assume for example that these were probably sub-optimal actions but it's very possible that these are noisy rewards as well because it's such a sparse signal it's very possible that you had some good actions at the end and you were actually trying to recover your car but you were just too late now repeat this process again re-initialize the agent one more time and run it until completion now the agent goes a bit farther right because you've decreased the probabilities at the ends increase the probabilities of the future and you keep repeating this over and over again until you notice that the agent learns to perform better and better every time until it finally converges and at the end the agent is able to basically follow Lanes usually swerving a bit side to side while it does that without crashing and this is actually really fascinating because this is a self-driving car that we never taught anything about what a lane marker means or what are the rules of the road anything about that right this was a car that learned entirely just by going out crashing a lot and you know trying to figure out what to do to not keep doing that in the future right and the remaining question is actually how we can update you know that policy as part of this algorithm that I'm showing you on the right on the left hand side right like how can we basically formulate that same algorithm and specifically the update equation steps four and five right here these are the two really important steps 
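Steps one through three of that loop, collecting a rollout and scoring each action by the reward that followed it, might look like the sketch below. The env and policy.act interfaces here are hypothetical stand-ins rather than a specific simulator's API.

import torch

def collect_rollout(env, policy):
    """Run the current policy until termination (a crash), recording every
    state, action, and reward -- the mini data set used for one update."""
    states, actions, rewards = [], [], []
    s, done = env.reset(), False
    while not done:
        a = policy.act(s)                 # sample a steering command from the policy
        s_next, r, done = env.step(a)     # environment returns the next observation and reward
        states.append(s); actions.append(a); rewards.append(r)
        s = s_next
    return states, actions, rewards

def reward_to_go(rewards, gamma=0.95):
    """Discounted return from each time step onward; with positive per-step rewards,
    early actions end up with larger returns than the ones taken just before the crash."""
    returns, running = [], 0.0
    for r in reversed(rewards):
        running = r + gamma * running
        returns.insert(0, running)
    return torch.tensor(returns)

The reward-to-go values computed here play the role of the return R_t in the update described next: actions taken early in a long run receive large returns, while actions taken just before termination receive small ones.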
These two steps are exactly how we use the collected data to train our policy: we decrease the probability of actions that led to bad outcomes and increase the likelihood of actions that led to good ones. So let's look at the loss function for a policy gradient network and then dissect it to understand why it works the way it does. The loss consists of two terms. The first term, shown in green, is the log-likelihood of selecting a particular action given the state. The second term is something all of you are already very familiar with: it is the return R_t at a specific time, the expected discounted sum of rewards obtained from that time point onward. Now suppose we got a lot of reward for an action that had a high probability: in that case we want to increase that probability even further, so that we sample that action even more often in the future. On the other hand, if we obtained a very low reward for an action that had high likelihood, we want the inverse effect: we never want to sample that action again, and you'll notice that this loss function, by including the negative sign, will minimize the likelihood of any action that led to low rewards in the trajectory. In our simplified car example, the actions with low rewards are exactly those that came closest to the termination of the episode, and the actions with high rewards are those that came near the beginning; that's just the assumption we make when defining our reward structure. We can then plug this loss into the gradient descent algorithm to train our neural network, and the policy gradient term you see highlighted is exactly the gradient of the policy part of the network, the probability of selecting an action given a specific state. If you remember how we defined the policy function, that's exactly what it means: given the state you find yourself in, what is the probability of selecting a particular action. And that is exactly where this method gets its name, from this policy gradient term that you can see here.
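Putting the rollout and the loss together, a minimal sketch for the continuous steering example could look like this in PyTorch; the state dimensionality, layer sizes, and the SteeringPolicy class are illustrative assumptions rather than the pipeline actually used in the lecture's experiments.

import torch
import torch.nn as nn

class SteeringPolicy(nn.Module):
    """Continuous policy: outputs the mean and standard deviation of a Gaussian
    over the steering angle, as described earlier."""
    def __init__(self, state_dim):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(state_dim, 128), nn.ReLU())
        self.mu = nn.Linear(128, 1)
        self.log_std = nn.Linear(128, 1)

    def dist(self, state):
        h = self.body(state)
        return torch.distributions.Normal(self.mu(h), self.log_std(h).exp())

def policy_gradient_step(policy, optimizer, states, actions, returns):
    """Loss = -log pi(a_t | s_t) * R_t averaged over the episode: actions followed
    by high returns become more likely, actions followed by low returns less likely.
    states: (T, state_dim), actions: (T, 1), returns: (T,)."""
    dist = policy.dist(states)
    log_prob = dist.log_prob(actions).squeeze(-1)
    loss = -(log_prob * returns).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

Here policy.dist(states) gives the Gaussian over steering angles, and the loss is the negative log-probability of each executed action weighted by the return that followed it, which is exactly the policy gradient objective written out above.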
train these types of models in simulation right simulation is very safe because you know if we're not going to actually be damaging anything real it's still very inefficient because we have to run these algorithms a bunch of times and crash them a bunch of times just learn how not to crash but at least now at least from a safety point of view it's much safer but you know the problem is that modern simulation engines for reinforcement learning and generally very broadly speaking modern simulators for vision specifically do not at all capture reality very accurately in fact uh there's a very famous notion called The Sim to real gap which is a gap that exists when you train algorithms in simulation and they don't extend to a lot of the phenomena that we see and the patterns that we see in reality and one really cool result that I want to just highlight here is that when we're training reinforcement learning algorithms we ultimately want them to be you know not operating in simulation we want them to be in reality and as part of our lab here at MIT we've been developing this very very cool brand new photorealistic simulation engine that goes beyond basically the Paradigm of how simulators work today which is basically defining a model of their environment and trying to you know synthesize that that model essentially these simulators are like glorified game engines right they all look very game-like when you look at them but one thing that we've done is taken a data-driven approach using real data of the real world can we build up synthetic environments that are super photorealistic and look like this right so this is a cool result that we created here at MIT developing this photorealistic simulation engine this is actually an autonomous agent not a real car driving through our virtual simulator in a bunch of different types of different scenarios so this simulator is called Vista it allows us to basically use real data that we do collect in the real world but then re-simulate those same real roads so for example let's say you take your car you drive out on Mass Ave you collect data of Mass Ave you can now drop a virtual agent into that same simulated environment observing new viewpoints of what that scene might have looked like from different types of perturbations or or types of angles that it might be exposed to and that allows us to train these agents now entirely using reinforcement learning no human labels but importantly allow them to be transferred into reality because there's no sim to real Gap anymore so in fact we we did exactly this we placed agents into our simulator we trained them using the exact algorithms that you learned about in today's lecture these policy gradient algorithms and all of the training was done entirely in simulation then we took these policies and we deployed them on board our full-scale autonomous vehicle this is now in the real world no longer in simulation and on the left hand side you can see basically this car driving through this environment completely autonomous in the real world no transfer learning is is done here there is no augmentation of data from Real World data this is entirely trained using simulation and this represented actually the first time ever that reinforcement learning was used to train a policy end to end for an autonomous vehicle that could be deployed in reality so that was something really cool that uh we we created here at MIT but now that we covered you know all of this foundations of reinforcement learning and policy learning I 
want to touch on some other maybe very exciting applications that we're seeing and one very popular application that a lot of people will tell you about and talk about is the game of Go so here reinforcement learning agents could be actually tried to put against the test against you know Grand Master Level go players and you know at the time achieved incredibly impressive results so for those of you who are not familiar with the game of go the go game of Go is played on a 19 by 19 board the rough objective of go is to claim basically more board pieces than your opponent right and through the grid of uh sorry through the grid that you can see here this 19 by 19 grid and while the game itself the The Logical rules are actually quite simple the number of possible action spaces and possible states that this board could be placed into is greater than the number of atoms in the universe right so this this game even though the rules are very simple in their logical definitions is an extraordinarily complex game for an artificial algorithm to try and master so the objective here was to build a reinforcement learning algorithm to master the game of Go not only beating you know these gold standard softwares but also what was at the time like an amazing result was to beat the Grand Master Level player so the number one player in the world of go was a human a human Champion obviously so Google deepmind Rose to this challenge they created a couple years ago developing this solution which is very much based in the exact same algorithms that you learned about in today's lecture combining both the value part of this network with residual layers which we'll cover in the next lecture tomorrow and using reinforcement learning pipeline they were able to defeat the grand champion human players and the idea at its core was actually very simple the first step is that you train a neural network to basically watch human level experts right so this is not using reinforcement learning this is using supervised learning using the techniques that we covered in lectures one two and three and from this first step the goal is to build like a policy that would imitate some of the rough patterns that a human type of player or a human Grandmaster would take based on a given board State the type of actions that they might execute but then given this pre-trained model essentially you could use it to bootstrap in reinforcement learning algorithm that would play against itself in order to learn how to improve even beyond the human levels right so it would take its human understandings try to imitate the humans first of all but then from that imitation they would pin these two neural networks against themselves play a game against themselves and the winners would be receiving a reward the losers would try to negate all of the actions that they may have acquired from their human counterparts and try to actually learn new types of rules and new types of actions basically that might be very beneficial to achieving superhuman performance and one of the very important auxiliary tricks that brought this idea to be possible was the usage of this second Network this auxiliary Network which took as input the state of the board and tried to predict you know what are all of the different possible uh board states that might emerge from this particular State and what would their values be what would their potential returns and their outcomes be so this network was an auxiliary Network that was almost hallucinating right different board states 
that it could take from this particular State and using those predicted values to guide its planning of you know what action should it take into the future and finally very much more recently they extended this algorithm and showed that they could not even use the human Grand Masters in the beginning to imitate from in the beginning and bootstrap these algorithms what if they just started entirely from scratch and just had two neural networks never trained before they started pinning themselves against each other and you could actually see that you could without any human supervision at all have a neural network learn to not only outperform the solution that or outperform the humans but also outperform the solution that was created which was bootstrapped by humans as well so with that I'll summarized very quickly what we've learned today and and conclude for the day so we've talked a lot about really the foundational algorithms underlying reinforcement learning we saw two different types of reinforcement learning approaches of how we could optimize these Solutions first being Q learning where we're trying to actually estimate given a state you know what is the value that we might expect for any possible action and the second way was to take a much more end-to-end approach and say how given a state that we see ourselves in what is the likelihood that I should take any given action to maximize the potential that I I have in this particular State and I hope that all of this was very exciting to you today we have a very exciting lab and kickoff for the competition and the deadline for these competitions will be well it was originally set to be Thursday which is uh tomorrow at 11 pm thank you foreign
MIT_6S191_Introduction_to_Deep_Learning
MIT_6S191_2022_Recurrent_Neural_Networks_and_Transformers.txt
all right hi everyone and welcome back my name is ava and before we dive into lecture two of success 191 which is going to be on deep sequence modeling i'll just note that as you probably noticed we're running a little bit late so we're going to proceed with the lecture you know in full and in completion and at the time it ends then we'll transition to the software lab portion of the course just immediately after at the time that this lecture ends and i'll make a note about the structure and how we're going to run the software labs at the end of my lecture okay so in alexander's first lecture we learned about really the essentials of neural networks and feed-forward models and how to construct them so now we're going to turn our attention to applying neural networks to tasks that involve modeling sequences of data and we'll see why these sorts of tasks require a fundamentally different type of network architecture from what we've seen so far and to build up to that point we're going to walk through step by step building up intuition about why modeling sequences is different and important and start back with our fundamentals of feed forward networks to build up to the models we'll introduce in this lecture all right so let's dive into it let's first motivate the need for sequence modeling and what we mean in terms of sequential data with a super intuitive and simple example so suppose we have this picture of a ball and our task is to predict where this ball is going to travel to next now if i don't give you any prior information on the ball's history any guess on its next position is just going to be that a random guess but now instead if in addition to the current location of the ball i also gave you some information about its previous locations now our problem becomes much easier and i think we can all agree that we have a sense of where this ball is going to next and beyond this simple example the fact of the matter is that sequential data is all around us for example audio like the waveform of my voice speaking to you can be split up into a sequence of sound waves while text can be split up into a sequence of characters or a sequence of words and beyond these two examples there are many many more cases in which sequential processing may be useful from medical signals like ekgs to stock prices to dna sequences and beyond so now that we've gotten a sense of what sequential data looks like let's consider applications of sequential modeling in the real world in alexander's first lecture we learned about this notion of feed-forward models that operate sort of on this one-to-one fixed setting right a single input to a single output and he gave the very simple example of a binary classification task predicting whether you as a student will pass or fail this class of course we all hope you will pass but in this example there's no real component of time or sequence right in contrast with sequence modeling we can now handle a vast variety of different types of problems where for example we have a sequence of temporal inputs and potentially a sequential output so let's consider one example right where we have a natural language processing task where we have a tweet and we want to classify the emotion or the sentiment associated with that tweet mapping a sequence of words to a positive or negative label we can also have a case where our input initially may not have a time dimension so for example we have this image of a baseball player throwing a ball but instead the output that we want to generate 
has a temporal or sequential component where we now want to caption that image with some associated text and finally we can have a final case where we have a sequential input and we want to map it to a sequential output for example in the case of translating text from one language to another and so sometimes it can be really challenging to kind of wrap your head around and get the idea about how we can add a new temporal dimension to our models and so to achieve this understanding what i want to do is really start from the fundamentals and revisit the concept of the perceptron that alexander introduced and go step by step from that foundation to develop an understanding of what changes we need to make to be able to handle sequential data so let's recall the architecture and the the diagram of the perceptron which we studied in the first lecture we defined a set of inputs and we have these weights that are associated with connecting those inputs to an internal node and we can apply those weights apply a non-linearity and get this output and we can extend this now to a layer of individual neurons a layer of perceptrons to yield a multi-dimensional output and in this example we have a single layer of perceptrons shown in green taking three inputs shown in blue predicting four outputs in purple but does this notion does this have a notion of time or of sequence not yet let's simplify that diagram right what i've done here is just i've collapsed that layer of those four perceptrons into the single green box and i've collapsed those nodes of the input and the output into these single circles that are represented as vectors so our inputs x are some vectors of a length m and our outputs are vectors of another length n still here what we're considering is an input at a specific time denoted by t nothing different from what we saw in the first lecture and we're passing it through a feed-forward model to get some output what we could do is we could have fed in a sequence to this model by simply applying the same model that same series of operations over and over again once for each time step in our sequence and this is how we can handle these individual inputs which occur at individual time steps so first let's just rotate the same diagram i've taken it from a horizontal view to a vertical view we have this input vector at some time sub t we feed it into our network get our output and since we're interested in sequential data let's assume we don't just have a single time step we now have multiple individual time steps starting from t equals zero our first time step in our sequence and extending forward right again now we're treating the individual time steps as isolated time steps right we don't yet have a notion of the relationship between time step zero and time step one time step two and so on and so forth and what we know from the first lecture is that our output vector at a particular time step is just going to be a function of the input at that time step what could be the issue here right well we have this transformation yet but this is inherently sequential data and it's probably in a sequence for some important reason and we don't yet have any sort of interdependence or notion of interconnectedness across time steps here and so if we consider the output at our last time step right the fundamental point is that that output is related to the inputs at the previous time steps how can we capture this interdependence what we need is a way to relate the network's computations at a particular time 
step to its prior history and its memory of the computations from those prior time steps, passing information forward, propagating it through time. And what we consider doing is actually linking the information and the computation of the network at different time steps to each other via what we call a recurrence relation. Specifically, the way we do this in recurrent models is by having what we call an internal memory, or a state, which we're going to denote as h of t, and this value h of t is maintained from time step to time step and can be passed forward across time. The idea and the intuition here is that we want the state to try to capture some notion of memory, and what this means for the network's computation, its output, is that now our output is dependent not only on the input at a particular time step but also on this state, the memory that's going to be passed forward from the prior time step. And so, just to make this very explicit, the output at a particular time step t depends both on the input as well as the past memory, and that past memory is going to capture the prior history of what has occurred previously in the sequence. Because this output y of t is a function of both the current input and the past memory, we can define and describe these types of neurons in terms of a recurrence relation. On the right you can see how we visualize these individual time steps as being unrolled, extended across time, but we could also depict this same relationship via a cycle, which I've shown on the left, which highlights this concept of a recurrence relation. All right, so hopefully this builds up some intuition about this notion of recurrence and why it can help us in sequential modeling tasks, and this intuition that we've built up, starting from the feed-forward model, is really the key to the recurrent neural network, or RNN. We're going to continue to build up from this foundation and build up our understanding of how this recurrence relation defines the behavior of an RNN. So let's formalize this just a bit more. The key idea that I mentioned, and that I'm going to keep driving home, is that the RNN maintains this internal state h of t, which is going to be updated at each time step as the sequence is processed, and we do this by applying the recurrence relation at every time step, where our cell state h of t is now a function of the current input x of t as well as the prior state h of t minus 1.
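Written out as code, the recurrence relation looks roughly like the following. This is a minimal illustrative sketch, not the lab's implementation; all of the sizes and weight tensors are toy placeholders, and the tanh-and-weight-matrix form inside the update anticipates the specific equations described next.

```python
import tensorflow as tf

hidden_units, input_dim, output_dim = 4, 3, 2            # toy sizes for illustration
W_xh = tf.random.normal((input_dim, hidden_units))       # input-to-hidden weights
W_hh = tf.random.normal((hidden_units, hidden_units))    # hidden-to-hidden weights
W_hy = tf.random.normal((hidden_units, output_dim))      # hidden-to-output weights

def rnn_step(x_t, h_prev):
    # Recurrence relation: the new state is a function of the current input
    # and the prior state (here, the standard tanh form unpacked shortly).
    h_t = tf.math.tanh(tf.matmul(x_t, W_xh) + tf.matmul(h_prev, W_hh))
    y_t = tf.matmul(h_t, W_hy)        # output is a transformed version of the state
    return y_t, h_t

# The same function, with the same weights, is applied at every time step,
# and the state h carries the memory forward across the sequence.
h = tf.zeros((1, hidden_units))
sequence = [tf.random.normal((1, input_dim)) for _ in range(5)]   # stand-in input sequence
for x_t in sequence:
    y_t, h = rnn_step(x_t, h)
```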
And importantly, this function is parametrized by a set of weights W, and this set of weights is what we're actually going to be learning over the course of training, as the model is learned and these weights are updated. The key point here is that this set of weights W is the same across all time steps that are being considered in the sequence, and the function that computes the hidden state is also the same. We can also step through the intuition behind the RNN algorithm in pseudo code to get a better sense of how these networks work. We begin by initializing our RNN. What does it take to initialize it? First we have to initialize some first hidden state, which we're going to do with a vector of zeros, and we're going to consider a sentence that's going to serve as our input sequence to the model, where our task is to try to predict the next word that's going to come at the end of the sentence. Our recurrence relation is captured by this loop, where we iterate through the words in the sentence, and at each step we feed both the current word being considered as well as the previous hidden state into our RNN model; that outputs a prediction for what the likely next word is and also updates the internal computation of the hidden state. Finally, the token prediction that we're interested in at the end is the RNN's output after all the words, all the time points in the sequence, have been considered, and that generates our prediction for the likely next word. Hopefully that provides more intuition about how this RNN algorithm is working. And if you notice, the internal computation of the RNN includes both this internal state as well as, ultimately, the prediction that we're interested in, our output vector y of t. So to walk through how we actually derive this output vector, let's step through it. Given an input vector, we pass it in to compute the RNN's internal state, and breaking this function down, what it's doing is just a standard neural net operation, just like we saw in the first lecture: it consists of multiplication by weight matrices, denoted as W. In this case we're going to multiply the past hidden state by a weight matrix, as well as the current input x of t by another weight matrix, and then we're going to add them together and apply a non-linearity. You'll notice, as I just mentioned, that because we have these two inputs to the state update equation, we have two independent weight matrices. The final step is to actually generate the output for a given time step, which we do by taking that internal state and simply modifying it via a multiplication by another weight matrix, and then using this as our generated output. And that's it; that's how the RNN updates its hidden state and also produces an output at a given time step. So far we've seen RNNs depicted largely as having these loops that feed back in on themselves, but as we built up earlier, we can also represent this loop as being unrolled across time, where effectively, starting from the first time step, we have this unrolled network that we can continue to unroll across time, from time step 0 to time step t. And in this diagram let's now formalize things a little bit more: we can make the weight matrices that are applied to the input very explicit, and we can also annotate our
diagram with the way matrices that relate the prior hidden state to the current hidden state and finally our predictions at individual time steps are um are generated by a a separate weight matrix matrices okay so as i as i mentioned right the key point is that these weight matrices are reused across all of the individual time steps now our next step that you may be thinking of is okay this is all great we figured out how to update the hidden state we figured out how to generate the output how do we actually train this thing right well we'll need a loss because as alexander mentioned the way we train neural networks is through this optimization this iterative optimization of a loss function or an objective function and as you may may predict right we can generate an individual loss for each of these individual time steps according to what the output at that time step is and we can generate a total sum loss by taking these time steps and summing them all together and when we make a forward pass through our network this is exactly what we do right we generate our output predictions and we sum sum the loss uh functions across individual time steps to get the total loss now let's walk through let's next walk through an example of how we can implement an rnn from scratch the previous code block that i showed you was kind of an intuitive pseudo code example and here now we're going to get into things in a little bit more detail and build up the rnn from scratch our rnn is going to be defined as a neural network layer and we can build it up by inheriting from the neural network layer class that alexander introduced in the first lecture and as before we are going to start by initializing our weight matrices and also initializing the hidden state to zero our next step which is really the important step is defining the call function which is what actually defines the forward pass through our rnn model and within this call function the key operations are as follows we first have a update of the hidden state right according to that same equation we saw earlier incorporating the previous hidden state incorporating the input summing them passing them through a non-linearity we can then compute the output transforming the hidden state and finally at each time step we return both the current output and our hidden state that's it that's how you can code up an rnn line by line and define the forward pass but thankfully tensorflow has very very conveniently summarized this already and implemented these types of rnn cells for us in what they wrap into the simple rnn layer and you're going to get some practice using this um this class of neural network layer in today's software lab all right so to recap right we've we've seen how uh we've seen the the function and the computation of rnns by first moving from the one-to-one computation of a traditional feed-forward vanilla or vanilla neural network excuse me and seeing how that breaks down when considering sequence modeling problems and as i mentioned right we can we can apply this idea of sequence modeling and of rnns to many different types of tasks for example taking a sequential input and mapping it to one output taking a static input that's not resolved over time and generating a sequence of outputs for example a text associated with an image or translating a sequence of inputs to a sequence of outputs which can be done in machine translation natural language processing and also in generation so for example in composing new musical scores entirely using 
recurrent neural network models and this is what you're going to get your hands-on experience with in today's software lab and beyond this right you know you all come from a variety of different backgrounds and interests and disciplinary domains so i'm sure you can think of a variety of other applications where this type of architecture may be very useful okay so to wrap up this section right this simple example of rnns kind of motivates a set of concrete design criteria that i would like you to keep in mind when thinking about sequence modeling problems specifically whatever model we design needs to be able to handle sequences of variable length to track long-term dependencies in the data to be able to map something that appears very early on in the sequence to something related later on in the sequence to be able to preserve and reason and maintain information about order and finally to share parameters across the sequence to be able to keep track of these dependencies and so most of today's lecture is going to focus on recurrent neural networks as a workhorse neural network architecture for sequence modeling criteria design problems but we'll also get into a new and emerging type of architecture called transformers later on in the lecture which i think you'll find really exciting and really interesting as well before we get into that i'd like to spend a bit of time thinking about the these design criteria that i enumerated and why they're so important in the context of sequence modeling and use that to move forward into some concrete applications of rnns and sequence models in general so let's consider a very simple sequence modeling problem suppose we have this sentence right this morning i took my cat for a walk and our task here is to use some prior information in the sentence to predict the next word in the sequence right this morning i took my cat 4a predict the next work walk how can we actually go about doing this right i've introduced the intuition and the diagrams and everything about the recurrent neural network models but we really haven't started to think about okay how can we even represent language to a neural network how can we encode that information so that it can actually be passed in and operated on mathematically so that's our first consideration right let's suppose we have a model we're inputting a word and we want to use our neural network to predict the next work word what are considerations here right remember the neural network all it is is it's just a functional operator they execute some functional mathematical operation on an input they can't just take a word as a string or as as a as a language as a sequence of language characters passed in as is that's simply not going to work right instead we need a way to represent these elements these words numerically to be fed in to our neural network as a vector or a matrix or an array of numbers such that we can operate on it mathematically and get a vector or array of numbers out this is going to work for us so how can we actually encode language transform it into this vector representation the solution is this concept of what we call an embedding and the idea is that we're going to transform a set of indices which effectively are just identifiers for objects into some vector of fixed size so let's think through how this embedding operation could work for language data for example for this sequence that we've been considering right we want to be able to map any word that could appear in our body of language our 
corpus into a fixed sized vector and so the first step to doing this is to think about the vocabulary right what's the overall space of unique words in your corpus in your language from this vocabulary we can then index by mapping individual words to numerical unique indices and then these indices can then be mapped to an embedding which is just a fixed length vector one way to do this is by taking a vector right that's length is just going to equal the total number of unique words in our vocabulary and then we can indicate what word that vector corresponds to by making this a sparse vector that's just binary so it's just zeros and ones and at the index that corresponds to that word we're going to indicate the identity of that word with a one right and so in this example our word is cat and we're going to index it at the second index and what this is referred to is a one hot embedding it's a very very popular embed choice of embedding which you will encounter across many many different domains another option to generating and embedding is to actually use some sort of machine learning model it can be a neural network to learn in embedding and so the idea here is from taking an input of words that are going to be indexed numerically we can learn an embedding of those words in some lower dimensional space and the motivation here is that by introducing some sort of machine learning operation we can map the meaning of the words to an encoding that is more informative more representative such that similar words that are semantically similar in meaning will have similar embeddings and this will also get us our fixed length encoding vector and this idea of a learned embedding is a super super powerful concept that is very pervasive in modern deep learning today and it also motivates a whole nother class of problems called representation learning which is focused on how we can take some input and learn use neural networks to learn a meaningful meaningful encoding of that of that input for our problem of choice okay so going back to our design criteria right we're first going to be able to try to handle variable sequence lengths we can consider again this this problem of trying to predict the next word we can have a short sequence we can have a longer sequence or an even longer sequence right but the whole point is that we need to be able to handle these variable length inputs and feed forward networks are simply not able to do this because they have inputs of fixed dimensionality but because with rnns we're unrolling across time we're able to handle these variable sequence lengths our next our next criteria is that we need to be able to capture and model long-term dependencies in the data so you can imagine an example like this where information from early on in the sentence is needed to make a accurately make a prediction later on in the sentence and so we need to be able to capture this longer term information in our model and finally we need to be able to retain some sense of order right that could result in differences in the overall contact or meaning of a sentence so in this example these two sentences have the exact same words repeated the exact same number of times but the semantic meaning is completely different because the words are in different orders and so hopefully this example shows a very concrete and common example of sequential data right language and motivates how these different design considerations uh play into this general problem of sequence modeling and so these points are 
something that I really like for you to take away from this class and keep in mind as you go forward implementing these types of models in practice. Our next step, as we walk through this lecture on sequence modeling, is to go through, very briefly, the algorithm that's used to actually train recurrent neural network models, and that algorithm is called backpropagation through time. It's very related to the backpropagation algorithm that Alexander introduced in the first lecture. If you recall, the way we train feedforward models is to make a forward pass through the network, going from input to output, and then backpropagate our gradients back down through the network, taking the derivative of the loss with respect to the weights learned by our model and then shifting and adjusting those weights in order to try to minimize the loss over the course of training. As we saw earlier, RNNs have a little bit of a different scenario, because our forward pass through the network consists of going forward across time, computing these individual loss values at the individual time steps and then summing them together. So instead of backpropagating errors through a single feedforward network, what we have to do is backpropagate the error individually across each time step, and then across all the time steps, all the way from where we currently are in the sequence to the beginning of the sequence. This is the reason why the algorithm is called backpropagation through time: errors flow backwards in time to the beginning of our data sequence. Taking a closer look at how these gradients flow across this RNN chain, what you can see is that between each time step we need to perform these individual matrix multiplications, which means that computing the gradient, that is, taking the loss with respect to an internal state and the weights of that internal state, requires many, many matrix multiplications involving this weight matrix, as well as repeated gradient computation. So why might this be problematic? Well, if we have many of these weight values or gradient values that are much larger than one, we could have a problem where during training our gradients effectively explode; the gradients become extremely large due to this repeated multiplication operation, and we can't really do optimization. A simple solution to this is called gradient clipping: just trimming the gradient values to scale bigger gradients back into a smaller value. We can also have the opposite problem, where now our weight values are very small, and this leads to what is called the vanishing gradient problem, which is also very problematic for training recurrent models. We're going to touch briefly on three ways that we can mitigate this vanishing gradient problem in recurrent models: first, our choice of activation function; second, initializing the weights in our model intelligently; and third, designing the architecture of our network to try to mitigate this issue altogether. Before we do that, to take a step back, the reason why vanishing gradients can be so problematic is that they can completely sabotage this goal we have of trying to model long-term dependencies: because we are multiplying many, many small numbers together, what this effectively biases the model to do is to preferentially focus on short-term dependencies and ignore the long-term dependencies that may exist.
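As a concrete illustration of the gradient clipping fix mentioned above, here is a minimal sketch of one training step in TensorFlow. The `model`, `loss_fn`, `optimizer`, and the batch `(x, y)` are assumed to be defined elsewhere, and the clipping threshold of 1.0 is an arbitrary example value, not the course's setting.

```python
import tensorflow as tf

def clipped_train_step(model, loss_fn, optimizer, x, y, clip_norm=1.0):
    # One training step with gradient clipping: rescale the gradients whenever
    # their global norm exceeds the threshold, so the repeated multiplications
    # of backpropagation through time cannot blow them up.
    with tf.GradientTape() as tape:
        loss = loss_fn(y, model(x))
    grads = tape.gradient(loss, model.trainable_variables)
    clipped, _ = tf.clip_by_global_norm(grads, clip_norm)
    optimizer.apply_gradients(zip(clipped, model.trainable_variables))
    return loss

# Keras optimizers also expose clipping directly, e.g.
# tf.keras.optimizers.Adam(learning_rate=1e-3, clipnorm=1.0)
```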
Ignoring long-term dependencies may be okay for simple sentences like "the clouds are in the ___", but it really breaks down in longer sentences or longer sequences, where information from earlier on in the sequence is very important for making a prediction later on, as in the case of this example here. So how can we alleviate this? Our first strategy is a very simple trick that we can employ when designing our networks: we can choose our activation function to prevent the gradient from shrinking too dramatically, and the ReLU activation function is a good choice for doing this because, for inputs greater than zero, its derivative is equal to one and so does not shrink the gradient, whereas other activation functions have derivatives smaller than one in that regime. Another trick is to be smart in how we actually initialize the parameters in our model; what we can do is initialize the weights to the identity matrix, which prevents them from shrinking to zero too rapidly during backpropagation. The final and most robust solution is to use a more complex recurrent unit that can effectively track long-term dependencies in the data. The idea here is that we're going to introduce this computational structure called the gate, which functions to selectively add or remove information to the state of the RNN, and this is done by standard operations that we see in neural networks, for example sigmoid activation functions and pointwise multiplications. The idea behind these gates is that they can effectively control what information passes through the recurrent cell. Today we're going to touch very briefly on one type of gated cell called an LSTM, a long short-term memory network, and LSTMs are fairly good at using this gating mechanism to selectively control information over many time steps. I'm not going to get into the details, because we have more interesting things to touch on in our limited time, but the key idea behind LSTMs is that they have the same chain-like structure as a standard RNN, except now the internal computation is a little bit more complex: we have these different gates that are effectively interacting with each other to try to control information flow. And you would implement the LSTM in TensorFlow just as you would a standard RNN. While I blew past that diagram of the gated structure, and we could spend some time talking about its mathematics, really what I want you to take away from this lecture is the key concepts behind what the LSTM is doing internally. To break that down: the LSTM, like a standard RNN, is just maintaining this notion of a cell state, but it has these additional gates which control the flow of information, functioning to effectively eliminate irrelevant information from the past, keep what's relevant and important from the current input, use that important information to update the internal cell state, and then output a filtered version of that cell state as the predictive output. What's key is that because we incorporate this gated structure, in practice our backpropagation through time algorithm actually becomes much more stable, and we can mitigate the vanishing gradient problem by having fewer repeated matrix multiplications, allowing for a smooth flow of gradients across our model.
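As the lecture notes, using an LSTM in TensorFlow looks just like using a simple RNN. A small sketch of a sequence model built around an LSTM layer might look like the following; the vocabulary size, embedding dimension, and number of units are arbitrary example values, not the lab's settings.

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=10000, output_dim=64),   # learned word embedding
    tf.keras.layers.LSTM(units=128),          # gated recurrent cell tracking long-term dependencies
    tf.keras.layers.Dense(1, activation="sigmoid"),   # e.g., one prediction per sequence
])
```

Swapping `tf.keras.layers.LSTM` for `tf.keras.layers.SimpleRNN` gives the ungated version discussed earlier.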
Okay, so now we've gone through the fundamentals of RNNs in terms of their architecture and training, and I'd next like to consider a couple of concrete examples of how we can use recurrent models. For the first, let's imagine we're trying to use a recurrent model to predict the next musical note in a sequence, and use this to generate brand new musical sequences. What we can do is treat this as a next-time-step prediction problem, where you input a sequence of notes and the output at each time step is the most likely next note in the sequence. For example, it turns out that the very famous classical composer Franz Schubert left what is known as his famous Unfinished Symphony, which was, as the name suggests, left partially undone; he did not finish composing the symphony before he died. A few years ago some researchers trained a neural network model on the prior movements of that symphony to try to generate new music that would be similar to Schubert's music, to effectively finish the symphony and compose two new movements. So we can actually take a listen to what that result turned out like. [Music] So hopefully you were able to hear that and appreciate the point; maybe there are some classical music aficionados out there who can recognize it as being stylistically similar, hopefully, to Schubert's music. And you'll get practice with this exact same task in today's lab, where you'll be training a recurrent model to generate brand new Irish folk songs that have never been heard before. As another cool example, which I kind of motivated at the beginning of the lecture, we can also do a classification task, where we take an input sequence and try to predict a single output associated with that sequence, for example taking a sequence of words and assigning an emotion or a sentiment associated with that sequence. One use case for this type of task is tweet sentiment classification: training a model on a bunch of tweets from Twitter and using it to predict the sentiment associated with given tweets. For example, we can train a model like this with a bunch of tweets, and hopefully we can train an RNN to predict that this first tweet about our course has a positive sentiment, but that this other tweet about winter weather has a negative sentiment. Okay, so at this point we've focused exclusively on recurrent models, and it's actually fairly remarkable that with this type of architecture we can do things that seem as complex as generating brand new classical music. But let's take a step back: with any technology there are going to be strengths and there are going to be limitations. What could be potential issues of using recurrent models to perform sequence modeling problems? The first key limitation is that these network architectures fundamentally have what we like to think of as an encoding bottleneck: we need to take a lot of content, which may be a very long body of text with many different words, and condense it into a representation that can be predicted on, and information can be lost in this encoding operation. Another big limitation is that recurrent models are not efficient at all; they require sequentially processing information, taking time steps individually, and this sequential nature makes them very inefficient on the modern GPU hardware that we use to train these types of models. And finally, and perhaps most importantly, while we've been emphasizing this point about long-term memory, the fact is that recurrent models don't really have that high of memory capacity to begin
with while they can handle sequences of length on the order of tens or even hundreds with lstms they don't really scale well to sequences that are of length of thousands or ten thousands of time steps how can we do better and how can we overcome this so to understand how to do this right let's go back to what our general task is with sequence modeling we're given some sequence of inputs we're trying to compute some sort of features associated with those inputs and use that to generate some output prediction and with rnns as we saw we're using this recurrence relation to try to model sequence dependencies but as i mentioned these rnns have these three key bottlenecks what is the opposite of these these three limitations if we had any capability that we desired what could we imagine the capabilities that we'd really like to achieve with sequential models is to have a continuous stream of information that overcomes the encoding bottleneck we'd like our model to be really fast to be paralyzable rather than being slow and dependent on each of the individual time steps and finally we wanted to be able to have long memory the main limitation of rnn's when it comes to these capabilities is that they process these individual time steps individually due to the recurrence relation so what if we could eliminate the recurrence relation entirely do away with it completely one way we could do this is by simply taking our sequence and squashing everything together concatenating those individual time steps such that we have one vector of input with data from all time points we can feed it into a model calculate some feature vector and then generate an output which maybe we hope makes sense right naively a first approach to do this may be to take that squashed concatenated input pass it into a fully connected network and yay congrats we've eliminated the need for recurrence but what are issues here right this is totally not scalable right our dense network is very very densely connected right it has a lot of connections and our whole point of doing this was to try to scale to very long sequences furthermore we've completely eliminated any notion of order any notion of sequence and because of these two issues our long memory that we want all along is also made impossible and so this approach is definitely not going to work we don't have a notion of what points in our sequence is important can we be more intelligent about this and this is really the key idea behind the next concept that we're going to i'm going to introduce uh in the remaining time and that's this notion of attention right which intuitively we're we're going to think about the ability to identify and attend to parts of an input that are going to be important and this idea of attention is an extremely powerful and rapidly emerging mechanism for modern neural networks and the it it's the core foundational mechanism for this very very powerful architecture called transformers you may have heard of transformers before in popular news media what have you and the idea when you maybe want to try to look at the math and the operation of transformers can seem very daunting it was definitely daunting for me as it tends to be presented in a pretty complex and complicated way but at its core this attention mechanism which is the really key insight into transformers is actually a very elegant and intuitive idea we're going to break it down step by step so you can see how it's computed and what makes it so powerful to do that we're specifically going to be 
talking about this idea of self-attention and what that means is the ability to take an input and attend to the most important parts of that input i think it's easiest to build up that intuition by considering an image so let's look at this picture of our hero iron man how do we figure out what's important a naive way could just be to scan across this image pixel by pixel but as humans we don't do this our brains are able to immediately pick up what is important in this image just by looking at it that's iron man coming at us and if you think about it right what this comes down to is the ability to identify the parts that are important to attend to and be able to extract features from those regions with high attention and this first part of this problem is very very similar conceptually to search and to build up understanding of this attention mechanism we're going to start their first search how does search work so maybe sitting there listening to my lecture you're thinking wow this is so exciting but i want to learn even more about neural networks how can i do this one thing you could do is go to our friend the internet do a search and have all the videos on the internet accessible to you and you want to find something that matches your desired goal so let's consider youtube right youtube is a giant database of many many many videos and across that database the ranging of different topics how can we find and attend to a relevant video to what we're searching for right the first step you do is to input some query some query into the youtube search bar your topic is deep learning and what effectively can be done next is that for every video in this database we're going to extract some key information which we call the key it's the title maybe associated with that video and to do the search what can occur is the overlaps between your query and the keys in that database are going to be computed and as we do this at each check we make we'll ask how similar is that key the title of the video to our query deep learning our first example right this video of a turtle is not that similar our second example lecture from our course is similar and our third example kobe bryant is not similar and so this is really this idea of computing what we'll come to see as an attention mask measuring how similar each of these keys these video titles is to our query our next and final step is to actually extract the information that we care about based on this computation the video itself and we are going to call this the value and because our search was implemented with a good notion of attention gratefully we've identified the best deep learning course out there for you to watch and i'm sure all of you sitting there can hopefully relate and agree with with that assessment okay so this concept of search is very very related to how self-attention works in neural networks like transformers so let's go back from our youtube example to sequence modeling for example where we have now this sentence he tossed the tennis ball to serve and we're going to break down step by step how self-attention will work over the sequence first let's remember what we're trying to do we're trying to identify and attend to the most important features in this input without any need for processing the information time step by time step we're going to eliminate recurrence and what that means is that we need a way to preserve order information without recurrence without processing the words in the sentence individually the way we do this is 
by using an embedding that is going to incorporate some notion of position and i'm going to touch on this very very briefly for the sake of time but the key idea is that you compute a word embedding you take some metric that captures position information within that sequence you combine them together and you get an embedding an encoding that has this notion of position baked in you can we can talk about the math of this further if you like but this is the key intuition that i want you to come away with our next step now that we have some notion of position from our input is to actually figure out what in the input to attend to and that relates back to our search operation that i'm motivated with the youtube example we're going to try to extract the query the key and the value features and recall we're trying to learn a mechanism for self-attention which means that we're going to operate on the input itself and only on the input itself what we're going to do is we're going to create three new and unique transformations of that embedding and these are going to correspond to our query our key and our value all we do is we take our positional embedding we take a linear layer and do a matrix multiplication that generates one output a query then we can make a copy of that same positional embedding now we can take a separate and independent a different linear layer do the matrix multiplication and get another transformation of that output that's our key and similarly do this also for the value and so we have these three distinct transformations of that same positional embedding our query our key and our value our next step right is to take these three uh features right and actually compute this attention weighting figuring out how much attention to pay to and where and this is effectively thought of as the attention weighting and if you recall from our youtube example we focus on the similarity between our query and our key and in neural networks we're going to do exactly the same right so if you recall these query and key features are just numeric matrices or vectors how can we compute their similarity their overlap so let's suppose we have two vectors q and k and for vectors as you may recall from linear algebra or calculus we can compute the similarity between these vectors using a dot product and then scaling that dot product and this is going to quantify our similarity between our query and our key matrix vectors this metric is also known as the cosine similarity and we can apply the same exact operation to matrices where now we have a similarity matrix metric that captures the similarity between the query matrix and the key matrix okay let's visualize what the result of this operation could actually look like and mean remember we're trying to compute this self-attention we compute this dot product of our query and our key matrices we apply our scaling and our last step is to apply a function called the softmax which just squashes every value so that it falls between 0 and 1. 
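Putting the query, key, and value steps just described into code, a single self-attention head can be sketched roughly as follows. This is an illustrative sketch, not the lab's or any library's exact implementation; the layer sizes are arbitrary, and the final line anticipates the value-weighting step that is described next.

```python
import tensorflow as tf

class SelfAttentionHead(tf.keras.layers.Layer):
    def __init__(self, d_model):
        super().__init__()
        self.d_model = d_model
        # three separate linear layers applied to the same positional embedding
        self.wq = tf.keras.layers.Dense(d_model, use_bias=False)   # query
        self.wk = tf.keras.layers.Dense(d_model, use_bias=False)   # key
        self.wv = tf.keras.layers.Dense(d_model, use_bias=False)   # value

    def call(self, x):               # x: positional embeddings, [batch, seq_len, d_model]
        q, k, v = self.wq(x), self.wk(x), self.wv(x)
        # similarity between query and key: scaled dot product
        scores = tf.matmul(q, k, transpose_b=True) / tf.math.sqrt(
            tf.cast(self.d_model, tf.float32))
        attn = tf.nn.softmax(scores, axis=-1)     # attention weighting, squashed to [0, 1]
        return tf.matmul(attn, v)                 # extract features with high attention

head = SelfAttentionHead(d_model=64)
features = head(tf.random.normal((1, 10, 64)))    # toy input: batch of 1, sequence of 10
```

The last line of `call` is exactly the step covered next: applying the attention weighting to the value matrix to pull out the features with high attention.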
And what this results in is a matrix whose entries reflect the relationship between the components of our input to each other. So in this sentence example, "he tossed the tennis ball to serve", what you can see with this heat map visualization is that the words that are related to each other, tennis, serve, ball, tossed, have a higher weighting, a higher attention. This matrix is what we call the attention weighting, and it captures where in our input to actually self-attend to. Our final step is to use this weighting matrix to actually extract the features with high attention, and we do this very simply, it's super elegant, by taking that attention weighting, multiplying it by our value matrix, and getting back a transformed version of what was our value matrix. This is our final output, and it reflects the features that correspond to high attention. Okay, that's it. I know that could be fast, so let's recap it as the last thing that we're going to touch on. Our goal: identify and attend to the most important features in the input. What does this look like in an architecture? Our first step was to take this positional encoding and copy it three times; we then pass those into three separate, different linear layers, computing the query, the key, and the value matrices. We then multiply the query and the key together, apply the scaling, and apply the softmax to get this attention weighting matrix. And our final step was to take that attention weighting matrix, apply it to our value matrix, and get this extraction of the features in our input that have high attention. It's these core operations that form the architecture shown on the right, which is called a self-attention head, and we can just plug this into a larger network; it's a very powerful mechanism for being able to attend to important features in an input. Okay, so that's it. I know it's a lot to go through very quickly, but hopefully you appreciate the intuition of this attention mechanism and how it works. A final note I'll make is that we can do this multiple times; we can have multiple individual attention heads. In this example we're attending to Iron Man himself, but we can have independent attention heads that pay attention to different things in our input, for example a background building, or this little region shown on the far right, which is actually an alien spaceship coming at Iron Man from the back. All right, these are the fundamentals of the attention mechanism, and its application, as I mentioned at the beginning of this section, has been most famous and most notable in these architectures called transformers. They're very powerful architectures with a variety of applications, most famously perhaps in language processing. You may have seen these examples where really large language transformers can do things like create images based on sentences, for example an armchair in the shape of an avocado, and many other tasks ranging from machine translation to dialogue completion and so on. Beyond language processing, we can also apply this mechanism of self-attention to other domains, for example in biology, where one of the breakthroughs of last year was in protein structure prediction with a neural network architecture called alpha fold 2.
all right, this is the fundamentals of the attention mechanism, and its application, as i mentioned at the beginning of this section, has been most famous and most notable in these architectures called transformers. they're very, very powerful architectures that have a variety of applications, most famously perhaps in language processing. so you may have seen these examples where really large language transformers can do things like create images based on sentences, for example an armchair in the shape of an avocado, and many other tasks ranging from machine translation to dialogue completion and so on. beyond language processing, we can also apply this mechanism of self-attention to other domains, for example in biology, where one of the breakthroughs of last year was in protein structure prediction with a neural network architecture called alphafold 2, and a key component of alphafold 2 is this exact same self-attention mechanism. what these authors showed was that this achieved really a breakthrough in the quality and accuracy of protein structure prediction. and a final example is that this mechanism does not apply only to traditional sequence data; we can also extend it to computer vision with an architecture known as vision transformers. the idea is very similar: we just need a way to encode positional information, and then we can apply the attention mechanism to extract features from these images in a very powerful and very high-throughput manner. okay, so hopefully over the course of this lecture you've gotten a sense of sequence modeling tasks and why rnns, as an introductory architecture, are very powerful for processing sequential data. in that vein, we discussed how we can model sequences using a recurrence relation, how we can train rnns using backpropagation through time, and how we can apply rnns to different types of tasks, and finally, in this new component of the sequence modeling lecture, we discussed how we can move beyond recurrence and recurrent neural networks to build self-attention mechanisms that can effectively model sequences without the need for recurrence. okay, so that concludes the two lectures for today, and again i know we're running a bit late on time and i apologize, but i hope you enjoyed both of today's lectures. the remaining hour that we have is dedicated to the software lab sessions. a couple of important notes for the software labs: we're going to be running these labs in a hybrid format. you can find the information about downloading the labs on the course website, both the intro to deep learning course website as well as the canvas course website. all you will need to run the software labs is a computer, internet, and a google account, and you're going to basically walk through the labs, start executing the code blocks, and fill out the to-do action items that will allow you to complete the labs and execute the code. we will be holding office hours both virtually on gather town, the link for that is on the course canvas page, as well as in person in mit room 10-250 for those of you who are on campus and would like to drop by for in-person office hours; alexander and myself will be there. so yeah, with that i will conclude. thank you again for your attention, thank you for your patience, and i hope you enjoyed it. hope to see you in the software lab sessions. thank you
MIT_6S191_Introduction_to_Deep_Learning
MIT_6S191_2020_Deep_Generative_Modeling.txt
so far in this class we've talked about how we can use neural networks to learn patterns within data, and in this next lecture we're going to take this a step further and talk about how we can build systems that not only look for patterns in data but can actually learn to generate brand-new data based on this learned information. this is a really new and emerging field within deep learning, and it's enjoying a lot of success and attention right now, especially in the past couple of years, and broadly it can be considered the field of deep generative modeling. so I'd like to start by taking a quick poll: study these three faces for a moment. raise your hand if you think face A is real. okay. raise your hand if you think face B is real. okay, so that roughly follows the first vote. and finally face C, it doesn't really matter, because all of these faces are fake. and hopefully, you know, this is really recent work, these results from the latest model were just posted on arXiv in December of 2019, and today, by the end of this lecture, you're going to have a sense of how a deep neural network can be used to generate data that is so realistic that it fooled many of you, or rather all of you. okay, so far in this course we've been considering this problem of supervised learning, which means we're given a set of data and a set of labels that go along with that data, and our goal is to learn a functional mapping that goes from data to labels. and in this course, right, this is a course on deep learning, we've largely been talking about functional mappings that are described by deep neural networks, but at the core these mappings could be anything. now we're going to turn our attention to a new class of problems, and that's what is called unsupervised learning, which is the topic of today's lecture. in unsupervised learning we're given only data and no labels, and our goal is to try to understand the hidden or underlying structure that exists in this data. this can help us get insights into what the foundational explanatory factors behind this data are, and, as we'll see later, to even generate brand-new examples that resemble the true data distribution. and so this is the topic of generative modeling, which is an example of unsupervised learning, and as I kind of alluded to, our goal here is to take input examples from some training distribution and to learn and infer a model that represents that distribution. we can do this for two main goals. the first is this concept of density estimation, where we see a bunch of samples, and they lie along some probability distribution, and we want to learn a model that approximates the underlying distribution that is describing or generating where this data was drawn from. the other task is this idea of sample generation, and in this context we're given input samples and we want our model to be able to generate brand-new samples that represent those inputs, and that's the idea with the fake faces that we showed on the first slide. really, the core question behind generative modeling is: how can we learn, how can we model, some probability distribution that's similar to the true distribution that describes how the data was generated? so why care about generative modeling, besides the fact that it can be used to generate these realistic human-like faces?
well, first of all, generative approaches can uncover the underlying factors and features present in a data set. for example, if we consider the problem of facial detection, we may be given a data set with many, many different faces, and we may not know the exact distribution of faces in terms of different features like skin color or gender or clothing or occlusion or the orientation of the face, and so our training data may actually end up being very biased toward particular instances of those features that are over-represented in our data set, without us even knowing it. today, and in the lab as well, we'll see how we can use generative models to actually learn these underlying features, uncover the over- and under-represented parts of the data, and use this information to create fairer, more representative data sets to train an unbiased classification model. another great example is this question of outlier detection. if we go back to the example problem of self-driving cars, most of the data that we might use to train a control network like the one Alexander was describing may be very common driving scenes, so on a freeway or on a straight road where you're driving, but it's really critical that our autonomous vehicle be able to handle all cases that it could potentially encounter in the environment, including edge cases and rare events like crashes or pedestrians, not just, you know, the straight freeway driving that is the majority of the time on the road. and so generative models can be used to actually detect the outliers that exist within training distributions and use this to train a more robust model. we'll really dive deeply into two classes of models that we call latent variable models, but first, before we get into those details, we have a big question: what is a latent variable? and I think a great example to describe the difference between latent and observed variables is this little parable from Plato's Republic from thousands of years ago, which is known as the myth of the cave. in this myth, a group of prisoners are constrained to face a wall as punishment, and the only things that they can see and observe are the shadows of objects that pass in front of a fire that's actually behind them, so they're just observing the shadows in front of their faces. from the prisoners' perspective, these shadows are the observed variables: they can measure them, they can give them names, because to them, you know, that's their reality. they don't know that behind them there are these true objects that are actually casting the shadows because of this fire. and so those objects behind the prisoners are like the latent variables: they're not directly observable by the prisoners, but they're the true explanatory factors that are casting the shadows that the prisoners see. and so our goal in generative modeling is to find ways of actually learning these hidden and underlying latent variables, even when we are only given the observed data. okay, so let's start by discussing a very simple generative model which tries to do this by encoding its input, and these models are known as autoencoders. to take a look at how an autoencoder works: we first begin by feeding raw input data into the model, it's passed through a series of neural network layers, and what is outputted at the end of that encoding is what we call a low-dimensional latent space, which is the feature representation that we're trying to predict, and we call this network an encoder because it's mapping this data x into a vector of latent variables z.
now let's ask ourselves a question: why do we care about having this low-dimensional latent space? any ideas? yeah, it's easier to process, and I think the key that you're getting at is that it's a compressed representation of the data. in the case of images, these are pixel-based data, they can be very, very highly dimensional, and what we want to do is take that high-dimensional information and encode it into a compressed, smaller latent vector. so how do we actually train a network to learn this latent variable vector z? do we have training data for z? do we observe the true values of z, and can we do supervised learning? the answer is no, right, we don't have training data for those latent variables, but we can get around this by building a decoder structure that is used to reconstruct a resemblance of the original image from this learned latent space. again, this decoder is a series of neural network layers, which can be convolutional layers or fully connected layers, that take the hidden latent vector and map it back to the dimensionality of the input space. we call this reconstructed output x hat, since it's going to be some imperfect reconstruction or estimation of what the input x looks like. and the way we can train a network like this is by simply considering the input x and our reconstructed output x hat and looking at how they are different, and we want to try to minimize the distance between the reconstruction and the input to get as realistic a reconstruction as possible. so in the case of this image example, we can simply take the mean squared error, x minus x hat squared, between the input and the outputted reconstruction. the really important thing here is that this loss function doesn't have any labels: the only components of the loss are the input x and the reconstruction x hat. it's not supervised in any sense. we can simplify this diagram a little bit further by abstracting away those individual layers in the encoder and the decoder, and this idea of unsupervised learning is really a powerful idea, because it allows us to in a way quantify these latent variables that we're interested in but can't directly observe. a key consideration when building a model like an autoencoder is how we select the dimensionality of our latent space. the latent space that we're trying to learn presents a sort of bottleneck layer, because it's a form of compression, and so the lower the dimensionality of the latent space that we choose, the poorer the quality of the reconstruction that's generated in the end. in this example, we use a very famous data set of handwritten digits called MNIST, and on the right you can see the ground truth for example digits from this data set, and as you can hopefully appreciate in these images, by going from just a 2-D latent space to a 5-D latent space we can drastically improve the quality of the reconstructions that are produced as output by an autoencoder structure. okay, so to summarize: autoencoders use this bottlenecking hidden layer that forces the network to learn a compressed latent representation of the data, and by using this reconstruction loss we can train the network in a completely unsupervised manner. the name autoencoder comes from the fact that we're automatically encoding information within the data into this smaller latent space.
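Here is a minimal tf.keras sketch of the autoencoder just described, applied to 28x28 images such as the MNIST digits: an encoder that compresses the input into a small latent vector z, a decoder that produces the reconstruction x hat, and a purely unsupervised mean-squared-error reconstruction loss. The layer sizes and the 2-dimensional latent space are illustrative assumptions, not the exact architecture from the lecture.

```python
import tensorflow as tf

latent_dim = 2  # try 2 vs. 5 to see the reconstruction quality change

encoder = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(latent_dim),             # latent vector z
])
decoder = tf.keras.Sequential([
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(28 * 28, activation="sigmoid"),
    tf.keras.layers.Reshape((28, 28)),             # reconstruction x hat
])

def reconstruction_loss(x):
    x_hat = decoder(encoder(x))
    return tf.reduce_mean(tf.square(x - x_hat))    # no labels anywhere in the loss
```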
we will now see how we can build on this foundational idea a bit more with the concept of variational autoencoders, or VAEs. with a traditional autoencoder, what is done, as you can see here, is going from an input to a reconstructed output, and so if I feed an input into the network, I will get the same output so long as the weights are the same. this is a deterministic encoding that allows us to reproduce the input as best as we can. but if we want to learn a smoother representation of the latent space, and use this to actually generate and sample new images that are similar to the input data set, we can use a structure called a variational autoencoder to do this more robustly. this is a slight twist on the traditional autoencoder: instead of a deterministic bottleneck layer z, we've replaced that deterministic layer with a stochastic sampling operation. that means that instead of learning the latent variables directly, for each latent variable we learn a mean mu and a standard deviation sigma that parameterize a probability distribution for that variable. so we've gone from learning a vector of latent variables z to learning a vector of means mu and a vector of standard deviations sigma, which describe the probability distributions associated with each of these latent variables, and what we can do is sample from these described distributions to obtain a probabilistic representation of our latent space. as you can tell, and as I've emphasized, the VAE structure is just an autoencoder with this probabilistic twist. what this means is that instead of deterministic functions describing the encoder and decoder, both of these components of the network are probabilistic in nature: the encoder computes a probability distribution, p phi of the latent variables z given an input x, while the decoder does sort of the reverse inference, computing a probability distribution, q theta of x given the latent variables z. and because we've introduced this probabilistic aspect to the network, our loss function has also slightly changed. the reconstruction loss in this case is basically exactly like what we saw with the traditional autoencoder: it captures the pixel-wise difference between our input and the reconstructed output, and so it is a metric of how well our network is doing at generating outputs that are similar to the input. then we have this other term, the regularization term, which gets back to that earlier question. because the VAE is learning these probability distributions, we want to place some constraints on how these probability distributions are computed and what they resemble, as a part of regularizing and training our network. the way that's done is by placing a prior on the latent distribution, p of z, which is some initial hypothesis or guess about what the distribution of z looks like, and this essentially helps enforce the learned z's to follow the shape of that prior distribution. the reason that we do this is to help the network not overfit, because without this regularization it may overfit on certain parts of the latent space, but if we enforce that each latent variable adopts something similar to this prior, it helps smooth out the landscape of the latent space and the learned distributions.
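As a sketch of the two-term loss just described, assuming the encoder outputs a mean vector mu and a log-variance vector logvar for each latent dimension and the prior is a standard normal Gaussian: the closed-form KL expression below is the standard one for that case, and the weighting factor beta is an illustrative hyperparameter, not a value from the lecture.

```python
import tensorflow as tf

def vae_loss(x, x_hat, mu, logvar, beta=1e-3):
    # reconstruction term: pixel-wise difference between input and reconstruction
    recon = tf.reduce_mean(tf.square(x - x_hat))
    # regularization term: closed-form KL divergence between N(mu, sigma^2) and N(0, 1)
    kl = -0.5 * tf.reduce_mean(
        tf.reduce_sum(1.0 + logvar - tf.square(mu) - tf.exp(logvar), axis=1))
    return recon + beta * kl
```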
and so this regularization term is a function that captures the divergence between the inferred latent distribution and this fixed prior that we've placed. as I mentioned, a common choice for this prior distribution is a normal Gaussian, which means that we center it with a mean of 0 and a standard deviation of 1, and as the great question pointed out, what this enables us to do is derive some very nice properties about the optimal bounds of how well our network can do. by choosing this normal Gaussian prior, the encoder is encouraged to distribute the latent variables evenly around the center of the latent space, distributing the encoding smoothly, and the network will actually learn to penalize itself when it tries to cheat and cluster points outside this smooth Gaussian distribution, as would be the case if it were overfitting or trying to memorize particular instances of the data. what can also be derived, in the instance when we choose a normal Gaussian as our prior, is the specific distance function formulated here, which is called the KL divergence, the Kullback-Leibler divergence: specifically, when the prior distribution is a zero-one Gaussian, the divergence that measures the separation between our inferred latent distribution and this prior takes this particular form. so, to re-emphasize, this term is the regularization term that's used in the formulation of the total loss. okay, now that we've seen a bit about the reconstruction loss and a little more detail on how the regularization term works, we can discuss how we can actually train the network end to end using backpropagation. what immediately jumps out as an issue is that z here is the result of a stochastic sampling operation, and we cannot backpropagate gradients through a sampling layer because of its stochastic nature; backpropagation requires deterministic nodes to be able to iteratively pass gradients through and apply the chain rule. but a real breakthrough idea that enabled VAEs to take off was to use a little trick called the reparameterization trick, to reparameterize the sampling such that the network can be trained end to end, and I'll give you the key idea about how this operation works. instead of drawing z directly from a normal distribution parameterized by mu and sigma, which doesn't allow us to compute gradients, what we can do is consider the sampled latent vector z as the sum of a fixed vector mu and a fixed variance vector sigma, where we scale the variance vector by a random constant that is drawn from a prior distribution, for example a normal Gaussian. the key idea here is that we still have a stochastic node, but since we've done this reparameterization with this factor epsilon, which is drawn from a normal distribution, the stochastic sampling does not occur directly in the bottleneck layer z; we've reparameterized where that sampling is occurring. and a visualization that I think really helps drive this point home is as follows: originally, if we were not to perform this reparameterization, our flow looks a little bit like this, where we have deterministic nodes, shown in blue, the weights of the network as well as our input vector, and we have this stochastic node z that we're trying to sample from, and as we saw, we can't do backpropagation because we would get stuck at this stochastic sampling node when the network is parameterized like this.
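A sketch of the reparameterized sampling step just described: the noise epsilon is drawn from a standard normal off to the side, and z is formed deterministically from mu, sigma, and epsilon, so gradients can flow back through mu and sigma. The assumption that the encoder predicts a log-variance (so sigma = exp(0.5 * logvar)) is a common parameterization choice, not necessarily the one used on the slides.

```python
import tensorflow as tf

def sample_latent(mu, logvar):
    eps = tf.random.normal(shape=tf.shape(mu))   # stochastic node, diverted off to the side
    sigma = tf.exp(0.5 * logvar)
    return mu + sigma * eps                       # z is deterministic with respect to eps
```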
instead, when we reparameterize, we get this diagram, where we've now diverted the sampling operation off to the side, to this stochastic node epsilon, which is drawn from a normal distribution, and now the latent variables z are deterministic with respect to epsilon, the sampling operation. this means that we can backpropagate through z without running into the errors that arise from having stochasticity, and so this is a really powerful trick, because this simple reparameterization is what actually allows VAEs to be trained end to end. okay, so what do these latent variables actually look like, and what do they mean? because we impose these distributional priors on the latent variables, we can sample from them, and actually fix all but one latent variable and slowly tune the value of that one latent variable to get an interpretation of what the network is learning. what is done is, after the value of that latent variable is tuned, you run the decoder to generate a reconstructed output, and what you'll see now in the example of these faces is that the output resulting from tuning a single latent variable has a very clear and distinct semantic meaning, for example differences in the pose or the orientation of a face. and to re-emphasize, the network is actually learning these different latent variables in a totally unsupervised way, and by perturbing the value of a single latent variable we can interpret what they actually mean and what they actually represent. ideally, because we're learning this compressed latent space, what we would want is for each of our latent variables to be independent and uncorrelated with each other, to really learn the richest and most compact representation possible. so here's the same example as before, again with faces, and now we're walking along two axes, which we can interpret as pose or orientation on the x axis and, maybe you can tell, the smile on the y axis, and to re-emphasize, these are reconstructed images, but they're reconstructed by keeping all other variables fixed except these two and then tuning the value of those latent features. this is the idea of disentanglement: trying to encourage the network to learn variables that are as independent and uncorrelated with each other as possible. and so, to motivate the use of VAEs in generative models a bit further, let's go back to the example that I showed at the beginning of lecture, the question of facial detection. as I mentioned, if we're given a data set with many different faces, we may not know the exact distribution of these faces with respect to different features like skin color, and this could be problematic because imbalances in the training data can result in unwanted algorithmic bias. for example, the faces on the left are quite homogeneous, and a standard classifier trained on a data set that contains a lot of these types of examples will be better suited to recognizing and classifying faces that have features similar to those shown on the left, and so this can generate unwanted bias in our classifier. we can actually use a generative model to learn the latent variables present in the data set and automatically discover which parts of the feature space are under-represented or over-represented, and since this is the topic of today's lab, I want to spend a bit of time now going through how this approach actually works.
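One way the latent-perturbation idea above could look in code, assuming a trained decoder network and latent dimensionality like the ones in the earlier sketches (the function and variable names here are illustrative): hold every latent dimension fixed, sweep a single dimension across a range of values, and decode each point to see what that variable controls.

```python
import numpy as np

def traverse_latent(decoder, base_z, dim, values):
    """Decode a sweep over one latent dimension while holding the others fixed."""
    images = []
    for v in values:
        z = np.array(base_z, dtype="float32").copy()
        z[dim] = v                                   # perturb only one latent variable
        images.append(decoder(z[None, :]).numpy()[0])
    return images

# example usage: sweep latent dimension 0 from -3 to 3 (2 matches latent_dim above)
frames = traverse_latent(decoder, base_z=np.zeros(2), dim=0,
                         values=np.linspace(-3, 3, 9))
```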
what is done is that a VAE network is used to learn the underlying features of a training data set, in this case images of faces, in an unbiased and unsupervised manner, without any annotation. here we're showing an example of one such learned latent variable, the orientation of the face, and again, we never told the network that orientation was important; it learned this by looking at a lot of examples of faces and deciding that this was an important factor. from these learned latent distributions, we can estimate the probability distribution of each of the learned latent variables, and certain instances of those variables may be over-represented in our data set, like homogeneous skin color or pose, while certain instances may have lower probability and fall on the tails of these distributions. if our data set has many images with a certain over-represented skin color, the likelihood of selecting an image with those features during training will be unfairly high, and that can result in unwanted bias; similarly, faces with rarer features, like shadows, darker skin, or glasses, may be under-represented in the data, and so the likelihood of selecting them during sampling will be low. the way this algorithm works is to use these inferred distributions to adaptively resample data during training, and this is used to generate a more balanced and more fair training data set that can then be fed into the network to ultimately result in an unbiased classifier, and this is exactly what you'll explore in today's lab.
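As one possible, loosely sketched version of the adaptive resampling idea just described (the lab implements its own version of this, which may differ), one could estimate how common each training example's latent values are with a simple histogram and upweight the rare regions of latent space; the bin count and smoothing constant below are arbitrary illustrative choices.

```python
import numpy as np

def debiasing_sample_probs(latents, bins=10, smoothing=1e-3):
    """latents: (num_examples, latent_dim) array of encoded training images."""
    probs = np.ones(len(latents))
    for d in range(latents.shape[1]):                       # treat each latent dim independently
        hist, edges = np.histogram(latents[:, d], bins=bins, density=True)
        idx = np.clip(np.digitize(latents[:, d], edges[:-1]) - 1, 0, bins - 1)
        probs *= 1.0 / (hist[idx] + smoothing)              # rare latent values get upweighted
    return probs / probs.sum()

# draw the next training batch with these adaptive probabilities, e.g.:
# batch_idx = np.random.choice(len(latents), size=32, p=debiasing_sample_probs(latents))
```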
okay, so to reiterate and summarize some of the key points of VAEs: VAEs encode a compressed representation of the world by learning these underlying latent features, and from this representation we can sample to generate reconstructions of the input data in an unsupervised fashion; we can use the reparameterization trick to train our networks end to end, and use this perturbation approach to interpret and understand the meaning behind each of these hidden latent variables. okay, so VAEs are looking at this question of probability density estimation as their core problem, but what if we're just concerned with generating new samples that are as realistic as possible as the output? for that, we'll transition to a new type of generative model, called a generative adversarial network, or GAN. the idea here is that we don't want to explicitly model the density or the distribution underlying some data, but instead just generate new instances that are similar to the data that we've seen, which means we want to try to sample from a really, really complex distribution which we may not be able to learn directly in an efficient manner. the approach that GANs take is really simple: they have a generator network which just starts from random noise, and this generator network is trained to learn a transformation going from that noise to the training distribution, and our goal is that this generated fake sample should be as close to the real data as possible. the breakthrough to really achieving this was the GAN structure, where we have two neural networks, one we call a generator and one we call a discriminator, that are competing against each other, they're adversaries. specifically, the goal of the generator is to go from noise to produce imitations of data that are as close to real as possible; then the discriminator network takes both the fake data as well as the true data and learns how to classify the fake from the real, to distinguish between the fake and the real. and by having these two networks competing against each other, we can force the discriminator to become as good as possible at distinguishing fake from real, and the better it becomes at doing that, the better and better the generator will become at generating new samples that are as close to real as possible, to try to fool the discriminator. to get more intuition behind this, let's break it down into a simple example. the generator starts off from noise, and it's drawing from that noise to produce some fake data, which we're just representing here as points on a 1-D line. the discriminator then sees these points, and it also sees some real data, and over the course of training, the discriminator learns to output some probabilities that a particular data point is fake or real. in the beginning, if it's not trained, its predictions may not be all that great, but then you can train the discriminator, and it starts increasing the probabilities of what is real and decreasing the probabilities of what is fake, until you get this perfect separation where the discriminator is able to distinguish real from fake. and now the generator comes back, and it sees where the real data lie, and once it sees this, it starts moving the fake data closer and closer to the real data. it goes back to the discriminator, which receives these new points, estimates the probability that each point is real, learns to decrease the probability of the fake points maybe a little bit, and continues to adjust. and now the cycle repeats again, one last time: the generator sees the real examples and starts moving these fake points closer and closer to the real data, such that the fake data is almost following the distribution of the real data, and eventually it's going to be very hard for the discriminator to distinguish between what's real and what's fake, while the generator is going to continue to try to create fake data points to fool the discriminator. with this example, what I'm hoping to convey is really the core intuition, not necessarily the detailed specifics of how these networks are actually trained. so you can think of this as an adversarial competition between these two networks, the generator and the discriminator, and what we can do after training is use the trained generator network to sample and create new data that's not been seen before. to look at examples of what we can achieve with this approach: the examples that I showed at the beginning of the lecture were generated by using this idea of progressively growing GANs to iteratively build more detailed image generations. the way this works is that the generator and the discriminator start by having very low spatial resolution, and as training progresses, more and more layers are incrementally added to each of the two networks to increase the spatial resolution of the outputted generated images, and this is good because it allows for stable and robust training and generates outputs that are quite realistic. and so here are some more examples of fake celebrity faces that were generated using this approach.
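Here is a compact tf.keras sketch of one adversarial training step for a generator/discriminator pair like the one described above; the optimizers, noise dimension, and the use of a binary cross-entropy loss on a single discriminator logit are common choices for illustration, not necessarily the exact setup used to produce the results shown.

```python
import tensorflow as tf

cross_entropy = tf.keras.losses.BinaryCrossentropy(from_logits=True)
g_opt = tf.keras.optimizers.Adam(1e-4)
d_opt = tf.keras.optimizers.Adam(1e-4)

def train_step(generator, discriminator, real_images, noise_dim=100):
    noise = tf.random.normal([tf.shape(real_images)[0], noise_dim])
    with tf.GradientTape() as g_tape, tf.GradientTape() as d_tape:
        fake_images = generator(noise, training=True)
        real_logits = discriminator(real_images, training=True)
        fake_logits = discriminator(fake_images, training=True)
        # discriminator: push real examples toward 1 and fake examples toward 0
        d_loss = (cross_entropy(tf.ones_like(real_logits), real_logits)
                  + cross_entropy(tf.zeros_like(fake_logits), fake_logits))
        # generator: try to fool the discriminator into outputting 1 for fakes
        g_loss = cross_entropy(tf.ones_like(fake_logits), fake_logits)
    d_opt.apply_gradients(zip(d_tape.gradient(d_loss, discriminator.trainable_variables),
                              discriminator.trainable_variables))
    g_opt.apply_gradients(zip(g_tape.gradient(g_loss, generator.trainable_variables),
                              generator.trainable_variables))
    return g_loss, d_loss
```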
another idea involves unpaired image-to-image translation, or style transfer, which uses a network called CycleGAN. here we're taking a bunch of images in one domain, for example the horse domain, and without having the corresponding image in another domain, we want to take the input image and generate an image in a new style that follows the distribution of that new style, so this is essentially transferring the style of one domain onto a second, and this works back and forth. the way this CycleGAN is trained is by using a cyclic loss term, where if we go from domain X to domain Y, we then take the result and go back from domain Y to domain X, and we have two generators and two discriminators working at the same time. maybe you'll notice in this example of going from horse to zebra that the network has not only learned how to transform the skin of the horse from brown to the stripes of a zebra, but has also changed a bit about the background in the scene: it has learned that zebras are more likely to be found in maybe the savanna grasslands, so the grass is browner and more savanna-like in the zebra example compared to the horse. what we actually did was use this approach of CycleGAN to synthesize speech in someone else's voice. what we can do is take a bunch of audio recordings in one voice and audio recordings in another voice, and build a CycleGAN to learn to transform representations of the one voice to make them appear like they are representations of the other voice. so what can be done is to take an audio waveform, convert it into an image representation called a spectrogram, which you see on the bottom, and then train a CycleGAN to perform the transfer from one domain to the next. this is actually exactly how we did the speech transformation for yesterday's demonstration of Obama's introduction to the course, and so we're showing you what happened under the hood here. what we did is we took original audio of Alexander saying the script that was played yesterday, took the audio waveforms, converted them into the spectrogram images, and then trained a CycleGAN using this information and audio recordings of Obama's voice to transfer the style of Obama's voice onto our script: "welcome to 6.S191, the official introductory course on deep learning here at MIT." and so yeah, on the left, that was the spectrogram of Alexander's original audio, and the spectrogram on the right was generated by the CycleGAN in the style of Obama's voice. okay, so to summarize: today we've covered a lot of ground on autoencoders, variational autoencoders, and generative adversarial networks, and hopefully this discussion of these approaches gives you a sense of how we can use deep learning to not only learn patterns in data but to use this information in a rich way to achieve generative modeling. I really appreciate the great questions and discussions, and all of us are happy to continue that dialogue during the lab session. our lab today is going to focus on computer vision, and as Alexander mentioned, there is another corresponding competition for lab two, and we encourage you all to stick around if you wish to ask us questions. thank you again [Applause]
MIT_8286_The_Early_Universe_Fall_2013
16_BlackBody_Radiation_and_the_Early_History_of_the_Universe_Part_II.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT at [email protected]. PROFESSOR: OK, in that case, let's get started. As usual, I like to begin by giving a review of what we talked about last time. This time on slides instead of on the blackboard. We're talking mainly about relativistic energy, or relativistic energy and momentum, and pressure, sometimes. The key equation is probably the most famous equation in physics, Einstein's e equals m c squared. And I gave some numerical examples. I actually looked up some more numbers since then, when I was revising the lecture notes. So these are slightly more up to date. But it's still true that if you could burn matter at the rate of one kilogram per hour, you would have about one and a half times the total power output of the world. And that's apparently still valid in 2011. I only had 2008 figures, actually, at the lecture last time. And if you imagine the 15-gallon tank of gasoline, and you could figure out how much that-- what it's mass is, and convert that to energy, it turns out that a 15-gallon tank of gasoline could power the world for about two and half days, if you could convert all of it into energy. The catch, of course, is that we can't convert matter into energy. We can't get around the problem that, at least at the energies that we deal with, baryon number. And that number of protons and neutrons is conserved, so we can't make protons and neutrons disappear. And that means that we're limited in what we can do. In particular, one of the most efficient things we can do is fission uranium 235. But when uranium 235 undergoes fission, less than 1/10 of 1% of the mass is actually converted into energy, which is why we can't actually avail ourselves of these fantastic numbers that would apply, if we could literally just convert matter into energy. We went on to talk about the relativistic definitions of energy and momentum, and how they come together to form a Lorentz four vector, and the underlying theme here is that we consider ourselves users of special relativity. Most of you I know have taken special relativity courses, and for those of you who have, this is a review. For those you who have not, and there are some of those also, no need to panic. I intend to tell you every fact that you need to know about special relativity. I won't tell you how to derive them all, but I'll tell you all you'll need to know for this class. So in particular, it's useful for this class to recognize that energy and momentum can be put together into a four vector, where the zeroth component is the energy divided by the speed of light. And the three spatial components are just the three components of the spatial momentum, although they have to be defined relativistically. The relativistic definition of momentum, at least how it relates to velocity, is that it's equal to gamma times the rest mass times the velocity, where gamma is the famous factor that we've been seeing all along when we've talked about relativity. The Lorentz contraction factor, one over the square root of 1 minus v squared over c squared. The energy of a particle, relativistically, is the same gamma, times m 0 times c squared, and it can also be written as the square root of m 0 c squared squared, plus the momentum squared, times c squared. 
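As a small numerical check of the formulas quoted above, here is a short Python snippet for an illustrative particle, an electron moving at 0.9c; the particular numbers are standard constants, not values from the lecture.

```python
import math

c = 2.998e8            # speed of light, m/s
m0 = 9.109e-31         # electron rest mass, kg
v = 0.9 * c

gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
p = gamma * m0 * v                          # relativistic momentum
E = gamma * m0 * c ** 2                     # relativistic energy
# E should equal sqrt((m0 c^2)^2 + (p c)^2)
E_check = math.sqrt((m0 * c ** 2) ** 2 + (p * c) ** 2)
print(gamma, E, E_check)                    # the two energy expressions agree
```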
Since the momentum forms a four vector, its Lorentz and variant square should be Lorentz and variant, and that means that the momentum squared minus p 0 squared should be the same in all inertial reference frames. And that's just equal when you put in what p 0 means, the momentum squared minus the energy squared, divided by c 0 squared. And to know what value it's equal to in all frames, it's efficient to know what it's equal to in one frame. And the one frame where we do know what it's equal to is the rest frame of a particle. So in the rest frame, the momentum vanishes, and the energy is just m 0 c squared. So in the rest frame, we can evaluate this, and we get minus m 0 c squared squared. And that means that has to be the value in every frame. And this in fact is the easy way to derive the relationship between energy and momentum. If we go back, the equation we had relating energy and momentum is really exactly that equation, rearranged. Just to give an example of how this works, when we actually have energy exchanges, I pointed out that we could talk about the energy of a hydrogen atom. And because energy and mass are equivalent, the hydrogen atom clearly has a little bit less energy than an isolated proton, plus an isolated electron. Because when you bring them together there's a binding energy, and that binding energy is called delta e, and has a value of 13.6 electron volts for the ground state of hydrogen. So that tells us the mass of an hydrogen atom is not the sum of the two masses, but rather has this correction factor, because we've taken out a little bit of energy for the binding. And that means we've taken out a little bit of mass. OK, then we talked about the mass density of radiation and how-- building up to how that will affect the universe. And we said that the mass density of radiation is just the energy density divided by c squared. And that can be taken, really, as a definition of what we call relativistic mass, and hence relativistic mass density. But the important point is that this mass density actually does gravitate the same as any other mass density of the same value. It really does create gravity in the same way. Now I mentioned that things are much more complicated if you want to talk about the gravitational field produced by a single moving particle. That's asymmetric, the velocity of the particle shows up in the equations that describe the metric surrounding a single moving particle. But if you have a gas of particles moving at high velocities, where the velocities nonetheless average to zero, which tends to happen, in a gas at least in the rest frame of the gas. Then that gas will produce gravitational fields, just like a static mass density. Where the mass density is this relativistic energy divided by c squared, the relativistic definition of the mass density. It's also useful to know that the photon-- if we want to describe it as a particle, is a particle of zero rest mass. Which means that it can never be at rest, it always moves at the speed of light. And it also means that its energy can be arbitrarily small, because the energy is proportional of momentum, and the momentum of a photon can be as small as you like. For giving frequency, of course, the energy of a photon is fixed. It's h times nu but if you're allowed to vary the frequency, which you can do if you just look at it in different frames, you can make the energy as small as you like. 
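Going back to the hydrogen-atom example above, a quick numerical check shows just how small the mass difference is; the rest energies used below are standard textbook values, not numbers quoted in the lecture.

```python
# fractional mass defect of a ground-state hydrogen atom
binding_eV = 13.6                          # binding energy, eV
rest_energy_eV = 938.272e6 + 0.511e6       # proton + electron rest energies, eV
fraction = binding_eV / rest_energy_eV
print(fraction)                            # roughly 1.4e-8: a tiny correction to m_p + m_e
```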
And the famous equation then, p squared minus e squared c squared, which would have on the right, minus m 0 squared c to the 4th m 0 squared c squared, excuse me-- has zero on the right hand side, because m 0 is 0 And that means that for photons, the energy is just the speed of light times the momentum. And that's a famous relationship that photons obey. Now, thinking about how this gas of photons will behave in the universe, we realized immediately that it does not behave the same way as a mass density of ordinary non-relativistic particles. Which is what we have been dealing with to date. The important difference is that in both cases, the number density falls off like 1 over a cubed, as the universe expands, these particles are not created and destroyed in significant numbers, they just persevere. So the number density of either non-relativistic particles, or photons, just falls off like one over the volume, as the volume increases and the number density dilutes. But what makes photons different from non-relativistic particles, is that a non-relativistic particle will maintain the energy of that particle as the universe expands, but photons will redshift as the universe expands. So each photon will itself lose energy. And it loses energy proportional to one over the scale factor. And that's just because the frequency shift proportionally to the scale factor. And that means that the energy per photon shifts, because quantum mechanically we know that the energy of a photon is proportional to its frequency. So if the frequency redshifts so must the energy. In exactly the same way, 1 over a f t. Yes, question? AUDIENCE: You said previously that neutrinos behave like radiation in the sense that theta energy falls is 1 over a. What is it about them that makes this happen? Because there are also particles with standard kinetic energy, right? PROFESSOR: OK, the question, in case you didn't hear it, is why-- how did neutrinos fit in here? I've made the claim that neutrinos act like radiation in the early universe, but neutrinos have a non-zero mass, so they should obey the standard formulas for particles with nonzero masses. The answer to that is-- there is, I think, a simple answer, which is that as long as the energy is large compared to the mass, particles with masses will still act like massless particles. It doesn't really matter if the mass is zero or not, the key thing, really, is this equation. So if the term on the right hand side is small compared to either of the two on the left, it's not much different from being zero. And that's what happens for neutrinos in the early universe. And we'll see soon that if you go to early enough times, it's true even for electron-positron pairs. They will also act like radiation. Any particle will act like radiation as long as the energy is large compared to the rest energy. So getting back to the discussion of the early universe, if the energy of each photon falls off like one over the scale factor, and the number density falls off like one over the scale factor cubed, it means that the energy density, and hence, the mass density of radiation will fall off like one over a to the fourth, in contrast to the one over a cubed, that we found when we were talking about non-relativistic matter. And that, of course, is going to make a difference. Because those issues play a key role in our discussions about how the universe evolved. 
An important feature, which we see immediately, is that if we extrapolate backwards in time, since the radiation mass density is falling off like one over a to the fourth, and the matter density is falling off like one over a cubed, it means that as you go back in time, the radiation becomes more and more important relative to the matter, by factor of the scale factor. So if we go back far enough, we will even find a time when the mass density in radiation equaled the mass density in non-relativistic matter. And we calculated about when that would be. We said that the energy density in radiation today is given by this number, 7 times 10 to the minus 14, joules per meter cubed. And I just gave you this number, I didn't derive it yet. We will derive it, probably later today. But for now, we're just accepting it. And that implies we can calculate from that the ratio of the mass densities in radiation and ordinary matter. Here, we use the fact that ordinary matter can be described by having an omega ratio to the critical density of about 0.3. And we know how to calculate the critical density, and that allowed us to calculate the density of ordinary matter. And then this ratio turned out to be 3.1 times 10 to the minus four. So radiation in today's universe is almost negligible in its contribution to the overall energy balance, compared to non-relativistic matter. But if you extrapolate backwards, we know that this ratio will vary as one over the scale factor. And we could figure out what constants to put this equation by putting in the right constant, so that this equation gives us the right value today. Where the right value today is 3.1 times 10 to the minus four. And notice that this works, if we let t be equal to t sub zero, this factors one and we get 3.1 times 10 to the minus four. So these two factors together they have t zero and the 3.1 times 10 to the minus four, are just the right factors to put in to give us the right constant of proportionality in that equation. Having this equation, we can then ask, how far back do we have to go, how much we have to change t, for the ratio to be one? And that's a straightforward calculation. And the ratio of the a is then just one over 3.1 times 10 to the minus four, or 3,200. So if we talk about it in terms of a redshift, which is how astronomers always talk about distances, or times, we're talking about going back to a redshift of 3,200. We can figure out what time, then, corresponds to also, if we know how a f t depends on time. And we do, approximately. For this calculation, I assume for now, we could do better later, and we will-- but I assume for now that we could just treat the period between matter radiation equality, so-called t x and now as being entirely described by a matter-dominated universe. That's only a crude approximation, but it will get us the right order of magnitude. And we'll learn how to do better later in the course. So if we assume that, then t x is just this number to the 3/2 power, cancelling the 2/3, times the age of universe, t naught. And that turned out to be about 75,000 years. So somewhere in the range of 100,000 years, 50,000 years, is the time in the history of the universe when radiation ceased to be more important than matter. And for earlier times than that, the radiation dominated. And that's what we refer to as the radiation dominated era. Any questions? OK, now I think we get on to what is really the important subject that we want to understand, and most of this you did yourself on the homework. 
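The estimates above can be reproduced roughly with a few lines of Python; the values assumed here for the Hubble parameter and the age of the universe are round modern numbers rather than the exact ones used in lecture, so the outputs only approximately match the quoted 3.1e-4, redshift of about 3,200, and roughly 75,000 years.

```python
import math

c = 2.998e8                      # m/s
G = 6.674e-11                    # SI gravitational constant
u_rad = 7e-14                    # J/m^3, radiation energy density today (quoted in lecture)
H0 = 67.7e3 / 3.086e22           # ~67.7 km/s/Mpc converted to 1/s (assumed value)
t0 = 13.8e9                      # age of the universe in years (assumed value)

rho_crit = 3 * H0 ** 2 / (8 * math.pi * G)
rho_rad = u_rad / c ** 2                       # mass density of radiation
ratio_today = rho_rad / (0.3 * rho_crit)       # radiation / matter today, ~3e-4
z_eq = 1.0 / ratio_today                       # redshift of matter-radiation equality
t_x = ratio_today ** 1.5 * t0                  # crude matter-dominated estimate, ~7.5e4 yr
print(ratio_today, z_eq, t_x)
```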
But I'll summarize the argument here. We want to understand what this tells us about the Friedmann Equations. And first, we'd like to understand what it says about pressure. Because it turns out that pressure is the crucial issue in determining how fast row falls off as a expands. So if row is proportional to one over a cubed, we can just differentiate that, putting in a constant proportionality temporarily, just to keep track of what we're doing. Since we know how to differentiate qualities and we're less familiar with how to differentiate proportionalities. But what we find immediately, is that row dot is then minus 3, where that 3 is that 3, times a dot over a times row. On the other hand, if row of t falls like one over a to the fourth, row dot is minus 4 times a dot over a times row, just by differentiation. So we get different expressions from row dot, between radiation and matter. And we want to explore the consequences of that difference. It's related to the pressure of the gas, because we can relate the pressure to row dot. Because we know that as a gas expands, it loses energy, which is just equal in amount to pdV. And we illustrated this by a piston thought experiment, but it's true in general. So we can apply this famous formula, dU equals minus pdV, to a patch of the expanding universe. And by a patch I mean some fixed region and coordinate space. So the total energy in that region of coordinate space will be the physical volume, which will be a cubed times the coordinate volume. Which is going to cancel out of this equation on both sides. So it's a cubed times the coordinate volume times the energy density, which is row c squared. The rate of change of that is Du. And then on the right hand side, we have minus p minus b times dV, which is the rate of change of a cubed, again times the coordinate volume that we're talking about. But that will cancel out on the two sides of the equation. So this is really just a description for the universe of the dU equals minus pdV equation. And this can just be rearranged, expanding the time derivatives, to give us row dot, and we get minus 3 a dot over a, times row plus p over c squared. So this tells us how to relate row dot to the pressure. And it tells us that the formula that we started with a long time ago, which just said that row fell off like 1 over a cubed, was synonymous with saying the pressure is zero, for a gas of non-relativistic particles, the pressure is negligible. But for radiation, clearly if we're going to get a four instead of a three, the pressure will be non-negligible. And in fact, it implies that the pressure is exactly equal to one third of the energy density for a gas of radiation. OK, knowing that, we can now look back at the Friedmann equations and ask, how do they stand up? Are they still consistent, or do we have to modify something? And this is really the crucial point. What we know are these three equations, which are the two Friedmann equations and the equation for row dot. And we could see immediately that those equations are not independent of each other. The easiest thing to see is that if we start with the top equation, we could differentiate it. And since the top equation has a dot in it, when we differentiate, we'll get an equation for a double dot. But the equation will also involve row dot, if we take the time derivative of that top equation. But if we know what row dot is, we could put that in, and in the end we'll get an equation for a double dot by itself. 
And it will in fact agree with the equation on the middle line. Again, things would be inconsistent. Things are consistent, we didn't make any mistakes. If we derive the equation for a double dot by using the first and third of those equations, then we'll get the second of those equations. But the catch is that now we want to modify the third of those equations, the equations for row dot. And then the Friedmann equations as we've written them will not be consistent anymore. Because we'll have a different equation for row dot, we'll get a different equation for a double dot. So we have to decide what gives. What can we change to make everything consistent? And here, the rigorous way of proceeding is to look at general relativity and see what it says, and the answer we're going to write down is exactly what general relativity says. But we can motivate the answer in, I think, a pretty sensible way, by noticing that as the universe expands, we'd expect the energy density to vary continuously, because energies are conserved. And we also expect a dot to vary continuously, because basically, the mechanics of the universe are like ton's laws. And velocities don't change discontinuously. You can apply a force, and that causes velocities to have a rate of change. But velocities don't change instantaneously, unless you somehow apply an infinite force. And the same thing will be true with the universe. On the other hand, accelerations can change instantaneously. You could change the force acting on a particle, in principle, as fast as you want, and the acceleration of the particle will change at that same rate. So if we look at these equations, we would expect that the first equation and the third equation would not be allowed to involve the pressure. Because the pressure basically is a measure of a force. Pressures can change instantaneously. So what you need to do, if we're going to make these equations consistent in the presence of pressure-- which changes the row dot equation, the only equation we can change is the second one. And then we can ask ourselves, what do we have to change it to make the three equations consistent? And this is what you looked at on your homework. And the answer is that the a double dot equation has to be modified to give the equation at the bottom of the screen here. And this is the correct form of the a double dot equation in cosmology. And this is what we'll be using for the rest of the term, this is exactly what you would get from general relativity. As long as we're talking about homogeneous and isotropic universes, this formula as exact as far as we know. OK, any questions about that? Yes. AUDIENCE: Why when we derive that equation do we use-- PROFESSOR: This equation? AUDIENCE: Or the one above that. PROFESSOR: Yeah. AUDIENCE: We use dU equals minus pdV? I mean, I agree with that, but couldn't we use a more complete version? Like, the complete version of the first law of thermodynamics, that dU equals TDS minus pdV. PROFESSOR: OK, yeah. The question was when we wrote down dU equals minus pdV, why did we not include a plus TDS term here, which could also be relevant. The answer is that for the applications we're interested in-- you're quite right, it could be important, but for the applications that we're interested in, which is the expanding gas in the universe, the expanding gas in the universe will be making use of this fact. 
It really does expand adiabatically, that is, there's nothing putting heat in or out, and everything is remaining very close to thermal equilibrium, which means that entropy does not spontaneously change. So the TDS term we will be assuming is very, very small, and that's accurate. And you're right, if that were not the case, there would be further complications in terms of figuring out what row dot is. Let me point out here that this equation actually does contain a somewhat startling perhaps fact about gravity it says that in the context of general relativity. And that's really the context that we're in, even though we haven't learned a lot of general relativity. But it says that in the context of general relativity, pressures, as well as mass densities, contribute to the gravitational field. A double dot is basically a measure of how fast gravity is slowing down the universe. And this says that there's a pressure. It can also help to slow down the universe. Meaning that pressure itself can create a gravitational field. In the early universe, where we go back to this radiation dominated period, we know that the pressure is one third of the energy density. That says that this pressure term is the same size exactly as the mass density term. So in the radiation dominated phase, the pressure is just as important in effect for slowing down the universe as is the mass density. In today's universe it's negligible. Well, we'll come back that. The dark energy has a non-trivial pressure, but the pressure of ordinary matter in today's universe is negligible. The other important fact about this equation is that energy densities, so far as we know, are always positive. We don't know for sure what the ultimate laws of physics are, but for all the laws of physics that we know, energy densities are positive. On the other hand, pressures actually can be negative for some kinds of material. And we'll talk a little bit more about how to get a negative pressure later. But this formula tells us that positive pressures act the same way as positive mass densities, creating an attractive gravitational field which slows down the expansion of the universe. But if there could be a material with a negative pressure, this same equation, which would presumably still hold, and believe it does, would tell us that that negative pressure would actually cause the universe to accelerate, because of its gravitational effects. Now, we're not talking about the mechanical effects of the pressure. Mechanical effects of pressure only show up when there are pressure gradients, when the pressure is uneven. So the very large air pressure in this room, which really is quite large, we don't feel all, because it's acting equally in all directions. Uniform pressures do not produce forces. So the mechanical effects of the pressure in the early universe, which we're assuming is completely homogeneous, is zilch. There is no mechanical effect. But what we're seeing in this equation is a gravitational effect caused by the pressure. And it's obviously gravitational, business the effect is proportional to Newton's capital G, a constant determining the strength of gravity. So the equation is telling us that a positive pressure creates a gravitational attraction, which would cause the universe to slow down in its expansion. But a negative pressure would produce a gravitational repulsion, which would cause universe to speed up. 
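For reference, here are the two equations discussed above written out explicitly; this is just a restatement of what was described in words, using the same conventions.

```latex
% Energy bookkeeping in a comoving volume, and the corrected
% acceleration (a-double-dot) equation with its sign condition.
\begin{aligned}
\frac{d}{dt}\!\left(\rho c^{2} a^{3}\right) = -\,p\,\frac{d}{dt}\!\left(a^{3}\right)
\quad &\Longrightarrow \quad
\dot{\rho} = -3\,\frac{\dot{a}}{a}\left(\rho + \frac{p}{c^{2}}\right),\\[4pt]
\ddot{a} = -\frac{4\pi}{3}\,G\left(\rho + \frac{3p}{c^{2}}\right) a
\quad &\Longrightarrow \quad
\ddot{a} > 0 \;\text{ if and only if }\; p < -\tfrac{1}{3}\,\rho c^{2}
\;(\text{for } \rho > 0).
\end{aligned}
```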
And we now know that, today-- in fact, for the last five billion years or so-- our universe itself is actually accelerating under the influence of something. And the only explanation we have is that the something that's causing the universe to accelerate is the repulsive gravity caused by some kind of a negative pressure material. And that negative pressure material is what we call the dark energy. And we'll talk a little more later about what it is. It's very likely just vacuum energy. But we'll come back to that later in the course. Yes? AUDIENCE: When we looked at the toy example of the piston in the cavity, the pressure of the gas was pushing outwards against the wall of the container. But we can't have that view of the universe, really, because there's no exteriors in the universe. PROFESSOR: There's no walls, right. AUDIENCE: So how should we view pressure in the sense that-- PROFESSOR: OK. Yeah. OK, the question is, in our toy problem involving the piston, we had walls for the pressure to push against. And that was where the energy went. It went into pushing the walls. When we're talking about the universe, there are no walls. How does that analogy work? What plays the role of the walls? And the answer, I think, is that the role of the walls, when we're talking about the universe, first of all, you can ignore it if you just took in a small region. You could still just say, the small region is pushing out on the regions around it. And I think that's enough to make the logic clear. But it still leaves open the question of, ultimately, where does this energy go? So saying it goes from here to there doesn't help you unless you know where it goes after it goes from and there and there. So you might want to ask the question more generally, where does the energy ultimately end up? And then I think the answer is that it ends up in gravitational potential energy. You could certainly build a toy model, where you just have a gas in a finite region, self-contained under gravity. And then you'd have to make up some kind of a mechanism to cause it to expand. But when you cause it to expand, you'll be pulling particles apart, which are attracting each other gravitationally. And that means you'll be increasing the gravitational potential energy as you pull the gas apart. So I think, ultimately, the answer is the energy imbalance that we seem to be seeing here is taken up by the gravitational field so that, all in all, energy still conserved. Any other questions? OK, in that case, let us continue on the blackboard. OK, first thing I want to look at it is just the behavior of a radiation-dominated flat universe. So a flat universe is going to obey H squared equals 8 pi over 3 G rho, and then the potential minus kc squared over a squared. Hard to write dotted lines on the blackboard. But this potential term is not there, because k equals 0. We're talking about the flat case. So for a flat case, we could just express H in terms of rho. And we know how rho behaves for radiation. Rho falls off as 1/a to the fourth. So H squared is proportional to 1/a to the fourth. That means that H itself is proportional to 1/a squared. So we can do that. a-dot over a is equal to some constant over a squared, a-dot over a being H. And now we can multiply both sides by a, of course. And we get a-dot is equal to a constant over a. And this we can integrate. The way to integrate is to put all the a's on one side and all of the dt's on the other side. So we get ada, writing this as da/dt. 
So ada is equal to the constant times dt. And then, as we've done before, when we're talking about matter, it's the same calculation there. I just did different power of a appears so we'd know how to do it. Integrating, we get 1/2a squared is equal to the constant times t, and then plus a new constant of integration, constant prime. Now we make the same argument as we've made in the past. We have not yet said anything that determines how our clocks are going to be set. So we can choose to set our clocks in the standard way, which is to set our clocks so that t equals 0 corresponds to the moment where a is equal to 0. And if a is going to be equal to 0 when t is equal to 0, it means constant prime is going to be equal to 0. So by choosing the value of constant prime, we really are just determining how we're going to set our clocks, how we're going to choose the 0 of time. And we'll do that by setting constant prime equal to 0. And then we get the famous formula for a radiation-dominated universe, a of t is just proportional to the square root of t, or t to the 1/2 power. And this is for a radiation-dominated flat universe, replacing the t to the 2/3 that we have for the matter-dominated flat universe. Once we know that a is proportional to the square root of t-- and for the flat universe, the constant proportionality mean nothing, by the way. It's not that we haven't been smart enough to figure out what it is. It really has no meaning whatever. You could set it equal to whatever you want, and it just determines your definition of the notch, your definition of how you're going to measure the comoving coordinate system. Once we know this, we should know pretty much everything. So in particular, we can calculate h, which a-dot over a. And the constant proportionality drops when we compute a-dot over a. As we expect, it has no meaning. So it should not appear in the equation for anything that does have physical meaning. So H is just 1/2t, the 1/2 here coming from differentiating the t the 1/2 power. We can also compute the horizon distance. So the physical horizon distance, l sub p horizon, where p stands for physical, is equal to the scale factor times the coordinate horizon distance. And the coordinate horizon distance is just the total coordinate distance that light could travel from the beginning of the universe. And we know the coordinate velocity of light is c divided by a. So we just integrate that to get the total coordinate distance. So it's the integral from 0 to t of c over a of t prime dt prime. And since a of t is just t to the 1/2, this is a trivial integral to do. And the answer is 2 times c times t. So in a radiation-dominated universe, the horizon's distance is twice c times t. For a static universe, the horizon distance would just be c times t. It would just be the distance light can travel in time t, but more complicated in an expanding universe. For the matter-dominated case, we discovered that the horizon distance was 3ct, if you remember. For the radiation-dominated case, it's 2ct. And finally, an important equation is that, going back to here, where we started, this equation relates H to rho. We found out that, merely by knowing the universe is radiation-dominated, without even caring about what kind of radiation it is, how much neutrinos, how much photons, whatever-- doesn't matter-- merely by knowing the universe is radiation-dominated, we were able to tell that H is 1/2t. And if we know what H is, that formula tells us we also know what rho is. 
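To collect in one place what was just derived for the radiation-dominated flat universe (the last relation anticipates the energy density written out in the next paragraph; it follows directly from the flat Friedmann equation H squared = 8 pi G rho / 3):

```latex
a\,\frac{da}{dt} = \mathrm{const}
\;\Rightarrow\;
a(t) \propto t^{1/2},
\qquad
H = \frac{\dot a}{a} = \frac{1}{2t},
\qquad
\ell_{p,\mathrm{horizon}}(t) = a(t)\int_{0}^{t}\frac{c\,dt'}{a(t')} = 2ct,
\qquad
\rho = \frac{3H^{2}}{8\pi G} = \frac{3}{32\pi G\,t^{2}}.
```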
So without even knowing what kind of radiation is contributing, we know that, for a radiation-dominated universe, rho is just equal to 3 over 32 pi Newton's constant G times time, little t, squared. It's rather amazing that we can write down that formula without even knowing what kind of radiation is contributing. But as long as that radiation falls off as 1 over the scale factor to the fourth, and as long as we know the universe is flat, then we know what that energy density has to be. This is crucial here, by the way. The energy could be anything if we did not assume that the universe was flat. OK, any questions about this? Yes? AUDIENCE: If we assumed that it was almost flat, would we be able to have any bounds on it? PROFESSOR: OK, question is, if we assumed that it was almost flat, would we be able to have any bounds on it? The answer is, yeah, if you were quantitative about what you meant by "almost flat," you could know how almost true that formula would have to be. OK, if there are no other questions, I want to switch gears slightly now and go back to talk about some of the basic underlying physics that we are going to need, and in particular, the physics of black-body radiation. So this is really just a little chapter of a stat mech course that we're inserting here, because we need it. And because it comes from another course, we're not going try to do it in complete detail. But I'll try to write down formulas that make sense. And that will give us what we need to know to proceed. So that will be the goal. So what is black-body radiation? The physical phenomenon is that, if one imagines a box with a cavity in it-- that's supposed to be a box with a cavity in it, in case you can't recognize the picture-- if the box is held at some uniform temperature t-- t is temperature-- then is claimed and verified experimentally that the cavity will fill up with radiation-- in this case, we're really just talking about electromagnetic radiation-- the calving will fill up with electromagnetic radiation whose characteristics would be determined solely by that temperature t and will therefore be totally independent of the material that makes up the box. Roughly speaking, I think the way to think about it is to say that the box will fill up with radiation at temperature t. And saying that the radiation has temperature t is enough to completely describe the radiation. It doesn't matter what kind of a box that radiation is sitting in. So the box will fill with radiation at temperature t. And that radiation is called black-body radiation. Like many things in physics, it has a variety of names, just to confuse us all. So it's also called cavity radiation, which makes a lot of sense, given the description we just gave. And it's also sometimes called just thermal equilibrium radiation. This is radiation at temperature t. I haven't really justified the word "black-body" radiation yet, so let me try to do that quickly. The reason why it can be called black-body radiation-- and this will be important for some things in cosmology; I'm not sure if it will be important to us or not, but certainly important to know-- the reason why it's called black-body radiation is because we imagine inserting into this cavity a black body, in the literal sense. What is the literal sense of a black body? It's a body which is black in the sense that all radiation that hits it is absorbed. Now, this black body is still going to glow. If you heat a piece of iron or something to very high temperatures, you see it glow. 
That glow is not reflection. That glow is emission by the hot atoms in the piece of iron or whatever. Emission is different from reflection. When we say it absorbs everything, we mean it does not reflect anything. But it will still emit by thermal de-excitation. The crucial distinction between reflection and thermal emission is that reflection is instantaneous. When a light beam comes in, if it's reflected, it just goes back out instantaneously. Emission, thermal emission, is a slower process. Atoms get excited, and eventually, they de-excite and emit radiation. So it takes time. And that's the distinction. We're going to assume that this body is black in the sense that there's no reflection. OK, now we're going to make use of the fact that we know that thermal equilibrium works. That is, if you let any isolated system sit long enough, it will approach a unique state of thermal equilibrium determined by its constituents, which you've put in to begin with, but otherwise independent of how exactly you arrange those constituents. So if we put in, for example, a cold black body, it will start to get hotter, warming up to the same temperature as everything else. If we put in an extra hot black body, it would emit energy and start to cool down to the temperature of everything else. But eventually, this black body will be at the same temperature as everything else. And we're going to be assuming here that the box itself is being held at some fixed temperature t. So whatever energy exchange occurs because of this black body will be absorbed by whatever is holding the outer box at the fixed temperature. So in the end, if we wait long enough, this black body is going to acquire the same temperature as everything else and hold that temperature. Now, if it's holding that temperature, it means that the energy input to the box, to the black body, will have to be the same as the energy output of the black body. Now, the black body is going to be absorbing radiation, because we have radiation here, and we said that any radiation that hits it is absorbed. That was the definition of "black." So it's clearly absorbing energy. If it's not going to be heating up-- and we know that it's not, because it's in thermal equilibrium; the temperature will remain fixed-- in order for it to not heat up, it has to radiate energy, as well. And the energy it radiates has to be exactly the same as the energy it's absorbing once it reaches thermal equilibrium. So in equilibrium, the black body, BB, radiates at the same rate that it absorbs energy. This radiation process is this slow process of thermal emission. There are atoms inside this black body that are excited. Those atoms will de-excite over time, releasing photons that will go off. And the important thing about that slow mechanism is that, if we imagine taking this black body out of its cavity, but not waiting long enough for its temperature to change-- so we'll assume its temperature is still the same, t. So this is a picture of the same black body at temperature t, but now outside the cavity. Its radiation rate is not going to change when we take it outside the cavity, because the radiation was caused by things happening inside the black body, which are not changed when we put it in or out of the cavity. So it will continue to radiate at exactly the same rate that it was radiating when it was in the cavity. And that means it's going to radiate at exactly the same rate as the energy that it would have absorbed if it were bathed by this black-body radiation.
So essentially, it means it will emit black-body radiation with exactly the intensity that the black-body radiation would have on the outside if the black body were still inside the cavity. So it radiates with exactly the same intensity as the energy that it would receive if it were inside the cavity. And furthermore, you could even elaborate a bit on this argument to show that the radiation that it radiates has exactly the same spectrum, exactly the same decomposition into wavelengths, as the black-body radiation inside the cavity. And the way to see that is to imagine surrounding this black body by absorption filters that only let through certain frequencies. And the point is that, no matter what frequencies you limit going through this filter, you have to stay in equilibrium. It will never get hotter or colder. So that means that each frequency by itself has to balance, has to have exactly the same emission as it would have absorption if the black body were just exposed to black-body radiation surrounding it. So it radiates black-body radiation. And the intensity and spectrum must match what we call black-body or cavity radiation. So the cavity radiation has to exactly mimic the radiation emitted by this black body. And that's the motivation for calling it black-body radiation. Now, if this black body absorbed some radiation and reflected some, then it would emit different radiation. So it is important that this body be black, in the sense that it doesn't reflect anything. All radiation hitting it is absorbed. And only under that assumption do we know exactly what it's going to emit. Yes? AUDIENCE: So is this only true for right after you take it out? PROFESSOR: Well, it will start to cool after you take it out. And as it cools, its temperature will change. But if you account for the changing temperature, it will be true at any time, actually. But the temperature will change. AUDIENCE: Because, in the black cavity, it has things exciting it. And when you take it out, there's no photons, no constant radiation to excite it. So it can radiate-- PROFESSOR: Yes. Once you take it out, it's no longer being excited. And I think, technically, you're right. Once you take it out, it will not only cool, but it will cease to be at a uniform temperature. And that's basically what we're saying if we're saying that the atoms that are excited won't necessarily be in the same thermal distribution as they would be if it were on the inside. That would be a statement that it is not any longer in thermal equilibrium. But as long as the radiation is slow, you could just account for the changing temperature. You would know how it radiates. And I think that's a very good approximation. Although in principle, it will cease to be in thermal equilibrium as soon as you take it out-- the edges will be cool, and the center will be hot. And you'd have to take into account all of those things to be able to understand how it radiates. Any other questions? OK, next, I want to talk a little bit about what this black-body radiation is. And one can begin by trying to understand it purely classically, which, of course, is what happened historically. In the 1800s, people tried to understand cavity radiation or black-body radiation using classical physics, Maxwell's equations, to describe the radiation. And then, in a nutshell-- we're just trying to establish basic ideas here-- one can try to treat a field statistical-mechanically by imagining not fields in empty space, but fields in some kind of a box.
In this case, it doesn't necessarily have to be the cavity that we're talking about. It could be a big box that just enclosed the system somehow to make it easier to talk about. And in the end, you could imagine taking that box to infinity, this theoretical box that you use to simplify the problem. But once you put the system in a box, then a field, like the electromagnetic field, can always be broken up into normal modes, standing wave patterns that have an integer or a half integer number of wavelengths inside the box. And no matter how complicated the field is inside the box, you could always describe it as a superposition of some set of standing waves. In general, it takes an infinite number of standing wave components to describe an arbitrary field-- that is, with shorter and shorter wavelengths. But you can always-- and this is Fourier's theorem-- you can always describe an arbitrary field in terms of the standing waves. And that's good for the point of view of thinking about statistical mechanics, because you could think about each standing wave almost as if it were a particle. It really is a harmonic oscillator. So if you think you know the statistical mechanics of harmonic oscillators, each standing wave in the box is just a harmonic oscillator, so simple. We now try to ask what is the thermodynamics of this system of harmonic oscillators. And the rule for harmonic oscillator is simple. Stat mech tells you that you have 1/2 kT per degree of freedom in thermal equilibrium. The energy of a system should just be 1/2 kT per degree of freedom. Having said that, all the complicate questions come about by asking ourselves what is meant by degree of freedom. But for the harmonic oscillator, that has a simple answer. A harmonic oscillator has two degrees of freedom-- the kinetic energy and the potential energy. So the energy of a harmonic oscillator should just be kT per degree of freedom. And we could apply that to our gas in the box and we could, ask how much energy should the gas absorb at a given temperature? What should be the energy density of the gas at a given temperature-- this gas of photons. But it was noticed in the 1900-- the 1800s that this doesn't work because there's no limit to how short the wavelengths can be. And therefore, there's not just some finite set of harmonic oscillators. There's an infinite set of harmonic oscillators where you have more and more harmonic oscillators at shorter and shorter wavelengths ad infinitum, no limit. And that came to be known as Jean's Paradox. So what it suggests is that if this classical stat mech worked-- which obviously it's not working. But if it did work, it would mean that as you tried to put a gas in just an empty box in contact with something at a fixed temperature, the box would absorb more and more energy without limit. And ultimately, it would presumably cause the temperature of the whatever is trying to maintain the temperature to go to 0 as energy gets siphoned off to shorter and shorter wavelengths of excitations. That obviously isn't the way the world behaves. We'd all freeze to death if it did. So something has to happen to save it. And it wasn't at all obvious for many years what it was that saves it. But this Jean's Paradox turns out to be saved by quantum mechanics. 
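For reference, the quantitative statement behind this rescue-- standard statistical mechanics, not written out explicitly in the lecture-- is that quantization changes the thermal average energy of each standing-wave mode of frequency nu from the classical equipartition value to the Planck form:

```latex
\langle E\rangle_{\mathrm{classical}} = kT
\qquad\longrightarrow\qquad
\langle E\rangle_{\mathrm{quantum}} = \frac{h\nu}{e^{h\nu/kT}-1}.
```

The quantum expression reduces to kT when h nu is small compared to kT, but is exponentially suppressed when h nu is large compared to kT, which is exactly the "freezing out" of short-wavelength modes described in the next paragraph, and it is what makes the total energy density finite.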
And the important implication of quantum mechanics is that the energy of a harmonic oscillator is no longer allowed to have any possible value, but is now quantized as some integer times h times nu, the frequency-- h being Planck's constant, nu being frequency, n being some integer where this integer might be called the excitation level of the harmonic oscillator. Depending how you choose your 0, you might have an n plus 1/2 there. But that's not important for us right now. It will be important later actually. But for now, we'll just allow ourselves to readjust the 0 and just think of it as n times h nu, or h bar omega. Now, this makes all the difference, statistical mechanically. One can apply statistical mechanics using basically the same principles to the quantum mechanical system. And the key thing now is that for the very short wavelengths, which are the ones that were giving us trouble-- the infinities came to short wavelengths. For the short wavelengths where nu is high, h nu is high. And it means that there's a minimum ante that you could put in to excite those short wavelength harmonic oscillators. And it's a large number. You either put in a large amount of energy or none at all. Quantum mechanics doesn't let you do anything in between. Now remember, the classical mechanics answer was that you have kT in each harmonic oscillator. And kT would be small compared to h nu, if we're talking about a very short wavelength. So the classic answer is just not allowed by quantum mechanics. You either have to put in nothing or an amount much, much larger than the classical answer. And when you do the statistical mechanics quantum mechanically, which is not a big deal really, you find that when you're confronted with that choice, the most likely answer is to put in no energy at all. So quantum mechanics freezes out these short wavelength modes. And then the n produces a finite energy density for a gas of photons. Yes? AUDIENCE: Seems like if you were to sum over like all the possible wave numbers, that-- well, so the energy is inversely related to wavelength, right? So even if you quantize it, like for large wavelengths, isn't the sum still like a sum of one over lambda wavelength, with its derivative? PROFESSOR: You're saying, isn't there also a divergence at the large wavelength n? AUDIENCE: Because that sum doesn't seem like it would work. PROFESSOR: Right. No, that's important. The reason that's not a problem is that if you're talking about the energy in a box, the wavelength can't be bigger than the box. The largest possible wavelength is twice the box so that half a wavelength fits in the box. If you're talking about the energy in the whole infinite universe, then we expect the answer to infinite. And it is. There's no problem with having infinite total energy if you want to have a finite energy density throughout an infinite universe. So the size of the box cuts off the large wavelengths. And quantum mechanics cuts off the small wavelengths. So in the end, one does get a finite answer for the energy density of black-body radiation. And that's crucial for our survival, crucial for the existence of the universe as we know it, and also crucial for the calculations that we're about to do. OK, so when one does these calculations initially for photons only, what we'd find is that the energy density is equal to a fudge factor, which I'm going to call g. And you'll see later why I'm introducing a fudge factor. For now, g is just 2. 
But later, we'll generalize the application of this formula, and g will have different values. But for now, we're dealing with photons. There's a factor of 2 there, but I'm going to write 2 as g, writing g equals 2 underneath. And then the pi squared over 30-- you can really calculate this-- times kT to the fourth power divided by h bar c cubed, h bar being Planck's constant divided by 2 pi and little k being Boltzmann's constant. So this is calculated just by thinking of the gas in a box as a lot of harmonic oscillators and applying standard stat mech to each harmonic oscillator, but you apply the quantum mechanical version of the stat mech to each harmonic oscillator. And you can also find, by doing the same kind of analysis, that the pressure is 1/3 the energy density, which we also derived earlier by different means. And it's all consistent so you get the same answer every time, even if you think about it differently. So here, I have in mind deriving it directly from the stat mech. You can also, from the stat mech, calculate the number density of photons in thermal equilibrium. And that will be equal to-- again, there's a factor of 2. But this time, I'll call the factor of 2 g star, where g star also equals 2 for photons. But when we generalize these formulas, g will not necessarily equal g star, which is why I'm giving it two names. And this g star multiplies zeta of 3, where zeta refers to the Riemann zeta function, which I'll define in a second, divided by pi squared times kT cubed divided by h bar c cubed. OK. OK, so I need to define this zeta of 3. It's 1 over 1 cubed plus 1 over 2 cubed plus 1 over 3 cubed plus dot dot dot. It's an infinite series. And if you sum up that infinite series, at least to three decimal places, it's 1.202. OK, then there's one other formula that will be of interest to us. And that'll be a formula for the entropy density. Now, if you've had a stat mech course, you have some idea of what entropy density means. If you have not, suffice it to say for this class that it is some measure of the disorder in the sense of the total number of different quantum states that contribute to a given macroscopic description. The more different microstates there are that contribute to a macroscopic description, the higher the entropy. And the other important thing about entropy to us besides that vague definition-- which will be enough-- but the important thing for us is that under most circumstances, entropy will be conserved. It's conserved as long as things stay at or near thermal equilibrium. And in the early universe as the universe expands, that's the case. So for us, the entropy of our gas will simply be a conserved quantity that we can make use of. And we will make use of it in some important ways. And we could write down a formula for the entropy density of photons. And it's g, where this g is in fact the same g as over there. It is related to the energy. So it's the same g that appears in the two cases, 2 in both cases for photons by themselves. And then there are factors that you can calculate-- 2 pi squared over 45 times k to the fourth T cubed over h bar c cubed. OK, this time, the numbers of k's and T's do not match. That's mainly due to the conventions about how entropy is defined. It's not really anything deep.
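A small numerical sketch of the three photon-gas formulas just quoted (energy density, number density, entropy density), using standard SI values of the constants; the temperature in the example is just an illustrative value, roughly the few-kelvin temperature relevant to the cosmic microwave background:

```python
# Photon-gas thermodynamics, following the formulas quoted in the lecture
# (g = g* = 2 for photons).  hbar, c, and Boltzmann's constant k come from
# scipy; zeta(3) is the Riemann zeta function.
import numpy as np
from scipy import constants as const
from scipy.special import zeta

hbar, c, kB = const.hbar, const.c, const.k

def energy_density(T, g=2.0):
    """u = g * (pi^2 / 30) * (k T)^4 / (hbar c)^3, in J/m^3."""
    return g * np.pi**2 / 30 * (kB * T)**4 / (hbar * c)**3

def number_density(T, g_star=2.0):
    """n = g* * zeta(3) / pi^2 * (k T)^3 / (hbar c)^3, in photons/m^3."""
    return g_star * zeta(3) / np.pi**2 * (kB * T)**3 / (hbar * c)**3

def entropy_density(T, g=2.0):
    """s = g * (2 pi^2 / 45) * k^4 T^3 / (hbar c)^3, in J/(K m^3)."""
    return g * 2 * np.pi**2 / 45 * kB**4 * T**3 / (hbar * c)**3

T = 2.725  # kelvin -- an illustrative temperature
u = energy_density(T)
print("energy density u :", u, "J/m^3")
print("pressure p = u/3 :", u / 3, "N/m^2")
print("number density n :", number_density(T), "photons/m^3")
print("entropy density s:", entropy_density(T), "J/(K m^3)")
```

At a temperature of a few kelvin this gives an energy density of order 10^-14 joules per cubic meter and a photon number density of order a few times 10^8 per cubic meter, which is the kind of number we will need when these formulas get applied to the universe.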
I might mention at this point that the 2's that I've been writing for everything-- g equals 2, g star equals 2-- the reason those 2's are written explicitly rather than just absorbing the factor of 2 into the other factors is that photons are characterized by the fact that there are two polarizations of photons. So if I have a beam of photons, they could be right-handed or left-handed. And anything else could be considered a superposition of those two. So there are two independent polarizations. And it's useful to keep track of these formulas as the amount of energy density per polarization. Thus, different kinds of particles will have different numbers of polarizations. So if we know the amount per polarization, we'll be able to more easily apply it to other particles. Yes? AUDIENCE: Sorry, just to kind of bring up the same question, if we-- I read that in the early universe, the temperature is constantly changing or it's cooling. PROFESSOR: Right. AUDIENCE: So then if the temperature is changing, then how can we say that the entropy is constant? PROFESSOR: OK, important question-- we'll be getting to it very soon. But since you asked the question, I'll ask it now. The question was if the universe is expanding, and the entropy density is going down because it thins, how can that happen-- I guess it was asked the other way around. If the temperature is falling, how can entropy be conserved if this is the formula for entropy density? And the answer-- when I tell you, you'll see it's obvious. We don't expect the entropy density to be conserved if the entropy is conserved. The entropy thins out as the universe expands. So if we just had a gas with nothing else changing, we expect the entropy density to go down like 1 over the scale factor cubed, just like the number density of particles. So if s is going to go down like 1 over the scale factor cubed, that would be consistent with this formula if the temperature also fell as 1 over the scale factor. So that to cubing it made things match. And that's what we'll find. The temperature falls off, like 1 over the scale factor. And that's consistent with everything that we said about energy densities and so on. OK. Next thing I want to talk about is neutrinos, which I told you earlier contributes in a significant way to the radiation energy density in the universe today. Neutrinos are particles which for a long time, were thought to be massless. Until around 2000 or so, neutrinos were thought to be massless. Now we know that in fact, they have a very small mass, which complicates the description here. It turns out that cosmologically, neutrinos still act as if they were massless for almost all purposes and for really all purposes that we'll be dealing with in this class, although if we were interested in the effects of neutrinos on structure formation, we'd be interested in whether or not the neutrinos have a small mass or whether it's smaller than that. We know it's non-zero. We don't know what the neutrino mass is, by the way. What we actually know from observations is that there are three types of neutrinos. And those are called flavors. And I'll use the letter nu for the word neutrino. And those three types of neutrinos are called nu sub e, called the electron neutrino; nu subbed mu, called the muon neutrino, and nu sub tau, called the tau neutrino. And these letters e, mu, and tau link to the names of particles. This is the electron neutrino. 
This is the muon neutrino connected to a particle called the muon, which is like the electron but heavier and different. And this is linked to a particle called the tau, which is also like the electron but much heavier, but otherwise similar in its properties. And the neutrinos are linked in the sense that when a neutrino is produced, depending on how you start, it is very typically produced in conjunction with one of these other particles. So an electron neutrino is typically produced in conjunction with an electron. And similarly, a muon neutrino is typically produced in conjunction with a muon. And a tau neutrino is typically produced in conjunction with a tau. Now, what does this have to do with neutrino masses? We've never actually measured the mass of a neutrino. So we only know that they have mass indirectly. What we have seen is one flavor of neutrino turn into another flavor. And it turns out, in the context of quantum field theory-- and I think this does make a certain amount of sense just by intuition-- that if a particle is massless, it can never change into anything. The process by which one changes into another is pretty quantum mechanical and a little hard to understand anyway. But if the particles were really massless, they would move at the speed of light. And if the particles were moving at the speed of light, if the particle had any kind of a clock on the particle, that clock would literally stop with the particle moving at the speed of light. So if particles are massless, any internal workings that that particle might have would have to be frozen. That is, if it's a clock, it has to be a clock that's stopped completely. And for reasons that are essentially that, although they can be made more formal and more rigorous, a truly massless particle could never undergo any kind of change whatever. It would have to stay exactly like it looks like to start with. Because it just has no time. So the fact that these neutrinos turn into each other implies that they must have a nonzero mass. They must not really be moving at quite the speed of light. And that's the way the formalism works. And we could set limits on the masses based on what we know about the transitions between one kind of neutrino and another. Yes? AUDIENCE: How do we explain photon decaying to an electron-positron pair then? PROFESSOR: A photon decaying to an electron-positron pair? AUDIENCE: Yeah. PROFESSOR: The answer is a free photon never does decay to an electron-positron pair. Photons can collide with something and produce an electron-positron pair. But that collision, that's a more complicated process. What I'm saying is-- I'm sorry. This process of conversion-- maybe I should have clarified-- happens just as the neutrinos travel. It's not due to collisions. Due to collisions, complicated things can happen whether the particle is massless or not. But a massless particle simply in transit cannot undergo any kind of transition. And these neutrinos are seen to undergo transitions simply being in transit without any collisions. In terms of my clock analogy and a stopped clock, I think the reason the photon can convert into electron-positron pairs if it collides with something is that when it collides, it essentially breaks the clock. You don't have a photon that's just moving along without time anymore. OK, so these neutrinos have masses. And maybe I should, at this point, write down some bounds on these masses. Delta m squared 21 times c to the fourth is equal to 7.50 plus or minus 0.2 times 10 to the minus 5 electron volts squared.
These numbers come from the latest particle data tables, which I gave references for in the notes. So this is the difference of the mass squared. And delta m 23 squared times c to the fourth, to turn it into the square of an energy, is 2.32 plus 0.12 minus 0.08 times 10 to the minus 3 electron volts squared. So one thing you notice immediately is that these are incredibly small masses. Remember, the proton weighs 938 MeV, 938 million electron volts. And these are fractions of one electron volt. So by the standards of particle physics, these are unbelievably small energies, unbelievably small mass differences. But they're there. They have to be there for the physics we know to make sense. The other thing that you may notice about this notation-- which I don't want to elaborate on but I'll just mention-- this is called 21. This is called 23. There's no 1, 2, or 3 there. There's an e and a mu and a tau. The complication here is something very quantum mechanical. The e, mu and tau labels are labels which basically label the neutrino according to how the neutrino is created. It turns out that the mass eigenstates-- states which actually have a definite mass-- are not the e, the mu or the tau. In fact, if the e had a definite mass, that would be saying that an e would just propagate as a particle with a certain mass. It would not convert into anything else. The fact that an e converts into other particles-- that a nu sub e converts into other particles-- is really the statement that nu sub e is not a state with a definite mass. But there are states with definite masses which could be expressed quantum mechanically as superpositions of these flavor eigenstates. So nu sub 1, nu sub 2 and nu sub 3 are states of neutrinos that have definite masses. And each one of them is a superposition of nu sub e, nu sub mu and nu sub tau. Yes? AUDIENCE: How come we don't have a delta m for 31? PROFESSOR: Good question. I think it's just the lack of knowledge. I don't think there's any reason it's not defined. I'm sure it is defined. I think it's just lack of knowledge. And if I knew more details about how these things were measured, I could give you a better story about that. But I don't, frankly. So in the end, it's a rather complicated quantum mechanical system which we're not going to go into any details about. What more should I say today? OK, let me just mention for today-- and we'll continue next time, after the quiz-- that for our purposes, we're going to treat these neutrinos as if they're massless. And it turns out that that's actually extraordinarily accurate from the point of view of cosmology, at least for the kind of cosmology that we're doing, where we're just interested in the effect of these neutrinos on the expansion rate of the universe. And treating them as massless particles, I will shortly give you the formulas for how they contribute to the black-body radiation. But I think there's no point in my writing them now. I'll just write them again at the beginning of next period. But they do contribute to the black-body radiation, and in a way that we actually know how to calculate. And they have a noticeable effect on the evolution of our universe. OK, that's all for today. Good luck on the quiz on Thursday. I'll be here to help proctor. I think Tim [INAUDIBLE] will be here too. And I'll see you more intimately either at my office hour tomorrow or at lecture a week from now.
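As a quick numerical aside on the two mass-squared differences quoted above (simply taking square roots of the central values), they imply that at least one mass eigenstate must satisfy the first bound below and at least one must satisfy the second:

```latex
m c^{2} \gtrsim \sqrt{7.50\times10^{-5}\ \mathrm{eV}^{2}} \approx 8.7\times10^{-3}\ \mathrm{eV},
\qquad
m c^{2} \gtrsim \sqrt{2.32\times10^{-3}\ \mathrm{eV}^{2}} \approx 4.8\times10^{-2}\ \mathrm{eV},
```

still tiny compared with the proton's 938 million electron volts mentioned above, which is the sense in which these masses are so extraordinarily small.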
MIT 8.286, The Early Universe, Fall 2013 -- Lecture 4: The Kinematics of the Homogeneous Expanding Universe
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: OK. I think it's time for us to start. Last time we talked about the Doppler shift and a little bit of special relativity. Today we'll be going on to talk more about cosmological topics. We'll be talking about kinematically how one describes a homogeneously-expanding universe like the one that we think we're living to a very good approximation. In that case, let's get started. What I want to do today is talk about some of the basic descriptive properties of the universe as we will describe it. The universe is, of course, a very complicated place. It includes you and me, for example, and we're pretty complicated structures. But cosmology is not really the study of all that. Cosmology is the study of the universe in the large, and we'll begin by discussing the universe on its largest scales in which you view approximated by a very simple model, which we'll be learning about. So in particular on very large scales, the universe is pretty well described by threes properties, which we will talk about one by one. The first is isotropy, and that just comes from some Greek root, which means the same in all directions. Now, of course, as we look around say the room here, the room doesn't look the same in all directions. The front of the room looks different from the back of the room. And looking towards Mass Ave looks different from looking towards the river, and looking further out into space, looking towards the Virgo cluster, which is the center of our local super cluster, looks rather different from looking in the opposite direction. But when one gets out to looking at things on the very large scale where in this case very large means on the scale of a few hundred million light years, things begin to look very isotropic. That is no matter what direction you look, as long as you're averaging over these very large scales, you find that you see pretty much the same thing. This becomes most emphatic when one looks at the cosmic background radiation, which is really the furthest object that we can look at. It's radiation that was admitted shortly after the Big Bang. The history of the cosmic background radiation in a nutshell is worth keeping in mind here. I'll refer to it as the CMB for cosmic microwave background. And in a nutshell, the things to keep in mind in thinking about this history is that until about 400,000 years after the beginning, the universe was a plasma, or maybe I should say more accurately that the universe was filled with plasma. And within a plasma, photons essentially go nowhere. They're constantly moving at the speed of light, but they have a very large cross section for scattering off of the free electrons that fill the plasma. And that means that the photons are constantly changing directions and the net progress in any one direction is negligible. So the photons are frozen with the matter, I'll say frozen inside the matter, which means that the net velocity relative to this plasma is essentially zero. But according to our calculations, and we'll learn later how to do these calculations, at about 400,000 years after the Big Bang, the universe cooled enough so that it neutralized and then it became a neutral gas like the air in this room. 
And the air in this room, you know, is very transparent to photons, and that means that light travels from my face to your eyes on straight lines and allows you to see an image of what my face looks like and vice versa, by the way. And it's a little dicey to extrapolate something from the room to the universe. The orders of magnitude are very different. But in this case, the physics actually ends up being exactly the same. Once the universe becomes filled with a neutral gas, it really does become transparent to the photons of the cosmic microwave background. So these photons have for the most part been travelling on perfectly-straight lines since 400,000 years after the Big Bang. And that means that when we see them today, we are essentially seeing an image of what the universe looks like at 400,000 years after the Big Bang. So at 400,000 years, gas neutralized and became transparent. This, by the way, has a name, which is universally what it is called in cosmology-- nobody actually understands why it's called this, by the way-- but the name is recombination. And the mystery is what the re is doing there, because as far as we know, the gas is combining for the first time in the history of the universe, but that's otherwise what everybody calls it. I did actually once ask Jim Peebles, who might be the person who first called it this, why it was called this, and he told me that this is what the plasma physicists called it, so it was natural to just pick up the same word when he was doing cosmology, so maybe that's how the word originated. But coming from the point of view of cosmology, it is a misnomer, in that for the theory that we're discussing, the prefix re here has absolutely no business being there. So what do we see when we look at the cosmic microwave background? We see that it is unbelievably isotropic. What we find is that there are deviations in the temperature of the radiation. The intensity is measured as an effective temperature. There are deviations in the temperature of the radiation of a fractional amount of about 10 to the minus 3, which is a very small number, but it's even stronger than that. This deviation of one part in 10 to the 3 has a particular angular pattern, and it's exactly the angular pattern that you would expect if the solar system were moving through the cosmic microwave background, and that's how we interpret this 10 to the minus 3 effect. Motion of the solar system through the CMB. And after removing the effect of the motion-- now actually, when we remove it, it's not like we have an independent way of measuring it. We don't really, not to enough accuracy. So we're really just fitting it to the data and removing it. But when we do the fit to the data, it's a three-parameter fit, that is, we have three components of a velocity to fit. We have a whole angular pattern on the sky, and we only have three numbers to play with. So it's strongly constrained even though we're using the data itself to determine what we think our velocity is relative to the cosmic microwave background. And after removing it, then what we find is that the residual deviations, delta t over t, are only at the level of about 10 to the minus 5, 1 part in 100,000, which is really unbelievably isotropic, unbelievably uniform. One time I decided to think about how round that is, how much the same in all directions it is, by asking myself the question, is it possible to grind a marble that would be spherical to an accuracy of 10 to the minus 5? And you can think about that yourself.
The answer I came up with was that yes it is, but it really strains the limits of our technology. It correspond to sort of the best technology we have for building highly-precise lenses basically fractions of a wavelength of light. So to round to 1 part in 10 to the 5 is really being unbelievable round, unbelievably isotropic. And that's the way the universe looks. Next item in our description of the universe is homogeneity. Homogeneity is harder to test with precision because it means looking out into space and trying to see, for example, if the density of galaxies is uniform as a function of distance. We always talked about as a function of angle, that's isotropy, and it's very uniform where one could make very precise statements about the cosmic microwave background. But to talk about homogeneity, one has to be able to talk about how the galaxy distribution varies with distance, and distances are very hard to measure cosmologically. So as far as we could tell, the universe is perfectly compatible with being homogeneous, again, on length scales of a few hundred million light years, but it's hard to make any very precise statement. There is, of course, relationships between isotropy and homogeneity. Homogeneity, by the way I didn't define that. I assumed you know what it meant, but I should definitely define it. Isotropy means the same in all directions. Homogeneity means the same at all places. So sometimes these are just put together and called uniformity because they are very similar concepts. They are, however, distinct concepts logically, and it is worth spending a little time understanding how they connect to each other, in particular how you can have one without the other is the best way to understand what they individually mean. So suppose, for example, we had a universe that was homogeneous but not isotropic. Is that possible, and if so, what would be an example of a feature that would be described that way? Let me throw it out to you. We want to be homogeneous, but not isotropic. Yes. AUDIENCE: It would be parallel universes like a cylinder pointing in a z direction, and I mean, matter is all homogeneous with a cylinder but there is preferred directions for isotropic. PROFESSOR: A preferred direction fixed by the direction of the periodicity? That is an example. That's right. That's right. Let me ask if there are other examples people could think of. Yes. AUDIENCE: There are galaxies everywhere with constant density, but they're all aligned in a particular direction. PROFESSOR: That's right. That's right. Galaxies have a shape, in particular they have an angular momentum. The angular momentum could be a line, and that would be an example of a universe that would be homogeneous but not isotropic. Very good. Very good. Another example that I'll just throw out, which I think maybe is simple to think about is the universe is filled with this cosmic microwave background radiation. suppose all the photons going in the z direction were more energetic than the ones going in the x and y direction. That would be a possible situation that could be completely homogeneous, but would be an example of something that would not be isotropic. So there are many examples you can come up with. I'm very glad you came up with the ones you did. That's great. Going the other way it's harder. Suppose we try to think of the universe that's isotropic but not homogeneous. Isotropic, by the way, does depend on the observer, so let's first talk about isotropic relative to us. 
I was just going to say, imagining a universe that would be isotropic relative to us, but would not be homogeneous. Yes. AUDIENCE: Could it be like if we lived in some shell? PROFESSOR: That's right. A shell structure. AUDIENCE: In all directions, the shell would be there. PROFESSOR: That's right. That's right. I think I'll even draw that on the blackboard. Example of isotropy without homogeneity. So we would be here, and the matter could be distributed in a perfectly spherically-symmetric distribution with us at the center. And that would be an example of something that would be isotropic to us but not homogeneous. Now, things like that, of course, are considered weird because we don't think of ourselves as living in any special place in the universe, and that's basically what the Copernican Revolution was all about. And the Copernican Revolution has sunk very deeply into the psychology of scientists. So I think scientists would be very loath to imagine a universe that looked like this, but it does help to understand what these words mean. If a universe is going to be isotropic to all observers, then it does have to be homogeneous, and that's part of the reason why we're pretty confident that our universe is basically homogeneous, because we just decided that it's isotropic to us, and if we decide we're not special, then it has to be isotropic to everybody, and then it has to be homogeneous. If the universe is isotropic to all observers, it is homogeneous. Now, a thought which I will leave for you to think about between now and the next lecture is whether or not really knowing that a universe is isotropic with respect to two observers is enough to prove that it's homogeneous. That turned out to be a more subtle question than it might sound. I don't know if it sounds subtle or not. I should maybe just tell you basically what the answer is, and then you can try to think if you can understand the answer. In Euclidean space, isotropy about two distinct observers is enough to make it homogeneous, which is kind of what you visualize. But if you can allow yourself to think about non-Euclidean spaces-- and I know we haven't talked about non-Euclidean spaces yet, so you might not have much in the way of tools to think about it. But think, for example, about surfaces in three dimensions. Surfaces are very good examples of non-Euclidean two-dimensional geometries. And see if you can invent a two-dimensional geometry that would be isotropic about two points, but would not be homogeneous. So that's your thought assignment for next time, not to be handed in, just to be talked about in the lecture next time. So isotropy and homogeneity are two of the key properties that define the simplicity of our universe on very large scales. The next thing I want to talk about is the expansion of the universe, which is basically characterized by Hubble's law. Last time I think I said I was going to call it the Lemaitre-Hubble law. I decided I'll probably call it Hubble's law. Now, Hubble, I think, really does deserve credit for demonstrating observationally that the law is true, and that's really what he is getting credit for, and that was not believed until he discovered it. So it really did have a tremendous effect on the course of cosmology. So Hubble's law says that on average all galaxies are receding from us with a velocity which is equal to a constant, H, called the Hubble constant-- Hubble called it K, by the way, capital K-- times the distance to the galaxy, r.
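A minimal sketch of why a homogeneous, uniform expansion forces exactly this linear form-- this is the theoretical relation that, as noted a little further on, Lemaitre already knew for a uniformly-expanding universe. The notation assumes a scale factor a(t) and fixed comoving positions x_i, the same scale-factor language used elsewhere in these transcripts:

```latex
\vec r_i(t) = a(t)\,\vec x_i
\quad\Longrightarrow\quad
\vec v_i = \dot a(t)\,\vec x_i = \frac{\dot a}{a}\,\vec r_i \equiv H(t)\,\vec r_i .
```

So every comoving observer sees every other one receding at a speed proportional to its distance, with the same H(t) for all of them; the linear law is the unique velocity field compatible with homogeneity and isotropy.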
And so it's not true exactly for our universe, but it's true in some average sense, just as isotropy and homogeneity are only true in an average sense. I want to tell you about the units in which it's measured, and that leads me to the parsec. Let me write this on the board. But astronomers always measure the Hubble constant-- or, as I will sometimes call it, the Hubble expansion rate-- in kilometers per second per megaparsec. And it's a relationship between a velocity and distance, so kilometers per second is velocity and velocity per megaparsec is the velocity per distance, which is what it should be. Notice, however, the way I wrote that. A kilometer and a megaparsec are both units of distance. So they actually just have some fixed ratio. So in the end, this Hubble constant really is just an inverse time, and obviously, if you multiply an inverse time times the distance you get a distance per time, which is the velocity, so that works. But it's very seldom quoted as simply an inverse time; instead it's quoted in the units that astronomers like to use. They measure velocities as a normal person would, in kilometers per second, but they measure distances in megaparsecs, where a megaparsec is a million parsecs, and a parsec is defined by that diagram. The base of this triangle is one astronomical unit, the mean distance between the Earth and the sun. And the distance at which the angle subtended by one astronomical unit is one second of arc is what's called a parsec, abbreviated pc. And a parsec is about three light years. I'll write these things on the board. One parsec equals 3.2616 light years, and a megaparsec is a million of those. Another useful number to keep in mind for converting: if you want to think of H as inverse years, then the useful equality is that 1 over 10 to the 10 years is equal to 97.8-- and it's suitable to remember this as being about 100; you can look up the exact number when you need it-- in these funny units, kilometers per second per megaparsec. So what is the value of Hubble's constant? It actually has a very interesting and historically-significant history. It was first measured in this paper by George Lemaitre in 1927, published only in French and ignored by the rest of the world, at the time at least. It got discovered later. And Lemaitre was not an astronomer. He was a theoretical cosmologist. I mentioned a few times, I think, that he had a PhD from MIT in theoretical cosmology-- in physics, in principle. And the value that he got, based on looking at other people's data, in 1927-- I guess actually, I'll give you the range. He gave two different methods of calculating it and got two slightly different answers. So we had 575 to 625 of these [INAUDIBLE] units, kilometers per second per megaparsec. And two years later, in his famous paper, Hubble got the value of 500 kilometers per second per megaparsec. I have a picture of Hubble too. Yes. AUDIENCE: That last line on the board right there, where you have 1 over 10 to the 10 years, is that H? PROFESSOR: That's just an equality of units. AUDIENCE: Quality of units. PROFESSOR: That's just the unit equality. It's relevant to H, because H is measured in those units. But it really is just an equality of units. 1 over 10 to the 10 years has units of inverse time, and kilometers per second per megaparsec has units of inverse time also, because kilometers per second is a distance per time and per megaparsec is an inverse distance.
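A quick numerical check of the unit identity on the board, and of what Hubble's 1929 value corresponds to as an inverse time; the conversion factors below (meters per megaparsec, seconds per year) are standard values and are the only inputs assumed:

```python
# Check that 1/(10^10 yr) is about 97.8 km/s/Mpc, and convert Hubble's
# 1929 value of H into the corresponding time scale 1/H.
km = 1.0e3        # meters in a kilometer
Mpc = 3.0857e22   # meters in a megaparsec
yr = 3.156e7      # seconds in a year

one_unit = km / Mpc                      # 1 (km/s)/Mpc expressed in s^-1

# 1/(10^10 years) expressed in km/s/Mpc:
print((1.0 / (1.0e10 * yr)) / one_unit)  # prints roughly 97.8

# Hubble's 1929 value, 500 km/s/Mpc, as an inverse time:
H_1929 = 500.0 * one_unit                # s^-1
print(1.0 / H_1929 / yr)                 # roughly 2e9 years
```

Since 1/H sets the characteristic time scale of the expansion, Hubble's original value corresponds to roughly two billion years, while the modern values near 70 kilometers per second per megaparsec correspond to something of order the 10 to the 10 years on the board.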
So both sides have the same units and the same dimensions, I should say, and it's just two different ways of measuring the same thing, inverse times. So in 1929, Hubble published his famous paper which he got the value of 500, and there's an important difference really between the papers by Lemaitre and Hubble. First of all, Hubble was using largely his own data. Lemaitre was using other people's data mostly Hubble's actually. And furthermore, Hubble made the claim that the data justified the relationship that v is equal to a constant times r. Lemaitre knew that relation theoretically for a uniformly-expanding universe, which we'll be talking about shortly. But he did not claim to be able to get it from the data. The data he had he decided was not strong enough to reach that conclusion, but he was still able to get a value for H by taking the average velocity dividing it by the average distance and got a number. I think I have Hubble's data next. Yeah, here's Hubble's data. The data obviously was not very good. It only goes up to about 1,000. One curiosity of this graph that you might notice is that the vertical axis is a velocity meaning it should be measured in kilometers per second, but nonetheless Hubble wrote it as kilometers. Not getting his units right, so minus 10 or something like that on the graded sheet. But somehow it did not stop the paper from getting published in the proceedings of the National Academy of Sciences and had become, of course, a monumentally-famous paper. But you can see that the data is scattered, and it has those nice lines drawn through which guide your eye, but if you imagine taking away the lines, it's not that clear on the data itself that it really is a linear relationship. But it's suggested, at least, and Hubble thought it was pretty convincing and later Hubble gathered more data for this project, and it did become quite convincing that there is a linear relationship, and today there's no doubt that there is a linear relationship between velocity and distance. At very large distances there's deviations, which we can understand and we'll be talking about later, but basically, at least for moderate distances, one has this linear relationship. I should mention that the velocity of the solar system through the CMB is also the velocity of the solar system through this pattern of Hubble expansion, and both Hubble and Lemaitre had to make estimates of the velocity of the solar system relative to these galaxies and subtract that out to get things that resemble a straight line. Lemaitre estimated the velocity of our solar system as 300 kilometers per second, and Hubble estimated it as 280 kilometers per second. So it was a relevant feature because remember, the maximum velocity there is only 1,000 kilometers per second, so the correction that he's putting in is about a third of the maximum velocity seen. So it's a very important and not that it was easy to determine. AUDIENCE: What were they using to determine the [INAUDIBLE] CMB? PROFESSOR: I think they were just looking for what they could assume that would make the average expansion in all directions about the same. To be honest, I'm not sure about that. But that's the only thing I can see that they would have, so I think that must be what they were using. Now since these ancient times, there have been many measurements of the Hubble expansion rate, and they changed a great deal. So in the '40s through '60s, there was a whole series of measurements dominated by people like Walter Baade and Allan Sandage. 
And generally speaking, the values came down steadily from the high values that were measured by Hubble and Lemaitre in the very early days. When I was a graduate student, if you asked anybody what the Hubble constant was, you always got the same answer. It was somewhere between 50 and 100, still uncertain by a factor of 2, but much lower, by a factor of 5 or 10, than the values that Hubble was talking about, and it was still a major source of uncertainty in talking about cosmology. Values started to become more precise around 2001. So in 2001, there was the Hubble Key Project that released these results. The word Hubble here refers to the Hubble satellite, which was named after Hubble-- Edwin Hubble. And they were able to use the Hubble telescope to see Cepheid variables in galaxies that were significantly further away than Cepheid variables had ever been seen before, and thereby make a much better calibration of the distance scale. As you'll learn about when you do your reading, Cepheid variables are crucial to determining the cosmological distance scale. So the value that they got was much more precise than anything previous, 72 plus or minus 8 of these units, kilometers per second per megaparsec. Meanwhile things were still controversial. I should have added that when people said it was 50 to 100 when I was a graduate student, it wasn't that people really understood the error bars to be that large. The real situation is that there was a group of astronomers that claimed adamantly it was 50, and there were other groups of astronomers that claimed adamantly that it was 100. Anyway, one person is shouting in your ear saying it's 100, another person is shouting in your ear saying it's 50, and the conclusion is that it's 50 to 100, and that's the situation when I was a graduate student. So this was a somewhat high value relative to that argument. The people who were arguing on the low side were still in business at this time, and still in fact also using Hubble telescope data. So Tammann and Sandage, the same year, using the same instrument-- let me put the year here, and it's 2001-- Tammann and Sandage were estimating 60 plus or minus, they said, less than 10%. So these didn't quite mesh. Coming to more modern times, in 2003, WMAP, the satellite called the Wilkinson Microwave Anisotropy Probe, a satellite dedicated to measuring these minute variations of the cosmic microwave background at the level of 1 part in 100,000-- it turns out that from those measurements one can estimate the Hubble expansion rate by fitting the data to a theoretical model. And their initial number was 72 plus or minus 5. And that was based on one year of data. And in 2011, the same WMAP satellite team, based on seven years of data, came up with a number of 70.2 plus or minus 1.4, so very precise. And the most recent number comes from a satellite similar to WMAP, but more recent and more powerful, called Planck, which just released its data last March. And it came up with a somewhat surprisingly low number, 67.3 plus or minus 1.2. Yes. AUDIENCE: Those early measurements are kind of consistent with one another, and then with the measurements in the mid-20th century there's this big jump down, suggesting those early guys were making the same kind of mistake. What was it? PROFESSOR: Good question.
The early guys were making a big mistake in estimating the distance scale, and I'm not sure I understand the details of that, but I think it had something to do with misidentifying Cepheid variables, equating two different types that should not have been compared with each other. But I'm not altogether sure of the details, but it was definitely the distance scale they had wrong. The velocities are pretty easy to measure accurately; it was the distances that they had very wrong. Yes. AUDIENCE: There's two types of Cepheids; one has a certain period-luminosity relation, and the other is like a completely different type of star, and so they got mixed up, and you get completely different absolute magnitudes, which will give you two completely different distance estimates. So I don't know by how far, but measuring Cepheids in Andromeda was way off the distance scale because we thought they were Type 1, but they were actually Type 2. AUDIENCE: I think the difference between Type 1 and Type 2 is a factor of 4, so that would make sense. AUDIENCE: Yeah. It's like two completely different linear relations. PROFESSOR: Intensity goes like 1 over the distance squared, so I think that, I mean, a factor of 4 in intensity would mean a factor of 2 in distance estimates. Yes. AUDIENCE: What I'm noticing is that these both have error bars, but they're not within error of each other. PROFESSOR: Right. AUDIENCE: Well, this is like current data. PROFESSOR: So what's going on? Nobody knows for sure. One thing I should mention though is that these are what are called 1 sigma error bars, which means that you don't expect them to necessarily agree. You expect the right answer to be within the one sigma error bar 2/3 of the time, but 1/3 of the time it should be outside the error bar. There are error bars on both, of course. But the comparison of this and this is usually viewed as something like a 2 and 1/2 sigma effect, which naively, I think, means a probability on the order of 1% or something like that of getting errors that large at random. And it's debated whether or not it's significant. The abstract of the Planck paper uses words something like, there's a tension between their value and other recent values. One does see things like that happen more frequently than the probabilities would indicate, which I think proves a theorem that experimenters always underestimate their error bars. But there's no absolute proof of that theorem. So these things are really debatable. People don't know-- there are many things that turn up in experimental physics, and especially in cosmology, that turn up regularly where people have different opinions about whether or not it's pointing to something very important or something that's going to go away. So very often they go away, that's a fact. But you never know in any one case whether it's something important that will become more definite as further measurements are made, or whether it's just a spurious effect that will disappear in a few years. Yes. AUDIENCE: So I imagine in the 1940s, when people started saying that Hubble, for whom the constant is named, was off by a factor of 10, that was very controversial. Was there any kind of sloping trend where people may have changed their data to make it seem, oh, we're not that far off Hubble's value? Has this happened a couple of times before? PROFESSOR: The question is, did people perhaps try to fudge their data during a period in the middle to make it look more like Hubble's.
I think, I don't know, and there were, as I said, pretty much through the middle of the 20th century two groups, one of which was getting a high value, and one of which was getting a low value. The high value is where, in fact, disciples of Hubble, rather directly-- wait a minute. That's not right. The most direct disciple of Hubble was Allan Sandage, and he was, in fact, advocating the low value. So the sociological trends are not that clear. What is clear is that they were way off. I was going to add, concerning the way-offness, that it really does have, or did have, a very significant effect on the history of cosmology, because when one looks at a Big Bang model and tries to use that model to estimate when it all started, what you're doing is you're trying to extrapolate backwards, asking when was everything on top of each other given the speeds that things are moving at now. There is more that goes into the calculation than just H. It depends on your model, the matter, and things like that. But nonetheless H is obviously a crucial ingredient there. The faster things are moving now outward, when you extrapolate backwards, the faster they're moving inward and the younger the universe is. And to a very good degree of reliability, any age estimate-- and we'll make age calculations later-- but any age estimate is proportional to 1 over the Hubble parameter, 1 over the Hubble expansion rate. So if you're off by, now we would say, a factor of 7 between Hubble's value and the current value, 70 versus 500-- if you're off by a factor of 7, you get ages for the universe which are a factor of 7 smaller than what you should be getting. And this was noticed early on. People were calculating ages of the universe in Big Bang models and getting numbers like 2 billion years instead of 14 billion years, a factor of 7. And even back in the '20s and '30s, there was significant geological evidence that the Earth was much older than 2 billion years, and people understood something about the evolution of stars, and it seemed pretty clear that the stars were older than 2 billion years, so you couldn't tolerate a universe that was only 2 billion years old. And it led to very significant problems with the development of the Big Bang theory, and in particular, it certainly gave extra credence to what was called the steady state theory, which you may have heard of, which held that the universe was infinitely old, and as it expanded, more matter was created to fill in the gaps so the density of matter would be constant. And Lemaitre himself, in his 1927 paper, built a very complicated, by my standards, theory in order to get the age to be compatible. His 1927 model was not a Big Bang model. His 1927 model started out in a static equilibrium where he had a positive cosmological constant, which produces a repulsive gravity, like what we talked about in my opening lecture, balancing against the normal attractive gravity of ordinary matter, producing what was almost a static universe of exactly the type that Einstein had been advocating. But Lemaitre's universe started out with just slightly less mass density than Einstein would have had, so it gradually started to get bigger and bigger.
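Staying with that factor of 7 for a moment, here is a minimal sketch of the arithmetic, my own illustration with the same standard conversion constants as before; the true age also depends on the model, as just noted, but it scales like 1 over H.

```python
KM_PER_MPC = 3.0857e19      # kilometers in one megaparsec (standard value)
SECONDS_PER_YEAR = 3.156e7  # seconds in one year (approximately)

def hubble_time_in_years(H_km_s_Mpc):
    """Hubble time 1/H in years; Big Bang age estimates scale like this."""
    H_per_second = H_km_s_Mpc / KM_PER_MPC
    return 1.0 / H_per_second / SECONDS_PER_YEAR

print(hubble_time_in_years(500.0) / 1e9)  # ~2 billion years (Hubble's 1929 value)
print(hubble_time_in_years(70.0) / 1e9)   # ~14 billion years (a modern value)
```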
The force of ordinary gravity wasn't quite enough to hold it together, but when it did that, it started to get bigger and bigger very slowly initially and then picked up speed, and that allows you to have universes that are much older than what you would get in a straightforward Big Bang model. Let's go on. What I want to talk about next is what this Hubble expansion is telling us about the universe, and I want to go through this a little bit carefully because it's a very important point, even though it's possible you've already figured it out from the reading. I don't know for sure. Naively, Hubble's law makes it sound like we're saying that we are the center of the universe after all. Copernicus was really wrong. Everything is moving away from us, so we must be the center. But that's actually not the case. It turns out that when you look at things a little bit carefully, and that's what we'll do in this diagram, if Hubble's law looks like it holds to one observer, it in fact also looks like it holds to any other observer, as long as you recognize that there's no way to measure absolute velocity. So we think that we're at rest, but that's really just our definition of the rest frame. If we lived on some other galaxy, we would equally well attribute the state of being at rest to that other galaxy. And that's what's being shown in this picture, which I Xeroxed from Steve Weinberg's book, so this might seem familiar to you if you've read that chapter already. It shows just expansion in one direction, but that's enough to illustrate the point. In the top diagram we imagine that we are living on the galaxy labeled A. The other galaxies are moving away from us with velocities proportional to the distance, and we've spaced these galaxies in the diagram evenly, so the nearest galaxies are moving away at v, and then the next ones are moving away at 2v. And if we continued, it would be 3v, 4v, et cetera, all the way out to infinity. And what we want to do in going from A to B is to ask, suppose we were living in exactly this universe as described on line A. But suppose we were living in galaxy B and considered galaxy B to be at rest. So we'd describe everything from the rest frame of galaxy B. Then galaxy B would have no velocity, because that would define the rest frame. When you change frames-- this was all done in the context of Galilean transformations. We'll build more relativistic models later. In the context of the Galilean transformation, if you go from one frame to a frame moving at a constant velocity, the only thing you have to do to transform velocities is to add to each velocity a fixed velocity, the velocity difference between the two frames. So to go from the top to the bottom picture, what we do in all cases is just add a velocity, v, to the left, to each velocity, and that takes the galaxy which had velocity v to the right-- when we add a v to the left, we get 0. It does the right thing there, which is what defines the transformation we're trying to make. We're trying to make the transformation that brings B to rest. And that means that when we add v to the left to the velocity of Z, where we already had a v to the left, we get 2v to the left. When we add v to the left to Y, which had 2v to the left, we get 3v to the left. Going the other direction, when we add v to the left to C, which had a velocity 2v to the right, we're left with a velocity of 1v to the right, and that gives us what we have on the second row.
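The velocity bookkeeping just described can be summarized in one line. Assuming Galilean velocity addition, as in the lecture, if observer A sees every galaxy obeying $\vec v_i = H\,\vec r_i$, then observer B, who subtracts off B's own position and velocity, sees
$$\vec v_i - \vec v_B \;=\; H\vec r_i - H\vec r_B \;=\; H\,(\vec r_i - \vec r_B),$$
which is Hubble's law again, with the same $H$, now centered on B.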
And if we look at the second row from the point of view of B, the galaxies one away on either side are each moving away from us with a velocity v, the galaxies two away are moving away from us with velocities 2v, et cetera. That's exactly the same. So even though the Hubble expansion pattern is phrased in a way that makes it look like you're talking about yourself as the center of the universe, in fact it does describe a completely homogeneous picture. And it's a picture that, in fact, has a very simple description. It's a picture of just uniform expansion, and I think I have my favorite, at least the best, picture I've ever drawn of uniform expansion on the next slide here. The idea is that if you look at some region of the universe, the claim-- and the claim is just called homogeneous expansion-- is that each picture at successive times would look identical, but it would look like a photographic blowup. Each picture would just be a bigger image of the same picture, with one important exception, and I did try to draw this correctly: the positions of the galaxies-- this little blob there is supposed to be a galaxy, by the way, in case you can't tell my great artistry. The positions of the galaxies, the pattern of positions, just expand uniformly, but each individual galaxy does not expand. The individual galaxies maintain their size as the universe undergoes this Hubble expansion. Now if we're talking about the very early universe, before there were any galaxies, you would just have basically a uniform distribution of matter, of gas, and that would just uniformly expand, every molecule moving away from every other molecule on average. So this is the picture of Hubble expansion. And now what I'd like to do is provide a description of how we're going to treat this mathematically. If we have this uniformly-- I'm sorry. Yes. AUDIENCE: I'm still getting confused whether like the expansion is the galaxies expanding into the universe or if the universe itself is expanding. PROFESSOR: Yeah. The question, in case you didn't hear it, was there's some confusion here about whether we should think of the galaxies as moving through space or whether we should think of space itself as expanding. And the answer really is that both points of view should be right. If space were like water, then you could imagine putting little dust in the water, little grains of salt or something you can see, and seeing whether they are carried along by the water or not. But there's no way to mark space. It's intrinsic to the principle of relativity that you can't tell if you're moving relative to space or not. There's no meaning to moving relative to space. And there's no meaning for you to move relative to space. There is also no meaning for space to move relative to you. They amount to the same thing. So you can't really tell, and both points of view should be correct. There are cases where you can tell, however, not locally, but if, for example, you had a closed universe, which we'll talk about later how that works exactly, then you could ask, does the volume of a closed universe get bigger with time as this Hubble expansion takes place. And the answer there is yes. AUDIENCE: That would mean actually, the universe is expanding. PROFESSOR: The actual universe is expanding. So we will normally think of it, globally at least, as the actual universe expanding. That is how we will think about it. But locally, there's not really any distinction between that and saying that these galaxies are just moving through space. Yes.
AUDIENCE: So given that the galaxies aren't actually just points, why can it be claimed that the galaxies themselves are not expanding? PROFESSOR: OK. How do we understand the fact that the galaxies themselves are not expanding is what you're asking. And I'll give you a nutshell answer, and we might be talking about it more later. One should imagine that this starts out shortly after the Big Bang as an almost perfectly uniform gas, which is just uniformly expanding, everything moving away from everything else. But the gas is not completely uniform. It has tiny ripples in the matter density, which are the same ripples that we see in the cosmic microwave background radiation today-- or at least the ripples that we see in the cosmic background radiation are caused by the ripples in the mass density of the early universe. These ripples eventually form galaxies because they're gravitationally unstable. Wherever there's a slight excess of mass, that will create a slightly stronger gravitational field in that region, pulling in more mass, creating a still stronger gravitational field, pulling in more mass, and eventually, instead of having this nice uniform distribution with just ripples at 1 part in 100,000, you have huge clumps of matter which are galaxies. And as you go through this transition, from things being almost completely uniform and uniformly expanding to these lumps that form galaxies, the lumps are being formed by extra gravity pulling in the matter. And what happens is that the extra gravity that forms the galaxy overcomes the Hubble expansion. The matter that makes up the galaxy had been expanding in the early days, but the gravitational pull of the matter that forms the galaxy pulls it back in. So the galaxy actually reaches a maximum size and then, in fact, starts to get smaller, and then reaches equilibrium, an equilibrium where the rotational motion keeps it at a finite size. Yes. AUDIENCE: So in the diagrams that you're showing up here, all the distance relations between galaxies are being preserved. Is that approximate or is it exact [INAUDIBLE]? PROFESSOR: Well, yeah. It's supposed to be just a photographic blow-up as far as where the locations of the dots are. Yeah. I mean is that what you're asking? AUDIENCE: Well, like, will there be equal distances between galaxies as a [INAUDIBLE]? PROFESSOR: Yeah. I think the picture shows that, doesn't it? AUDIENCE: Well, the notches basically are spaces between [INAUDIBLE]. PROFESSOR: Oh, that's right. That's right. I haven't talked about the notches yet. The diagrams are supposed to show actual physical distances. So the physical distance between this galaxy and this galaxy is a little bit there and much more there. So that's how you're supposed to interpret that picture. But what I was about to get to, and you've got there so I'll continue, is that the best way to describe this uniformly-expanding system is to introduce a coordinate system that expands with it, and that's what these notches are. The notches are artificial things that we create, and we could think of them as just being labels on a map. Once we know that the expansion is uniform this way, we could take any one of these pictures and think of it as a map of our region of the universe. And we can then get to any other picture on the slide simply by converting units on a map to physical distances with a different scale factor.
So if Massachusetts was forever getting bigger and bigger and we had a map of Massachusetts, we would not have to throw away that map every day and buy a new map. We could handle the expansion of Massachusetts keeping the same map, just crossing out the place in the corner of the map where it says 1 inch equals 7 miles, and the next day 1 inch equals 8 miles, and 1 inch equals 9 miles, and 1 inch equals 10 and 1/2 miles. So by changing the scale factor on the map-- and the word scale factor here has exactly the same meaning as it would have for a map-- you can allow yourself to describe an expanding system without ever throwing away the original map. And that's the kind of coordinate system that we will be using, and these are called comoving coordinates. And the idea here is that galaxies are at-- I'm sticking in the word approximately here because none of this is exact, but we'll be thinking of it in a toy model as if it were exact. So galaxies are at approximately constant values of the coordinates, and the scale factor, which means the physical distance per coordinate distance, increases with time. So that describes this all-important comoving coordinate system that we'll be using for the rest of the course to describe the expanding universe. Yes. AUDIENCE: Do we have to do anything funny to the Lorentz transform to account for the fact that the coordinates are now not moving at the same velocity relative to each other? PROFESSOR: It depends on what questions you ask. There are questions where you do have to think carefully, and we'll have one of those shortly as probably an extra credit problem. But for most things, it actually makes things very simple, and you can ignore most of the complications of special relativity. And we'll try to think as we go along where we need to worry about special relativity and where we don't, and usually we don't. So the key relationship is that the physical distance between any two points on the map-- by physical distance, I mean what it is in the real world, miles if we're talking about Massachusetts, and this is miles between the real physical points-- is equal to a time-dependent scale factor times the coordinate distance. Now, here I'm going to use conventions for defining things that are slightly different from what are often used. A common procedure, which I think is what's done in most of the books, is to think of both the coordinate distance and the physical distance as being measured in normal distance units, meters, and then the scale factor is dimensionless, and it just tells you how much you have to blow up the map to be able to make it the physical map. I find it significantly easier to know what you're doing, as things go along, to label the map in units that are not centimeters, but are what's shown on the picture as notches. One advantage of that, logically, is it means that if you have different copies of the map that you've made on a Xerox machine with different scalings-- you have a big copy of the map and a little copy of the map-- they're each marked off with notches. The notches grow with the physical size of the page, so the scale factor is the same no matter which copy of the map you're using. But most importantly, it allows you to, I think, do dimensional tests. The size of the map is clearly something that's unrelated to the actual length of a meter; it's just how many units you put on your map. So there's a clear and logical separation between what is meant by a certain number of units of distance here and a real meter by any standard.
So you can keep that straight by just imagining that your map is calibrated in some new arbitrary unit which is just special to the map, and I'm going to call that a notch. So notches are just arbitrary units that you use to mark off your map. And then the physical distance is, of course, measured in meters or any other standard unit of distance. And then the scale factor is measured in units of meters per notch instead of being dimensionless. And the basic advantage of this is that when you're all done, nothing had better have any notches in it if you're calculating something physical-- anything physical should not depend on the size of your map. So you have a nice dimensional check to make sure that the notches drop out of any physical calculation that you try to do. What I want to do next is to show you that this relationship leads to Hubble's law, and furthermore will allow us to figure out what this Hubble expansion rate is in terms of what the scale factor is doing. So it's an easy enough calculation. If we're looking at some object out there, and its physical distance l sub p is given by that formula, and we want to know what its velocity is-- its velocity is, by definition, just the time derivative of that quantity. So the velocity of some object, some distance out in space, is just equal to d dt of l sub p of t, and that will be da dt times l sub c, because l sub c is constant. On average, all our galaxies are at rest in this coordinate system, in this expanding coordinate system. Now this could be written in a way that ends up being more useful by dividing and multiplying by a. So I could write it as 1 over a times da dt, times a of t times lc. And the advantage of multiplying and dividing by a this way is that this quantity is again just l sub p, the physical distance. So now we say that the velocity of any distant object is equal to 1 over a da dt times the distance to that object. And that is Hubble's law, and it tells us that the Hubble expansion rate, which is itself going to be a function of time, is equal to 1 over a times da dt. And this allows us to illustrate the unit check that I talked about for the first time. Notice that a is measured in meters per notch, so the meters per notch here cancels the meters per notch here, and you just get inverse time, and the really important thing is that the notches have gone away. And again, notches have to go away in any calculation of anything physical, and that makes a nice check. And once you know how a of t is behaving, you know exactly how the Hubble expansion rate behaves. It's determined by a of t. I might mention one notational item. Nowadays almost everybody calls this scale factor little a. In the early days, it was first introduced by Alexander Friedmann, who first invented the equation describing the expanding universe in the early 1920s. He used the letter R, capital R. Lemaitre also used the letter capital R, and I guess Einstein probably used the capital R. I'm not sure. And going into more modern times, Steve Weinberg wrote a book on gravitation and cosmology which still used the letter capital R, but that was sort of the last major work that used the capital R for the scale factor. The disadvantage of it is that this same capital R means something else in general relativity. It's the standard symbol for what's called the scalar curvature in general relativity. So to avoid confusion between those two quantities, nowadays almost everybody calls the scale factor little a.
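Written out in standard notation, the blackboard calculation just described, with $\ell_c$ the fixed coordinate distance in notches, is
$$\ell_p(t) = a(t)\,\ell_c,\qquad v \equiv \frac{d\ell_p}{dt} = \frac{da}{dt}\,\ell_c = \left(\frac{1}{a}\frac{da}{dt}\right) a(t)\,\ell_c = H(t)\,\ell_p(t),\qquad H(t) \equiv \frac{1}{a}\frac{da}{dt},$$
and since $a$ carries meters per notch, the notches cancel in $H$, leaving an inverse time.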
If you look at old 8.286 notes, I used to follow Steve Weinberg's textbook on gravitation and cosmology and call it capital R, but now it's hopefully all switched to little a. OK. Next item. If we're going to understand what it would look like to live in a universe like this, we're going to need to know how to trace light rays through our expanding universe. And that turns out to be easy. If I let x be equal to a coordinate, that means it's measured in notches, and if I imagine I have a light ray moving in the x direction, I can describe how that light ray is going to move if I can write down a formula for dx dt, which tells me how fast the light ray travels in the coordinate system. Well, the basic principle that we're going to use here is that light, in fact, always travels at the speed of light, some fixed value c, but c is the physical velocity of light, the velocity measured in meters per second. But dx dt is the velocity measured in notches per second, because our coordinate system is marked off not in meters, but in notches. And that really is very important, because meters are going to be constantly changing length relative to our notches, and we want to keep track of things in notches so we have a nice picture within our coordinate description of the universe that we can think about. So we're going to want to know what dx dt is, but it's just a unit conversion problem. dx dt is the speed of light in notches per second. We know the speed of light in meters per second, c. So to convert is just a matter of using the scale factor to convert the units of meters to notches. And here again it helps to have this meters versus notches, because it guarantees that you can't get it wrong if you just check your units. So this is not really a question mark. It is just c divided by a of t, the scale factor. And we can make sure we got it right by checking our units. I'm going to use brackets to indicate units of. So we're going to work out what the units are of c over a of t-- trivial problem, of course, but we'll make sure we got the right answer. The units of c are, of course, meters per second. a of t, we said, is meters per notch. So the meters cancel, and we get notches per second. Now, I told you that you should never get notches in a physical answer, but this is not a physical answer. This is a coordinate velocity of light, so it does depend on what coordinates we've chosen. So it should certainly be notches per second, because x is measured in notches and t is measured in seconds. So we put the a of t in the right place. It does belong in the denominator and not the numerator. Yes. AUDIENCE: Why aren't we worrying about the fact that as the universe expands, there's also a velocity component for the light ray from its position moving according to the Hubble expansion? PROFESSOR: Right. The reason we don't worry about that is that special relativity tells us that all inertial observers are equivalent, and that the speed of light does not depend on the cannon that the light beam was shot out of. So if I'm at rest in this expanding coordinate system, I'm not really an inertial observer, because there is gravity in this whole system, but we're going to ignore that. If we're really being rigorous here, we have to do the full general relativity thing. But I think the intuitive explanation is pretty obviously valid. It is, in fact, rigorously valid. If I'm standing still in this expanding coordinate system, I am an inertial observer.
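As a small numerical aside on the rule dx dt equals c over a of t, here is a sketch in Python, my own toy example rather than anything from the lecture, for a flat matter-dominated model with a of t proportional to t to the 2/3; the normalization b, the meters per notch, is made up, and the point of the example is that it cancels out of the physical distance at the end.

```python
import numpy as np

# Toy model: flat, matter-dominated universe, a(t) = b * t**(2/3) meters per notch.
# b is an arbitrary "size of the notch"; it must drop out of any physical answer.
c = 3.0e8   # speed of light, m/s
b = 1.7     # arbitrary meters-per-notch normalization (made-up number)

def a(t):
    return b * t**(2.0 / 3.0)

# Coordinate distance traveled by light emitted at t_e and observed at t_o:
# x = integral of (c / a(t)) dt, in notches.
t_e, t_o = 1.0e12, 1.0e17          # seconds, illustrative times only
t = np.linspace(t_e, t_o, 200000)
x = np.trapz(c / a(t), t)          # notches

# Physical distance of the source today is a(t_o) * x, in meters;
# b cancels between a(t_o) and the 1/a inside the integral.
print(a(t_o) * x)
```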
And if a light beam comes by me, I should measure its speed to be c, no matter where it started, no matter what's happened in the past. So the conversion between my units of distance and physical units of distance, my coordinate distances and physical distances, is just a of t. So that's the only factor that appears, and this is completely rigorous. One can derive this in a more general context in general relativity. Here we're starting out with the premise that the light pulse travels at speed c; if one had the full theory of general relativity coupled to Maxwell's equations, we could really derive exactly how light rays travel, and it would give us this. So we have a very simple equation for how light rays travel. Now I want to spend a little bit of time, and this, I guess, will be the last topic I'll talk about today, discussing the synchronization of clocks in this world system, in this cosmological coordinate system. In special relativity, you know that it's hard to talk about synchronizing clocks at large distances. The synchronization of clocks depends on the velocity of the observer. That was one of the principles we learned about when I wrote down those three kinematic properties of special relativity. So in general, in the context of special relativity, there is no universal way of synchronizing clocks. You could synchronize clocks with respect to one observer, but then they would not be synchronized with respect to another observer moving with respect to that observer. In this case, we have perhaps even a further complication, although in the end everything is simple, but we have the further complication that the different clocks that we're talking about, which are clocks carried by these observers that are stationary in our comoving coordinate system-- clocks carried essentially by galaxies that are uniformly expanding-- all these clocks are moving relative to each other, because the Hubble expansion tells us that. So the notion of trying to synchronize clocks seems a bit formidable. It turns out, however, that we can synchronize clocks, and one can develop a notion of what we call cosmic time, which is the time that would be read on all these clocks-- where by all these clocks, I mean all the clocks that are stationary with respect to the local matter, in other words stationary with respect to this comoving but expanding coordinate system. So why can we synchronize clocks? What we're using as our core assumption, which is what makes everything simple, is that the model universe that we're building is homogeneous, and that means that what I would see if I was living in this universe would not depend on where I was. So if I were living on galaxy number one and took out my stopwatch and timed how long it took before, say, the Hubble parameter changed from Hubble's value to the current value-- just as an example, any two numbers-- then if I were living any place else and timed the same thing, how long it took for the Hubble expansion rate to change from A to B, I would have to get exactly the same time interval; otherwise, it would not be homogeneous. Homogeneous means everybody sees exactly the same thing. So we all have, no matter where we live in this universe, a common history, and that means that the only thing we don't know is how to synchronize our clocks-- what time on my watch might correspond to what time on your watch.
But if we imagine that we could send signals, or that we're some global observer watching the whole thing, then we could just tell each other, let's all set our clocks to noon when the Hubble expansion rate is 500 kilometers per second per megaparsec. And then we would have a well-defined synchronization. And once we synchronized our watches that way, if we each looked at how the Hubble expansion rate changed with time, we would get exactly the same function of time, by this principle of homogeneity. None of us could see anything different as long as we're measuring time intervals, and we've fixed it so now we're measuring nothing but time intervals, because we've arranged to all set our clocks to the same time at some particular value of the Hubble parameter. So to synchronize, we can ask what the options are. I mentioned the Hubble parameter. That's certainly one parameter that can be used in principle to synchronize clocks throughout our model universe. You might wonder if we can use the scale factor itself to synchronize times. And the answer there, I would say, is no, because of this ambiguity of the notch. I have no way of comparing my notch to your notch. We can compare physical distances because they're related to physical properties-- the size of a hydrogen atom is a certain physical size, no matter where it is in this universe. So we could use hydrogen atoms to measure meters, and we would all be talking about the same meters. And we could use those meters to define seconds, by how long it takes light to travel through a meter, and so on. So meters and seconds we can all agree on, because they're related to physical phenomena that we can all see and that will be the same everywhere in our homogeneous model universe. But notches, not so; everybody gets to make up his own notch. It's just the size of the map he happens to draw. So we cannot compare scale factors and say, we'll set our clocks to a certain time when both of our scale factors have a certain value. We would get different synchronizations depending on what choices we've made about how to define a notch. So the scale factor does not work as a synchronization mechanism, but the Hubble expansion rate does. Also, we haven't really talked in detail about the cosmic microwave background, but we've certainly talked about it; the cosmic microwave background has a temperature which is falling as the universe expands, so that could be used to synchronize clocks as well. I might mention, in the last 30 seconds, one interesting phenomenon. For our universe, the Hubble expansion rate is changing with time, the temperature is changing with time. There's no problem talking about this synchronization. But if you're talking about different kinds of mathematical models of universes, you can imagine a universe where the Hubble expansion rate is just constant, and in fact that is a space that was studied very early in the history of general relativity. It's called de Sitter space. And it's approximately what happens during inflation, so we'll even be talking about de Sitter space later in the course. In de Sitter space, the Hubble constant is absolutely constant, so at least one of the mechanisms I mentioned to synchronize the clocks goes away. There's also, in fact, no cosmic microwave background radiation in pure de Sitter space, so that goes away. You could ask, is there anything else? It turns out there is not, so you really can construct a well-defined model of the universe, the so-called de Sitter space, where there really is no way of synchronizing clocks.
And you can really show that if you synchronize the clocks one way, you could make a symmetry transformation that takes all those clocks out of synchronization, and the space would otherwise be just as good as what you started with. So the synchronization is subtle, and it depends on having something which actually changes with time, but that will be the case for our real universe and for the model universes that we'll be talking about. So I'll stop there. See you folks on Thursday.
MIT 8.286 The Early Universe, Fall 2013: The Dynamics of Homogeneous Expansion, Part V
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: A quick review of what we talked about last time. So the first thing we did last time was to discuss the age of the universe, considering so far, at this point, only flat matter-dominated universes, where the scale factor goes like t to the 2/3. And we were easily able to see that the age of such a universe was 2/3 times h inverse. And we did discuss what happens if you plug in numbers into that formula using the best current value of h, the value obtained by the Planck team last March. The age turns out to be about 9.5 to 9.9 billion years. And that can't be the real age of the universe, we think, because there are stars that are older than that. The age of the stars seems to indicate that the universe should be at least, according to one paper I cited last time, 11.2 billion years old, and this is younger. So the conclusion is that our universe is not a flat, matter-dominated universe. We do in fact have good evidence that the universe is very nearly flat, so it's the matter-dominated part that has to fail, and it does fail. We also have good evidence that the universe is dominated today by dark energy, which we'll be talking about later. But one of the pieces of evidence for this dark energy is this age calculation. The age calculation just does not work unless you assume that the universe has a significant component of this dark energy, which we'll be discussing later. We then talked about the big bang singularity, which is an important part of understanding, when you talk about the age, what exactly you mean by the age-- age since when. And the point that I tried to make there is that the big bang singularity, which gives us mathematically the statement that the scale factor at some time which we call zero was equal to zero-- and if you put that back into other formulas, you discover that the mass density, for example, was infinite at this magical time that we call zero-- that singularity is certainly part of our mathematical model, and doesn't go away even when we make changes in the mathematical model. But we don't really know if it's an actual feature of our universe, because there's certainly no reason to trust this mathematical model all the way back to t equals zero, where the densities become infinite. We know a lot about matter, and we think we can predict how matter will behave, even at temperatures and energy densities somewhat beyond what we measure in the laboratory. But we don't think we can necessarily extrapolate all the way to infinite matter density. So these equations do break down when you get very close to t equals zero, and nobody really knows exactly what should be said about t equals zero. When we talk about the age, we're really talking about the age since the extrapolated time at which a would have been zero in this model, but we don't really know that it ever actually was zero. We next discussed the concept of the horizon distance. The universe-- at least the universe as we know it; we want to be agnostic about what happened before t equals zero-- but certainly the universe as we know it really began at t equals zero, in the sense that that's when structure and complicated things started to develop.
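As a quick check of the arithmetic in the age just quoted, here is a minimal sketch, my own, using standard conversion constants: for a flat, matter-dominated universe t = (2/3)/H, and the Planck value of H gives an age near the quoted 9.5 to 9.9 billion years, the spread coming from which measured value of H you plug in.

```python
KM_PER_MPC = 3.0857e19      # kilometers per megaparsec (standard value)
SECONDS_PER_YEAR = 3.156e7  # seconds in one year (approximately)

def flat_matter_dominated_age_gyr(H_km_s_Mpc):
    """Age t = (2/3) / H for a flat, matter-dominated universe, in Gyr."""
    H_per_second = H_km_s_Mpc / KM_PER_MPC
    return (2.0 / 3.0) / H_per_second / SECONDS_PER_YEAR / 1e9

print(flat_matter_dominated_age_gyr(67.3))   # ~9.7 Gyr, too young for the oldest stars
```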
So since t equals zero there's been only a finite amount of time elapsed. And since light travels at a finite speed, that means that light could only have traveled some finite distance since the big bang, since t equals zero. And that means that there's some object which is the furthest possible object that we could see, and any object further than that would be in a situation where light from that object would not yet have had time to reach us. And that leads to this notion of our horizon distance, where the definition of the horizon distance is that it is defined to be the present distance of the most distant objects which we are capable of seeing, limited only by the speed of light. And we were able to calculate that, and in particular for the model that we so far understood at this point, the matter-dominated flat universe, the horizon distance turned out to be three times c t. Now remember, if the universe were just static and appeared a time t ago, then the horizon distance would just be c t, the distance that light could travel in time t. What makes it larger is the fact that the universe is expanding, and that means that everything was closer together at early times and light could make more progress at early times, and then these objects have since moved out to much larger distances. So that allows the horizon distance to be larger than c t, and in this particular model, it's three times c t. Next we began a calculation, which is what we're going to pick up and continue now. We were calculating how to extend our understanding of a of t, the behavior of the scale factor, away from the flat case, to ultimately discuss the two other cases, the open and closed universe. And we decided on a flip of a coin-- I promise you I flipped a coin at some point-- to start with the closed universe. We could have done either one first. The equation that we start with is basically the same; it's the sign of k that makes the difference. This is the so-called Friedmann equation, and for a closed universe k is positive. This is the evolution equation, but it has to be coupled with an equation that describes how rho behaves with time. And for a matter-dominated universe, rho is just representing non-relativistic matter, which does nothing but spread out as the universe expands. And the spreading gives a factor of one over a cubed as the volume grows as a cubed. And that means that rho times a cubed is a constant, and that expresses everything that there is to know about how rho behaves with time. OK, then after writing these equations, we said that things will simplify a little bit, not a lot, but a little bit, if we redefine variables, basically to incorporate all the constants that appear in these equations into one overall constant. And we decided, or I claimed, that a good way to do that, an economical way to do that, is to define things so that the variables all have units which are easily understood. And in this case the units of length can describe everything that we need, so we chose to express everything in terms of variables that have units of length. So the scale factor itself has units of meters per notch, and that's not a length. And notches we'd like to get rid of, because we know they're unphysical. That is, there's no standard for what the notch should be. So if we divide a of t by the square root of k, the notches disappear, and we get something which just has the units of meters, or units of length, and I call that a twiddle of t.
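For reference, the factor of 3 in the horizon distance quoted above comes from a one-line integral, written here in standard notation for the flat matter-dominated case with $a(t) \propto t^{2/3}$ (the normalization of $a$ cancels between the prefactor and the integrand):
$$\ell_{\rm horizon}(t) \;=\; a(t)\int_0^t \frac{c\,dt'}{a(t')} \;=\; t^{2/3}\int_0^t \frac{c\,dt'}{t'^{2/3}} \;=\; t^{2/3}\,\bigl(3c\,t^{1/3}\bigr) \;=\; 3\,c\,t .$$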
Similarly, but more obviously, t can be turned into a length by just multiplying by c, the speed of light. So I defined a variable t tilde which is just c times t. So both of these new variables, with the tildes, have units of length. And then the Friedmann equation can be rewritten, just reshuffling things according to these new definitions, in this way, where all the constants are lumped into this variable alpha, where alpha is this complicated expression which absorbs all the constants from the earlier equation. Alpha also has units of length, and even though it has a rho times a tilde cubed in it, and both rho and a tilde each depend on time, the product of rho times a tilde cubed is a constant, because rho times a cubed is a constant and a tilde differs from a only by a factor of the square root of k, which is also a constant. So alpha is a constant even though it has time-dependent factors. The time dependence of those two factors cancel each other out to give something which is time independent and can be evaluated any old time. So this now is our equation, and we proceeded to manipulate it. So the first thing we did was to take its square root, and to rearrange it so that d t tilde appeared on one side, and everything else was on the other side. And everything else depends on a tilde but not explicitly on time. So this completely separates everything that depends on t tilde on the left, and everything that depends on a tilde on the right. And now we can just integrate both sides of that equation. And the point there is that there are basically two ways of proceeding here, one of which we already did when we did the flat case. When we did the flat case, we integrated both sides as indefinite integrals. And when you carry out an indefinite integration, you get a constant of integration, which then becomes a constant in your solution. And in that case we discovered that the constant really just shifted the origin of time. And since we had not said anything previously that in any way determined the origin of time, we used that constant to arrange the origin of time so that a of zero would equal zero, and that eliminated the constant. Just for variety, I am going to do it another way this time. Instead of doing an indefinite integral, I will do a definite integral. And if you do a definite integral, you have to make sure you're integrating both sides over the same range, or at least corresponding ranges. We have different names for the variables, but the range of integration of the two sides has to match in order to maintain the equality between the two sides. So we're going to integrate the left side from zero to some final time. And I'm going to call the final value of the t tilde variable t tilde sub f, with f standing for final. And there's no real final time here. It could be any time; it's just the final time for the integration, and in the end we'll discover how things behave at time t tilde sub f. And then once we've figured that out, we can drop the f. It will be a formula that will be valid for any time. So the left hand side is integrated from zero to time t tilde f; the right hand side has to be integrated over the corresponding time interval. And we would like to define a tilde and the origin of time so that a tilde equals zero at time zero, the same convention we used for the other case. The standard convention in cosmology is that t equals zero is the instant of the Big Bang. And the instant of the Big Bang is the time at which the scale factor vanished.
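In symbols, the setup being integrated is, assuming the convention in which alpha absorbs all the constants (the explicit expression for alpha here is a reconstruction of the "complicated expression" referred to above; it has units of length and is constant in time, as stated),
$$\tilde a \equiv \frac{a}{\sqrt k},\qquad \tilde t \equiv c\,t,\qquad \left(\frac{d\tilde a}{d\tilde t}\right)^{\!2} = \frac{2\alpha}{\tilde a} - 1,\qquad \alpha \equiv \frac{4\pi}{3}\,\frac{G\rho\,\tilde a^{3}}{c^{2}},$$
so separating variables gives $d\tilde t = \tilde a\,d\tilde a \big/ \sqrt{2\alpha\,\tilde a - \tilde a^{2}}$, which is exactly the integrand in the definite integral being set up here.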
So that means when we have a lower limit of integration of t tilde equals zero on the left hand side, we should have a lower limit of integration of a tilde equals zero on the right hand side. And similarly the upper limits of integration should correspond to each other. So the upper limit of integration on the left hand side was t tilde sub f, really just an arbitrary time that we designated by the subscript f. So on the right hand side the upper limit should be the value of a tilde at that time. And I'll call that a tilde sub f. So a tilde sub f is just defined to be the value of a tilde at the time t tilde sub f. And in this way the limits of integration on the two sides correspond, and now we can integrate them and we don't need any new integration constants. These definite integrals have definite values, which is why they're called definite integrals, I suppose. OK, any questions about that? Because that's where we're ready to take off and start doing new material. Yes. AUDIENCE: I have a general question. So regarding the issue of how we're not really sure how to extrapolate up to t equals zero, this is just kind of a general question. I was wondering about how we're kind of assuming throughout that time flows uniformly, and so I've heard about something like gravitational time dilation. So at the beginning especially, when there's such a high density of matter or radiation, then wouldn't that affect how, I guess, time flows? PROFESSOR: OK good question, good question. The question was, do things like general relativistic time dilation affect how time flows, and are we perhaps being overly simplistic in assuming that time just flows smoothly from time zero onward. The answer to that is that general relativity does predict an extra time dilation, which in fact is built into the Doppler shift calculation that we already did. And there are other instances where similar things happen. If a photon travels from the floor of this room to the ceiling of this room, there's a small Doppler shift, a small shift in the timing. And you could see it in principle with clocks as well. If you had a clock on the floor and a clock on the ceiling, they would not run at quite the same rate. But to talk about time dilation, you always have to have two clocks to compare. In the case of the universe, we have this thing that we call cosmic time, which can be measured on any clock. The homogeneity assumption implies that all clocks will do the same thing, so the issue of time dilation really does not arise. Our definition of cosmic time defines the time variable that we will use. And when we say it flows uniformly from zero up to the present time, that word uniformly sounds sensible. But if you think about it, I don't know what it means. So I don't even know how to ask if it's really uniform or not. It's certainly true that our time variable evolves from zero up to some final time, but smoothly or uniformly is not really a question we can ask until one has some other clock that one can compare it to. PROFESSOR: Yes. AUDIENCE: So you said at the beginning that something like an infinite density is just an effect of our equations. Don't a lot of theories like inflation come out of assuming that the universe began as some infinitely dense, small region? So what effect does this assumption, which may or may not be correct, have on those theories?
PROFESSOR: OK, the question is, if we are not sure we should believe the t equals zero singularity, how does that affect other theories like inflation, which are based on extrapolating backwards to very early times. And there is an answer to that. The answer may or may not sound sensible, and it may or may not be sensible. It's hard to know for sure. But one can be more detailed and ask how far back do we think we can trust our equations? And nobody really knows the answer to that, of course; that's part of the uncertainty here. But a plausible answer, which is kind of the working hypothesis for many of us, is that the only obstacle to extrapolating backwards is our lack of knowledge of the quantum effects of gravity, and therefore the quantum effects of what spacetime looks like. And we can estimate where that sets in. And it's at a time called the Planck time, which is about 10 to the minus 43 seconds, and inflation, as we'll see later, has sort of a natural timescale of probably about 10 to the minus 37 seconds. So although that's incredibly small, it's actually incredibly large compared to 10 to the minus 43 seconds. So we think there is at least a basis for believing that things like our discussions of inflation, which we'll be talking about later, are valid even though we don't think we can extrapolate back to t equals zero. Yes. AUDIENCE: I have a question about the use of the definite integrals. PROFESSOR: Yes. AUDIENCE: So we have a twiddle defined as a over square root of k. And we noticed in our equation that a goes to zero as t goes to zero, so a twiddle also goes to zero. How do we know then that that definite integral is convergent, because then we have a zero over zero kind of case. [INAUDIBLE] PROFESSOR: Let me think. Yeah, we'll see, I think is the answer. How do we know it's convergent? Well, we're going to actually do the integral, and then the integral does converge. You are right. The integrand does become zero over zero, but that in fact means the integrand is some finite number actually. Both numerator and denominator go to zero-- I mean, let me think about this. I guess a tilde squared becomes negligible, so the denominator goes like the square root of a tilde. So in fact the numerator divided by the denominator goes like the square root of a tilde as time goes to zero. Because you have a square root in the denominator and the a tilde squared becomes negligible, it's manifestly convergent. The integrand does not even blow up, even though a tilde does go to zero. It's certainly worth looking at, you're right. One should always check to make sure integrals are convergent. But since we will actually be explicitly doing the integral, if it was divergent we would get a divergent answer, and we will not, as you'll see in a couple minutes. Any other questions? OK, so in that case, to the blackboard. OK, so I will write on the blackboard the same equation we have up there, so we can continue. t twiddle f is equal to the integral from zero to a twiddle sub f, a twiddle d a twiddle over the square root of two alpha a twiddle minus a twiddle squared. OK, so what we'd like to do now is to carry out the integral on the right hand side. Ideally what we'd like to do is to carry out the integral on the right hand side and get some function of a tilde f, and then we'd like to convert that function to be able to write a tilde f as a function of time. We actually won't quite be able to do that.
We'll end up with what's called a parametric solution, and you'll see how that arises and what that means. I don't need to try to describe exactly in advance what that means. The integral can be done by some tricks, some substitutions. And the first substitution is based on completing the square in the denominator, and that motivates the substitution that we will make. So we can rewrite this just by doing some algebra on the denominator, which is called completing the square for reasons that you'll see when I write down what it is. The numerator will stay a tilde d a tilde. And the denominator can be written as alpha squared minus a tilde minus alpha quantity squared. So completing the square just means to put the a tilde inside a perfect square. And the nice thing about this is that now we can shift our variable of integration, and turn this into just a single variable instead of the sum of the two. And then you have an expression which is clearly simpler looking than this one, which had a tildes in both places. Now the variable of integration will appear only there. And the substitution which does that, obviously enough-- we can call the new variable whatever we want, and I'll call it x-- is that we're going to let x equal a tilde minus alpha. And we're just going to rewrite that integral in terms of x. So what we get when we do that is t sub f tilde is equal to the rewriting of that integral, just substituting: the a tilde becomes x plus alpha. So we get x plus alpha where a tilde was, and d a tilde becomes just d x. And the denominator, which was our motivation for making the substitution in the first place, becomes just the square root of alpha squared minus x squared. So this is perfectly straightforward. The next step, which is important, is to get the limits of integration right, because with this definite integral method we really have to make sure that our limits of integration are correct. So it's straightforward to do that: the lower limit of integration was a tilde equals zero, and if a tilde equals zero, x is equal to minus alpha. So the lower limit of integration expressed as a value of x is minus alpha. And the upper limit expressed as x was a tilde f, and that means x is equal to a tilde f minus alpha. So the limit here is a tilde sub f minus alpha for the upper limit of integration. OK, now this integral is still not easy, but it can be made easy by one more substitution. And the one more substitution is a trigonometric substitution to simplify the denominator. We can let x equal minus alpha cosine of theta, where theta is our new variable of integration. And then alpha squared minus x squared becomes alpha squared minus alpha squared cosine squared theta. And the alphas factor out and you have the square root of 1 minus cosine squared theta. 1 minus cosine squared theta is sine squared theta, which is a convenient thing to take the square root of; you just get sine theta. And everything else also simplifies. And the bottom line, which I will just give you, is that now we find that t sub f tilde is just the integral of 1 minus cosine theta d theta. That's it. Everything simplifies to that, which is easy to integrate. We also have to keep track of our limits of integration. The lower limit of integration is where x equals minus alpha. And if x equals minus alpha, that means cosine theta equals 1. And cosine theta equals 1 means theta equals 0. So the lower limit of integration on theta is easy; it starts at 0.
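Collecting the substitutions described so far in one place (standard notation; the factor of alpha that gets sorted out in the exchange below is included here):
$$\tilde t_f=\int_0^{\tilde a_f}\frac{\tilde a\,d\tilde a}{\sqrt{2\alpha\,\tilde a-\tilde a^{2}}}=\int_{-\alpha}^{\tilde a_f-\alpha}\frac{(x+\alpha)\,dx}{\sqrt{\alpha^{2}-x^{2}}},\qquad x=\tilde a-\alpha,$$
and then with $x=-\alpha\cos\theta$, so that $dx=\alpha\sin\theta\,d\theta$ and $\sqrt{\alpha^{2}-x^{2}}=\alpha\sin\theta$,
$$\tilde t_f=\int_0^{\theta_f}\alpha\,(1-\cos\theta)\,d\theta .$$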
And the final value is really just a value that corresponds to the final value of x which is a twiddle f minus alpha. For now I'm just going to call it theta sub f. It's just determined as the value of theta that goes with the upper limit there. We'll figure out in a second how to write it more explicitly. Let me first do the integral to just get that out of the way. Doing the integral, you get alpha times 1 minus cosine theta sub f. So this in fact becomes half of our solution. It expresses t sub f in terms of theta sub f. And now that we're done with the whole problem, I'm going to drop the subscript f. There's just some time that we're interested in which is called t and the value of a at that time will be called a, and the value of theta at that time will be called theta. So I'm just going to drop the subscript everywhere because we are now in a situation where the subscript is everywhere, so dropping it everywhere does not lose any information. So one of our equations is going to be simply t is equal to alpha times 1 minus cosine theta. And another equation will come from figuring out what theta sub f really is, which I said comes from making sure that the upper limit of integration here corresponds to the upper limit of integration in the previous integral. So theta sub f-- I'll have to determine this on another blackboard and then put the final equations together. AUDIENCE: [INAUDIBLE] PROFESSOR: Yes, that's right, I don't want to drop twiddles. t twiddle, thank you, is equal to alpha times 1 minus cosine theta, and then we have the final value of x. x is equal to a tilde minus alpha. So the final value of x is equal to the final value of a tilde minus alpha. And the final value of x is also related to theta by this equation, which is what we're trying to get, how theta is related to the other variables. So this is equal to minus alpha cosine of theta sub f. And now we might want to, for example, solve this for a tilde sub f, which just involves looking at the right hand half of the equation, bringing this alpha to the other side making a plus alpha. So this implies that a tilde sub f is equal to alpha times 1 minus cosine theta sub f. So this equation now just says that theta sub f means what it should mean to give us a final limit of integration that corresponds to the final limit of integration on our original integral, which we called a tilde sub f. Yes. AUDIENCE: Sorry, perhaps I missed it. But when you are doing the integral of 1 minus cosine theta, how come-- PROFESSOR: Oh, I got it wrong. Good point. The integral of cosine theta d theta is sine theta, not cosine theta. AUDIENCE: [INAUDIBLE] PROFESSOR: Wait a minute. Hold on. Do I still have it wrong? AUDIENCE: [INAUDIBLE] PROFESSOR: OK hold on. There was an alpha missing here. That's part of the problem, coming from the original equation here. Let's see if I have this right now. This is copying from a long line altogether. So I got both factors wrong. And then wait a minute. I think this is right now. There's still a wrong [INAUDIBLE]? AUDIENCE: [INAUDIBLE] PROFESSOR: If I differentiate sine, I get cosine. So if I differentiate minus sine, I get minus cosine. So differentiating this, I get this. That should mean that integrating this I get that. I think I have it right. Sometimes I get things right. It's a surprise, but-- AUDIENCE: [INAUDIBLE] PROFESSOR: What? AUDIENCE: The last equation you wrote. PROFESSOR: The last equation needs to be changed, right. This was copied from that, right. Thanks for reminding me.
So t tilde is equal to alpha times theta minus sine theta. OK, and now I was working out the relationship between theta and a tilde. And let me just remind you that all I did was make sure that the upper limits of integration correspond to each other. So I'm just basically rewriting the upper limit of integration in terms of the new variable each time when we change variables starting from a tilde going to a tilde to x, and from x to theta. And then I use the equality of these two to write a tilde in terms of theta. And these equations now hold with the subscript f present. When subscript f's appear everywhere we can just drop them and say that the time that we called t sub f now we're just going to call t. And now we could put together our two final results which are maybe right over here. We have ct. I'll eliminate my tildes altogether now. ct, which is t tilde, is equal to alpha times theta minus sine theta. And a divided by the square root of k, which is a tilde and its sub f, but we're dropping the sub f, is equal to alpha times 1 minus cosine theta. And this is as good as it gets. Ideally, it would be nice if we could solve the top equation for theta as a function of t, and plug that into the bottom equation, and then we would get a as a function of t. That's what ideally we would love to have. But there's no way to do that analytically. In principle of course, this equation can be inverted. You could do it numerically. For any particular value of t you could figure out what value of theta makes this work, and then plug that value of theta into here. But there's no analytic way that you can write theta as a function of t. It's not a soluble equation. So this is called a parametric solution in the sense that theta is a parameter. And as theta varies, both t and a vary in just the right way so that a is always related to t in the correct way to solve our original differential equation. That's what's meant by a parametric solution. We can also see from this equation, or maybe from thinking back about how things are defined in the first place, how theta varies over the lifetime of our model universe. Theta we discovered starts at 0. We discovered that when we wrote our integral over there. And we could also probably see it from here. We start our universe at a equals 0. And at a equals 0, we want 1 minus cosine theta to be 0 and theta equals 0 does that. So theta starts at 0, which corresponds to a equals 0, and it also corresponds putting theta equals 0 here to t equals 0. So we have arranged things the way we intended so that a equals 0 happens the same time t equals 0 happens. And then theta starts to grow. As theta grows, a increases, the universe gets bigger until theta reaches pi. And when theta reaches pi, cosine of pi is minus 1, you get a factor of 2 here, 2 alpha. That's as big as our universe gets. And then as theta continues beyond that, 1 minus cosine theta starts getting smaller again. So our universe reaches a maximum size when theta equals pi, and then starts to contract. And then by the time theta equals 2 pi, you are back to where you started from, a is again equal to 0. We have universe which starts at 0 size, goes to a maximum size, goes back to 0 size, giving a Big Crunch at the end. And that's the way this closed universe behaves. It turns out that these equations actually correspond to some simple geometry. It corresponds to a cycloid. And you may or may not remember what a cycloid is. 
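[Editorial aside, not part of the lecture: a quick numerical check that the parametric solution just stated reproduces the original definite integral. Python with numpy and scipy is assumed, units are suppressed, and alpha = 1 is an illustrative value; the check is valid for the expanding phase, theta between 0 and pi.]

```python
import numpy as np
from scipy.integrate import quad

# For a final development angle theta_f in (0, pi], set a_f = alpha*(1 - cos(theta_f))
# and compare the original integral for t_tilde with alpha*(theta_f - sin(theta_f)).
alpha = 1.0  # illustrative value

def t_tilde_from_integral(a_f):
    integrand = lambda a: a / np.sqrt(2.0 * alpha * a - a**2)
    value, _ = quad(integrand, 0.0, a_f)
    return value

for theta_f in [0.5, 1.0, 2.0, 3.0]:
    a_f = alpha * (1.0 - np.cos(theta_f))
    print(t_tilde_from_integral(a_f), alpha * (theta_f - np.sin(theta_f)))
```

Each pair of printed numbers should agree to the accuracy of the quadrature, which is one way to see that the substitutions above were carried out consistently.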
There are the equations written on the screen, which are hopefully the same equations that I have on the blackboard. Can't always count on these things it turns out. But yeah, they are the same equations, that's healthy. And I have a diagram here which explains this cycloid correspondence. A cycloid is defined as what happens in this picture. Let me explain the picture. We have a disk shown in the upper left in its original position with a dot on the disk, which is initially at the bottom. And initially we're going to put this disk at the origin of our coordinate system to make things as simple as possible. And then we're going to imagine that this disk rolls without slipping to the right. And the path that this dot traces out, which is shown along that line, is a cycloid by definition. That's what defines a cycloid. It's the path that a point on a rolling disk evolved through. So what I would like to do is convince you that this geometric picture corresponds to those equations. Let me put the equations higher to make sure everybody can see them. And it's actually not so complicated once you figure out how to parse the pictures. So what I've drawn in the upper right is a blow up of this disk after it's rolled through some angle theta. And I even made it the same angle theta in the two cases. But this is just a bigger version of what's in the corner there showing the disk after it rolled a little bit. So after it's rolled through an angle theta, what we want to verify is that the horizontal and vertical coordinates here correspond to these two equations. And if they do, it means that we're tracing out the behavior of those two equations. So look first at the horizontal component. The horizontal component is the ct axis. So that should correspond to alpha times theta minus alpha times sine theta. And you can see that in the picture. We're talking about the horizontal component of the coordinates of this point P. And the point is that you can get to P starting at the origin. Now remember we're only looking at horizontal motion so we could start anywhere on that line. We could start at the origin, go to the right by alpha theta, and to the left by alpha times sine theta, and we get there. And that's exactly what this formula says. So if we just understand the alpha theta and the alpha sine theta of those two lines we have it made. So let's look first at what happens where this first arrow comes from, the alpha theta line. That's just the total distance that the point of contact has moved during the rolling process. And the claim is that as something rolls, really the definition of rolling without slipping, is that the arc length that is swept out by the rolling is the same as the length along the surface on which it's rolling. You could imagine, if you like, that as it rolls there's a tape measure that's wrapped around it that gets left on the ground as it rolls. And if you could picture that happening, the existence of that movie really guarantees that the length on the ground is the same as what the length was when the tape measure was wrapped around the cylinder. So the length that the point of contact has moved is just alpha times the angle through which the disk is rolled. So that explains the alpha theta label on that line. To get the alpha sine theta on the line above, that's the distance between the point P and a vertical line that goes through the center of the disk. And that's just simple trigonometry on this triangle. The radius of our circle is alpha. 
And then by simple trigonometry, this length is alpha times sine theta, which is what the label says. So to summarize what we've got here, we can find the x-coordinate, the horizontal coordinate, of the point P by going to the right a distance alpha theta, and then back to the left a distance alpha sine theta. And that gives us an x-coordinate, which is exactly the formula that appears in the ct formula. So ct works. The horizontal component of that dot is just where it should be to trace out the equations that describe the evolution of a closed universe. Similarly we can now look at the vertical components of that dot. Again it's most easily seen as the difference of two contributions. We're trying to reproduce this formula that says that's alpha minus alpha cosine theta. So if we start at vertical coordinate 0, which means on the x-axis, we can begin by going up to the center of the disk. And the disk has radius alpha, so that's going up the distance alpha. And then we go down the distance of this piece of the triangle, going from the center of the disk to the point which is level with the point P. And that again is just trigonometry on that triangle, and it's alpha cosine theta by simple trigonometry. So we can get to the elevation of point P by going up by alpha and down by alpha times cosine theta. And that's exactly that formula. So it works. The x and y components, the horizontal and vertical components of that dot, are exactly the two formulas here. So one of them can be thought of as the x-axis, and one of them can be thought of as the y-axis. And the rolling of the disk just traces out the evolution of our closed universe. So closed universes evolve like a cycloid. Any questions about that? OK, great. OK, let me just mention that this angle theta is sometimes given a name. It's called the development angle of the universe or of the solution. And that is just intended to have the connotation that theta describes how developed the universe is and theta has a fixed scale. It always goes from 0 to 2 pi over the lifetime of this closed universe, no matter how big the closed universe might be. That brings me to my next question I want to mention. How many parameters do we have in this solution? The way we've written it, it looks like it can depend on both alpha and k because both of them appear in the answer. And k is positive for our closed universes. These formulas will not make sense if k were negative-- the square root of k appears there. We don't want anything to be imaginary. But k can have any value in principle and these equations would still be valid. So on the surface it appears like there's a two parameter class of closed universe solutions. But that's actually not true. Can somebody tell me why it's not true? Yes. AUDIENCE: [INAUDIBLE] PROFESSOR: Exactly. Yes, since k has units of 1 over notch squared, you can change k to anything you want by changing your definition of a notch. And there's nothing fixed about the definition of a notch. So k is an irrelevant parameter. If we change k we're just rescaling the same solution, and not actually creating a new solution. So there's really only a one parameter class of solutions. One could, for example, fix k to always be 1, and then we'd have a one parameter class of solutions indicated by alpha. Alpha, unlike k, really does have a clear, physical meaning related to the behavior of the universe. And we could see what that is if we ask, what is the total lifetime of this universe from beginning to end, from Big Bang to Big Crunch.
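[Editorial aside, not part of the lecture: before the lifetime question is answered below, here is a small check of the cycloid correspondence just described, assuming Python with numpy and an illustrative disk radius alpha = 1.]

```python
import numpy as np

# The disk of radius alpha has rolled without slipping through angle theta, so its
# center sits at (alpha*theta, alpha). The marked dot started at the bottom of the
# disk; rolling to the right rotates the disk clockwise by theta, so the
# center-to-dot vector (0, -alpha) is rotated clockwise by theta.
alpha = 1.0  # illustrative radius

def dot_position(theta):
    center = np.array([alpha * theta, alpha])
    offset = np.array([-alpha * np.sin(theta), -alpha * np.cos(theta)])  # rotated (0, -alpha)
    return center + offset

for theta in np.linspace(0.0, 2.0 * np.pi, 7):
    x, y = dot_position(theta)
    # compare with ct = alpha*(theta - sin theta) and a/sqrt(k) = alpha*(1 - cos theta)
    print(x - alpha * (theta - np.sin(theta)), y - alpha * (1.0 - np.cos(theta)))
```

Both differences print as zero (up to rounding), which is the rolling-disk picture and the closed-universe equations saying the same thing.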
We can answer that by just looking at the ct equation. From Big Bang to Big Crunch, we know that theta evolves from 0 to pi back to 2 pi, which is the same as 0. So theta goes through one cycle of 2 pi during a lifetime of our model universe. As theta goes from 0 to 2 pi, sine theta starts at 0 and eventually comes back to 0 when theta equals 2 pi. But theta increases from 0 to 2 pi over one cycle. So over one cycle of our universe, ct increases by alpha times 2 pi. So that tells us what the total lifetime of the universe is, I'll call it t total. And we get it by just writing c times t total equals 2 pi times alpha. And I think I can do this one without making a mistake. t total is then 2 pi alpha divided by c. And we can even check our units there. Alpha has units of length, so length divided by c becomes a time, c being a velocity. So it has the right units. And that's the total lifetime of the universe. It's just determined by alpha. So alpha can be viewed as just the measure of the total lifetime of the universe, which can be anything for different sized closed universes. Alpha is also related to the maximum value of a over root k. And a has no fixed meaning, this is meters per notch. But a over root k does have units of meters. We haven't yet really seen what it means physically, because that's related to the geometry of the closed universe which we'll be discussing later. But in any case as a mathematical fact, we could always say that the maximum value of a over root k is determined by alpha. So a max over root k is equal to the maximum value of this expression. And this expression has a maximum value when theta equals pi, which gives cosine theta equals minus 1, which gives a 2 here. And that's as big as it ever gets, so that's just equal to 2 alpha. So alpha is related in a very clear way to the total lifetime of our universe, and is also related to a over root k, although we haven't really given a physical meaning to a over root k yet. But we know it has dimensions of meters. OK, the next calculation I want to do is to calculate the age of the universe as a function of measurable things. We learned for the flat matter dominated case that there was a simple answer to that. The age was just 2/3 times the inverse Hubble parameter. So what you do now is get the analogous formula here, it follows in principle immediately from our description of the evolution. But we have to do a fair number of substitutions before we can really see how to express the age in terms of things that we're interested in. The formula here tells us directly how to express the age of the universe. t is the age of the universe as a function of alpha and theta. But if you tell an astronomer to go out and measure alpha and theta so I could calculate the age, he says what in the world are alpha and theta. So what we'd like to do is to express the age in terms of things that astronomers know about. And the characterizations of the universe like this that an astronomer would know about would be the Hubble expansion rate, and some notion of the mass density. And the easiest way to talk about the mass density is in terms of omega, the fraction of the critical density that the actual mass density has. So our goal is going to be to manipulate these equations. All the information is already there. But our goal will be to manipulate these equations to be able to express the age t in terms of h and omega. OK, so first we need to remind ourselves what omega is. 
The critical density is defined as that density which makes the universe flat. And we've calculated that the critical density is equal to 3 h squared over 8 pi times Newton's constant, capital G. We can then write the mass density rho as omega times the critical density, which is just the definition of omega. Omega is rho divided by rho c, the actual mass density divided by the critical density. And putting in what rho c is, we can express rho as 3 h squared omega divided by 8 pi G. And being very pedantic, I'm just going to rewrite that in the form that we're actually going to use it by multiplying through. 8 pi over 3 G rho, taking these factors and bringing them to the other side, becomes just equal to h squared times omega. And you might recognize this particular combination as what appears in the Friedmann equation. The Friedmann equation told us that a dot over a squared is equal to 8 pi over 3 G rho, minus kc squared over a squared. And in order to get the substitutions that I want, I'm going to just rewrite this putting h squared for a dot over a squared. 8 pi over 3 G rho we said we could write as h squared times omega. And then we have minus kc squared over a squared. Note that we really have here a tilde squared. This is a squared divided by k if I put them together. So this term can be written as minus c squared over a tilde squared. And this accomplishes one of our goals. It allows us to express a tilde in terms of the quantities that we want in our answers, h and omega. And if we can do the same for theta, we have everything we need to express the age. So the implication here is that a tilde squared is equal to c squared divided by h squared times omega minus 1. To take the square root of that equation, to find out what a tilde is, we need to think a little bit about signs and things like that. A tilde is always positive. This is the scale factor divided by the square root of k, square root of k is positive, scale factors are always positive the way we defined it. So we can take the square root of that taking the positive square root of the right hand side. Omega is bigger than 1 for our case, so omega minus 1 is a positive number, h squared is a positive number. So taking the square root there offers no real problem. We can write a tilde is equal to c over-- I guess this is a point I might not have noticed until later. h can be positive or negative over the course of our calculation. We're going to talk about an expanding phase and a contracting phase. So when we take the square root of h squared, we want the positive number to give us a positive a tilde. So it's the positive square root that we want, which is the magnitude of h, not necessarily h. When h is positive, the magnitude of h is h. h could be negative though, and the magnitude of h is still positive, and then the square root of omega minus 1, which is always positive. So that's our formula for a tilde in terms of h and omega. OK, now we want to evaluate alpha, and I guess I did not keep the formula for alpha quite as long as we needed it. When we defined alpha in the first place, let me remind you how it was defined, 4 pi over 3 G rho a tilde cubed over c squared. And that can be evaluated using our formula for rho and we'll put that in for rho, and what we get is c over 2 times the magnitude of h using this formula for a tilde as well. And then omega over omega minus 1 to the 3/2 power. Just using this formula, and we know how to express rho from the right hand side, and we know how to express a tilde from this formula here.
So everything's straightforward, and this is what we get. And now I want to use these to rewrite this equation, a over the square root of k is equal to alpha times 1 minus cosine theta. I'm going to replace a over root k by this formula. I'm going to replace alpha by that formula. So this implies, rewriting it, that c over the magnitude of h times the square root of omega minus 1 is equal to c over twice the magnitude of h times omega minus 1 to the 3/2 power times 1 minus cosine theta. And now we've had to survive some boring algebra. But notice that now most things cancel away here. We get a very simple relationship between theta and h and omega, actually just omega. In particular, when we solve that, we get simply that cosine theta is equal to 2 minus omega, over omega. So theta is directly linked to omega. If you know omega, you know theta; if you know theta, you know omega, by that formula. And we can rewrite this the other way around by solving for omega if we want. Omega is equal to 2 over 1 plus cosine theta. Now we can look at this qualitatively to understand how omega is going to behave. At the very beginning cosine theta is equal to 1, theta is equal to 0, so omega is 2 over 1 plus 1, which is 1. So at very early times omega is driven to 1 even in a closed universe. As theta gets larger, cosine theta gets less than 1. This then becomes more than 1. So omega starts to grow as the universe starts to evolve. At the turning point, when the universe has reached its maximum size, theta is pi, cosine theta is minus 1, omega is infinite at the turning point. That may or may not be a surprise. But if you think about it, it's obvious. At the turning point h is 0, therefore the critical density is 0, but the actual density is not 0. And the only way the actual density can be nonzero while the critical density is 0 is for omega to be infinite, so we should have expected that. And then on the return trip, the collapsing phase is a mirror image of the expanding phase: omega goes from infinity at the turning point back to 1 at the moment of the Big Crunch. Yes. AUDIENCE: I'm confused with what the universe would look like when it gets to infinity. PROFESSOR: It would look static. It's temporarily static. It's reached a maximum size and is about to turn around and collapse, so h is 0. AUDIENCE: OK, but like with the density. PROFESSOR: Well we could calculate the density. It's some number which depends on alpha. AUDIENCE: OK, but that doesn't diverge or anything. PROFESSOR: It doesn't diverge or anything, no. It's just some finite density. At the turning point, it's just a finite density that can be expressed in terms of alpha. Sounds like a good homework problem. I think maybe I'll do that. OK, so this now allows us to express theta as a function of omega, which is what we wanted. If we express theta as a function of omega, and alpha as a function of omega and h, we have our answer. We have t expressed as a function of h and omega. So there are choices about how exactly to express theta in terms of omega that will involve inverse trigonometric functions. And anything that can be expressed in terms of an inverse cosine, can also be expressed as an inverse sine by doing a little bit of manipulations. We have our choice here about what we want to do. But in any case, the answer already has a factor of sine theta in it. So it's most useful to express theta as the inverse sine of what we get from that formula, to express the answer in what at least to me is the simplest form.
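[Editorial aside, not part of the lecture: a numerical consistency check of the substitutions above, assuming Python with numpy. The Hubble rate of 50 km/s/Mpc and omega = 1.5 are purely illustrative; G and c are the standard constants.]

```python
import numpy as np

G = 6.674e-11                    # m^3 kg^-1 s^-2
c = 2.998e8                      # m/s
H = 50.0 * 1000.0 / 3.086e22     # illustrative Hubble rate, 50 km/s/Mpc, in 1/s
Omega = 1.5                      # illustrative value > 1 (closed universe)

rho_c = 3.0 * H**2 / (8.0 * np.pi * G)
rho = Omega * rho_c
a_tilde = c / (abs(H) * np.sqrt(Omega - 1.0))
alpha = (c / (2.0 * abs(H))) * Omega / (Omega - 1.0)**1.5

# Friedmann equation: H^2 should equal (8 pi G / 3) rho - c^2 / a_tilde^2
print(H**2, (8.0 * np.pi / 3.0) * G * rho - c**2 / a_tilde**2)

# alpha computed from its definition should match the closed-form expression
print(alpha, (4.0 * np.pi / 3.0) * G * rho * a_tilde**3 / c**2)

# cos(theta) = (2 - Omega)/Omega should be consistent with a_tilde = alpha*(1 - cos theta)
cos_theta = (2.0 - Omega) / Omega
print(a_tilde, alpha * (1.0 - cos_theta))
```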
So I'm going to manipulate this a little bit to find out what sine theta is in terms of omega, and then invert that to express theta in terms of omega. So sine theta is of course plus or minus the square root of 1 minus cosine squared theta. And cosine theta we know in terms of omega, so we can express this in terms of omega. And if you do that it's straightforward enough algebra. It's plus or minus, depending on which sign of the square root is relevant, and we'll talk about that in a minute. It's plus or minus twice the square root of omega minus 1, over omega. So we can express theta as the inverse sine of this expression. And now what I want to do is to make use of this to put into this expression using the value for alpha that we've already calculated and wrote over there. So we get our final answer, which is that t is equal to omega over twice the magnitude of the Hubble expansion rate times omega minus 1 to the 3/2 power times the arc sine or inverse sine of twice the square root of omega minus 1 over omega. That's just the theta that appears in this formula written in terms of sine theta or the inverse sine of the quantity that we determined was the sine of theta. And then I see here I should have a plus or minus because we haven't figured out our signs yet. Either is actually possible, depending on where we are in the evolution. And then minus or plus twice the square root of omega minus 1 over omega. That's a plus or minus, this is a minus or plus. And the reason I wrote one upstairs and one downstairs is that we don't yet really know how to evaluate theta, but the sign of this term is always going to be the negative of the sign of that term, this or that minus sign. So theta can be the inverse sine of this expression with either sign of the sine. Sorry for the puns. But whichever it is, it's the same on there as it is here but with a minus sign in front, that minus sign. OK, so this is our final formula for the age. But we still need to think a little bit about the s-i-g-n signs of these inverse functions that appear here, that and that, and it's straightforward if you just take it case by case. And in the notes we have a table which I'll put shortly on the screen. But let's start by just talking about the earliest phase where the universe is shortly after the Big Bang, so that the development angle is a small angle. We know that theta is going to go from 0 to 2 pi over the lifetime of our universe. So I now want to think of theta being small. And small means small compared to any number you think about. So when theta is small, sine theta is nearly theta. Both are positive. And in that case the sine theta being positive means this is the positive root in this equation, and therefore the positive root in that equation. So for early times this would be the plus sign, and that would mean that this would be the minus sign. Again the minus sign just coming from there and theta being positive. So for early times it's a plus and minus. And the arc sine is itself ambiguous. For early times the angle we know is going to be just a little bit bigger than 0. So that's the evaluation that we make of the arc sine function. Pi minus that would also give us the same sine. It would be another possible value for the arc sine. And of course 2 pi plus that would also be another possible root. So you have to know which root to take to know the right answer here because as an angle, 0 is equivalent to 2 pi, but as a time, 0 is not at all equivalent to 2 pi. So you do have to know the right one to take.
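[Editorial aside, not part of the lecture: the age formula just written down, evaluated for the early expanding phase, assuming Python with numpy. With the plus root of the arc sine and the minus sign on the second term, as is about to be discussed, the result should approach the familiar flat matter-dominated value 2/(3H) as omega approaches 1. The Hubble rate below is an illustrative number.]

```python
import numpy as np

# Age in the expanding phase with 1 < Omega <= 2 (theta between 0 and pi/2,
# so arcsin returns the correct branch directly).
def age_early_expanding(H, Omega):
    s = 2.0 * np.sqrt(Omega - 1.0) / Omega            # this is sin(theta)
    prefactor = Omega / (2.0 * abs(H) * (Omega - 1.0)**1.5)
    return prefactor * (np.arcsin(s) - s)

H = 50.0 * 1000.0 / 3.086e22     # illustrative Hubble rate, 50 km/s/Mpc, in 1/s

for Omega in [1.9, 1.5, 1.1, 1.01, 1.001]:
    print(Omega, age_early_expanding(H, Omega) * H)   # age in units of 1/H
print("flat matter-dominated limit:", 2.0 / 3.0)
```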
We'll continue doing that on a case by case basis. Those are the equations, that's the formula for the age, and that's the formula for the age with a description of which roots to take for each case, which just comes out by following the evolution. We know that theta is going from 0 to 2 pi, and this last column is the inverse sine of the expression-- meaning the expression that appears here. For the smallest angles it's between 0 and pi over 2. We can think of this actually as defining our columns. Theta starts at 0, so there's a time range between 0 and pi over 2, a time range between pi over 2 and pi, a time range between pi and 3 pi over 2, and a final time range between 3 pi over 2 and 2 pi. And the first two correspond to the expanding phase, the second two correspond to the contracting phase. We can easily see what values of omega are relevant in those cases. Omega we said starts at 1 and gets larger. The borderline, where the angle is pi over 2, one could just plug into these formulas and see that it amounts to omega equals 2. So that is a division line between these first two quadrants just calculated from the value of theta. Omega then goes to infinity as we said, comes back down, and goes back to 1 in the end. And in this column we just figure out which sign choice corresponds to getting the right value for omega and the angle that appears in the arc sine of our formula for the age, the formula there. So any one of these I claim is very obvious. Seeing the whole picture takes time because I think you really have to look at each case one at a time to make sure you understand it in detail. But if you understand the initial expanding phase, that's what corresponds to our universe if our universe were closed. And the others are just as easy. You just have to take them one at a time I think. OK, we're going to end there. We will continue on Thursday.
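[Editorial aside, not part of the lecture: one way to package the case-by-case sign table just described is to recover theta from an inverse cosine instead of an inverse sine, since cos theta = (2 - omega)/omega fixes theta in (0, pi) for the expanding phase and 2 pi minus that for the contracting phase. The sketch below, assuming Python with numpy and an illustrative Hubble rate, is this editor's repackaging rather than the lecture's formula, but it should agree with the table.]

```python
import numpy as np

# t = (alpha/c) * (theta - sin theta), with theta recovered from Omega plus the
# knowledge of whether the universe is expanding or contracting.
def age_closed(H, Omega, expanding=True):
    theta = np.arccos((2.0 - Omega) / Omega)        # in (0, pi): expanding phase
    if not expanding:
        theta = 2.0 * np.pi - theta                 # in (pi, 2 pi): contracting phase
    alpha_over_c = Omega / (2.0 * abs(H) * (Omega - 1.0)**1.5)
    return alpha_over_c * (theta - np.sin(theta))

H = 50.0 * 1000.0 / 3.086e22     # illustrative Hubble rate in 1/s
print(age_closed(H, 1.5, expanding=True) * H)    # first quadrant: expanding, Omega < 2
print(age_closed(H, 5.0, expanding=True) * H)    # second quadrant: expanding, Omega > 2
print(age_closed(H, 5.0, expanding=False) * H)   # third quadrant: contracting, Omega > 2
```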
MIT_8286_The_Early_Universe_Fall_2013
11_NonEuclidean_Spaces_Closed_Universes.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation, or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: I'd like to begin by reviewing the last lecture, where we introduced the idea of non-Euclidean spaces. The important idea here is that general relativity describes gravity as a distortion of space and time. And that idea becomes crucial in cosmology, so we want to understand what that's about. As I explained last time, there are really two aspects to this general subject. There's the question of, how do we treat curved spacetimes and how do particles, for example, behave when they move through curved spacetimes. That we will be doing. There's also the important question of how does matter distort the spacetime, which is the Einstein field equations. That we will not do. That we will leave for the general relativity course that you may or may not want to take at some point, or maybe you already have. The idea of non Euclidean geometry really goes back to Euclid himself, and the fifth postulate. The distinction between Euclidean geometry and what is generally called non Euclidean geometry is entirely in the fifth postulate. We never bothered changing the first four postulates. This fifth postulate is explained better in the next slide, where we have a diagram. The fifth postulate says that if a straight line falls on to other straight lines, such that the sum of the two included angles is less than pi, then these two lines will meet on the same side where the sum of the angles is less than pi, and will not meet on the other side. This postulate was something that attracted attention, really from the very beginning, because it seems like a much more complicated postulate than the other four. And for many years, mathematicians tried to derive this postulate from the others, thinking that it couldn't really be a fundamental postulate. But that never worked, and eventually in the 17 and 1800s, mathematicians realized that the postulate was independent. And you could either assume it's true or assume it's false. And you get different versions of geometry in the two cases. We learned last time that there are four other ways, at least, of stating the things that are equivalent to the fifth postulate, and they're diagrammed here. If a straight line intersects one of two parallel lines, it will also intersect the other. If one has a straight line and another point, there is one and only one line through that point, parallel to the original line. If one has a figure, one can construct a figure which is similar to it, of any size. And finally, the famous statement about sum of the angles that make up the vertices of a triangle. If there exists just one triangle for which it's 180 degrees, then that's equivalent to the fifth postulate, and you can prove that every triangle has 180 degrees. The fifth postulate was questioned in a serious way by Giovanni Geralamo Saccheri in the 16, 1700s, who wrote a detailed study of what geometry would be like if the fifth postulate were false. He wrote this believing that the fifth postulate must be true. And he was looking for a contradiction, which he never found. 
Things went further in the later 1700s, with Gauss and Bolyai and Lobachevski, who independently developed the geometry that we call Gauss Bolyai Lobachevski geometry, which is a two dimensional geometry, non Euclidean. It corresponds to what we now call an open universe, that we'll be learning about in more detail later today. The Gauss Bolyai Lobachevski geometry was treated purely axiomatically by the three authors that I just mentioned. But it was given a coordinate representation by Felix Klein in 1870, which was really the first demonstration that it really existed. When one treats it axiomatically, one still always has the possibility that some contradiction could be found someplace. But by the time you put it into algebraic equations, then it becomes as consistent as our understanding of the real numbers, which we have a lot of confidence in, even though I don't think mathematicians really know how to prove the consistency of anything. But we have a lot of confidence in this kind of mathematics. So by 1870, it was absolutely clear that this open geometry, this non-Euclidean geometry, was a perfectly consistent, is a perfectly consistent, formulation of geometry. An important development coming out of Kline's work is that the idea about how one describes geometry changed dramatically. Prior to Kline, essentially all of geometry was done in the same way that Euclid did it, by writing down axioms and then proving theorems. Kline realized that you could gain a lot of mileage by taking advantage of our understanding of algebra and calculus by describing things in terms of functions. And in particular, geometry is described by giving a distance function between points. This was further developed by Gauss, who realized that distances are additive. So if distances mean anything like what we think distances mean, it would be sufficient to describe the distance between any two arbitrarily close points. And then if you want to know the distance between two distant points, you draw a line between them, and measure the length of that line by adding up an infinite number of infinitesimal segments. So the idea that distances need only be defined infinitesimally was very crucial to our current understanding of geometry. Gauss also introduced another important idea, which is a restriction on what that infinitesimal distance function should look like. Gauss proposed that it should always have the same quadratic form that it has for Euclidean distances. For Euclidean distances, the Pythagorean theorem tells us that the distance between any two points is the sum of the squares of the coordinate distances. And for non-Euclidean geometry, we generalize that by allowing each term in this quadratic expansion to have its own prefactor, and those prefactors could be functions of position. So g sub xx of xy is just a function of x and y. And g sub xy is another function of x and y. And g sub yy is another function of x and y. And the distance function is taken as the sum of those three terms. The important feature that that quadratic form corresponds to, which was noticed by Gauss, is that if the distance function has that form, it means that even though the space is not Euclidean and will not obey, in general, the axioms of Euclidean geometry-- and in particular the fifth postulate-- it is still true that in a very tiny neighborhood, it will resemble Euclidean geometry, where the resemblance will become more and more exact as you confine yourself to tinier and tinier neighborhoods. 
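[Editorial aside, not part of the lecture: a minimal sketch of the "define distances infinitesimally and add them up" idea just described, assuming Python with numpy. The metric component functions below are made up purely for illustration (they reduce to the ordinary Euclidean metric); they are not from the lecture.]

```python
import numpy as np

# Given metric components g_xx, g_xy, g_yy as functions of position, the length of a
# parametrized path (x(t), y(t)) is obtained by adding up sqrt(ds^2) along the path.
def g_xx(x, y): return 1.0
def g_xy(x, y): return 0.0
def g_yy(x, y): return 1.0   # illustrative: these choices give the Euclidean metric

def path_length(x_of_t, y_of_t, t_values):
    x, y = x_of_t(t_values), y_of_t(t_values)
    dx, dy = np.diff(x), np.diff(y)
    xm, ym = 0.5 * (x[:-1] + x[1:]), 0.5 * (y[:-1] + y[1:])   # midpoints of each step
    ds2 = g_xx(xm, ym) * dx**2 + g_xy(xm, ym) * dx * dy + g_yy(xm, ym) * dy**2
    return np.sum(np.sqrt(ds2))

t = np.linspace(0.0, 1.0, 10001)
# a straight segment from (0, 0) to (3, 4): the Euclidean length should come out as 5
print(path_length(lambda s: 3.0 * s, lambda s: 4.0 * s, t))
```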
And we're kind of aware of this in everyday life. The surface of the earth is approximately spherical-- we'll ignore little things like mountains and roads and bumps, and pretend the surface of the Earth is spherical. Nonetheless, the surface of the Earth looks flat to us. And the reason it looks flat is that we see only a tiny little bit of it. And a tiny piece of a curved surface always looks flat. And mathematically speaking, the way to introduce enough assumptions to validate that conclusion is to assume that the local distance function is a quadratic function of this form. And what Gauss originally proved is that if the distance function is of that form, it is always true that in a tiny neighborhood, to an arbitrary accuracy, you could define new coordinates-- x prime and y prime in the notation of this diagram-- where in terms of the new coordinates in the tiny neighborhood, the distances are just the Euclidean distances. ds squared equals dx prime squared plus dy prime squared. And that's a very crucial fact that we will be making use of, Einstein made use of, in the context of general relativity, which we'll be getting to. OK, that finishes the review. Any questions about anything that we talked about last time? OK, great. Now what we want to do is to go on to apply these ideas in detail. In one of them in particular, build up a full description of closed and open universes today. And we are going to begin by giving a mathematical description of the simplest non-Euclidean geometry that we have available, which is just the surface of a sphere-- a two dimensional sphere embedded in three dimensions. That is, a two dimensional surface embedded in a three dimensional space, as is intended to be shown in that diagram. The sphere is described simply by x squared plus y squared equals plus z squared equals r squared, where x, y, and z are just Euclidean coordinates. So in this case, our curved space, which is the surface of the sphere, can be embedded in a Euclidean space of one higher dimension. That's not always the case. We should not pretend that that will always be the case. But when it is the case, it allows us to study that curved surface in a very straightforward way, because everything is really determined by the Euclidean geometry of the space in which this sphere is embedded. Nonetheless, when we're done formalizing our description of the surface of the sphere, the goal will be to concentrate on what Gauss called the "inner properties," namely the properties of the surface itself. And we will try to pretend that the three dimensional space never even existed. It won't be required for anything that we'll be left with, once we have a solid description of the surface itself. And this will be very important for what we'll be doing later. So it is important to get in touch with the idea that we're going to study this sphere, making use of the fact that it could be embedded in three Euclidean dimensions. But in the end, we want to think of it as a two dimensional geometry, which is non Euclidean. OK, so our goal will be to write down the distance function for some coordinization of the surface of the sphere. I should say at the beginning, when this picture makes it most obvious, that one of the reasons we might be interested in the surface of a sphere, if we're interested in cosmology, is that we know that we're trying to build cosmological models that are consistent with homogeneity and isotropy. 
Because we discussed earlier those are, to a very good approximation, valid features of the universe that we're living in. So the surface of a sphere has those properties. It's certainly homogeneous, in the sense that any point on the surface of a sphere will look exactly like any other point. If you were living on that sphere, and you didn't have any other landmarks, you'd have no way of knowing where on the sphere you were. Furthermore, it's isotropic-- same in all directions. And when I say that, it's important that I really mean it in the context of the two dimensional surface, not the three dimensional geometry. So the three dimensional geometry is isotropic. If you sat at the center of that sphere and looked any direction in three dimensions, everything would look the same. But that's not the isotropy that's important for us. We want to imagine ourselves as two dimensional creatures living on the surface. And then you can imagine that if you were a two dimensional creature living on the surface-- so you happen to be at the North Pole, because that's the easiest to describe-- you could imagine looking around in a circle, with the full 360 degrees available, and the world that you'd be living in would look exactly the same in all directions on the surface. And that's the isotropy that's important to us, because the two dimensional surface here is what we're soon going to generalize to be our three dimensional world. And it's isotropy within that world that we're talking about. OK, so, the first thing we want to do is to put coordinates on our two dimensional surface. If we want to ultimately forget the third dimension and live in the surface, we want to have coordinates to use in the surface. It's a two dimensional surface, so it should have two coordinates. And when we use the usual coordinatization of a sphere, polar coordinates, well, it will be two angles, theta and phi. And there are some different conventions that are used in different books, but I think almost all physics books use these conventions. Theta is an angle measured from the z-axis, and phi is an angle measured by taking the point that you're trying to describe, which is that dot there, projecting it down into the xy plane, and in the xy plane, measuring the angle from the x-axis. So that's phi. And theta and phi are the polar coordinates describing a point on the surface of the sphere. And what we want to do is describe the distance function in terms of those polar coordinates-- that's our goal. OK, to describe the distance function, what we want to imagine is two infinitesimally nearby points-- one described by coordinates theta and phi, and one described by theta plus d theta and phi plus d phi. So we have one point described by theta and phi, and another point described by theta plus d theta, phi plus d phi. So the coordinate changes are just d theta and d phi. What we want to know is how much distance is undergone by moving from the first point to the second point. And the easiest way to see it is to make the changes one at a time. So first, we can just vary theta. And if we just vary theta, we see that the point described by theta and phi moves along a great circle, which goes through the z-axis. And the distance that the point goes is just an arc length as a piece of that great circle. And since this subtended angle is d theta, and the radius is r, the distance of that arc length really just follows from the definition of an angle in radians. The arc length is r times d theta, and that's really the definition of d theta in radians.
So if we vary theta only, the distance ds is just equal to r times d theta. Everybody happy with that? OK. Now if we vary phi, it's slightly more complicated, but not much. If we vary phi, the point being described would-- if you vary phi all the way around-- make a circle around the z-axis. That circle does not have radius r. That's the one thing that may be a little bit surprising, until you look at the picture and see that it's true. The radius of that circle is r times sine theta. So in particular, if theta were 0, if you're up around the North Pole, going around that circle would just be going around the point. 0 radius. And you have maximum radius when you're at the equator, going all the way around. So again, we're going in a circle through an angle-- in this case d phi. So the arc length is just the angle times the radius. But the radius is r times sine theta, not r itself. So when we vary phi only, ds is equal to r times sine theta times d phi. Any questions? OK, now, the next important thing to notice is that these two variations that we made are orthogonal to each other. When we varied phi, we moved in the horizontal plane. There's only motion in the x and y directions when we vary phi. When we vary theta, we move in the vertical direction. And those two vectors are orthogonal, as you can see from the diagram. And because we have two orthogonal distances that we're adding up, and because we're in an underlying Euclidean space here-- we can think of those distances as being distances in the three dimensional Euclidean space that we're embedded in-- we get to use the Pythagorean theorem. So putting together these two variations, we get ds squared is equal to an overall factor of r squared, times d theta squared plus sine squared theta, d phi squared. And that formula then describes the metric on the surface of a sphere. And it describes a non-Euclidean geometry. And once we have that metric, we can forget the three dimensional picture that we've been drawing, and just think of a world in which there are two coordinates-- theta and phi-- with that distance function. And that's the way we want to be able to think about it. OK, everybody happy? OK, I want to mention, because it is perhaps useful in other cases, depending on your taste of how you like to solve problems-- the description I just gave of deriving this formula was geometric, that is, we drew pictures and wrote down the answer based on visualizing the pictures. But it can also be done purely algebraically. To do it purely algebraically, one would first write down formulas that related the angular coordinates-- I should go back to slides. What's going on here? Is my computer frozen? [INAUDIBLE] Yes? AUDIENCE: For that formula, are you assuming that d theta and d phi are really small so that you can [INAUDIBLE] the triangle? PROFESSOR: Yes, these are infinitesimal separations only. That's the key idea of Gauss. And yes, we're making use of that. This formula will not hold if d theta and d phi were large angles. Holds only when they're infinitesimal. Yes? AUDIENCE: And then similarly, we can use the line integral for calculating the distances based on this metric? PROFESSOR: Yes, that's right. If we wanted to know the distances between two finitely separated points, we would construct a line between them, and then integrate along that line. And by line what we mean is the path of shortest distance, which we'll be learning about more next time, probably. Those are not necessarily easy to calculate.
In this case, they're calculable, but not really easy. Any other questions? OK, so what I wanted to do was to look at the definition of the coordinate system, as now shown on the screen. And from that, we can write down the relationship between x, y, and z, and theta and phi. And those relationships are that x is equal to r times sine theta times cosine phi. y is equal to r times sine theta times sine phi. And z is equal to r times cosine theta. And once one writes those formulas, then one can just use straightforward calculus to get the metric, without needing to draw any pictures at all. You may have wanted to draw pictures to get these formulas, but once you have these formulas, you can get that by straightforward calculus. I'll sketch the calculation without writing it out in full. But given this formula for x, we can write down what dx is by calculus, by chain rule. So dx-- it's a function of two variables. So it would be the partial of x, with respect to theta, times d theta, plus the partial derivative of x, with respect to phi, times d phi. And we'll now work out what these partial derivatives are. If I differentiate this with respect to theta, the sine theta turns into cosine theta. So the first term becomes r cosine theta, cosine phi, d theta. And then plus, from here we have the partial of x with respect to phi. We just differentiate this expression with respect to phi. The derivative of cosine phi is minus sine phi. So the plus sign becomes a minus sign-- r sine theta, sine phi d phi. And then I won't continue, but we could do the same thing for dy and dz. And once we have expressions for dx, dy, and dz, we can calculate ds squared, using the fact again that all of this is embedded in Euclidean space. That's where we're starting, although in the end, we want to forget that Euclidean space. But we could still make use of it here, and write ds squared is equal to dx squared, plus dy squared, plus dz squared. One can then plug in the expression that we have for dx, in terms of d theta and d phi, and the analogous expressions that I'm not writing down for dy and dz. And when one puts them in here, one makes lots of use of the identity that cosine squared plus sine squared equals 1. And after using that identity a number of times, what you get when you just put together this algebra is exactly what we had before-- r squared times d theta squared, plus sine squared theta, d phi squared. So the important point is that once you have the identities that relate these two different coordinate systems, and if you know the distance function in the xyz coordinate system, you're home free, as far as geometry is concerned. One could just use calculus from there on if one wants to. Although usually the geometric pictures make things easier. OK, any questions about that? OK, in that case, we are now ready to move on, having discussed the two dimensional surface of a sphere embedded in three Euclidean dimensions. The next thing I'd like to do-- and this really will be our closed universe cosmology-- we're going to discuss the three dimensional surface of a sphere in four Euclidean dimensions by analogy. The previous exercise was a warm-up. Now we want to introduce a four dimensional Euclidean space. So our sphere now will obey the equation x squared plus y squared plus z squared plus-- we need a new letter for our new dimension in our four dimensional space, and I'm using w. x, y, z, and w. So that equation describes a three dimensional sphere in a four dimensional Euclidean space.
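[Editorial aside, not part of the lecture: before carrying on to the four dimensional case, here is the purely algebraic route just described for the two dimensional sphere, done symbolically, assuming Python with sympy.]

```python
import sympy as sp

# Start from the embedding x, y, z in terms of theta and phi, form dx, dy, dz by
# the chain rule, and check that dx^2 + dy^2 + dz^2 collapses to
# r^2 (dtheta^2 + sin^2 theta dphi^2).
r, th, ph, dth, dph = sp.symbols('r theta phi dtheta dphi')

x = r * sp.sin(th) * sp.cos(ph)
y = r * sp.sin(th) * sp.sin(ph)
z = r * sp.cos(th)

dx = sp.diff(x, th) * dth + sp.diff(x, ph) * dph
dy = sp.diff(y, th) * dth + sp.diff(y, ph) * dph
dz = sp.diff(z, th) * dth + sp.diff(z, ph) * dph

ds2 = sp.expand(dx**2 + dy**2 + dz**2)
expected = r**2 * (dth**2 + sp.sin(th)**2 * dph**2)
print(sp.simplify(ds2 - expected))   # prints 0
```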
And our goal is to do the same thing to that equation that we just did to the two dimensional sphere embedded in a three dimensional space. Now of course it becomes much harder to visualize anything. If you ask how do we visualize things in four dimensions, I think probably the best answer is that we usually don't. And what we're going to take advantage of mainly is that once you know how to write things in terms of equations, you don't usually have to visualize things. Or if you do, you can usually get by, by visualizing subspaces of the full space. If you want to visualize the full sphere in some rational way, I think the crutch that I usually use, and that most people use, is if you have just one extra dimension, in this case w, try to think of w as a time coordinate. Even though it's not really a time coordinate, it still gives you a way of visualizing things. So if we think of w as a time coordinate here, the smallest possible value of w would be minus r. The maximum possible value of w would be plus r, consistent with this constraint, consistent with being on the sphere. So when w is equal to minus r, the other coordinates have to be 0 to be on the surface of the sphere. So you can think of that as a sphere that's just appearing at time minus r, with initially 0 radius. Then as w increases, x squared plus y squared plus z squared increases. So you could think of it as a sphere that starts at 0 size, gets bigger, gets as big as r in radius when w equals 0 and then gets smaller again and disappears. So that's one way of thinking of this four dimensional sphere. Yes? AUDIENCE: Is that kind of like looking at the cross sections of the sphere? PROFESSOR: Is that kind of like looking at the cross sections of the sphere? Exactly, yes. One is looking at cross sections of the sphere-- successive cross sections at successive values of w. And if you do it in succession, it makes w act like a time coordinate. But you're right. The fact that w is a time coordinate is kind of irrelevant. You could imagine just drawing them on a piece of paper in any order. And for any fixed value of w, you're seeing a cross section of what this sphere looks like. That is how the xyz coordinates behave for a fixed value of w. Now to coordinatize the surface of our sphere. Last time we used two coordinates, because we had a two dimensional surface. This time we're going to want to use three coordinates, because this is a three dimensional surface that we're describing. And that means that we need at least one new coordinate. And the new coordinate that I'm going to introduce will be another angle, which I'm going to call psi. And the angle I'm going to define, as in this diagram, is the angle from the new axis, the w axis. So psi is the angle of any arbitrary point to the w axis. And therefore the w coordinate itself is going to be r times cosine psi, just by projecting. And the square root of the sum of x squared plus y squared plus z squared is then the other component of that vector. And being on the sphere, it's easy to see that the square root of x squared plus y squared plus z squared has to be r times sine psi. OK, now we still need two more coordinates. This is only one coordinate-- we want to have three. But the two other coordinates will just be our old friends, theta and phi. We're going to keep theta and phi. And in order to keep them, what we'll imagine doing is that for any point on the surface of this three dimensional sphere of the four dimensional space, we could imagine just ignoring the w coordinate.
And then we have x, y, and z-coordinates, and we can just ask what are the values of theta and phi that would go with those x, y, and z-coordinates. So theta and phi are just defined by quote, "projecting" the original point into the three dimensional xyz space, which just means ignore the w coordinates, look at xyz, and ask what would be the angular coordinates, theta and phi, for those values of x, y, and z. And it's easy to take those words I just said and turn them into equations. We'd like to write down the analog of these equations. But we want to have four equations now that will specify x, y, z, and w as a function of our three angles, psi, theta, and phi. But that's not hard. I'll show you at the bottom with w. w we already said is just r times the cosine of psi. Just coming from the fact that psi is defined as the angle from the point to the w-axis, and that's enough as you see from the picture to imply that w is equal to r times cosine psi. The others, x, y, and z, really just follow by induction from what we already know. Each of these formulas will hold in the three dimensional subspace, except that r, the radius of a sphere in the three dimensional subspace, is not r anymore but is r times sine psi. So x is equal to r times sine psi times what it was already-- sine theta cosine phi. y is equal to r times sine psi times sine theta sine phi. And z is equal to r times sine psi times cosine theta. So I just take each of these equations and multiply them by sine psi to get the new equations. And you can straightforwardly check-- if we take x squared plus y squared plus z squared plus w squared here, making successive use of the identity that sine squared plus cosine squared equals 1, we'll be able to show that x squared plus y squared plus z squared plus w squared equals r squared, like it's supposed to. OK, so is everybody happy with this coordinatization? OK, I should mention, by the way, that if you ever have the need to describe a sphere in 26 dimensions or whatever, this process easily iterates, once you've known how to do it once. That is, every time you add a new dimension, you invent a new letter for the new axis. You define a new angle, which is the angle from that axis. And then the new coordinatization is just to set the new coordinate equal to r times the cosine of the new angle. And then take all the old equations and put in an extra factor of the sine of the new angle. And you got it. So you could do that as many times as you want, if you want to describe a very high dimensional sphere. We should say something about the range of these angles. The original angle phi-- maybe I should go back a few slides now. The original angle phi, as you can see from the slide, goes around the xy plane, so it has a range of 0 to 2 pi. The original angle theta is an angle from the z-axis, and the furthest you could ever be away from pointing towards an axis is pointing away from it. And that's pi, not 2 pi. So theta has a range of 0 to pi. And similarly, psi is also defined as an angle from an axis. So again, the furthest you could ever be away from pointing towards an axis is pointing away from it. So psi, like theta, will have a range of 0 to pi. And if you ever need to coordinatize a 26 dimensional sphere, as I just mentioned, you keep adding new angles. Each of the new angles goes from 0 to pi. That's because each new angle is introduced as an angle between the point that you're trying to describe and the new axis. So, they're all angles like theta and psi.
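[Editorial aside, not part of the lecture: the straightforward check mentioned above, done numerically, assuming Python with numpy; the radius r = 2 and the random angles are purely illustrative.]

```python
import numpy as np

# Check that the coordinatization lands on the three dimensional sphere:
# x^2 + y^2 + z^2 + w^2 should equal r^2 for any psi, theta, phi in range.
rng = np.random.default_rng(0)
r = 2.0   # illustrative radius

psi = rng.uniform(0.0, np.pi, 5)
theta = rng.uniform(0.0, np.pi, 5)
phi = rng.uniform(0.0, 2.0 * np.pi, 5)

x = r * np.sin(psi) * np.sin(theta) * np.cos(phi)
y = r * np.sin(psi) * np.sin(theta) * np.sin(phi)
z = r * np.sin(psi) * np.cos(theta)
w = r * np.cos(psi)

print(x**2 + y**2 + z**2 + w**2)   # every entry equals r^2 = 4
```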
So 0 is less than phi, is less than 2 pi. But 0 is-- these should be less than or equal to's-- 0 is less than or equal to theta, is less than or equal to pi. And 0 is less than or equal to psi, is less than or equal to pi. OK, next we want to get the metric of our three dimensional spherical surface embedded in our four dimensional Euclidean space. And I'm going to do it by the geometric sort of way. I'll try to just motivate the pieces. There'll be some cross here between actually algebra and geometry. Let's see, first I should mention that once you have this, you can get the answer by the same brute force process that we describe, but didn't really carry out here. That's tedious, but it's pretty well guaranteed to get you the right answer if you're careful enough, and does not require drawing any pictures or having any the visualization of the geometry, so it does have some advantages. But I will not do it that way. I will do it in a geometric sort of way, because I think it's easier to understand the geometric sort of way. So here goes. As we did over there, we will vary our coordinates one at a time, and then see how we can combine the different variations. Again we'll find that they're orthogonal to each other. So I'll be able to combine them just by adding the squares. But we don't necessarily know that the beginning. So let's start by varying psi, the new coordinate. And there, things are very simple because our new coordinate is just defined as the angle from an axis. So if we vary psi, the point in question just makes a circle around the origin, and the circle has radius, capital R. So the variation if I vary psi is just r times d psi. So ds is equal to r times d psi. What could be simpler? OK, now we'll imagine varying either theta or phi or both. And since we already pretty well understand this three dimensional subspace-- this is the previous problem-- I'm going to talk about varying them simultaneously. So vary theta and phi. Well then we know that ds squared is really given by this formula. We are just varying theta and phi in a three dimensional space. The fourth direction doesn't change in this case. But the radius involved is not what we originally called r. But the radius in the three dimensional space is r times sine psi. That's the square root of x squared plus y squared plus z squared. So what we get is ds squared is equal to R squared times sine squared psi times d theta squared plus sine squared theta, d phi squared. Everybody happy with that? OK, now I'm going to first jump ahead and then come back and justify what we're doing. But if these variations are orthogonal-- which I will argue shortly that they are, so I'm not doing this for nothing-- if the variations are orthogonal, then we just add the sum of squares using the generalized Pythagorean theorem. In this case, Pythagorean theorem in four dimensions-- that's four Euclidean dimensions, so we should be able to use it. So what we get for our final answer is ds squared is an overall factor of r squared, times d psi squared plus sine squared psi, times d theta squared, plus sine squared theta, d phi squared. I just added the sum of the squares. Now I need to justify this orthogonality. OK, to do that, let me introduce a vector notation in the four dimensional space of x,y z, and w. OK, we're justifying this in the Euclidean embedding space. And the Euclidean embedding space of these transformations are orthogonal. So let me imagine varying psi. 
And then I could construct a four dimensional vector dr-- I'll call it sub psi because it arises from varying psi. So this is the four dimensional vector that describes the motion of this point r as psi is varied. And first let me just give these components names. I'll call it-- rather than repeat the r, I'm just going to call this d psi sub x, d psi sub y, d psi sub z, and d psi sub w. This is just by definition. I'm just naming the components of that vector. And since the vector already has a subscript, I don't want to have two subscripts. So I've changed the name of the vector for writing its components. And similarly, here I will just vary one of these two angles, theta and phi. I'll just vary theta, and let you know that varying phi is no different, and you can easily see that. So if I vary theta, the variation of r when I vary theta will be dr sub theta. And its components I will just call d theta x, d theta y, d theta z, and d theta w. So these are just definitions. I haven't said any actual facts yet. But I've defined these two vectors, and given names to their components. And now we want to look at them and take their dot product. Their dot product is just a four dimensional Euclidean dot product. So the dot product is just d psi x times d theta x, plus d psi y times d theta y, plus d psi z times d theta z, plus d psi w times d theta w. Let me write it out. So this actually is now a fact about Euclidean geometry in four dimensions. The dot product of these two vectors is just equal to the product of their x components, plus the product of their y components, plus the product of their z components, plus the product of their w components. And now what we want to do is to look at this sum and argue that it's 0, because if they're orthogonal, the dot product of the two vectors should be 0. OK, so let me first look at the dr sub theta vector. OK, what do we know about it? Well, we know that when we vary theta, from these formulas, w does not change-- and we can easily see that from the picture as well. So d theta sub w equals 0. And since these are all products, that means that this last term vanishes, no matter what d psi w is. So we know we don't need to worry about that term. We only need to worry about these three terms. Now what do we know about those three terms? If you look at dr sub theta, and look at its three spatial components-- x, y, and z-- from here, we could see that varying theta does the same thing to x, y, and z as it did over here, except for a different overall factor out front. So in particular, what I want to point out is that varying theta does not change x squared plus y squared plus z squared. It leaves it constant. So if we think of the xyz space, we could imagine a sphere in the xyz space, and varying theta always causes a variation that's tangential to that sphere. It never moves in the radial direction. So the three vector, defined by d theta x, d theta y, d theta z, is tangential in the three dimensional subspace xyz. Just as it was when we didn't have a w coordinate. The w coordinate doesn't change anything here. So that's a little bit subtle. Are people happy with that? Do you know what I'm talking about? OK. Now we want to look at dr sub psi, and it will have a w component-- d psi sub w-- but we don't care about that. We know we don't care about it because that piece already dropped out of our expression. So we want to know what this vector looks like in the xyz space. We don't care about what it looks like in the w space. So in the xyz space, we can look at these formulas here.
As we vary psi, x, y, and z could change. But they all change by the same factor-- whatever factor psi changes by. The same sign psi appears in all three lines. So changing psi can only multiply x, y, and z all by the same factor. And what that means is that if you think of this geometrically in the xyz space, varying psi moves the point only in the radial direction. If you multiply all of the coordinates by a constant, you are just moving in the radial direction. So dr psi has the property that when we look at only its x, y, and z components-- d psi x d psi y, d psi z, it is radial in the three dimensional subspace. So, the sum of these three terms-- this is what we're trying to evaluate-- is the dot product of a radial vector and a tangential vector. And the dot product of a radial vector and the tangential vector will always be 0, because there are orthogonal to each other. Sorry for the overlap here, but equal 0, and that's because radial is perpendicular to tangential. OK, everybody happy with that? OK if so, we have important result now. We have derived the metric for the three dimensional surface of a four dimensional sphere embedded in four Euclidean dimensions. And that, in fact, is precisely the closed universe of cosmology. It's the homogeneous isotropic description of a closed universe. OK, next thing I want to point out is just a definition. An important feature of non Euclidean geometry and general relativity-- because they're connected to each other-- is that there never is a unique, useful coordinate system. And Euclidean spaces, there is a unique, useful coordinate system. It's the Cartesian system. Sometimes it's also useful to use polar coordinates or something else, but by and large the Cartesian coordinate system is the natural description of Euclidean spaces. And the coordinates of a Cartesian coordinate system really are distances. Once, however, you go from Euclidean geometry to non Euclidean geometry-- from flat spaces to curved spaces-- you're usually in a situation where there just is no natural coordinate system. When we invented this psi, theta, phi, we really made a number of arbitrary choices there. We could have defined things quite differently if we wanted to. So in general, one has to deal with the fact that the coordinates no longer represent distances, and therefore there's a lot of arbitrariness in the way you choose the coordinates in the first place. So in particular, we could think of psi equals zero as the center of our new coordinate system, with coordinates psi, theta, and phi. psi equals 0 corresponds to being along the w axis, so it's a unique point. When you say that psi is equal to 0, it no longer matters what theta and phi are. You're at the point w equals r and x, y, and z equals 0. So we can think of that as the origin of our new coordinate system. And we can think of then the value of psi as measuring how far we are from that origin. So psi will become our radial coordinate. So thinking of it as our sphere, psi equals 0 we might think of as the North Pole this sphere. But we're also going to think of it as the origin of psi, theta, phi space. And we will sometimes use other radial variables-- other variables for the distance from the origin. So in particular, another coordinate that's very commonly used is u, which is just defined to be the sine of the angle psi. And notice that the sine of the angle psi shows up in a lot of equations. So taking that as our natural variable is a reasonable thing to do on occasion. Both are useful. 
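As a check on the geometric derivation, the "brute force" algebraic route mentioned earlier can be carried out mechanically. Here is a minimal sketch of that calculation; it assumes Python with the sympy library is available, and it is not part of the lecture itself:

```python
# Brute-force check of the 3-sphere metric: substitute the embedding
# formulas x, y, z, w (psi, theta, phi) given above into
# ds^2 = dx^2 + dy^2 + dz^2 + dw^2 and simplify.
import sympy as sp

r, psi, theta, phi = sp.symbols('r psi theta phi', positive=True)
dpsi, dtheta, dphi = sp.symbols('dpsi dtheta dphi')

x = r * sp.sin(psi) * sp.sin(theta) * sp.cos(phi)
y = r * sp.sin(psi) * sp.sin(theta) * sp.sin(phi)
z = r * sp.sin(psi) * sp.cos(theta)
w = r * sp.cos(psi)

angles, diffs = (psi, theta, phi), (dpsi, dtheta, dphi)

def d(f):
    # total differential of f under psi -> psi + dpsi, etc.
    return sum(sp.diff(f, c) * dc for c, dc in zip(angles, diffs))

ds2 = sp.simplify(sp.expand(d(x)**2 + d(y)**2 + d(z)**2 + d(w)**2))
expected = r**2 * (dpsi**2 + sp.sin(psi)**2 * (dtheta**2
                  + sp.sin(theta)**2 * dphi**2))
print(sp.simplify(ds2 - expected))   # should print 0
```

The absence of cross terms such as dpsi*dtheta in the simplified result is the algebraic counterpart of the radial-versus-tangential orthogonality argument above. The same check could be repeated after the substitution u = sine psi that was just introduced.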
If we do use this, then we can rewrite the metric in terms of u, instead of psi. And in order to do that, we just have to know how du relates to d psi, because the metric is written in terms of the differentials of the coordinates. So that's easy to calculate. du would be equal to cosine of psi d psi. But if we're trying to rewrite the metric solely in terms of u, we don't want to have to divide or multiply by cosine psi, because that's written in terms of psi. But we could express cosine psi in terms of u, because if u equals sine psi, then cosine psi is the square root of 1 minus u squared. So I can rewrite this as the square root of 1 minus u squared times d psi. And then d psi squared, which is what appears in our metric, can be rewritten just using that as du squared divided by 1 minus u squared. And the full metric now, in terms of the u theta and psi coordinates, can be written as r squared times du squared over 1 minus u squared, plus u squared times d theta squared plus sine squared theta d phi squared. So this is another way of writing the metric for this three dimensional sphere embedded in four Euclidean dimensions. Any questions? Yes? AUDIENCE: The value of u doesn't uniquely determine a point on the sphere, right? Because [INAUDIBLE]. PROFESSOR: Very good point. In case you didn't hear the question, it was pointed out that the value of u, unlike the value of psi, is not uniquely indicate a point, because on the entire sphere, there are two points u for every value of sine psi-- one in the northern hemisphere and one in the southern hemisphere, if we think of hemispheres as dividing whether w is positive or negative. So in fact, that's right. If we use the u coordinate, we should remember that if we want to talk about the whole sphere, we should remember that the whole sphere is twice as big as what we see if we just let u vary between 0 and its maximum value, 1. u equals 1 corresponds to the equator of this sphere. Another point which I'd like to make now that we've written it this way is that writing it this way is the easiest way to see-- although we can see it in other ways as well-- that if u is very small, if we look right in the vicinity of the origin of our new coordinate system, if u is very small, 1 minus u squared is very close to 1. The square of a small quantity is extra small, so this denominator is extra close to 1. And that means that for very small values of u, what we have is du squared plus u squared times this quantity. And this is just polar coordinates in Euclidean space. So for u very small, we do see that we have a local Euclidean space. And that, if you might remember, was one of the key points about writing the metric as the sum of squares in the first place. And it's true about every point, although the coordinate system here only makes it obvious about the origin. But we know that the space actually is homogeneous, from the way we constructed it. So what's true about the origin is true about any point. So the coordinate system, the metric around any point, if you look close enough in the vicinity of the point, looks like a Euclidean space, which is what we expected from the very beginning, but we can see it very explicitly here. OK. So far, this is just geometry. But this will be a model for a homogeneous, isotropic universe. We know it's homogeneous, we know it's isotropic from the way we discussed it. 
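Written out, the u-form of the metric just described, together with its small-u limit, is

\[
ds^2 = r^2\left[\frac{du^2}{1-u^2} + u^2\left(d\theta^2 + \sin^2\theta\, d\phi^2\right)\right]
\;\;\xrightarrow{\;u\,\ll\,1\;}\;\;
r^2\left[du^2 + u^2\left(d\theta^2 + \sin^2\theta\, d\phi^2\right)\right],
\]

and the right-hand side is just the Euclidean metric in spherical polar coordinates (with radial coordinate r u), which is the local-flatness statement made above.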
When we write the metric this way, it's obvious that it's isotropic about the origin, because this construction we know is just polar coordinates, and we know polar coordinates don't really single out any direction, even though manifestly they look like they do, because you're measuring angles from the z-axis. But we know that really describes an isotropic sphere. It's not obvious that this formula describes a homogeneous space, because it makes it look like u equals 0 is a special point. But we do know that it did correspond to a homogeneous space from the way we constructed it. It really is just a three dimensional sphere embedded in four Euclidean dimensions. And if you think of it as the sphere, there clearly is no special point on the sphere. So the homogeneity is a feature of this metric, but a hidden feature. It's hard to see how you would transform coordinates to, for example, put a different point at the origin. But it is possible, and we know it's possible, because of the way we originally constructed. And if we had to do it, we go back to the original construction and actually do it. That is, if I told you I wanted some other point in that system to be the origin, you could trace it all way back to the four dimensional Euclidean space, and figure out how you have to do a rotation in that four dimensional Euclidean space to make the point that I told you I wanted to be the origin to actually be the origin. So you'd be able to do that. It would be some work, but you would in fact know how to do that. We really do understand that the space is homogeneous, which is guaranteed by our original construction. OK. Finish with the basic geometry-- one more chance to ask any questions about it. OK, next I want to go on now to talk about how this fits into general relativity. And here we are going to be confronting-- actually the only place we ever will confront-- the issue of how matter causes space to curve, which is the aspect of general relativity that we're not really going to do it all. So I'll basically just be giving you the answer. Although we will in fact know enough to narrow down the range of possible answers, pretty much. But there will be a fudge factor that I'll have to just tell you the right answer for. So what I want to do now is to make a connection between this formalism and the model that we already discussed of the expanding universe whose dynamics we derived using Newtonian mechanics. So using internet Newtonian mechanics, we introduced a scale factor a of t. And convinced ourselves that a dot over a squared is equal to 8 pi over 3 g rho, minus kc squared over a squared. And furthermore, that this a of t describes the relationship between physical distances and coordinate distances. Namely, if we have objects that are at rest in this expanding universe, comoving objects is the phrase usually used to describe that. If we have comoving objects, the comoving objects will sit at fixed coordinates in our coordinate system. And the distance between any two of them will be some fixed distance-- l sub c-- a coordinate distance. But the physical distance will vary with time, proportional to this scale factor. So the physical distance between any two points will be a function of time, which is the scale factor times the time-independent coordinate distance. 
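In symbols, the two Newtonian-model results being carried over are, in the same notation as before (scale factor a(t), mass density rho, and the constant k):

\[
\left(\frac{\dot a}{a}\right)^{2} = \frac{8\pi}{3}\,G\rho \;-\; \frac{kc^{2}}{a^{2}},
\qquad
\ell_{\rm phys}(t) = a(t)\,\ell_{c},
\]

where \(\ell_c\) is the fixed coordinate distance between two comoving objects and \(\ell_{\rm phys}\) is their physical distance.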
OK, if we look at our metric for this sphere, and say we're going to assume that this is going to be the metric that describes the space that we're describing over here, then clearly this r that sits out front rescales all of the distances. All the distances are proportional to r, just as all the distances here are proportional to a. So that could only work if r is proportional to a. So that's our key conclusion here-- that r is going to have to vary with time, and be proportional to a. But we can even say a little bit more than that, because we can look at dimensionality. And here comes in handy that I insisted from the beginning in introducing this idea of a notch. The notch helps us here to get this formula right. The units of r-- r is just distance-- so the units of r are just distance units-- and I'll pretend that we're using meters. It doesn't really matter what actual units we're using, so I'll call this m for meters. On the other hand, a of t comes from this formula, where physical distances are measured in meters, but coordinate distances are measured in notches, so a is meters per notch, as we've said many times before. So a of t is meters per notch. So that tells us something about this constant of proportionality-- it has to have the right units to turn meters per notch into meters. That is, it has to have units of notches. Where did we get notches from? The other thing that we know is that the little k that appears in the Friedman equation, which we know is a constant-- we also know it's a constant that has units of 1 over notches squared, as we worked out some time ago. Units of k is 1 over notches squared. And that's the only thing around that we can find that has units of notches, so we're going to use that to make the units turn out right in this equation. So to get the units right, we can write it as r of t is equal to some constant-- actually, it's more convenient to square this-- r squared of t is equal to some constant times a squared of t, divided by k. And now that constant is dimensionless. The units are all built into the a's and the k's and everything else. AUDIENCE: Is the constant multiplied by that, or-- PROFESSOR: Yes. That equals sign was a big mistake. Constant times a squared over k. And that constant is now dimensionless. Because this is meters squared, this is meters squared per notch squared, and this is just per notch squared, so the notches cancel over here. OK, now what this constant is really is the statement of how curved is our space-- r is really a measure the curvature of our space-- how curved is our space for a given description of what the matter is doing? a of t is directly related to the [INAUDIBLE] calculations you've already done. So this clearly is a formula of exactly the type that I told you we weren't going to learn how to deal with. We're not capable in this course of describing the Einstein field equations, which determine how matter causes a space to curve. So I'll just tell you the answer. The answer is that this constant is equal to 1. So it's certainly a simple answer, but we won't be able to derive it. So we end up with just r squared of t is equal to a squared of t, divided by k. Now it may be useful at this point to remind ourselves what little k meant in the first place. We introduced little k in the context of describing a purely Newtonian model of an expanding universe, where we imagined just a finite sphere of matter expanding. 
And in doing that, we defined k to be equal to minus 2 e divided by c squared, where e itself was not a quantity that we proved was conserved. And it's related to an energy, but as we discussed, there are various ways that you could relate it to the energies of different pieces of the system. But e was given by that expression. And if I put this into that, just to see more clearly how our discussion relates to our Newtonian discussion, we get r squared is equal to a squared of t times c squared over 2 e. And I wrote it this way mainly to illustrate, or demonstrate, an important point, which is that our calculation was non-relativistic, but there's a c squared appearing in this formula. This c squared really just arose from our definitions. And if we had some other quantity here-- we need something with units of meters per second for this formula to come out right-- if we put some other velocity here, then this constant would not be 1 anymore, but something else. So saying that this constant is 1 is saying that this formula is meaningful. Putting the c squared there simplifies things. And that in turn means that the curvature really is a relativistic effect. OK, we think of relativistic effects as effects that disappear as the speed of light goes to infinity. So this formula tells us that as the speed of light goes to infinity-- for fixed values of things like the mass density, which are buried in a squared of t-- as the speed of light goes to infinity for fixed values of the mass density, r squared goes to infinity. Now infinity may sound like it's backwards, but it's the right way. 1 over r is really the curvature. r is the radius of curvature of the space. As r goes to infinity, our curved space looks more and more flat. So we're saying that if you could imagine varying the speed of light, as you made the speed of light larger and larger, this space would become flatter and flatter. So this curvature of the space really is a relativistic effect, which is related to the fact that the speed of light is finite and not infinite. Yes? AUDIENCE: Sorry, when we replace a with that, are we missing a minus sign? PROFESSOR: Oh we might be, yeah. Minus sign now fixed. The point is that for the case we're talking about, e would be negative and k would be positive. So this formula needs an absolute value sign in it. Thank you. OK, it may also be useful to relate r more directly to astronomical observables, which we can do, because we have the Friedmann equation up there, which relates a to rho. And a dot over a is also the Hubble expansion rate, so that's h squared. So this formula tells exactly how to write a in terms of rho and h squared. And in fact, it tells us how to write a over the square root of k in terms of rho and H squared. And that's exactly what r is, since r squared is a squared over k. So putting those equations together, we could write r is equal to c times the inverse of the Hubble expansion rate, over the square root of omega minus 1. Where omega, as usual, is equal to rho divided by rho sub c. And rho sub c is 3 h squared over 8 pi g, g being Newton's constant. So this formula says two things. It says that the radius of curvature becomes infinite if c were infinite. And that says what I already said, that's a relativistic effect. It also tells you the radius of curvature goes to infinity as omega goes to 1. So omega approaching 1 is the flat universe case, which is what we've already mumbled about, but this formula shows it very directly.
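As a purely illustrative numerical sketch (not from the lecture), the last formula can be evaluated for a few values of |Omega - 1|. The Hubble constant below is the Planck-style value quoted elsewhere in these lectures; the specific |Omega - 1| values are just sample inputs:

```python
# Radius of curvature R = (c / H0) / sqrt(|Omega - 1|), in light-years.
c_km_s = 2.998e5            # speed of light, km/s
H0 = 67.3                   # Hubble constant, km/s/Mpc (illustrative value)
Mpc_in_ly = 3.26e6          # light-years per megaparsec

hubble_length_ly = (c_km_s / H0) * Mpc_in_ly    # c/H0, about 1.45e10 ly

for omega_minus_1 in (0.005, 0.01, 0.2):
    R_curv = hubble_length_ly / abs(omega_minus_1) ** 0.5
    print(f"|Omega - 1| = {omega_minus_1:5.3f}  ->  R ~ {R_curv:.2e} light-years")
# For |Omega - 1| of about half a percent this gives R of order 2e11
# light-years, far larger than the Hubble length c/H0 -- i.e. a space
# that is very nearly flat.
```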
As omega approaches 1, for a fixed value of h and a fixed value of the speed of light, the radius of curvature goes to infinity. The space becomes more and more flat. Look, I'm just going to write one more formula, which is really just a redefinition, but an important redefinition as far as where we're going to be going next. And then we'll finish today's lecture, and continue next week. What I wanted to do is to put these definitions back into the metric itself. So we can write ds squared is equal to a squared of t divided by k-- which is what we previously called r squared-- times du squared over 1 minus u squared, plus u squared times d theta squared plus sine squared theta d phi squared. And now what I want to do is make one further redefinition of this radial variable, which, remember, initially was psi. Then we let u be equal to the sine of psi. Now I'm going to make one further substitution. I'm going to let little r be equal to u divided by the square root of k, to bring this k inside. And that then is also equal to sine of psi, divided by the square root of k. And when we do that, the metric takes a slightly simpler form. ds squared is equal to a squared of t all by itself on the outside now. And then this, after we factor in the k, becomes dr squared over 1 minus k r squared, plus r squared times d theta squared plus sine squared theta, d phi squared. And this is the form that the metric is usually written in. It's called the Robertson-Walker metric. So we've only discussed closed universes. I had hoped to discuss closed and open, but open will in fact follow very quickly from what we already have. So we'll begin next time by discussing open universes.
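For reference, the final form quoted above--the Robertson-Walker metric for the closed case, with the rescaled radial coordinate r = u / sqrt(k) = sin(psi) / sqrt(k)--is, written out in one line:

\[
ds^{2} = a^{2}(t)\left[\frac{dr^{2}}{1-kr^{2}} + r^{2}\left(d\theta^{2} + \sin^{2}\theta\, d\phi^{2}\right)\right].
\]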
MIT_8286_The_Early_Universe_Fall_2013
21_Problems_of_the_Conventional_Noninflationary_Hot_Big_Bang_Model.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: OK, in that case, let's take off. There's a fair amount I'd like to do before the end of the term. First, let me quickly review what we talked about last time. We talked about the actual supernovae data, which gives us brightness as a function of redshift for very distant objects and produced the first discovery that the model is not fit very well by the standard called dark matter model, which would be this lower line. But it fits much better by this lambda CDM, a model which involves a significant fraction-- 0.76 was used here-- and significant fraction of vacuum energy along with cold, dark matter and baryons. And that was a model that fit the data well. And this, along with similar data from another group of astronomers, was a big bombshell of 1998, showing that the universe appears to not be slowing down under the influence of gravity but rather to be accelerating due to some kind of repulsive, presumably, gravitational force. We talked about the overall evidence for this idea of an accelerating universe. It certainly began with the supernovae data that we just talked about. The basic fact that characterizes that data is that the most distant supernovae are dimmer than you'd expect by 20% to 30%. And people did cook up other possible explanations for what might have made supernovae at certain distances look dimmer. None of those really held up very well. But, in addition, several other important pieces of evidence came in to support this idea of an accelerating universe. Most important, more precise measurements of the cosmic background radiation anisotropies came in. And this pattern of antisotropies can be fit to a theoretical model, which includes all of the parameters of cosmology, basically, and turns out to give a very precise setting for essentially all the parameters of cosmology. They now really all have their most precise values coming out of these CMB measurements. And the CMB measurements gave a value of omega [? vac ?], which is very close to what we'll get from the supernovae, which makes it all look very convincing. And furthermore, the cosmic background radiation shows that omega total is equal to 1 to about 1/2% accuracy, which is very hard to account for if one doesn't assume that there's a very significant amount of dark energy. Because there just does not appear to be nearly enough of anything else to make omega equal to 1. And finally, we pointed out that this vacuum energy also improves the age calculations. Without vacuum energy, we tend to define that the age of the universe as calculated from the Big Bang theory always ended up being a little younger than the ages of the oldest stars, which didn't make sense. But with the vacuum energy, that changes the cosmological calculation of the age producing older ages. So with vacuum energy of the sort that we think exists, we get ages like 13.7 or 13.8 billion years. And that's completely consistent with what we think about the ages of the oldest stars. So everything fits together. So by now, I would say that with these three arguments together, essentially, everybody is convinced that this acceleration is real. I do know a few people who aren't convinced, but they're oddballs. Most of us are convinced. 
And the simplest explanation for this dark energy is simply vacuum energy. And every measurement that's been made so far is consistent with the idea of vacuum energy. There is still an alternative possibility which is called quintessence, which would be a very slowly evolving scalar field. And it would show up, because you would see some evolution. And so far nobody has seen any evolution of the amount of dark energy in the universe. So that's basically where things stand as far as the observations of acceleration of dark energy. Any questions about that? OK, next, we went on to talk about the physics of vacuum energy or a cosmological constant. A cosmological constant and vacuum energy are really synonymous. And they're related to each other by the energy entering the vacuum being equal to this expression, where lambda is what Einstein originally called the cosmological constant and what we still call the cosmological constant. We discussed the fact that there are basically three contributions in a quantum field theory to the energy of a vacuum. We do not expect it to be zero, because there are these complicated contributions. There are, first of all, the quantum fluctuations of the photon and other Bosonic fields, Bosonic fields meaning particles that do not obey the Pauli exclusion principle. And that gives us a positive contribution to the energy, which is, in fact, divergent. It diverges because every standing wave contributes. And there's no lower bound to the wavelength of a standing wave. So by considering shorter and shorter wavelengths, one gets larger and larger contributions to this vacuum energy. And in the quantum field theory, it's just unbounded. Similarly, there are quantum fluctuations to other fields like the electron field which is a Fermionic field, a field that describes a particle that obeys the Pauli exclusion principle. And those fields behave somewhat differently. Like the photon, the electron is viewed as the quantum excitation of this field. And that turns out to be by far, basically, the only way we know to describe relativistic particles in a totally consistent way. In this case, again, the contribution to the vacuum energy is divergent. But in this case, it's negative and divergent, allowing possibilities of some kind of cancellation, but no reason that we know of why they should cancel each other. They seem to just be totally different objects. And, finally, there are some fields which have nonzero values in the vacuum. And, in particular, the Higgs field of the standard model is believed to have a nonzero value even in the vacuum. So this is the basic story. We commented that if we cut off these infinities by saying that we don't understand things at very, very short wavelengths, at least one plausible cut off would be the Planck scale, which is the scale associated with where we think quantum gravity becomes important. And if we cut off at this Planck scale, the energies become finite but still too large compared to what we observe by more than 120 orders of magnitude. And on the homework set that's due next Monday, you'll be calculating this number for yourself. It's a little bit more than 120 orders of magnitude. So it's a colossal failure indicating that we really don't understand what controls the value of this vacuum energy. 
And I think I mentioned last time, and I'll mention it a little more explicitly by writing it on the transparency this time, that the situation is so desperate in that we've had so much trouble trying to find any way of explaining why the vacuum energy should be so small that it has become quite popular to accept the possibility, at least, that the vacuum energy is determined by what is called the anthropic selection principal or anthropic selection effect. And Steve Weinberg was actually one of the first people who advocated this point of view. I'm sort of a recent convert to taking this point of view seriously. But the idea is that there might be more than one possible type of vacuum. And, in fact, string theory comes in here in an important way. String theory seems to really predict that there's a colossal number of different types of vacuum, perhaps 10 to the 500 different types of vacuum or more. And each one would have its own vacuum energy. So with that many, some of them would have a, by coincidence, near perfect cancellation between the positive and negative contributions producing a net vacuum energy that could be very, very small. But it would be a tiny fraction of all of the possible vauua, a fraction like 10 to the minus 120, since we have 120 orders of magnitude mismatch of these ranges. So you would still have to press yourself to figure out what would be the explanation why we should be living in such an atypical vacuum. And the proposed answer is that it's anthropically selected, where anthropic means having to do with life. Whereas, the claim is made that life only evolves in vacuua which have incredibly small vacuum energies. Because if the vacuum energy is much larger, if it's positive, it blows the universe apart before structures can form. And it it's negative, it implodes the universe before there's time for structures to form. So a long-lived universe requires a very small vacuum energy density. And the claim is that those are the only kinds of universes that support life. So we're here because it's the only kind of universe in which life can exist is the claim. Yes? AUDIENCE: So, different types of energies, obviously, affect the acceleration rate and stuff of the universe. But do they also affect, in any way, the fundamental forces, or would those be the same in all of the cases? PROFESSOR: OK, the question is, would the different kinds of vacuum affect the kinds of fundamental forces that exist besides the force of the cosmological constant on the acceleration of the universe itself? The answer is, yeah, it would affect really everything. These different vacuua would be very different from each other. They would each have their own version of what we call the standard model of particle physics. And that's because the standard model of particle physics would be viewed as what happens when you have small perturbations about our particular type of vacuum. And with different types of vacuum you get different time types of small perturbations about those vaccua. So the physics really could be completely different in all the different vacuua that string theory suggests exist. So the story here, basically, is a big mystery. Not everybody accepts these anthropic ideas. They are talked about. At almost any cosmology conference, there will be some session where people talk about these things. They are widely discussed but by no means completely agreed upon. And it's very much an open question, what it is that explains the very small vacuum energy density that we observe. 
OK, moving on, in the last lecture I also gave a quick historical overview of the interactions between Einstein and Friedmann, which I found rather interesting. And just a quick summary here, in 1922 June 29, to be precise, Alexander Friedmann's first paper about the Friedmann equations and the dynamical model of the universe were received at [INAUDIBLE]. Einstein learned about it and immediately decided that it had to be wrong and fired off a refutation claiming that Friedmann had gotten his equations wrong. And if he had gotten them right, he would have discovered that rho dot, the rate of change of the energy density, had to be zero and that there was nothing but the static solution allowed. Einstein then met a friend of Friedmann's Yuri Krutkov at a retirement lecture by Lawrence in Leiden the following spring. And Krutkov convinced Einstein that he was wrong about this calculation. Einstein had also received a letter from Friedmann, which he probably didn't read until this time, but the letter was apparently also convincing. So Einstein did finally retract. And at the end of May 1923, his refraction was received at Zeitschrift fur Physik. And another interesting fact about that is that the original handwritten draft of that retraction still exists. And it had the curious comment, which was crossed out, where Einstein suggested that the Friedmann solutions could be modified by the phrase, "a physical significance can hardly be ascribed to them." But at the last minute, apparently, Einstein decided he didn't really have a very good foundation for that statement and crossed it out. So I like the story, first of all, because it illustrates that we're not the only people who make mistakes. Even great physicists like Einstein make really silly mistakes. It really was just a dumb calculational error. And it also, I think, points out how important it is not to allow yourself to be caught in the grip of some firm idea that you cease to question, which apparently is what happened to Einstein with his belief that the universe was static. He was so sure that the universe was static that he very quickly looked at Friedmann's paper and reached the incorrect conclusion that Friedmann had gotten his calculus wrong. In fact, it was Einstein who got it wrong. So that summarizes the last lecture. Any further questions? OK, in that case, I think I am done with that. Yeah, that comes later. OK, what I want to do next is to talk about two problems associated with the conventional cosmology that we've been learning about and, in particular, I mean cosmology without inflation, which we have not learned about yet. So I am talking about the cosmology that we've learned about so far. So there are a total of three that I want to discuss problems associated with conventional cosmology which serve as motivations for the inflationary modification that, I think, you'll be learning about next time from Scott Hughes. But today I want to talk about the problems. So the first of 3 is sometimes called the homogeneity, or the horizon problem. And this is the problem that the universe is observed to be incredibly uniform. And this uniformity shows up most clearly in the cosmic microwave background radiation, where astronomers have now made very sensitive measurements of the temperature as a function of angle in the sky. And it's found that that radiation in uniform to one part in 100,000, part in 10 to the 5. 
Now, the CMB is essentially a snapshot of what the universe looked like at about 370,000 years after the Big Bang at the time that we call t sub d, the time of decoupling. Yes? AUDIENCE: This measurement, the 10 to the 5, it's not a limit that we've reached measurement technique-wise? That's what it actually is, [INAUDIBLE]? PROFESSOR: Yes, I was going to mention that. We actually do see fluctuations at the level of one part in 10 to the five. So it's not just a limit. It is an actual observation. And what we interpret is that the photons that we're seeing in the CMB have been flying along on straight lines since the time of decoupling. And therefore, what they show us really is an image of what the universe look like at the time of decoupling. And that image is an image of the universe which is almost a perfectly smooth mass density and a perfectly smooth temperature-- it really is just radiation-- but tiny ripples superimposed on top of that uniformity where the ripples have an amplitude of order of 10 to the minus 5. And those ripples are important, because we think that those are the origin of all structure in the universe. The universe is gravitationally unstable where there's a positive ripple making the mass density slightly higher than average. That creates a slightly stronger than average gravitational field pulling in extra matter, creating a still stronger gravitational field. And the process cascades until you ultimately have galaxies and clusters of galaxies and all the complexity in the universe. But it starts from these seeds, these minor fluctuations at the level of one part in 10 to the five. But for now we want to discuss is simply the question of how did we get so uniform. We'll talk about how the non-uniformities arise later in the context of inflation. The basic picture is that we are someplace. I'll put us here in a little diagram. We are receiving photons, say, from opposite directions in the sky. Those little arrows represent the incoming patterns of two different CMB photons coming from opposite directions. And what I'm interested in doing to understand the situation with regard to this uniformity is I'm interested in tracing these photons back to where they originated at time t sub d. And I want to do that on both sides. But, of course, it's symmetric. So I only need to do one calculation. And what I want to know is how far apart were these two points. Because I want to explore the question of whether or not this uniformity in temperature could just be mundane. If you let any object sit for a long time, it will approach a uniform temperature. That's why pizza gets cold when you take it out of the oven. So could that be responsible for this uniformity? And what we'll see is that it cannot. Because these two points are just too far apart for them to come in to thermal equilibrium by ordinary thermal equilibrium processes in the context of the conventional big bang theory. So we want to calculate how far apart these points were at the time of emission. So what do we know? We know that the temperature at the time of decoupling was about 3,000 Kelvin, which is really where we started with our discussion of decoupling. We did not do the statistical mechanics associated with this statement. But for a given density, you can calculate at what temperature hydrogen ionizes. And for the density that we expect for the early universe, that's the temperature at which the ionization occurs. So that's where decoupling occurs. 
That's where it becomes neutral as the universe expands. We also know that during this period, aT, the scale factor times the temperature, is about equal to a constant, which follows as a consequence of conservation of entropy, the idea that the universe is near thermal equilibrium. So the entropy does not change. Then we can calculate the z for decoupling, because it would just be the ratio of the temperatures. It's defined by the ratio of the scale factors. This defines what you mean by 1 plus z decoupling. But if aT is about equal to a constant, we can relate this to the temperatures inversely. So the temperature of decoupling goes in the numerator. And the temperature today goes into the denominator. And, numerically, that's about 1,100. So the z of the cosmic background radiation is about 1,100, vastly larger than the red shifts associated with observations of galaxies or supernovae. From the z, we can calculate the physical distance today of these two locations, because this calculation we already did. So I'm going to call l sub p the physical distance between us and the source of this radiation. And its value today-- I'm starting with this formula simply because we already derived it on a homework set-- it's 2c h naught inverse times 1 minus 1 over the square root of 1 plus z. And this calculation was done for a flat matter dominated universe, flat matter dominated. Of course, that's only an approximation, because we know our real universe was matter dominated at the start of this period. But it did not remain matter dominated through to the present-- about 5 or 6 billion years ago we switched to a situation where the dark energy is actually larger than the non-relativistic matter. So we're ignoring that effect, which means we're only going to get an approximation here. But it will still be easily good enough to make the point. For a z this large, this factor is a small correction. I think this ends up being 0.97, or something like that, very close to 1, which means what we're getting is very close to 2c h naught inverse, which is the actual horizon. The horizon corresponds to z equals infinity. If you think about it, that's what you expect the horizon to be. It corresponds to infinite red shift. And you don't see anything beyond that. So if we take the best value we have for h naught, which I'm taking from the Planck satellite, 67.3 kilometers per second per megaparsec, and put that and the value for z into this formula, we get l sub p of t naught of about 28.2 times 10 to the 9 light years-- 28.2 billion light years. So it's of course larger than ct as we expect. It's basically 3ct for a matter dominated universe. And 3ct is the same as 2c h naught inverse. Now, what we want to know, though, is how far away was this when the emission occurred, not the present distance. We looked at the present distance simply because we had a formula for it from our homework set. But we know how to extrapolate that backwards, to l sub p at time t sub d. Distances that are fixed in co-moving space, which these are, are just stretched with the scale factor. So this will just be the scale factor at the time of decoupling divided by the scale factor today times the present distance. And this is, again, given by this ratio of temperatures. So it's 1 over 1,100, the inverse of what we had over there. So the separation at this early time is just 1,100 times smaller than the separation today. And that can be evaluated numerically. And it gives us 2.56 times 10 to the seven light years, so about 26 million light years.
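Here is a quick numerical check of this arithmetic, anticipating the horizon comparison made in the next paragraph. It is not part of the lecture; it simply plugs in the numbers quoted in the text (H0 = 67.3 km/s/Mpc, 1 + z of about 1100, and a decoupling time of about 370,000 years):

```python
import math

c_km_s = 2.998e5               # speed of light, km/s
H0 = 67.3                      # km/s/Mpc
Mpc_in_ly = 3.26e6             # light-years per megaparsec
z_dec = 1100.0                 # redshift of decoupling
t_dec_yr = 3.7e5               # time of decoupling, years

hubble_length_ly = (c_km_s / H0) * Mpc_in_ly        # c/H0, ~1.45e10 ly
lp_now = 2.0 * hubble_length_ly * (1.0 - 1.0 / math.sqrt(1.0 + z_dec))
lp_dec = lp_now / (1.0 + z_dec)                     # shrink by the redshift
horizon_dec = 3.0 * t_dec_yr                        # 3*c*t_d in light-years
                                                    # (c * 1 yr = 1 light-year)
print(f"l_p today         ~ {lp_now:.3e} ly")       # ~2.8e10 ly
print(f"l_p at decoupling ~ {lp_dec:.3e} ly")       # ~2.6e7 ly
print(f"horizon at t_d    ~ {horizon_dec:.3e} ly")  # ~1.1e6 ly
print(f"ratio             ~ {lp_dec / horizon_dec:.0f}")  # ~23
```

The ratio of about 23 for each photon's source--so about 46 horizon distances between the two sources--is the number used just below.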
Now, the point is that that's significantly larger than the horizon distance at that time. And remember, the horizon distance is the maximum possible distance that anything can travel limited by the speed of light from the time of the big bang up to any given point in cosmic history. So the horizon at time t sub d is just given by the simple formula that the physical value of the horizon distance, l sub h phys, l sub horizon physical, at time t sub d is just equal to, for a matter dominated universe, 3c times t sub d. And that we can evaluate, given what we have. And it's about 1.1 times 10 to the sixth light years, which is significantly less than 2.56 times 10 to the seven light years. And, in fact, the ratio of the two, given these numbers, is that l sub p of t sub d over l sub h is also of t sub d is about equal to 23, just doing the arithmetic. And that means if we go back to this picture, these two points of emission were separated from each other by about 46 horizon distances. And that's enough to imply that there's no way that this point could have known anything whatever about what was going on at this point. Yet somehow they knew to emit these two photons at the same time at the same temperature. And that's the mystery. One can get around this mystery if one simply assumes that the singularity that created all of this produced a perfectly homogeneous universe from the very start. Since we don't understand that singularity, we're allowed to attribute anything we want to it. So in particular, you can attribute perfect homogeneity to the singularity. But that's not really an explanation. That's an assumption. So if one wants to be able to explain this uniformity, then one simply cannot do it in the context of conventional cosmology. There's just no way that causality, the limit of the speed of light, allows this point to know anything about what's going on at that point. Yes? AUDIENCE: How could a singularity not be uniform? Because If it had non-uniform [INAUDIBLE], then not be singular? PROFESSOR: OK, the question is how can a singularity not be uniform? The answer is, yes, singularities can not be uniform. And I think the way one can show that is a little hard. But you have to imagine a non-uniform thing collapsing. And then it would just be the time reverse, everything popping out of the singularity. So you can ask, does a non-uniform thing collapse to a singularity? And the answer to that question is not obvious and really was debated for a long time. But there were theorems proven by Hawking and Penrose that indeed not only do the homogeneous solutions that we look at collapse but in homogeneous solutions also collapse to singularities. So a singularity does not have to be uniform. OK, so this is the story of the horizon problem. And as I said, you can get around it if you're willing to just assume that the singularity was homogeneous. But if you want to have a dynamical explanation for how the uniformity of the universe was established, then you need some model other than this conventional cosmological model that we've been discussing. And inflation will be such a twist which will allow a solution to this problem. OK, so if there are no questions, no further questions, we'll go on to the next problem I want to discuss, which is of a similar nature in that you can get around it by making strong assumptions about the initial singularity. 
But if one wants, again, something you can put your hands on, rather than just an assumption about a singularity, then inflation will do the job. But you cannot solve the problem in the context of a conventional big bang theory, because the mechanics of the conventional big bang theory are simply well-defined. So what I want to talk here is what is called the flatness problem, where flatness is in the sense of Omega very near 1. And this is basically the problem of why is Omega today somewhere near 1? So Omega naught is the present value of Omega, why is it about equal to 1? Now, what do we know first of all about it being about equal to 1? The best value from the Planck group, this famous Planck satellite that I've been quoting a lot of numbers from-- and I think in all cases, I've been quoting numbers that they've established combining their own data with some other pieces of data. So it's not quite the satellite alone. Although, they do give numbers for the satellite alone which are just a little bit less precise. But the best number they give for Omega naught is minus 0.0010 plus or minus 0.0065. Oops, I didn't put enough zeroes there. So it's 0.0065 is the error. So the error is just a little bit more than a half of a percent. And as you see, consistent with-- I'm sorry, I meant this to be 1. Hold on. This is Omega naught minus 1 that I'm writing a formula for. So Omega naught is very near 1 up to that accuracy. What I want to emphasize in terms of this flatness problem is that you don't need to know that Omega naught is very, very close to 1 today, which we now do know. But even back when inflation was first invented around 1980, in circa 1980 we certainly didn't know that Omega was so incredibly close to 1. But we did know that Omega was somewhere in the range of about 0.1 and 0.2, which is not nearly as close to 1 as what we know now, but still close to 1. I'll argue that the flatness problem exists for these numbers almost as strongly as it exists for those numbers. Differ, but this is still a very, very strong argument that even a number like this is amazingly close to 1 considering what you should expect. Now, what underlies this is the expectations, how close should we expect Omega to be to 1? And the important underlying piece of dynamics that controls this is the fact that Omega equals 1 is an unstable equilibrium point. That means it's like a pencil balancing on its tip. If Omega is exactly equal to 1, that means you have a flat universe. And an exactly flat universe will remain an exactly flat universe forever. So if Omega is exactly equal to 1, it will remain exactly equal to 1 forever. But if Omega in the early universe were just a tiny bit bigger than 1-- and we're about to calculate this, but I'll first qualitatively describe the result-- it would rise and would rapidly reach infinity, which is what it reaches if you have a closed universe when a closed universe reaches its maximum size. So Omega becomes infinity and then the universe recollapses. So if Omega were bigger than 1, it would rapidly approach infinity. If Omega in the early universe were just a little bit less than 1, it would rapidly trail off towards 0 and not stay 1 for any length of time. So the only way to get Omega near 1 today is like having a pencil that's almost straight up after standing there for 1 billion years. It'd have to have started out incredibly close to being straight up. It has to have started out incredibly close to Omega equals 1. And we're going to calculate how close. 
So that's the set-up. So the question we want to ask is how close did Omega have to be to 1 in the early universe to be in either one of these allowed ranges today. And for the early universe, I'm going to take t equals one second as my time at which I'll do these calculations. And, historically, that's where this problem was first discussed by Dicke and Peebles back in 1979. And the reason why one second was chosen by them, and why it's a sensible time for us to talk about as well, is that one second is the beginning of the processes of nucleosynthesis, which you've read about in Weinberg and in Ryden, and provides a real test of our understanding of cosmology at those times. So we could say that we have real empirical evidence in the statement that the predictions of the chemical elements work. We could say that we have real empirical evidence that our cosmological model works back to one second after the Big Bang. So we're going to choose one second for the time at which we're going to calculate what Omega must have been then for it to be in the allowed range today. How close must Omega have been to 1 at t equals 1 second? Question mark. OK, now, to do this calculation, you don't need to know anything that you don't already know. It really follows as a consequence of the Friedmann equation and how matter and temperature and so on behave with time during radiation and matter dominated eras. So we're going to start with just the plain old first order Friedmann equation, h squared is equal to 8 pi over 3 g Rho minus kc squared over a squared, which you have seen many, many times already in this course. We can combine that with other equations that you've also seen many times. The critical density is just the value of the density when k equals 0. So you just solve this equation for Rho. And you get 3 h squared over 8 pi g. This defines the critical density. It's that density which makes the universe flat, k equals 0. And then our standard definition is that Omega is just defined to be the actual mass density divided by the critical mass density. And Omega will be the quantity we're trying to trace. And we're also going to make use of the fact that during the era that we're talking about, aT is essentially equal to a constant. It does change a little bit when electron and positron pairs freeze out. It changes by a factor of something like 4/11 to the 1/3 power or something like that. But that factor will be of order one for our purposes. But I guess this is a good reason why I should put a squiggle here instead of an equal sign, as an approximate equality, but easily good enough for our purposes, meaning corrections of order one. We're going to see the problem is much, much bigger than order one. So a correction of order one doesn't matter. Now, I'm going to start by using the Planck satellite limits. And at the end, I'll just make a comment about the circa 1980 situation. But if we look at the Planck limits-- I'm sorry. Since I'm going to write an equation for a peculiar looking quantity, I should motivate the peculiar looking quantity first. It turns out to be useful for these purposes. And this purpose means we're trying to track how Omega changes with time. It turns out to be useful to reshuffle the Friedmann equation. It is just an algebraic reshuffling of the Friedmann equation and the definitions that we have here. We can rewrite the Friedmann equation as Omega minus 1 over Omega is equal to a quantity called A times the temperature squared over Rho.
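For reference, the ingredients just listed are, in symbols,

\[
H^{2} = \frac{8\pi}{3}G\rho - \frac{kc^{2}}{a^{2}},
\qquad
\rho_{c} \equiv \frac{3H^{2}}{8\pi G},
\qquad
\Omega \equiv \frac{\rho}{\rho_{c}},
\]

and the reshuffled form of the Friedmann equation is

\[
\frac{\Omega - 1}{\Omega} = A\,\frac{T^{2}}{\rho},
\]

with the constant A defined in the next step.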
Now, the temperature didn't even occur in the original equation. So things might look a little bit suspicious. I haven't told you what A is yet. A is 3k c squared over 8 pi g a squared T squared. So when you put the A into this equation, the T squareds cancel. So the equation doesn't really have any temperature dependence. But I factored things this way, because we know that a times T is approximately a constant. And that means that this capital A, which is just other things which are definitely constant times a squared T squared in the denominator, this A is approximately a constant. And you'll have to check me at home that this is exactly equivalent to the original Friedmann equation, no approximations whatever, just substitutions of Omega and the definition of Rho sub c. So the nice thing about this is that we can read off the time dependence of the right-hand side as long as we know the time dependence in the temperature and the time dependence of the energy density. And we do for matter dominated and radiation dominated eras. So this, essentially, solves the problem for us. And now it's really just a question of looking at the numerics that follow as a consequence of that equation. And this quantity, we're really interested in just Omega minus 1. The Friedmann equation gave us the extra complication of an Omega in the denominator. But in the end, we're going to be interested in cases where Omega is very, very close to 1. So the Omega in the denominator we could just set equal to one. And it's the Omega minus 1 in the numerator that controls the value of the left-hand side. So if we look at these Planck limits, we could ask how big can that be? And it's biggest if the error occurs on the negative side. So it contributes to this small mean value which is slightly negative. And it gives you 0.0075 for Omega minus 1. And then if you put that in the numerator and the same thing in the denominator, you get something like 0.0076. But I'm just going to use the bound that Omega naught minus 1 over Omega is less than 0.01. But the more accurate thing would be 0.0076. But, again, we're not really interested in small factors here. And this is a one sigma error. So the actual error could be larger than this, but not too much larger than this. So I'm going to divide the time interval between one second and now up into two intervals. From one second to about 50,000 years, the universe was radiation dominated. We figured out earlier that the matter to radiation equality happens at about 50,000 years. I think we may have gotten 47,000 years or something like that when we calculated it. So for t equals 1 second to-- I'm sorry, I'm going to do it the other order. I'm going to start with the present and work backwards. So for t equals 50,000 years to the present, the universe is matter dominated. And the next thing is that we know how matter dominated universes behave. We don't need to recalculate it. We know that the scale factor for a matter dominated flat universe goes like t to the 2/3 power, I should have a proportionality sign here. a of t is proportional to t to the 2/3. And it's fair to assume flat, because we're always going to be talking about universes that are nearly flat and becoming more and more flat as we go backwards, as we'll see. And again, this is an approximate calculation. One could do it more accurately if one wanted to. But there's really no need to, because the result will be so extreme. The temperature behaves like one over the scale factor.
And that will be true for both the matter dominated and a radiation dominated universe. And the energy density will be proportional to one over the scale factor cubed. And then if we put those together and use the formula on the other blackboard and ask how Omega minus 1 over Omega behaves, it's proportional to the temperature squared divided by the energy density. The temperature goes like 1 over a. So temperature squared goes like 1 over a squared. But Rho in the denominator goes like 1 over a cubed. So you have 1 over a squared divided by 1 over a cubed. And that means it just goes like a, the scale factor itself. So Omega minus 1 over Omega is proportional to a. And that means it's proportional to t to the 2/3. So that allows us to write down an equation, since we want to relate everything to the value of Omega minus 1 over Omega today, we can write Omega minus 1 over Omega at 50,000 years is about equal to the ratio of the two times, 50,000 years and today, which is 13.8 billion years, to the 2/3 power since Omega minus 1 grows like t to the 2/3. I should maybe have pointed out here, this is telling us that Omega minus 1 grows with time. That's the important feature. It grows like t to the 2/3. So the value at 50,000 years is this ratio to the 2/3 power times Omega minus 1 over Omega today, which I can indicate just by putting subscript zeros on my Omegas. And that makes it today. And I've written this as a fraction less than one. This says that Omega minus 1 over Omega was smaller than it is now by this ratio to the 2/3 power, which follows from the fact that Omega minus 1 over Omega grows like t to the 2/3. OK, we're now halfway there. And the other half is similar, so it will go quickly. We now want to go from 50,000 years to one second using the fact that during that era the universe was radiation dominated. So for t equals 1 second to 50,000 years, the universe is radiation dominated. And that implies that the scale factor is proportional to t to the 1/2. The temperature is, again, proportional to 1 over the scale factor. That's just conservation of entropy. And the energy density goes like one over the scale factor to the fourth. So, again, we go back to this formula and do the corresponding arithmetic. Temperature goes like 1 over a. Temperature squared goes like 1 over a squared. That's our numerator. This time, in the denominator, we have Rho, which goes like one over a to the fourth. So we have 1 over a squared divided by 1 over a to the fourth. And that means it goes like a squared. So we get Omega minus 1 over Omega is proportional to a squared. And since a goes like the square root of t, a squared goes like t. So during the radiation dominated era this diverges even a little faster. It goes like t, rather than like t to the 2/3, which is a slightly slower growth. And using this fact, we can put it all together now and say that Omega minus 1 over Omega at 1 second is about equal to 1 second over 50,000 years to the first power-- this is going like the first power of t-- times the value of Omega minus 1 over Omega at 50,000 years. And Omega at 50,000 years, we can put in that equality and relate everything to the present value. And when you do that, putting it all together, you ultimately find that Omega minus 1 in magnitude at t equals 1 second is less than about 10 to the minus 18. This is just putting together these inequalities and using the Planck value for the present value, the Planck inequality. So then 10 to the minus 18 is a powerfully small number.
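The arithmetic can be checked with a few lines of Python. This is just a sketch, taking 50,000 years for matter-radiation equality, 13.8 billion years for today, and rough bounds of 0.01 (Planck) and about 9 (circa 1980, from 0.1 < Omega < 2) on the present value of |Omega - 1| / Omega:

```python
# Sketch of the flatness-problem arithmetic: propagate |Omega - 1| / Omega
# back from today to t = 1 second, using the scalings derived above:
# proportional to t^(2/3) in the matter era, proportional to t in the
# radiation era.
year = 3.156e7              # seconds per year (approximate)
t_now = 13.8e9 * year       # present age of the universe
t_eq = 5.0e4 * year         # matter-radiation equality, about 50,000 years
t_1s = 1.0                  # one second

bounds_today = {"Planck": 0.01,        # rough bound on |Omega_0 - 1| / Omega_0
                "circa 1980": 9.0}     # from 0.1 < Omega < 2

for label, x0 in bounds_today.items():
    x_eq = (t_eq / t_now) ** (2.0 / 3.0) * x0   # back through the matter era
    x_1s = (t_1s / t_eq) * x_eq                 # back through the radiation era
    print(label, x_1s)
# prints roughly 1.5e-18 and 1.4e-15 -- the 10^-18 and 10^-15 quoted here
```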
What we're saying is that to be in the allowed range today, at one second after the Big Bang, Omega have to have been equal to 1 in the context of this conventional cosmology to an accuracy of 18 decimal places. And the reason that's a problem is that we don't know any physics whatever that forces Omega to be equal to 1. Yet, somehow Omega apparently has chosen to be 1 to an accuracy of 18 decimal places. And I mention that the argument wasn't that different in 1980. In 1980, we only knew this instead of that. And that meant that instead of having 10 to the minus 2 on the right-hand side here, we would have had 10 differing by three orders of magnitude. So instead of getting 10 to the minus 18 here, we would have gotten 10 to minus 15. And 10 to minus 15 is, I guess, a lot bigger than 10 to minus 18 by a factor of 1,000. But still, it's an incredibly small number. And the argument really sounds exactly the same. The question is, how did Omega minus 1 get to be so incredibly small? What mechanism was there? Now, like the horizon problem, you can get around it by attributing your ignorance to the singularity. You can say the universe started out with Omega exactly equal to 1 or Omega equal to 1 to some extraordinary accuracy. But that's not really an explanation. It really is just a hope for an explanation. And the point is that inflation, which you'll be learning about in the next few lectures, provides an actual explanation. It provides a mechanism that drives the early universe towards Omega equals 1, thereby explaining why the early universe had a value of Omega so incredibly close to 1. So that's what we're going to be learning shortly. But at the present time, the takeaway message is simply that for Omega to be in the allowed range today it had to start out unbelievably close to 1 at, for example, t equals 1 second. And within conventional cosmology, there's no explanation for why Omega so close to 1 was in any way preferred. Any questions about that? Yes? AUDIENCE: Is there any heuristic argument that omega [INAUDIBLE] universe has total energy zero? So isn't that, at least, appealing? PROFESSOR: OK the question is, isn't it maybe appealing that Omega should equal 1 because Omega equals 1 is a flat universe, which has 0 total energy? I guess, the point is that any closed universe also has zero total energy. So I don't think Omega equals 1 is so special. And furthermore, if you look at the spectrum of possible values of Omega, it can be positive-- I'm sorry, not with Omega. Let me look at the curvature itself, little k. Little k can be positive, in which case, you have a closed universe. It can be negative, in which case, you have an open universe. And only for the one special case of k equals 0, which really is one number in the whole real line of possible numbers, do you get exactly flat. So I think from that point of view flat looks highly special and not at all plausible as what you'd get if you just grabbed something out of a grab bag. But, ultimately, I think there's no way of knowing for sure. Whether or not Omega equals 1 coming out of the singularity is plausible really depends on knowing something about the singularity, which we don't. So you're free to speculate. But the nice thing about inflation is that you don't need to speculate. Inflation really does provide a physical mechanism that we can understand that drives Omega to be 1 exactly like what we see. Any other questions? 
OK, in that case, what I'd like to do is to move on to problem number three, which is the magnetic monopole problem, which unfortunately requires some background to understand. And we don't have too much time. So I'm going to go through things rather quickly. This magnetic monopole problem is different from the other two in that the first two problems I discussed are just problems of basic classical cosmology. The magnetic monopole problem only arises if we believe that physics at very high energies is described by what are called grand unified theories, which then imply that these magnetic monopoles exist and allow us a route for estimating how many of them would have been produced. And the point is that if we assume that grand unified theories are the right description of physics at very high energies, then we conclude that far too many magnetic monopoles would be produced if we had just the standard cosmology that we've been talking about without inflation. So that's going to be the thrust of the argument. And it will all go away if you decide you don't believe in grand unified theories, which you're allowed to. But there is some evidence for grand unified theories. And I'll talk about that a little bit. Now, I'm not going to have time to completely describe grand unified theories. But I will try to tell you enough odd facts about grand unified theories. So there will be kind of a consistent picture that will hang together, even though there's no claim that I can completely teach you grand unified theories in the next 10 minutes and then talk about the production of magnetic monopoles in those theories in the next five minutes. But that will be sort of the goal. So to start with, I mentioned that there's something called the standard model of particle physics, which is enormously successful. It's been developed really since the 1970s and has not changed too much since maybe 1975 or so. We have, since 1975, learned that neutrinos have masses. And those can be incorporated into the standard model. And that's a recent addition. And, I guess, in 1975 I'm not sure if we knew all three generations that we now know. But the matter particles, the fundamental particles, fall into three generations of particles of different types. And we'll talk about them later. But these are the quarks. These are the spin-1/2 particles, these three columns on the left. On the top, we have the quarks, up, down, charm, strange, top, and bottom. There are six different flavors of quarks. Each quark, by the way, comes in three different colors. The different colors are absolutely identical to each other. There's a perfect symmetry among colors. There's no perfect symmetry here. Each of these quarks is a little bit different from the others. Although, there are approximate symmetries. And related to each family of quarks is a family of leptons, particles that do not undergo strong interactions, the electron-like particles and neutrinos. This row is the neutrinos. There's an electron neutrino, a muon neutrino, and a tau neutrino, like we've already said. And there's an electron, a muon, and a tau, which I guess we've also already said. So the particles on the left are all of the spin-1/2 particles that exist in the standard model of particle physics. And then on the right, we have the boson particles, the particles of integer spin, starting with the photon on the top.
Under that in this list-- there's no particular order in here really-- is the gluon which is the particle that's like the photon but the particle which describes the strong interactions, which are somewhat more complicated and electromagnetism, but still described by spin-1 particles just like the photon. And then two other spin-1 particles, the z0 and the w plus and minus, which are a neutral particle and a charged particle, which are the carriers of the so-called weak interactions. The weak interactions being the only non-gravitational interactions that neutrinos undergo. And the weak interactions are responsible for certain particle decays. For example, a neutron can decay into a proton giving off also an electron-- producing a proton, yeah-- charge has to be conserved, proton is positive. So it's produced with an electron and then an anti-electron neutrino to balance the so-called electron lepton number. And that's a weak direction. Essentially, anything that involves neutrinos is going to be weak interaction. So these are the characters. And there's a set of interactions that go with this set of characters. So we have here a complete model of how elementary particles interact. And the model has been totally successful. It actually gives predictions that are consistent with every reliable experiment that has been done since the mid-1970s up to the present. So it's made particle physics a bit dull since we have a theory that seems to predict everything. But it's also a magnificent achievement that we have such a theory. Now, in spite of the fact that this theory is so unbelievably successful I don't think I know anybody who really regards this as a candidate even or the ultimate theory of nature. And the reason for that is maybe twofold, first is that it does not incorporate gravity, it only incorporates particle interactions. And we know that gravity exists and has to be put in somehow. And there doesn't seem to be any simple way of putting gravity into this theory. And, secondly-- maybe there's three reasons-- second, it does not include any good candidates for the dark matter that we know exists in cosmology. And third, excuse me-- and this is given a lot of importance, even though it's an aesthetic argument-- this model has something like 28 free parameters, quantities that you just have to go out and measure before you can use the model to make predictions. And a large number of free parameters is associated, by theoretical physicists, with ugliness. So this is considered a very ugly model. And we have no real way of knowing, but almost all theoretical physicists believe that the correct theory of nature is going to be simpler and involve many fewer, maybe none at all, free knobs that can be turned to produce different kinds of physics. OK, what I want to talk about next, leading up to grand unified theories, is the notion of gauge theories. And, yes? AUDIENCE: I'm sorry, question real quick from the chart. I basically heard the explanation that the reason for the short range of the weak force was the massive mediator that is the cause of exponential field decay. But if the [INAUDIBLE] is massless, how do we explain that to [INAUDIBLE]? PROFESSOR: Right, OK, the question is for those of you who couldn't hear it is that the short range of the weak interactions, although I didn't talk about it, is usually explained and is explained by the fact that the z and w naught Bosons are very heavy. And heavy particles have a short range. 
But the strong interactions seem to also have a short range. And yet, the gluon is effectively massless. That's related to a complicated issue which goes by the name of confinement. Although the gluon is massless, it's confined. And confined means that it cannot exist as a free particle. In some sense, the strong interactions do have a long range in that if you took a meson, which is made out of a quark and an anti-quark, in principle, if you pulled it apart, there'd be a string of gluon force between the quark and the anti-quark. And that would produce a constant force no matter how far apart you pulled them. And the only thing that intervenes, and it is important that it does intervene, is that if you pulled them too far apart it would become energetically favorable for a quark anti-quark pair to materialize in the middle. And then instead of having a quark here and an anti-quark here and a string between them, you would have a quark here and an anti-quark there and a string between them, and an anti-quark here and-- I'm sorry, I guess it's a quark here and an anti-quark there and a string between those. And then they would just fly apart. So the string can break by producing quark anti-quark pairs. But the string can never just end in the middle of nowhere. And that's the essence of confinement. And it's due to the peculiar interactions that these gluons are believed to obey. So the gluons behave in a way which is somewhat uncharacteristic of particles. Except at very short distances, they behave very much like ordinary particles. But at larger distances, these effects of confinement play a very significant role. Any other questions? OK, onward, I want to talk about gauge theories, because gauge theories have a lot to do with how one gets into grand unified theories from the standard model. And, basically, a gauge theory is a theory in which the fundamental fields are not themselves reality. But rather there's a set of transformations that the fields can undergo which take you from one description to an equivalent description of the same reality. So there's not a one to one mapping between the fields and reality. There are many different field configurations that correspond to the same reality. And that's basically what characterizes what we call a gauge theory. And you do know one example of a gauge theory. And that's E and M. If E and M is expressed in terms of the potentials Phi and A, you can write E in terms of the potential that way and B as the curl of A, you could put Phi and A together and make a four-vector if you want to do things in a Lorentz covariant way. And the important point, though, whether you put them together or not, is that you can always define a gauge transformation depending on some arbitrary function Lambda, which is a function of position and time. I didn't write in the arguments, but Lambda is just an arbitrary function of position and time. And for any Lambda you can replace Phi by Phi prime, given by this line, and A by A prime, given by that line. And if you go back and compute E and B, you'll find that they'll be unchanged. And therefore, the physics is not changed, because the physics really is all contained in E and B. So this gauge transformation is a transformation on the fields of the theory-- it can be written covariantly this way-- which leaves the physics invariant. And it turns out that all the important field theories that we know of are gauge theories. And that's why it's worth mentioning here.
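As a concrete check of that statement, here is a minimal symbolic sketch, assuming the convention E = -grad(Phi) - dA/dt and B = curl(A) and the transformation Phi -> Phi - dLambda/dt, A -> A + grad(Lambda); the blackboard version may carry factors of c, but the cancellation works the same way:

```python
# Sketch: verify symbolically that E and B are unchanged under the gauge
# transformation Phi -> Phi - dLambda/dt, A -> A + grad(Lambda).
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
Phi = sp.Function('Phi')(t, x, y, z)
A = sp.Matrix([sp.Function('A%d' % i)(t, x, y, z) for i in range(3)])
Lam = sp.Function('Lambda')(t, x, y, z)

def grad(f):
    return sp.Matrix([sp.diff(f, v) for v in (x, y, z)])

def curl(V):
    return sp.Matrix([sp.diff(V[2], y) - sp.diff(V[1], z),
                      sp.diff(V[0], z) - sp.diff(V[2], x),
                      sp.diff(V[1], x) - sp.diff(V[0], y)])

E, B = -grad(Phi) - sp.diff(A, t), curl(A)
Phi2, A2 = Phi - sp.diff(Lam, t), A + grad(Lam)
E2, B2 = -grad(Phi2) - sp.diff(A2, t), curl(A2)

print((E2 - E).applyfunc(sp.simplify))   # zero vector: E is gauge invariant
print((B2 - B).applyfunc(sp.simplify))   # zero vector: B is gauge invariant
```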
Now, for E and M, the gauge parameter is just this function Lambda, which is a function of position and time. And an important issue is what happens when you combine gauge transformations, because the succession of two transformations had better also be a symmetry transformation. So it's worth understanding that group structure. And for the case of E and M, these Lambdas just add if we make successive transformations. And that means the group is Abelian. It's commutative. But that's not always the case. Let's see, where am I going? OK, next slide actually comes somewhat later. Let me go back to the blackboard. It turns out that the important generalization of gauge theories is the generalization from Abelian gauge theories to non-Abelian ones, which was done originally by Yang and Mills in 1954, I think. And when it was first proposed, nobody knew what to do with it. But, ultimately, these non-Abelian gauge theories became the standard model of particle physics. And in non-Abelian gauge theories the parameter that describes the gauge transformation is a group element, not just something that gets added. And group elements multiply, according to the procedures of some group. And in particular, the standard model is built out of three groups. And the fact that there are three groups not one is just an example of this ugliness that I mentioned and is responsible for the fact that there's some significant number of parameters even if there were no other complications. So the standard model is based on three gauge groups, SU3, SU2, and U1. And it won't really be too important for us what exactly these groups are. Let me just mention quickly, SU3 is a group of 3 by 3 matrices which are unitary in the sense that u adjoint times u is equal to 1 and special in the sense that they have determinant 1. And the set of all 3 by 3 matrices that have those properties form a group. And that group is SU3. SU2 is the same thing but replace the 3 in all those sentences by 2s. U1 is just the group of phases. That is, the group of complex numbers that can be written as e to the i phi. So it's just a complex number of magnitude 1 with just a phase. And you can multiply those and they form a group. And the standard model contains these three groups. And the three groups all act independently, which means that if you know about group products, one can say that the full group is the product group. And that just means that a full description of a group element is really just a set of an element of SU3, and an element of SU2, and an element of U1. And if you put together three group elements, one in each group, with commas, that becomes a group element of the group SU3 cross SU2 cross U1. And that is the gauge group of the standard model of particle physics. OK, now grand unified theories, a grand unified theory is based on the idea that this set of three groups can all be embedded in a single simple group. Now, simple actually has a mathematical group theory meaning. But it also, for our purposes, just means simple, which is good enough for our brush through of these arguments. And, for example-- and the example is shown a little bit in the lecture notes that will be posted shortly-- an example of a grand unified theory, and indeed the first grand unified theory that was invented, is based on the full gauge group SU5, which is just a group of 5 by 5 matrices which are unitary and have determinant 1. And there's an easy way to embed SU3 and SU2 and U1 into SU5.
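That embedding, described in detail in what follows, can also be checked numerically. Here is a minimal sketch, which for simplicity uses real rotations as particular SU(3) and SU(2) elements and the compensating phases described below:

```python
# Sketch: build a 5x5 matrix with an SU(3) element in the upper 3x3 block,
# an SU(2) element in the lower 2x2 block, and compensating U(1) phases
# e^{2i phi} and e^{-3i phi}, then check it is unitary with determinant 1.
import numpy as np

phi, a, b = 0.7, 0.3, 1.1                         # arbitrary parameters
U2 = np.array([[np.cos(a), -np.sin(a)],
               [np.sin(a),  np.cos(a)]])          # a rotation, an SU(2) element
U3 = np.eye(3)
U3[:2, :2] = [[np.cos(b), -np.sin(b)],
              [np.sin(b),  np.cos(b)]]            # a rotation, an SU(3) element

U5 = np.zeros((5, 5), dtype=complex)
U5[:3, :3] = np.exp(2j * phi) * U3                # SU(3) block times e^{2i phi}
U5[3:, 3:] = np.exp(-3j * phi) * U2               # SU(2) block times e^{-3i phi}

print(np.allclose(U5.conj().T @ U5, np.eye(5)))   # True: U5 is unitary
print(np.isclose(np.linalg.det(U5), 1.0))         # True: determinant 1
```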
And that's the way that was used to construct this grand unified theory. One can take a 5 by 5 matrix-- so this is a 5 by 5 matrix-- and one can simply take the upper 3 by 3 block and put an SU3 matrix there. And one can take the lower 2 by 2 block and put an SU2 matrix there. And then the U1 piece-- there's supposed to be a U1 left over-- the U1 piece can be obtained by giving an overall phase to this and an overall phase to that in such a way that the product of the five phases is 0. So the determinant has not changed. So one can put an e to the i2 Phi there and a factor of e to the minus i3 Phi there for any Phi. And then this Phi becomes the description of the U1 piece of this construction. So we can take an arbitrary SU3 matrix, and arbitrary SU2 matrix, and an arbitrary U1 value expressed by Phi and put them together to make an SU5 matrix. And if you think about it, the SU3 piece will commute with the SU2 piece and with the U1 piece. These three pieces will all commute with each other, if you think about how multiplication works with this construction. So it does exactly what we want. It decomposes SU5. So it has a subgroup of SU3 cross SU2 across U1. And that's how the simplest grand unified theory works. OK, now, there are important things that need to be said, but we're out of time. So I guess what we need to do is to withhold from the next problem set, the magnetic monopole problem. Maybe I was a bit over-ambitious to put it on the problem set. So I'll send an email announcing that. But the one problem on the problem set for next week about grand unified theories will be withheld. And Scott Hughes will pick up this discussion next Tuesday. So I will see all of you-- gee willikers, if you come to my office hour, I'll see you then. But otherwise, I may not see you until the quiz. So have a good Thanksgiving and good luck on the quiz.
MIT_8286_The_Early_Universe_Fall_2013
14_The_Geodesic_Equation.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: OK. In that case let's get going. In today's lecture, we're going to be sort of splitting the lecture of things, if the timing goes as I plan. We're going to start by finishing talking about the geodesic equation. And then if all goes well, we will start talking about the energy of radiation-- completely changing topics altogether. I want to begin, as usual, by reviewing quickly what we talked about last time, just to remind us where we are. Last time, we at first, at the beginning of lecture, talked about how to add time into the Robertson-Walker Metric. And this is the formula that we claimed was the correct one. For a spacetime metric, ds squared, the meaning is closely analogous to the meaning that it would have in special relativity. The main difference being that in special relativity we always talk about what is observed by inertial frames of reference and inertial observers. In general relativity, the concept of an inertial observer is not so clear cut, but we can talk about observers for whom there is no forces acting on them other than possibly gravitational forces. And whether or not there are gravitational forces is always, itself, a framed dependent question. So it does not have a definite answer. So observers for which there is no forces acting on them other than gravitational forces are called free-falling observers. And they play the role of inertial observers that the inertial observers play in special relativity. So if ds squared is positive, it's the square of the spatial separation measured by a local free-falling observer, for whom the two events happen at the same time. Last time, I think, I did not really mention or emphasize the word local. But the point is that in general relativity we expect in any small region one can construct an accelerating coordinate system in which the effects of gravity are canceled out, as the equivalence principle tells us we can do. And then you essentially see the effects of special relativity. But it's only a small region, in principle, on an infinitesimal region. So these measurements that correspond to special relativity measurements are always made locally by an observer who is, in principle, arbitrarily close to the events being measured. If ds squared is negative, then it's equal to minus c squared times the square of the time separation that would be measured by a local free-falling observer for whom the two events happen at the same location. I should point out that a special case of this is an observer looking at his own wristwatch. His own wristwatch is always at the same location relevant to him, so it's a special case of this statement. So it says that ds squared is equal to minus c squared times the time that a free-falling observer would read on his own wrist watch. And If ds squared is 0, it means that the two events can be joined by a light pulse going from one to the other. Having said this, we can go back to this formula and understand why the formula is what it is. The spatial part is what it is because any homogeneous and isotropic spatial metric can be written in this form. And we are assuming that the universe we're describing is homogeneous and isotropic. The dc squared piece is really dictated by item two here. 
We want the t that we write in this metric to be the cosmic time variable that we've been speaking about. And that means that it is the time variable measured on the watches of observers who are at rest in this coordinate system. And that means that it has to be simply minus c squared dt squared. Or else dt would not have the right relationship to ds squared to be consistent with what ds squared is supposed to be. And then we also talked about why there are no dt dr terms, or dt d theta, or dt d phi. We said that any such term would violate isotropy. If you had a dt dr term, for example, it would make the positive dr direction different from the negative dr direction. And that can't be something that happens in an isotropic universe. That then is our metric for cosmology, the Robertson-Walker Metric. And another important thing is what is it good for, now that we decided that's the right metric? What use is it to us? And what we haven't done yet, but it's actually on the homework, we need the full spacetime metric to be able to find geodesics, to be able to learn the paths of particles moving through this model universe. So we will be making important use of this Robertson-Walker Metric with its spacetime contributions. OK. Any questions about that? Now I'm ready to change gears to some extent. Yes, Ani? AUDIENCE: So in general, the spatial part of the metric, we can get from the geometry? And in general, can you just add a minus c squared dt squared for the temporal part? PROFESSOR: It's not quite general. Remember we used an argument based on isotropy here. So I think it's safe to say that any metric you'll find in this class is likely to have the time entering, and nothing more complicated than minus c squared dt squared. But it's not a general statement about general relativity. Any other questions? Yes. AUDIENCE: [INAUDIBLE], PROFESSOR: OK, the question is, what would be a circumstance where we would have to deal with something more complicated? The answer would be, I think, all you need is to add to this model universe perturbations that break the uniformity. If we tried to describe the real universe instead of this ideal universe, where our ideal universe is perfectly isotropic and homogeneous, if instead we wanted to describe the lumps and bumps of the real universe, then it would become more complicated. And we would probably need a dt dr term. OK, next we went on to talking about the geodesic equation. According to General Relativity, the trajectories of particles that have no forces acting on them other than gravity, these free-falling observers, are geodesics in the spacetime. So that means we want to learn how to calculate geodesics, which means paths whose length is stationary under small variations. So we considered first just simple geodesics in the spatial metric, because that's easier to think about. What is the shortest distance between two points in a space that's described by some arbitrary metric? So first we talked about how we describe the metric. And we introduced two features in this first formula here. One is that instead of calling the coordinates x, y, z or something like that, we called them x1, x2, x3, so that we could talk about them all together in one formula without writing separate pieces for the different coordinates. So i and j represent 1, 2, and 3, or just 1 and 2, which is the labeling of the spatial coordinates. And the other important piece of notation that is introduced in that formula is the Einstein summation convention.
Whenever there's an index, like i and j here, which are repeated with one index lower and one index upper, they're automatically summed over all of the values that the coordinates take, without writing summation sign. It saves a lot of writing. And it turns out that one always sums under those circumstances, so there's no need to write the sums with the summation sign. Next we want to ask ourselves, how are we going to describe the path? Before we can find the minimum path, we need at least a language to talk about paths. And we could describe a path going from some point A to some point B, by giving a function x supra i of lambda. Well, lambda is an arbitrary parameter, that parametrizes the path. x supra i are a set of coordinates. i runs over the values of all the coordinates of whatever system you're dealing with. And you construct such a function where xi of 0 is the starting point, which are the coordinates of the point A. And xi of some value lambda f, where f just stands for final, will be the end of the path. And it's supposed to end at point B. So the final coordinates of the path should be x supra i sub b, the coordinates of the point B. Then we want to use this description of the path to figure out what the length is of a segment of the path. And then the full length will be the sum of the segments. So for each segment, we just apply the metric to the change in coordinates. The change in coordinates, as lambda is varied, is just the derivative of xi with respect to lambda times the change in lambda. And putting that in for both dxi and dxj one gets this formula, relating ds squared, the square of the length of an infinitesimal segment to d lambda squared, the square of the parameter that describes that length. Then the full length is gotten by, first of all, taking the square root of this equation to get the infinitesimal length, ds. And then taking the integral of that over the path from beginning to end. And that, then, gives us the full length of the path, thinking of it as the sum of the length of each of infinitesimal segment. OK? Fair enough? Now that we have this formula for the length, now we have the next challenge, which is to figure out how to calculate the path which minimizes that length. And I didn't use the word last time, but that what is called the calculus of variations. And I looked up a little bit of the history in the Wikipedia. The calculus of variations dates back to 1696, when Johann Bernoulli invented it, applied it to the brachistochrone problem, which is the problem of finding a path for which a frictionless object will slide and get to its destination in the least possible time. And it turns out to be a cycloid, just like the cycloid that describes our closed universes, closed matter dominating the universe. And the problem was also solved by-- Johann Bernoulli then announced this problem to the world and challenged other mathematicians to solve it. There's a famous story that Newton noticed this question in his mail when he got home at 4:00 AM, or something like that, from the mint-- he was apparently a hardworking guy-- but nonetheless when he seen this problem he couldn't go to bed. He went ahead and solved it by morning, which is a good MIT student kind of thing to do. So the technique is to consider a small variation from whatever path you're hoping to be the minimum. And we're going to calculate the first order change in the length of the path, starting from our original path, x of lambda, to some new path, x tilde of lambda. 
And we parametrize the new path by writing it as the old path, plus a correction. And I've introduced a factor, alpha, multiplying the correction, because it makes it easier to talk about derivatives. And wi of lambda is just some arbitrary deviation from the original path. But we want to always go through the same starting point to the same endpoint, because there's never going to be a minimum if we're allowed to move the endpoints. So the endpoints are fixed. And that means that this path deviation, w super i in my notation, has to vanish at the two endpoints. So we impose these two equations on the variation wi. Then what do I do is take the derivative of the path length of the varied path, x tilde with respect to alpha, and if we had a minimum length to start with, the derivative should always vanish. That is, the minimum should always occur when alpha equals 0, if the original path of the true path, the true minimum path. And if alpha equals 0 is the minimum, the derivative should always vanish at alpha equals 0. And vice versa. If we know that this happens for every variation wi, then we know that our path is at least an extremum, and, presumably, a minimum. And the path itself is just written by the same formulas we had before, except for x tilde instead of x itself. And I've introduced an axillary quantity, a of lambda alpha, which is just what appears inside the square root. That just saves some writing, because it has to be written a number of times in the course of the manipulations. So our goal now is to carry out this derivative. And the derivative acts only on the integrand, because the limits of integration do not depend on alpha. So just carry the derivative into the integrand and differentiate this square root of a of lambda, which is, itself, a product of factors that we have to use-- product rule and chain rule and various manipulations. And after we carry out those manipulations, we end up with this expression in a straightforward way involving a few steps, which I won't show again. And the complication is that what we want to do is to figure out for what paths that expression will vanish for all wi. We want it to vanish for all possible variations of the path. And what's complicated is that wi appears here as a multiplicative factor in the first term, but as a differentiated factor in the second term. And that makes it very hard to know, initially, when those two terms might cancel each other to give you 0, which is what we're looking for. But the brilliant trick that, I guess, Newton invented, along with Bernoulli and others, is to integrate by parts. Integration by parts, I'm sure, was not a well-known procedure at that time. But if we integrate the second term by parts, we could remove the derivative acting on w, and arrange for w to be a multiplicative factor in both terms. And a crucial thing that makes the whole thing useful is that when you do integrate by parts, you discover that you don't get any endpoint contributions, because the endpoint contributions would be proportional to wi at the endpoints. And remember, wi has to vanish at the endpoints, because that's the condition that we're not changing the points A and B. We're always talking about paths that have the same starting point and the same ending point. So integrating by parts, we get this expression, where now wi multiplies everything, as just simply a multiplicative factor. To write it in this form, you had to do a little bit of juggling of indices. 
The other important trick in these manipulations is to juggle indices, which I'll not show you explicitly. But the thing to remember is that these indices that are being summed over can be called anything and it's still the same sum. So when you want to get terms to cancel each other, you may have to change the names of indices to get them to just cancel identically. But that's straightforward. So we get this expression. And now we want this expression to vanish for every possible wi of lambda. And we argued that the only way it could vanish for every possible wi of lambda is if the expression in curly brackets, itself, vanishes. Yeah, if we only know the values for some particular wi of lambda, then there are lots of ways it could vanish, because it could be positive in some places and negative in others. But the only way it could vanish for all wi is for the quantity in curly brackets to vanish. So that gives us our final, or at least, almost final expression of the geodesic equation. And that's where we left off last time, with that equation. So note that this is just an equation that would either be obeyed or not obeyed by the function x super i of lambda. It's just a differential equation involving x super i of lambda and the metric, which we assume is given. OK. So are there any questions about that? Everybody happy? Great. OK, now we'll continue on on the blackboard. OK, the first thing I want to do is to simplify the equation a bit. This equation is fairly complicated, because of those square roots of A's in the denominators. The square root of A is a pretty complicated thing to start with, and the square root of A here is even differentiated, because it's got the lambda making an incredible mess, if you understand all that. So it would be nice to simplify that. And we do have one trick which we can still do, which we haven't done yet. We originally constructed our path, xi of lambda, as a function of some arbitrary parameter, lambda. Lambda just measures arbitrary points along the path. There are many, many ways to do that, an infinite number of ways that you can do that. And this formula will work for all of them, it's completely general. The formula, when we derived it, we didn't make any assumptions about how lambda was chosen. But we can simplify the formula by making a particular choice for lambda. And the choice that simplifies things is to choose lambda to be the arc length itself. Lambda should be the distance along the path. And then we're trying to express xi as a function of how far you've already gone. And that has the effect, if we go back to what Ai was, A of lambda really is just the path length per lambda. So if lambda is the path length itself, A is just equal to 1. I'm trying to get a formula that shows that more clearly. Here. If we remember that this quantity is A, this tells us that ds squared is equal to A times d lambda squared. So if ds is the same as d lambda, as you've chosen your parameter to be the path length, this formula makes it clear that that's equivalent to A equal to 1. So going back to the formula, if A is 1, we would just drop it from both sides of the equation. And all that really matters, I should point out here, maybe, because we'll be using it later, is that A is a constant. As long as A is a constant, it will not be differentiated, and then it will cancel on the left side and the right side. So we don't necessarily care that it is 1, but we do care that it's a constant. And then it just disappears from the formula. 
And then we get the simpler formula. And now we'll continue on the blackboard. The simpler formula is just d ds of gij dxj ds is equal to 1/2 times the derivative of gjk, with respect to xi, times dxj ds dxk ds, where s is equal to the path length. So I've replaced lambda by s, because we set lambda equal to s. And s has a more specific meaning than lambda did. Lambda was a completely arbitrary parametrization of the path. So this one deserves a big box, because it really is the final formula for geodesics. We will write it in terms of different letters later, but this actually is the formula. Now I should mention just largely for the sake of your knowing what's going on, if you ever look at some other general relativity books, this is not the formula that the geodesic equation is usually written in. Frankly, it is the best form. If you want to find the geodesic, usually this form of writing the equation is the easiest. But most general relativity books prefer instead to just give a formula for the second derivative here, which involves just expanding this term and then shuffling things to try to simplify the expressions. So one can write, to start, d ds of gij dxj ds. We're just going to expand it. Now we're going to be making use of all the rules of calculus that we've learned. Every rule you've ever learned will probably get used in this calculation. So it will be using product rule, of course, because we have a product of two things here. But we also have the complication that gij is not explicitly a function of s. But gij is a function of position. And the position that one is at for any given value of s depends on s, because we're moving along the path, x super i of s. So the gij here is evaluated at x super i of s. I should give this a new letter. x super k of s. So it depends on s, through the argument of its argument. So that's a chain rule situation. And what we get here is, from just differentiating the second factor, that's easy. We get gij times d squared xj ds squared. And then, from the derivative of the metric, the chain rule piece, we get the partial of gij, with respect to xk, times dxj ds times dxk ds. And then to continue, this piece gets brought over to the other side, because we're trying to get an equation just for the second derivative of the path. So then we get g sub ij d squared x super j ds squared is equal to 1/2 di-- I'll define that in a second-- g sub jk minus 2 dk gij dxi ds dxj ds, where this partial derivative with the subscript is just an abbreviation for the derivative with respect to the coordinate with that index. So that's just an abbreviation. Now you could think of this as a matrix times a vector is equal to an expression. What we like to do is just get an expression for this vector. So if we think of it as a matrix times a vector, all we have to do is invert the matrix to be able to get an expression for the vector itself. Yes! AUDIENCE: Should that closing parenthesis be more [INAUDIBLE]? PROFESSOR: Oh, Yeah, I think you're right, it doesn't look right. Yeah. Thank you. This has to multiply everything. Oops! OK, OK. Given enough chances I'll get it right. OK, now everybody happy this time? Thank you very much for getting it straight. OK, So as I was saying, we want to isolate this second derivative. We're hoping to get just an expression for the second derivative. And this can be interpreted as a matrix times a vector equals something. We want to just invert that matrix. Yes? AUDIENCE: Isn't the ds and [? the idx ?] [INAUDIBLE]?
PROFESSOR: Oh, do I have that wrong too, perhaps? I think we want j and k there, don't we? OK, attempt number four, or did I lose count as well? j and k are the indices and the i matches the free i on the left. And all the other indices are sound. I think, probably, I finally achieved the right formula. Thanks for all the help. So inverting a matrix, in principle, is a straightforward mathematical operation. In general relativity, we give a name to the inverse metric, and it's the same letter g with indices, with superscripts instead of subscripts. And that's defined to be the matrix inverse. So g super ij is defined to be the matrix inverse of g sub ij. And to put that into an equation, we could say that if we take g with upper indices-- and I'll write those upper indices as i and l-- and multiply it by a g with lower indices l and j, when you sum over adjacent indices in this index notation, that's exactly what corresponds to the definition of matrix multiplication. So this is just the matrix g with upper indices times the matrix g with lower indices, and it's the i j'th element of that matrix. And we're saying it should be the identity matrix, and that means that the i j'th element should be 0 if it's off diagonal, and 1 if it's diagonal, if i equals j. And that's exactly the definition of a Kronecker delta. So this is equal to delta ij. We remember that delta ij is 0 if i is not equal to j, 1 if i is equal to j. That's the definition. And it corresponds to that identity matrix in matrix language. So this is the relationship that actually defines g super il. And it is just the statement that g with upper indices is the matrix inverse of g with lower indices. Using this, we can bring this g to the other side essentially by multiplying by g inverse. And I will save a little time by not writing that out in gory detail, but rather I'll just write the result. And the result is written in terms of a new symbol that gets defined, which is an absolutely standard symbol in General Relativity. The formula is d squared x i, ds squared is equal to-- we know it's going to be equal to stuff times the product of two derivatives. And the stuff that appears is just given a name, capital gamma, which has an upper index i, which matches the left hand side of the equation. And two lower indices, which I'm calling j and k, which will get summed with the derivatives that follow, d x j ds, dx k, ds. And this quantity, gamma super i sub jk are just the terms that would appear when we do these manipulations. And I'll write what they are. Gamma super i sub jk is equal to 1/2 g super il times the derivative with respect to j of g sub lk, plus the derivative with respect to k of g sub lj. And then minus the derivative with respect to l of g sub jk. And this quantity has several different names. Everybody agrees how to define it up to the sign. There are different sign conventions that are used in different books. And there are also different names for it. It's often called the affine connection. If you look, for example, in Steve Weinberg's General Relativity book, he calls it the affine connection. It's also very often called the Christoffel connection, or the Christoffel symbol. And frankly those are the only names for it that I've seen, personally. But there's a book about [INAUDIBLE] by Sean Carroll which is a very good book. And he claims that it's sometimes also called the Riemann connection. And it's also sometimes called the Levi-Civita connection.
So it's got lots of names, which I guess means lots of people's independently invented it. But in any case, that's the answer. And it's just a way of rewriting the formula we have up there. And for solving problems, the formula, the way we wrote up there, is almost always the best way to do it. So this is really just window dressing, largely for the purpose of making contact with other books that you might come across. OK, so that finishes the derivation of the geodesic equation. Now I'd like to give an example of its use. But before I do that, let me just pause to ask if there are any questions about the derivation? OK. So on your homework, you will, in fact, be applying this formalism to the Robertson-Walker metric. And you'll learn how moving particles slow down as they move through an expanding universe, completely in an analogy to the way photons, which we've already learned, lose energy as they travel through an expanding universe. So particles with mass also lose energy in a well-defined way, which you'll be calculating on the homework. For example, though, I'll do something different. A fun metric to talk about is the Schwarzschild metric, which describes, among other things, black holes. It in principle, describes anything which is spherically symmetric and has a gravitational field. But black holes are the most interesting example, because it's where the most surprises lie. So the Schwarzschild metric has the form ds squared is equal to minus c squared d tau squared, which is equal to-- this is just a definition, it defines d tau-- but in terms of the coordinates, it's minus 1 minus 2 G, Newton's constant, M, the mass of the object we're discussing-- the mass of the black hole, if it is a black hole-- divided by r times c squared, r is the radial coordinate, times c squared dt squared, plus 1 minus 2 GM over rc squared times dr squared plus r squared times d theta squared plus sine squared theta d phi squared. Now here, theta and phi are the usual polar angles. We're using a polar coordinate system. So as usual, theta lies between 0 and pi. 0, what we might call the North Pole, and pi what we might call the South Pole. And phi is what is often called an azimuthal angle, it goes around. And the way one describes coordinates on the surface of the Earth, phi would be the longitude variable. So 0 is less than or equal to phi is less than or equal to 2 pi where phi equals 2 pi is identified with phi equals 0. And you can go around and come back to where you started. Now notice that if we set capital M, the mass of this object equal to 0, the metric becomes the trivial metric of Special Relativity written in spherical polar coordinates. So all complications go away if there's no mass. The object disappears. But as long as the mass is non-zero there are factors that multiply the dr squared term and the c squared dt squared term. Notice that the factors that do that multiplying-- now one of these should be inverted. Important inverse, it's a minus 1 power for that factor. Notice that r can be small enough so that these factors will vanish. And the place where that happens is called the Schwarzschild radius after the same person who invented the metric. So r sub Schwarzschild is equal to 2 GM divided by c squared. When little r is equal to that, this quantity in parentheses vanishes, which means we get infinity here, because it's inverted, and we get a 0 there. Now when a term in the metric is either 0 or infinite, one calls that a singularity. In this case, it's a removable singularity. 
That is, the Schwarzschild singularity is only there because Schwarzschild chose to use these particular coordinates. These are simpler than other coordinates. He wasn't foolish to use them. But the appearance of that singularity is really caused solely by the choice of coordinates. There really is no singularity at the Schwarzschild horizon. And that was shown some years later by other people constructing other coordinate systems. The coordinate system is best known today that avoids the Schwarzschild singularity is a coordinate system called the Kruskal coordinate system. But we will not be looking at the Kruskal coordinate system in this class. Leave that for the GR class that you'll take some time. OK, now the masses sum parameter, notice that the metric is completely determined by the mass. And that's the same situation as we found in Newtonian gravity. The metric outside of the spherically symmetric object, by the gravitational field in Newtonian Physics outside of a spherical symmetric object, depends only on the total mass, which does not depend, at all, on how it's distributed as long as it's spherically symmetric. And the same thing here. As long as an object is spherically symmetric, the gravitational field outside of the object will always look like that formula. Now there are still two cases-- outside of the object could be larger than or smaller than this Schwarzschild radius. So for an object like the sun, the Schwarzschild radius, we could calculate it-- and it's calculated in the notes-- it's about two or three kilometers. Hold on and I'll tell you more accurately. It's 2.95 kilometers, the Schwarzschild radius of the sun. But the sun, of course, is much bigger than that. And that means that the sun doesn't have a Schwarzschild horizon. That is, at 2.95 kilometers from the center of the sun there's still sun. It's not outside the sun. This metric only holds outside the spherically symmetric object. So it does not hold inside the sun. The place where this has the apparent singularity the metric is not valid at all. So there is nothing that even comes close to anything worth talking about, as far as the Schwarzschild singularity for an object like the sun. But if the sun were compressed to a size smaller than 2.95 kilometers with the same mass, then these factors would be relevant at the places where they vanish. And whatever consequences they have, we would be dealing with. Even though r equals r Schwarzschild is not a singular point, it is still a special point. What you can show-- we won't-- but what we can show is that that is the horizon. Meaning that if an object falls inside this Schwarzschild radius, there is no trajectory that will ever get it out. Yes? AUDIANCE: Say a star is just incredibly dense at its core. Is it possible to have suppression of some fractional life of a star that's from that mass that it's contained? Or like a fusion reaction that is going on with the net radius? PROFESSOR: OK, could there be a horizon inside of a star? I think is what you're asking, basically. AUDIANCE: One that actually affects the-- PROFESSOR: One that really is a horizon. AUDIANCE: That's outside. PROFESSOR: Right. If this were the sun you were describing, this formula would just not be valid inside. There would be no horizon inside. But you're asking a real valid question. If a star had, for some reason, a very dense spot in the middle, could it actually form a horizon inside the material? And the answer is, yes, it could. It would not be stable. 
The material would ultimately fall in, but it could happen. Yes? AUDIENCE: So like our galaxy has a super massive black hole in the center. PROFESSOR: That's right. Our galaxy does have a super massive black hole in the center. AUDIENCE: Yeah. So you can consider that as like a larger mass that has a black hole area? PROFESSOR: Right! Right! That's right. The comment is that if we go from a star to something bigger than a star we have a perfectly good example in our own galaxy, where there is a black hole in the center, but there is still mass that continues outside of that. And the black hole is accreting, more matter does keep falling in, it's not really stable. But it certainly does exist, and can exist. Any other questions? Well, our goal now is to calculate a geodesic. And I will just calculate one geodesic. I will calculate what happens if an object starts at some fixed radius at rest and is released and falls into this black hole. I first want to just rewrite the geodesic equation in terms of variables that are more appropriate for this case. When I wrote that, I had in mind just calculating the geodesics in space, looking for the shortest path between two points. The geodesic that we're talking about when we're talking about an object in general relativity moving along the geodesic is a geodesic that's a time-like geodesic. That is, any increment along the geodesic is a time-like interval, because we're following a particle. Particles travel on time-like trajectories in relativity. So the usual notation for time is something like tau rather than s, which is why I wrote it this way. ds squared is just defined to be minus c squared d tau squared. So d tau squared has no more or less information than ds squared, but it has the opposite sign and a difference by a factor of c squared, as well. And another change in notation which is a rather universal convention is that, when we talk about space alone we use Latin indices, i, j, k. When we talk about spacetime, where one of the indices might be 0 referring to the time direction, then we usually use Greek indices, mu, nu, lambda. So I'm going to rewrite the geodesic equation using tau as my parameter instead of s, since we're talking about proper time along the trajectory instead of distances. And using Greek letters instead of Latin letters, because we're talking about spacetime rather than just space. So otherwise what I'm going to write is just identical to that. So it really is nothing more than a change in notation. d d tau of g mu nu, dx super nu d tau. And it is equal to 1/2 times the partial of g lambda sigma with respect to x mu dx lambda d tau dx sigma d tau. Now you might want to go through the calculation and make sure that the fact that we're now dealing with a metric which is not positive definite doesn't make any difference. And it doesn't. It does mean that now we certainly have possibilities of getting maxima and stationary points as well as minima, because of the variety of signs that appear in the metric. But otherwise, the calculation of the geodesic equation goes through exactly as we calculated it. And the only thing I'm doing here, relative to what we have there, is just changing the notation a bit to conform to the notation that is usually used for talking about spacetime trajectories.
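Written out, the spacetime geodesic equation being quoted here-- with the free index mu, exactly as in the spatial version it generalizes-- is

    \frac{d}{d\tau}\left( g_{\mu\nu} \frac{dx^\nu}{d\tau} \right)
      = \frac{1}{2}\,\frac{\partial g_{\lambda\sigma}}{\partial x^\mu}\,
        \frac{dx^\lambda}{d\tau}\,\frac{dx^\sigma}{d\tau}.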
Since we're talking about radial trajectories-- we're just going to release a particle at rest and then it will fall straight towards the center of our spherical object-- we know by symmetry that it's not going to be deflected in the positive theta or the negative theta, or the positive phi or negative phi directions, because that would violate isotropy. It would violate the rotational symmetry that we know as part of this metric. This is just the metric of the surface of a sphere. So theta and phi will just stay whatever values they have when you drop this object. So we will not even talk about theta and phi. We will only talk about r and t, how the particle falls in as a function of time. And then it turns out to be useful to just first write down what the metric itself tells us. And we'll divide by d tau. So we could talk about derivatives with respect to tau. So changing an overall sign, since everything's going to be negative and we'd rather have everything be positive, we can just rewrite the metric equation as saying that c squared is equal to 1 minus 2 GM over rc squared, times c squared times dt d tau squared minus 1 minus 2 GM over rc squared inverse times dr d tau squared. So this is nothing more than rewriting this equation saying d theta is equal to 0 and d phi will be 0. Written this way, though, it tells us that we can find dt d tau, for example, if we know dr d tau. And we also know where we are, you know, little r. And we'll be using that, shortly. To continue a little further, we're going to introduce some abbreviations just so we don't have to write so much. I'm going to define little h of r as just one minus r Schwarzschild over r. And this is also 1 minus 2 GM over rc squared. That's a factor that keeps recurring in our expression for the metric. Yes? AUDIENCE: The second to last equation-- isn't there supposed to be a c squared in between the two parentheses? PROFESSOR: Probably. Yes, thank you. C squared, right? Thanks a lot. In terms of h of r, we can rewrite that equation slightly more simply. I'm going to bring things to the other side and write it as c squared times dt d tau squared is equal to c squared h inverse of r plus h to the minus 2 of r times dr d tau squared. This is just a rewriting of the above equation, making use of the new notation that we've introduced. And this is the form we will be using. It explicitly tells us how to find dt d tau in terms of other things. So dt d tau is not independent, since we know dt d tau in terms of dr d tau. If we get an expression for dr d tau, we're sort of finished. We could find everything we want to know about t from the equation we just wrote. So it turns out that all we need to do to calculate this radial trajectory is to look at the component of the geodesic equation where that free index, mu-- mu is the index that's not summed-- we're going to set mu equal to r. Remember mu is a number that corresponds to a coordinate. And we're going to set mu equal to the value that corresponds to the r coordinate. And that will be sufficient to get us our answer. When we do that, the equation becomes d d tau of g sub r. Now the second index, nu in the original expression, is summed from 0 to 3 for the GR case, where we have four coordinates, one time and three spatial coordinates, but we only need to write the terms where g sub r nu is non-zero. And the metric itself is diagonal. So if one index is a little r, the other index has to also be r, or else it vanishes.
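In symbols, the two relations being set up here-- the metric constraint for a radial trajectory, and its rewriting in terms of h(r)-- are

    c^2 = \left(1 - \frac{2GM}{rc^2}\right) c^2 \left(\frac{dt}{d\tau}\right)^2
          - \left(1 - \frac{2GM}{rc^2}\right)^{-1} \left(\frac{dr}{d\tau}\right)^2,
    \qquad h(r) \equiv 1 - \frac{r_S}{r},

    c^2 \left(\frac{dt}{d\tau}\right)^2
      = c^2\, h^{-1}(r) + h^{-2}(r) \left(\frac{dr}{d\tau}\right)^2.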
So the only value of nu that contributes to the sum is when nu is also equal to the r coordinate. So we get g sub rr d xr-- which, in fact, I'll write it as just dr. x super r is just the r coordinate, which we also just call r times d tau is equal to 1/2 dr. And now, on the right-hand side, we're summing over lambda and sigma. And lambda and sigma have to have the property that g sub lambda sigma depends on r, or else the first factor will vanish. And furthermore, g sub lambda sigma has to be non-zero, for the values of lambda and sigma that you want, which means that lambda and sigma for this case have to be equal to each other, because we have no off-diagonal terms to our metric. So the only contributions we get are from g sub rr and g sub tt. So you get the derivative with respect to r of g sub rr times dr d tau squared. This becomes squared, because lambda is equal to sigma. And then plus 1/2 d r of g sub tt times dt d tau squared. And note that buried in here is, if we expand this, the second derivative of r with respect to time-- with respect to tau. So we can extract that and solve for it. And things like dt d tau will appear in our answer, initially, because it's here already. But we could replace dt d tau by this top equation and eliminate it from our results. And I'm going to skip the algebra, which is straightforward, although tedious. I urge you to go through it in the notes. But the end result ends up being remarkably simple, after a number of cancellations that look like surprises. And what you find in the end-- and it's just the simplification of this formula, nothing more-- you find that d squared r d tau squared is just equal to minus Newton's constant times the mass divided by r squared. Now this is rather shocking, and even looks exactly like Newtonian mechanics. However, even though it looks like Newtonian mechanics, it's not really the same as Newtonian mechanics, because the variables don't mean quite the same thing. First of all, even r does not really mean radius in the same sense as radius is defined by Newton. In Newtonian mechanics, radius is the distance from the origin. If we wanted to know the distance from the origin, we would have to integrate this metric. And in fact, there isn't even an actual origin here, because you would have to go through the singularity before you get there. And you really can't. That integral is not really even defined. Although, of course, if we had something like the sun, where the metric was different from this at small r, then we could integrate from r equals 0, and that would define the true radius, distance from the center. But it would not be r. It would be what you got by integrating with the metric. So r has a different interpretation than it does for Newtonian physics. I might add, it still has a simple interpretation. If you look at this metric, the tangential part, the angular part, is exactly what you have for Euclidean geometry. It's just r squared times the same combination of d theta and d phi as appears on the surface of a sphere. So little r is sometimes called the circumferential radius, because it really does give you the circumference of circles at that radial coordinate. If we went around in a circle at a fixed r-- the circle would involve varying phi, for example, over a range of 2 pi-- we really would see a total circumference of 2 pi little r. So r is related to circumferences in exactly the way as it is in Euclidean geometry. But it's not related to the distance from the origin in the same way as it is in Euclidean geometry.
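Collecting the terms being described, the r component of the geodesic equation reads

    \frac{d}{d\tau}\left( g_{rr}\,\frac{dr}{d\tau} \right)
      = \frac{1}{2}\left[ \frac{\partial g_{rr}}{\partial r}\left(\frac{dr}{d\tau}\right)^2
        + \frac{\partial g_{tt}}{\partial r}\left(\frac{dt}{d\tau}\right)^2 \right],

and, after dt/d tau is eliminated using the earlier relation, it simplifies-- as stated-- to

    \frac{d^2 r}{d\tau^2} = -\frac{GM}{r^2}.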
In addition, tau, here, is not the universal time that Newton imagined. But rather, tau is measured along the geodesic. It is just ds squared, but remember, ds squared is being measured along the geodesic, which means that it is, in fact, the proper time as it would be measured by the person falling with the object towards the black hole. So tau is proper time as measured by the falling object. And that follows from what we know about the meaning of the metric itself. OK, that said, we would now like to just study this equation more carefully. And since the equation itself still has the same form as what you get from Newton, if you remember what you would have done if this was 8.01, you can, in fact, do exactly the same thing here. And what you probably would have done, if this was 8.01, would be to recognize that this equation can be integrated. We can write the equation as d d tau of 1/2 dr d tau squared minus GM/r equals 0. If you carry out these derivatives, you would get that equation. And this is just the conservation of energy version of the force equation. And that tells us that this quantity is a constant. If we drop the object from some initial position, r sub 0, and we drop it with no initial velocity, we just let go of it at r sub 0, that tells us what this quantity is when we drop it. It's minus GM over r sub 0. This piece vanishes if there is no initial velocity. And that means it will always have that value. And knowing that, we can write dr d tau is equal to-- just solving for that-- minus the square root of 2GM times r0 minus r over r r0. I've collected two terms and put them over a common denominator and added them. So this is not quite as obvious as it might be. But this is just the statement that that quantity has the same value as it did when you started. Now this can be further integrated. We can write it as dr over-- bringing all this to the other side-- is equal to d tau. And then integrate both sides. Notice when I bring this to the other side and bring the d tau to the right, everything on the left-hand side now only depends on r. So this is just an explicit integral over r that we can do. And I will just tell you that when the integral is done we get a formula for tau as a function of r. And it's equal to the square root of r sub 0 over 2GM times r0 times the inverse tangent of the square root of r0 minus r over r plus the square root of r times r0 minus r. So when r equals r0, this gives us 0, and that's what we want. When we start, we're at r0, at time 0, or proper time 0. And then as r gets smaller, as it falls in, time grows. And this gives us the time as a function of r. We might prefer to have r as a function of time, but that formula can't really be inverted analytically. So that's the best we can do. Now one thing that you notice from this is that nothing special happens as r decreases all the way to 0. Even when you plug in r equals 0 here, you just get some finite number. So in a finite amount of time, the observer would find himself falling through the Schwarzschild horizon and all the way to r equals 0. I didn't mention it but r equals 0 is a true singularity. Our metric is also singular when r equals 0. These quantities all become infinite. And physically what would happen is that, as the object falling in approaches r equals 0, the tidal forces, that is the difference in the gravitational force on one part of the object versus another, will get stronger and stronger. And objects will just be ripped apart.
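In symbols, the conserved quantity and the resulting solution quoted here are

    \frac{1}{2}\left(\frac{dr}{d\tau}\right)^2 - \frac{GM}{r} = -\frac{GM}{r_0},
    \qquad
    \frac{dr}{d\tau} = -\sqrt{\frac{2GM\,(r_0 - r)}{r\,r_0}},

    \tau(r) = \sqrt{\frac{r_0}{2GM}}
      \left[ r_0 \tan^{-1}\!\sqrt{\frac{r_0 - r}{r}} + \sqrt{r\,(r_0 - r)} \right],

which stays finite all the way down to r = 0.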
And the ripping apart occurs as being spaghetti-ized, that is, the force on the front gets to be very strong compared to the force on the back. So it'll just get stretched out along the direction of motion. Now the curious thing is what this looks like if we think of it not as a function of the proper time measured by the wrist watch of the object falling in but rather, we could try to describe it in terms of our external time variable. The variable t that appears in the Schwarzschild metric. And to do that, to make the conversion, we want to calculate what the dr dt is, instead of dr d tau. Like maybe an analogous formula, in terms of t. And to get that, we simply use the chain rule here. dr dt is equal to dr d tau-- which we've already calculated-- times d tau dt. And d tau dt is 1 over dt d tau. If you just have two variables that depend on each other, the derivatives are just the inverses of each other. So this could be written as dr d tau-- which we've calculated-- divided by dt d tau. And dt d tau we've really already calculated as well, because it's just given by this formula here. So we could write out what that is and figure out how it's going to behave as the object approaches the Schwarzschild radius. So it becomes dr dt is equal to-- I'll just write the numerator as dr d tau, given by that expression. But what's behaving in a more peculiar way is the denominator, which is h inverse of r plus c to the minus 2, h to the minus 2 of r times dr d tau squared. So now we want to look at this function h inverse of r. And this just means 1/h of r. It doesn't mean functional inverse. That is just equal to r over r minus r Schwarzschild. And we're going to be interested in what happens when r gets to be very near r Schwarzschild, because that's where the interesting things happen, as you're approaching the Schwarzschild horizon. And that means that the behavior of the numerator won't be important. The denominator will be going up, and that's what will control everything. So we can approximate this as just r Schwarzschild over r minus r Schwarzschild. And this is for r near r Schwarzschild. We've replaced the numerator by a constant. And then if we look at this formula, this is going to blow up as we approach the horizon. This is the square of that quantity. It will blow up faster than the first power of that quantity. And therefore, this will dominate the denominator of the expression. We can ignore this. When this dominates, the dr d tau pieces cancel. So that's nice. We don't even need to think about what the dr d tau is. And what we get near the horizon is simply a factor of c times r minus r Schwarzschild over r Schwarzschild. It's basically just h. This becomes upstairs with a plus sign. And the square root turns it into h instead of h squared. So this is the inverse of that. OK, now if we try to play the same game here as we did here, to determine how our time variable behaves as a function of r, instead of the proper time variable tau, what we find is that t of r-- this is for r near r Schwarzschild-- is about equal to minus r sub s over c times the integral up to r of dr prime over r prime minus rs. This is dr dt. Yeah, this was dr dt from the beginning. I forgot to write the r somehow. AUDIENCE: Doesn't that [INAUDIBLE]? PROFESSOR: Yeah, I didn't write the lower limit of integration. I was about to comment on that. The integrand that we're writing is only a good approximation whenever we're near r Schwarzschild. So whatever happens near the lower limit of integration, we just haven't done accurately.
So I'm going to just not write a lower limit of integration here, meaning that we're interested only in what happens as the upper limit of integration r becomes very near r Schwarzschild. And everything will be dominated by what happens near the upper limit of integration. AUDIENCE: So would you just integrate over [INAUDIBLE] for that? PROFESSOR: That's right, that's right. We just integrated over a small region near r Schwarzschild-- near r, which is also about equal to r Schwarzschild. And the point is that this diverges logarithmically as r approaches r Schwarzschild. So it behaves approximately as minus r Schwarzschild over c times the logarithm of r minus r Schwarzschild. So as r approaches r Schwarzschild, this quantity that's the argument of the logarithm gets closer and closer to 0. It gets smaller and smaller approaching 0. But the logarithm of a very small number is a negative number, a large negative number. And then there's a minus sign here. You get a large positive number and it diverges. As r approaches r Schwarzschild the time variable approaches infinity. And that means that at no finite time does the object ever reach the Schwarzschild horizon. But as seen from the outside, it takes an infinite amount of time for the object to reach the Schwarzschild horizon. As time gets larger and larger, the object gets closer and closer to the Schwarzschild horizon, asymptotically approaching it but never reaching it. So this, of course, is very peculiar, because from the point of view of the person falling into the black hole, all this just happens in a finite amount of time and is over with. From the outside, it looks like it takes an infinite amount of time. And weird things like this can happen because of the fact that in general relativity time is a locally measured variable. You measure your time, I measure my time. They don't have to agree. And in this case, they can disagree by an infinite amount, which is rather bizarre, but that's what happens. So according to classical general relativity, when an object falls into a black hole, from the point of view of the object nothing special would happen as that object crossed the Schwarzschild horizon. Everybody believed that that was really the case until maybe a couple of years ago. Now it's controversial, actually. At the classical level, everybody believes that's still true. I mean, classical general relativity says that an object can fall through the Schwarzschild horizon and then nothing happens. It's not really a singularity. But the issue is that when one incorporates, or attempts to incorporate, the effects of quantum theory, which nobody really knows how to do in a totally reliable way, then there are indications that there's something dramatic happening at the Schwarzschild horizon. The phrase that's often used for what people think might be happening at the horizon is the word firewall. So whether or not there is a firewall at the horizon is not settled at this point. Certainly, though, classical general relativity does not predict the firewall. If it exists, all the arguments that say it might exist are based on the quantum physics of black holes, and black hole evaporation, and things like that. As you know, quantum mechanically, black holes are not stable, either-- they evaporate, as was derived by Stephen Hawking in, I think, 1974. But that's strictly a quantum effect. It would go to 0 as h bar goes to 0, and, at the moment, we're only talking about classical general relativity.
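The near-horizon behavior just described can be summarized as

    \frac{dr}{dt} \approx -c\,\frac{r - r_S}{r_S}
    \quad (r \to r_S),
    \qquad
    t(r) \approx -\frac{r_S}{c}\,\ln(r - r_S) + \text{const},

so the coordinate time t diverges logarithmically as r approaches r_S, even though the proper time tau(r) remains finite.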
So the black hole that we're describing is perfectly stable. And nothing happens if you fall through the horizon. Except from the outside, it looks like it would take an infinite amount of time just to reach the horizon. So we'll stop there. I guess I'm not going to get to talk about the energy associated with radiation. But we'll get to that on Thursday. So see you folks on Thursday.
MIT_8286_The_Early_Universe_Fall_2013
7_The_Dynamics_of_Homogeneous_Expansion_Part_III.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation, or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: I think it's a good idea to review where we were because we're kind of in the middle of a discussion. We're actually on part three. And probably, there will be four parts altogether of our discussion of homogeneous expansion. So I have a few slides to just review where we were last time. We were building a mathematical model of our homogeneously expanding universe. And we modeled it as a finite-sized sphere where we promised that in the end we will take the limit as the size of that sphere goes to infinity and fills all of space. But it started with some initial maximum, R max i, i for initial. We arranged for it to have a uniform density, rho. We started at a time called t sub i. There was some initial maximum radius, R max i. And we also set this up to exhibit Hubble expansion. And we're going to try to calculate how it evolves from there. But we're going to start it Hubble expanding. And Hubble expanding means that we're starting with every particle having an initial velocity which is a constant, H sub i, the initial value of the Hubble expansion rate, times the vector r. That is, the vector that goes from the origin to the point where the particle is located. And that corresponds to Hubble expansion centered on the center of our sphere. So these are our initial conditions. We just put them in by hand because we think they form a good model for what our universe looks like. And then, the laws of evolution should take over to govern what's going to happen later. And since we haven't studied general relativity, we'll be using Newton's law of gravity to discover how it behaves. But I promised you at the beginning that we will, in fact, get exactly the same equation that we would have gotten had we used general relativity. And we'll talk later today, probably, about why that's the case. So this is the initial setup. Any questions about the initial setup? OK, then we derived a lot of equations. And everything really follows from the statement at the top here, which is understanding what Newton tells us about the gravitational field of a spherical shell. We could always think of our solid sphere as being made up of shells. So if we know how a shell behaves, we know all we need to know. And Newton told us. He told us that inside a spherical shell, the effect of gravity, which I'm describing in terms of the gravitational acceleration vector, little g, inside a shell, the gravitational field is exactly 0. The forces coming from all different parts of the shell, pulling outward, cancel each other. And the net force on any object anywhere inside the shell is exactly 0. Outside, the entire shell acts exactly as if it were a single point mass located at the origin with the same total mass, m. So incredibly simple. It's hard to believe it's so simple, but it is. By the way, if you use Gauss's law of gravity, it becomes very obvious that those statements are the right statements. Newton didn't know about Gauss's law of gravity, so Newton derived those statements by brute force integration, which is more of a tour de force. But it was something Newton was capable of doing, and he did.
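In symbols, Newton's two shell statements are

    \vec{g}(\vec{r}) = 0 \quad \text{inside the shell},
    \qquad
    \vec{g}(\vec{r}) = -\frac{G m}{r^2}\,\hat{r} \quad \text{outside the shell},

where m is the total mass of the shell and r is the distance from its center.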
To describe how the system is going to evolve, moving onward, we introduce something that's a little bit complicated, a function r, which is a function of two variables, r sub i and t. And r represents the radius at time t of the shell that was initially at radius r sub i. And our goal here is not just to keep track of the particle on the outside, which is for example, what Ryden does in her textbook. Ryden assumes that everything stays homogeneous. And then if you follow what happens to the outer edge, you know everything. But we're not going to make that assumption. We're going to conclude that it remains homogeneous, but we're going to derive it. Which means that we need to know the motion of every particle inside the sphere to be able to tell if it's going to stay homogeneous. And that's why we're introducing this more general description where r is a function of r sub i. So that function of the extra variable will tell us how every particle moves as the system evolves. We know that we're going to maintain spherical symmetry because we start with spherical symmetry and the force law respects spherical symmetry. So we're building that in from the beginning. We're not allowing things to depend on angular variables theta or phi. But assuming spherical symmetry, motion just is described entirely by giving the r-coordinate of each particle. And this function r of r i t does that-- exactly that. Then, we said that at a given radius, this description about shells tells us that the shells that are outside that radius don't do anything at a given radius. But the shells that are inside act like a point mass, as if it was all at the origin. So to understand how a given shell is going to evolve, all we really need to know is the total mass inside that shell. And that's given by M of r sub i, the mass inside the shell at radius r sub i. And that's just the volume of the shell initially times the initial mass density. As these shells move, the total mass inside a shell will remain exactly the same as it was as long as there's no crossings of shells. Now, the shell crossing issue is hard to talk about. But in the end, it just doesn't happen so you don't need to worry about it. But the argument was that initially we know the shells are moving apart from each other because we built in this Hubble expansion where everything is moving away from everything else. So if shells are ever going to cross, they're not going to cross immediately. It will take some time for these velocities to reverse, and the shells that were moving apart to move together and hit. So there's unambiguously at least a period of time where there are no shell crossings. And we could write down the equations that describe the motion during this period where there are no shell crossings. Now, if there was going to be a shell crossing, the equations that we're writing down would have to hold right up until the time of that first crossing. Because as long as there's no crossings, our equations are valid. Which means that if there was going to be a shell crossing, the equations we're writing down had better show it. Because the equations that we're writing down have to be valid right up until the instant of any possible shell crossing. And what we're going to find is that the equations are going to lead to just homogeneous evolution where there are no shell crossings. And therefore, that's the conclusion. If there were going to be any shell crossings, these equations would have to show it. 
They don't show it, so there are no shell crossings. So it's a complicated paragraph, but the bottom line is simple-- we can ignore shell crossings. And that means that the total mass inside of any shell will remain exactly constant with time, given by its initial value. And this formula is the initial value. And therefore, it holds at all times. Any questions about that? Any questions about the shell crossing issue? OK, good. So whoops, sorry about that. So M of r sub i is the mass inside the shell at radius r sub i. And then we can write down-- now we use Newton's law of gravity directly. We can write down the acceleration of a given particle in terms of its radius r and its initial radius r sub i. Its initial radius determines how much mass is inside. M of r sub i is independent of time. But the actual radius it's at determines how far away it is, or describes how far away it is from the origin. And that's the 1 over r squared that appears in the force law. So we have the time dependent r in the denominator and the time independent initial r sub i that appears in the numerator. And it's all proportional to a unit vector r hat pulling everything radially inward because of the minus sign in front. So gravity is pulling everything inward, which is what we'd expect. So this formula is the key formula. It's a vector formula, but we know that all the motion is radial. So all we really have to keep track of is the radius as a function of time. So we can turn this vector formula into a formula for little r itself. Just the radius number, the radial coordinate. And we get r double dot is minus 4 pi over 3 G r sub i cubed rho sub i, taking the formula for M of r sub i from the line above, divided by r squared. And the r in this formula-- I didn't write the arguments, but it means this function r, which is a function of two arguments, r sub i and t. So this differential equation now governs our entire system and tells us everything we need to know or everything we can know about the actual dynamics. But to solve a differential equation of that sort, a second order differential equation, we need initial conditions. And we already described the initial conditions in words. Now we have to just figure out what those initial conditions are saying about r and r dot. And the answer is straightforward. We argued it last time. The initial conditions are that r at time t sub i is just r sub i. That was really the definition of r sub i in the first place. And r dot is just H sub i times r sub i, coming from the formula we had for the initial velocities. So these three equations, the two initial conditions and the differential equation, lead in principle to a mathematical solution that's completely unique and determined by those equations. And our goal now is just to figure out what that solution looks like. And we discovered a marvelous scaling property. That is, instead of talking about r, we divided r by r sub i and defined a new function, which we initially called u. Initially, thinking of it as a function of these two variables. We can write down new equations for u. And those equations end up not having any r sub i's in them at all. And once we realized that, we realized we don't need to call it u of r sub i and t. It's really just a function of t. And at that point, we renamed it because we also realized that it actually is our old friend the scale factor, a of t. So a of t is just r of r sub i and t divided by r sub i.
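Putting this review together in symbols, with r_i labeling the shell:

    M(r_i) = \frac{4\pi}{3}\,\rho_i\,r_i^3,
    \qquad
    \ddot{r}(r_i,t) = -\frac{4\pi}{3}\,\frac{G\,\rho_i\,r_i^3}{r^2},
    \qquad
    r(r_i,t_i) = r_i,
    \quad
    \dot{r}(r_i,t_i) = H_i\,r_i,

and dividing by r_i gives the scale factor a(t) = r(r_i,t)/r_i, which satisfies

    \rho(t) = \frac{\rho_i}{a^3(t)},
    \qquad
    \ddot{a} = -\frac{4\pi}{3}\,\frac{G\rho_i}{a^2} = -\frac{4\pi}{3}\,G\,\rho(t)\,a,
    \qquad
    a(t_i) = 1,\quad \dot{a}(t_i) = H_i.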
And then the reason this is a scale factor is seen most clearly from that equation, which is really this equation just rearranged. The physical distance of a particle from the origin is equal to the scale factor times r sub i where r sub i plays the role of the coordinate distance. r sub i is a time-independent measure indicating which shell you're talking about. So the equations for a then are the equations for u, which are the equations for r just divided by r sub i. And we can write down what those equations are. We have a differential equation for a and two initial conditions where r sub i has dropped out all together. The differential equation is given immediately from the one we had up here, but we could also we write it in terms of what rho of t is. I didn't write the equation here because I guess I was running out of room. But we also figured out how rho of t behaves. And it behaves in the obvious way. As the space expands, the density just goes down as the volume. And the volume grows like a cubed, the cube of the scale factor, because volume's proportional to radius cubed. So rho of t is just rho sub i divided by a of t cubed. And putting that in, we can go from that equation to this equation. And this equation makes no reference anymore to the initial time. It's just an equation for what deceleration you see, what value of a double dot you see, as a function of the mass density and a itself. OK, any questions about any of those differential equation manipulations? Yes. AUDIENCE: In the homework, it says that this equation is not entirely general. We can't use it in all cases. Whereas, the other version that we get from the energy conservation is completely OK? PROFESSOR: That is correct, yes. AUDIENCE: It says it in there, but why is that? PROFESSOR: Why is it true? Well, as long as you have only non-relativistic matter, which is what we're talking about here, both of these-- this equation is golden and so is the equation we're about to talk about the derivation of, the conservation of energy equation. So as long as we have the context in which we derived it, it's completely valid. But on the homework set, we're talking about more general cases. We gave a different formula for the scale factor, which corresponds to a different situation in terms of the underlying materials that are building that universe. And where the change occurs is when one introduces a nonzero pressure. This gas of particles that we're talking about is just non-relativistic particles moving with the Hubble expansion. There's no internal velocity which generates a pressure. And it's pressure that makes a difference. This formula assume zero pressure. We will learn later how to correct it when there's a nonzero pressure. And the other formula doesn't depend on pressure, so it's valid whether there's pressure or not. But at the moment, we have no real way of knowing that. We'll talk later about why that's true. Other questions? No. OK, great. OK, one more slide here, not too much on it. At the end of the last class, we took the equation that I just wrote on the previous slide. I just copied it to this slide, the equation for a double dot, written in terms of the initial mass density. And discovered that it can be integrated once to produce a kind of a conservation of energy equation. And all you do is you start with this, write it by putting everything on one side of the equation, a double dot plus 4 pi over 3 G rho i over a squared equals 0. And then, multiply the whole equation by a dot. 
And a dot is called an integrating factor. It turns the expression into a total derivative. So once you write it this way, it is equivalent to dE dt equals 0, where E is just defined to be this quantity-- that would be better written with a triple equal sign. E is just defined to be that quantity. And if you then write down what dE dt means, it means exactly that. So it's the same equation. So given our second order equation, we can write down a first order equation, which is that E is equal to a constant. And we commented last time that the physical interpretation of E is-- I'm not sure what to say. There are multiple physical interpretations of E is probably what I want to say. And one physical interpretation is if you multiply by the right factors, it does describe the actual energy of a test particle just on the boundary of our sphere, on the outer boundary. It doesn't really describe directly the total energy of a particle inside that sphere because calculating the potential energy of a particle inside the sphere is more complicated. And it doesn't give you the simple answer. So it doesn't really describe particles on the inside, except you could argue that if you're talking about a particle on the inside, the particles outside of it don't matter, and you pretend they're not there. And then it does describe the energy. That is, you could think of any particle as being on the outside of the sphere and ignore what's outside it. But that's extra sentences that you have to put in. On the homework, you will also discover that for this finite-sized Newtonian sphere, there's certainly a well-defined Newtonian expression for the total energy of the sphere. And that's also proportional to this E. So by multiplying it by a different constant, you can turn it into the total energy of the sphere. So it's actually related to energy and it's definitely conserved. Those are the important statements to take away. And that's where we left off last time. And we'll pick up from there now pretty much on the blackboard for the rest of lecture. Are there any further questions about these slides? OK. In that case, we will go on. The first thing I want to do is to take the same conservation law that we have up there and rewrite it in a way that's more conventional. And perhaps, more useful. But certainly, more conventional. We started with knowing that a quantity called E is conserved. And it's equal to 1/2 a dot squared minus 4 pi over 3 G rho i over a. OK, then we also know that rho of t is equal to rho sub i divided by a cubed of t, which just says that matter thins out with the volume which grows as the cube of the scale factor. And that can be used to manipulate this equation. For reasons that will become clearer in a minute, I'm just going to manipulate this equation by multiplying it by 2 over a squared. Just because this will get me the equation I'm trying to get to. So if I multiply the left-hand side by 2 over a squared, I have to multiply the right-hand side by 2 over a squared. The 2 cancels the half. The first term becomes a dot over a squared. And you might remember that a dot over a is the Hubble expansion rate, so it has some physical significance. And then, minus the 2 turns the 4 pi over 3 into 8 pi over 3. And the a squared multiplies the a to make an a cubed. And then here, we have rho i over a cubed, which in fact is just the current value of rho. So we can rewrite this as a dot over a squared minus 8 pi over 3 G rho. No a's anymore on the right-hand side. Well, no a's anymore in this term.
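In symbols, the conserved quantity and the rewriting being carried out here are

    E \equiv \frac{1}{2}\dot{a}^2 - \frac{4\pi}{3}\,\frac{G\rho_i}{a},
    \qquad
    \frac{dE}{dt} = \dot{a}\left[\ddot{a} + \frac{4\pi}{3}\,\frac{G\rho_i}{a^2}\right] = 0,

and multiplying E = constant by 2 over a squared, using rho = rho_i / a^3, gives

    \left(\frac{\dot{a}}{a}\right)^2 = \frac{8\pi}{3}\,G\rho + \frac{2E}{a^2}
      = \frac{8\pi}{3}\,G\rho - \frac{k c^2}{a^2},
    \qquad k \equiv -\frac{2E}{c^2},

which, with the definition of k introduced just below, is the standard Friedmann form.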
OK, now the convention that brings our notation into contact with the rest of the world. Nobody talks about E, by the way. That's just my convention. But to make contact with the rest of the world, the rest of the world talks about a number called little k, lowercase k. And it connects to our notation by being equal to minus 2E divided by c squared. And with that connection, we can write our conservation law in what is the standard way of writing it, at least in many textbooks. And that's it. So I put a box around it. And this equation was first derived by Alexander Friedmann using general relativity in 1922. And it is, therefore, usually called the Friedmann equation. Alexander Friedmann, by the way, was a Russian meteorologist by profession. But as a meteorologist, he knew a lot about differential equations. And when general relativity came out, he got interested in it and was the first person to derive using general relativity the equations that described an expanding universe. And he wrote two famous papers-- now famous papers-- in 1922 and 1923. One of them talking about the system of equations where k is positive and another talking about where it's negative. I forget which order they were in. But they correspond to open and closed universes, which we'll talk about more in a few minutes. Now, I should just remind you, to have our equations together, that we also had the all-important equation for a double dot. And as I was just describing in answer to a question, we don't know yet how to generalize this to other kinds of matter besides the non-relativistic dust that we just derived them for. They're certainly both correct for our non-relativistic dust. But when we try to generalize them, what we'll find is that the top equation will remain true exactly for any kind of matter, while the bottom equation assumes that pressure equals 0. Now, the standard terminology is to call the top equation the Friedmann equation. In fact, both of these equations, with this one including the pressure term that we don't have yet, appeared in Friedmann's original papers. So I usually refer to these two equations as the Friedmann equations-- plural. But many textbooks refer to just the top one as the Friedmann equation and don't give a name to that equation, which is also OK if you want. Yes. AUDIENCE: Didn't we get the top equation from the bottom equation? PROFESSOR: We did. That's right. So how does that jibe? The answer is-- and we'll be coming to this later. But the answer is that when we got the top equation from the bottom equation, we used that equation. And this equation will no longer hold when there's a significant pressure. And in fact, when we derive it later-- I forget what order we'll do things in. But we'll make sure that all three of these equations are consistent when we include pressure. The reason the top equation changes if you include pressure-- may not be obvious. But if I tell you why it happens, it will become obvious. The top equation looks like it's just how things thin out. Like a cubed. But rho is the total mass density. And relativistically, it's equivalent to the energy density. If you just multiply by c squared, that becomes the energy density by the E equals mc squared equality. So it's a question of how much energy there is inside this sphere or box. And if you imagine a box changing size, if it's filled with a gas with a positive pressure, as that box changes size, the pressure does work on the boundary. If you think of it as a piston.
And if we have a positive pressure and a gas expands, it loses energy. And relativistically, that means it has to also lose mass. The total mass inside the box does not remain constant as it expands, which is the idea that we use when we derive that rho i over a cubed. So rho i over a cubed is the right behavior for the total mass density or energy density for a zero pressure gas. But when you include pressure and take into account relativity, that's not the right formula anymore. AUDIENCE: The top one, it just cancels out somehow? PROFESSOR: Well, the pressure ends up canceling out, so that this ends up still being true and this ends up being different. And we'll see later how it happens. I just want to indicate where the changes are going to be. We'll see what the changes are when we get there. AUDIENCE: What happened to the third factor of a in the second equation? PROFESSOR: There's a factor of a missing, sorry. Yes, thank you very much. It's important to get the equations right. That wasn't Friedmann's equation. That was Alan's equation. Now it's Friedmann's equation. He got it right. OK, any other errors or questions to bring up? OK. Now is probably a good time to talk about the question of why we were so fortunate to discover that the Friedmann equation that we derived agrees exactly with general relativity. There is a simple reason for it. I don't think it's an accident at all. The reason for it, as I understand it, is that we assume from the beginning-- and we would be assuming this whether we're talking about the Newtonian calculation or the corresponding general relativity calculation. We assume from the beginning that we are modeling a completely homogeneous system, where every part of it is identical to every other part. And once you assume that homogeneity, it means that if you know what happens in a little box, a meter by a meter by a meter say. If you know what happens in a little box, you know what happens everywhere because you assume that what happens everywhere is exactly the same as what's happening in that box. So that implies that if Newton is right for what happens in the box, Newton has to be right for what's happening in the universe. And we do expect Newtonian physics to work on small scales, scales of a meter, and small velocities. The Hubble expansion of a meter is negligible. So we expect Newton to give us a proper description of how the system is behaving on small scales. And the assumption of homogeneity guarantees that if you understand the small scales, you also understand the large scales. So I think we are guaranteed that this had better give us the same results as general relativity or else Newtonian physics is not the proper limit of general relativity. But it is. We would not accept general relativity if it did not give Newtonian physics for small scales and low velocities. So we expect to get the same answer as Gr and we do. This is exactly what Gr would give. And this is exactly what Gr would give also for the case where there's no pressure-- the case we're doing. OK, any questions about that? OK, next item. I would like to say a couple words about the units in which we're going to define these quantities. So far, in our mathematical model, r and r sub i are distances. And therefore, they're measured in whatever units you're using to measure distances. And I'll pretend we're using SI units. So I'll say meters. We could use light years, or whatever. It doesn't really matter. But they're both measured in ordinary distance units. 
And earlier when we talked about scale factors and things, I told you that many books do it this way. Think of both the co-moving coordinates and the physical distances as being measured in meters and the scale factor being dimensionless. But I told you I don't like to do it that way because I think it's clearer to recognize that these co-moving coordinates don't have any real relationship to actual distances. At least not as time changes. So for me, it's better to have a different unit to describe the co-moving coordinate systems. So I would like to introduce that here. And all I need to do really is say that let the unit of r sub i be called a notch. Now, we already called it a meter, but that doesn't really mean that a meter is the same as a notch. Because when we called it a meter, we were not really taking into account the fact that it's only a meter at time t sub i. So another way of describing this definition, which might not sound like we're trying to redefine the meter, is to say that the statement r sub i equals 5 notches, which is the kind of statement I'm going to make now because I'm only going to talk about r sub i measured in notches. When I say r sub i is 5 notches, that's equivalent to saying that the particle labeled by r sub i equals 5-- 5 notches-- was at 5 meters from the origin at time t sub i. So giving the value of a co-moving coordinate in notches tells you exactly what distance it was from the origin at time t sub i. Now, the reason why I don't use this to just say, well, why don't we call it meters, is that we're now going to forget about t sub i. If you look at equations we have, t sub i no longer appears in any of them. t sub i was just our way of getting started. And we could have started at any time we wanted. And once we have these equations, we can talk about times earlier than t sub i, times later than t sub i. And there's nothing special about t sub i anymore. But r sub i we're going to keep as our permanent label for every shell, which means for every particle there will be a value of r sub i attached to that particle which will be maintained as the system evolves. And it will be clearly playing the role of what we've called the co-moving coordinate. And therefore, we will want to keep r sub i and we'll want to call its unit something. And I'm just saying I'm going to call them notches. You can call them meters if you want, but I'm going to call them notches. So using this language, I'm not changing any of the equations that we wrote. So we will still have r being equal to a of t times r sub i. But now, r will be measured in meters. a of t I will think of as meters per notch. And r sub i will me measured in notches. And the scale factor a of t will be playing the same role it played ever since we introduced the word scale factor. It just means that when the scale factor doubles, all distances in our model double exactly. Now that we have the sort of new system of units, I just want to work out the units of an object where the units are not all obvious. What are the units of this thing that we've defined that we called k, which is related to this thing that we defined that was called E, where E was not really an energy? But we can work out what k is. I'm using square brackets to mean units of. k can be thought of as being defined by this equation. Or in any case, this equation certainly must be dimensionally consistent because we derived it and you didn't point out that I made any mistakes, so I must not have. 
So the units of kc squared over a squared should be the same as the units of a dot squared over a squared. And if I multiply through to write units of k-- to relate them to a dot, we get the units of k have to be the same as the units of 1/c squared times a dot squared. The a squareds in the denominator here just cancel each other. So the units of k have to be the same as the units of 1/c squared a dot squared. And we know what they are. The units of 1/c squared are seconds squared per meter squared. Meters per second, squared, but upside down. That's the 1/c squared factor. And a dot is meters per notch per second because of the dot. So meters per notch would be a. a dot would have an extra 1 over seconds. And the whole thing gets squared because it's a dot squared here. And you see then that the meters squared cancel. The seconds squared cancel. And we just get the peculiar answer 1 over a notch squared. Now, there's probably not a lot of intuition behind that. But what it clearly does say is that we can make k change its numerical value by changing our value for the notch. And that's important to know. And it's a clear consequence of what we just did. So now we can talk about different conventions that people use for defining the notch. And hence k-- which are clearly related to each other, as we've now learned. So first of all, in the construction that we just did-- starting out with our sphere, and letting it grow, and defining the maximum sphere as r max comma i and so on. In that construction, our initial value of t-- excuse me, our initial value of a, a of ti, was one. And that was the way we first did it, where all lengths were measured in meters. And with our new definition that r sub i is measured in notches so that a is meters per notch, it means that in the system we just had, a of t sub i was 1 meter per notch. And since we already did this, we don't really want to change it. But I point out that t sub i, if you look at our equations, survives in only one place, which is in this equation. It has disappeared from every other equation we're going to be keeping. So we are perfectly safe in just forgetting about this equation. Or if we want to remember about it, we could just say, well, yeah, t sub i had some significance. It was the time at which the scale factor happened to have been equal to 1 meter per notch. But otherwise, it has no importance. There's a different time when the scale factor was 10 meters per notch, or 1 light year per notch. So since t sub i is of no relevance whatever, we can safely forget the above equation. Or we could think of it as the definition of t sub i, where we don't care anything else about t sub i. So it was just some symbol that was used in some earlier calculation. So we now just have a scale factor a of t. And we can talk about how we might normalize it. And there are basically two important conventions that are in use in textbooks. Some textbooks, which include Barbara Ryden's textbook that we're using, define a of t today to be equal to 1, which I will call 1 meter per notch. So a of t is just defined to be 1 today as a common notation. It's notation that Barbara Ryden uses. And that makes a notch equal to a meter today. There's another common convention, which is to recognize that since k has units of 1 over notches squared, we can make k any value we want without changing any of the physics, just changing our definition of the notch, which is up for grabs. Nobody has yet defined the notch. We're defining it now.
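The dimensional bookkeeping just described, written out:

    [a] = \frac{\text{meter}}{\text{notch}},
    \qquad
    [\dot{a}] = \frac{\text{meter}}{\text{notch}\cdot\text{second}},
    \qquad
    [k] = \left[\frac{\dot{a}^2}{c^2}\right]
        = \frac{\text{s}^2}{\text{m}^2}\cdot\frac{\text{m}^2}{\text{notch}^2\,\text{s}^2}
        = \frac{1}{\text{notch}^2}.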
So we can choose the definition of a notch to make the value of k something simple. And the obvious choice for the simplest real number that you could imagine that's not 0 is 1. So we could choose the definition of the notch to make k equal to 1. Except that we can't change the sign of k. The sign of k makes important differences in this equation. And we don't want to make the definition of a notch negative or imaginary, I guess, is what you need to change the sign of k. So as long as notches are real, we can only change positive k's to different positive k's and negative k's to different values of negative k. So the convention would be that when k is not equal to 0-- it can be 0 as a special case. But when k is not equal to 0, define the notch so that k is equal to plus or minus 1. I would say that this convention is more common than that convention of the books that I've read in my lifetime, but both conventions are in use. And one sees from this dimensional relationship that one can certainly choose a notch to make k if it's nonzero have any value you want of the same sign. And that means you can always make k plus or minus 1 if it's not 0. The books that use this as their convention, generally speaking, leave the notch undefined when k equals 0. Undefined means arbitrary. And there's no problem with that because k and the notch never really appear in final physical quantities. The notch always was just your choice of how to write down your co-moving map of what the universe looks like. OK, any questions about these funny issues involving units? OK, good. The next thing I want to do now is to start talking about solutions to this equation. And I guess I'll leave the equation there and start on the new blackboard. OK, I'm going to rewrite the equation almost the way it was written on the top there. In fact, exactly as it was written on top there. I'm going to rewrite the equation as E equals 1/2 a dot squared minus 4 pi over 3 G rho sub i over a. And the reason I'm writing it in this way rather than any of the other six or seven ways that we've written it, is that this way the only time-varying thing is a itself. And if we want to talk about what the differential equation tells us about the time variation of a, it helps a lot if we're writing an equation where a of t is the only thing that varies with time. And this is at least one way of doing that. So in particular, I used rho sub i rather than rho. So the behavior of this equation might very well depend on the sign of E. And we'll see that it does. And if we think it might depend on the sign of E, we realize from the beginning that E could be positive, negative, or 0. There are those three cases. So those are the cases we want to look at. E can be positive, negative, or 0. So we'll take them one at a time. It will help to make things completely obvious to rewrite this as an equation for a dot squared. I'll multiply by 2. And write this as equation a dot squared is equal to 2E plus 8 pi over 3 G rho sub i over a. 8 pi instead of 4 pi because we multiplied by 2. And we notice that this term, proportional to rho sub i over a, is unambiguously positive. We're not going to have any negative mass densities in our problem. There's no way that can happen. And a is always positive. So the right term is positive. a dot squared had better be positive because it's the square of something. E could, in principle, have either sign. And we'll talk about both cases, or the 0 case. 
But if we start with the case where E is positive, just to consider these three cases one at a time. So suppose E is greater than 0. And I remind you that E and k had opposite signs. So that would imply that k was negative. So if we start by considering the k negative case or the E positive case, then we see that we have a positive number here and a positive number there. So they will add up to always give us a positive number. a dot squared will always be positive. And it will just mean that a dot will be positive. Square root of a positive number is a positive number. At least it's only the positive square root that matters here. So a will just keep growing forever in this case. a dot squared will never fall below 2E. So it will be a lower bound to a dot squared. And that means there will always be a minimum expansion rate that the universe will have. So in this case, a increases forever. And that's called an open universe. And it's one of the three possibilities that we're going to be investigating here. Next case is E less than 0, which in the more common notation of k means k is greater than 0. In this case, if you think of E as an energy, which it really is, it means we have less than 0 energy. Which means that we basically have a bound system. And the equation tells us that it acts like a bound system. We don't have to rely on that intuition, but that is the right intuition. The equation up there tells us that if E is negative, the total right-hand side had better be positive because the left-hand side is positive and the left-hand side cannot go negative. But this term is going to get smaller and smaller as a increases. And as this term gets smaller and smaller, it runs the risk of no longer outweighing this term and giving possibly a negative answer. And what has to happen is a cannot get any bigger than the value it would have where the right-hand side would vanish. So a continues to grow because a dot is positive. This gets smaller and smaller until this term equals that term in magnitude. And then, a dot goes to 0. What happens next is not completely obvious from this equation, but it means that we have an expanding universe that's reached a maximum size and then stopped. Then, what is actually obvious from this equation is that it will start to collapse. So this case corresponds to a universe that reaches a maximum size and then turns around and collapses. So a has a maximum value. And we can read off from that equation what it is: a max is just the value that makes the right-hand side of that equation 0, which is minus 4 pi G rho sub i divided by 3E. And remember, E is negative for this case, so this is a positive number. So a has some positive maximum value. Reaches that value, and then turns around and collapses. Yes? AUDIENCE: Sorry, I don't know if you said this already, but since a dot squared is equal to some quantity, when you solve for a dot, you can have a positive and a negative solution. Why do we discount the negative solution? PROFESSOR: OK, very good question. The question, if you didn't hear it, is: why do we discount the negative solution when we have an equation for a dot squared? Couldn't a dot be positive or negative? And the answer is it certainly could be either. And both solutions exist as valid solutions to these equations. But we started out with an initial condition that a dot was positive. Our initial value of a dot was H i. And once it's positive, it can't change sign according to that equation.
Except by going through 0, which is what we're talking about now. But it will only change sign when it goes through 0. So it reaches a maximum value, then it does collapse. And in the collapsing phase, that same equation, a dot squared equals the right-hand side, holds where it would be the negative solution that describes the collapsing phase. So the verbal description of what's happening here is that the universe reaches a maximum size and then collapses. And it collapses all the way to a equals 0 in this model. And that's often called the Big Crunch for lack of a better word. The Big Crunch being the collapsing form that corresponds to the Big Bang, which is the instant which all this starts. And this was called an open universe. As you could probably guess, this is called a closed universe. OK. And now finally, we want to consider a case where E is not positive and not negative. The case that we're left with is E equals 0. And that's called the critical case. So the critical value for E is E is equal to 0. And that means that k is equal to 0 as well. It implies k equals 0. And notice that this is a special case. E is a real number. It can be positive, negative, and 0 is just a particular value on the borderline between those two. For the people who are in the habit of rescaling notches so that k is always plus 1, minus 1, or 0, it makes it sound like there are three totally distinct cases. But that's only because of the rescaling that those people are doing. If you keep track of E as your variable, which you certainly can, you do see that the flat case, E equals 0-- the critical case-- is really just the borderline of the other two. And it's therefore, arbitrarily close to both of the other two. It really is where they meet. But working out the equations, we have in this case a dot over a squared is equal to 8 pi over 3 G rho. And in general, it's minus kc squared over a squared. But we're now considering the case where that vanishes. And that means we have a unique value for rho in terms of a dot over a. And at this point, it's worth reminding ourselves that a dot over a is just H. So this is H squared. So for this critical case, the case that's just on the borderline between being open and closed-- and we'll be calling it flat. For this critical case, we have a definite relationship between rho and H. So rho has to equal what we call the critical density, which you get by just solving that equation. And it's 3H squared over 8 pi G. And we see, therefore, that rho being equal to rho c is this dividing line between open and closed. And if you think back about the signs of what we had, what you'll see is that rho bigger rho c is what corresponds to a closed universe. Rho less than rho c is what corresponds to an open universe. And rho equals rho c can be called either a critical universe or we'll be calling it a flat universe. And the meaning of the word "flat" will be motivated later. For now, these words-- open, closed, and flat-- refer to the time evolution of the universe. We'll see later that it's also connected to the geometry of the universe, but we're not there yet. Then the word "flat" will make some sense. Yes. AUDIENCE: How do we know there's not some very large entity, like some cluster of black holes or something that renders all of this not applicable to our universe? PROFESSOR: OK. The question is, how do we know there's not some humongous perturbation, some huge collection of black holes that renders this all inapplicable to our universe? 
The answer is that it works for our universe. That is, observationally we can test these things in a number of ways. Tests include calculations of the production of the light chemical elements in the Big Bang. Tests include making predictions for what the cosmic background radiation should look like in detail. And those tests work extraordinarily well. So that's why we believe the picture. But you're right, we don't have really direct confirmation of most of this. And if there was some giant conglomeration of mass out there someplace, it might not have been found yet. But so far, this picture works very well. That's all I can say. And there really is quite a bit of evidence. We'll maybe talk more about it later. Any other questions? OK. So having understood the importance of this critical density, it might be nice to know what the value for the critical density for our universe is. And we can calculate it because it just depends on the Hubble expansion rate, and the Hubble expansion rate has been measured. So if we try to put in numbers, it's useful to write the present value of the Hubble expansion rate, as it's often written, as 100 times h sub 0 kilometers per second per megaparsec. So this defines h sub 0, little h sub 0. And I think the main advantage of using this notation is that you don't have to keep writing kilometers per second per megaparsec, which gets to be a real pain to keep writing. So little h sub 0 is just a dimensionless number that defines the Hubble expansion rate. And it does then allow you to write other formulas in simple ways. Numerically, Newton's constant you can look up. It's 6.672 times 10 to the minus 8 centimeter cubed per gram per second squared. And when you put these equations together, all you need to know is G and H squared to know what rho critical is. You find that rho critical can be written, initially for any H0, as 1.88 h0 squared-- coming from the h squared in the original formula-- times 10 to the minus 29 grams per centimeter cubed. And note the whopping smallness of that 10 to the minus 29. The mass density of our universe is, as far as we know, equal to this critical density. We know it's equal to within about a half of a percent or so. And h0 is near 1. h0, according to Planck, is 0.67-- according to the Planck satellite measurement of the Hubble parameter. And if you put that into here, you get the critical density is about 8.4 times 10 to the minus 30 grams per centimeter cubed. And that is equivalent to about 5 protons per cubic meter. So I've written the answer in terms of grams per cubic centimeter because to me that's a very natural unit for density, because it's the density of water. We're saying that the average density of the universe is only about 10 to the minus 29 times the density of water. So it's an unbelievably empty universe that we're living in. It's hard to believe the universe is that empty, but there are large spaces between the galaxies that we look at. So the universe is incredibly empty. And in fact, this is a vastly better vacuum. An average part of the universe is a vastly better vacuum than can be made on Earth by any machinery that we have access to. So the best vacuum is empty space, just middle of nowhere. And it's vastly better than what we can actually produce on Earth. Yes. AUDIENCE: I see you used protons here, 5 protons per cubic meter. So is this density corresponding to density of baryonic matter, or all types of matter?
PROFESSOR: This is actually all types of matters, even though I'm using my proton as a meter stick. But it is the total mass density of the universe that's very close to the critical value. And I was just about to say something about what the total mass is made up of. Cosmologists define-- this was certainly mentioned in my first lecture-- a Greek letter capital Omega to mean the actual mass density of the universe divided by this critical density. So omega equals 1 in this language corresponds to a flat universe at this critical point. Omega bigger than 1 corresponds to a closed universe. And omega less than 1 corresponds to an open universe. And today, we know that omega is equal to 1 to an accuracy of about a half of a percent. To a very good accuracy we know omega is very close to 1. It's made up of different contributions. And these tend to vary with time-- as the best measurements tend to vary with time by a few percent. But omega matter-- and here I mean visible plus dark matter-- is roughly about 0.3. And most of the universe today, as we mentioned earlier, the universe today is pretty much dark energy-dominated. So omega dark energy is about equal to 0.70. And omega total is pretty close to 1. Plus or minus about a half of a percent. So one of the implications here is that we've been assuming in our calculation so far that we're talking about nothing but non-relativistic matter. That's actually only about 30% of the actual matter in the current universe. So I did say this is the beginning. The current universe today does not obey the equations that we've written down very accurately. It's pretty far off. But the equations that we wrote down are pretty accurate for the history of our universe from a period of about 50,000 years after the Big Bang up to about 9 billion years after the Big Bang. Yes. AUDIENCE: Before dark energy was discovered, did they think omega was [INAUDIBLE]? PROFESSOR: Yes. At least many people did. Before dark energy was discovered, there was a controversy in the community over what we thought omega was. Those of us who had faith in inflation believed that omega would be 1 because that's what inflation predicts. Astronomers who just had faith in observations believed that omega was 0.2 or 0.3 because that's what they saw. And the truth ended up being somewhat in between in the sense that omega total we now all agree is very close to 1 as inflation predicts. But it's still true that the stuff that the astronomers saw at this earlier time did only add up to 0.2 or 0.3. So they correctly estimated what they were looking at and they had no way of knowing that there was also this dark energy component until it was finally discovered in 1998. Yes. AUDIENCE: If we don't know what dark energy is, observationally how have we been able to measure that? PROFESSOR: How do we measure it so accurately if we don't know what it is, right? Right. Well, the answer is while we're not sure what it is, we actually do think we know a lot of its properties. And essentially, almost all properties that are relevant to cosmology. We just don't know what's sort of like inside. So we know it creates repulsive gravity. We know how much repulsive gravity it creates. And we also know to reasonable accuracy how the dark energy has been evolving with time, which is really that it's not been evolving with time. And that determines what its pressure is. It determines, in fact-- to not evolve with time we'll see later requires the pressure to be equal to the negative of the energy density. 
Pressure is related to how energies change with time, as I mentioned a few minutes ago in a different context. If you have a box that expands and there's a pressure, the pressure does dp dv work on the box. And you can tell how much the energy in the box should change for a given pressure. And we'll do this more carefully later, but to have the energy not change at all requires a pressure, which is the negative of the energy density. So we know how much acceleration the dark energy causes. We know to reasonable accuracy and we assume it's true that the pressure is equal to minus the energy density. And that's all you need to know to calculate how much of it you need to account for that much acceleration. And that's how it's done. Any other questions? OK next thing we want to do is to actually solve this equation for the easiest case. We'll solve it in general later, but the easiest case to solve it is the case of the critical case. We only have a few minutes, but it only takes a few minutes to solve the equation for this case. It should be over here. So for the critical case, it's the case E is equal to 0. And therefore, we just have a dot squared is equal to a constant divided by a. And it won't really matter for us right now what this constant is. So I don't even have to keep track of it. I'll just write it as const, C-O-N-S-T. And I'll take the square root of this equation because it's easier to know what a dot is than to know what a is. Easier to make use of knowing what a dot is. So I can rewrite that equation as a dot, which I'll now write as da dt, to be a little more explicit about what we're talking about, is equal to a constant over a to the 1/2. So this now is the k equals 0 evolution. So this is just the same Friedmann equation rewritten for the special case k equals 0. And now I'm just going to perform the amazingly complicated manipulation of multiplying both sides of the equation by a to the half. So we have a to the half. I'm also going to multiply by dt. So a to the half da will be equal to a constant times dt. And now we can just integrate both sides as an indefinite integral. And integrating both sides as an indefinite integral, the left-hand side becomes-- go back over here. 2/3 a to the 3/2 is equal to a constant times t plus an arbitrary other constant of integration. This is the most general equation, which when differentiated gives you this. And now, this equation can be solved to tell us what a is as a function of t. But before I do that, I'm going to say something about c prime here. c prime is allowed by the integration. But remember that when we defined our scale of t, we just started at some arbitrary time, ti, which we didn't even specify. So there's no particular significance to the origin of time in the equations that we've written so far. So we're perfectly free to shift the origin of time by just redefining our clocks. Cosmic time, remember, is defined in a way which makes it uniform throughout the universe by our construction. But we haven't said anything yet about how to start cosmic time. But now, we have a good way to start it. In this model, there is going to be a time at which a is going to go to 0. No matter what we choose for c prime here, there will be some t which will make the right-hand side 0. And therefore, a 0. And that's the instant of the Big Bang. That's when everything starts. a never gets smaller than 0. So it's very natural to take that to be defined to be the 0 of time. So that's what we're going to do. 
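Collected in one place, the integration just carried out for the E = 0 case reads as follows (writing the unspecified constant in a-dot squared = const over a as C, so the constant that appears after taking the square root is the square root of C):

```latex
% Flat (E = 0) case: separate variables and integrate both sides indefinitely.
\dot{a}^{2} = \frac{C}{a}
\;\;\Longrightarrow\;\;
a^{1/2}\,\frac{da}{dt} = \sqrt{C}
\;\;\Longrightarrow\;\;
\int a^{1/2}\,da = \int \sqrt{C}\,dt
\;\;\Longrightarrow\;\;
\tfrac{2}{3}\,a^{3/2} = \sqrt{C}\,t + c' .
```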
So we're going to define t equals 0 to be when a of t equals 0. That's just a choice of the origin of time, which you're certainly allowed to do without contradicting anything else that we've said. And that means we're just setting c prime equal to 0. So that when that's 0, that's 0. So this implies c prime equals 0. And that then implies-- we could take the 2/3 power of this equation. And I told you we don't really care what that constant is. So therefore, we don't really care about what that constant is. What we get is that a is equal to some constant, not necessarily related to any of the constants we've said so far. Although, you can calculate how it is. But some constant times t to the 2/3 power. Or equivalently, you could just write a is proportional to t to the 2/3, which has the same content. Now, you might think you'd want to know what the constant of proportionality is. But remember, the constant of proportionality just depends on the definition of the notch. If you want to define the notch so that a is equal to 1 today, then you would care what the constant is. If you're willing to just leave the definition of the notch arbitrary, then you don't care what the constant is. And that's the case that I'll be doing, actually. I will not define a to be 1 today. The definition of the notch is just arbitrary as far as the equations that I'll be writing. And therefore, it will be sufficient to know that a is just proportional to t to the 2/3 for the flat universe case, for the critical density case. And that's where we'll stop today. And this actually pretty really covers everything through lecture notes three, which is the same material that will be covered on the quiz next week. [INAUDIBLE] does not seem to have shown up, but we'll assume that probably the review session will be next Monday night at 7:30. I will check it out and get back to you by email.
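A quick numerical check of two results quoted in this part of the lecture is sketched below in Python: the critical density for h0 = 0.67, and the t to the 2/3 growth of a in the flat case. The megaparsec-to-centimeter conversion and the proton mass are standard values not quoted in the lecture, and the loop is a simple forward-Euler integration, so the numbers are approximate.

```python
# Check 1: rho_c = 3 H^2 / (8 pi G) for h0 = 0.67, in CGS units.
# Check 2: integrating a_dot^2 = C/a gives a growing like t^(2/3).
import math

# --- 1. Critical density ---
G = 6.672e-8                # Newton's constant, cm^3 g^-1 s^-2 (value quoted in the lecture)
h0 = 0.67                   # dimensionless Hubble parameter (Planck value quoted in the lecture)
km, Mpc = 1.0e5, 3.086e24   # cm; the Mpc-to-cm conversion is a standard value, not from the lecture
H0 = 100.0 * h0 * km / Mpc  # Hubble expansion rate in s^-1

rho_c = 3.0 * H0**2 / (8.0 * math.pi * G)    # critical density in g/cm^3
m_proton = 1.673e-24                          # proton mass in g (standard value, not from the lecture)
protons_per_m3 = rho_c / m_proton * 1.0e6     # 1 m^3 = 1e6 cm^3
print(f"rho_c = {rho_c:.2e} g/cm^3")          # ~8.4e-30, matching the number quoted above
print(f"      ~ {protons_per_m3:.1f} proton masses per cubic meter")   # ~5

# --- 2. Flat-case scaling: a(t) proportional to t^(2/3) ---
C = 1.0                       # the constant in a_dot^2 = C/a; its value does not affect the power law
a, t, dt = 1.0e-4, 0.0, 1.0e-5
a_at = {}
while t < 2.0:
    a += math.sqrt(C / a) * dt    # forward-Euler step of da/dt = sqrt(C/a)
    t += dt
    for t_mark in (1.0, 2.0):
        if t_mark not in a_at and t >= t_mark:
            a_at[t_mark] = a
slope = math.log(a_at[2.0] / a_at[1.0]) / math.log(2.0)
print(f"exponent between t=1 and t=2: {slope:.3f}  (expected 2/3 = {2.0/3.0:.3f})")
```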
MIT_8286_The_Early_Universe_Fall_2013
6_The_Dynamics_of_Homogeneous_Expansion_Part_II.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: OK, in that case, let's get going. I want to begin by just summarizing where we left off last time, because we're still to some extent in the middle of the same discussion. We were talking about the gravitational effects of a completely homogeneous universe that fills all of space. And you recall that Newton had concluded that such a system would be stable. But I was arguing that such a system would not be stable, even given the laws of Newtonian mechanics. And we discussed a few of those arguments last time. We discussed, for example, Gauss's law formulation of Newton's law of gravity. And I remind you that it's a very simple derivation to go from Newton's law of gravity, as Newton stated it as an acceleration at a distance, to Gauss's law. What you do is you just show that if the acceleration of gravity is given by Newton's law, then for any one particle creating such a gravitational field, Gauss's law holds. This integral is either equal to 0 or minus 4 pi GM, depending on whether the surface that you're integrating over encloses or does not enclose the charge or the mass. And once you know it for one mass, Newton tells us that for many masses you just add the forces as vectors. And that means that you'll be adding up these integrals for each of the particles. And you're led automatically to the expression we have here. So really, it follows very directly from Newton's law of gravity. On the other hand, if we apply this formulation to an infinite distribution of mass, if Newton was right and there were no forces, that would mean that little g, the acceleration of gravity, would be 0 everywhere. And then, the integral on the left would be 0. But the term on the right is clearly not 0, if we have a volume with some nontrivial size that includes some mass. So this formulation of Newton's law of gravity clearly shows that an infinite distribution of mass could not be static. Further, I showed you there's another formulation, more modern, of Newton's law of gravity in terms of what's called Poisson's equation. This was really shown for the benefit of people who know it. If you don't know it, don't worry about it. We won't need it. But it's another way of formulating the law of gravity by introducing a gravitational potential, phi, and writing the acceleration of gravity as minus the gradient of phi. That just defines phi. And then, you can show that phi obeys Poisson's equation, del squared on phi is equal to 4 pi G times rho, where rho is the mass density. And again, one can see immediately that this does not allow a static distribution of mass. If the distribution of mass was static, that would mean that g vector was 0. That would mean that gradient phi was 0. That would mean that phi was a constant. And if phi is a constant, del squared phi is 0, and that's inconsistent with the Poisson equation. I might further add-- I don't think I said this last time-- that from a modern perspective, equations like Poisson's equation are considered more fundamental than Newton's original statement of the law of gravity as an action at a distance.
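In symbols, the two formulations being recalled here are the following (the surface integral is over any closed surface S, M sub enc is the mass enclosed by that surface, and rho is the mass density):

```latex
% Gauss's-law form of Newtonian gravity, and the Poisson-equation form.
% A static, uniform, infinite mass distribution would need g = 0 everywhere,
% which contradicts both equations wherever the enclosed mass (or rho) is nonzero.
\oint_{S} \vec{g}\cdot d\vec{A} = -4\pi G\, M_{\mathrm{enc}},
\qquad
\vec{g} = -\vec{\nabla}\phi,
\qquad
\nabla^{2}\phi = 4\pi G\,\rho .
```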
In particular, when one wants to generalize Newton's law, for example, to general relativity, Einstein started with Poisson's equation, not with the force law at a distance. And there's nothing like the force law at a distance and the theory of general relativity. General relativity is formulated in a language very similar to the language of Poisson's equation. The key idea that underlies this distinction is that all the laws of physics that we know of can be expressed in a local way. Poisson's equation is a local equation. That's just a differential equation that holds at each point in space and doesn't say anything about how something at one point in space affects something far away. That happens as a consequence of this equation. But it happens as a consequence of solving the equation. It's not built into the equation to start with. Continuing. We then discussed what goes on if one does just try to add up the forces using Newton's law and action at a distance. And what I argued is that it's a conditionally convergent integral, which means it's the kind of integral which has the property that it converges. But it can converge to different things depending on what order you add up the different parts of the integral. And we considered two possible orderings for adding up the mass. In all cases, what we're talking about here is just a single location, P. And you can't tell the slide is filled with mass, but think of that light cyan as mass. It fills the entire slide and fills the entire universe in our toy problem here. So we're interested in calculating the force on some point, P, in the midst of an infinite distribution of mass. And the only thing that we're going to do differently in these two calculations is we're going to add up that mass in a different order. And if we add up the mass ordered by concentric shells about P, each consecutive shell clearly contributes 0 to the force at P. And therefore, the sum in the limit, as we go out to infinity, will still be 0. So for this case, we get g equals 0. We do get no acceleration at the point, P, as long as we add up all the masses in that way. But there's nothing in Newton's laws that tell us what order to add up the forces. Newton just tells you that each mass creates a 1 over r squared force, and it's a vector. And Newton says to add the vectors. Normally, addition of vectors commutes. It doesn't matter what order you add them in. But what we're going to be finding here is that it does matter what order. And therefore, the answer is ambiguous. And to see this, we'll consider a different ordering. Instead-- we'll still use spherical shells, because that's the easiest thing to think about. We could try to do something else, but it's much harder to use any other shape. But this time, we'll consider spherical shells that are centered around a different point. And we'll call the point that the spherical shells are centered around, Q. We're still trying to calculate the force at the point, P, due to the infinite mass distribution that fills space-- so we're doing the same problem we did before-- but we're going to add up the contributions in a different order. And we discussed last time that when we do that, it turns out that all the mass inside the sphere, centered at Q out to the radius of P, contributes to the acceleration at P. And all mass outside of that, could be divided into concentric shells, where the point, P, is inside. And there's no force inside a concentric shell. So all of the rest of the mass contributes zilch. 
And the answer you get is then simply the answer that you would have for the force of a point mass located at Q, whose mass was the total mass in that shaded region. And clearly it's nonzero. And furthermore, clearly, we can-- by choosing different points, Q-- make this anything we want. We can make it bigger by putting the point, Q, further away; and it always points in the direction of Q. Yeah, points in the direction of Q. So we can let it point in any direction we want by putting the point, Q, anywhere we want. So depending on how we add up the contributions, we can get any answer we want. And that's a fundamental ambiguity in trying to apply Newton using only the original statement of the law of gravity as Newton gave it. OK. So the conclusion, here, is that the action at a distance description is simply ambiguous. Descriptions by Gauss's law, or Poisson's law, tell us that the system cannot be static. And we'll soon try to figure out exactly how we do expect it to behave. Now, I still want come back to one argument, which was really the argument that persuaded Newton in the first place. Newton said that if we want to calculate the acceleration of a certain point in this infinite mass distribution, we have a symmetry problem. All directions from that point looking outward look identical. If there's going to be an acceleration acting on any point, what could possibly determine the direction that the acceleration will have. So that's the symmetry argument, which is a very sticky one. It sounds very convincing, in Newton's reasoning. There could be no acceleration, simply because there's no preferred direction for the acceleration to point. To convince Newton that that's not a valid argument would probably be hard. And I don't know if we could succeed in convincing him or not. Don't get the chance to try. But if we did have a chance to try, what we would try to explain to him is that the acceleration is usually measured relative to an inertial frame. That's how Newton always described it. And to Newton, there was a unique inertial frame-- unique up to changes in velocity-- determined by the frame of the fixed stars. That was the language that Newton used. And that defined his inertial frame. And all of his laws of physics were claimed to hold in this inertial frame. On the other hand, if all of space is filled with matter and it's all going to collapse as we're claiming, there isn't any place to have any fixed stars. So the whole idea of an inertial frame really disappears. There's no object, which one can think of as being at rest or being non-accelerating with respect to any would-be inertial frame. So in the absence of an inertial frame, one really has to admit that all accelerations, just like all velocities, have to be measured as relative accelerations. We could talk about the acceleration of one mass relative to another. But we can't talk about what the absolute acceleration is of a given mass, because we don't have an inertial frame with which to compare the acceleration of that object. So when all accelerations are relative, then there are more options here. And it turns out that the right option-- the one that we'll eventually deduce-- is an option that looks similar to Hubble's law. Hubble's law is a law about velocities. And it says that from the point of view of any observer, all the other objects will look like they're moving radially outward from that observer. 
And in spite of the fact that that description makes it sound, emphatically, like the observer you're talking about is special, you can transform to the frame of any other observer and thus seeing exactly the same thing. So having one observer seeing all the other velocities moving outward from him does not violate homogeneity. It does not violate any of the symmetries that we're trying to incorporate in the system. And the same thing is true for acceleration. So I'm not going to try to show it now. We will be showing it in the course of our upcoming calculations. But in the kind of collapsing universe that we're going to be describing, any observer can consider himself or herself to be non-accelerating. And then, that observer would see all the other particles accelerating directly towards her. And although that description makes it sound like the person at the center is special, it's not true. You can transform to any other observer's frame. And each observer can regard himself as being non-accelerating and would see all the other objects accelerating radially towards him. OK. So now we are ready to go on and try to build a mathematical model, which will tell us how a uniform distribution of mass will behave. Now in doing this, we first would like to tame the issue of infinities. And we are going to do that by starting out with a finite sphere. And then at the very end, we'll let the size of that sphere go to infinity. So our goal is to build a mathematical model of our toy universe. And what we want to do is to incorporate the three features that we discussed, isotropy, homogeneity, and Hubble's law. And we're going to build this as a mechanical system, using the laws of mechanics as we know them. We're, in fact, going to be using a Newtonian description. But I will assure you that although we are using a Newtonian description, the answer that we'll get will in fact be the exact answer that we would have gotten with general relativity. And we'll talk later about why that's the case. But we're not wasting our time doing only an approximate calculation. This actually is a completely valid calculation, which gives us, in the end, entirely the correct answer. So we're going to model our universe by introducing a coordinate system. I'm now just going to really re-draw the picture that's up there. But if I draw it down here, I can point to it better. We're going to imagine starting at our toy universe as a finite sized sphere of matter. And we're going to let t sub i be equal to the time of the initial picture. t sub i need not be particularly special in any way, in terms of the life of our universe. Once we construct the picture, we'll be able to calculate how it would behave-- at times later than ti and at times earlier than ti. ti is just where we are starting. At time, ti, we will give our sphere a maximum size. I'm calling it maximum, because I'm thinking of this as filled with particles. It's, therefore, the maximum radius for any particle. It's just the radius of the sphere. So R sub max i is just the initial radius. An initial means at time, t sub i. We're going to fill the sphere with matter. And we'll think of this matter as being a kind of a uniform fluid, or a dust of very small particles, which we can also think of as a fluid. And it will have a mass density, rho sub i. OK. So it's already homogeneous and isotropic, at least isotropic about the center. And now, we want to incorporate Hubble's law. 
So we're going to start all of this matter, in our toy system here, expanding and expanding in precisely the pattern that Hubble's law requires-- namely, all velocities will be moving out from the center with a magnitude proportional to the distance. So if I label a particle by-- I'm sorry, v sub i is just for initial. For any particle-- there's no way we're indicating a particle-- at the initial time, the velocity will just obey Hubble's law. It will be equal to some constant, which I'll call H sub i-- the initial value of the Hubble expansion rate-- times the vector r, which is just a vector from the origin to the particle. That tells us where the particle is that we're talking about. So v sub i is the initial velocity of any particle. H sub i is the initial Hubble expansion rate. And r is the position of the particle. OK. As I said, we're starting with a finite system, which is completely under control. We know, unambiguously, how to calculate-- in principle at least-- how that system will evolve, once we set up these initial conditions. And at the end of the calculation, we'll take the limit as R max i goes to infinity. And that, we hope, will capture the idea that this model universe is going to fill the infinite space. Now, I did want to say a little bit here-- only because it's something which has entered my scientific life recently and in interesting ways. I would say a few things about infinities. Now, this is an aside, which means that if you're struggling to understand what's going on in the course, you can ignore this and don't worry about it. But if you're interested in thinking about these concepts, the concept of infinity has sort of struck cosmology in the nose in the context of the multiverse, which I spoke a little bit about in the overview lecture, and which we'll get back to at the end of the course. The multiverse has forced us to think much more about infinities than we had previously. And in the course of that, I learned some things about infinity that in fact surprised me. For the most part, in physics, we think of infinities as the limit of finite things, as we're doing over here. So if we want to discuss the behavior of an infinite space, we very frequently in physics start by discussing a finite space, where things are much easier to control mathematically. And then we take the limit as the space gets bigger and bigger. That works for almost everything we do in physics. And I would say that the reason why it works is because we are assuming fundamentally that physical interactions are local. Things that are vastly far away don't affect what happens here. So as we make this sphere larger and larger, we'll be adding matter at larger and larger radii. That new matter that we're adding is not going to have much effect on what happens inside. And in fact, for this problem, we'll soon see that the extra matter we had on the outside has no effect whatever on what happens inside, related to this fact that the gravitational field inside a shell of matter is 0. So that's a typical situation and gives physicists a very strong motivation to always try to think of infinities as the limit of finite systems. What I want to point out, however, is that that's not always the right thing to do. And there are cases where it's, in fact, emphatically the wrong thing to do. Mathematicians know about this, but physicists tend not to. So I want to point out that not all infinities are well-described as limits of finite systems. I don't want to call this into question.
This, I think, is absolutely solid. And we will continue with it after I go off on this side discussion. But to give you an example of a system which is infinite and not well-described as a limit of finite systems-- this is a mathematical example-- we could just think of the set of positive integers, also called the natural numbers. And I'll use the letter that's often used for it, N with an extra line through it to make it super bold or something like that. So that's the set of integers. And the question is, suppose we consider trying to describe the set of positive integers, as a limit-- a finite set. So we could think about the limit from N goes to infinity-- as N goes to infinity-- of the natural numbers up to N. So we're taking sets of integers and taking bigger and bigger sets and trying to take the limit, and asking, does that give us the set of integers? One might think the answer is yes. What I will claim is that this is definitely not equal to the set of integers. And in fact, I'll claim that the limit does not exist at all, which is why it couldn't possibly be equal to the set integers. And to drive this home, I just need to remind us what a limit means. And since this is not a math course, I won't give a rigorous definition. But I'll just give you an example that will strike bells and things that you learned in math classes. Suppose we want to talk about the limit as x goes to 0 of sine x over x. But we all know how to do that. It's usually called L'Hopital's rule or something. But you could probably just use the definition of a limit and get it directly. That limit is 1. And what we mean when we say that, is that as x gets closer and closer to 0, we could evaluate this expression for any value of x not equal to 0-- for 0, itself, it's ambiguous. But for any value of x not equal to 0, we can evaluate this. And as x gets closer and closer to 0, that evaluation gives us numbers that are closer and closer to 1. And we can get as close to 1 as we want by choosing x's as close to 0 as necessary. And that's usually phrased in terms of epsilons and deltas. But I don't think I need that here. The point is that, the limit is simply the statement that this can be made as close to 1 as you like by choosing x as close as possible to the limit point, 0. Now, if you imagine applying the same concept to this set-- the set of integers from 1 to N-- the question is, as you make N large, does it get closer to the set of all integers? Are the numbers from 1 to 10 close to the set of all integers? No. What about 1 to a million? Still infinitely far away. 1 to a billion? 1 to a google? No matter what number you pick as that upper limit, you're still infinitely far away from the set of all integers. You're not coming close. So there's no sense in which this converges to the set of all integers. It's just a different animal. Now does this make a difference? Are there any questions where it matters whether you think of the integers as being defined in some other way, or by this limit? Or maybe first say how it is defined. If you ask mathematicians how you define the set of integers, I think all of them will tell you, well, we used the Peano axioms. And the key thing in the Peano axioms-- if you look at them-- that controls the fact that there's an infinite number of integers, is the successor axiom. That is, built into these Peano axioms that describe the integers mathematically is a statement that every integer has a successor. 
And then, there are other statements that guarantee that the successor is not one of the ones that's already on the list. So you always have a new element as the successor to your highest element so far. And that guarantees from the beginning, the set of axioms is infinite. It's not thought of as the limit of finite sets and cannot be thought of as the limit of finite sets. Because, no finite set is, in any way, resembling an infinite set. So does it matter? Are there questions where we care whether we could describe the integers this way or not? And the only questions I know sound kind of contrived, I'll admit. But I also point out that in mathematics, the word, contrived doesn't cut much ice. If you discover a contradiction in some system, nobody's going to tell you whether you should ignore that contradiction because it's contrived. If it's really a contradiction, it counts. So a question where it makes a difference, whether you think of the integers as being defined as infinite from the start, or whether you think of it-- or try to think of it-- as a limit of this sort, is a question such as, what fraction of the integers are so large that they cannot be doubled and still be an integer? And notice that, if we consider this set, for any N, no matter how large N is, we would conclude that half of the integers are so large that they cannot be doubled. And that would hold no matter how large we made N. On the other hand, if we look at the actual set of integers, we know that any integer can be doubled and you just get another integer. So that's an example of a property of the integers, which you would get wrong if you thought that you could think of the integers as this kind of a limit. So you really just can't. Any questions about that? Subtle point, I think. AUDIENCE: How does that relate to-- PROFESSOR: To this? It's just a warning that you should be careful about treating infinity as limits of finite things. That's all it is. It does not relate to this. That's why I said it was an aside when I started. Aside means something worth knowing but not directly related to what we're talking about. OK. So we're going to proceed with this model. Let me mention one other feature of the model to discuss a little bit, the shape that we started with. We're starting with a sphere. You might ask, why a sphere? Well, a sphere is certainly the simplest thing we could start with. And a sphere also guarantees isotropy-- or at least isotropy about the origin. Isotropy about one point. We could, by doing significantly more work, have started, for example, with a cube and let the cube get bigger and bigger. And as the cube got bigger and bigger, it would also fill all space. And we might think that would be another way of getting the same answer. And that would be right. If we did it with a cube, it would be a lot more work. But we would, in fact, get the same answer. The cube has enough symmetry, So it, in fact, will be the same as sphere in this case. I'm not going to try to tell you how to calculate it with an arbitrary shape. But I will guarantee you that a cube will get you the same answer. On the other hand, if we started with a rectangular solid, where the three sides were different-- or at least not all equal to each other-- then would be starting with something which is asymmetric in the first place. Some direction would be privileged. 
And then, if we continued-- as we're going to be doing for our sphere-- starting with this irregular rectangular solid, we would build in an anisotropy from the very beginning. We would end up with an anisotropic model of the universe. So since we're trying to model the real universe, which is to a high degree isotropic, we will start with something which guarantees the isotropy. And the sphere does that. And it's the simplest shape which does that. OK. Now, we're ready to start putting some dynamics into this model. And the dynamics that we're going to put in for now will just be purely Newtonian dynamics. And in fact, we'll be modeling the matter that makes up this sphere as just a dust of Newtonian particles-- or if you like, a gas of Newtonian particles. These particles will be nonrelativistic-- as implied by the word Newtonian. And that describes our real universe for a good chunk of its evolution but not for all of it. So let me-- before we proceed-- say a few words about the real universe and the kind of matter that has dominated it during different eras of evolution. In the earliest time, our universe, we believe, was radiation dominated. And that means that, if you follow the evolution of our universe backwards in time, as one goes to earlier and earlier times, the photons that make up the cosmic background radiation blue shift. We've learned that they red shift as the universe expands. That means if we extrapolate backwards, they blue shift. They get more energetic for every photon. But the number of photons remains constant, essentially, as one goes backwards in time. They just get squeezed into a small volume. And they become more energetic. Now, meanwhile, the particles of ordinary matter-- and dark matter as well, the protons and whatever the dark matter is made of-- also get squeezed, as you go backwards in time. But they don't become more energetic. They remain just-- a proton remains a particle whose energy is just the mass of a proton times c squared. So as you go backwards, the energy density in the radiation-- in the cosmic microwave background radiation-- gets to be larger and larger compared to the energy density in the matter. And later, we'll learn how to calculate this. But the two cross at about 50,000 years. Yes. AUDIENCE: If we think of particles as waves, then how does that make sense that they wouldn't change? PROFESSOR: Right. If we think of particles like waves, how does it make sense that they don't change? The answer is, they do a little bit. But we're going to be assuming that these particles have negligible velocities. What happens is their momentum actually does blue shift. But a blue shift is proportional to the initial value. And if the initial value is very small, even when it blue shifts, the momentum could still be negligible. So for the real universe, between time, 0, and about 50,000 years, the universe was radiation dominated. And we'll be talking about that in a few weeks. But today, we're just ignoring that. Then, from this time of about 50,000 years until about 9 billion years-- so a good chunk of the history of the universe-- the universe was what is called matter dominated. And matter means nonrelativistic matter. That's standard jargon in cosmology. When we talk about a matter-dominated universe, even though we don't use the word nonrelativistic, that's what everybody means by a matter dominated universe. And that's the case we're going to be discussing, just ordinary nonrelativistic matter filling space.
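The scaling argument in this passage can be made concrete with a short sketch. Going backward in time, each photon's energy grows as 1 over a while the photon number is fixed, so the radiation energy density grows as 1 over a to the fourth; nonrelativistic particles keep an essentially fixed energy, so the matter density grows only as 1 over a cubed. The present-day radiation-to-matter ratio used below is an illustrative made-up number, not a value quoted in the lecture, and the function name is invented for the example.

```python
# Ratio of radiation to matter energy density as a function of scale factor a,
# taking a = 1 today: rho_r/rho_m ~ (1/a^4)/(1/a^3) = 1/a.
ratio_today = 3.0e-4          # assumed present-day (radiation density)/(matter density)

def radiation_over_matter(a):
    """Radiation-to-matter density ratio at scale factor a (a = 1 today)."""
    return ratio_today / a

for a in (1.0, 1e-1, 1e-2, 1e-3, 1e-4):
    r = radiation_over_matter(a)
    tag = "radiation dominated" if r > 1 else "matter dominated"
    print(f"a = {a:7.0e}: rho_r/rho_m = {r:9.2e}  -> {tag}")
# With this normalization the crossover sits at a ~ 3e-4, i.e. far in the past,
# consistent with the early matter-radiation equality described above.
```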
And then for our real universe, something else happened at about 9 billion years in its history up to the present-- and presumably in the future as well-- is the universe became dark energy dominated. And dark energy is the stuff that causes the universe to accelerate. So the universe has been accelerating since about 9 billion years from the Big Bang. Now, I think we mentioned-- I'll mention it again quickly-- there's no conversion of ordinary matter to dark energy here, which you might guess from this change in domination. It's just a question of how they behave as the universe expands. Ordinary matter has a density, which decreases as 1 over the cube of the scale factor. You just have a fixed number of particles scattering out over a larger and larger volume. The dark energy-- for reasons that we will come to near the end of the course, in fact, does not change its energy density at all as the universe expands. So what happened 9 billion years ago was simply that the density of ordinary matter fell below the density of dark energy. And then, the dark energy started to dominate and the universe started to accelerate. Today, the dark energy, by the way, is about 60% or 70% of the total. So it doesn't dominate completely. But it's the largest component. OK. For today's calculation, we're going to be focusing on this middle period and pretending it's the whole story. And we will come back and discuss these other eras. We're not going to ignore them. But we will not discuss them today. So we will be discussing this case of a matter-dominated universe. And we're going to be discussing it using Newtonian mechanics. And in spite of the fact that we're using Newtonian mechanics, I will assure you-- and try to give you some arguments later-- it gives exactly the same answer that general relativity gives. OK. So how do we proceed? How do we write down the equations that describe how this sphere is going to evolve? We're going to be using a shell picture. That is, we will describe that sphere in terms of shells that make it up. So in other words, we will divide the matter at the initial time. And then, we'll just imagine labeling those particles and following them. So at the initial time, we're going to divide the matter into concentric shells. So each shell extends from some value of r sub i-- the initial radius-- to r sub i plus dr sub i. Our shells will be infinitesimal in their thickness. And each shell will have a different r sub i. And we could let all the dr sub i's be the same, if we want. We can let each shell be the same thickness. We can let them be different. It doesn't matter. Now a reason why we can get away with thinking of the matter purely has being described by shells is that we know that we started with all of the velocities radial. That Hubble's law that we assumed says that the velocities were proportional to the radius vector, which is measured from the origin. So all our initial velocities were radial outward. And furthermore, if we think about how the Newtonian force of gravity will come about for this spherical object, they will all be radial as well. So all the motion will remain radial. There will never be any forces acting on any of our particles that will push them in a tangential direction. So all motion will be radial. And as long as we keep track of the radius of each particle, its angular variables, theta and phi-- which I will never mention again-- will just be constant in time. So we will not need to mention them. 
So all motion is radial, because the v's start radial. And there are no tangential forces, where tangential means any direction other than radial. So we're just saying that all the forces are radial. And that means the motion will stay radial. To describe the motion, I want to give a little bit more teeth to this statement that we're going to be describing it in terms of shells. We're going to describe the motion by a function, r, which will be a function of two variables, r sub i and t. And this is just the radius at time, t, of the shell that was at radius, r sub i, at time, t sub i. So each shell is labeled by where it is at time, t sub i. And once we label it, it keeps that label. But then, we'll talk about this function, r, which tells us where it is at any later time. And we could also think about earlier times if we want. So r of r sub i, t is the radius of the shell that started at r sub i, where we're looking at the radius at time, t. Now, I should warn you that if you look in any textbook, you will see a simpler derivation than what I'm about to show you. So you might wonder why am I going to so much trouble when there's an easier way to do it. And the answer is that the calculation I show you will, in fact, show you more than what's in the textbooks-- Ryden's textbook, for example. What most textbooks do-- including Ryden, I believe-- is to assume that this motion will continue to obey Hubble's law and continue to be completely uniform in the density. And then, you could just calculate what happens to the outside of the sphere. And that governs everything, if you assume that it will stay uniform. We are not going to assume that it will stay uniform. We will show that it will stay uniform. And from my point of view, it's a lot better to actually show something than to just guess it and write about it, without showing it. So that's why we're going to do some extra work. We will actually show that the uniformity, once you put it in, is preserved over time under the laws of Newtonian evolution. I guess I'll leave the picture. Maybe, I'll leave all of that. OK. Now, there's an issue that's a little bit complicated that I'll try to describe. And again, this is a subtlety that's probably not mentioned in the books. We have these different shells that are evolving. And we can calculate the force on any given shell if we know the matter that's inside that shell. That's what it tells us. If everything is in shells, the shells have no effect on the matter that is inside the shell. So if we want to know the effect on a given shell, we only need to know the matter that's inside that shell. The matter outside exerts no force. So it's very important that we know the ordering of these shells. Now, initially, we certainly do. They're just ordered according to r sub i. But once they start to move around, there's, in principle, the possibility that shells could cross. And if shells crossed, our equations of motion would have to change, because then different amounts of mass would be acting on different shells. And we'd have to take that into account. Fortunately, that turns out not to be a problem. It just sounds like it could be a problem. And the way we're going to treat it is to recognize that, initially, all these shells are moving away from each other, because of the Hubble expansion. If Hubble's law holds, any two particles are moving away from each other with a relative velocity proportional to their distance. So that holds for any two shells, as well.
So if shells are going to cross, they're certainly not going to cross immediately. There are no two shells that are approaching each other initially. All shells are moving apart initially. This could be turned around by the forces-- and we'll have to see. But what we can do is, we can write down equations that we know will hold, at least until the time where there might be some shell crossings. That is, we'll write down equations that will hold as long as there are no shell crossings. Then, if there was going to be a shell crossing, the equations we write down would have to be valid right up until the time of that shell crossing. And, therefore, the equations that we're writing down would have to show us that the shells are going to cross. The shells can't just decide to cross independently of our equations of motion. And what we'll find is that our equations will tell us that there will be no shell crossings. And the equations are valid as long as there are no shell crossings. And I think if you think about that, that's an airtight argument, even though we're never going to write down equations that will tell us what would happen if shells did cross. OK. So we're going to write equations that hold as long as there are no shell crossings. OK. As long as there are no shell crossings, then the total mass inside of any shell is independent of time. It's just the shells that are inside it. So the shell at initial radius, r sub i, even at later times, feels the force of the mass inside. And we can write down a formula for that mass inside. The mass inside the shell, whose initial radius is r sub i, is just equal to the initial volume of that sphere, which is 4 pi over 3 r sub i cubed, times the initial mass density, rho sub i. So that's how much mass there is inside a given shell when the system starts. And that will continue to be exactly how much mass is inside that shell as the system evolves, unless there are shell crossings. And our goal is simply to write down equations that are valid until there might be a shell crossing. And then, we can ask whether or not there will be any shell crossings. OK, now, Newton's law tells us how to write down the acceleration of an arbitrary particle in this system. Newton's law tells us the acceleration will be proportional to negative-- I'll put a radial unit vector at the end, r hat. So its direction, minus r hat, will be the direction of the force on any particle. And the magnitude, it will be Newton's constant times the mass enclosed in the sphere of radius, r sub i, divided by the square of the distance of that shell from the origin. And that's exactly what we called this function, r of ri, t. It's the radius of the shell at any given time. So it's r squared of r sub i and t. And then as I promised you, there's a unit vector, r hat, at the end. So the force is-- and the acceleration, therefore,-- is in the negative r hat direction. So this holds for any shell, where which shell we're talking about is indicated by this variable, r sub i. r sub i tells us the initial position of that shell. So is everybody happy with this equation? This is really the crucial thing. Once you write this down, everything else is really just chugging along. So everybody happy with that? It's just the statement from Newton, that if all the masses are arranged spherically symmetrically, the mass inside any shell-- excuse me, the force due to the mass that's on a shell that's at a larger radius than you are, produces no acceleration for you.
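Written out, the two formulas being pointed to in this passage are the following, in the lecture's notation, with r(r_i, t) the radius at time t of the shell that started at radius r_i:

```latex
% Mass enclosed by the shell that started at r_i (constant in time as long as
% shells do not cross), and the Newtonian acceleration it produces:
M(r_i) = \frac{4\pi}{3}\, r_i^{3}\, \rho_i ,
\qquad
\vec{g} = -\,\frac{G\,M(r_i)}{r^{2}(r_i,t)}\;\hat{r} .
```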
The only acceleration you feel is due to the masses at smaller radii. And that's what this formula says. OK, good. Now, what I want to do is-- this is a vector equation-- but we know that we're just taking these spherical motions. And all we really have to do is keep track of how r changes with time, little r. So we can turn this into an ordinary differential equation with no vectors for this function, little r. The acceleration has a magnitude, which is just, r double dot. And that is then equal to minus-- and we write again what M of ri is from this formula. So I'm going to write this as minus 4 pi over 3 G times r sub i cubed rho sub i-- coming from that formula-- divided by r squared. And I'm going to stop writing the arguments. But this r is the function of r sub i and t. But I will stop writing its arguments. And the two dots means derivative with respect to time. Yes. AUDIENCE: I guess I don't understand how come we would, calculating the mass, we use r sub i, which is the radius at some initial time, ti. But then, when we're saying the distance from that mass to our point, we use this function, r, instead of r sub i. PROFESSOR: Right. An important question. The reason is that we're using the initial radius, but we're also using the initial density. So this formula certainly gives us the mass that's inside that circle at the beginning. And then, if there's no shell crossings, all these circles move together. The mass inside never changes. So it's the same as it was then. While, the distance from the center of that mass does change, as the radius changes. So the denominator changes, the numerator does not. Yes. AUDIENCE: So, the first equation, g, that's the gravity-- PROFESSOR: g is the acceleration of gravity. AUDIENCE: So, why does it have-- it has units of-- no, it doesn't. Sorry. PROFESSOR: Yes, it should have units of acceleration, as our double dot should have units of acceleration. Hopefully, they do. OK. Now, as the system evolves, r sub i is just a constant-- different for each shell, but a constant in time. And imagine solving this one shell at a time. Rho sub i is, again, a constant. It's the value of the mass density at the initial time. So it's going to keep its value. So the differential equation involves a differential equation in which little r changes with time, and nothing else here does. So this is just a second order differential equation for little r. So, we're well on the way to having a solution, because second order differential equations are, in principle, solvable. But the one thing that you should all remember about second order differential equations is that in order to have a unique solution, you need initial conditions. And if it's a second order equation-- as Newton's equations always are-- you need to be able to specify the initial position and the initial velocity before the second order equation leads to a unique answer. So that's what we want to do next. We want to write down the initial value of r and the initial value of r dot. And then, we'll have a system, which is just something we can turn over to a mathematician. And if the mathematician is at all smart, he'll be able to solve it. So initial conditions. So we want the initial value of r sub i-- and initial means a time, t sub i. That's where we're setting our initial conditions. So we need to know what r of r sub i, t sub i is, which is a trivial thing if you keep track of what this notation means. What is that? r sub i, good. 
The meaning of this is just the radius at time, t sub i, of the particle, whose radius at time, t sub i, was r sub i. So to put together all those tautologies, the answer is just, obviously, r sub i. OK. And then we also want to be able to solve this equation and have a unique solution-- we want the initial value of r sub i dot. And initial means, again, at time, t sub i. And that's specified by our Hubble expansion. Every initial velocity is just H sub i times the radius. So if the radius is r sub i, this is just H sub i times r sub i. That's just the Hubble velocity that we put in to the system when we started it. OK. So now, we have a system, which is purely mathematical. We have a second order differential equation and initial conditions on r and r dot. That leads to a unique solution. Now it's pure math. No more physics, at least at this stage of the game. However, there are interesting things mathematically that one can notice about this system of equations. And now what we're going to see is the magic of these equations for preserving the uniformity of this system. It's all built into these equations. And let's see how. The key feature that's somewhat miraculous that these equations have is that r sub i can be made to disappear by a change of variables. I'll tell you what that is in a second. It also helps if you know how to spell "disappear." We're going to define a new function, u. I just made up a letter there. Could have called it anything. But u of r sub i and t is just going to be defined to be r of r sub i and t divided by r sub i. Given any function, r of r sub i and t, I can always define a new function, which is just the same function divided by r sub i. Now, let's look at what this does to our equations. My claim is that r sub i was going to disappear. And now, we'll see how that happens. OK, if u is defined this way, we can write down the differential equation that it will obey, by writing down an equation for u double dot. And u double dot will just be r double dot divided by r sub i. So we take the equation for r double dot over here and divide it by r sub i . So we can write that as minus 4 pi over 3 G-- for some reason, I decided to leave the r sub i cubed in the numerator for now-- and just put an extra r sub i in the denominator-- the one that we divided by-- times r squared. And now, what I'm going to do is just replace r . We're trying to write an equation for u. r is related to u by this equation. r is r sub i times u. So I can write this as minus 4 pi over 3 G r sub i cubed rho sub i over-- now we have-- u squared times r sub i cubed. I left the numerator alone. I just rewrote the denominator by placing r by r sub i times u. And we get this formula. And now, of course, the r sub i cubes cancel. And I think the reason why I kept it separated this way when I wrote my notes is at this shows very explicitly that what you have is a cancellation between an r sub i here, which was the power of r that appears in a volume-- r sub i cubed was just proportional to the volume of the sphere-- and r sub i cubed down here, where one of them came from just a change of variables, and the other was the r squared that appeared in the force law. So this cancellation depends crucially on the force law, being a 1 over r squared force law. If we had 1 over some other power of r-- even if it differed by just a little bit-- then the r sub i would not drop out of this formula. 
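In symbols, the substitution and the cancellation just described look like this -- the r sub i cubed in the numerator canceling against the one r sub i from the change of variables times the r sub i squared coming from the inverse-square force law:

```latex
\ddot{r} \;=\; -\,\frac{4\pi}{3}\,\frac{G\, r_i^{3}\, \rho_i}{r^{2}},
\qquad
u(r_i,t) \;\equiv\; \frac{r(r_i,t)}{r_i}
\quad\Longrightarrow\quad
\ddot{u} \;=\; -\,\frac{4\pi}{3}\,\frac{G\, r_i^{3}\, \rho_i}{r_i\,\bigl(r_i\, u\bigr)^{2}}
          \;=\; -\,\frac{4\pi}{3}\,\frac{G\, \rho_i}{u^{2}}\,.
```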
And we'll see in a minute that it's the dropping out of r sub i which is crucial to the maintenance of homogeneity as the system evolves. As it evolves, Newton tells us what happens. We don't get to make any further choices. And if Newton is telling us there's a 1 over r squared force law, then it remains homogeneous. But otherwise, it would not. And that's, I think, a very interesting fact. Continuing, we've now crossed off these r sub i's. And we get now a simple equation, u double dot is equal to minus 4 pi over 3 G rho sub i over u squared with no r sub i's in the equation anymore. And that means that this u tells us the solution for every r sub i. We don't have different solutions for every value of r sub i anymore. r sub i has dropped out of the problem. We have a unique solution independent of r sub i. And that means it holds for all r sub i. I'm sorry. I jumped the gun. The conclusions I just told you about are true, but part of the logic I haven't done yet. What part of the logic did I leave out? Initial conditions, I heard somebody mumble. Yes. Exactly. To get the same solution, we have to not only have a differential equation that's independent of r sub i, but we don't have a unique solution unless we look at the initial conditions. We better see if the initial conditions depend on r sub i. And they don't either. That's the beauty of it all. Starting with the r initial condition, the initial value of u of r sub i, t sub i, will just be equal to the initial value of r divided by r sub i. But the initial value of r is r sub i. So here, we get r sub i divided by r sub i, which is 1, independent of r sub i. The initial value of u is 1, for any r sub i. And similarly, we now want to look at u dot of r sub i and t. And that will just be r dot divided by r sub i, where we take the initial value of r dot. The initial value of r dot is Hi times r sub i. The r sub i's cancel. And the initial value of u dot is just H sub i. So now, we have justified the claim that I falsely made a little bit prematurely a minute ago. We have a system of equations, which can be solved to give us a solution for u, which is independent of r sub i, which means every value of r sub i has the same equation for u. That-- if you stare at it a little bit-- gives us a physical interpretation for this quantity, u. u, in fact, is nothing more nor less than the scale factor that we spoke about earlier. We have found that we do have a homogeneously expanding system. We started it homogeneously expanding, but we didn't know until we looked at the equation of motion that it would continue to homogeneously expand. But it does. And that means it can be described by a scale factor. We have u of r sub i, t. In principle, it was defined so it might have depended on r sub i. That's why I've been writing these r sub i's all along. But now, we discovered that the u's are completely determined by these equations, which have no r sub i's in them-- at least none that survived. We had an r sub i divided by r sub i, but that's not really an r sub i, as you know. It's 1. So u is independent of r sub i and can be thought of as just a function of t. And we can also change its name to a of t to make contact with the notion of a scale factor that we discussed last time. And we can see that the way u arises, in terms of how it describes the motion, r of r sub i and t is just equal to-- from this equation-- r sub i times u, which is now called a. So r of r sub i and t is equal to a of t times r sub i.
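Putting the pieces together -- the r-sub-i-free differential equation, the r-sub-i-free initial conditions, and the renaming of u as the scale factor a -- the system just derived is:

```latex
\ddot{a} \;=\; -\,\frac{4\pi}{3}\,\frac{G\,\rho_i}{a^{2}},
\qquad
a(t_i) = 1,
\qquad
\dot{a}(t_i) = H_i,
\qquad
r(r_i,t) \;=\; a(t)\, r_i\,.
```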
That's just another way of rewriting our definition that we started with of u. So what does that mean? These r sub i's are comoving coordinates. We've labeled each shell by its initial position, r sub i. And we've let that shell keep its label, r sub i, as it evolved. That's a comoving coordinate. It labels the particles independent of where they move. And what this is telling us is that the physical distance-- from the origin in this case-- is equal to the scale factor times the coordinate distance. That is the coordinate distance between the shell and the center of the system, the origin. OK. Any questions about that? OK. It's now useful to rewrite these equations in a few different ways. Let's see. First of all, we have written our differential equation, up here, in terms of rho sub i. And that was very useful because rho sub i is a constant. It doesn't change with time. Still, it's sometimes useful to write the differential equation in terms of the temporary value of rho, which changes with time, to see what the relationships are between physical quantities at a given time. And we could certainly do that, because we know what the density will be at any given time. For any shell, we can calculate the density as the total mass divided by the volume. We know this is going to stay uniform, because everything is moving together, when you just have motion, which is an overall scale factor that multiplies all distances. So the density will be constant. And we can calculate the density inside a shell by taking M of r sub i-- which we already have formulas for and is independent of time-- and dividing it by the volume inside the shell, which is 4 pi r cubed. And doing some substitutions, M of r sub i is just the initial volume times the initial density. So that is-- as we've written before-- 4 pi over 3 r sub i cubed times rho sub i. And then, we can write the denominator. r is equal to a times r sub i. The physical radius is the scale factor times the coordinate radius. So here, we have a cubed times r sub i cubed. And now, notice that almost everything cancels. And what we're left with is just rho sub i divided by a cubed. So that's certainly what we would've guess, I think. The density is just what it started but then divided by the cube of the scale factor. And our scale factor is defined as 1 at the initial time-- the way we've set up these conventions. So that is just the ratio of the scale factors cubed that appears in that equation. As the universe expands, the density falls off by 1 over the scale factor cubed. We can also now rewrite the equation for a double dot. a is u, so we have the equation up there. But we could write it in terms of the current mass density. Starting with what we have up there, it's minus 4 pi over 3 G rho sub i over a squared. Notice, that I can multiply numerator and denominator by a. And then, we have rho sub i over a cubed here, which is just the mass density in any given time. So I can make that substitution. And I get the meaningful equation that a double dot is equal to minus 4 pi over 3 G rho of t times a. So this equation gives the deceleration of our model universe in terms of the current mass density. And notice that it does, in fact, depend only on the current mass density. It predicts the ratio of a double dot over a. And that, you would expect to be what should be predicted, because remember, a is still measured in notches per meter. So the only way the notches can cancel out is if there is an a on both sides. 
Or if you could bring them all to one side and have a double dot divided by a. And then, the notches go away. And you have something which has physical units being related to something with physical units. OK. We said at the beginning that when we were finished, we were going to take the limit as R max initial goes to infinity. And lots of times when I present this, I forget to talk about that. And the reason I forget to talk about that is, if you notice, R max sub i doesn't appear in any of these equations. So taking the limit as R max sub i goes to infinity doesn't actually involve doing anything. It really just involves pointing out that the answers we got are independent of how big the sphere is, as long as everything we want to talk about fits inside the sphere. Adding extra matter on the outside doesn't change anything at all. So taking the limit as you add an infinite amount of matter on the outside-- as long as you imagine doing it in these spherical shells-- is a trivial matter. So the limit as R max sub i goes to infinity is done without any work. OK. I would like to go ahead now, and in the end, we're going to want to think about different kinds of solutions to this equation and what they look like. For today, I want to take one more step, which is to rewrite this equation in a slightly different way, which will help us to see what the solutions look what. What I want to do is to find a first integral of this equation. OK. To find a first integral, I'm going to go back to the form that we had on the top there, where everything is expressed in terms of rho sub i rather than rho. And the advantage of that for current purposes is that I really want to look at the time dependence of things. And rho has its own time dependence, which I don't want to worry about. So if I look at the formula in terms of rho sub i, all time dependence is explicit. So I'm going to write the differential equation. It's the top equation in the box there, but I'm replacing u by a, because we renamed u as a. And I'm going to put everything on one side. So I'm going to write a double dot plus 4 pi over 3 G rho sub i divided by a squared equals 0. OK, that's our differential equation. Now, it's a second order differential equation, like we're very much accustomed to from Newtonian mechanics-- as this is an equation which determines a double dot, the acceleration of a in terms of the value of a. A common thing to make use of in Newtonian mechanics is conservation of energy. In this case, I don't know if we should call this conservation of energy or not. We'll talk later about what physical significance the quantities that we're dealing with have. But certainly, as a mathematical technique, we can do the same thing that would have been done if this were a Newtonian mechanics problem, and somebody asked you to derive the conserved energy. Now, you might have forgotten how to do that. But I'll remind you. To get the conserved energy that goes with this equation, you put brackets around it. You could choose whether you want curly brackets or square brackets or just ordinary parentheses. But then, the important thing is that it's useful to multiply the entire equation by an integrating factor, a dot. And once you do that, this entire expression is a total derivative. This equation is equivalent to dE-- for some E that I'll define in a second-- dt equals zero, where E equals 1/2 a dot squared minus 4 pi over 3 G rho sub i over a. And you can easily check. If I differentiate this, I get exactly that equation. 
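You can also check this numerically. Below is a minimal sketch, not from the lecture, that integrates the equation for a with a velocity-Verlet step and confirms that E stays constant; the values of G, rho_i, and H_i are arbitrary placeholders, with H_i chosen large enough that this particular toy model keeps expanding over the short integration.

```python
import numpy as np

# Illustrative placeholder values only (no particular units implied).
G = 1.0        # Newton's constant
rho_i = 1.0    # initial mass density, rho sub i
H_i = 3.0      # initial Hubble rate; large enough here that E > 0 (an "open" case)

def accel(a):
    """a double dot = -(4 pi / 3) G rho_i / a^2, the boxed equation of motion."""
    return -(4.0 * np.pi / 3.0) * G * rho_i / a**2

def first_integral(a, adot):
    """E = (1/2) a dot squared - (4 pi / 3) G rho_i / a, the conserved quantity."""
    return 0.5 * adot**2 - (4.0 * np.pi / 3.0) * G * rho_i / a

a, adot = 1.0, H_i          # initial conditions: a(t_i) = 1, a dot(t_i) = H_i
dt = 1.0e-4
E_start = first_integral(a, adot)

# Velocity-Verlet (kick-drift-kick) integration; E should be preserved closely.
for _ in range(20000):
    adot_half = adot + 0.5 * dt * accel(a)
    a += dt * adot_half
    adot = adot_half + 0.5 * dt * accel(a)

print("E at start :", E_start)
print("E at finish:", first_integral(a, adot))
```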
So they're equivalent. That is, this is equivalent to that. So E is a conserved quantity. Now if one wants to relate this to the energy of something, there are different ways you can do it. One way to do it is to multiply by an expression, which I'll write down in a second, and to think of it as the energy of a test particle at the surface of this sphere. I'll show you how that works. I'm going to find something, which I'll call E sub phys-- or physical-- meaning, it's the physical energy of this hypothetical test particle. It's not all that physical. But it will just be defined to be m r sub i squared times E. m is the mass of my test particle. r sub i is the radius of that test particle expressed in terms of its initial value. And then, if we write down what E phys looks like, absorbing these factors, it can be written as 1/2 m times a dot r sub i squared minus GmM of r sub i divided by a r sub i. And that's just some algebra-- absorbed these extra factors that I put into the definition. And now, if we think of this as describing a test particle, where r sub i is capital R sub i max-- so we're talking about the boundary of our sphere-- then, we can identify what's being conserved here. What's being conserved here is 1/2 m v squared. a dot times r sub i would just be the velocity of the particle of the boundary of the sphere. And then, minus G times the product of the masses divided by the distance between the particle in the center. And that would just be the Newtonian energy-- kinetic energy plus potential energy, where the potential energy is negative-- of a point particle on the boundary, where we let r sub i be capital R sub i max-- the boundary of the sphere. Now, if we want to apply this to a particle inside the sphere, it's a little trickier to get the words right. If the particle is inside the sphere-- if r sub i is not equal to the max-- this is not really the potential energy of the particle. Can somebody tell me why not? Well, maybe the question is a little vague. But if I did want to calculate the potential energy of a particle inside the sphere-- that's meant to be in the interior. You can't really tell unless it's the actual diagram. But that's deep inside sphere. I would do it by integrating from infinity G da, and ask how much work do I have to do to bring in a particle from infinity and put it there. And in doing this line integral, I get a contribution from the mass that's inside this dot, which is what determines the force on that dot. But I also get a contribution from what's outside the dot. So I don't get-- if I wanted to calculate the actual potential energy of that point-- I don't get simply Gm times the mass inside divided by the distance from the center. It's more complicated what I get. And in fact, what I get is not conserved. Why is it not conserved? I could ask you, but I'll tell you. We're running out of time. It's not conserved, because if you ask for the potential energy of something in the presence of moving masses, there's no reason for it to be conserved. The potential energy for a point particle moving in the field of static masses is conserved. That's what you've learned in [? AO1 ?] or whatever. But if other particles are moving, the total energy of the full system will be conserved. But the potential energy of a single particle-- just thought of as a particle moving in the potential of the other particles-- will not be conserved. OK. 
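In symbols, the rescaled quantity described above, for a hypothetical test particle of mass m at comoving radius r sub i, is:

```latex
E_{\rm phys} \;\equiv\; m\, r_i^{2}\, E
\;=\; \frac{1}{2}\, m\, \bigl(\dot{a}\, r_i\bigr)^{2}
\;-\; \frac{G\, m\, M(r_i)}{a\, r_i}\,.
```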
What is conserved, besides the energy of this test particle on the boundary, is the total energy of this system, which can also be calculated. And that, in fact, will be one of your homework problems for the coming problem set-- to calculate the total energy of that sphere. And that will be related to this quantity with a different constant of proportionality and will be conserved for the obvious physical reason. For the particles inside, what one can imagine-- and I'll just say this and then I'll stop-- you can imagine that you know that the motion of this particle is uninfluenced by these particles outside. And therefore, you could pretend that they're not there and think of it as an analog problem, where the particles outside of the radius of the particle you're focusing on simply do not exist in this analog problem. For that analog problem, this would be the potential energy. And it would be conserved. And you could argue that way, that you expect this to be a constant. And you'd be correct. But it's a little subtle to understand exactly what's conserved and why and how to use it. OK, that's all for today.
MIT_8286_The_Early_Universe_Fall_2013
10_Introduction_to_NonEuclidean_Spaces.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: We do have a little bit of finishing to do because we didn't quite finish the dynamics of homogeneous expansion last time. So we'll begin by finishing that after a brief review of where we were last time. And then we'll move on to discuss non-euclidean spaces, which I hope will be still the bulk of today's lecture. OK. In that case, let's get going. Again, as I said, I want to begin by just reviewing some of the things we talked about last time. And you should consider this a good opportunity to ask questions if you discover there are things that you're not really sure you understood as well as you'd like. We were talking about the evolution of a closed universe. And to summarize that calculation we first re-shuffled the first order Friedmann equation, the equation for a dot over a quantity squared, by bringing all the d t's to one side and all the d a's on the other side after doing a little bit of rescaling. And we got this equation. Which we then said we can integrate. And the integral from time will go from time 0 to some arbitrary final time that we called t tilde sub f, where the "tilde" indicates that we multiplied by c. And the sub f means it's the final time of our calculation. And on the other side we have to integrate with corresponding limits of integration. Corresponding to t equals 0 is a equals zero. So the lower limit of integration is 0. And the final limit of integration is just the value of a tilde at the final time, whatever that is. Then we discovered that we could actually do the integral on the right if we made a substitution. And in the lecture last time we did it in two stages. But the combined substitution is just to replace a tilde by cosine theta, according to this formula, if we combine the two substitutions that we made last time. And if we do that we could integrate the right hand side. And the integral then gives us t tilde f is equal to the integral of that, which is just this expression. And the expression below that is just the substitution itself. How to relate a tilde to theta, according to substitution which gave us that formula. So these two formulas together allow us to determine t tilde sub f, and a tilde sub f in terms of theta and alpha. And once we had that we realized we no longer needed to keep those sub f's because that was really just a way of keeping track of our notation during the calculation. It holds for any theta sub f. So it holds for any theta. So then we just rewrote those same formulas without the sub f's. And here we wrote it removing the tildes, replacing them by their definitions. And this was the final answer. This describes the evolution of a closed universe expressed in this parametric form. That is, we were not able to explicitly write a as a function of t, which is what would have liked to determine how the expansion varied as a function of time. But instead we introduced the auxiliary variable theta, often called the development angle. And in terms of theta we can express both t and a, and thereby indirectly have an unambiguous relationship between a and t. Any questions about that? OK. Then we noticed that those were, in fact, the equations of a cycloid. 
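For reference, the parametric description being reviewed here is, in the notation of the lecture (theta the development angle, alpha the constant introduced last time):

```latex
c\,t \;=\; \alpha\,\bigl(\theta - \sin\theta\bigr),
\qquad
\frac{a}{\sqrt{k}} \;=\; \alpha\,\bigl(1 - \cos\theta\bigr).
```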
And I won't go through the argument again, but the key point is that the evolution of our closed universe scale factor, as a function of time, is described by what would happen if you had a disk rolling on the horizontal axis with a point marked on the disk which was initially straight down, and then as the disk rolls that point traces out a cycloid. And that is exactly the evolution of a closed universe. It starts at size zero, goes to a maximum size, and then contracts again in a completely symmetric way. The contraction phase is the mirror image of the expansion phase. And then we went on to calculate the age of a closed universe. And that was really more of an exercise in algebra than anything else. The age is really given by this formula to start with. This expresses the age, t, in terms of alpha and theta. And the only problem with that is that nobody really knows what alpha and theta mean. It's much more useful to have an expression for the age in terms of things that astronomers directly measure. And one needs two things to replace alpha and theta. And the kinds of things astronomers directly measure are things like the Hubble parameter and the mass density, where they often express the mass density in terms of omega where omega is the mass density divided by the critical density. And that's what I chose to do in the formulas that we went on to derive. So our goal is simply to take that formula for t and figure out how to express the alpha and the theta that appear in it so that we can get an expression in terms of h and omega. And to do that, we'll go one step at a time. We started by just writing down the Friedmann equation. And the Friedmann equation has an a in it. And everything else in it is rho, which can be expressed in terms of omega without any trouble, and h itself. So everything else in it consists of variables that we're accepting to be part of our final answer. So we could solve the Friedmann equations for a or a tilde and that will allow us to eliminate any a tilde's from our expressions. Next, alpha was originally defined in terms of this formula which only involves rho and a tilde. We now know what to do with a tilde. We could substitute that formula. And rho, we always knew what to do with. We could express that in terms of omega. So we can instantly solve that equation and get an equation for alpha in terms of the quantities that we want to appear in our final answer. And there's still one more thing we need. We need an expression for theta. And theta we can get by looking at the other of those two parametric equations, the one that's not the "t equals" equation, but rather the "a equals" equation. And in this equation we know everything except theta itself. So we could substitute for a over root k from up here- that. And on the right hand side we can place alpha by that expression and then the 1 minus cosine theta stays. And now we could solve this equation for theta. I might just mention that, in lecture last time I actually mis-copied this equation. I forgot the factor of omega in the numerator. So if any of you were taking notes you can go back and correct your notes. But it's correct in the printed notes. And now it's correct on a screen. So this you could either solve for cosine theta-- initially what I did was to solve for cosine theta. But then it turned out to be more useful to know what sine theta is, because sine theta appears in that parametric expression for the age. 
So if you know cosine theta you can, of course, get sine theta because sine theta is just the square root of 1 minus cosine squared theta. And that's what we did. And that's how we got a square root here. And since square roots can have either sign, there's a plus or minus there which depends on where you are in the evolution of our universe. The right hand side here is always positive because we define that square root symbol to mean the positive square root, where the plus or minus, in the end, tells you the sign. And sine theta itself-- we know what theta does. It goes from 0 to 2 pi. So sine theta starts out positive and after theta crosses pi, sine theta is negative for the second half of the evolution. Meaning, sine theta is negative for the contracting phase and positive for the expanding phase. We then put all that together into the formula for c t, which is alpha theta minus sine theta. The alpha becomes this factor. And the theta minus sine theta becomes the arc sine of this expression, minus or plus that expression, where this is just sine theta from that formula. And it's minus sine theta, which is why the plus minus becomes minus or plus. The signs get a little tricky, but it's, in principle, straightforward. It all just follows from this formula. And if you know the sine of theta you have all your problems solved. So we put all that together into a table. This is just a copy of the same formula that we had on the previous slide. Theta is what appears in this right hand column. It's indicated as the sine inverse of some expression. It refers to that expression-- an abbreviation. So we know what theta does. Theta just goes from 0 to 2 pi in quadrants. At least, it pays to divide it up into quadrants. Sine theta is always positive for the first half and negative for the second half. And that means that we have the plus sign, or the upper sign, for the first half of the motion and the minus sign for the second half of the motion. Omega we could just calculate in terms of theta. No problem about filling in the omega column. And we know that we're expanding for the first half and contracting for the second half, so that really completes the table. And the important thing when you're actually using this formula is to decide what theta is. And once you have that, the sine of that-- the inverse sine of that-- well, no. Excuse me. Theta itself appears there, and the sine of theta appears there. And theta itself you have to figure out which is the relevant value in terms of where you are in the evolution. The point is that sine theta itself does not uniquely determine what theta is. OK. That, I think, completes our discussion of the evolution of closed universes. I think it completes everything that we did last time. So are there any questions? OK. Good. To finish our discussion of the evolution of matter-dominated universes, we go on to discuss open universes. And open universes are really the same algebra as closed universes. They just differ by the sign of k. Because one doesn't like to deal with negative numbers, I defined kappa to be equal to minus k, so that for our open universes kappa was positive while k was negative. Then I used a different substitution for a tilde. Instead of a tilde being a divided by the square root of k, which in this case would be the square root of a negative number-- and one doesn't like to deal with imaginary numbers if one doesn't have any need to.
So instead, for the open universe I'm defining a tilde to be a divided by the square root of kappa, so that a tilde is again real. Then I'm going to skip all the algebra here. There's a little bit more of it shown in the printed lecture notes. But there are no new concepts here. Everything is really the same, conceptually, as it was for the closed universe. One difference is that this time, instead of getting trigonometric functions, we find that we get hyperbolic functions. Hyper-- yeah. Hyperbolic trigonometric functions, I think, is the right word. That is, sinhs and coshs instead of sines and cosines. The formulas are very similar. These are the formulas we get for the open universe case, compared to those formulas for the closed universe case. We get a change in the order of-- instead of theta minus sine theta, we get sinh theta minus theta. But that's what you have to get if it's going to turn out to be positive. Sine theta is always less than theta. So this is a positive quantity. Sinh theta is always greater than theta, so this is a positive quantity. And the same for the second lines. Cosh theta minus 1 is always positive, and one minus cosine theta is always positive. So you really know which order to write them in just by knowing that you want to write down something that's positive. So these formulas describe the evolution of the open universe exactly the same way as those formulas describe the evolution of a closed, matter-dominated universe. So any questions? OK. Next step, just repeating what we did for the closed universe, we can derive a formula for the age of an open universe. And again, it's really just a matter of substituting into the formula we already have to be able to re-express it in terms of useful quantities. Which, again, I choose to be the Hubble expansion rate and omega. And here I've put together all three formulas for the age. The flat universe, the first one we did, where t is just 2/3 times h inverse, if we bring the h to the other side. And the open universe on the top, and the closed universe on the bottom. Now one of the, perhaps surprising, things that one finds here is that all three of these expressions look fairly different from each other. And you might think that that would give some kind of a jagged, discontinuous curve. But you can go ahead and plot it, which I did. And there's the plot. It's one nice, smooth curve. And we won't go into this in detail, but many of you have had courses in complex functions, functions of a complex variable. If you know about functions of a complex variable you can tell that these are, in fact, all the same function. That is, if omega is, say, bigger than 1, this formula can be evaluated straightforwardly. It involves only things like square roots of positive numbers. But you could also try evaluating the other formula for omega bigger than 1. And then you have square roots of negative numbers appearing. But square roots of negative numbers are OK if you know about complex numbers. They're just purely imaginary. And then you get trigonometric functions, and even inverse trigonometric functions, or inverse hyperbolic trig functions of imaginary arguments. But all those are well-defined. And if you work through what the definitions are, the top line really is identical to the bottom line. Those really are the same function. And that's why one expects that, when you plot them they will join together smoothly, as they clearly do.
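To make the claim that these are really one smooth function concrete, here is a small numerical sketch, not from the lecture. It evaluates H t as a function of omega using the standard matter-dominated expressions obtained by eliminating theta from the parametric formulas; the closed branch is written here with an inverse cosine, which for the expanding phase is equivalent to the inverse-sine form in the table. The function and variable names are my own.

```python
import numpy as np

def age_times_H(omega):
    """Return H*t for a matter-dominated universe with density parameter omega.

    Uses the closed forms obtained by eliminating the development angle theta:
    cos(theta) = (2 - omega)/omega for omega > 1 (expanding phase), and
    cosh(theta) = (2 - omega)/omega for omega < 1.
    """
    if abs(omega - 1.0) < 1e-12:           # flat case: H*t = 2/3
        return 2.0 / 3.0
    if omega > 1.0:                         # closed case
        theta = np.arccos((2.0 - omega) / omega)
        return omega / (2.0 * (omega - 1.0) ** 1.5) * (theta - np.sin(theta))
    theta = np.arccosh((2.0 - omega) / omega)   # open case
    return omega / (2.0 * (1.0 - omega) ** 1.5) * (np.sinh(theta) - theta)

for om in [0.1, 0.5, 0.999, 1.0, 1.001, 2.0, 5.0]:
    print(f"omega = {om:6.3f}   H*t = {age_times_H(om):.6f}")
# The values pass smoothly through H*t = 2/3 at omega = 1, as the plot shows.
```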
The point in the middle here is the first age that we derived for the flat universe where omega is equal to 1 and h t is just equal to 2/3. That is, t is equal to 2/3 h inverse for a flat universe, which is the middle dot. And the closed universes are on the right and open universes are on the left. OK. Any questions about these age calculations? Yes? AUDIENCE: I noticed that, for the open solution, there's a case where you get some imaginary numbers? PROFESSOR: If you use these formulas you don't get any imaginary numbers. But if you tried to use, say, the bottom formula for a value of omega less than 1 it would give you imaginary numbers. And it would, in fact, if you trace through what those imaginary numbers do, it would give you the formula on the top. So it's all consistent. It's most straightforward to use the formula on the top for omega less than 1 and the formula on the bottom for omega greater than 1. And then one never confronts imaginary numbers. AUDIENCE: OK. PROFESSOR: Any other questions? OK. Where are we going next? Finally, just actually one last graph on the evolution of matter-dominated universes, which is the final form of a of t for a matter-dominated universe. If you re-scale things by dividing a over root k by alpha, and c t over alpha-- if you look back at our equations-- let me go back to where the equations were. We're really just graphing these equations. If I divide this equation by alpha, I just get a pure function of theta with dimensionless variables. And similarly here, if I divide a over root k by alpha I just get 1 minus cosine theta, which is a pure number. So that's what I've chosen to do. And the dimensionality works the same for the open case as well. So it allows you to draw a plot which is just independent of parameters. All the parameters are absorbed into the way the axes are defined, which are both defined as dimensionless numbers. And in that case the closed universe survives for a duration of 2 pi. The axis here, at least, has the same duration as theta. It's not actually theta, because t is not linear in theta. But this point does change by 2 pi as theta goes from 0 to 2 pi. And one can see here, the three possible curves. A closed universe which starts and then falls back, a flat universe which goes off to the right and has actually a constant slope as you go out here, and an open universe which behaves slightly differently. Actually, I think I was wrong when I said the flat universe has a constant slope. But the open and the flat case are similar to each other. They both go off to infinity, but in different ways. And all three of them merge as you go backwards in time. That's not something that might have been obvious before we wrote down the equations. But in very early times all universes look like they're flat universes, if you go to early enough times. And that actually is an important point which we'll talk about later in terms of what's called the flatness problem of cosmology. That basically is the consequences of that fact. Yes? AUDIENCE: Why didn't you just say all of them looks like open universes [INAUDIBLE]? PROFESSOR: [LAUGHS] AUDIENCE: I mean, what's special about flat universes? PROFESSOR: Well, actually, there is something special about flat. Which is that, if we plotted omega as the function of time all of them approach omega equals 1 as time goes to 0. So there is a very definite meaning of saying that they're all approaching flat, rather than they're all approaching open or closed. Good question, though. 
Because from this graph you really can't tell anything special about the flat. Any other questions? Yes? AUDIENCE: So does this mean that the open and flat solutions will extend indefinitely? PROFESSOR: Yeah. The open and flat solutions extend indefinitely in time. That's right, they do. And one can see that from the formulas or the graph. Yes? AUDIENCE: So for plotting omega as a function of time, I'm a little bit confused as to how it changes as-- for example, an open universe expands, or a closed universe-- because it seemed like, from our calculations, that when the universe was expanding, omega was increasing? Is that-- at least for a closed universe? PROFESSOR: That is true for a closed universe during the expanding phase. For a closed universe during the expanding phase, omega does increase. It starts out as 1 and then it rises to infinity when the universe reaches its maximum size and is about to turn around and go back. Because it doesn't mean the mass density increases. That's maybe what's confusing you. The actual mass density always decreases as it expands, but the critical density decreases even faster. So the ratio, omega, actually rises for a closed universe as it expands. For an open universe it's the reverse. For an open universe omega starts out being 1 at early times, as it does for any matter-dominated universe. And as the universe expands, omega gets smaller and smaller for an open universe. And it follows-- I don't have them on slides here, but we do have in the notes formulas that we derived, that we did on the blackboard, that give us omega as a function of theta. And those formulas, you can just look at them and see how omega behaves as the universe evolves. Because as the universe evolves, theta just increases. And those formulas do trivially show what I said about how omega evolves. Any other questions? Yes? AUDIENCE: Why a over alpha root k, but for a flat universe k is 0? PROFESSOR: That's right. I didn't really say that, but for the flat universe there's an arbitrary normalization that one chooses in drawing this graph. And it was really an arbitrary choice for me to draw the flat case to join on smoothly with the open and closed cases. I could have put any constant in front of t to the 2/3. And you're right, then they would not necessarily mesh unless I chose that constant in the right way. Yes? AUDIENCE: Based on that, is there a particular reason you decided to chose [INAUDIBLE] a flat universe looks like this? Is there a particular thing you're trying to show in choosing [INAUDIBLE]? PROFESSOR: OK. The question is, is there a particular reason why I chose the normalization that I chose? If I did not choose it, it would only differ by an overall factor. So it would still look-- the flat line by itself would look indistinguishable, really. It would just be higher or lower. So the only question is how it meshes with the others. And since the flat case really is the borderline between the other two cases, and since this constant that appears in front of t to the 2/3 has no physical meaning whatever, it seems the sensible thing to do is to plot it so that it looks like the limit of an open or closed universe. Because physically it is the same as the limit of an open or a closed universe coming from either side. Any other questions? Those are good questions. OK. In that case, we are finished with the evolution of matter-dominated universes, and ready to start talking about non-euclidean spaces. 
So what we'll be doing next is kind of a mini introduction to general relativity, which is how non-Euclidean spaces enter physics. Now, needless to say, general relativity is an entire course separate from this course. And of course that even has more prerequisites than this course has, so we're not going to duplicate what would be taught in the general relativity class. But it turns out that the discussion of general relativity does, in fact, divide pretty cleanly into two major issues. And we will be dealing with one of those issues but not the other. In particular, what we will be doing in this class is learning how the formulas of general relativity are used to describe curved spaces. And we will learn how particles move in curved spaces. So we'll be able to analyze trajectories in any curved space if somebody tells you what the curved space itself looks like. What we will not be doing is we won't even attempt to describe how general relativity predicts that matter should cause space to curve. That would be left entirely for the GR course that you may or may not take. But it will not be discussed here. There's only one point where we will need a result of that sort. We'll want to know how the matter in our Friedmann-Robertson-Walker universe affects the curvature. And there I would just give you the result. I'll try to make it sound plausible. But I won't make any pretense of being able to derive that result. That is, we will not be able to derive how much space curves as a consequence of the matter that's in it. But we will write down the answer so you at least know what the answer is for a homogeneous isotropic universe. So here's a picturesque slide about curved space. Four years ago, I think it was, a postdoc here named Mustafa Amin gave this lecture for me because I was out of town. And he had much more colorful transparencies than I ever do. So I'll be using some of his transparencies here. And this is one of his opening slides. So this is what we want to talk about-- curved space as illustrated in that nifty picture. So I want to begin with a kind of an historical introduction. To be honest, I'm pretty much following the logic of the first chapter of Steve Weinberg's General Relativity book. Non-Euclidean geometry of course starts by thinking about Euclidean geometry and then how one might move away from it. And historically, there's kind of a clear cut path, which was followed. Euclid based his geometry, as described in Euclid's Elements, in terms of five postulates. The first of which is that a straight line segment can be drawn joining any two points. Sounds clear enough. Second, any straight line segment can be extended indefinitely in a straight line. Also sounds obvious, which is what Euclid was banking on. Third, given a straight line segment, a circle can be drawn having the segment as a radius and one endpoint as the center. That also-- you can imagine yourself doing that-- seems straightforward enough. But then we come to the fifth postulate, which still sounds pretty obvious. But it's certainly much more complicated than the others. The fifth postulate says that if a straight line falling on two straight lines makes the interior angles on the same side less than two right angles, the two straight lines-- if produced indefinitely-- meet on that-- I think this is mis-typed. Mustafa typed it. I guess he's typed this one, too. This one shows the picture. Yeah, this should be on that side in which the angles are less than two right angles.
So let me just explain it, independent of the text. The question is, what happens if you have one line-- the line that's shown more or less vertical here-- and two lines that cross it such that the interior angles-- here shown as a and b-- are on one side less than pi. Less than two right angles is the way you could describe it. Then, as you can see from the picture, these lines will meet on this side and will not meet on that side. And that's what the postulate says: that under those circumstances, where two lines cross a given line such that the sum of the two angles adds up to less than pi on one side, the lines will meet on that side and will not meet on the other side. Yes? AUDIENCE: What was his motivation for making this the fifth postulate? It seems kind of arbitrary. PROFESSOR: OK, the question is what was Euclid's motivation for making this the fifth postulate? Well, I have to admit I haven't had many conversations with Euclid so I'm not sure I know the answer to that question. But it was what was needed to basically complete geometry as we know it. So much of what you've learned in geometry would not be there if there was not something equivalent to this postulate. But actually what I'm going to be talking about next is that it has been discovered that there are a number of substitutes for this fifth postulate. Mathematicians studied for a long time whether or not this postulate could be derived from the others since it seems so much more complicated than the others. And there was a strong desire during thousands of years really-- at least over 1,000 years-- among mathematicians to try to prove the fifth postulate from the first four postulates. And nobody ever succeeded in doing that. And we now are pretty clear that it's not possible to do that. That the postulate is independent of the other postulates. It was discovered along the way that there are a number of equivalent statements to the fifth postulate. And you could equally well have chosen any one of these four statements that are illustrated in these pictures. And I'll give you words one by one for what these alternative versions of the fifth postulate would be. A, up here, illustrated there, says that if a straight line intersects one of two parallels, meaning two parallel lines-- so this is the one line intersecting two parallel lines. If it intersects one of them as the heavy part of the line shows, then the theorem says that if you continue that line it will always intersect the other. And certainly obvious from the picture, that's the way it works. But that's equivalent to the fifth postulate and not provable from the other four postulates. A second statement-- b is the one that's illustrated there-- is the one that I remember learning when I was in high school, I think, which says that if you have one line and another line parallel to it-- or rather, I'm sorry, another point off the line-- then there's one and only one line through that point parallel to the original line. So that is yet another statement of this famous fifth postulate. Number c is less obvious. But it turns out that once you go away from Euclidean geometry, your space always has a built in scale. So things are not scalable. One example I might mention at this point of curved space is, say, the surface of a sphere. And the surface of a sphere has some fixed size. So if you have a figure of one size, and you wanted, on the surface of this sphere, to make a figure 5 times bigger, it might not fit on the sphere anymore.
So you can't always make a figure bigger on the surface of the sphere. In fact, you could never make a figure bigger without bending it in some way on the surface of the sphere. So that gives rise to this third statement of the fifth postulate, which is that, given any figure, there exists a figure similar to it of any size. And by similar it means that for polygons they're similar if the corresponding angles are equal to each other, as they're supposed to be on those two images, and the corresponding sides are proportional to each other. So a similar figure is just a blow up-- a rescaling-- of the original figure. And you can only do it if the fifth postulate holds. You can do it on a flat space but not on a curved space. And I think that is a less obvious version of the fifth postulate. And finally, the fifth postulate is linked to the behavior of triangles. We all learned in Euclidean geometry that the sum of the three angles of a triangle is 180 degrees. That is a crucial theorem of Euclidean geometry that depends directly on the fifth postulate and is, in fact, equivalent to the fifth postulate. So you could assume it and forget the fifth postulate and still prove everything. So the fact that-- actually, I'm sorry. It is equivalent to the fifth postulate. But you don't need to assume that it's true for every triangle to prove the fifth postulate. It turns out that something weaker is sufficient-- and mathematicians always look for the minimum axiom to be able to prove what they want to prove. The minimum version of the axiom for triangles is to simply assume that there's just one triangle whose angles add up to 180 degrees. And if there exists just one triangle whose angles add up to 180 degrees, then the fifth postulate has to hold-- it turns out-- which is not so obvious, again. So these are all different, equivalent ways of stating the fifth postulate. But it's pretty clear now that it's not possible to prove the fifth postulate from the first four. Any questions about that? OK, so a little bit of history now. The first person, apparently, to seriously explore the possibility that the fifth postulate might be wrong was a Jesuit priest named Giovanni Girolamo Saccheri. I'm sure other people can pronounce it much better than I can. And in 1733, which is the same year he died, he published a study of what geometry would be like if the fifth postulate did not hold. And he titled it-- this is a Latin title, which I don't know how to pronounce really, but in English it's apparently translated as Euclid Freed of Every Flaw, treating the fifth postulate as kind of a flaw in Euclid's axiomatization of geometry. Saccheri was, in fact, convinced that the fifth postulate was true. He didn't really want to consider the possibility that it was false. But he was nonetheless exploring the possibility that it was false because he understood the concept of a proof by contradiction. He was looking for a contradiction to be able to prove that mathematics would not be consistent if you assumed the fifth postulate was false. And therefore, the fifth postulate would be proven to be true. So he was exploring what would happen if the fifth postulate was false, looking all the time to find some inconsistency, and was not able to find any. So he considered all of this a failure. But he published it anyway. And that's the publication front page. The next person to enter the stage-- or actually three people together, but I only have a nice transparency on Gauss.
Gauss, Bolyai, and Lobachevsky went on to seriously explore the possibility of geometry without the fifth postulate, actually assuming that the fifth postulate is false, developing what we call GBL, Gauss-Bolyai-Lobachevsky geometry. Gauss was a German mathematician. These were the years he lived. He, in fact, was born the son of poor, working-class parents, which I found a little surprising. We kind of think of scholars in those early years as being gentlemen who were part of the nobility. But Gauss was not, but, nonetheless, went on to be one of the most important mathematicians of his age. One of the other things that surprised me-- and to be honest I just learned all this from Wikipedia. I'm no real expert on the history. But they gave a list of Gauss' students. And they included Richard Dedekind, Bernhard Riemann, Peter Gustav Lejeune Dirichlet, which is the name I remember, Kirchhoff, and Mobius. So quite a list of famous mathematicians. So I have to admit, when I read that, I was just about to send off an email to all of my former students saying, look, what's happening here? You're not competing at all. [LAUGHING] But I decided not to do that. It would be impolite. And who knows? Maybe 100 years from now my students will be as famous as these guys. You never know. We can plan, I hope. OK, so the other guys involved in this-- and they were all working independently-- were Bolyai, who was, I think, a Prussian military man, actually, and Lobachevsky, who was also a professional mathematician working in the university. The three of them independently developed a geometry in which the fifth postulate was assumed to be false. There are two ways it could be false. In the version with the triangles, for example, the angles of a triangle could add up to more than or less than 180 degrees. Since they were assuming that the fifth postulate was false, it meant they had to be assuming that every version of the fifth postulate that we just talked about is false, because they are all equivalent to each other, and these people realized that. So in particular in the Gauss-Bolyai-Lobachevsky geometry, there are infinitely many lines parallel to a given line. And no figures of different size are similar. And the sum of the angles of a triangle is always less than 180 degrees, or pi, depending on whether you're a radian person or a degree person. Now I should mention here that the surface of the sphere is, in fact, a perfectly good example of a non-Euclidean geometry. But for some reason it was not taken seriously by mathematicians until long after these guys were doing their work. And part of the reason, I guess, is that the surface of the sphere violates not just one of Euclid's axioms but two, if we go back to Euclid's axioms. The second of Euclid's axioms said that any straight line segment can be extended indefinitely in a straight line. And on the surface of the sphere, the analog of the straight line is a great circle. And if you extend the great circle, it comes back on itself. So the surface of the sphere violates the fifth postulate. But it also violates the second postulate. But it's still a perfectly consistent geometry, and it is a non-Euclidean geometry. And on the surface of this sphere, these statements all kind of reverse. Instead of having infinitely many lines parallel to a given line, you have no lines that are parallel to a given line. Remember, lines are great circles and lines are parallel if they never meet. But any two great circles meet.
So there are no lines parallel to a given line in the geometry of the surface of the sphere. It's, again, true that no figures of different size are similar. That has to be true for any geometry where the fifth postulate was false. The last one, again, has a choice. And it's the opposite choice for a sphere as for the Gauss-Bolyai-Lobachevsky geometry. In the Gauss-Bolyai-Lobachevsky geometry, the sum of the angles is always less than 180 degrees. If you picture a triangle on a sphere, you can imagine that the edges look like they bulge. And the sum of the angles of a triangle on the surface of the sphere is always greater than 180 degrees. The easiest way to convince yourself that that's true in at least one important case is to imagine a triangle. Here's a sphere. Everybody see the sphere? There's the North Pole. There's an equator. Imagine a triangle where one vertex is at the North Pole and the other two vertices are on the equator, and the triangle looks like this. And these are both 90 degree angles down here. So you already have 180 degrees. And any angle you have on top just adds to the 180 degrees, putting you above the Euclidean value of 180 degrees. So for a sphere it's always the opposite of this-- greater than 180 degrees. OK, continuing with Gauss, Bolyai and Lobachevsky, their work was based on exploring the axioms of Euclid, assuming the reverse of the fifth postulate in any one of its forms. And they proved theorems and wrote, sort of, their own versions of Euclid's elements. But that still left open the question whether or not all of this was really consistent. That is, from the thinking of Saccheri that we already talked about, there's always the chance that you might find a contradiction even if you haven't found a contradiction yet. And Gauss, Bolyai and Lobachevsky didn't really have any way of answering that. They didn't really have any way of knowing whether, if they continued further, they might find some contradiction. So what they did certainly extracted the properties of this non-Euclidean geometry. But it did not really prove that the non-Euclidean geometry was consistent. That happened slightly later in an argument by Felix Klein, who was the same Klein as the Klein bottle, by the way. And his famous paper on this was published in 1870, somewhat later than the early work that we're talking about. And what he did is he gave an actual construction of the GBL geometry. And by construction I mean in terms of coordinates using coordinate geometry ideas, which were originally developed by Descartes. That's why we call them Cartesian coordinate systems. So what he gave as a description of the Gauss-Bolyai-Lobachevsky geometry was a space that consisted of a disk with coordinates x and y just as Descartes would have done. He restricts himself to the inside of the disk. So he restricts himself to x squared plus y squared less than 1. And what he gave was a function of two points in this disk where the function represents the distance between those two points. He decided that distances don't have to be Euclidean distances if we're trying to explore non-Euclidean geometry. So he invented his own distance function. And it's pretty complicated looking. The function he came up with was that the cosh of the distance between points 1 and 2 divided by some number a-- and a could be any number-- is equal to 1 minus x1 times x2-- these are the two x-coordinates for points 1 and 2-- minus y1 y2 over the square root of 1 minus x1 squared minus y1 squared.
Sorry, it's the product of two square roots in the denominator here. And the second square root is the same thing for the two coordinates, 1 minus x2 squared minus y2 squared. Written out, the distance function d(1,2) that he defined is

\[
\cosh\!\left(\frac{d(1,2)}{a}\right)
  \;=\;
  \frac{1 - x_1 x_2 - y_1 y_2}
       {\sqrt{1 - x_1^2 - y_1^2}\;\sqrt{1 - x_2^2 - y_2^2}}\,.
\]

So this formula is certainly not obvious to anybody. But Klein figured out that the geometry described by this formula does reproduce completely the postulates of the Gauss-Bolyai-Lobachevsky geometry, including the failure of Euclid's fifth postulate. And since this boils down to just algebra, if algebra is consistent, it proves that the Gauss-Bolyai-Lobachevsky geometry is consistent. Now, as I understand it, nobody can prove that any of this actually is consistent. People prove relative consistency. So on the assumption that algebra is consistent-- that theorems about the real numbers are consistent-- Felix Klein was able to prove that the Gauss-Bolyai-Lobachevsky geometry is consistent. And this was really the beginning of coordinate geometry. I'm sure all of you are rather familiar with coordinate geometry. It's a standard topic now in math courses and even in high school. And this is really where it began. Euclid had no idea that there was any value in trying to represent geometric quantities as equations. Euclid did everything in terms of theorems. But this opened up a whole new door for how to discuss geometry. So this is a slide that just shows the same formula. And I guess this is supposed to be an image of the disk. One thing I should point out, which I forgot to point out, is that although this disk looks finite, it really is an infinite space that's being described. And one can see that by looking carefully at the distance function. As either one of these two points-- point 1 or point 2-- approaches the boundary x squared plus y squared equals 1, the corresponding square root in the denominator goes to zero, so the right-hand side blows up. So the distance between a point and another point that's approaching the boundary goes to infinity as that point approaches the boundary. So the boundary is actually infinitely far away, even though in coordinates it is still at x squared plus y squared equals 1. So this introduces another important concept, which we'll be using in general relativity, which is that coordinates don't represent distances. Distances could be very different from the way the coordinates look. So that boundary, even though it looks like it's right there on the blackboard, is actually very far away from the other points. OK, so after Klein, the important new idea that Klein introduced was, first of all, the explicit construction but also the general idea that you can describe geometry not by giving postulates but instead by actually doing a construction where you've given names to the points in terms of coordinates and you describe what happens geometrically in terms of some distance function which describes the distance between two arbitrary points. And Gauss went on to make two other very important observations about geometry, which have become essential to our current understanding of non-Euclidean geometry. So let me mention two other ideas that Gauss introduced. The first one was the distinction between what he called inner and outer properties of a curved surface. His curved spaces were all two dimensional, so they were surfaces. So this is most easily described for, say, a surface of a sphere, where we can all visualize what we're talking about. The surface of a sphere we visualize in three Euclidean dimensions and we think of its properties as being determined by that three dimensional space.
And the geometry of that three dimensional space. And of course the three dimensional space is Euclidean that we're embedding our sphere into. But the non-Euclidean aspects are all seen in the geometry of the two dimensional surface. Figures drawn on the surface as if the rest of the three dimensional space did not exist. And that is this key idea of inner versus outer properties. The outer properties of the sphere are properties that relate to the three dimensional space in which the sphere is embedded. But what Gauss realized is that there's a perfectly well defined mathematics contained entirely in the two dimensional space of the surface of the sphere. You could discuss it without making any reference to the three dimensional space around it if you wanted to, it's just a little more complicated to be able to do that. But we will in fact be doing it explicitly very shortly. And all it amounts to from our point of view is assigning coordinates to the points on the surface of the sphere and the distance function for those coordinates. And then one has a full description of this two dimensional space consisting of the surface of the sphere which no longer needs make any reference to the third dimension that you imagined the sphere embedded in. So the study of the properties of that two dimensional space is the study of the inner properties of the space. And if you care about how it's embedded, that's called the outer properties. And Gauss made it clear in the way he described things that from his point of view, the real thing that mathematicians should be doing is studying the inner properties. The outer properties are not that interesting. So that's one key idea that Gauss introduced. And the second is the idea of what we now call a metric. And there are really two pieces to this. The first of which is that instead of giving macroscopic distances, which is what Klein did, he told you how to write down a formula for the distance between any two points, Gauss realized that things could become more interesting if instead of trying to immediately write down a function for the distance between two arbitrary points, you can find your attention to very nearby points. And consider the distance between two far away points as just the length of a line that joins them where every little segment in the line is a short distance which is governed by the rules that you've made up for short distances. So if you understand short distances, long distances are obtained just by adding. That's the basic idea. So only short distances are needed. And then there's another important idea which is that the short distances themselves for an interesting class of spaces should not be some arbitrary function of the coordinates of the two points but should in fact be a quadratic function. And in two dimensions, a quadratic function mean something like this, where gxx is just any function of x and y, x and y are the two coordinates of the space. gxy is also just any function of x and u. gyy is any function and gxx multiplies the x squared, xy multiplies the x times dy and g sub yy multiplies dy squared. And there's no terms portion for dx by itself or dy by itself or dz by itself. And there's no terms proportional to the cubes of those quantities, it's all quadratic. That's the assumption. Now what underlies that assumption is not that all spaces have to have this property. This does narrow one down to a particular class of spaces. 
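In symbols, the quadratic form being described here-- writing the three coefficient functions as g sub xx, g sub xy, and g sub yy, each just some function of x and y, and with the conventional factor of 2 on the cross term-- is

\[
ds^2 \;=\; g_{xx}(x,y)\,dx^2 \;+\; 2\,g_{xy}(x,y)\,dx\,dy \;+\; g_{yy}(x,y)\,dy^2 .
\]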
But one way of characterizing that class is that what Gauss understood is that that class, the class of spaces described by a metric like this, are precisely the class of spaces which are locally Euclidean, meaning that even though the surface is curved, any tiny little patch of it looks flat and can be covered by Euclidean coordinates, redefining coordinates in that one little patch only where the metric in the one little patch would just be the Euclidean metric given by the Pythagorean theorem of Euclidean geometry, dx prime squared plus dy squared, which would be the distance function for a flat space and ordinary Cartesian coordinates. And it turns out that saying that this is true in every microscopic region is equivalent to saying that the metric over the entire space can be written as a quadratic form, meaning one that looks exactly like that. And spaces that have this property are now usually called Riemannian spaces, even though it was Gauss who first identified them. And all the spaces that we deal with in physics, in particular the spaces we deal with general relativity, will be Riemannian spaces. Or sometimes they're called pseudo Riemannian spaces because time is treated a little bit differently in physics. And the word "Riemannian" was really built on spatial geometry. But the word "pseudo" only changes the fact that this becomes the Lorenz metric, if you know what that means, instead of the Euclidean metric. But the same idea holds, which is that the spaces that we're interested in are spaces which locally look exactly like flat space. And that implies that globally, you can always write down a metric function which is quadratic, as Gauss said So the metric should be a quadratic function that specifies the distance only between infinitesimally separated points, not finitely separated points and should have the form of ds squared is equal to some function of x and y times dx squared plus a different function of x and y times dx times dy plus a different function of x and y times dy squared. So I'm just writing the same form that was on the side. This is a very important formula. Incidentally, the two that appears here is only because when we write this in a different notation, this term will occur twice, once as dx times dy, and once as dy times dx. And the two times it occurs are equal to each other, so here they're just collected together with a factor of two in front. Yes. STUDENT: Just to make sure, those three different functions you wrote, those subscripts aren't supposed to mean partial derivatives, right? PROFESSOR: No, that's right. The subscripts and this expression only mean that this multiplies dx squared, so it has subscripts xx. And this multiplies dx times dy, so it has subscripts xy. That's what the subscripts mean and nothing more. So the subscripts only mean that they're the things that appear in that equation. Yes. STUDENT: It seems like the metric is giving us distance in terms of an infintesimal displacement, but then a locally Euclidean space is already tangent infintesimally, so how are we relating the local metric with the global metric? PROFESSOR: OK, the question is how do we relate the local metric which I say is Euclidean to the global metric? And the answer I think for now I will stick to just giving kind of a pictorial answer based on the picture here. 
That is once you know the distances between any two points in a tiny little patch here, it's then always possible to construct coordinates, here called x prime and y prime, such that the distance between any two points as calculated from the original metric, which is the one here, is exactly the same as the distance you get using this metric. And the claim is that you can always define coordinates x prime and y prime which make that true. That claim is not absolutely obvious. But it's something you can probably visualize if you just imagine that every little tiny piece of this curved surface looks like it was just a flat surface and then a flat surface you know that you can write down a Cartesian coordinate system, which will have the Euclidean metric. But it's only an intuitive statement, proving it is actually harder. Yes. STUDENT: [INAUDIBLE] bottom formula [INAUDIBLE] with positive curvature if we analogize to second derivative. PROFESSOR: I'm sorry, say that again. STUDENT: If we analogize gxx, gyy, in the bottom formula there, [INAUDIBLE] positive curvature [INAUDIBLE] second derivative-- PROFESSOR: Yeah OK. So you're asking does this mean that we have two states of positive curvature? STUDENT: Bottom right. PROFESSOR: Bottom right, oh this. These are Mustafa's slides. I forgot to say that. You can tell from the style, these are not my slides, these are Mustafa's slides. And I don't know what he meant by this. You're right, this does-- well you do want the metric to be positive definite, which is not the same as saying the curvature is positive. And I think this might just be the condition that the metric is positive definite, that this expression will always be positive. Yeah. I'll bet that's what it is, I don't know for sure. I'll bet that's what that condition is about. And you do want that, the metric had better be positive definite for spatial geometries. In fact, in general relativity, where it's a space time metric, it will not be positive definite. For reasons that we'll talk about later. But for geometry, the metric should be positive definite. All distances should be positive. OK. So that ends my slides. So now I'll continue on the blackboard. OK next I wanted to say a few general comments about general relativity. General relativity was of course invented by the Einstein in 1916. It's a theory that he was working on for about 10 years after he invented the theory of special relativity. To understand what's going on there, you want to recognize that special relativity is a theory of mechanics and electrodynamics which was designed to be consistent with the principle that the speed of light is always the same speed of light independent of the speed of the source of light or the speed of the observer of the light. And it's of course not easy to do that, because you think that if you move relative to something else that's moving, that you would see its velocity change. So in order to make a theory that where the speed of light was an absolute invariant, Einstein had introduced a number of things. And we talked about those at the beginning of the course, the three primary effects built in to special relativity. Time dilation, length contraction, and new rules about simultaneity, and how things that look simultaneous to one observer will not look simultaneous to other observers in a very definite, well defined way. So by inventing these rules, Einstein was able to devise a system which was consistent with the idea that the speed of light always looked the same to all observers. 
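For reference, the first two of those effects are just the standard special relativity formulas, usually written in terms of the factor gamma:

\[
\gamma = \frac{1}{\sqrt{1 - v^2/c^2}}, \qquad
\Delta t = \gamma\,\Delta\tau, \qquad
L = \frac{L_0}{\gamma},
\]

where delta tau and L_0 are the time interval and length measured in the moving object's own rest frame, and delta t and L are what an observer who sees it moving at speed v measures. Nothing here is special to this course; it is just the bookkeeping that makes the invariant speed of light come out consistently.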
And at similar times, the Michelson Morley experiment seemed to show that that was in fact the case and ultimately there's a tremendous amount of evidence verifying that what the hypothesis that Einstein was pursuing is the right one for the way nature behaves. The speed of light is invariant. But what was lacking in Einstein's formulation of special relativity was any version of a consistent theory of gravity. Gravity had a well defined description known since Newton. But Newton's description was a description of a force at a distance. And that is intrinsically inconsistent with a theory like special relativity, which holds that simultaneity is itself relative. In Newtonian physics, the force of gravity is equal to Newton's constant times the product of the two masses that are interacting with each other divided by the square of the distance and then times some union vector pointing along the line that joins the two particles. But for that formula to make any sense at all, you have to know where the two particles are at the same instant of time. And this r is the distance between the locations of the two particles at some instant of time. And this r hat is a unit vector that points along the line joining those two particles where you've pinned down where those particles are at one instant of time. But we know from the very beginning that in special relativity the notion of two things happening at distant points at one instant of time is ambiguous. Different observers will see different notions of what it means for two events to happen at the same time across this distance. And that means there's no way to make sense out of Newton's Law of gravity in the context of special relativity. You can't modify it by just changing the way the force depends on distance. You really have to change the variables that it depends on from the beginning. Yes. STUDENT: Is that equivalent to saying the other particle can't know how much mass of one is [INAUDIBLE] limit of the information of the speed of light? PROFESSOR: Is that equivalent to saying that one particle can't know what the mass of the other particle is because of limitation. Yeah it's the distance, it's not the mass. Because the mass is preserved in time. So one particle could've measured the mass of the other particle at an earlier time and it would be reasonable to infer, given the laws of physics, that we know that would stay the same. But one particle has no way of where the other particle is at the same time. And not only does the particle not have any way of knowing, but even an external observer can't know. Because different external observers will have different definitions of what it means to be at the same time. So there really is no way that this could work. Now I should mention that it is still possible to have action at a distance theories which are consistent with special relativity. And in fact, Maxwell's equations can be reformulated that way. The easiest way to make things consistent with special relativity is to describe interactions in terms of fields. And that is really what Einstein did originally in special relativity. He was thinking about light, he was thinking about Maxwell's equations and he was thinking very explicitly about Maxwell's equations. And in the Maxwell description of electromagnetism, particles at a distance don't directly interact with each other. But rather, each particle interacts with the fields around it and that is a local interaction. 
A particle interacts directly only with the fields at the same point. But then those fields obey wave equations that can propagate information. And the fact that you have an electron here can create a field which then exerts a force on an electron there. And the force on the electron here depends only on the electric fields here or the magnetic fields as well. The particle is moving, but everything is completely local and the description of electromagnetism as given in the form of Maxwell's equations. And Maxwell's equations are completely relativistically invariant. And that was part of Einstein's-- it was really the key part of Einstein's motivation in constructing the theory of special relativity in the first place to make Maxwell's equations invariant. That they held in every frame. It is still possible though and worth recognizing that it's possible to reformulate electromagnetism as an action at a distance theory. And it is in fact described that way in volume one of the Feynman lectures for those of you who've looked at the Feynman lectures. In order to make that work, you have to complicate things in a pretty significant way. So I'm going to draw here just a space time diagram. X and CT, CT going up that way, x going that way. So in this diagram the speed of light would be a 45 degree line. And let's suppose we have two particles traveling in this space. A particle that I will call a and a particle that I will call b. Being very original with these names. If we wanted to know the force on particle a at a certain time t indicated by this dotted line, I guess I'll label t to make it as clear as possible. Feynman gives us a formula where we can determine it solely in terms of the motion of particle B without talking about fields at all. But the formula does not depend on where b is at the same time. And it could not if this is going to be a relativistic description. But instead, the way this action and the distance formulation works is when imagine drawing a 45 degree line backwards, meaning a line that light could travel on, and one sees where that intersects the trajectory of particle B. And that time is t prime. And the word that's used for that symbol is retarded time. It's an earlier time. It's exactly that time which has the property that if particle b emitted a light beam at that time, it would be arriving at particle a at just the time t that we're interested in the time when we're trying to calculate the force on particle a. And what Feynman gives you in volume one if you look at it is a very complicated formula that determines the force on particle a in terms of not only the position of this particle at time t prime but also its velocity and even its acceleration. But if you do know the position, the velocity, and the acceleration of this particle, and of course the velocity of particle a, you can determine the force on a. Not obvious, but it's true. But that's certainly not the easiest way to formulate electromagnetism. And that's not the way most of us have learned, unless you've learned by starting by reading volume one of Feynman. But most of us learn Maxwell's equations as differential equations. Where information is propagated by the field from one point to another. In the case of general relativity, one has the same problem. How can you describe something which-- and the only approximate you initially know is an action and the distance, how can you describe it in a way that's consistent with relativity? And the idea that simultaneity is not a well defined concept. 
So that was the problem was Einstein was wrestling with for 10 years, how to build a theory of gravity that would be consistent with the basic principles of special relativity. And the result of those 10 years of cogitating is the theory that we call general relativity. And it's essentially a theory which describes gravity as a field theory similar to Maxwell's field theory where all interactions are local. Nothing interacts at a distance, but particles interact with fields at the same point, the fields can propagate information by obeying wave equations, and the fields then at a distant point can exert forces on other particles. But in the case of general relativity, what Einstein concluded was that the fields that were relevant, the fields that described gravity, were in fact the metric of space and time. So general relativity is the field theory of the metric of space and time. And gravity is described solely as a distortion of space and time. And that's what general relativity is about and that's what we will be learning more about. Now as I said earlier, now I can say it perhaps more explicitly, what we will be learning about is how to describe the curvature of space time as general relativity describes it. We will learn how that curvature affects things like the motions of particles. But we will not in this course learn how the presence of particles and masses affects the curvature of space time. That you'll have to take it in a relativity course to learn. OK, I think that's where we'll be stopping today. Just doesn't pay to start a new topic with a minute and a half left. But let me just ask if there are any questions. OK, well class over, I will see you folks in a week, because there's no class next Tuesday.
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu PROFESSOR: As you know, Professor Guth is away. I'm substituting for today, he didn't leave me with a particularly coherent game plan, so I'm going to begin with where he thinks we should start. Please jump in if I am just repeating something that he has already described to you guys, or if there's anything you like me go over a little bit more detail, I will do my best here. So, I'm working off of a fairly rough plan. But let me just quickly describe what-- based on what Alan has explained to me --what we're planning to talk about today, and if there's any adjustments you think I should be making that would be great. So, the game plan for today. What I want to do very quickly is hit on a couple of the key points which I believe you talked about last week, which is a quick review of the essential features of symmetries of the gauge fields the make up the standard model. Now, I believe you guys did in fact talk about this last week, at least briefly. And you talked about how you can take these things and embed them in a larger gauge group, the group SU(5). I'm not going to talk about that too much, but I want to just quickly hit on a few elements related to this before we get into that. From this we'll then talk about the Higgs mechanism-- really I'm going to talk about the Higgs field, I'm not going to talk about the Higgs mechanism quite so much as motivate why it is necessary-- and then talk about how the Higgs field behaves and why it's important for the next problem, which is what is called the cosmological monopole problem. To be more specific magnetic monopole problem. I confess I feel a little bit awkward talking about this problem on behalf of Alan. This would be kind of like if you were planning on studying Hamlet and there was this guy W. Shakespeare who was listed as the instructor and you walk in and discover there's this guy Warren Shackspeare, who's actually going to be teaching or something like that. I kind of feel like Warren here. This stuff really is Allen's thing, so it's sort of, I'm probably going to leave this at the denouement of all this when you actually get into inflation to him. I may have a little bit of time at the end to just motivate it a little bit, but the grand summary will come from him. OK, so, as discussed by Alan the standard model describes all the fundamental interactions between particles via gauge theories. OK, and these gauge theories all have a combined symmetry group that is traditionally written in a somewhat awkward form, SU(3) cross SU(2) cross U(1). U(1) could be an SU(1) for reasons which I'll elucidate a little bit more clearly in just a moment. There's really no point in putting the S on that one. So each of these things essentially labels the particular symmetry group. So, the "S" an element of SU(n) is a matrix that is n x n, that is unitary-- that's the U. Unitary just means that the inverse and the transpose of the matrix at the same, actually the Hermitian conjugate because they can be complex, in fact, they generally are. And it has determinant of 1. That's what the special refers to, special, the S in SU(n) stands for special unitary n. 
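In equations, the conditions just listed say that an n by n complex matrix U belongs to SU(n) provided

\[
U^\dagger U \;=\; U\,U^\dagger \;=\; \mathbb{1}, \qquad \det U = 1,
\]

where the dagger denotes the Hermitian conjugate, that is, the transpose combined with complex conjugation.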
So, the S means that the determinant is one-- that's what's special about it-- unitary is this idea that the inverse and the Hermitian conjugate are the same, and then n refers to the size of the matrices. So, that tells us that the gauge degrees of freedom are related by a symmetry that looks like a 3 by 3 matrix with these properties, as listed there for the SU(3) piece of the symmetry. SU(2) means it's a 2 by 2 matrix. And U(1) means it's a one by one matrix, and what's a 1 by 1 matrix? It's a number, it's a complex number. And that's why we don't really need to put an "s" in front of it. If it's a complex number, its determinant is 1 if it's just a complex number whose modulus is one. That's why we don't bother with the S on the U(1). So, I think you've already hit on some of this but this is sort of useful to review because it's going to set up why we need to introduce a Higgs mechanism in a little bit. Let me just quickly hit on what the detailed structure of this looks like for what I think is the easiest one to understand. So, as I just said, a one by one matrix is just a complex number. So that means that any element of this group is a complex number, which we can write in the form z equals e to the i theta, where theta is a real number. Now, the thing which I want to hit on in this, the reason I want to describe this a little bit, is that this may not smell like the gauge symmetry that you're used to if you study classical E&M. Some of you here are in 8.07 with me right now, and we've gone over this quite a bit recently. How is this akin to the gauge group that we are normally used to when we talk about the gauge freedom of electricity and magnetism? Well, it turns out there's actually a very simple relationship between one and the other, rather between this view of it and the way we learn about it when we study classical E&M. It's simply that we use a somewhat different language, because when we talk about it in this group theoretic picture we're doing it in the way that is sort of tuned to a quantum field theory. So, the way we have learned about electromagnetic gauge symmetry in terms of the fields sort of goes as follows. We actually work with the potentials, and so what we do is we note that the potential A mu, which you can write as a four vector, has a time-like component that is the negative of the scalar potential, and spatial components that are just the three components of the vector potential. So, this potential and this potential-- okay, this is possibly modulo a factor of c somewhere in here, but I'm going to imagine the speed of light has been set equal to 1. Both of those potentials generate the same E and B fields. OK, again you still should be looking at this and thinking to yourself, what the hell does this have to do with the U(1) as we presented it here? I've given you a bunch of operations that involve some kind of a scalar function of time and space. And I've added its derivatives to particular components of this four vector in this way-- what does that have to do with this multiplication by a complex number? Well, where it comes from is that when we study E&M, not as a classical field theory but as a quantum field theory, we have a field that describes the electron. So, where it comes from is that when you examine the Dirac field, which is the quantum field theory that governs the electron, when you change gauge the electron field acquires a local phase change.
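The two potentials being compared here are presumably related by the standard gauge transformation of classical E&M-- with lambda some arbitrary scalar function of time and space, and with the speed of light set equal to 1:

\[
\phi \;\to\; \phi - \frac{\partial \lambda}{\partial t}, \qquad
\vec{A} \;\to\; \vec{A} + \vec{\nabla}\lambda,
\qquad \text{or compactly} \qquad
A_\mu \;\to\; A_\mu + \partial_\mu \lambda ,
\]

which leaves the E and B fields, and hence all of the physics, unchanged.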
So in particular, what we find is that if we have a field psi of x-- which those of you who have taken a little bit of quantum field theory should know is actually a spinor field, but for now just think of it as some kind of a field that obeys the field equations of quantum electrodynamics, the Dirac equation or higher order ones that have been developed by Feynman, Schwinger, and others-- under a change of gauge this goes over to psi prime of x, which equals e to the i e sub 0 lambda of x, times psi of x-- terrible notation, I realize-- e is obviously the base of natural logs, e sub 0 is the fundamental electric charge. OK, can everyone read that? I didn't block it too badly here, I'm not used to this classroom. So, here's the thing to note, is that this field lambda, which we learned about in classical E&M, directly connects to the phase function of the Dirac field in quantum electrodynamics. So, our gauge symmetry is simply expressed in the quantum version of electrodynamics by a function of the form e to the i real number, where that real number is the fundamental electric charge times the classical gauge generator. So, this is what is meant when people say that electrodynamics is a U(1) gauge theory. Now, I'm not going to go into this level of detail for the other two gauge symmetries that are built into the standard model. But, what I want you to understand is that the root idea is very, very similar. It's just that now, instead of my gauge functions looking like e to the i times some kind of a local gauge phase of x multiplying my functions, my quantities which generate the gauge transformation are going to become complex valued matrices. So that makes them a lot more complicated, and it's responsible for the fact that the weak and the strong interactions are non-abelian, meaning which order you perform the gauge transformations in matters. Question. AUDIENCE: What's the physical significance of them being non-abelian? PROFESSOR: Yes. So, what is the physical significance of them being non-abelian? I'm trying to think of a really simple way to put this, it's-- Alan would have an answer to this right off the top of his head, so I apologize for this-- this isn't the kind of thing that I work on every day so I don't have an answer right at the very top of my head, unfortunately. Let me get back to you on that one, OK, that's something I can't give you a quick answer to. It's an excellent question and it's an important question. Any other questions? OK, so, here's a basic picture that we have. So, what we find is that the strong interactions have a similar structure where my e to the i factor goes over to a 3 by 3 matrix, and the weak interactions have a similar structure with my e to the i factor going over to a 2 by 2 complex matrix. OK, what does this have to do with cosmology? In fact, it has an enormous amount to do with cosmology, as we'll see over the course of the rest of the course. Part of the thing which is interesting about all this is that we have strong experimental reasons, and theoretical reasons to believe, that the different symmetries that these interactions participate in, the different symmetries that we see them having, that isn't the way things have always been. So, in particular when the universe was a lot hotter and denser these different symmetries actually all began to look the same. In particular the one which is particularly important, and you guys have surely heard of this, is that the SU(2)-- if we just focus on electric and the weak piece of this-- SU(2) cross U(1).
So, this is associated with the gauge boson that carry the weak force, OK, the z boson, the w plus, and the w minus. And your U(1) ends up being associated with the photon. In many ways, when you actually look at the equations that govern these things, they seem very, very similar to one another except that the-- here's partly an answer to your question I just realized-- the gauge generators of these things have a mass associated with them. That mass ends up being connected to the non-abelian nature of these things. That's not the whole answer, but it has a connection to that. That's one thing which I do remember, like I said I feel this is really Alan's perfect framework here and I'm just a posture in bad shoes. So if we look at this thing, what we see is that these symmetry groups, what's particularly interesting is that U(1) can be regarded as a piece of SU(2). And we would expect that in a perfect world they would actually be SU(2) governing both the electric and the weak interactions. Whereby perfect I mean everything is a nice balmy 10 to the 16th GeV throughout all of space time, and all the different vector bosons happily exchange with one another, not caring with who is who. It's actually not very perfect if you want to teach a physics class and have a nice conversation, but if you are interested in perfect symmetry among gauge interactions it's very, very nice. So, the fact that these are separate is now-- I was about to use the word believed but it's stronger and that, we now know this for sure thanks to all the exciting work that happened at the LHC over the past year or two-- the fact that these symmetries are separate is due to what is called spontaneous symmetry breaking. So, let's talk very briefly about what goes into this spontaneous symmetry breaking. So SU(2) turns out to actually be isomorphic to the group of rotations on a sphere. So, when you think about something that has perfect SU(2) symmetry it's as though you have perfect symmetry when you move around through a whole host of different angles. OK, so you move through all of your different angles and everyone looks exactly identical to all the others. If you break that symmetry it may mean you're picking out one angle as being special, and then you only retain a symmetry with respect to the other angle. And essentially, that is what happens when SU(2) breaks off in a U(1) piece of it. Something has occurred that picked out one of these directions. And by the way, you have to think very abstractly here. This is not necessarily a direction in physical space we're talking about here but it's a direction in the space of gauge fields. So, if we imagine that all of these, my gauge fields in some sense the different components of them defined in some abstract space direction, initially these things are completely symmetric with respect to rotations in some kind of an abstract notion of a sphere. And then something happens to freeze one of the directions and only symmetries with respect to one of the angles remains the same. Let's just write that out, when SU(2)'s symmetry is broken so one of the directions in the space of gauge fields is picked out as special. That direction then ends up being associated with your U(1) symmetry. So, what is the mechanism that actually breaks the symmetry and causes this to happen. Well, this is what the Higgs field is all about. The idea is there is some field that fills all of space time. 
It has the property that at very high energies it is extremely symmetric, with to respect all these gauge fields, all directions and sort of gauge field space look exactly the same. And then as things cool, as the energy density goes down by the temperature of the expanding universe, cooling everything off, the Higgs field moves to a particular place that picks out some direction in the space of gauge fields as being special. So let's make this a little bit more concrete. OK. You guys have probably heard quite a lot about the Higgs field over the past couple years, months-- what actually is it? Well, the field itself is described by a complex doublet. So, if you actually see someone write down a Higgs field what they will actually write down is h, being a two components spinner, whose components are h1 of x, h2 of x-- where x really stands for space time coordinates, so that's time and all of your spatial coordinates-- and both h1 and h2 are complex fields. The thing which is particularly key to understanding the importance of this thing is that h transforms, under gauge transformations, with elements of SU(2). So, if you want to change gauge the way you're going to do it is you're going to have some new Higgs field. So remember, if U(2) is an element of SU(2) we call it the two by two matrix. This is what they look like in a new gauge OK-- pardon me a second I don't see a clock in this room, I just want to make sure I know the time, thank you. OK, so, what are we going to do with this? Well, there's a couple features which it must have, so the Higgs field fills all of space time and it has an energy density associated with it, which we will call just the potential energy. It's really an energy density, but, whatever. The energy density that is associated with this thing must be gauge invariant. OK, even when you're working with strong fields and weak fields, the lesson of gauge invariance from E&M still holds. OK, one of the key points was that the gauge fields affect potentials, they allow us to manipulate our equations to put things into a form where the calculation may be easier. But at the end of the day, there are certain things it actually exert forces that cause things to happen, those must be invariant to the gauge transformation. Energy density is of those things. If you were to get into your spaceship and go back to the early universe and actually take a little scoop of early universe out and measure the energy density, A, that would be cool, but B it would be something that couldn't actually depend on what gauge you were using to make your measurements. That is something that is a complete artifice of how you want to set up the convenience of your calculation. So, in order for the energy density to be gauge invariant we have to find a gauge invariant quantity that is constructed from this, which is the only thing the energy density can depend on. This means, let's call our energy density V, it's the potential energy density. So, it can only depend on the following combination of the fundamental fields Pretty much just what you'd expect. This is sort of the equivalent to saying that if you're working in spherical symmetry the electrostatic potential can only depend on the distance from a point charge. This is a very similar kind of construct here, where I'm taking the only quantity that follows in a fully symmetric way, of calling the fact that this is a special unitary matrix that I can construct from these things. 
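The combination in question is presumably the magnitude of the doublet, the one quantity built from h1 and h2 that is unchanged when h goes to U times h for any SU(2) matrix U:

\[
|h|^2 \;\equiv\; h^\dagger h \;=\; |h_1|^2 + |h_2|^2 ,
\qquad \text{so that} \qquad
V = V\big(|h|\big).
\]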
So then, where all the magic comes in is in how the Higgs field potential energy density varies as a function of this h, this magnitude of h. So, as I plot v as a function of h, in order to get your spontaneous symmetry breaking to happen what you want is for the minimum of V, the minimum potential energy, to occur somewhere out at a non-zero value of the Higgs field H. Now, why is that so special? The thing that is so special about that is that when I constructed this magnitude of h, I actually lost a lot of information about the Higgs field. OK, let's just say for the sake of argument that this minimum occurs at a place where the Higgs field in some system of units has a value of 1. So, all I need to do is as my universe cools what I'm going to want is energetically, my potential is going to want to go down to its minimum. So, that just means that as the universe is cooling, maybe at very, very early times when everything is extremely hot and dense, I'm up here where the potential energy is very high. As the universe expands, as everything cools, it moves over to here, it just moves to someplace where the Higgs field takes on a value of 1. And that's exactly correct, that is what ends up happening. But remember, the minimum occurs at some value in which the magnitude of this field does not equal zero, but given that value-- where again let's just say for this for sake of specificity that we set it equal to the magnitude of this thing equal to 1 in some units-- there's actually an infinite number of configurations that correspond to that because this is a complex number, this is a complex number. I could put it all into little h1, and I could set into the value where that thing is completely real, or I could put it all into little h2 being completely imaginary or all on to h1 being all imaginary, halfway into h1, halfway into h2. There are literally an infinite number of combinations that I can choose which are consistent with this value of the magnitude of H. So, yeah-- AUDIENCE: So, I don't know if I'm putting too much physical significance on the gauge, but with the other cases of spontaneous symmetry, briefly, that we discussed you can always measure. OK, I've broken my symmetry, and now it's lined up this way, or there's something measurable. Now, the field has to be physical because the fact that you have gauge symmetry gives you some concerned quantity, right? But, how can I measure what direction in gauge space that I picked out? PROFESSOR: So, that is, let me talk about this just a little bit more. I think answering your question completely is not really possible, but there is a residue of that is in fact very interesting, and let me just lay out a couple more facts about what actually happens with this gauge symmetry, and it's not going to answer your question but it's going to give you something to think about. OK, so that's an excellent and very deep question, and there are really interesting consequences. And this is a case where my failure to answer the previous one is because there's details I can't remember, in this case, I think it's because there's details we actually don't understand fully. Research into the mechanism of electroweak symmetry breaking, which is what this is all about, is one of the hot topics in particle physics right now. AUDIENCE: I was just wondering if gravity has any gauge symmetry associated with it. 
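A standard toy example of a potential with exactly the shape being described-- just an illustration, not necessarily the precise form that appears in the standard model, with lambda and v positive constants-- is

\[
V(h) \;=\; \lambda \,\big( |h|^2 - v^2 \big)^2 ,
\]

whose minimum is not the single point h equals 0 but the entire set of configurations with the magnitude of h equal to v, which is exactly the infinite family of equivalent minima just discussed.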
PROFESSOR: It does, but it fits in a very, very different way, and with the exception of the fairly speculative framework of string theory-- which I think is very, very promising, but it's just sufficiently removed from experimental verification that I'm going to have to label it speculative-- it doesn't quite tie in in the same way. And that's the best I can say right now. The gauge symmetries of general relativity are, at the classical level, they correspond to coordinate transformations, at a quantum level, there's not such a simple way to put it. All right, where was I, OK, sorry I didn't get to your question. So, the point we made here is that we have spontaneously, when we actually choose which one of these infinite number of values we're going to have, we just randomly break the symmetry. OK, and you guys apparently have already talked a little bit about spontaneous symmetry breaking. The analogy that people often make is to the freezing of water, OK, prior to the water entering its solid phase its completely rotationally symmetric, then at a certain point crystalline planes start to form, the water forms, all the molecules get set into a particular orientation, you lose that rotational symmetry. In this case, we started out with a theory, with a set of interactions that were completely symmetric in sort of gauge field space. And now by settling down and picking a particular special value of h1 and h2 we have at least nailed down one direction. It's like we've defined a crystalline plane, and so now things, suddenly, aren't as symmetric. And we start to pick out preferred directions in our gauge fields. What we can do with this is really a topic for a whole other course, and that course is called quantum field theory, but I will sketch a couple of the consequences and this gets directly to the answer your questions. So, one of the consequences of this is that once we have picked out a particular direction, electrons and neutrinos are different. When the Higgs field is equal to zero there is no difference between an electron and a neutrino. They obey exactly the same equation, there's literally no difference between them. Once we have actually settled on an h1 and an h2 some combination of the fundamental underlying fields comes together, acquires a mass, acquires an electric charge, and we say A-HA thou beist an electron. It wasn't like that in the original unbroken symmetry. AUDIENCE: Also, [INAUDIBLE]? PROFESSOR: Presumably, but I'm going to stick with just these for now, but I've I'm pretty sure that's the case, yeah. That gets into even more complications of course because the additional generations are actually consequence presumably of some broken higher level symmetry, which is even poorly, more poorly understood. But you raise a good point. So, that's one partial answer your question. How one can actually walk that backwards to understand this thing about the initial state? That's hard to say. I actually think this particular one is one of the profound and interesting aspects of this, in part because we now know the neutrino has a mass. We have no idea what that is, and in fact we only really have bounds on the mass, such that we know it is non-zero, and we have upper limits that are set by very indirect measurements. But the actual values of the mass are very, very poorly constrained. Within the standard model you just take the electroweak interaction, introduce a Higgs coupling and allow the symmetry to be spontaneously broken, the neutrino mass is zero. Full stop zero. 
So something's not right, we're actually missing something here. People have kind of jury rigged the standard model to put in the masses by hand, and it works OK, but it's not completely satisfying. And a lot of experiments going on right now to explore the neutrino sector are hopefully going to open us up to a deeper understanding of this and may say a lot about all this physics, which is at present, pretty poorly understood. The consequence, which has received the most popular press, and what you guys have certainly seen about in newspapers, given the results that came out from the LHC over the past year is that quarks and leptons have mass, or put more specifically, rest mass. To understand what this actually means I think you really need to ask yourself what is mass meant to be. Well, the idea is you calculate the spectrum of oscillations associated with the fields of your theory, and then if your theory predicts a discrete spectrum of oscillations, it doesn't even have to be discrete but predict some spectrum of oscillations, then for every oscillation frequency omega there's an associated mass that is just H bar omega over c squared. If your omega has some lower bound that is greater than zero, then your theory has particles with nonzero rest mass. Without going into the details-- and this again is something which those of you who are going to go on to study this in more detail in a higher level course, which is fairly standard stuff is done in probably the first or maybe late in the first or early in the second semester of a typical quantum field theory course-- what you'll find is that when the Higgs field is zero then quarks and leptons have, the field that describes quarks and leptons-- and yes including mu and tau, so including all the leptons, this one I'm very confident on-- the spectrum goes all the way to zero if the Higgs field is zero. But when the Higgs field becomes non-zero, roughly speaking, it shifts the spectrum over for these particles. There's an interaction between the things like the electron field in the Higgs field or the up quark field and the Higgs field, which shifts the spectrum over just enough so that the frequency is never allowed to go below some minimum. AUDIENCE: Going back a bit, I'm confused about how picking a specific value to the Higgs field is breaking SU(2) symmetry and not U(1), because it seems like we're fixed on a circle, right? PROFESSOR: That's right what U(1) is a symmetry on a circle, SU(2) is kind of like symmetry on a sphere, essentially. AUDIENCE: Right, so how are we not picking a specific value [INAUDIBLE] circle [INAUDIBLE]? PROFESSOR: Well, what we're doing is, think of it this way, imagine SU(2) is a symmetry on a sphere, and then when we break the SU(2) symmetry it's like we're picking some circle on that sphere. So, we've broken one circle, we've picked one circle, but now we're allowed to go anywhere on that remaining circle, which is a U(1) symmetry. Does that help? Yeah, OK good. And it comes down to the fact if you sort of count up your degrees of freedom, it has to do with the fact you you've got four, you have two complex numbers, so there's four real parameters associated with this thing, and they are isomorphic to sort of rotations in a three space and you're adding one constraint. OK, so let me just finish making this point here again. So, when h does not equal zero, spectrum get shifted for the quarks and leptons, so everything picks up a little bit of a mass. 
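In the language used a moment ago, the whole statement about masses can be summarized in one line-- with omega sub min the lowest oscillation frequency in the spectrum of a given field:

\[
m \;=\; \frac{\hbar\,\omega_{\min}}{c^2} ,
\]

so a spectrum that extends all the way down to omega equals zero means massless particles, and a Higgs-induced shift that forces omega sub min above zero is what shows up as a nonzero rest mass.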
And the final one, the final consequence which we're going to talk about today, is that the universe is filled with magnetic monopoles. We all remember studying Maxwell's equations, learning that del dot B is equal to 4 pi times the density of magnetic charge-- this all makes perfect sense, right? Well, this is actually something that, when it first sort of came out and people began to appreciate it, was greeted with sort of a "Um, well, everything else works so well, maybe we're just not looking hard enough." So, it was a bit of a surprise. So, where do these magnetic monopoles come from? And essentially, the magnetic monopoles are going to turn out to be a consequence of the fact that when spontaneous symmetry breaking happens, it doesn't happen everywhere simultaneously. So, think again about-- yeah? AUDIENCE: Doesn't that bring up the possibility that the symmetry could break in different ways in different places? PROFESSOR: That is in fact exactly what this is going to be. Magnetic monopoles are in fact exactly a consequence of this, yes. Give me a few moments to step ahead to fill in a couple of the gaps, but you're basically already there. So, think about crystal formation again. Imagine you have-- we could do ice if you like, or choose something that's got a little bit more of an interesting crystalline structure. Imagine you have a big bucket full of molten quartz, OK. So, if you have a big thing of quartz that you want to sort of freeze into a single gigantic crystal, what you typically do if you'd like to do this is you actually seed it with a little bit of a starter crystal. So, you put a little bit of crystal into this thing, and what that does is it sort of defines a preferred orientation of the crystal axes, so that as things start to cool in the vicinity of that they have a preferred orientation to grab on to. And that seed then gradually gets bigger and bigger and bigger, and all the little crystals as they form near it tend to latch onto the preexisting crystalline structure, and that allows you to grow actually extremely large crystals. I don't know if anyone here is doing a year off with the LIGO project, but these guys have to make these sort of 100 kilogram mirrors of very pure either sapphire or silicon dioxide, and when you make 100 kilograms of crystal you need to build it really, really carefully. It's extremely important for the optical purposes that all the axes associated with the crystal are pointing in the right direction. Otherwise you spend $100,000 on this thing and it ends up being the world's prettiest paperweight. So, similar things happen when the Higgs field cools. Let's imagine that we've got our universe, time going forward like this, and at some point over here the universe cools enough that the Higgs field condenses into some particular direction. And symmetry is spontaneously broken right at this one point over here. So, I'm going to draw my diagram over there and put some words over here. I shouldn't say the Higgs field cools enough-- the universe cools enough so that the Higgs field breaks the symmetry. So, just to be concrete, let's imagine that at point 1 over here the Higgs field takes on the value 1 for h1 and i for h2. So just for concreteness, imagine it looks something like this at this point. And so what happens is, as the universe continues to expand, other areas are going to cool off. The bits that are closest to it are going to see that there is already a preferred orientation defined by the Higgs field.
And so it's energetically favorable for those regions of the universe to fall into the same alignment, and so there'll be a region in space time that grows here as the universe cools, in which the Higgs field all falls into this configuration, which I will call h1. But suppose somewhere over here at point 2-- and the key thing is that initially point 2 is going to be so far away from point 1 that these points are out of causal contact with one another; I cannot send a message from event one to event two-- the Higgs field also reaches the point where the universe has cooled enough. Just, you know, it's a system that's not in thermal equilibrium. So, some places are going to be a little bit hotter than others, some are going to be a little bit cooler. And so, at these two points it just so happened that the Higgs field got to the point where it could spontaneously break the symmetry. So at point 2 the Higgs field also got to the point where it could spontaneously break its symmetry. And the only thing that's got to happen is-- remember, the only constraint we have is that the magnitude of the Higgs field be equal to some value-- I should normalize that to root 2 in the units I want to use, but whatever. Let's say on this one my h1 is equal to i, and h2 is equal to minus 1. So, it's basically the same thing but all the fields are multiplied by i. It's the same magnitude, so it's going to have the same potential energy. So that's cool. Clearly this is allowed, and now all the regions in the universe that are close to this are going to sort of smell this particular arrangement of the Higgs field and say, OK, that's the preferred arrangement I want to go into. So, we have two separate values of the Higgs field that are happily sweeping out space time here. This gets to the excellent question I was just asked a moment ago-- what happens when they collide? As the universe expands and gets cooler, all of it is going to end up getting swept into either the field that was seeded at event one, or the field that was seeded at event two, but at a certain point we're going to get the bits where they're smashing into one another. So what happens when these different domains come into contact with one another? The absolutely full and probably correct answer is we don't know. The reason is that we don't really, to be perfectly blunt, fully understand every little detail about the symmetry breaking, or about the structure of whatever grand unified theory brings all these things together at the temperatures at which this is happening. Because this is happening when the universe has a temperature of like 10 to the 16th GeV. And so it's way beyond the domain of where we can push things. But we can, as physicists are fond of doing, we can parameterize our ignorance, and we can ask ourselves, well, what happens if these various parameters that characterize my grand unified theory take on the following plausible kinds of values. And what we find is that generically, when you have two different domains where the Higgs field takes on different values like this, when these domains come into contact you get what are called topological defects. The topological defects come in three different flavors. To understand something about those flavors you have to know a little bit about what happens in general when you have phase transitions, and different regions of your medium go through a phase transition with different values of the parameters.
So, it's generally the case that whenever you have some kind of a phase transition, and you have domains of different phase that come into contact with one another, your field will attempt to smoothly match itself across the boundary. But that can be very difficult. So if you imagine these particular two cases that I have here, that's essentially saying that when these two domains come into contact with one another there's going to be sort of a transition zone where the field is attempting to rotate from one value of the Higgs to the other. And it's going to pick some value that is in some sense intermediate to those two things. So, let's say we continue these up here, so that the collision is occurring right in this place here, in this little locus of events in spacetime. I have Higgs field 2 over here, Higgs field 1 over here, and I've got some crazy intermediate field that goes between the two of them, which is trying to sort of force itself to smoothly transition from one to the other. In so doing, I might end up pushing my field away from the minimum, in which case there will then be some energy trapped in that layer. And there's a reason we do this, at the level of this class, in a bit of a hand-wavy way: it's very, very complicated to get the details right. But the key thing we see is that in doing this match, the field has to do some pretty silly shenanigans in order to make everything kind of match up, and we can be left with odd observable consequences from the energy associated with the Higgs field getting pinned down at that boundary here. Now, the details of the form of this boundary vary a lot depending upon the specific assumptions you make about your underlying grand unified theory. OK, so I should back up for a bit. I'm sort of assuming here, when I discuss all this, that there is some underlying SU(5) theory which describes the strong, weak, and electromagnetic interactions at very, very high temperatures as one gigantic thing. And we're getting to the point now where all the different interactions are beginning to just sort of crystallize out of it. There are a lot of different ways you can pack your underlying, fundamental, what we now think of as our standard model, into SU(5) grand unified theories. And so the ways in which we can get different topological defects depend upon how we choose to do that. So defect flavor one is you get something called a domain wall. When we do this, the field attempts to make itself smoothly match from one region of Higgs field to another, say from Higgs 1 to Higgs 2. It succeeds, but you end up with kind of a two dimensional structure-- a wall-- in which there's some kind of anomalous field that is just pinned down there. And so we end up with a big sheet. So a theory like this would predict that somewhere out in the universe, if there were regions in which the Higgs field had taken on a different value than the one that we encounter around us right now, there could be, somewhere out gigaparsecs away, essentially a giant sheet of some kind. And there would be weird, anomalous behavior associated with it. People have really looked long and hard to try to find things like this, and in fact it would be expected to leave interesting residuals in the cosmic microwave background. My understanding of the literature is that there are actually now very strong bounds on the possibility of having a grand unified theory that leads to domain walls. And so this kind of a topological defect is observationally disfavored.
So this, I should mention, only occurs in some grand unified theories. Basically, as we move on to the other flavors of defects, we end up just going down a step in dimensionality associated with the little kinks that are left over when the different domains come into contact with one another. Flavor two, we would get what's called a cosmic string. Some of you may have heard of this. This is essentially, at its core, just a one dimensional-- it could be gigaparsecs long, but one dimensional, truly one dimensional, essentially just a point in the other two dimensions-- string of mismatched Higgs field, with some kind of an energy density associated with it where the different domains came into contact. AUDIENCE: Do we have any estimate of how close in actual space these different regions would have started? PROFESSOR: We do, and I'm actually going to get to that. So, let me give you two answers to that. One of them is you are going to estimate that, apparently, on PSET 10, according to the notes that Alan left for me. But I'm going to spell out for you the arguments that go into it in the last 10 minutes of the class. But yeah, so let me just quickly finish up this one. So a cosmic string is sort of like a one dimensional analog of a domain wall. And because it would be this sort of long one dimensional structure that actually has a lot of energy sort of pinned down to it, by the fact that it has a Higgs anomaly associated with it, it would be strongly gravitating and so it would leave really interesting signatures. It was thought for a while that cosmic strings might have been the sort of original gravitational anomalies that seeded some of the structures we see in the universe today. Again, it's now pretty highly disfavored. If cosmic strings exist, they don't appear to contribute very much to the budget of mass in our universe. I should also mention that this is only predicted by some grand unified theories. If you guys are curious about this, I suggest that when Alan is back you ask him what the difference is between these-- why some predict a domain wall and some predict cosmic strings. Flavor three is where you end up with the Higgs field essentially being able to smoothly transition without leaving any defect anywhere except at a zero dimensional point. So you end up with just a little knot in the Higgs field. And for reasons that I will outline very soon, it turns out that this little knot must carry magnetic charge, and so it must be a magnetic monopole. The domain walls and the cosmic strings are, as I've emphasized, only predicted by certain specific grand unified theories. Magnetic monopoles are actually predicted by all of them. Question. AUDIENCE: What does it mean to have a one dimensional domain wall, because there's no different region separated by one [INAUDIBLE]. PROFESSOR: That's right. So what ends up happening-- and this is where I think you're going to have to ask Alan to sort of follow up on this a little bit-- is that as the domains come into contact with one another, the fields do their best to smoothly transition from one to the other. And in grand unified theories that predict a cosmic string, they succeed pretty much everywhere. They're able to actually smoothly make it all go away, so you don't end up with field being pinned down anywhere, except in a little one dimensional singularity that lies somewhere along where the two dimensional surfaces originally met. And there are details there that I'm not even pretending to explain.
And as I say, those are only predicted by certain kinds of grand unified theories. All of them will then predict that even if you don't have that, the cosmic string will then shrink itself down and you'll just be left with a little knot of Higgs field, where there's a little bit of residual mismatch between the two regions. AUDIENCE: Do all three types of defects carry a magnetic charge, or only the knots? PROFESSOR: I think only the knots. They do carry other kinds of fields, though. In particular the other ones gravitate-- in fact all of them gravitate-- and so that's one of the ways in which people have tried to set observational limits on these things. In particular, there has recently been a fair amount of work by people trying to set limits on cosmic strings from gravitational lensing, and there was really a lot of excitement because people thought they had discovered one a couple of years ago. They saw basically two quasars that looked absolutely identical, that were separated on a scale that was just right to be a cosmic string. And then people actually looked at them with better telescopes, and saw they had absolutely nothing to do with one another. They were not cosmic strings, they were not lenses-- it's just that every now and then God is screwing with you. OK, so without going into some of the details, what you have is these little point like defects-- and I'm short on time, so I'm going to go through this in a somewhat sketchy way, enough that I can pave the way for some calculations you're going to need to do. So the point like defects end up being regions where, at that point, the Higgs field actually takes the value zero. So remember I was describing how you have two regions where the Higgs fields are both taking on values such that they're at the minimum of the Higgs potential energy, and they come in to match one another. What we have is a boundary condition that very far away the Higgs field has values such that the energy is minimized. And there is a theorem, which in his notes Alan-- the way he describes it is he gives you a figure and outlines the various things that are necessary for the theorem to be true, and invites you to think deeply for a moment until insight comes to you, I guess. And when you put in this ingredient that the Higgs field has this asymptotic, far-away value that drives you to the minimum, and yet it must change value somewhere in the middle, the theorem requires that there be one point at which H equals 0. And apparently, this is a consequence in all grand unified theories. So, recall, H equals 0 is a point at which the potential energy density can be huge. So, when you have a little point like defect like this, it looks like a massive nugget, a little massive particle. You can in fact calculate the total amount of energy associated with this particle. If you do so just including the influence of the Higgs field, the calculation basically goes like this. It's very similar to the way we calculate the energy associated with electric and magnetic fields in electrodynamics. Ask yourself, how much energy is contained in a sphere of radius, capital R, centered on this little knot of Higgs field? Well, it's going to look like 4 pi times an integral of the gradient of the Higgs field squared, times r squared dr. It turns out, when you calculate the [INAUDIBLE] of the Higgs field around one of these little defects, it's actually very complicated close to the defect, but as you get far away it has a very simple form.
The gradient goes as 1 over r, which tells you the field itself actually goes something like a log. That means your energy looks something like the integral of r squared times 1 over r squared dr, which goes as R, which diverges as you make the sphere bigger and bigger and bigger. So, what's the mistake we made? Well, the Higgs field doesn't just sit there and interact only with itself. The Higgs field actually couples pretty strongly to all of our vector bosons. In particular, it couples pretty strongly to electric and magnetic fields. So, we have to repeat this calculation including the interaction of the Higgs field with the E&M field. And in Alan's notes he gives you some references on this, because this is not the kind of calculation you can really sketch out very easily in an undergraduate class. To make this integral convergent, the only way it can be done is if that little nugget of Higgs field is endowed with magnetic charge. You need to have a monopolar magnetic field that ends up putting in interaction terms that make the divergence of this integral go away. So, I at last get to the punchline of all this: if we accept the whole foundation story of particle physics, that the different interactions were unified at some high energy scale and then froze out, we are driven inevitably to the story that defects in the Higgs field create magnetic monopoles. Now, I realize I'm out of time, so let me just quickly sketch a few interesting facts about this, and there are a few exercises that you guys are apparently going to look at in your homework assignment. When we do this calculation, which is I believe just referenced in the notes that Alan has for the class, we learn a couple of things about this magnetic charge. One of them is that if you work in fundamental units, say CGS units, the value of the magnetic charge, which we'll call g, is exactly 1 over 2 alpha, where alpha is the fine structure constant, times the electric charge. So if you have two magnetic monopoles, they attract each other with a force that is-- 1 over 2 alpha is approximately 68.5, I think-- and so it would be 68.5 squared times the force between two electric charges at that same distance. We also end up learning the mass. It turns out to be 1 over alpha times the scale of GUT symmetry breaking. Anyone recall what the scale of GUT symmetry breaking is? 10 to the 16 GeV. So, 1 over alpha is approximately 10 to the 2, so this is a particle that has a mass of about 10 to the 18th GeV-- in other words, it's a single particle with the mass of 10 to the 18th protons. This is approximately one microgram. If you put one of these things on a scale, the scale could actually measure it-- that's bloody big. So, getting to the last bit of the class, for which I am just going to very basically quote the answer: the question becomes how often do these things get created, and here I'm going to refer to Alan's notes. What you'll find is that-- remember when we sketched our original picture of this thing, we looked at regions of the universe where the Higgs field was initially seeded with different values. In order for the Higgs field to take on different values initially, these regions had to be out of causal contact with one another. So we are going to require that the initial seed areas be separated by a distance, which is the correlation length, which has to be less than or of order the horizon distance. You can get a lower bound on this thing by imagining that it's-- sorry, let me say one other thing.
If you do that, then you can estimate that the number density associated with these things, the number density of these monopoles, will be 1 over the correlation length cubed. To get a lower bound on the number density of these things, set the correlation length exactly equal to the horizon distance, and then do the following exercise. So first, set the correlation length equal to the horizon distance. Then set the mass density in monopoles equal to the mass of a monopole divided by the horizon distance cubed, and normalize that to the critical density. If you do this, you will find that, just due to magnetic monopoles alone, the density of the universe-- PROFESSOR 2: Excuse me, professor. PROFESSOR: Yes, I'm wrapping up right this second. PROFESSOR 2: It's seven minutes. You were supposed to end at 10:55. PROFESSOR: I'm substitute teaching, I'm sorry. OK, so this tells us that we are at 10 to the 20 times the critical density. And a consequence is that the universe would be approximately two years old. I will let Alan pick it up from there.
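As a numerical footnote to the monopole discussion (my own back-of-the-envelope sketch in Python, not part of the lecture), here is a check of the quoted charge and mass, plus the schematic shape of the abundance bound the professor was setting up when time ran out. The correlation length and critical density in the last function are left as inputs; the actual values are what you work out on the problem set.

    # Back-of-the-envelope checks of the monopole numbers quoted above (my own sketch).
    alpha = 1 / 137.036            # fine structure constant
    GeV_in_kg = 1.783e-27          # 1 GeV/c^2 expressed in kilograms

    g_over_e = 1 / (2 * alpha)     # magnetic charge in units of the electric charge
    force_ratio = g_over_e ** 2    # monopole-monopole vs. charge-charge force at the same separation

    E_GUT_GeV = 1e16                          # GUT symmetry-breaking scale, ~10^16 GeV
    monopole_mass_GeV = E_GUT_GeV / alpha     # ~10^18 GeV, i.e. ~10^18 proton masses
    monopole_mass_kg = monopole_mass_GeV * GeV_in_kg

    print(round(g_over_e, 1), round(force_ratio))      # ~68.5 and ~4.7e3
    print(f"{monopole_mass_kg * 1e9:.1f} micrograms")  # order of a microgram

    def omega_monopole(monopole_mass_kg, correlation_length_m, rho_critical_kg_per_m3):
        """Lower bound on the monopole contribution to omega: the number density
        is at least 1 / (correlation length)^3, and the correlation length is at
        most the horizon distance."""
        number_density = 1.0 / correlation_length_m ** 3
        return number_density * monopole_mass_kg / rho_critical_kg_per_m3

    # Usage sketch with placeholder inputs (to be computed on the problem set):
    # omega_monopole(monopole_mass_kg, horizon_distance_at_GUT_era, critical_density_at_GUT_era)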
MIT 8.286 The Early Universe, Fall 2013
15: Blackbody Radiation and the Early History of the Universe, Part I
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: OK, starting again. I want to begin with a quick review of what we said last time. This time will be quicker than usual because we're not really continuing from there, we'll be starting a completely new topic today. But I thought I'd remind you, nonetheless, that we had a last lecture-- the universe was not created between then and now. And what we talked about last time was the spacetime geodesic equation, which is written this way. It's a completely identical to the equation we derived for geodesics in pure spatial situations. The only difference really, is a difference in notation. Instead of using i and j the tradition is to use mu, nu, et cetera for the spacetime indices which are sum from 0 to 3, where x super 0 is identical to t. And in addition, the parameter which is sometimes called s when we're talking about space, is called tau when we're talking about time because what the parameter refers to is the proper time along the trajectory of the object whose geodesic we're calculating. We then introduced the Schwarzschild metric, which we did not derive, but we claimed it describes the metric for any spherically symmetric mass distribution as long as you're talking about the region outside where the masses are located. And here M is the total mass of the object, which is the only thing the metric really depends on, G is Newton's constant, and C is of course the speed of light. The metric has an interesting feature that the coefficients of both the dt squared term and to dr squared term becomes singular either 0 or infinity, depending on which one you're looking at, a particular value of the radius called the Schwarzschild radius, given by this formula 2 GM divided by c squared. The bigger the mass the bigger the Schwarzschild radius, they're proportional to each other. The metric is singular at those points, but I mentioned but did not prove, that that particular singularity is in fact what is referred to as a coordinate singularity. It's a singularity that's there only because of the way the coordinates were chosen. So there are other ways of choosing the coordinates where that singularity disappears. There's also singularity at r equals zero, and that singularity is real, there's no way to remove that singularity by a change of coordinates. However, although r sub s is not a true singularity, it is a horizon. And by that we mean that if any particle or even a light beam gets inside the Schwarzschild radius, it can never get out. There's no geodesic which will take it out of the horizon. And it's not even a matter of geodesics, there are no time-like paths even if you have a rocket which would then not follow geodesic. There's no way to get out from inside a black hole. We didn't show that, but that fact is claimed. Then we calculated the geodesic for our radially falling object. We solve the problem of an object that is released from rest at some initial value r sub 0, and just let fall straight down towards the center of the sphere. And the the equation describing the geodesic is just a special case of the general equation that we had a few slides ago. And we need only look at the radial component if we want to track how the radius changes with time. 
So there was a free index mu in the generic form of the equation; we're setting mu equal to the r variable. And then the equation reduces to this form, and we know what these G sub t t's and G sub r r's are-- they come from the equation for the Schwarzschild metric on the previous slide. And that equation can be manipulated, and eventually it simplifies to something extraordinarily simple. It's just the statement that d squared r d tau squared is equal to minus GM over r squared, which looks exactly like the Newtonian equation for something falling in a spherically symmetric gravitational field. But it's not really the same equation, it just looks like the same equation, because the variables have different meanings. r and tau both have different meanings from the r and t that would have appeared in the Newtonian calculation. The r variable that appears here is not really the distance from the origin. If you wanted to know the distance from the origin you'd have to integrate the metric-- and it's not even a well-defined distance to the origin, because the origin is singular. And the tau that appears here is a time variable, but it's not the time that would be read on any fixed clock; rather it's the time that would be read on the wristwatch of the person falling into the spherically symmetric object, which we might consider to be a black hole. And we were able to solve this equation by using essentially conservation of energy techniques, or at least what would be called conservation of energy if we were doing the Newtonian version of the problem, which is the same equation even though the variables have a different interpretation. So we were able to calculate not r as a function of tau, but at least tau as a function of r. And we got that equation, which is a little complicated, but the interesting thing about it is that it gives finite answers for every value of r, going all the way down to r equals zero. So it means that in a finite amount of time, as seen by the person falling into the black hole, the person would reach r equals zero, at which point he would disappear into the singularity. He'd actually be ripped apart as he approached the singularity, because of tidal forces which pull more strongly on the front part of him than on the back part of him, stretching the object out in the radial direction. However, curiously, if one calculates what this trajectory looks like as a function of the coordinate time, t-- we did actually do that calculation, but we looked at how it would behave in the limit as the particle approached the Schwarzschild horizon. And we discovered it would take an infinite amount of time, as seen from the outside, for the infalling object to reach the horizon, let alone go through the horizon and get to r equals zero. So from the outside, it looks like the object never actually falls into the black hole, but just gets closer and closer and closer as t approaches infinity. So it's an example of a very highly distorted spacetime, where you can see very different pictures depending on which observer's observations you're trying to describe. And I think that's it. Any questions about any of that? I guess I'll put that back up. OK. On your homework you'll be applying this geodesic equation to a model universe, to the Robertson-Walker universe, and this will serve only as an example for those calculations. I guess there's also a homework problem about the Schwarzschild metric, about orbits in the Schwarzschild metric, that you'll be working out.
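As a small worked example of the formulas being reviewed (my own illustration; the solar mass and the starting radius are arbitrary choices, not values from the lecture), one can compute the Schwarzschild radius 2GM/c^2 and the finite proper time for the radial fall governed by d^2 r / d tau^2 = -GM/r^2. For release from rest at r0, the same conservation-of-energy trick mentioned above gives tau_fall = (pi/2) sqrt(r0^3 / 2GM):

    import math

    G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
    c = 2.998e8        # speed of light, m/s
    M = 1.989e30       # one solar mass in kg (an illustrative choice)

    # Schwarzschild radius R_s = 2GM/c^2
    R_s = 2 * G * M / c ** 2
    print(f"R_s = {R_s / 1000:.2f} km")          # about 2.95 km for a solar mass

    # Proper time to fall from rest at r0 all the way to r = 0,
    # following d^2 r / d tau^2 = -GM / r^2: a finite answer, as stated above.
    r0 = 10 * R_s                                 # arbitrary starting radius
    tau_fall = (math.pi / 2) * math.sqrt(r0 ** 3 / (2 * G * M))
    print(f"tau_fall = {tau_fall * 1000:.2f} ms") # roughly half a millisecond here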
It's all in principal straightforward if you just look at equations and follow what the equations tell you, thinking carefully about what the variables mean. OK. In that case, let's get started on today's work, which is a change of gear. We're now going to be talking about black body radiation and its effect on the universe. I should say that my original plan was to get into this set of lectures notes-- lecture notes six, which have not been handed out yet-- to get into those last time and to finish them today. I don't think that's going to be possible because I didn't get into them last time, and I don't think I'll be able to finish them today. But I would like today to be sort of the closing for what's needed for the problem set due Monday and for the quiz next week. So, shortly after today's lecture I will send you an email telling you where the cutoff is as far as the reading and the lecture notes. And I also hope to post the lecture notes by tomorrow. And I also will be posting a set of review problems as we had for quiz one. And I hope to get that done by tomorrow. You may have noticed that not all of my hopes are filled, but I do my best, and I'll try. OK are there any logistic questions or anything before we go on? Yes. AUDIENCE: Can you post the solutions to the previous problem set, the one that we turned in on Friday? PROFESSOR: Oh, um, yes I can, I should. OK I will. OK, that's a third item I should try to get done today. Thanks for reminding me. And the solutions to the problem set that you'll be handing in Monday will be posted very shortly after you hand them in so that people can start talking about them and prepare for the quiz. Yes. AUDIENCE: Do we have any day at which the videos might be up? PROFESSOR: Ah, I've request about that and all I was told was that they're doing their best. So, I did look into it, but I don't know the answer. I hope that you'll have all the videos available to study for the quiz, but I don't know if that's going to happen or not. OK. in that case, the new topic is Blackbody Radiation and the Early History of the Universe. So far we've dealt with a universe which contains only non-relativistic matter and that, as we said from the beginning, describes our universe for the bulk of its history. But in the early period, the universe was in fact dominated by radiation, as we will now be calculating. And in the more recent period, the universe is dominated by dark energy, which we'll be talking about immediately after we finish talking about radiation. So the important point here is that even though we don't think of light as having mass, light certainly has energy and relativistically we know that energy and mass are equivalent. The key equation that actually dominates today's lecture is perhaps the most famous equation physics, E equals MC squared, energy and mass are equivalent. And the numbers-- I'll just give you some numbers for this equation. You've probably already aware that the numbers are kind of out of sight. One kilogram, a point to that equation, is equivalent to-- I don't have any figures to give you-- 8.9876, in case you really want to know it accurately, times 10 to the 16th, most important to know the exponent there, joules. And it's also perhaps interesting to translate this into the kind of energy units that are use when we talk about power consumption in practical situations. It corresponds to 2.497 times 10 to the 10th kilowatt hours, which is a lot of kilowatt hours. 
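A quick numerical check of those two figures (my own sketch, just re-deriving the numbers quoted above in Python):

    c = 2.998e8                         # speed of light, m/s
    energy_joules = 1.0 * c ** 2        # E = m c^2 for one kilogram
    energy_kWh = energy_joules / 3.6e6  # 1 kilowatt hour = 3.6e6 joules

    print(f"{energy_joules:.4e} J")     # ~8.988e16 J, matching the 8.9876e16 quoted
    print(f"{energy_kWh:.3e} kWh")      # ~2.497e10 kWh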
An interesting comparison is the total power consumption of the world, which turns out to be comparable to a kilogram's worth of energy. I looked things up in the Wikipedia, and it told me that in 2008-- which is the most recent year it had numbers for, which is a little surprising, that's so far in the past-- the total world power consumption for the year equaled about, I'm rounding off here, about 150 petawatt hours. Now, I always have to look up peta when I see it, I haven't quite learned what peta means yet. But they translated this: this means 150 trillion kilowatt hours, so 150 times 10 to the 12th kilowatt hours. And if we divide this by the number of hours in the year-- to ask how much is used per hour, which is I think a natural thing to think about if you're using kilowatt hours to measure the energy-- the power that goes with this is about 17 times 10 to the ninth kilowatts, and therefore 17 times 10 to the 9th kilowatt hours per hour. And if you compare these two numbers, it means that if you could convert one kilogram per hour into pure energy, that would be about equal to 1.5 times the world's power usage in 2008. So if you could convert matter completely to energy, as they do on Star Trek, it would mean that you could fill up the tank of your typical American car and power the world for about two days. But of course we can't actually do this-- that's the important fact concerning power. Only a small fraction of the mass of the uranium in a nuclear reactor is actually converted to energy, a fraction of a percent. So you don't get nearly as much power as this calculation would indicate, but in principle this much energy is contained in the matter that we have around us all the time. OK. I want to introduce a few formulas which we'll be using sooner or later, concerning the relativistic treatment of momentum and energy, which is what we're getting into here. And we're not trying to derive relativity in this course, so I'm just trying to quote the results that we'll be needing. So, it is useful to introduce an energy momentum four-vector, which has a 0-th component and an i-th component, where i refers to the spatial indices 1, 2, and 3. And sometimes I might write this as p zero, p with a vector sign over it, where the vector sign indicates the three components 1, 2, 3. The momentum here is the momentum. It differs in its relationship to velocity from what we have from Newton, and I'll write that in a minute, but this momentum is the conserved physical momentum of an object. And p0 is also conserved. p0 is just an abbreviation for the energy divided by C. And this quantity forms a four dimensional vector in special relativity. And when we say that it's a four-vector, we're actually making a definite statement about how it transforms from one Lorentz frame to another. A four-vector is something which transforms in exactly the same way as x super mu, the four space and time coordinates, transforms. And in particular, we learned that there was an invariant associated with spacetime transformations, this s squared, which was x squared plus y squared plus z squared minus C squared t squared. And the same thing will happen here. p squared, which means the Lorentz invariant square of the four-vector-- it's the Lorentz [INAUDIBLE] again here-- is again equal to the sum of the squares of the 3 spatial components minus the square of the time component. And that can be written out as the square of the momentum-- the spatial momentum-- minus E squared over C squared.
And the claim is that this is also Lorentz invariant. And we could figure out what Lorentz invariant quantity it's equal to, by having knowledge that this is the same in all frames, if we evaluate it in the simplest frame. And the simplest frame would be the rest frame of the object. In the rest frame of the object the p would be zero, and this would then just be minus E squared over C squared. E squared would be M squared C to the fourth, so that implies that this is equal to minus M0 C squared squared, where M0 is often called the rest mass. And when I say it's often called the rest mass, what I mean is that nobody ever mistakes the phrase "rest mass" to mean anything else-- if anybody says "rest mass," and he knows what he's talking about, he means this. But this is sometimes just called the mass, because some people only talk about masses as being rest masses. But I'll try to call this the rest mass, because I will use the word mass in other ways. We could also relate this momentum to the velocity, and in doing that we will again encounter this factor of gamma that we found kinematically earlier. AUDIENCE: Yes, question. Did you forget to divide by C squared there? PROFESSOR: Um, yes. There are too many C's here. Absolutely. It should have units of momentum squared, so it should have units of mass times velocity, squared. Thank you. Thank you. So, the quantity gamma is the same quantity we encountered at the beginning of the course when we talked about time dilation and Lorentz contraction; it just depends on the velocity, and approaches infinity as the velocity approaches the speed of light. And the physical momentum of a particle, relativistically, is equal to gamma times M sub 0 times V, where V is the ordinary velocity-- this is special relativity, we're not concerned with coordinate velocity versus physical velocity yet-- and gamma is that factor. So the momentum is larger than what you would get a la Newton. The energy can also be written down. And this formula is one expression we can use to find the energy in terms of the momentum. The energy in terms of the momentum is the square root of M0 C squared, squared, plus p squared C squared. And it also can be written in terms of the velocity as just gamma times M0 C squared, where the velocity appears in the gamma. And a special case of this is when the particle is at rest. Might as well write this: the energy of a particle at rest, which we might call E0, is just M0 C squared, which gets us back to where we started, with E equals MC squared. Now, I might just say a quick word about where these formulas come from, what idea underlies them. I'm not going to make any attempt to derive them, because we just don't have time. It'd be easy to derive them, but we want to do other things. But logically, where they come from is simply the observation, by Einstein originally, that if one has the Lorentz transformations relating what one inertial observer sees to another inertial observer, and if one used those transformations but used the Newtonian definitions of energy and momentum, then you would find immediately that if energy and momentum were conserved in one frame-- you then know how to calculate what happens in other frames by using the transformations-- you'd find they would not be conserved in other frames. So if the conservation of energy and momentum are to be a universal principle of physics, which Einstein wanted to maintain, it would be necessary to redefine energy and momentum.
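As a quick consistency check of the formulas just quoted, p = gamma m0 v, E = gamma m0 c^2, and E = sqrt((m0 c^2)^2 + (pc)^2), here is a small Python sketch of my own verifying that the two expressions for E agree and that p^2 - E^2/c^2 reduces to the rest-frame value, minus (m0 c)^2, at any speed:

    import math

    c = 2.998e8        # m/s
    m0 = 9.109e-31     # rest mass in kg (an electron, purely as an illustration)
    v = 0.8 * c        # an arbitrary speed

    gamma = 1 / math.sqrt(1 - (v / c) ** 2)
    p = gamma * m0 * v                                    # relativistic momentum
    E_from_velocity = gamma * m0 * c ** 2                 # E = gamma m0 c^2
    E_from_momentum = math.sqrt((m0 * c ** 2) ** 2 + (p * c) ** 2)

    # The two expressions for the energy agree...
    print(math.isclose(E_from_velocity, E_from_momentum))            # True
    # ...and the invariant p^2 - E^2/c^2 is the same as in the rest frame.
    invariant = p ** 2 - E_from_velocity ** 2 / c ** 2
    print(math.isclose(invariant, -(m0 * c) ** 2, rel_tol=1e-9))     # True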
Now they're defined in ways so that they approach the Newtonian values for small velocities, but for velocities of the order of the speed of light they're different. And they have the property-- not completely obvious from what we wrote, well, it is actually completely obvious from what we wrote-- that if it's conserved in one frame, it's conserved in all frames. And what makes it obvious, maybe the connections between these different formulas are not obvious, but I did tell you that p super mu transforms as a four-vector, meaning it transforms the same way as x super mu transforms, and these are linear transformations. And that's enough to guarantee that if p mu is conserved in one frame, it has to be conserved in all frames, because delta p mu, the change in p mu, would also be a four-vector, and if a four-vector vanishes in one frame, it vanishes in all frames. OK. Well, I wanted to give you sort of a quick example of how this works in practice. So I want to just talk about the energetics of a hydrogen atom. A hydrogen atom consists of a proton, with a mass that we'll call M sub p, and an electron, with a mass that we'll call M sub e. If you imagine starting with the electron and proton arbitrarily far apart and bringing them together, what you can discover experimentally-- or, if you know quantum mechanics, you can calculate theoretically-- is that energy is released, because you're releasing potential energy as you bring the electron into the atom. And the amount of energy released is 13.6 electron volts. And the important E equals MC squared implication which I want to point out here is the loss of energy: this energy would be extracted from the system as you made the hydrogen atom. The fact that you've extracted energy from the system means that now the system should have less energy than it had to start with. Initially it had the rest energy of the proton and the rest energy of the electron, M sub p C squared and M sub e C squared. Now it has less energy by delta E, and that means it also has to have less mass. So the mass of a hydrogen atom is not the sum of the mass of the proton and electron, as it would be in Newtonian mechanics, but is less by an amount proportional to this energy given off, delta E, 13.6 electron volts. And just putting the C squareds in, I hope, the right places, the mass of a hydrogen atom will be equal to the mass of a proton plus the mass of an electron, but then minus the binding energy expressed in mass units, delta E over C squared. OK. So I guess there's just probably one more topic I want to talk about in terms of basic special relativity, and this actually gets into general relativity. I want to define the relativistic mass of any system as just being its energy divided by C squared. And this means that the relativistic mass of a particle increases with its velocity. The energy of a single moving particle is gamma times M0 C squared. That would say that by this definition the relativistic mass of that particle is gamma times M0. And I might mention that this concept of relativistic mass is disparaged in many books on special relativity. It's certainly a concept that you can do without, so people who are emotionally bothered by it can get along without it, because it is just the energy divided by C squared. And in fact a lot of work in special relativity is done in units where C is equal to 1, and then it is just the energy. Especially if you use C equals 1, you could dispense with this concept of relativistic mass.
We're not going to be using C equals 1, so the phrase relativistic mass will allow us to abbreviate E divided by C squared in a convenient way. But the important thing is not the definitions; the important thing is what properties this relativistic mass has, whether or not one chooses to call it relativistic mass or E divided by C squared. And it has an important property concerning the gravitational field created by matter. Now, the gravitational field of a single moving object is complicated. If we were talking about, say, a star that was moving at a velocity large enough so we care about relativity, the way we would actually calculate that is we start with the Schwarzschild metric, which describes the metric of the star outside of the matter if it were stationary. And then you can just make a transformation to a moving frame. You're allowed to use any coordinates you want in general relativity, so transforming to coordinates that describe the moving frame is no problem. But it distorts the field in a complicated way, nonetheless, a way that you can deal with. And what you find, of course, is that what you get would be asymmetric once you transform to the moving frame; it would show the signs of the velocity that you used to transform from the original spherically symmetric Schwarzschild metric to the new frame. So the bottom line is that the gravitational field of a moving object is not isotropic, it's more complicated than that, just as the electric field would be. But if we have a gas of particles, which is pretty much what we have in the early universe-- if we have a gas of particles in a box moving every which way-- then if we thought of this box as being an object that we're only going to look at from the outside, a black box in the classic use of the phrase black box, the mass of the black box really would just be the sum of the relativistic masses of the particles. And the anisotropy of the metric that any one particle would generate would be canceled by averaging, or summing, over all the particles going every which direction, because on average the velocity of the particles inside the box is 0. So this relativistic mass, when you're talking about a gas, really is the mass per particle. And if you divide that by the volume, you really do get the mass density, which is the relevant mass density in terms of talking about how this matter would generate gravitational fields. Yes. AUDIENCE: So why do some people have emotional problems with it? PROFESSOR: I think some people have emotional problems with it because, when one thinks about pedagogy in a course, for example, one worries about people confusing it with the rest mass. I think that's the reason. And I guess there are other possible sources of confusion, so your question is a good one. Another source of confusion is that this mass does not fit into an F equals ma equation. So in calling this the mass, you might suggest to students who aren't paying attention to every word that you say that they could go ahead and put F equals ma for this mass, and that does not work. So it has some of the properties of a mass, but by no means all of them. But in particular for us it's important because we're going to be interested in the gravitational field of a gas, and then it really is the mass that determines that. In the more formal language of general relativity, it's the energy density that appears in the equations that produce gravitational fields, and this really is the energy density except for a factor of C squared.
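Before leaving the formalism, here is a small numeric sketch of my own of the hydrogen-atom example above: converting the 13.6 eV binding energy to a mass via E = mc^2 shows just how small the resulting mass deficit is.

    c = 2.998e8          # m/s
    eV = 1.602e-19       # joules per electron volt
    m_p = 1.6726e-27     # proton mass, kg
    m_e = 9.109e-31      # electron mass, kg

    delta_E = 13.6 * eV              # binding energy released when the atom forms
    delta_m = delta_E / c ** 2       # corresponding mass deficit
    m_H = m_p + m_e - delta_m        # mass of the hydrogen atom

    print(f"mass deficit: {delta_m:.2e} kg")
    print(f"fractional deficit: {delta_m / (m_p + m_e):.1e}")   # ~1.5e-8 of the total mass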
OK, any questions? Because now I'm going to leave this formalism and get into what role this radiation could play in the early universe. So now I'd like to talk about radiation in particular, and for now I mean electromagnetic radiation just ordinary photons. And we're not accustomed to thinking of light as having mass, but we know light has energy and energy is related to mass by a factor of C squared. So we can write down the formula that says that rho is equal to u divided by C squared. Where u is the energy density, which we know electromagnetic fields have. And rho will be the mass density of radiation. Now, photons have zero rest mass, so if we apply for example this formula for a photon we would set M sub 0 equal to 0. And we said that photons have zero rest mass what we mean is that there's no lower limit to the energy a photon can have. In general, the rest mass determines the lowest possible energy a particle could have, which is when it's at rest. Photon can never be rest and there's no lower limit to what its energy can be. So for photons, M0 is equal to 0 and that implies that the energy of the photon is just C times the magnitude of its momentum. And this formula one can derive just for electromagnetic waves is purely classical EM-- you don't have to be talking about photons, but since it's true for a classical electromagnetic wave it had better also be true for photons because we think of this classical electromagnetic waves as really being made out of photons. So the energy that exists in the universe in the form of electromagnetic radiation will have an energy density, which we know how to calculate. And we know to calculate the momentum of any given photon. Now that will average to zero, but nonetheless if we imagine talking about a box of photons, which are bouncing off the walls, the fact that each photon carries momentum means that there will be a pressure on the walls. And we will be interested in that pressure, we'll be calculating a formula for it in a minute. So, what we now want to talk about is what happens when we put a photon gas into the universe and allow the universe to expand. What happens to the radiation energy density, or equivalently mass density, as the universe expands? This turns out to be a very easy question to answer if we think of the energy as being made out of photons. We would get an equivalent identical answer if we use classical Maxwell's equations and talked about how energy density is a [INAUDIBLE] of electromagnetic fields behaved. It would be more work actually do it that way, but we would get the same answer. In terms of photos, we could simply notice that the number density of photons-- photons are not going to disappear as the universe expands, the number density will just keep the same number of photons, but as the universe expands those photons will occupy a larger volume. So it's exactly the same as what we said about non-relativistic matter, the number density of photons will fall like 1 over the cube of the scale factor, which just says that photons are conserved. And the volume of any region grows like a cubed. I should also mention that I'm using gamma here. Gamma of course also sometimes meant 1 over the square-root of 1 minus V squared over C squared, but besides that use, gamma is also just a label that means photons. It comes from the idea of gamma rays, but it's actually used in this context for any kind of a photon no matter what its frequency is, whether it's a gamma ray or an x-ray or visible light, or infrared. Yes. 
AUDIENCE: Is this assumed for time average? Because photons can be absorbed, right? I mean, at least like in a small [? slice ?] can't there be a decided non-relativistic matter? PROFESSOR: Is this true on the--? Well, the validity of this formula-- you're right this formula's not exact, photons can be absorbed. But in terms of what happens as the universe evolves, that's a very, very, very minor process, especially when we're talking about the early universe when there isn't really anything around to absorb them. So, especially for the early universe and even pretty well today, if we're talking about the cosmic background radiation, which is the bulk of the photons, this formula's a very good approximation. Yes. AUDIENCE: Even though the photons in the early universe created a lot of massive particles, [INAUDIBLE] didn't that affect the expansion? PROFESSOR: Are you're asking, will the photons be important if there's a lot of mass of particles? Is that your question? AUDIENCE: Would the photons, won't they decay into matter antimatter? PROFESSOR: Will the photon decay into matter antimatter? No, not really. It is, in principle, possible for two photons to collide and produce an electron positron pair. It's actually a rather small cross section for that. And all of these processes in the early universe will rapidly reach an equilibrium, which we'll be talking about more a little later, where there'll be just as many photons converting into E plus E minus pairs as there will be E plus E minus pairs colliding and making photons. So the early universe is assumed to reach equilibrium very quickly, and all the description we'll be giving will be a description of the universe that's in thermal equilibrium with these processes will tend to iron out. We will learn that they don't always cancel each other because the universe is cooling, and that means it can't be exactly in thermal equilibrium. And there are some cases where the effect of that cooling is significant and we'll be talking about those. But for the most part, if the cooling is slow, which it is for the most part compared to other processes, everything stays in thermal equilibrium. OK. Those are good questions. We've gotten a little ahead of what I wanted to talk about. So for now I'm just imagining a free photon gas, which is an excellent approximation for the early universe. And those photons just continue to exist as the universe expands, so their number density falls off as 1 over a cubed. But there's another affect that goes on which is that the photons are redshifting. And we already know about that, but now we're going to take it into account in terms of the energy balances. So we know that the frequency of a photon at some time t2 divide by its frequency at some time t1, and here nu equals frequency, is just diluted by the expansion of the universe. This ratio is 1 over 1 plus z, where z is the redshift between these two times. But written out in more detail which is a formula we'll actually be using, is just a of t1 divided by a of t2. When the scale factor doubles, all the frequency is half. And that means that all the photons are lowering in frequency, and we also know that photons are quantized. The energy of a photon can't be any old thing, but in fact, the energy of a photon is equal to h times nu. Where little h is what's called Planck's constant. And numerically there are various units you could express it in, but it's 4.136 times 10 to the minus 15th electron volt seconds. 
So if you measure a frequency an inverse seconds, you get an energy in electron volts from that formula. The important thing for now though is that this says that the energy of each photon-- being proportional to its frequency, and the frequency being proportional to 1 over the scale factor-- the energy of each photons is proportional to 1 over a of t. And then the total energy density of photons in photon on gas which I'll call u sub gamma, the energy density of the gas, can be thought of as the number density of photons times the energy of each photon. And the number density is falling off like 1 over a cubed, the energy is falling off like 1 over a. And therefore, this is proportional to 1 over a to the fourth. So as the universe expands, the density of non-relativistic matter falls off like 1 over a cubed-- as we've been talking about some time now-- but the energy of radiation falls off faster, like 1 over a to the fourth because of the red shifting of the photons. OK, now once we know this, we can ask ourselves what happens if we look at our universe going backwards, knowing where we are now where we come from? And if the energy density of photons is falling off faster than the energy density of matter, it would mean that the ratio is getting smaller as we go forward in time. But that of course implies that the ratio gets larger as we go backwards in time. So as we go backwards in time, the radiation becomes more and more important, and there actually is going to be a time when the radiation will equal the matter and at earlier times the radiation will dominate. Today I'll just give you a number for now, we'll learn later how to calculate it, but for today the total radiation energy density in the universe is equal to 7.01 times 10 to the minus 14 joules per meter cubed. And this actually includes two kinds of radiation, it includes photons and also neutrinos, which at least in the early universe behaved just like radiation. And we'll be talking more about neutrinos later so don't worry if you don't have any idea what a neutrino is. But for now it's just another contribution to the radiation, and we can measure basically this is all based on measuring the temperature of the cosmic microwave background radiation-- and we'll learn later how to make that conversion. But once you measure the temperature of the cosmic background radiation and have a theory about how many neutrinos there should be, that's actually all theoretical and we'll talk about that later as well. One can determine what the energy density of that radiation is. And it corresponds to a mass density just dividing by C squared of 7.80 times 10 to the minus 31 kilograms per meter cubed. And when I think of mass densities I always like to think of in centimeters per-- excuse me, grams per centimeter cubed because I'm used to the density of water being one gram per centimeter cubed and I like to be able to make that comparison. So just making that conversion, usually I use SI units, but some things just seem to make more sense in other units. So it's 10 to the minus 34 grams per centimeter cubed, so 10 to the minus 34, or maybe 10 to the minus 33, times the density of water. And this is incredibly low even compared to the critical density of our universe, and the actual density we know is very near this critical density. 
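A quick check of my own on the conversion just quoted, dividing the radiation energy density by c^2 to get a mass density and expressing it in grams per centimeter cubed:

    c = 2.998e8                  # m/s
    u_radiation = 7.01e-14       # J/m^3, the photon-plus-neutrino value quoted above

    rho_radiation = u_radiation / c ** 2          # kg/m^3
    rho_radiation_cgs = rho_radiation * 1e-3      # g/cm^3, since 1 kg/m^3 = 1e-3 g/cm^3

    print(f"{rho_radiation:.2e} kg/m^3")          # ~7.80e-31, as quoted
    print(f"{rho_radiation_cgs:.1e} g/cm^3")      # ~7.8e-34, i.e. ~1e-33 times the density of water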
Let me remind you that the critical density is something we derived a formula for, and when we put numbers into that formula we found it was equal to 1.88 times little h0 squared, where little h0 is Hubble's constant in units of 100 kilometers per second per megaparsec-- I'll write that in a second-- times 10 to the minus 29 grams per centimeter cubed. So just writing down the equation for little h sub 0: capital H sub 0, Hubble's expansion rate, is equal to 100 times little h sub 0 kilometers per second per megaparsec. Yep. AUDIENCE: What's the, I guess, what's the motivation of normalizing the Hubble constant in this way? PROFESSOR: Well, I think the real motivation is that the astronomers like these peculiar units of kilometers per second per megaparsec, but if your favorite unit is called a kilometer per second per megaparsec, you don't want to have to say those units very often. So little h0 is dimensionless; it's a dimensionless way of talking about the Hubble constant. But that's the only importance, there's no real-- no deep significance to it. But it's a standard notation, so it's worth knowing. And then finally, we can write down how much the radiation contributes to omega-- omega sub r, where r is going to indicate radiation. The notation, by the way, will be-- and I realize I've already violated that notation; this really should have been r. Well, I'm not going to change it, it'll get too messy. But I'm going to start using the notation where gamma indicates photons and little r indicates radiation. And the difference is that there are other kinds of radiation besides photons; in particular, we've already added in neutrinos as part of what we're calling radiation. So omega sub r, which now includes photons and neutrinos, is just defined to be the mass density of radiation divided by the critical density. And that turns out, when you combine these numbers, to be 4.15 times 10 to the minus 5 little h0 to the minus 2. And then for h0 equals 0.67, which is the Planck satellite value for little h0, we finally get omega sub r is equal to 9.2 times 10 to the minus 5. So roughly 10 to the minus 4: the fraction of the mass density, or energy density, of the universe today that is in radiation is about 10 to the minus 4. And actually, as I write this, it calls to mind another reason for defining little h sub 0, which is that if you write formulas in terms of little h sub 0, they remain valid from one year to the next. The observational value of the Hubble parameter is still floating around and differs, for example, every time I teach this course. So the formulas in terms of little h sub 0 stay, and then you plug in the current value of little h sub 0 to get the best value that one can currently write down. Now we know the Hubble constant to within a few percent, which is much better than it used to be, but it's still floating. The Planck value was somewhat lower than the previously accepted value, which was about 0.70. OK. So we know enough information to extrapolate backwards and calculate when this radiation would have equaled the energy density of matter, because we know how the ratio changes. It changes by a factor of a, the scale factor, because the energy density of non-relativistic matter is falling off like 1 over a cubed, and the energy density of radiation is falling off like 1 over a to the fourth. So the ratio between them just changes by a factor of a, decreasing as a gets larger. So we can write rho radiation of t divided by rho matter of t-- the m there means non-relativistic matter.
This is just equal to the current value, and the current value is gotten by taking that number and-- what I forgot to write down is that omega matter today is about 0.30. So the ratio of the two, omega radiation over omega matter, which is the same as rho radiation over rho matter, is about 3.1 times 10 to the minus 4. And that number is about to appear in this equation. If we want the ratio as a function of time, we start with this value today, 3.1 times 10 to the minus 4, and then we can just multiply that by the scale factor today divided by the scale factor at the time that we want to know it, because we know it falls off as 1 over the scale factor. And by putting an a of t0 here, this just guarantees that if we let t equal t0, we get this number, which is the right ratio for today. OK. Now it's just a matter of arithmetic. We also know how the scale factor behaves for a matter-dominated universe. And for now, we're going to estimate when the radiation energy density equaled the matter density. This will only be an estimate. We're going to estimate it by assuming that we can approximate the universe as matter dominated from now all the way back until that time. There are two errors in that calculation. We're not taking into account here the era of acceleration, where dark energy is playing a significant role. We'll learn later how to do that. And also, as we approach this time when they're making equal contributions, we will run into a regime where the contribution of the radiation itself will be relevant. So this is only an estimate. But what we're going to do is we're going to assume that we can treat this as a matter-dominated universe, with a of t proportional to t to the 2/3. And then we can plug numbers into this formula and ask, when was the ratio 1? And when the ratio was 1, we call that the time of equality, using the subscript EQ for equality to indicate anything having to do with that crossing point. And what we find is that the z of equality, and z is just the ratio of the a's, is-- according to this calculation, it would be 1 over 3.1 times 10 to the minus 4, minus 1. I fibbed when I said that z is the ratio of the a's. It's offset by a little bit; it's 1 plus z that's the ratio of the a's, and that's why there's a minus 1 there. And numerically, this is about equal to 3,200. So if we look back in the history of the universe, we can define looking back in terms of the redshift. If you look back to a redshift of 3,200, we get to the time when matter and radiation had the same energy density. And we can know what time that is if we assume a goes as t to the 2/3, which again is only a crude approximation. We don't necessarily expect to get the right answer here, but we expect to get the right order of magnitude. So t-equality, according to this estimate, would be about 75,000 years after the Big Bang-- just converting the scale factor ratio that we just calculated to a time, treating a as proportional to t to the 2/3. So this says that about 75,000 years after the Big Bang, the energy densities of matter and radiation were equal. At earlier times, the universe was radiation dominated: the radiation exceeded the matter in its energy density. Yes? AUDIENCE: It seems like zEQ is relatively about 31 [INAUDIBLE]. PROFESSOR: You might be right if we just look at these formulas. When I calculated this at home, I kept more decimal places all the way and rounded off each answer to one significant figure.
And that's not the same as taking the answer to one significant figure and calculating and then rounding off to ones in every figure. So I think there's always an ambiguity of, roughly speaking, 1 in the last decimal place whenever you're rounding numbers off. AUDIENCE: So we just plug that [INAUDIBLE] PROFESSOR: 3,220-- starting with this or starting with a more accurate number than the 3.1. AUDIENCE: Just 1/3.1. PROFESSOR: 1/3.1. OK. So, OK? AUDIENCE: Sorry. AUDIENCE: Wait no, [INAUDIBLE] like we have 1 over 3 times 10 negative 4. That's 3 times 10-- PROFESSOR: It's 3.1. AUDIENCE: You have 1 over 3.1. PROFESSOR: 1 over 3.1. So you have to divide. AUDIENCE:I do that all the time. AUDIENCE: [INAUDIBLE] PROFESSOR: So apparently it's even right if you just calculate with that. But there is actually some ambiguity. The numbers I'm giving you probably have some uncertainty of 1 in the last digit, depending on how you calculate. But the number they give you, I think they're the ones that you get if you start using this and the 0.67 and from then on do everything to large numbers in decimal places and round off at each stage. You'll get the numbers I've given you. In any case, all these are really, at best, order of magnitude estimates. So worrying about whether or not the last figure is accurate is not a big deal. Yes? AUDIENCE: I don't really understand how you got t [INAUDIBLE] without telling was a of t dot is [INAUDIBLE]. PROFESSOR: I'm sorry. There is actually a piece of information I used I forgot to write here. You're absolutely right. I used t0 is equal to 13.8 times 10 to the nine years. And then everything can be related to t0 if you know how things are proportional to t. You're absolute right. I did not give you all the information necessary for that calculation. Now I think I have. I haven't done the arithmetic for you. But otherwise it's all there. OK. Now I might mention that in Ryden's book, she does the calculation taking into account everything-- matter, radiation, cosmological constants. And her number for t-equality is 47,000 years, which verifies that we have the right order of magnitude. And actually, the biggest difference between her number and my number is not that she's taken into account these more sophisticated things, but rather that she used a different value for the Hubble expansion rate than I'm using. She's using a value that was current at the time she wrote her book, which was like '72, I think. h0 equals 0.72 instead of [INAUDIBLE] 0.67. And that does make a significant difference here. But either of these numbers are, I think, probably within the range of uncertainty of when it really happened. But it's on that scale, on the scale of 50,000 years, 100,000 years, something of that order. So there was a significantly long period compared to human lifetime when the universe was radiation dominated. But it's a very small fraction of the overall history of the universe, but nonetheless does have important features that happened during that time period. Now, if we want to understand those features, we have to understand how a radiation-dominated universe evolves, which is what we're going to get to next. OK. The next little chapter than I'm going to be talking about-- the dynamics of a radiation-dominated universe. This is a chapter that you more or less get to work out the equations for yourself on one of the homework problems. That's part of this week's set. So I will try here to outline the logic. 
But because all the calculations are in the homework, I will basically skip the calculations themselves, and let you do them for yourselves as part of the homework. But where we start is we have written down Friedman equations for the matter-dominated case. And I'll start by reminding us what those were. And then in addition to these two equations, which describe our expanding universe, which we've derived sometime ago for-- and to remind us here, this is for a matter-dominated universe. And matter-dominated means non-relativistic matter dominated. And going along with these equations, we also know that RHO of t for non-relativistic matter falls off like one over a cubed of t. This can be converted into a differential equation for RHO. That is, we can calculate Rho dot from this equation. And the way to see that is probably most easily to start on a new blackboard and write that equation not as a proportionality, since it's hard to differentiate a proportionality, but we can write it in an arbitrary constant of proportionality. And then it becomes an equality. So I'm going to write the equation as RHO of t is equal to some constant, b, divided by a cubed of t. And this we know how to differentiate. We can write Rho dot is equal to minus b over a to the fourth of t times a dot. And that is equal to minus 3. I'm sorry--there's a 3 here. Minus 3 times a dot over a times the original RHO. So we can forget the intermediate steps, and we just arrived at the equation that RHO dot is equal to minus 3 a dot over a times RHO. And we can think of that as going along with equations one and two. Maybe I'll even give that a number. Equation three will be RHO dot is equal to minus 3 a dot over a times RHO. Now for radiation, there will be a 4 here. The 4 will arrive the same way as the 3 arrives there. It's just the power that appeared in the factor of a. So for radiation, this last formula we know is going to be modified, which is the key point. Note that these three formulas are not independent of each other. If we know, for example, equation one, which is an equation for a dot, we could differentiate that with respect to time and get an equation for a double dot. When we do that, everything has to be differentiated. So it involves differentiating a, but that just expresses things in terms of derivatives of a. But the new quantity that gets introduced is RHO. If we wanted to differentiate this equation with respect to time, we have to know what RHO dot is. But we do. That's what equation three tells us. So we can differentiate equation one, use equation three, and we can derive an equation for a double dot. And if these equations are consistent, it'd better be equation two. And it will be. You can check it. And actually, I think any two of these equations can be used to derive the third. Those equations just are-- really a set of two independent equations and one dependent equation. You can shuffle it any way you want. But, now what we want to do is to consider a different kind of matter. Instead of non-relativistic matter, we're considering photon matter. And in particular, we know that it's going to change equation three. So for radiation, 3 gets modified into 3 prime, which is the equation that says that RHO dot is equal to minus 4 a dot over a times RHO. So how are we going to fix these equations? Now they're inconsistent. If we change three and don't change either one or two, we know that we're inconsistent, because any two of those equations can be used to derive the third. So we're in trouble. 
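As an editorial aside, the differentiation step quoted above, that rho proportional to 1 over a to a power implies rho dot equals minus that power times a dot over a times rho, is quick to confirm symbolically. This is a sketch only, assuming the sympy library is available; it is not part of the lecture.

```python
# Editorial sketch (requires sympy): rho proportional to 1/a^n implies
# rho_dot = -n (a_dot / a) rho; n = 3 is equation (3), n = 4 is the radiation case.
import sympy as sp

t, b = sp.symbols('t b', positive=True)
a = sp.Function('a')(t)

for n in (3, 4):
    rho = b / a**n
    rho_dot = sp.diff(rho, t)
    claim = -n * sp.diff(a, t) / a * rho
    print(n, sp.simplify(rho_dot - claim))   # prints 0 for both n = 3 and n = 4
```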
Either equations one or two will also have to be modified if we're going to modify equation three. OK. Before we go on, I'd like to say a little more about why this equation is different from that equation. One might think that it should just be governed by the conservation of energy. After all, we just write down an equation for RHO dot-- how energy density changes with time. Shouldn't conservation of energy determine that? It does. But there is an extra element to conservation of energy that we need to take into account, and that is the pressure of the gas affects what happens to its energy as it expands. So before we get back to the early universe, I just want to consider a gas in a piston chamber. And I'm going to let the piston have an area a, and inside we're going to have a volume v. Just to define our notation. If we have a gas inside a piston chamber and let the piston chamber enlarge by pulling out on the piston, the gas has a pressure, in general, and that pressure will exert a force on the piston. So if I allow the piston to move to the right, that gas will be exerting a force on the piston in the direction that it's moving. And that means the gas will be doing work on the piston. So, by our ordinary notions of Newtonian conservation of energy, we would know that the gas would lose energy, and we can even calculate how much energy it loses. And the formula is easy enough to get in a Newtonian context. It's just du is equal to minus the pressure of the gas times the change in volume. A famous formula. And this just comes about by saying that the work that's done is the force times the distance. The force is the pressure times the area. And the volume is the area times the distance. And putting those things together, you get this formula immediately. Now this formula is actually much more general than the quasi derivation that I just showed. It works no matter what the shape of the gas is. If you put a gas in any kind of a container and let that container enlarge, even in an irregular way, the work that the gas will do will always be equal to minus the pressure of the gas times the change in the volume. We can apply this to the early universe. It actually works. The difference between our two cases is that our non-relativistic matter has no pressure at all. We're just talking about particles sitting at rest. They're not bouncing off of any walls. They're not creating any pressure, while the photons are moving around all the time. And if you imagine a box of them, they'd be hitting against the walls of that box, exerting a pressure. And we're now in a position to relate the pressure to the difference between the 3's and the 4's that appear in those two equations for RHO dot in the two cases. To apply this naive idea to a piece of the universe, we can imagine choosing-- we're going to choose some fixed volume in our co-moving coordinate system. So our box, the volume that we're talking about will actually be expanding with the universe but be fixed in co-moving coordinates. And the physical volume therefore of our box will be a cubed of t times the coordinate volume of the box-- the volume and not just cubed. And this volume will be independent of time. There's time dependents there. There's time dependents there. The physical volume of our box will be enlarging. The total energy in our box, the total gas energy, which I'll call capital U, will just be the physical volume times the energy density. The energy density is energy per physical volume. 
And we can now apply this formula using this U and this v. And here again is one of these cases where I'm going to be skipping steps because you're going to be doing it in detail on the homework. But by putting these equations together, what you'll find is that d dt of a cubed times RHO times c squared-- this is just d dt of a cubed times the energy density-- basically, the left hand side of that equation divided by dt and divided by the coordinate volume-- is equal to minus p times d dt of a cubed. And this is just the PDV term from the right hand side of that equation rewritten in terms of the variables. And you'll be doing this for homework. I'm just getting straight the factors to make sure I have them right. OK. Reshuffling that equation-- and again, this is a homework problem-- you can turn that into an equation for RHO dot. And what you'll get is that RHO dot is equal to minus 3 a dot over a times the quantity RHO plus p over c squared. And now we can see how our two cases emerge. If the pressure is 0, we get minus 3 a dot over a times RHO, which is what we had for non-relativistic matter. And the photon gas is going to have a pressure, and we could read off from this formula to know what the pressure has to be to turn the 3 into a 4. The pressure has to be such that p over c squared is a third of RHO. So you can determine from this that the pressure for light is 1/3 of the energy density, or 1/3 times RHO c squared. And that's what you need to turn the 3 into a 4. So we now have indirectly calculated the pressure of light, and this agrees with any other calculation for the pressure of light that you might do. It's by no means the only way to calculate it. And now finally, we're in a position-- and we'll just do this quickly-- to decide how to modify these equations. Now, we're not in a position to determine that rigorously. It can be determined rigorously by doing general relativity, which we're not doing at that level. But we can still motivate the answer. One of these two equations is going to have to be changed to accommodate a more general expression for RHO dot. The top equation we know is really an equation for conservation of energy. That's how we got it in the Newtonian case, where little k ended up being proportional to the energy. But this is basically a conservation of energy equation. And that's what you expect, just given your general notion of mechanics as well. If you have a second order equation-- a second order differential equation with respect to time-- that's the force equation. And if you have a first order differential equation with respect to time, 1/2 mv squared plus v of r equals constant. That's energy conservation. Same thing here. And we know that energy cannot suddenly change. If we imagine-- I guess the thought experiment I want to do is imagining somehow there's an explosion throughout all of space. I imagine putting pieces of TNT throughout space and arranging for, at the same cosmic time, for all of them to be ignited. And that would suddenly change the pressure of the universe, but it would not change the energy density. The energy density would be conserved. So the bottom line is that pressures can change discontinuously, but energy densities cannot. And since this equation is the conservation of energy equation, we'd expect that nothing can change suddenly here, that the pressure term cannot contribute here, because if it did, the pressure term would change suddenly. Nothing else in this equation would change suddenly. There would be no way the equation could be satisfied.
But if we added a pressure term to the second equation, that would allow the pressure to change discontinuously as the TNT went off. And that would change a double dot discontinuously. And there's nothing wrong with a double dot changing discontinuously. If you suddenly apply a new force to a particle, you suddenly change the second derivative of its motion. You suddenly change its acceleration. So that's OK. So adding pressure to this term makes sense. Adding pressure to this equation does not make sense. And then we can just ask, what do you have to do to this equation if we're going to add a pressure term to make all three equations now consistent with the new equation for RHO dot? It's your homework problem to answer that question, but the homework tells you the answer, and I'll write the answer on the board right now, and then we'll consider today's lecture over. The bottom line is that equation number one has to be modified into one prime-- what am I talking about? It's equation number two that has to be modified, into two prime. And the new equation is a double dot is equal to minus 4 pi G over 3 times the quantity RHO plus 3p over c squared, times a. And now we have a consistent set of Friedman equations, and these are the Friedman equations that we would have gotten if we had done everything using general relativity from the beginning. And we'll stop there. And we will meet again next Tuesday. And I'll send you an email about-- there will be at least one homework problem on the problem set that will have to be held over to the following problem set. I'll send you an email about that and post it on the website.
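Before moving on to the next lecture in this collection, here is a symbolic sketch of the two statements just made: that energy conservation per coordinate volume gives rho dot equal to minus 3 a dot over a times (rho plus p over c squared), with p equal to rho c squared over 3 reproducing the radiation equation, and that keeping the first-order Friedman equation unchanged then forces exactly the modified a double dot equation written on the board. This is an editorial addition that assumes the sympy library is available; it is not a substitute for working the homework problem.

```python
# Editorial sketch (requires sympy): symbolic checks of the results quoted above.
import sympy as sp

t, G, c = sp.symbols('t G c', positive=True)
k = sp.Symbol('k', real=True)
a = sp.Function('a')(t)
rho = sp.Function('rho')(t)
p = sp.Function('p')(t)

# (1) Energy conservation per coordinate volume: d/dt(a^3 rho c^2) = -p d/dt(a^3)
conservation = sp.Eq(sp.diff(a**3 * rho * c**2, t), -p * sp.diff(a**3, t))
rho_dot = sp.solve(conservation, sp.diff(rho, t))[0]
print(sp.simplify(rho_dot + 3 * sp.diff(a, t) / a * (rho + p / c**2)))           # 0

# With p = rho c^2 / 3, this becomes rho_dot = -4 (a_dot/a) rho, the radiation case.
print(sp.simplify(rho_dot.subs(p, rho * c**2 / 3) + 4 * sp.diff(a, t) / a * rho))  # 0

# (2) Keep the first-order Friedman equation unchanged, differentiate it in time,
# substitute the general rho_dot, and solve for a double dot.
friedmann1 = sp.Eq(sp.diff(a, t)**2,
                   sp.Rational(8, 3) * sp.pi * G * rho * a**2 - k * c**2)
diffed = sp.Eq(sp.diff(friedmann1.lhs, t), sp.diff(friedmann1.rhs, t))
diffed = diffed.subs(sp.diff(rho, t), rho_dot)
a_ddot = sp.solve(diffed, sp.diff(a, t, 2))[0]
target = -sp.Rational(4, 3) * sp.pi * G * (rho + 3 * p / c**2) * a
print(sp.simplify(a_ddot - target))                                              # 0
```

All three printed values come out to zero, which is the consistency the lecture describes.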
[MIT 8.286 The Early Universe, Fall 2013 -- Lecture 8: The Dynamics of Homogeneous Expansion, Part IV]
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: I'd like to begin building up momentum by going over what we've already done to kind of see how it all fits together. So I put the last lecture summarised on transparencies here. And we'll go through them quickly, and then we'll start new material, which I'll be doing on the blackboard. So we've been studying this mathematical model of a universe obeying Newton's laws of gravity. We considered simply a uniform distribution of mass, initially spherical, and uniformly expanding, which means expanding according to Hubble's law, with velocities proportional to the distance from the origin. And Newton's laws then tell us how that's going to evolve. And our job was just to execute Newton's laws to calculate how it would evolve. And we did that by describing the evolution in terms of a function little r, which is a function of r, i, and t. And that's the radius at time t of the shell that was initially a radius r sub i. So we're trying to track every particle in this sphere, not just the particles on the surface. And we want to verify that it will remain uniform in time. Matter will not collect near the edge of the center. And we discovered it will remain uniform if we have a 1 over r squared force law, but for any other force law it will in fact not remain uniform. So we derived the equations and found that it obeyed this scaling relationship, which is what indicated the maintenance of uniformity, wholesale scale by the same factor, which we then called a of t. So the physical distance of any shell from the origin at any time is just equal to a of t times the initial distance from the origin. And furthermore, we were able to drive equations of motion for the scale factor. And it obeys the equations, which in fact were derived from general relativity by Alexander Freedman in 1922, and therefore they're called the Freedman equations. There's a second order equation, a double dot, which just tells us how the expansion slowed down by Newtonian gravity. a double dot is negative, so the expansion is being slowed by the gravitational attraction of every particle in this spherical distribution towards every other particle. And we also were able to find a first order equation by integrating the second order equation. And the first equation has this form. It could be written a number of different ways depending on how you arrange it. But this is the way that I consider most common. And many books refer to the second equation as the Freedman equation. Both of these equations were derived by Alexander Freedman. I think it's perfect called both Freedman equations. But most books do not do that. And in addition to finding the equations of evolution for a of t, we also understood how rho of t evolves. And that really is pretty trivial to begin with. The Newtonian mass of this sphere stays the same. It just spreads out over a larger volume as a of t grows. So if the mass stays the same and the volume grows as a cubed, then the density has to go like one over a cubed. And this alone could be written more precisely as an equation by saying the easiest way to view the logic is that this equation implies that a cubed times rho is independent of time. 
And then once you know that a cubed times rho is independent of time, you can write the equation in this form, which if I multiply a of t cubed to the left hand side of the equation, we can just say that a of t cubed times rho is the same at time t as it is at some other time t1. Now in lecture last time, we wrote this equation where t1 was t sub i the initial time, and a of t sub i was one. So we didn't include that factor. So this is slightly more general way of writing it than we did last time. But it still has no more content than the statement that rho of t falls as 1 over a cubed of t. We introduced a special set of units to describe this. Question, yes? AUDIENCE: I had a question about the Freedman equations and what we're calling e squared, and the interpretation of that. PROFESSOR: What we're calling what squared? AUDIENCE: h squared. PROFESSOR: h squared, yes. AUDIENCE: So in the spherical universe, we showed that if e was positive that corresponded to an open universe that would expand forever. So doing the p set for this week for the cylindrical universe, I think we found that that universe would collapse. PROFESSOR: Yes. AUDIENCE: E was also positive for that one. So I was just wondering about the interpretation of [INAUDIBLE] PROFESSOR: Right, well I'll be coming to that issue later. Let me come back to that, OK. Since I haven't introduced e yet. And the slide is here. So an interesting question. We'll come back to that. So to describe this system of equations, I like to introduce this notion of a notch, which is a special unit used only to measure co-moving coordinates. And in this case, r sub i is our co-moving coordinate. As our shells move, we label them all by where they were a time t i. We don't change those labels. So those are the co-moving coordinates, a coordinate system that expands with the universe. So instead of measuring r i in meters or any other physical length, I like to measure them in a new unit called a notch, just to keep things separate. And a notch is defined so that a of t is measured in meters per notch, and at time t i, one notch equals 1 meter. But at different times, the relationship between notches and meters is different because it's given by this time dependent scale factor of t. If [INAUDIBLE] works to have things depend on these units, one find that this new quantity that we introduced, little k, which all we know is that it's a constant, has the units of 1 over a notch squared. And that has some relevance to us, because it means that we can make little k have any value we want by choosing different definitions for the notch. And the notch is up for grabs. We are just inventing a unit to use to measure our co-moving coordinate system. So we can always adjust the meaning of a notch so that k has whatever value we want. As long as we can't change its sign by changing the units. And if it's zero we can't change it by changing its units. As long as non-zero we can make it any value we want, and in fact that is often used in many textbooks. And that's I guess what I want to talk about next, the conventions that are used to define a of t. And for us, I'm going to treat this notch as being arbitrary. We've defined the notch originally so that a of t i was one meters per notch at time t i, and that gave the notch more or less a specific meaning. But the specific meaning depends on what t i is. One can take the same relationship and view it as simply a definition of t i. t i is the time at which the scale factor is one meter per notch. 
We take the definition of t i and we can let the notch be anything we want, and there will be some time t i that will still make that statement true. So what I want to do basically is to think of this equation, a of t i equals 1 meter per notch, not as a definition of a notch, which I want to leave arbitrary, but rather as a definition of t sub i. And after defining t sub i, I want to just forget about t sub i. t sub i in fact will not enter our equations anywhere. So we don't need to remember its definition. I decided like the cubit. All of us know that the cubit is some unit distance, but we don't care what it is because we never use cubits. Same thing here. We'll just never use t sub i. And since we're never going to use it. We don't need to remember how it was initially defined. It's only of historical interest. So the bottom line then is simply that for us the notch is just an undefined unit of distance in the co-moving coordinate system. Other people use different definitions. Ryden, for example uses the definition where a of t sub, a of the present time is equal to 1. And we would interpret that as meaning one meter per notch today. And that's a perfectly good definition. And we can use it whenever we want, because our notch is initially undefined. So that allows us the freedom to define it in any particular problem in any way that we want. Many other books take advantage of the fact that this quantity k has units of inverse notch squared, even though they don't say that. But that means you could rescale the co-moving coordinate system to make k equal to whatever value you want. So in many books k is always equal to plus or minus 1 if it's non-zero. The co-moving coordinate systems is just scaled. We would do is we scaling of the notch to make k have magnitude 1. OK, having derived these equations, the next step is to go about asking what do the solutions to the equations look like. And that's where things start getting interestingly when we start getting some real nontrivial results. The Freedman equation, the first order one, could be rewritten this way. It's just a rewriting of rearranging things. And I used here the fact that rho times a cubed was a constant. If we took our original form of the Freedman equation, this would be rho sub i times a cubed of t sub i, which would be 1. But knowing that rho times a cubed is a constant, we can let t argument here be anything we want, which is what I'll do, just to emphasize that we don't care anymore what t sub i was. It really has disappeared from our problem. It was just our way of getting started. So this equation holds. And we can use it to discuss how the different classes of solutions will behave. And what we can see very quickly from this equation is that the behavior of the solutions will depend crucially on the sign of k. And it's useful here to remember, although we don't need to know anything more than this proportionality, that k is proportional to the negative of something that we call e. And that the thing that we call e is related to the overall energy of this thing. So we'll keep that in the back of our minds. But it's really only for intuition. Everything that we're going to say follows directly from this equation, where we don't know anything about e. There are three types of solutions, depending on whether k is positive, negative, or zero, and those are the options for what k might be. It's a real number. So first we consider with the solutions where k is less than zero, which means e is greater than zero. 
And e being greater than zero means the system has more energy than zero. And in this case, zero energy would correspond to having all the particles infinitely far away. So the potential energies would be zero. And all particles at rest, so the kinetic energies would be zero. So in particular, zero energy would correspond to the system being completely dispersed, no longer compact. And in this case, our system has more energy than that. And more energy than that means it can blow outward without limit. And we see that directly from the equation. If the second term has a negative value of k, then the second term itself is positive. And the first term is also always positive. And that means that a dot squared, no matter what happens to the first term, is always at least bigger than the second term, which is a constant. And if a dot is always bigger than some constant, that means that a grows indefinitely. And that's called an open universe. And it goes on expanding forever. The second case we'll consider is k greater than zero, which corresponds to e less than zero. And that means since 0 corresponds to the system being completely dispersed, e less than 0 means the system does not have enough energy to ever become completely dispersed. So we'll have some maximum size. And the maximum size follows immediately from this equation. a dot squared has to be non-negative. It can become zero, but it can never get negative. In this case, the minus k c squared term is negative. And that means that if this term gets to be too small, the sum will be negative, which is not possible. So this term cannot get to be too small. And since a of t is in the denominator, that means a of t cannot get to be too big. And you can easily derive the inequality that a has to obey for the right hand side to always be positive. And in that case you [INAUDIBLE] a max, which is what you would get if you just set the right hand side equal to 0, given by that expression. a can never get bigger than that, because if it did, the right hand side of the equation would become negative, which is not possible. So this universe will reach a maximum size, which we just calculated, and then [INAUDIBLE] will come back. So we already have a very nontrivial result here. Given a description of a universe of this type, we can calculate how big it will get before it turns around and collapses. And this kind of universe ultimately undergoes a big crunch when it collapses, where the phrase big crunch was coined as an analogy to the phrase big bang. It's called a closed universe. And then finally, we've now considered the k less than zero and k greater than zero. That leaves the case when k equals zero. And that corresponds to the critical mass density, or the critical universe. And we can figure out what it means in terms of the mass density. This again is our Freedman equation. If k is zero, this last term is absent. So we just have a relationship between rho and h, the Hubble expansion rate. And we can solve that. And the value of rho which satisfies that equation is called the critical density. If the density is equal to the critical density, it means that k is zero. And that's called the flat case. We'll figure out in a minute how it evolves. It's not clear if it will collapse or stop or what. But we'll find out soon. It's really on the borderline between something which we know expands and something which we know collapses. And it's called a flat universe. The word flat suggests geometry, and we'll be learning about that later.
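To pull the three cases together in one place, here is a small numerical sketch. It is an editorial addition, not something from the lecture; it assumes the scale factor is normalized to 1 today (one particular choice of notch), uses cgs units for G, and gets a_max by setting the right-hand side of the Freedman equation to zero exactly as described above.

```python
# Editorial sketch: classify a matter-dominated model from rho and H,
# with the scale factor normalized so a = 1 today, and G in cgs units.
import math

G = 6.674e-8                          # cm^3 g^-1 s^-2

def classify(H, rho):
    """H in 1/s, rho in g/cm^3."""
    rho_c = 3 * H**2 / (8 * math.pi * G)
    omega = rho / rho_c
    if omega > 1:
        # Set the right-hand side of the Freedman equation to zero, using rho ~ 1/a^3:
        kc2 = (8 * math.pi * G / 3) * rho - H**2        # k c^2 evaluated today (a = 1)
        a_max = (8 * math.pi * G / 3) * rho / kc2
        return f"closed: omega = {omega:.2f}, recollapses after reaching a_max = {a_max:.2f} a_today"
    if omega < 1:
        return f"open: omega = {omega:.2f}, expands forever"
    return "flat: omega = 1, the critical case"

H0 = 67.3 / 3.0857e19                 # 67.3 km/s/Mpc converted to 1/s
rho_c = 3 * H0**2 / (8 * math.pi * G)
print(classify(H0, 2.0 * rho_c))      # a universe at twice the critical density
```

Algebraically, this a_max works out to omega over (omega minus 1) times the present size, so a matter-dominated universe at twice the critical density, for example, would grow to twice its present size before recollapsing.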
General relativity tells us a little bit more than we learn here. These equations are all exactly true in the context of general relativity. But general relativity also tells us that these equations are connected to the geometry of space. And only for this critical mass density is the space Euclidean. The word flat here is used in the sense of Euclidean. So to summarize what we've said, if the mass density is bigger than this critical value, we get a closed universe, which reaches a maximum size and then collapses. If the mass density is less than the critical density, we get an open universe, which goes on expanding forever. And if the mass density is exactly equal to the critical density, that's called a flat k. So we'll explore a little bit more in a minute. It's interesting to know what this critical density is. It depends on the expansion rate. But the expansion rate has now been measured quite accurately. So if I take the value of 67.3, which is the value that comes from the Planck satellite combining their results with several other experiments, they get a value of 67.3. And when we'll put that into this formula, the number one gets is 8.4 times 10 to the minus 30 grams per centimeter cubed, which is only about 5 proton masses per cubic meter. It's an unbelievably empty universe that we live in. I say the universe that we live in because in fact the mass density of our universe is very close to this critical value. It's equal to it to within about half of a percent we now know. An important definition, which we'll be continuing to use through the course and which cosmologists always use, is omega, where omega means capital Greek omega. And that's just defined to be the actual density of the universe, whatever it is, divided by this critical density. OK, the one remaining thing that we did last time, and we'll summarize this and go on, is we figured out what the evolution is for a flat universe. And we can do that just by solving the differential equation, which is a fairly simple differential equation. If we leave out the k term, the Freedman equation becomes a dot over a squared is equal to 8 pi g over 3 times rho And we know how rho depends on a. It's proportional to 1 over a cubed. So the right hand side here is some constant divided by a cubed. And by just rearranging things, we can rewrite that as da over dt is equal to some constant over a to the 1/2. In this slide I use this symbol const a number of times. Those constants are not all equal to each other. But they're all constants, which you can keep track of if you wanted to. But there's no need to keep track of them because they have no bearing on the answer anyway. So I just called these constants const. So again, da over dt equals const over a to the one half, which is an easy differential equation to solve. We just multiply through by dt and a to the one half, and write it in as a to the one half times da equals constant times dt, which can easily be integrated both sides of the equation as indefinite integrals. And then we get 2/3 times a to the 3/2 is equal to a constant times t, where this constant happens to be the same as that constant, for whatever that's worth, plus a new constant of integration, c prime. Then we argue that the value of c prime depends on how we synchronize our clocks. If we reset our clock by changing t by a constant that would change the value of c prime. 
And since we haven't said anything yet about how we're going set our clock, we're perfectly free at this point to just say that we're going to set our clock so that t equals 0 corresponds to the same time that a is equal to zero. The initial singularity of the universe starts from zero size with a as zero. So if we do that when a is zero, t is zero. That means that c prime is zero. So setting c prime equal to zero is just a choice of how to set our clocks. So we do that. And then we can take the 2/3 power of this equation. And since constants are just constants that we don't care about, we end up with a of t is proportional to t to the 2/3. And proportional is all we need to know, because the constant of proportionality would depend on the definition of the notch. And we haven't defined the notch. And we don't need to define the notch. And so anything that depends on the constant of proportionality will never enter any physical answer. It will be relevant to questions like how many notches are a certain distance on your map. But for any physical answer, we don't care. If we want to talk about our map, we could just define a notch to be whatever we want it to be. OK, that's the end of my summary. Any questions about any of that? I'm sorry. I didn't come back to answer your question about the cylinder. The cylinder problem does end up always giving you a closed universe no matter what the parameters are. It always collapses. And even though its energy as you compute it would turn out to be positive, the difference though is that for the case of the cylinder, the potential energy does not go to 0 as the thing becomes infinitely big. The potential energy has a logarithmic diversion in it. So the zero is just placed differently for the case of the cylinder problem. So it ends up being closed no matter how fast it's expanding. Yes? AUDIENCE: [INAUDIBLE] PROFESSOR: Well that is true. Certainly the differential equations break down when a is equal to zero. The mass density goes to infinity. But we're still certainly free to set our clocks so that the equations themselves, when extrapolated to 0, would have the property that a equals zero when t is equal to zero. It's certainly correct, and I was going to be talking about this in a minute, that you should not trust these equations back to t equals zero. But that doesn't stop you for choosing whatever you want as your origin of t. And if these are the equations we have, the simplest way to deal with these equations is to use the zero of t when equations say that a was zero. Any other questions? OK, in that case, we will leave the slides for a bit and proceed on the blackboard. So so far I think we have learned two varied nontrivial things from this calculation. We learned how to calculate the critical density. We learned how to calculate what density the universe would have to have so it would re-collapse. And we've also learned that if the universe is closed, we can calculate how large it will get before it collapses. So those are two very nontrivial results coming out of this Newtonian calculation. The next question I want to ask is still about the flat universe. It's a fairly trivial extension of what we have. Given this formula for a of t, I would like to calculate the age of a flat universe. If you were living in a flat universe that was matter dominated like the one we're describing, how would you determine how old it was? 
And the answer is that it's immediately related to the Hubble expansion rate, and the age can be expressed in terms of the Hubble expansion rate. To see that-- so we're calculating the age of a matter dominated flat universe. And these age calculations will be extending as we go through the course. So in the end, we'll have the full calculation for the real model that we have of our universe. But you have to start somewhere. So we're starting with just a flat, matter dominated universe. We know that a of t is equal to some constant, which I will call little v times t to the 2/3 power. Previously I just used proportional to, but now it's just more convenient to give a name to the constant of proportionality. We'll never need to know what it is. But v is some constant of proportionality. This by the way already tells us something else, which wasn't obvious from the beginning, which is our flat universe does go on expanding forever, somewhat like an open universe. An important difference is that if you calculate da dt for the open universe, that approaches a constant as time goes to infinity. That is, the universe keeps on expanding at some-- minimal rate forever. In this case, if you calculate the adt, it goes to 0 at times. So the flat universe expands forever, but at an ever, ever, ever decreasing rate. We know how to relate a of t to h. The Hubble expansion rate is a dot over a, we learned a long time ago. And if we know what a is, we know what this is. So this is just 2 over 3t. The 2/3 coming from differentiating the 2/3. And that gives you a t to the minus 1/3, but then you're also dividing by a, which turns that t to the minus 1/3 to a t to the minus 1. So you get 2 over 3t is the final answer. So this is the relationship now between h and t, and the question we asked is how to calculate the age. The age is t. This is all defined as where t is equal to 0 at the big bang. So t really is the time elapsed since the big bang. So we're left immediately with a simple result, that t is equal to 2/3 times h inverse. Now this result immediately makes rigorous contact with something that we talked about in vague terms earlier. If you are so unfortunate as to badly mis-measure h, you can get a pretty wild answer for the h of your model universe. And Hubble mis-measured h by about a factor of seven comparative to present modern values. He got h to be too high. His value of h was too high by a factor of about seven, and that meant that when big bang theorists calculated the age of the universe were getting ages that were too low by a factor of seven. And in particular that meant they were getting ages of the order of 2 billion years. And even back in the 1920s and '30s, there was sufficient geological evidence that the Earth was older than 2 billion years. There was also significant understanding of stellar evolution, that starts took longer to evolve than 2 billion years. So the big bang model was in trouble from the start, largely because of this very serious mis-measurement in the early days of the Hubble expansion rate. If we put in some numbers, this of course is not an accurate model for our universe, we now know. Our universe is now currently dark energy dominated. But nonetheless, just to see how this works, we can put in numbers. So h, using this Planck value that I quoted earlier 67.3 plus or minus 1.2 kilometers per second per megaparsec. To be able to get an answer in years, one has to be able to convert this into inverse years. h is actually an inverse time. 
And a useful conversion number is 1 over 10 to the 10 years is equal to almost 100, but not quite, 97.8 kilometers per second per megaparsec, which allows you to convert these Hubble expansion rate units into inverse years. And using that one finds that the age of the universe using the 2/3 h inverse formula is 9.7 plus or minus 0.2 billion years. Now this number played a role in the fairly recent history of cosmology. Before 1998, when the dark energy was discovered, which kind of settled all these questions, but before 1998, we thought the universe was matter dominated. It might have been open. It didn't have to be flat. That was debated. It looked more open than flat. But some of us wanted to hold out for a flat universe because we were fans of inflation and admired inflation's other successes, which we'll learn about later in the course, and thought therefore that the universe should be flat, and wanted to try to reconcile all this. And the problem was that like cosmology in the '20s and '30s when the age of the universe that you calculated was too young, the same thing was happening here before 1998, when we thought the universe was matter dominated. This is the age that we got, modified a little bit by having different values of h, but pretty close to this. At the same time, there were calculations about how old the universe had to be to accommodate the oldest stars. And in the lecture notes I quote a particular paper by Krauss and Chaboyer. Lawrence Krauss is an MIT PhD by the way. And what they decided by studying globular clusters, which are supposed to contain the oldest stars that astronomers know about, was that the oldest stars had an age of 12.6 plus 3.4 minus 2.2 billion years. And this is a 95% confidence number. That is, instead of using one sigma, which is often used to quote errors, these are two sigma errors, which have probabilities of being wrong by only 5% if things work properly according to the statistics. So then we're going to think about 95% limits. So they were willing to take the lower limit here, which was 10.4. So they got a minimum age of 10.4 billion years for the oldest stars. But they also argued that the stars really could not possibly start to form until about 0.8 billion years into the history of the universe. And doing a little bit of simple arithmetic there, they decided that the minimum possible age for the universe at the 95% confidence level would be 10.4 plus 0.8 or 11.2. And 11.2 is older than 9.7, and by a fair number of standard deviations, although [INAUDIBLE] were somewhat bigger in 1998 than they are now. But in any case, this led to I think what people at the time regarded as a tension between the age of the universe question and the possibility of having a flat universe. A flat universe seemed to produce ages that were too young to be consistent with what we knew about stars. Yet there was still evidence in terms of the desire to make inflationary models work to indicate that the universe was flat. And actually it's also true that by 1998, there was evidence from the COBE satellite measuring fluctuations in the cosmic background radiation, which also suggested that omega was one, that the universe was flat. So things didn't fit together very well before 1998. And this was at the crux of the argument. It all got settled with the discovery of the dark energy, which we'll learn how to account for in a few weeks.
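The two ages being contrasted here are easy to reproduce numerically. The sketch below is an editorial addition: the first number is just the 2/3 h inverse formula with the conversion factor quoted above, while the second uses the standard age formula for a flat universe with non-relativistic matter plus a cosmological constant, a result the course has not derived yet, evaluated with the rough values omega matter = 0.30 and omega lambda = 0.70.

```python
# Editorial sketch: the two ages being compared, using the conversion quoted above.
import math

H0 = 67.3                                 # km/s/Mpc
conv = 97.8                               # 1/(10^10 yr) expressed in km/s/Mpc
hubble_time = 1e10 * conv / H0            # 1/H0 in years, ~1.45e10

age_matter_only = (2.0 / 3.0) * hubble_time
print(f"matter-dominated flat universe: {age_matter_only / 1e9:.1f} billion years")  # ~9.7

# Preview of a result derived later in the course: for a flat universe with
# non-relativistic matter plus a cosmological constant,
#   t0 = 2 / (3 H0 sqrt(Omega_L)) * arcsinh( sqrt(Omega_L / Omega_m) ).
Omega_m, Omega_L = 0.30, 0.70
age_lcdm = (2.0 / (3.0 * math.sqrt(Omega_L))) \
    * math.asinh(math.sqrt(Omega_L / Omega_m)) * hubble_time
print(f"with dark energy included:      {age_lcdm / 1e9:.1f} billion years")  # ~14
```

With these rounded inputs the dark-energy value lands near 14 billion years, consistent with the 13.8 billion years quoted next; the small remaining gap comes from the rounded parameter values and the neglected radiation.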
When we include dark energy in these calculations, the ages go up, and everything does come into accord with the idea now that the age of the universe is estimated at 13.8 billion years. And that's consistent with the Hubble expansion rate given here, as long as one has dark energy and not just non-relativistic matter. OK, any questions about that? OK next thing I wanted to say a little bit about is what exactly we mean here by age. And the question of what we mean by age does of course connect to the question of how do we think it actually began. Because age means time since the beginning presumably. And the answer really is that we don't know how the universe began. The big bang is often said to be the beginning of the universe. But I would argue that we don't know that, and I think most cosmologists would agree that we don't know that. As we extrapolate backwards, we're using our knowledge of physics that we measure in laboratories and physics that we confirm with other astrophysical type observations, but nonetheless, as we get closer and closer to t equals zero, the mass density in this approximation grows like 1 over the scale factor cubed, which means it becomes arbitrarily large. Later we'll learn that when the universe gets to be very young, we have to include radiation and not just non-relativistic matter. The dark energy is actually totally unimportant when we go backwards in time. It becomes important when we go forwards in time. But when we put in radiation, it does not solve this problem. The universe still requires a mass density that goes to infinity as we approach t equals zero. People had wondered whether maybe that's an idealization associated with our approximation of exact homogeneity and isotropy which after all do break down at some level. Maybe if we put in a slightly inhomogeneous and slightly anisotropic universe and ran it backwards, maybe the mass density would not climb to infinity. Hawking proved that that was not a way out. The universe would become singular. He didn't really prove the mass density went to infinity, but he proved it becomes singular in other ways as t went to zero no matter what geometry you put in. So the bottom line is that classical general relativity does predict a singularity of some sort as we extrapolate backwards in time. But the important qualification is that once the mass density goes far above any mass densities that we've had any experience with, we really don't know how things are going to behave. And we don't really know how classical general relativity holds in that regime. And in fact, we have very strong reasons to believe that classical general relativity will not hold in that regime. Because classical general relativity is after all a classical theory, a theory in which one talks about fields that have definite values at definite times without incorporating the ideas of the uncertainty principle of quantum theory. So nobody in fact knows how to build a theory in which matter is quantized and gravity is not quantized. So all the smart money bets on the fact that gravity is really a quantum theory, even though we don't yet quite understand it as a quantum theory. And that as we go back in time, the quantum effects become more and more important. So there's no reason to trust classical general relativity as we approach t equals zero, and therefore no reason to really take this singularity seriously.
Furthermore, we'll even see at the end of the course that most inflationary scenarios imply that what we call the big bang is not a unique beginning of the universe. But rather it now seems pretty likely, although we sure don't know, that our universe is part of a multiverse, where we are just one universe in the multiverse, and that the big bang, what we call the big bang, is really our big bang, the beginning of our pocket universe. But before that the space of time already existed. The big bang is just a nucleation of a phase transition. It's not really a beginning. And that there was other stuff that existed before what we call the big bang. I should add though that the inflationary scenario does not provide any answer whatever to the question of how did it all ultimately begin. That's still very much an open question. And it's clear that inflation by itself does not even offer an answer to that question. So when we talk about the age of the universe, what are we talking about? What we're talking about is the age, the amount of time that has elapsed, since this event that we call the big bang. The big bang might not have been the beginning of everything, but certainly the evidence is overwhelmingly strong that it happened. And we could talk about how much time has elapsed since it happened. And that's the t that we're trying to calculate here. And it will be offset by a tiny amount by changing the history in the very, very early stages, but only by a tiny fraction of a second. So the uncertainties of quantum gravity are not important in calculating the age of the universe. Although they are important in interpreting what you mean by it. I think we don't really mean the origin of space and time, but rather simply the time has elapsed since the event called the big bang. OK any questions about that? All right, next event I want to talk about is that if the universe as we know it began some 13.8 billion years ago to use the actual current number, that would mean that light could only have traveled some finite distance since the beginning of the universe as we know it, meaning the universe since the big bang, and that would mean there would be some maximum distance that we could see things. And beyond that there might be more things, but they'd be things for which the light has not yet had time to reach us. So that's an important concept in cosmology, the maximum distance that you could see. It goes by the name the horizon distance. If you're sailing on the ocean, the horizon is the furthest thing you can see. So what we want to do now is to calculate this horizon distance in the model that we now understand, the flat matter dominated universe. And this of course is also a calculation that we will be generalizing as we go through the course learning how to treat more and more complicated cases and more and more realistic cases. So this horizon distance, I should define it more exactly. It's the present distance, and maybe I should even stick the word proper here. I've been usually using the word physical distance to refer to the distance to an object as it would be measured by rulers, which are each along the way moving with the velocity of the average matter at those locations. That is also called the proper distance, which is [INAUDIBLE] calls it. And this horizon distance is defined as the present proper distance of the most distant objects that can be seen, limited only by the speed of light. 
So we pretend we have telescopes that are incredibly powerful and could see anything, any light that could have reached us. But we know the light has a finite propagation time, so we take that into account in talking about this horizon distance. OK so what is the horizon distance going to be? Well remember that the coordinate velocity of light is equal to c divided by a of t. I should maybe start by saying before we get down to details, that you might think naively that the answer should be the speed of light times the age of the universe. That's how far light can travel. And so if the universe was static and just appeared a certain time in the past, that would be the right answer. It would just start off at the beginning and travel at speed c. But it's more complicated because the universe has been expanding all along. And it started out with a scale factor of 0. And furthermore, what we're looking for is the present distance of these objects, and the objects of course have continued to move after the light that we're now seeing has left them. So it's a little more complicated than just c times the age of the universe. And we'll see what it is by tracking it through very carefully. We'll imagine a light beam that leaves from some distant object. And the light beam will get the furthest if it leaves earliest. So we want the earliest possible light beam that could have left this distant object. And that would be a light beam that left at literally t equals zero. So the light beam leaves the object at t equals zero and reaches us today. And we want to know how far away is that object? That's the furthest object that we could see, objects for which we can only see the light that was emitted from the object at t equals zero. So we're going to use our co-moving coordinate system to trace things. All calculations are done most straightforwardly in the co-moving coordinate system. And we know that light travels in the co-moving coordinate system at the rate of dx dt is equal to c divided by a of t. And this really just says that as the light passes any observer in this co-moving coordinate system, the observer sees speed c, as special relativity tells us he must. But we need to convert it into notches per second to be able to trace it through the co-moving coordinate system. And the relationship between notches and meters is a of t. So a of t is just a conversion factor here that converts the local speed of this light pulse from meters per second to notches per second, which is what dx dt has to be measured in. So this will be the speed. That means that the maximum distance that light will travel, still measured in notches in co-moving coordinates, will just be the integral of the speed. The integral of dx dt is just delta x. So it would be the integral of dx dt dt, integrating from 0 up to t zero, the present time. Now this is not the final answer that we're interested in. We want to know the present physical distance or the present proper distance of this object that's the furthest object that we can see. And the way to go from co-moving distances to physical distances is to multiply by the scale factor. And we are interested in the distance today, so we multiply by the scale factor today, the present value of the scale factor. So the answer to our problem, which I will call l sub p or sub [INAUDIBLE] of t. I want to get the word horizon into the subscript someplace.
So I will call it l sub p comma horizon, which means the physical distance to the horizon at time t is just equal to x max that we have here times the present value of the scale factor. So it's a of 2 [INAUDIBLE] times x max. Or the final formula, just substituting in x max, will be of a of t naught times the integral from 0 to t naught of dx dt. I'm going to substitute c over a of t. Let me just remind you that [INAUDIBLE] variable integration should never have the same symbol as the limits of integration, because that just causes confusion. They're never really the same thing. So I called the limits t sub zero. So it's perfectly OK to call the variable of integration t. In the notes, I call the value of the time that we want to calculate this as t, and then I use t prime for the variable of integration. Whatever you do, you should make sure that those are not the same. There's one variable that corresponds to the variable that varies from the initial time to the final time. And then there's also another value that represents the final time. OK, so now all we have to do is plug in a of t is a constant times t to the 2/3 into this formula and we have our answer. Notice that it does obey an important property. There's an a in the numerator and an a in the denominator. And that means that when we put in the formula for a of t, the constant of proportionality b will cancel out. And it must, as the constants proportionality is measuring the notches or notches per seconds to the 2/3 power but proportional to the notch. And the answer can't depend on notches because notches are not really a physical unit. But it works. That's an important check. So-- now it's just a matter of plugging in here, and maybe I'll do it explicitly. I'll leave out the b's [INAUDIBLE] so you see the cancel. We have b times t zero to the 2/3 times the integral from 0 to t zero, times c over b times t to the 2/3 times dt. The b's cancel as I claimed. The integral of t to the minus 2/3 is 3 times t to the 1/3. We then subtract the t to the 1/3 giving it the value t zero on the positive side, and then we subtract the value [INAUDIBLE] same expression at zero. But t to the 2/3 when t is zero vanishes. So we just get t zero to the 2/3 from the upper limit of integration. I'm sorry, to the 1/3 power. We integrate minus 2/3. We get t to the 1/3. The t to the 1/3 multiplies the t to the 2/3 giving us a full one power of t. So what we're left with is just 3 times c times t zero, which has the right units. It should have units of physical distance. Speed times time is the distance. And it has a surprising factor of 3 in it. The naive answer would have just been c times t zero, saying that the light at time t travels so it travels a distance c times t zero. That would be true, as I said, in a stationary universe. But the universe is not stationary. It's expanding. And the fact it's expanding means that you expect this to be bigger than c times t zero. It means that at earlier times, things were closer. So the light can save time by leaving early and traveling a good part of the distance while distances are smaller than they are now. And it's a full factor of 3. So that is the horizon distance for a flat, [INAUDIBLE] universe. We can also, since we know how to relate t sub zero to the Hubble expansion rate, we can express the horizon distance if we want in terms of the Hubble expansion rate by just doing that substitution. So this then becomes 2 times c times the current Hubble expansion rate inverse. 
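As a quick numerical check, an editorial addition rather than part of the lecture, one can do the comoving integral directly and compare it with the 3 c t0 result. The sketch assumes the matter-dominated age of roughly 9.7 billion years from earlier in the lecture, works in units where c = 1 and the scale factor is 1 today, and uses scipy for the quadrature.

```python
# Editorial sketch: numerical check of the horizon distance for the flat,
# matter-dominated model.  Units: c = 1, time in years, distance in light-years,
# scale factor normalized so that a(t0) = 1.
from scipy.integrate import quad

t0 = 9.7e9                                   # the matter-dominated age from earlier

# x_max = integral from 0 to t0 of c / a(t) dt, with a(t) = (t / t0)^(2/3);
# substituting u = t / t0 keeps the numerics simple.
integral, _ = quad(lambda u: u ** (-2.0 / 3.0), 0.0, 1.0)
l_horizon = 1.0 * (t0 * integral)            # a(t0) * x_max

print(f"numeric : {l_horizon / 1e9:.1f} billion light-years")
print(f"3 c t0  : {3 * t0 / 1e9:.1f} billion light-years")
```

Both lines come out near 29 billion light-years for this model, which is the same number as 2 c divided by the present Hubble expansion rate.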
So these are both valid expressions for the horizon distance in this particular model of the universe. Any questions about the meaning of horizon distance? There is actually a subtlety about the meaning of horizon, which I should talk about. The initial value of the scale factor in our model is zero. It's t to the 2/3, and t goes to 0, the scale factor goes to 0. Things are of course singular there, where you don't really trust exactly what the equations are telling us at t equals zero. But that's certainly how they behave. a of t goes to 0 as t goes to 0. That means initially everything was on top of everything else. So if everything was on top of everything else, why is there any horizon distance? Couldn't anything have communicated with anything at t equals zero, when the distance between anything and anything was zero? The answer to that is perhaps somewhat ambiguous. We of course don't really understand the singularity. We don't claim to understand the singularity. And therefore anything you want to believe about the singularity at t equals zero you are welcome to believe. And nobody intelligent is going to contradict you. They might not reinforce you, but they're not going to contradict you either, because nobody knows. So it is conceivable that everything had a chance to communicate with everything else at t equals zero at the singularity. And it's conceivable that when we understand quantum gravity, it may even tell us that. We don't know. What is still the case is that if you strike out t equals zero exactly, then everything is well defined. You can ask what happens if a photon is sent from one object to another leaving that object at time epsilon, when epsilon is slightly later than zero. And you can ask how long does that photon take to go from object a to object b? And that's really exactly the calculation we did, except instead of going down to t equals zero, you go down to t equals epsilon, where epsilon is the earliest time that you trust your classical calculations. Then you'd be asking how far is the furthest object that we could see during this classical era, the era that starts from t equals epsilon? The only difference would be to put an epsilon there instead of 0. And the answer-- you can go through it-- differs only by some small multiple of epsilon. And if epsilon is small, it doesn't change the answer at all. Physically what is going on is that if you go to very, very early times and look at two objects a and b and trace them back, the distance between them does get smaller and smaller as epsilon goes to 0. So you might think that communication would be trivial. But at the same time as the distance is going to zero, you can calculate the velocities. h remember is going like 2 over 3t. h is blowing up. The velocities between these two objects a and b are going to go to infinity at the same time that the distance between them goes to zero. So even though they will become very close, if one sends a light beam to the other, the other is actually moving away faster than the light would be moving. The light would eventually catch up, but the amount of time it would take for the light to catch up is exactly what this integral is telling us. So there is no widespread communication that is possible once these equations are valid. They really do say that in the early universe, things just cannot talk to other things because the universe is expanding so fast. And the maximum distance that you can see is more than c times t zero, but less than infinity. Yes? 
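Here is a small follow-up to the previous sketch, illustrating the point about the cutoff: if the horizon integral starts at t equals epsilon instead of t equals zero, the answer picks up a correction that vanishes as epsilon goes to zero, so the horizon distance is insensitive to exactly where we stop trusting the classical equations. Again this is just an illustrative check, done in working units with b set to 1.

```python
# Horizon distance when the earliest light pulse leaves at t = epsilon rather
# than t = 0, still for a(t) = t**(2/3) in working units with b = 1.
import sympy as sp

c, t0, t, eps = sp.symbols('c t_0 t epsilon', positive=True)

l_eps = sp.expand(t0**sp.Rational(2, 3) *
                  sp.integrate(c / t**sp.Rational(2, 3), (t, eps, t0)))
print(l_eps)                    # 3*c*t_0 minus a correction term that depends on epsilon
print(sp.limit(l_eps, eps, 0))  # -> 3*c*t_0, the answer we got before
```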
AUDIENCE: Why does it matter how far away the furthest galaxies we can see are now? Because we're seeing them as they were a long time ago when they were closer to us. PROFESSOR: Yea, now we certainly are seeing them a long time ago when they were closer to us. That's right. I would just say that this is sort of a figure of merit. If you want to describe what you think the universe looks like now, you would assume that those galaxies that you're seeing billions of light years in the past are still there, and you'd extrapolate to the present. So it's relevant to the picture that you would draw in your mind of what the universe looks like at this instant, although that picture would be based on things you haven't actually seen. AUDIENCE: [INAUDIBLE] PROFESSOR: Yes? AUDIENCE: Is the fact that this number is greater than c t zero proof that the universe is the actual space of the universe is expanding? PROFESSOR: I don't think so because if you just have the objects moving and a space that you regard as absolutely fixed and then you ask the present distance of that object given that you can see a light pulse that was emitted by that object, it will still be bigger than ct, because the object continues to move after it emits the light pulse. So I don't think this is proof of anything like that. I think I should add that certainly we think of this as the space expanding. That is certainly the easiest picture. But you can't have any absolute definition of what it means for the space to expand. AUDIENCE: So you're saying it has moved three times the original distance it was when it sent out an impulse? PROFESSOR: No it actually was zero distance when it sent out the light pulse, because the first object we in principle could see is an object for which the light pulse left it at t equals zero when it literally was zero distance from us. And the light pulse actually then gets further away from us and comes back and eventually reaches us. We have enough information here to trace it. And that's how it behaves. OK any other questions? OK, next thing I want to do, and I don't know if we'll finish this today or not. But we'll get ourselves started. We'd like to now, having solved the equation for the flat universe for determining a of t, we would now like to do the same thing for the open and closed universes. And we'll do the closed universe first because it's a little bit simpler. I mean they're about equal, but we're going to do the closed universe first because it's first in the notes that I've written. So we know what equation we're trying to solve. This is really now just an exercise in solving differential equations. So one has to be clever to solve this equation. So we will go through it together and see how one can be clever and find the solution to it. The equation is a dot over a squared is equal to 8 pi over 3 g rho minus k c squared over a squared, with rho of t being equal to rho i times a cubed of t i over a cubed of t. And I'm writing i here. I could just as well write 1. It could be any time. Rho times a cubed isn't dependent of time. So you can put any time you want there. And the numerator is just a fixed number. OK, so our goal is to solve this equation. The first thing I want to do is something which is usually a good thing to do when you are given some physical differential equation. Physical differential equations usually have constants in them like g and c squared, which have different units which complicate things. And they don't complicate things in any intrinsic way. 
They just give you extra factors to carry around. So it's usually a little cleaner to eliminate them to begin with by redefining variables that absorb them. One can do that by defining variables that have the simplest possible units that are available for the problem at hand. And in our case, we have these complications that k is measured in inverse notch squared, and a is meters per notch. We could simplify all that by defining some auxiliary quantities. So in particular, I'm going to define an a with a tilde above it, which is sort of like the scale factor, but rescaled by dividing by the square root of k. And this is the case when k is positive. I said we're going to do the closed universe first. I would not want to divide by the square root of k if k were negative. That would be confusing. Now the nice thing about this is remember k has units of inverse notch squared. a has units of meters per notch. That means the notches just go away. So this has units of length only. And that was the motivation for dividing by the square root of k, to get rid of the notch. Similarly, time has units of seconds, and c is measured in meters per second. I'm trying to minimize the number of distinct kinds of units that we have in our problem. So I'm going to define a t twiddle which is just equal to c times t. This is of course no different from just saying that we're working in units where c is equal to 1, which is something people often say. This is the same thing except I'm a little bit more explicit about it. So t twiddle now has units of length also. So the idea is to translate everything so that everything has the same units, which would be meters or whatever physical unit of length you want to use. OK, now I'm just going to rewrite the Friedmann equation using these substitutions. And to make way for the substitutions, I'm first just going to divide the Friedmann equation by kc squared-- this factor here is k to the 3/2. OK now when I divided by kc squared, the kc squared over a squared there became-- I'm sorry. I'm doing more than dividing by kc squared. I'm dividing by kc squared and multiplying by a squared. So I have turned that last term into 1 by multiplying by its inverse. On the left hand side, the 1 over a squared went away when I multiplied by a squared. So I just have a dot squared divided by kc squared so far. We'll simplify that shortly. And the middle term, I took the liberty of multiplying by the square root of k, and then we have a k to the 3/2 here. If we absorb this into that, it would just be the kc squared factor that we divided by. And the a squared has also been split into two pieces, an a cubed and a 1 over a. So together these two factors make up the factor of a squared that we multiplied that term by. So it's the same thing we had, just multiplied by the common factor. And now the nice thing is that this is, in fact, our definition of da tilde over dt tilde. The c turns the dt into a dt tilde, and the 1 over the square root of k turns the da into da tilde. So the left hand side now is simply da tilde over dt tilde squared. And the right hand side, rho times a cubed remember is a constant. So the only thing that depends on time on the right hand side is the a divided by the square root of k, which I've isolated here. That's a tilde. So I'm going to take all of this, which is a constant, and just give it a name. So this term becomes a constant, which we'll call 2 alpha, divided by a tilde. And then we still have minus 1. And this constant, which I'm calling alpha, is just all of this factor except for a factor of 2.
So it's 4 pi over 3 times G rho a tilde cubed, divided by c squared. a tilde cubed because we had a cubed divided by k to the 3/2, and that's a tilde cubed. So I've just rearranged things. But now everything has the units of length. a tilde has units of length. t tilde has units of length. This is dimensionless, which is good because that's also dimensionless. Alpha, if you work out all this stuff, has units of length. a tilde has units of length. This is length divided by length, dimensionless. We haven't really changed anything significant. But at least as far as keeping track of factors, that equation is the one we'll solve. And the factors are now absorbed into the constant alpha, which is the only thing we have to worry about. And we don't need to worry about that until the end. Yes? AUDIENCE: [INAUDIBLE] PROFESSOR: As long as these two are evaluated at the same time, it doesn't matter. AUDIENCE: [INAUDIBLE] PROFESSOR: That's right. One does need to remember that these two are to be evaluated at the same time, whatever it is. And the product does not depend on time. So I didn't write the arguments. I could have, and I could have just put in an arbitrary time. But it would be the same time for the rho and the tilde. I will do that. It will be rho of some t one times a tilde cubed, evaluated at the same t one, for any t one. This product is independent of time. OK, now that is the kind of equation where we could move things from one side of the equation to the other to reduce it to doing ordinary integrals. So I can multiply by dt tilde and divide by the expression on the right hand side and get an expression that says that dt tilde is equal to da tilde divided by the square root of 2 alpha over a tilde minus 1. And since this has an a tilde in the denominator of the denominator, I'm going to multiply through by a tilde to rationalize things. So I'm going to rewrite this as a tilde da tilde over the square root of 2 alpha a tilde minus a tilde squared. OK, this looks better than what we've had so far. Now in principle, if we imagine we can do that integral, we can just integrate both sides. We get an equation that says t tilde is equal to some expression involving the final value of a. So we're going to imagine doing that. And we will actually be able to carry it out. When I solved the analogous problem for the flat case, with the t to the 2/3, it was a much simpler equation. But you might remember that at one point, I had an equation where I calculated the indefinite integral of both sides, and then I got a constant of integration, which I then argued should be set equal to zero if we wanted to define the zero of time to be the zero of a. The situation is really exactly the same, but to show you an alternative way of thinking about it, this time I'm going to apply definite integrals to both sides. And if you apply a definite integral to both sides, the thing to keep in mind is that you should be integrating over the same physical interval on both sides. So on the left hand side, I'm going to integrate from zero up to some t tilde final. t tilde final is just some arbitrary time, but I'm going to give it the name t tilde final. And that integral of course we can do. It's just t tilde final. And that should be equal to the integral of the right hand side over the same period of time. But the right hand side is not expressed as an integral over time. It's expressed as an integral over a tilde.
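Since that chain of substitutions is hard to follow in spoken form, here is the same thing written out compactly; this is only a restatement of what was just said, using the same symbols:

\[
\tilde{a} \equiv \frac{a}{\sqrt{k}}, \qquad \tilde{t} \equiv c\,t, \qquad
\left(\frac{d\tilde{a}}{d\tilde{t}}\right)^{2} = \frac{2\alpha}{\tilde{a}} - 1,
\qquad
\alpha \equiv \frac{4\pi}{3}\,\frac{G\,\rho(t)\,\tilde{a}^{3}(t)}{c^{2}} = \text{constant},
\]

and, separating variables,

\[
d\tilde{t} \;=\; \frac{d\tilde{a}}{\sqrt{\dfrac{2\alpha}{\tilde{a}} - 1}}
           \;=\; \frac{\tilde{a}\,d\tilde{a}}{\sqrt{2\alpha\,\tilde{a} - \tilde{a}^{2}}}.
\]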
So we have to ask ourselves, what do we call the integral over a tilde that corresponds to the integral over time from 0 to t tilde sub f? And the lower limits should match. And we know what we want a tilde to be at t tilde equals zero. We're going to use the same conventions we had in the other case. We're going to define the zero of time to be the time when the scale factor is zero. So t tilde equals zero should correspond to a tilde equals zero. So to get the lower limits of integration to correspond to the same time, we just put zero here. And zero here doesn't mean time zero. It means a tilde equals zero. But that's what we want. And for the upper limit of integration, that should just be the value of a tilde at the time t tilde sub f. So I will call that a tilde sub f, where I might make a note on the side here that a tilde sub f is a tilde of t tilde sub f. It's just the final value of a tilde, where final means any time I choose to stop this integration. The integral is valid over any time period. So these are the limits of integration-- the only thing that's new on this line is those limits. The rest I just copy from the line above: a tilde da tilde over the square root of 2 alpha times a tilde minus a tilde squared. So t tilde f is equal to that integral. And our goal now would be to do that integral. And if possible-- it won't quite be possible. We'll see how close we can come. But if possible we would like to invert that relationship that we get from doing this integral, to determine a tilde as a function of t tilde. That's what we would love to do. It turns out that's not quite possible. But we will nonetheless be able to obtain something called a parametric solution to this problem. And we'll stop now. But I will tell you next Tuesday after the quiz how we proceed here to get a solution to this problem.
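As a preview of what that integral looks like numerically, here is a short Python sketch that tabulates t tilde sub f as a function of a tilde sub f by direct quadrature. Alpha is set to 1 in arbitrary working units; this is only meant to show that the relation is well defined, not to anticipate the parametric solution discussed next time.

```python
# Numerically evaluate  t~_f = integral from 0 to a~_f of  a~ da~ / sqrt(2*alpha*a~ - a~^2)
# for the closed-universe Friedmann equation, with alpha = 1 in working units.
import numpy as np
from scipy.integrate import quad

alpha = 1.0

def t_tilde(a_f):
    integrand = lambda a: a / np.sqrt(2.0 * alpha * a - a**2)
    val, _ = quad(integrand, 0.0, a_f)
    return val

for a_f in [0.5, 1.0, 1.5, 1.9]:     # a~ ranges between 0 and 2*alpha in this model
    print(a_f, t_tilde(a_f))          # t~_f grows monotonically with a~_f
```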
PROFESSOR: OK if there are no questions, we will get back to physics. What I want to do today, as it suggests on the slide, is to finish the kinematics of homogeneous expansion that we were talking about last time. And the one topic in that category that we have not discussed yet is the cosmological redshift. So we'll begin by going over that. And then we'll begin to go on to the next topic altogether, which is the dynamics of homogeneous expansion -- how do we understand how gravity affects the expansion of the universe? So that will be the main subject of today's lecture, once we've finished up the issue of the cosmological redshift. Let me remind you that at the end of the last lecture, we were talking about the synchronization of clocks, and the coordinate system that we'll be using to describe the homogeneously expanding model of the universe. Remember, we are introducing spatial coordinates that grow with the universe. And we're going to be assuming, literally, that the universe is perfectly homogeneous and isotropic, which means that all objects will be literally at rest, relative to this coordinate system. If we're talking about the real universe, then there would be some motion relative to this coordinate system, because the universe is not exactly homogeneous. But we're going to be working for now with the approximation that our model universe is exactly homogeneous, which means that all matter is completely at rest, relative to this expanding coordinate system. And now we want to talk about how to define time, or to review what we said last time when we talked about how to define time. What we will imagine is that at every location in the universe there is a clock, at rest relative to the matter. And each clock ticks off time, and each of those clocks will be acceptable as a clock which measures the time at its own position-- time is measured locally-- but we still have to talk about synchronizing those clocks. And what we said last time is that we can synchronize the clocks as long as there's some cosmic phenomenon that can be seen everywhere, which has some time evolution. And we gave two examples-- one is the evolution of the Hubble expansion rate, which can be measured locally, and everybody can agree to set their clocks to midnight when the Hubble expansion rate has a certain value. And another cosmic variable is the temperature of the cosmic microwave background radiation. So, everybody in this model universe will agree that we'll set the clocks to midnight when the temperature of the cosmic background radiation goes to 5 degrees, or any specified number. So as long as there's a phenomenon of that sort, which there is in our universe, it's possible to synchronize these clocks in a unique way. And the important thing to realize is that once they're synchronized at one time, they will remain synchronized as a consequence of our assumption of homogeneity. That is, if everybody agrees that the cosmic background radiation has a temperature of 10 degrees at midnight, if everybody waits for 15 minutes after midnight, everybody should see the same fall in temperature during that time interval, otherwise it would be a violation of this hypothesis of perfect homogeneity. Yes, question.
AUDIENCE: Is it verified that temperature is invariant for all observers-- all Lorentz observers? PROFESSOR: OK the question is, is temperature invariant for all observers? And the question even included all Lorentz observers. It's not really invariant for different Lorentz observers. We're talking about a privileged class of observers, all of whom are at rest, relative to the average matter. If you move through the cosmic background radiation, then you don't see a uniform thermal distribution any more. Rather what you see is radiation that's hotter in the forward direction and colder in the backward direction. And we in fact, as I think I have mentioned here, see that effect in our real universe. We're apparently moving relative to the cosmic background radiation, at about 1/1000th of the speed of light. So it's not invariant with respect to motion. There's the additional question, though, of whether it is the same everywhere in the visible universe. As far as we can tell, it is. There is some direct measurement of that, that we'll probably talk about later in the course, by looking at certain spectral lines in distant galaxies. One can effectively measure the temperature of the cosmic microwave background radiation in some distant galaxies. This line cannot be seen in all galaxies, but to the extent that it has been measured, it agrees with what we expect. So certainly in our model, we're going to assume complete homogeneity, so everything's the same everywhere. And there is strong evidence for that homogeneity in the real universe-- it's not exact, but there's strong evidence for approximate homogeneity. Yes? AUDIENCE: If you were really close to the black hole [INAUDIBLE]. PROFESSOR: OK. The question is, suppose we're a little bit more careful, and talk about the fact that some people might be living near black holes, and other people are not. Will that affect the synchronization of clocks for the people who are living near black holes? The answer is sure, it will. We can only synchronize clocks cosmically if we assume that the universe is absolutely homogeneous. As soon as you introduce inhomogeneities like black holes, or even just stars like the sun, they create small perturbations, which then make it really impossible to expect clocks to stay in sync with each other. So as soon as you have concentrations of mass, then the fact that what we're talking about now is only an approximation becomes relevant. But those deviations are small. The deviations coming from the sun are only on the order of a part in a million or so. So, to a very good approximation, the universe obeys what we're describing, although if you went very close to the surface of one of these super-massive black holes in the centers of galaxies, or something, you would in fact find they had a very significant effect on your clocks. Any other questions? OK. Let me move on now. The next topic, as I have warned you, is the cosmological redshift. Now in the first lecture beyond the overview, which I guess was a combination of the second and third lectures in the course, we talked about the Doppler shift for sound waves, and we talked about the relativistic Doppler shift for light waves-- that was all in the context of special relativity. Now what we're going to face is the fact that cosmology is not really governed entirely by special relativity, although special relativity still holds locally in our cosmology.
But special relativity does not include the effects of gravity, and on a global scale, the effects of gravity are very important for cosmology, and therefore special relativity by itself is not enough to understand many properties of the universe, including the cosmological redshift. It turns out though, that there's a way of describing the cosmological redshift which will make it sound even simpler than special relativity. And I'll describe that first, and then afterwards, we'll talk a little bit about how this very simple-looking derivation jives with the special relativity derivation, which must also be correct, at least locally. OK. So, the question we want to ask ourselves, is suppose we look at a distant galaxy, and light is emitted from that galaxy. How will the frequency of that light shift between the frequency it had when it was emitted, and the frequency that we would measure as we received the light. So to draw the situation on the blackboard, let's introduce a coordinate system, x. And this will be our comoving coordinate system. X is measured in notches. We'll put ourselves at the origin-- there is us. And we'll put our galaxy out here someplace-- there is the distant galaxy that we will be observing. They galaxy will be at some particular coordinate, which I will call l sub c, c for coordinate distance, so l sub c is the coordinate distance to the galaxy. And then the physical distance-- is what we've been calling l sub p, p for physical, which depends on time, because there's Hubble expansion. So l sub p of t, as we've said a number of times already, is a of t times l sub c. The scale factor, which depends on time, times the coordinate distance, which does not depend on time. So everything just expands with the scale factor a of t. So this describes the situation, and now what we want to ask ourselves, is suppose a wave is being emitted by the galaxy-- and we'll be trying to track the distance between wave crests, which determines what the wavelength is. Since we'll only be interested in wave crests, we will talk in language where we just imagine there's a pulse at each crest, and what happens in between doesn't matter for what we're talking about. So we want to track successive pulses emitted by the galaxy. Now the important feature of our system is that we have argued that we know how to track light waves through this kind of coordinate system. If x is our cosmic coordinate, dx dt, the coordinate velocity of light, is just equal to the ordinary velocity of light, c, but rescaled by the scale factor. And the scale factor here is playing the role of converting meters to notches. So c is measured in meters per second. By dividing by a of t, we get the speed in notches per second, which is what we want, because x is measured, not in meters, but in notches. A notch being the arbitrary coordinate-- the arbitrary unit that we adopt to describe our comoving coordinate system. Now the important feature of this equation, for our current purpose, is that the speed of light, as we're going to follow these light pulses through our coordinate system, depends on time, but it does not depend on x. Our universe is homogeneous, so all points x are the same. So two pulses will travel at the same speed at the same time, no matter where they are. 
And that's all we really need, to understand the fact that if one pulse leaves our galaxy and is coming towards us-- I should do that with my right hand, because the second pulse is going to be my other hand-- as that second pulse follows it, the second pulse, at any given time-- even though the speed will change with time, but at any given time-- the second pulse will be traveling at the same speed as the first pulse. And that means that it'll look something like this. The speed might change with time, but as long as they both travel at exactly the same speed at any given time, they will stay exactly the same distance apart in our comoving coordinate system. Delta x, the x distance between the two pulses, will not change with time. And if the coordinate distance does not change with time-- the physical distance is always the scale factor times the coordinate distance-- it means that the physical wavelength of the light pulse will simply be stretched with the scale factor, which means it'll be stretched with the expansion of the universe, in exactly the same way as any other distance in this model universe will be stretched as the universe expands. So that's the key idea, and it's very simple, and those words really say it all. Delta x equals constant implies delta l physical is proportional to a of t, and that implies that the wavelength of the light, as a function of t, is proportional to a of t. Wavelength is actually what I was calling delta l physical, the distance between these two pulses, where each pulse represents a crest of the wave. And lambda is the standard letter for the wavelength. Now the wavelength is related to the period of a wave simply by the relationship that lambda is equal to c times delta t. Wavelength is just the distance the wave travels in one period. So if lambda is proportional to a of t, so is the time interval delta t, the period of the wave, going to be proportional to a of t. So we have been defining the redshift in terms of the period. So delta t observed over delta t at the source is equal to lambda observed over lambda at the source. Lambda and delta t are proportional to each other. And-- let me finish and I'll get to you, OK? AUDIENCE: Yes. PROFESSOR: This then, the ratio of the wavelengths, is just the amount by which the universe has stretched over that time. So just the ratio of the scale factors at the two times. So this is equal to just a of the time of observation, which I'll call t sub o, over a of the time of the source, t sub s. So this is the scale factor at the source, and the numerator here is the scale factor at observation. And this ratio of times, or ratio of wavelengths, or ratio of scale factors, is defined to be 1 plus z, as we have always done. The ratio of the time intervals we had defined originally as 1 plus z, we'll keep that definition, and that defines the redshift, z. Question now? Yes. AUDIENCE: Is that definition of lambda, does that have anything to do with the Lorentz invariant? Like, it just kind of struck me as the first term? PROFESSOR: Not sure what you mean? What-- Lorentz invariant what? AUDIENCE: Like the c delta tau squared equals c delta t-- PROFESSOR: Oh. Well, the delta t could be put into that formula, but that formula could measure any delta t. AUDIENCE: Yeah PROFESSOR: So of course Lorentz is a special case, but any delta t would be a special case of that formula, so I don't think there's a lot to say about it being a special case. AUDIENCE: All right, cool. PROFESSOR: Any other questions? Yes?
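Here is a small numerical illustration of the pulse-tracking argument, again for the a(t) proportional to t to the 2/3 model. The coordinate distance, emission time, and pulse spacing are arbitrary numbers chosen just for the illustration: two pulses emitted a short time apart arrive with their separation stretched by the ratio of scale factors, a(t_o)/a(t_s).

```python
# Track two successive light pulses through co-moving coordinates for
# a(t) = t**(2/3) (working units, b = 1), and compare the stretching of
# their time separation with a(t_o)/a(t_s).
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

c = 1.0
a = lambda t: t**(2.0 / 3.0)

l_c = 2.0      # co-moving (coordinate) distance to the galaxy, in notches
t_s = 1.0      # cosmic time at which the first pulse is emitted
dt_s = 1e-4    # time between the two emitted pulses (one period of the wave)

def arrival_time(t_emit):
    """Time at which a pulse emitted at t_emit has crossed coordinate distance l_c."""
    crossed = lambda t: quad(lambda u: c / a(u), t_emit, t)[0] - l_c
    return brentq(crossed, t_emit, 1e6)

t_o1 = arrival_time(t_s)
t_o2 = arrival_time(t_s + dt_s)

print("stretch of pulse separation:", (t_o2 - t_o1) / dt_s)
print("a(t_o) / a(t_s):            ", a(t_o1) / a(t_s))   # the two agree
```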
AUDIENCE: Is this like fundamentally different? Or is it similar [INAUDIBLE]? PROFESSOR: [INAUDIBLE] I was going to come to that. That's the question of how the cosmological redshift relates to the special relativity redshift that we derived earlier, and I'm coming to that immediately. Good question, we're getting there. Any other questions, though, before I go there? In my point of view, that's the next topic. OK. OK, so let me move on to exactly that question. How does this relate to what we already said about the redshift? This answer-- I would like to quantify things and say that it differs in two ways from the calculation that we've done previously. And the first is-- the reason why it's important to us-- is that this actually takes into account, effects which were not taken into account by our earlier calculation. In particular, even though we derived this by a very simple kinematic argument, which didn't seem to involve much math at all, it actually is incredibly strong, in that it encompasses not only special relativity, but also general relativity. It includes all the effects of gravity. If you think about what gravity might do to what we're talking about, gravity doesn't change the fact that the speed of light is going to be c over a of t. That really is just a unit conversion, combined with the fundamental physics assumption that the speed of light is always measured at c, relative to any observer. So when we put in gravity, this relationship continues to hold-- that was really all we used to drive this-- so gravity is not going to affect the answer. If you think about special relativity, is there something left out? Everything I said here, Newton would have understood perfectly. I didn't have to mention time dilation, which was crucial to our special relativity calculation of the redshift shift. Did I make a mistake? Is there some place where time dilation should come in here? The answer, really, is no, if you think about it. We had two clocks involved in our system, a clock on the galaxy, and a clock at us, which we used to measure the period of emission, and the period of reception, but those clocks are each at rest, relative to matter in the region-- even though they're moving with respect to each other-- so by definition, they do measure cosmic time. Cosmic time is a very peculiar kind of time, it's not the time in any inertial frame. These clocks are moving with respect to each other, so if you were defining inertial frame time, their clocks could never be synchronized and would never agree with each other. But in this concept of cosmic time, they do agree with each other, by construction. And since each clock is at rest, relative to its local matter, it measures this t that we're talking about, this cosmic time variable. And when the pulse arrives at us, when we measure delta t on our clock, that's exactly the quantity that, in the end, we want to talk about-- delta t sub observer. The quantity measured on our clock, which is a clock which also measures cosmic time. So there's no place for any time dilation to enter. It's not that we forgot it, it's not there. It's not part of this calculation. So this result, as simple as it looks, actually fully encompasses the effects of both special relativity and gravity. Now let me just mention, it's not obvious how gravity came in here. I'm telling you it satisfies-- includes all the effects of gravity. Where is gravity hidden? Let me throw that out as a question. 
How does gravity affect this calculation, even though I didn't have to mention gravity when I described the calculation? Yeah, in back. AUDIENCE: The scale factor? PROFESSOR: That's right, the scale factor. We have not yet talked about how a of t evolves. And the evolution of the a of t will explicitly involve the effects of gravity. And that's why this result depends on gravity, even though we didn't need to use gravity, or say anything about gravity to get the results. So this is the first difference. This calculation includes the effect of gravity. Which is through a of t. Now, because this calculation seems to include everything that the first calculation included and more, you'd expect to be more complicated, but it's less complicated. Could we have saved ourselves a lot of time last week by just giving this calculation, and deriving the other answer from it. The answer is, not easily, it would not have saved time, one can't, in principle, do it that way. But the other important difference between these two calculations is the variables that you're using to express your answer. Once you ask a question, if you ask the question vaguely, there could be many different answers to that question, depending on what variables are used to express the answer. So what we're doing here is we're expressing the redshift z for objects which are in fact at rest in the comoving coordinate system. The special relativity calculation-- I think I'm going to need another blackboard. The special relativity calculation, on the other hand gives z as a function of the velocity, as measured in an inertial coordinate system. So the answers are just being expressed in terms of totally different things, and the answer is so simple here because a of t already incorporates a lot of information, and we've just taken advantage of that to be able to give a very simple answer in terms of a of t, without yet saying how we're going to calculate a of t. Yes. AUDIENCE: [INAUDIBLE] two questions. One is about that constant time. PROFESSOR: Yes. AUDIENCE: How is that different than the Newton or Galilean idea of absolute time? PROFESSOR: OK. The question was how does the notion of cosmic time differ from Newton's or Galileo's notion of absolute time? And the answer is perhaps not much. Operationally, I think it is pretty much the same, but the real point is that Newton and Galileo did not know anything about relative effects like time dilation. So for them, it was just obvious that all clocks ran at the same speed, and time was naturally universal-- naturally absolute. In this case, we're aware of the fact that moving clocks run at different speeds. So if we were to take these clocks between us and the galaxy, and transport one to the other, depending on what path we used to transport them on, in the end, they would probably not agree with each other. So, we're setting up a definition of what we're going to define locally as time, recognizing that what time means here, versus what time means there, is a consequence of our assumptions about how we define things. It is not given automatically by the fact that all clocks will run at the same speed. Follow up? AUDIENCE: Yeah. An addition, this is slightly different. So, in the special relativity calculations, [INAUDIBLE] z could be [INAUDIBLE] PROFESSOR: Absolutely. AUDIENCE: So here we're only seeing a red shift, but we would obtain a blue shift if we allowed a of t to be decreasing, right? PROFESSOR: That's right. If the universe contracted, we would get a blue shift. 
AUDIENCE: [INAUDIBLE] PROFESSOR: That's right. It would correspond to the special relativity case. I was going to say a few words about the correspondence, but I'll answer questions first. Yes. AUDIENCE: I'd kind of like to add on to that question regarding the causal time. PROFESSOR: Yes. AUDIENCE: Isn't the fact that you've scaled the speed of light, that's what takes care of this discrepancy between the clocks themselves? PROFESSOR: The question is, does the fact that we've rescaled the speed of light take care of the discrepancy of times? Well, partially, but it doesn't say anything about what moving clocks will do. If you had a clock moving through this universe, you would have to calculate a time dilation for that clock, just as in any other case. AUDIENCE: What about the two end points, say, of the path. Is that why you're scaling the speed of light? PROFESSOR: Not really. The scaling of the speed of light really comes about through the scaling of space. This in fact is just the scale factor that we scale space. Time is measured locally on every clock, and we don't think of it as being rescaled. The speed of light looks different, simply because a notch is changing with time. And that formula tells you how to convert meters per second, which will always be the same to the speed of light, to notches per second, which will change as the size of a notch changes AUDIENCE: Right,yeah. OK. I understand. PROFESSOR: Yes. AUDIENCE: Further in the line of questioning about cosmological time-- so we expect that us and that other galaxy have simultaneous clocks relative to the cosmic time, and also we expect our own clocks to be simultaneous with our cosmological clocks, I assume. So if we-- Is that true? PROFESSOR: That's right, yes. Our own clock is just an example of one of the clocks sitting on the place called us, and all clocks sitting on that place will behave the same way. And they define the local definition of cosmic time. AUDIENCE: So if we take those clocks and move very slowly across to the other galaxy, in cosmological-- in comoving coordinates, we wouldn't expect there to be any time dilation, in the respect that clocks stay simultaneous. Safe to say, that we would think it would be simultaneous with us the whole time, until we got to the other galaxy. And then it would still be simultaneous. But, they're moving at a speed relative to us, so we wouldn't expect [INAUDIBLE]. PROFESSOR: Right. OK. You raise a good question, which I would have to think about the answer. If we brought-- if we carried our clock very slowly to this galaxy, and the limit was infinitely slow, would it agree when it got there? Let me think about that, and answer it next time. I'm not altogether sure. Any other questions? OK. I want to say something about the relationship between these two calculations. What would happen if we tried to actually compare the answers that we got for the relativistic Doppler shift, and for this answer, for the cosmological redshift. There's really only one case where it would be legitimate to compare them. Since the calculation we just did was supposed to include the effects of gravity, and special relativity calculation does not include the effects of gravity, the only way we should be able to compare them, and see that they agree, would be the case where gravity is negligible. And one can talk about a cosmological model where gravity is negligible, there's nothing inconsistent about that. 
If gravity were negligible, what would we expect for the behavior of a of t? [INAUDIBLE] I hear a constant. Constant is certainly a possibility, but it's not the only possibility, so try to think a little harder, and ask if there are other possibilities. Yes. AUDIENCE: I'm sorry, could you rephrase the question? PROFESSOR: Rephrase the question. The question is, if gravity were negligible, what would we expect for the behavior of the scale factor a of t? And so far, it's been suggested that it could be a constant, and that's true, but that's not the most general answer. Yes. AUDIENCE: It could be negative. PROFESSOR: Could be negative? I don't know what that would mean, actually. AUDIENCE: What do you-- PROFESSOR: It would mean the universe was inside out. AUDIENCE: Oh. PROFESSOR: It would really just mean that you've reversed your coordinates. I don't think it would have any significance. AUDIENCE: Oh, the expansion would actually be a contraction? PROFESSOR: Oh, well it could decrease with time, that's not the same as being negative. AUDIENCE: Oh, I'm sorry PROFESSOR: It could always increase or decrease with time, whether gravity is present or not. For our universe it's increasing with time, but one could imagine a contracting universe. Yes, Aviv. AUDIENCE: Linear? PROFESSOR: Linear. That's right. If there's no gravity, a of t should be a constant times t. The constant could be zero, and then a of t is-- and maybe I should say it should be a constant plus a constant times t, and then in a special case it could just be a constant. But it should vary linearly with time. And that simply means that all velocities are constant. If all velocities are constant, then a of t varies linearly with time, so that the distance-- the famous relationship is that the distance is a of t times l sub c. If this distance were growing linearly with time, it would just correspond to a constant velocity, which is certainly allowed in the absence of gravity. It would mean that a of t was growing linearly in time. So that would be the special case of absence of gravity, a of t growing linearly in time. And one can always set the constant that would be added to the linear term to zero, just by choosing the zero of time to be the time at which a of t is zero. So, in the absence of gravity, one can say that a of t should just be proportional to t. So for that special case, these two calculations should really agree. And it will be, I'm pretty sure, an extra-credit homework problem coming up soon, in which you'll get a chance to calculate that. It's not easy, which is why it will be an extra-credit problem, probably, and not a required problem, because it involves understanding the relationship between these two coordinate systems. The special relativity answer is given in an inertial coordinate system which, when gravity is present, doesn't exist at all. In the presence of gravity, there is no global inertial coordinate system. But in the absence of gravity, there is. But it's related to this coordinate system, where everything's expanding, in a complicated way, because of the various time dilations and Lorentz contractions associated with the motions that are taking place in our expanding universe. So what you'll need to do is to figure out the relationship between these two coordinate systems. And when you do, and actually compare the answers, you find that they actually do agree exactly. This is all perfectly consistent with special relativity, in the special case where there is no gravity. OK.
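For what it's worth, here is a tiny numerical sketch of the gravity-free check that the extra-credit problem asks for. It assumes the standard identification of the expanding (Milne) coordinates with an inertial frame, in which a comoving object at comoving coordinate x follows the inertial-frame trajectory T = t cosh(x), R = c t sinh(x), with t its proper time; that mapping is an assumption brought in from outside the lecture, and this is only a sanity check, not a substitute for working the problem out.

```python
# Gravity-free (Milne) check: for a(t) = c*t, with the comoving coordinate x
# chosen so that a comoving object's inertial-frame trajectory is
# T = t*cosh(x), R = c*t*sinh(x), the special-relativity Doppler shift
# equals the cosmological redshift a(t_o)/a(t_e).
import numpy as np

c = 1.0
t_e, t_o = 2.0, 7.0        # arbitrary emission and observation times

# Light travels at dx/dt = c/a(t) = 1/t in these comoving coordinates, so the
# comoving distance it covers between t_e and t_o is:
x = np.log(t_o / t_e)

# Special-relativity picture: the source is an inertial observer receding at
beta = np.tanh(x)
doppler = np.sqrt((1.0 + beta) / (1.0 - beta))

# Cosmological picture: 1 + z = a(t_o)/a(t_e) = t_o/t_e
cosmological = t_o / t_e

print(doppler, cosmological)   # the two agree
```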
Ready to leave cosmological redshift altogether, unless there are any further questions? OK. In that case, Onward to the next major topic. We've now finished what I wanted to say about the kinematics of homogeneously expanding universes, and now we're ready to talk about the dynamics. What happens when we try to think about what gravity is going to do to this universe, to be able to calculate how a of t is going to vary with time. That will be the only goal, to understand the behavior of a of t. Now this problem, in a way, goes back to Isaac Newton. And I might just give a little aside here, and mention that one of the fun things about cosmology, actually, is that if one looks back at the history of cosmology, many great physicists have made great blunders in trying to analyze cosmological questions. And in the discussion today, we'll be discussing one of Newton's blunders. And to me, it's very consoling to know that even physicists as great as Newton can make stupid mistakes. And he actually did make a stupid mistake, in terms of analyzing the cosmological effect of his own theory of gravity. At issue was Newton's view of the universe, and Newton, like everybody, really, until Hubble, believed that the universe was static. He imagined the universe as a static distribution of stars scattered through space. And early in his career, from what I understand of the history, he assumed that this distribution of stars was finite, and an infinite background space. But he realized at some point that if you had a finite distribution of mass, in otherwise empty space, that everything would attract everything else, with his one over r squared force of gravity-- which he knew about, he invented it-- and the result would be everything would collapse to a point. So he decided that would not work, but he was still sure everything was static. Because everything looked static, stars don't seem to move very much. So he asked what he could change, and decided that instead of assuming that the stars made up a finite distribution, he could assume that they were an infinite distribution, sharing all of space. And he reasoned-- and this is really where the fallacy showed up-- but he reasoned that if the stars filled the infinite space that, even though they would all be tugging on each other through the force of gravity, they wouldn't know which way to go. And since they wouldn't know which way to go, because they'd be tugged in all directions, they would stand still. So he believed that an infinite, uniform, distribution of mass would be stable-- that there'd be no gravitational forces resulting from the masses in this infinite distribution. And I have some quotes here, which I think are kind of cute, so I'll show them to you. Newton had a long discussion about these issues with Richard Bentley, the theologian. And we get to read about it, because all these letters have been preserved. In fact, I'm told that the original letters are actually still in existence at Trinity College in Cambridge University. And you can find them on the web even, I'll give you a web reference for the text of these letters, and they're in books and various places. So let me read this to you. I think it's a cute quotation. "As to your first query"-- by the way, I think we don't have the letters that Richard Bentley sent to Newton, only the responses. But Newton fortunately responded in a way that made the questions pretty clear, so it's not an important problem in understanding what's going on. 
Newton says, "It seems to me that if the matter of our sun and planets and all the matter of the universe were evenly scattered throughout all the heavens, and every particle had an innate gravity toward all the rest, and the whole space throughout which this matter was scattered was but finite, the matter on the outside space would, by its gravity, tend toward all the matter on the inside"-- this is a finite universe he's talking about -- and he says, "that by its consequence, everything would fall down into the middle of the whole space, and there compose one great spherical mass." So, there he's describing how it would not work if you had a finite collection of matter. But, he says, "If the matter was evenly disposed throughout an infinite space, it could never convene into one mass, but some of it would convene into one mass, and some into another, to as to make an infinite number of great masses, scattered at great distances from one to another, throughout all that infinite space." So he thought there'd be local coagulation, which of course is what we see in our real world. We see stars that have formed, and now we know about galaxies, which Newton had no way of knowing about. That's the kind of coagulation process that he's discussing. And he-- oops, sorry. "And thus might the sun and the fixed stars be formed, supposing the matter were of a lucid nature." That's a cute phrase. I can tell you what it means, it may not be obvious. But at this point, nobody had any idea what the sun was made out of, and why the sun was different from the earth. In fact nobody really had much of a real idea what the earth was made out of either, here. Chemistry wasn't really invented yet. So the assumption was that there were two kinds of matter, lucid matter and opaque matter. Where lucid matter is the stuff the sun is made out of, and the stars, that glows, and is fundamentally different in some way, that was of course not understood at all, from opaque matter, which is what the Earth is made out of. You can't see through it, and it doesn't, obviously, glow. So here, when he's talking about matter forming the stars and the sun, he says if the matter was lucid, if it was the kind of matter that glows. Going on-- so far, what he said sounds pretty good. Going on, he goes on now to talk more about this lucid versus opaque business. And I think it's cute. I don't know where exactly it's going, but it shows something about Newton's personality, which one might not have known otherwise. "But how the matter should divide itself"-- I should also warn you, all of this is one sentence. If you think sometimes my sentences sound convoluted, just think how lucky you are that you don't have Newton here as your lecturer. This is just impossible. So, "But how the matter should divide itself into two sorts"-- how we'd have lucid and opaque matter in the right places-- "and that part of it which is to comprise a shining body should fall down into one mass, and make a sun, and the rest, of which is fit to compose an opaque body, should coalesce not into one great body, like shiny matter, but into many little ones"-- somehow he's forgotten about the stars here, when he's talking about the sun and the planets, many planets and one sun. So he says that, "how the opaque matter should fall instead into many little masses"-- and then he talks about other possibilities. It's wonderful the way he lists all the possibilities. 
"Or," he says, "if the sun were at first an opaque body, like the planets, or if the planets were lucid bodies like the sun, how he alone"-- he being the sun, if you track everything back, "how the sun alone should be changed into a shiny body, while all the"-- lost track -- "where all they"-- of the planets-- "continue to be opaque, or"-- he's considering all possibilities-- "or they all be changed to opaque ones, while he,"-- the sun-- "remains unchanged as a lucid one." He does not know how to explain all that, is what he's saying. Bottom line of the sentence is, I don't know, I don't have a clue. And he says, "I don't think it's explicable by mere natural causes, but am forced," Newton says, "to ascribe it to the council and contrivance of a voluntary agent." So the theory of intelligence design, as well the theory of gravity, actually both go back to Newton, it turns out. Newton was a very religious person, and in certain aspects of physics, he was happy to ascribe to a voluntary agent, as he calls it. I have some references here, and I'll be posting this so you'll be able to read those references and type them in, if you want. Now Newton decided that you could not have a finite distribution, because it would collapse. If you had an infinite distribution, he thought it would be stable, but he apparently had heard different arguments to that same conclusion. And one argument that you might give for saying that the infinite distribution would be stable would be the argument that if you look at the force one any one particle, there is an infinite force pulling it to the right-- my right, your left-- and an infinite force pulling it to my left, your right, and since they're both infinite, they would cancel each other. Newton did not accept that argument. He was sophisticated enough to realize that infinity minus infinity isn't necessarily zero. And he has a bit of a tirade on that, that I thought was worth quoting. And this is a second letter to the same Richard Bentley. I guess it was Bentley who made this argument, and Newton rejected it. Infinity minus infinity, Newton realized, is ambiguous. It's not something that we should necessarily think of zero. "But you argue in the next paragraph of your letter that every particle of the matter in the infinite space has an infinite quantity of matter on all sides, and by consequence, an infinite attraction every way, and therefore must rest in equilibrio, because all infinities are equal:"-- he's summarizing Richard Bentley's argument-- "yet you suspect a parologism"-- that means logical error, I think-- "in this argument: and I can see the paralogism lies in the position that all infinities are equal. The generality of mankind consider infinities no other ways than indefinitely"-- and in this sentence they said all infinities are equal-- "though they would speak more truly if they should say that they are neither equal nor unequal, nor have any certain difference or proportion, one to another." So he realizes that the ratio of infinity could be anything, and infinity minus infinity could be anything, all of which is consistent with our modern view of how to do the mathematics. "In this sense, therefore, no conclusions can be drawn from them about the equality, proportions or differences of things, and they that attempt to do so usually fall into paralogisms." He goes on, now I just have one more Newton quote-- I like Newton quotes-- I have one more Newton quote, again from the same series of letters. 
These are all from 1692 and 1693, I believe, where he gives an example-- I think this follows the quotes of the previous slide immediately-- where he gives an example of a false argument that you get into-- and apparently it's an argument that he had heard from other people-- if you think all infinities are equal. What he says is, "So when men"-- he doesn't say who men are, and I don't really know the history. He may referring to some particular philosophers at the time-- "when men argue that the infinite divisibility of magnitude by saying that an inch may be divided into an infinite number of parts, the sum of those parts would be an inch-- and a foot can be divided into an infinite number of parts, the sum of those parts must be a foot-- and therefore, since all infinities are equal, these sums must be equal." Understand the argument here. He's saying that if you divide and inch into an infinite number of parts-- this is all you've been given as a foil. He's not claiming the argument is right, he's claiming it's wrong-- that argument is that if you divide an inch into an infinite number of parts, you get an infinite number of points, if you put them together, you get an inch. If you divide a foot into an infinite number of parts, you get an infinite number of points, and if you put them together, you should get a foot. But they're both an infinite number of points in the description. So if you think all infinities are equal, the infinite number of points that make an inch should be the same as the infinite number of points that make a foot, therefore a foot should equal an inch, obviously. Right. Not right, he know. So he says that the falseness of the conclusion shows an error in the premises, and the error lies in the position that all infinities are equal. So Newton has given us a very nice example of how you can convince yourself that you get into logical paradoxes if you pretend that all infinities are equal. But, this does not change the fact that Newton was still convinced that an infinite distribution of mass would be stable. The argument that convinced him was not the infinity on each side, but rather the symmetry. Newton's argument, the one he believed, was that if you look at any point in this infinite distribution, if you look around that point, all directions would look exactly the same, with matter extending off to infinity, and therefore there'd be no direction that the force should point on any given particle. And if there's no direction in that force at the point, it must be zero. That was the argument Newton believed. OK. What I want to do now is to talk about this in a little bit more detail, and try to understand how modern folks would look at the argument. And by the way, I might just add a little bit more about the history first. Newton's argument, as far as I know, was not questioned by anybody for hundreds of years, until the time of Albert Einstein. Albert Einstein, in trying to describe cosmology using his new theory of general relativity, was the first person, as far as I know, to realize that even if you had an infinite distribution of mass, it would collapse-- and we'll talk about why. And Einstein did realize that the same thing would happen with Newtonian physics, it's not really a special feature of general relativity, it just somehow historically took the invention of general relativity to cause people to rethink these ideas and realize that Newton had been wrong. So, what's going on. 
The difficulty in trying to analyze things the way in which Newton did is that Newton was thinking of gravity, in the language in which he first proposed it, as a force acting at a distance. If you have two objects in space, a distance r apart, they will exert a force on each other proportional to one over r squared. Since the time of Newton, other ways of describing Newtonian gravity itself have been invented, which make it much more clear what's going on. The difficulty in using Newton's method-- we'll talk about it in more detail in a few minutes-- is simply that if we try to add up all of these one over r squared forces, we get divergent sums that we have to figure out how to interpret. But to understand that Newton couldn't possibly have been right, the easiest thing to do is to look at other formulations of Newton's gravity. And I'll describe two of them, both of which will probably have some familiarity to you. The first one I'm quite sure will. And I'm going to describe it by analogy with Coulomb's law, because 802 goes a little further with Coulomb's law than any course you are likely to have taken has gone with gravity. But Coulomb's law is really the same as the force law of gravity. So Coulomb's law says that any charged particle will create an electric field, which is the charge divided by the distance squared, times the unit vector pointing radially outward. That's Coulomb's law. Sometimes there are constants in here, depending on what units you measure q in, but that won't be important for us. So I'm going to assume we're using units where that constant is one. You know that Coulomb's law can be reformulated in terms of what we call Gauss's law. If Coulomb's law is true, you can make a definite statement about what happens when you integrate the flux of the electric field over any surface: it's proportional to the total amount of charge inside. So Coulomb's law implies Gauss's law, which says that the integral over any closed surface of E dotted into da is equal to 4 pi times the total enclosed charge, where Q enclosed is the total amount of charge inside that volume. And what constants appear here depends on what constants appear there, which depends on what units you're using, but these equations are consistent. Those are the correct constants, if you measure charge in a way which makes the electric field be given by that simple formula. OK, so I'm going to assume you know this, that you learned it in 802 or elsewhere. If this is true, then, since this is the same inverse square law, we can write down Newton's law of gravity, almost as Newton would have written it, expressing it as the acceleration of gravity at a given distance from an object. So we could write Newton's law of gravity by saying that the acceleration of gravity is equal to minus Newton's constant times the mass of the object, the analog of the charge up there, divided by r squared, times r hat. Again, it's the inverse r squared law, and the r hat pointing radially outward is just like Coulomb's law, except for the constant out front. The constant actually has the opposite sign, which is important for some issues, but not for what we're saying now. The important point is that this can also be recast as a Gauss's law, and it's called Gauss's law of gravity. And the only thing that differs is a constant out front, so it's a trivial transformation. The integral over any closed surface of the gravitational acceleration vector, little g, dotted into da is equal to minus 4 pi G times the total mass enclosed.
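[Editor's note: a minimal numerical sketch, not part of the lecture, illustrating the Gauss's law of gravity just stated: the flux of little g through any sphere centered on a point mass works out to minus 4 pi G M, independent of the radius. The mass and radii below are arbitrary illustrative values.]

```python
# Numerically integrate g . dA over a sphere of radius R centered on a point
# mass M, and compare with -4 pi G M (Gauss's law of gravity).
import numpy as np

G = 6.674e-11          # Newton's constant, SI units
M = 5.0e24             # an arbitrary test mass, kg

def flux_through_sphere(R, n_theta=400):
    # By azimuthal symmetry, integrate over theta (midpoint rule) and
    # multiply by 2 pi.  On the sphere g is radial, points inward, and has
    # magnitude G M / R^2; the outward area element is R^2 sin(theta) dtheta dphi.
    dtheta = np.pi / n_theta
    theta = (np.arange(n_theta) + 0.5) * dtheta
    g_radial = -G * M / R**2
    return 2.0 * np.pi * np.sum(g_radial * R**2 * np.sin(theta)) * dtheta

for R in (1.0e6, 1.0e7, 1.0e8):
    print(R, flux_through_sphere(R), -4.0 * np.pi * G * M)
```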
The only difference is the minus sign, and the factor of g, which follow from the difference of the minus sign and the factor of g in the formula on the left. OK, does everybody believe that? OK, now let's think about this homogeneous distribution of mass that Newton was trying to think about. Newton's claim was that you could have a homogeneous distribution of mass filling all of the infinite space, and that would be static, that is, there would be no acceleration. No acceleration means Newton is claiming in this language that little g could be zero everywhere. But if you look at this formula, if little g is zero everywhere, then the integral of g over any surface is going to zero, and therefore the total mass enclosed had better be zero. But if we have a uniform distribution of mass, the total mass enclosed will certainly not be zero for anything with non-zero volume. So clearly this assertion that the system would be static was in direct contradiction with the Gauss's law formulation of Newton's law of gravity. Just for the fun of it, I'll give you another similar argument using another more modern formulation of Newtonian gravity. Another way of formulating Newtonian gravity, which you may or may not have seen-- and if you haven't seen it, don't understand what I'm saying, don't worry about it, it's not that important. But for those of you who have seen it, I'll give you this argument. Another way of formulating Newtonian gravity is to introduce the gravitational potential. So I'm going to use the letter phi for the gravitational potential. I'll tell you in a second how that relates to gravity-- well, I guess I'll tell you now. It's related to the gravitational acceleration by g is equal to minus the gradient of phi, and gradient of phi is something that you probably all learned in 802, but I'll write down the formula anyway. It's equal to i hat, a unit vector in the x direction, times the derivative of phi with respect to x, plus j hat, a unit vector in the y direction, times the partial of phi with respect to y, plus k hat times the partial of phi with respect to z. And once one defines this gravitational potential, one can write down the differential form of the Gauss's law, which becomes what's called Laplace's equation. And it says the del squared phi is equal to 4 pi times Newton's constant times rho, where rho is the mass density. And this is called Laplace's equation, and if you're given the mass density, it allows you to find the gravitational potential, and then you can take its gradient, and that determines what g is. And it's equivalent to the other formulations of gravity. But it gives us another test of Newton's claim that you could have a homogeneous distribution of matter, and no gravitational forces. If there are no gravitational forces, then g would have to be zero, as we said a minute ago, and this formulation of g is zero, that implies the gradient of phi is zero. If we look at the formula for the gradient, it's a vector. For the vector to be zero, each of the three components has to be zero, and therefore the derivative of phi with respect to x has to vanish, the derivative of phi with respect to y has to vanish, the derivative of phi with respect to z has to vanish, that means phi has to be constant everywhere, it has no derivative with respect to any spacial coordinate. So if g vanishes, the gradient of phi vanishes, and phi is a constant throughout space. 
And if phi is a constant throughout space, now we can look at this formula-- and I forgot to write down the definition of del squared. Del squared phi is defined to be the second derivative of phi with respect to x, plus the second derivative of phi with respect to y, plus the second derivative of phi with respect to z. So if phi is a constant everywhere, as it would have to be if there were no gravitational forces, then one can see immediately from this equation that del squared phi would have to be zero, and one can see from that equation that rho would have to be zero-- there would have to be no mass density. But Newton wanted to have a non-zero mass density, the matter of the universe spread out uniformly over an infinite space. So this is another demonstration that Newton's argument was inconsistent. Yes. AUDIENCE: I'm sorry, what does phi represent? PROFESSOR: Phi is really defined by these equations, it's defined, really, by this equation. The name is that it's the gravitational potential. AUDIENCE: Potential. PROFESSOR: And its physical meaning is simply that it gives you another way of writing what g is. AUDIENCE: Yeah. PROFESSOR: Any other questions? OK, so the conclusion seems to be that Newton has not gotten the right answer here, but we still have to analyze Newton's argument a little bit more carefully, to see exactly where he went wrong. So, the next thing I want to talk about is the ambiguity associated with trying to add up the Newtonian gravitational forces, as Newton was thinking, for an infinite universe. I mentioned that the real problem with Newton's calculation is that the quantity he was calculating actually diverges, and you have to be more careful about trying to calculate it in a reliable way. So to make this clear, I want to begin by giving an example of this general notion of integrals that give ambiguous values. And I want to define just a couple of mathematical terms. I want to consider-- again, starting by talking about general functions, and when integrals are well defined and when they're not. I want to imagine that we just have some arbitrary function f of x, where x, for now, is just one variable. We'll generalize this to three dimensions, which is the case that we'll be interested in, but we'll start by talking in terms of one variable. If we have a function f of x, we can discuss what I'll call I sub 1, which is the integral, from minus infinity to infinity, of f of x dx. This is exactly the kind of integral that you need when you're thinking about adding up all the gravitational forces acting on a given body. Now I want to consider the case where I1 is finite. I'm sorry. I need to first define more carefully what I mean by I1. OK, to even define what you mean by this minus infinity to infinity, you should say something a little bit more precise. So we could define I1 a little bit more precisely, and I'll call this I1 prime, for clarity. This will really just be a clearer way of describing what one probably meant when one wrote the first line. We can define the integral from minus infinity to infinity as the limit, as some quantity L goes to infinity, of the integral from minus L to L of f of x dx. So this says to do the integral from minus L to L, and if we assume f of x is itself finite, this is always finite. I will assume f of x itself is finite; we'll only worry about the convergence of the integral. So for any given L, this is a number, and then you can ask, does this number approach a limit as L goes to infinity?
And if it does, you say that's the value of this integral. That just defines what we mean by the integral from minus infinity to infinity. I want to now consider the case where that exists. So consider the case where I1 prime is-- I'll write it as less than infinity, meaning it has some finite value. The limit as L goes to infinity exists. But now, I want to also consider-- and I'll move on to the next blackboard-- an integral that I'll call I2, for future reference, which is just defined to be the integral from minus infinity to infinity, defined as the same kind of limit that we used here, but I won't rewrite it. I'll just assume that the integral from minus infinity to infinity means that limit. But now I want to consider the integral from minus infinity to infinity of the absolute value of f of x dx. And now I want to introduce some terminology. If I2 is less than infinity, if it converges, then I1 is called absolutely convergent. So absolutely convergent means that it would converge, even if you had absolute value signs. Conversely, suppose this I2 is divergent-- and I'll just write that as I2 equals infinity-- meaning that limit does not exist, it's a divergent integral. But remember, we assumed I1 did exist, so I1 still converges, but it's called conditionally convergent. So if an integral converges, but the integral of the absolute value of that same integrand does not converge, that's the case that's called conditional convergence. And the moral of the story, that I'll be beginning to tell you now, is that conditionally convergent integrals are very dangerous. What makes them dangerous is that they're not really well defined. You can get any value you want by adding up the integrand in different orders. As long as you stick to a particular order, which is how we define the symbol, you will get a unique answer, but if, for example, you just shift your origin, you can get a different answer, which is something you don't usually expect. You usually think that if you're just integrating over the whole real line, it doesn't matter what you take to be the center of the line. So things become much less well defined when one is discussing conditionally convergent integrals. And before we get to the particular integral that we're really interested in, which is trying to add up the gravitational forces of an infinite distribution of matter, which I'll get to, I'm going to give you an example of a very simple function that just illustrates this ambiguity: the integral converges, but is not absolutely convergent, and you can get any answer you want by adding it up in different orders-- adding up the pieces of the integral in different orders. So let me consider an example-- and this is just to illustrate the ambiguity-- the example I'll consider will be a function f of x, which is defined to equal plus 1 if x is greater than zero, and minus 1 if x is less than zero. And I have neglected to specify what happens if x is exactly equal to zero, but when you integrate, that doesn't matter. A single point never matters. So you could make it anything you want at x equals zero; it won't change anything we're going to be saying. Let me draw a graph of this. f of x versus x. I'll put plus 1 there, and minus 1 there. The function is plus 1. Maybe I have a little bit of colored chalk here to draw the function. The function is plus 1 for all positive values of x, and minus 1 for all negative values of x. And there's the function.
And if we integrate it symmetrically, following this definition of what we mean by integrating from minus infinity to infinity, we do get a perfect cancellation. When we integrate from minus L to L, we get zero, because you get a perfect cancellation between the negative parts and the positive parts. And then if you take the limit as L goes to infinity, the limit of zero is zero. There's not really any ambiguity to that statement. So in the order specified, this has a unique integral, which is zero. But it depends on how you've chosen to add things up. In particular, if you just change your origin, and integrate starting moving outward from the new origin, you'll get a different answer, and that's what I want to illustrate next. Suppose we consider the limit as L goes to infinity-- we'll take the limit the same way-- but instead of integrating from minus L to L, we integrate from a minus L to a plus L of f of x dx. Now this is really the same integral; we've just basically changed our origin by integrating from a outwards. In the special case a equals zero, it's exactly the same as what we did before, but if a is non-zero, it means that our integral is centered about x equals a, instead of centered about x equals zero. So we can draw that on the blackboard. If we let a be over here, our integral will go from a minus L, which will be to the left by a distance L, and it will extend to a plus L, which will be to the right by a distance L. The integral defined by the equation on the blackboard at the left will correspond to that region of integration. And the specification is that we should do that integral first, and then take the limit as L goes to infinity, and see what we get. It's easy to see what we will get. Once L is bigger than a, you can see that the answer won't change any more as we make L bigger. As you make L bigger, we will always be adding a certain amount of minus 1 on the left, and a certain amount-- the same amount-- of plus 1 on the right, and they will cancel each other once L is bigger than a. And we don't care about small L, because we're only interested in taking the limit of large L, but we should look at what happens when L equals a, since any bigger value of L will give us exactly the same number. And when L equals a, the integral will go from 0 up to a plus L, which is 2a when L equals a. So the integral will be only on the positive side, and we'll have a length of 2a, and that means the integral will be 2a, because we're just integrating 1 from 0 to 2a. And that will be what we get for any bigger value of L also, because as we increase L, as I said, we just get a cancellation between adding more plus 1 on the right, and adding more minus 1 on the left. So this limit has a perfectly well defined value, which is 2a. And a is just where we chose to start integrating, so a could be anything. We could choose a to be anything we want if we're free to integrate in any order. So we can get any answer we want, if we're free to integrate in any order, to add up the pieces of this integral in the order that we choose. And that is a fundamental ambiguity of conditionally convergent integrals. And what we'll see is that trying to add up the force on a particle in an infinite mass distribution is exactly this kind of conditionally convergent integral. And that's why you can get any answer you want, and it doesn't really mean anything unless you do things very carefully. OK. Let's move on.
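[Editor's note: a small numerical illustration, not part of the lecture, of the ambiguity just worked out on the blackboard: integrating f(x) = +1 (x > 0), -1 (x < 0) symmetrically about 0 gives 0, while integrating symmetrically about x = a gives 2a, for every L larger than a.]

```python
# Riemann-sum approximation of the integral of f(x) = sign(x) over the
# interval [center - L, center + L], for two choices of the center.
import numpy as np

def f(x):
    return np.sign(x)   # +1 for x > 0, -1 for x < 0

def integrate(center, L, n=2_000_000):
    # Midpoint rule on n equal subintervals.
    dx = 2.0 * L / n
    x_mid = (center - L) + (np.arange(n) + 0.5) * dx
    return np.sum(f(x_mid)) * dx

a = 3.0
for L in (10.0, 100.0, 1000.0):
    print(L, integrate(0.0, L), integrate(a, L))   # about 0, and about 2a = 6, for every L > a
```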
We only have a few minutes left, which I guess means I will set up this calculation, but not quite get the answer, and we'll continue next time. I actually have some diagrams here on my slides. What I want to do now is calculate the force on some particle in an infinite mass distribution, and show you that I can get different answers, depending on what order I add things up. I will add things up in a definite order at each stage, so I will get a definite answer at each stage, though I'll get different answers, depending on what ordering I choose. So, we're going to start by trying to calculate-- what we're interested in, actually, is calculating the gravitational force on some point, p, in an infinite distribution of mass. Mass fills the slide, and everything, out to infinity. And we're going to add up that mass in contributions that are specified. And for our first calculation, we're going to add up the forces for masses that are defined in concentric shells, where we're going to take the innermost shell first, then the second shell, then the third shell, going outward from the center. In that case, it's easy to see that the force on p calculated in that order of integration is 0, because every shell has p exactly at the center, and by symmetry, it has to cancel exactly. In fact, we know-- and we'll use this fact shortly-- that the gravitational field of a shell, inside the shell, is exactly zero-- Newton figured this out-- and outside the shell, the gravitational field of a shell looks exactly the same as the gravitational field of a point mass located at the center of the shell with the same total mass. So we're going to be using those facts. And clearly those facts indicate that, for this case, the answer is 0. The force at p equals 0. Now we're going to consider a more complicated case-- going too far, here, don't want to tell you the answer yet-- in this more complicated case, we're going to still calculate the force at the point p, but we're going to choose concentric spherical shells which are centered around a different point, q. So q just defines the shells that we're going to use for adding things up, and we're still going to add up all the shells out to infinity, so we're going to be adding up the force on p due to the entire infinite mass distribution, but we'll be taking those contributions in a different order, because we're going to be ordering it according to shells that are all centered on q, starting with the innermost, and then the second, and then the third, and so on. Now in this case, we can first talk about the contribution of the shaded region, which is all the shells around q which have radii less than the distance to p. For all of these shells, p lies outside the shell. And therefore all of those shells act just like a point mass, with the same total mass concentrated at q, the center of all those spheres. So the mass that's in the shaded region will give a contribution to the force at p which is just equal to the force of a point mass with the same total mass, located at q. On the other hand, all the shells outside will be shells for which p is inside. P is no longer at the center of those shells, but Newton figured out, and I'll assume that we all believe, that it doesn't matter. Inside a spherical shell, the gravitational force is zero anywhere, no matter how close you are to the boundaries. It just cancels out perfectly.
As you get closer to one boundary, you might think you'd be pulled toward that boundary, but-- let me just tell you what's happening here-- as you get closer to one boundary, it is true that the force pulling you towards the particular particles at that boundary gets stronger, because it's 1 over r squared, but as you get close to this boundary, there's more mass pulling you the other way, because all the mass except for a little sliver is on the other side. And those two effects cancel out exactly. So the force on a particle inside a shell is exactly zero, as you can prove very easily, by the way, from the Gauss's law formulation of gravity. And therefore, the outer shells give no contribution. So we've now completely calculated the force at p: it's just equal to the force due to the shaded mass. It's just given by that simple formula, it's G times the total mass, divided by b squared, where b is the distance between q and p. And it's non-zero. So you get a 0 or a non-zero answer, depending on what ordering you chose for adding up the pieces of the mass that are going to make up this infinite distribution. And furthermore, this answer could be anything you want, because I could let b be anything I want. And this answer depends on b, and becomes arbitrarily large in magnitude as b gets bigger. The mass grows like b cubed. It might look like it falls with b, but actually it grows with b. And we could get it to point in any direction, by choosing q on any side we want of p, so we can get, really, any answer we want by using this particular way of adding up the masses. Yes. AUDIENCE: Well, although we can get any answer we want, every answer [INAUDIBLE] PROFESSOR: Every answer, say again? AUDIENCE: Like every single one of those answers corresponds to a setup. I mean like the g equals 0, [INAUDIBLE] PROFESSOR: Well, the reason it's a problem is that these shells don't really exist. We're just thinking about these shells. The shells only determine what order we are going to use for adding up the different contributions. The matter is just uniformly distributed, and there are no shells present. The shells are purely a mental construct, which should not affect the answer. This is not part of the physical system at all. The shells only reflect the order that we have used to add up the masses. So we'll stop there. If anybody has questions, we can talk after class, and we can talk more about the question at the beginning of the next class, but class is over for now.
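[Editor's note: a quick numerical sketch, not part of the lecture, of the second ordering described above. Only the shells centered on q with radius less than b contribute, acting like a point mass at q, so the acceleration at p comes out to (4/3) pi G rho b, growing with the arbitrary choice of b. The density value below is purely illustrative.]

```python
# Acceleration at p when the uniform mass distribution is summed in shells
# centered on a point q a distance b away from p.
import numpy as np

G = 6.674e-11       # Newton's constant, SI units
rho = 1.0e-26       # a uniform mass density, kg/m^3 (illustrative value only)

def accel_toward_q(b):
    # Shells with r < b act like a point mass at q; shells with r > b give zero.
    M_inner = (4.0 / 3.0) * np.pi * b**3 * rho
    return G * M_inner / b**2          # = (4/3) pi G rho b

for b in (1.0e20, 1.0e21, 1.0e22):     # different choices of q give different "answers"
    print(b, accel_toward_q(b), "m/s^2, directed toward q")
```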
MIT_8286_The_Early_Universe_Fall_2013
18_Cosmic_Microwave_Background_Spectrum_and_the_Cosmological_Constant_Part_I.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: OK. Good morning, everybody. Let's get started. Let me just begin by asking if there are any questions, either about logistical issues or about physics issues? OK. Today we'll be finishing our discussion of black-body radiation by talking about the actual spectrum of the cosmic microwave background that we find in our universe. And then move on to talk about the rather exciting discovery in 1998 of the fact that our universe today appears to have a nonzero cosmological constant. So I want to begin by reviewing what we did last time. And one of the reasons why I do this is I think it's a good opportunity for you to ask questions that don't occur to you the first time we go through. And that, from my point of view, has been an extraordinary success. I think you've asked great questions. So we'll see what comes up today. We began the last lecture by recalling, I think from the previous lecture actually, the basic formulas for black-body radiation, which is just the radiation of massless particles at a given temperature. And we have formulas for the energy density, the pressure, the number density, and the entropy density, all of which are given in terms of two constants, little g and little g star, which is the only place where the actual nature of the matter comes in. G and g-star are both equal to 2 for photons, but these formulas allow us to talk about other kinds of black-body radiation as well, black-body radiation of other kinds of particles. As neutrinos are also effectively massless, so they contribute. And in addition, e plus e minus pairs, if the temperature gets hot enough so that the mass of the electrons becomes negligible compared to kt, also contribute to the cosmic background radiation. And if we want the higher temperatures, other particle will start to contribute. And at the highest temperatures all particles act like black-body radiation. The general formula for g and g star is that there is a factor out front that depends on whether the particle is a boson or a fermion, a particle which does or does not obey the Pauli exclusion principle. Fermions do not, bosons-- excuse me, I said that backwards. Fermions obey the Pauli exclusion principle, bosons do not. G and g star are both 1 for bosons. But for fermions there's a factor of 7/8 for g and 3/4 for g star. Yes? AUDIENCE: Would you mind quickly restating why the positron-electron pairs act like radiation above that temperature? PROFESSOR: OK. The question is, why do electron-positron pairs act like radiation at these high temperatures? And the answer is that radiation is just characterized by the fact that the particles are effectively massless. And the effective energy scale is kt, that's the average thermal energy in a thermal mix. So as long as m e c squared is small compared to kt, electrons and positrons think that they're massless and act like they're massless. And as I said, if you go to higher temperatures still, all particles will act like they're massless. Coming back to the story of g and g star, we have the factor out front which depends on whether they're bosons or fermions. 
And then that just multiplies the total number of particle types, where by a particle type we mean a complete specification of what kind of a thing it is. And that includes specifying what species of particle it is, whether it's a particle or an anti-particle if that distinction exists, and what the spin state is if the particle has spin. So we can try this out now on some examples. The first example will be neutrinos, which play a very important role in the early universe, and even in the particle number balance of today's universe. Neutrinos actually have a small mass, as we talked about last time and as I'll review again this time. But nonetheless, as far as cosmology is concerned, they effectively act like massless particles, although the story about why they act like massless particles is a little complicated. It's more than just saying that their mass is small, for reasons that we'll see. But anyway, I'm nonetheless going to start by describing neutrinos as if they were massless, as was believed to be the case really until 15 years ago or so. The massless model of the neutrino was a particle which was always left-handed. And by left-handed what I mean is that for neutrinos, if you took the angular momentum of the neutrino in the direction of the momentum-- p hat there means dotted with the unit vector in the direction of the spatial momentum-- you'd always get minus 1/2 in units of h bar. And conversely, all nu-bars are right-handed, which just means the same equation holds with the opposite sign. So neutrinos always have spins that oppose the direction of motion, and anti-neutrinos always have spins aligned with the direction of motion. Now, it's not obvious, but if neutrinos were massless this would be a Lorentz invariant statement. If neutrinos have a mass, that statement is obviously not Lorentz invariant. As you can see by imagining a neutrino coming by: you can get into a rocket ship, chase it, and pass it, and then see it going the other way out your window because you're going faster than it. You would see the momentum in the opposite direction from the way it looked to begin with. But the spin would look like it was in the same direction as it did to begin with, and therefore the spin would now be aligned with the momentum instead of opposite the momentum. So this could not possibly hold universally if the neutrino has a mass. But for the time being our neutrinos are massless. So we're going to take this as a given fact. And it certainly is a fact for all neutrinos that have ever been actually measured. Given this model of the neutrino, the g for neutrinos gets a factor of 7/8 because they're fermions. Then there's a factor of 3, because there are three different species of neutrinos-- electron neutrinos, muon neutrinos, tau neutrinos. Neutrinos come in particles or anti-particles which are distinct from each other, we think. So there's a factor of 2 associated with the particle anti-particle duality. And there's only one spin state: the spin that's anti-aligned with the momentum, or aligned for the anti-neutrinos. But only one spin state in either case. So just a factor of 1 from spin states, and multiplying that through we get 21/4 for g, and 9/2 for g star. Yes? AUDIENCE: If we found out that they were Majorana, that they were their own anti-particles, would that change what we expect the temperature [INAUDIBLE] to be? PROFESSOR: No, it would not. OK.
The question was, if we find that they're Majorana particles-- which I'm going to be talking about in a minute-- where the particles would be their own anti-particles, which would mean that the right-handed anti-neutrino would really just be the anti-particle of the left-handed neutrino, it would not change these final numbers at all. What it would do is, instead of having the 2 for particle anti-particle, we would have a 2 for spin states. So there would still be two kinds of neutrinos, but instead of calling them the neutrino and the anti-neutrino, the right words would be right-handed neutrino and left-handed neutrino. But the product would still be the same. AUDIENCE: Wait, they have mass and they are Majorana? PROFESSOR: If they have mass and Majorana, what I just said applies. The fact that they have a mass would mean at the lowest possible temperatures they would not act like black-body radiation. Kt would have to be bigger than their mass times c squared. But that's only on the order of electron volts at most. So I'll talk later about why the true model neutrinos which have masses give the same result as this. OK. Then we can also, just as an exercise, calculate g and g star. It's more than an exercise. We like to know the results. We can calculate g and g star for e plus e minus pairs, which is relevant for when kt is large compared to the rest energy of an electron. And again, they are fermions so we get a factor of 7/8 appearing in the expression for g, and 3/4 appearing in the expression for g star. And then we just have to multiply that times the total number of types of electrons that exist. There's only one species called an electron, so we only get a factor of one in the species slot of the product. There are both electrons and anti-electrons where the anti-electrons are usually called positrons. So we get a factor of 2 in particle anti-particle. Two spin states because an electron can be spin up or spin down, and that gives us 7/2 and 3. Given that, we can go ahead and calculate what the energy density and radiation should be for the present universe given the temperature of the photons, the temperature of the cosmic microwave background. And in doing that there's an important catch which is something which is the subject of a homework problem that you'll be doing on problem set seven. When the electron-positron pairs disappear from the thermal equilibrium mix, if everything were still in thermal contact, its heat would be shared between the photons and the neutrinos in a way that would keep a common temperature. But in fact, when the e plus e minus pairs disappear, things are not in thermal contact anymore. And in particular, the neutrinos have decoupled. They're effectively not interacting with anything anymore. So the neutrinos keep their own entropy and do not absorb any entropy coming from the e plus e minus pairs. So all the entropy of the e plus e minus pairs gets transferred only to the photons. And that heats the photons relative to the neutrinos in a calculable amount, which you will calculate on the homework problem. And the answer is that the temperature of the neutrinos ends up being only 4/11 to the 1/3 power, times the temperature of the photons. And that's important for understanding what's been happening in the universe since this time. That ratio is maintained forever from that time onward. So if we want to write down the formula for energy density and radiation today it would have two terms. 
The 2 here is the g for the photons, and this, times that expression is the energy density in photons. The second term is the energy density in neutrinos. And it has the factor of 21/4 which was the g factor for neutrinos. But then there's also a correction factor for the temperature, because on the right hand side here I put t gamma to the fourth. So this factor corrects it to make it into t neutrino to the fourth, which is what we need there to give the right energy density for the neutrinos today. And this is just that ratio to the fourth power. And once you plug in numbers there it's 7.01 times 10 to the minus 14th joules per meter cubed, which is, from the beginning, what we said was the energy density in radiation of the universe today. OK. Finally I'd like to come back to this real story of neutrinos and their masses and why, even though they have small masses, the answers that we gave for the massless model of the neutrino are completely accurate for cosmology. We've never actually measured the mass of a neutrino. But what we have seen is that neutrinos of one species can oscillate into neutrinos of the other species. And it turns out, theoretically, that that requires them to have a mass. And by seeing how fast they oscillate you can actually measure the difference in the mass squareds between the two species. So it's still possible actually, in principle, that one of the species could have zero mass. But they can't all have zero mass because we know the differences in the squares of their masses. So in particular, delta m squared 2 1 times c to the fourth, meaning the mass expressed as an energy, is 7.5 times 10 to the minus 5 eV squared. And larger values obtained for 2 3, which is 2.3 times 10 to the minus 3 eV squared. We're still talking about fractions of an eV. The other of the three possible combinations here are just not known yet. Now, if neutrinos have a mass, that does actually change things rather dramatically because of what we said about-- the statement that the neutrinos always align their spins with their motion just cannot be true if neutrinos have a mass. And more generally, for any particle with a mass of arbitrary spin j, the statement is that, the component of j along any particular axis-- and we'll call it the z-axis-- always takes on the possible values in terms of h bar going from minus j up to j with no emissions. It's different for massless particles. For massless particles every one of these elements on the right hand side is independent and, by itself, a Lorentz invariant possibility. But, coming back to neutrinos-- if the neutrinos have a mass, in addition to the left-handed neutrinos there has to be a right-handed partner. And the question then is, what's the story behind that? And it turns out we don't know the story. But we know two possible stories. And one of the possible stories is that could be what's called a Dirac mass. And for Dirac mass what it means is that, the right-handed neutrino is simply a new type of particle which just happens to be a particle that we've never seen, but a particle which would have a perfectly real existence. It would however, to fit into theory and observation, be an extremely weakly interacting particle. The interactions of the right-handed one do not have to be the same as the interactions of the left-handed one. That is, the interactions can depend explicitly on p hat dot j. So depending its value, you could affect what the interactions are, again, in a Lorentz invariant way. 
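[Editor's note: a numerical sketch, not part of the lecture, of the counting and of the energy density quoted just above. The g and g star factors follow the rule described earlier (7/8 or 3/4 for fermions, times species, times particle/anti-particle, times spin states), and the energy density assumes the standard black-body formula u = g (pi^2/30) (kT)^4 / (hbar c)^3, which is not written out explicitly in this part of the transcript.]

```python
# g and g* bookkeeping, plus the energy density in radiation today.
import numpy as np
from fractions import Fraction

def g_factors(fermion, species, particle_antiparticle, spin_states):
    count = species * particle_antiparticle * spin_states
    g = (Fraction(7, 8) if fermion else Fraction(1)) * count
    g_star = (Fraction(3, 4) if fermion else Fraction(1)) * count
    return g, g_star

print("photons:   ", g_factors(False, 1, 1, 2))   # (2, 2)
print("neutrinos: ", g_factors(True, 3, 2, 1))    # (21/4, 9/2)
print("e+e- pairs:", g_factors(True, 1, 2, 2))    # (7/2, 3)

# Energy density in radiation today, with T_gamma = 2.726 K and the neutrinos
# cooler than the photons by the factor (4/11)^(1/3).
hbar = 1.054572e-34     # J s
c = 2.997925e8          # m / s
k_B = 1.380649e-23      # J / K
T_gamma = 2.726         # K

g_eff = 2.0 + (21.0 / 4.0) * (4.0 / 11.0) ** (4.0 / 3.0)
u = g_eff * (np.pi**2 / 30.0) * (k_B * T_gamma) ** 4 / (hbar * c) ** 3
print(u)                # about 7.0e-14 J/m^3, the value quoted in the lecture
```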
And in practice, the right-handed neutrinos would interact so weakly that we would not expect to see them in the laboratory. And we would not expect even in the early universe that they would have been produced in any significant number. So even though it would be a particle that, in principle, exists, we would not expect to see it. And we would not expect it to affect the early universe. Alternatively-- and in some ways this is a more subtle idea-- the mass of the neutrino could be what's called a Majorana mass. Majorana, like Dirac, is the name of a person-- perhaps less well known than Dirac, but one who made important contributions in this context nonetheless. In this case, the mass can only occur if lepton number is not conserved. And if lepton number is not conserved, then there are really no quantum numbers that separate the particle that we call a neutrino from the particle that we call an anti-neutrino. And if that's the case, the particle that we call the anti-neutrino could, in fact, just be the right-handed partner of the neutrino. So for the Majorana mass case we don't need to introduce any new things that we haven't already seen. We just have to rename the thing that we've been calling the anti-neutrino the neutrino with helicity plus 1 instead of minus 1-- that is, with j dot p hat equal to plus 1/2 instead of minus 1/2. So these would just be the two spin states of the neutrino, instead of the neutrino and the anti-neutrino. And that's a possibility. And this would also change nothing as far as the counting that we did. It would just change where the factors go: instead of having a factor of 2 for particle anti-particle in this type of counting, we'd have a factor of 2 in the spin state factor, and a factor of 1 in the particle anti-particle [INAUDIBLE]. The particle and the anti-particle would be the same thing. OK. Any questions about that? OK. Finally-- and I think this is my last slide of the summary. At the end of the lecture we just pointed out a number of tidbits of information. We can calculate the temperature of the early universe at any time from the formulas that we already had on the slide. We know how to calculate the energy density at any time. And by knowing about black-body radiation we can convert that into a temperature. And for an important interval of time-- which is when kt is small enough so that you don't make muon anti-muon pairs, but large enough so that electron-positron pairs act like they are massless-- in this very large range kt is equal to 0.860 MeV divided by the square root of time, where time is measured in seconds. So in particular, at 1 second kt is 0.86 MeV. And it does apply at 1 second, because 0.86 MeV is in this range. We also then talked about the implications of the conservation of entropy. If total entropy is conserved, the entropy density has to just fall off like 1 over the volume. Total entropy is conserved for almost all processes in the early universe. So the entropy density falls off like 1 over a cubed. And that means that, as long as we're talking about a period of time during which little g does not change-- and little g only changes when particles freeze out, like when the electron-positron pairs disappear-- but as long as little g doesn't change, s going like 1 over a cubed means simply that the temperature falls like 1 over a.
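[Editor's note: a small sketch, not part of the lecture, of the temperature-time relation just quoted, kt = 0.860 MeV divided by the square root of the time in seconds, valid while the radiation consists of photons, neutrinos, and e+ e- pairs.]

```python
# kT as a function of time during the era when photons, neutrinos, and
# e+ e- pairs all act like massless black-body radiation.
import math

def kT_in_MeV(t_seconds):
    return 0.860 / math.sqrt(t_seconds)

for t in (0.01, 1.0, 100.0):
    print(t, "s  ->  kT =", round(kT_in_MeV(t), 3), "MeV")
# Afterwards, as long as little g is not changing, conservation of entropy
# (entropy density falling like 1/a^3) just means T falls like 1/a(t).
```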
And when little g changes you can even calculate corrections to this as, effectively you're doing when you calculate this relationship between the neutrino temperature and the photon temperature. And finally, we talked about the behavior of the atoms in the universe as the universe cools. For temperatures above about 4,000 degrees the universe, which is mainly hydrogen, is mainly a hydrogen plasma. Isolated protons and electrons zipping through space independently. At about 4,000 Kelvin-- and this is a stat [? mac ?] calculation, which we're not doing-- but using the answer. At about 4,000 Kelvin-- which is a number which depends on the density of hydrogen in the universe, it's not a universal property of hydrogen-- but for the density of hydrogen in the universe, at about 4,000 Kelvin hydrogen recombines. It becomes neutral atoms. And slightly colder, at about 3,000, the degree of ionization becomes small enough so that the photons become effectively free. The photons decouple. In between 4,000 and 3,000 the hydrogen is mostly neutral, but they're still enough ionized so that the photons are still interacting. So the most important temperatures-- the 3,000 Kelvin, when the photons are released, when the photons are no longer trapped with the matter of the universe. And last time we estimated the time at which that happens. That should be a small t, sorry. The time of decoupling is about 380,000 years. And that number is actually very accurate, even though we didn't calculate it very accurately. And that's the end of my summary. Any questions about the summary? OK. In that case, let's go on to talk first about the spectrum of the cosmic background radiation. And then we'll move on to talk about the cosmological constant. CMB is cosmic microwave background. And that's a very, very standard abbreviation these days. So when the cosmic microwave background was first discovered by Penzias and Wilson in 1965-- which, I might point out, is going to have its 50th anniversary in the coming year-- they only measured it at one frequency. It was a real tour de force to measure it at the one frequency and to convince themselves that the buzz that they were hearing in their detector was not just some kind of random electrical noise, but really was some signal coming from outer space. And the main clue that it was some signal coming from outer space was that they were able to compare it with a cold load, a liquid helium-cooled source, and find that that comparison worked the way they expected. And the main reason for believing it was cosmological rather than local is that they got the same reading no matter what direction they pointed their antenna. This just took a lot of radio technique skill to convince themselves that it wasn't just some radio tube that was malfunctioning or something. They even worried that it may have been caused by pigeon droppings in their antenna, I actually read about in Weinberg's book. But they finally convinced themselves that it was real. They were still not convinced really that it was a sign for the big bang and-- you may recall, again, from reading Weinberg that there were two papers published back-to-back. The experimental paper by Penzias and Wilson, which really just described the experiment, mentioning that a possible explanation was in this other paper by Dickie, Peebles, Roll, and Wilkinson which described the theory that this was radiation that originated with the big bang. But it's all based on one point at one frequency. 
Shortly afterwards, I guess within the same year, Roll and Wilkinson were able to measure it at a slightly different frequency. And when I wrote my popular-level book I tabulated all of the data that was known in 1975. And this mess is the graph. This shows sort of the full range of interesting frequencies. The solid line here is the expected theoretical curve corresponding to a modern measurement of the temperature, 2.726 degrees Kelvin. All of the interesting historical points are in this tiny little corner on the left, which is magnified above. The original Penzias and Wilson point is way down here at very low frequencies by the standards of radiation at 2.726 degrees Kelvin. The Roll and Wilkinson point is there. These blobs indicate error bars. The [? cyanogen ?] points that you read about in Weinberg are shown there and there. The first measurement that showed that it didn't only go up but started to go down, like black-body radiation should, was a balloon flight-- this 1971 balloon flight, which produced that blob and that bound. This was an experiment by MIT's own Ray Weiss. And it was very important in the history because it was the first evidence that we weren't just seeing some straight line, but we were seeing something which did indeed rise and fall the way black-body radiation should. A later balloon flight in 1974 produced error bars that are shown by this gray area. Incredibly broad. So the bottom line that this graph was intended by me to illustrate is that, in 1975, you could believe that this was black-body radiation if you so wished. But there was not really a lot of evidence that it was black-body radiation. The situation did not get better quickly. The next significant measurement came in 1987, which was a rocket flight, which was a collaboration between a group at Berkeley and a group at Nagoya, Japan. I believe it was the Japanese group that supplied the rocket and the American group that supplied the instrumentation. And they measured the radiation at three points. I can give you the numbers that go with those graphical points. I guess what I have tabulated here is the effective temperature that those points correspond to. As you can see from the graph, those points are all well above the black-body curve. Significantly more radiation than what was expected by people who thought it should be black-body. And point 2 up there would correspond to a temperature of 2.955 plus or minus 0.017 K. The size of the vertical bars there are the error bars that the experimenters found. And point 3 was T equals 3.175 plus or minus 0.027 K. So these were higher temperatures than the 2.7 that fit the lower part of the spectrum. And very, very small error bars. So this data came out in 1987. And, in truth, nobody knew what to make of it. The experimental group was well aware that this was not what people wanted them to find. And they certainly examined their data very carefully to figure out what could have conceivably gone wrong. And they were going around the country giving talks about this. And I heard one or two of them, in which they described how surprised they were by the results, but emphasized that they had analyzed the experiment very, very carefully and couldn't find anything wrong with it. And this was the situation for a while. I should point out that I think point number three is something like 16 standard deviations off of the theory. And usually when somebody makes a measurement that's three or four standard deviations off of your theory, you really start to worry.
16 standard deviations is certainly a bit extreme. Nonetheless, nobody had any good explanation for this. So, well, different people had different attitudes. There were some people who tried to construct theories that would account for this. And there were others who waited for it to go away. I'm pretty sure I was among those who waited for it to go away, and we were right. So the next important piece of data came from the first satellite dedicated to measuring the cosmic background radiation. The famous COBE Satellite-- Cosmic Background Explorer-- I guess I didn't write down the name here. Oh, it's in the title. Preliminary measurement of the cosmic microwave background spectrum by the Cosmic Background Explorer, COBE Satellite. So COBE was the first satellite dedicated to measuring the cosmic background radiation. It was launched in 1989, I guess, and released its first data in January of 1990. Back in those days there was no internet or archive. So you may or may not know that the way physics results were first announced to the world were in the form of what were called pre-prints, which were essentially xeroxed copies of the paper that were sent out to a mailing list. Typically, I think, institutions had mailing lists of maybe 100 other institutions. And every physics department had a pre-print library that people can go to and find these pre-prints. So this is the COBE pre-print. 90-01, the first pre-print from 1990. And this is the data. So it is kind of breathtaking, I think. It suddenly changed the entire field, and in some sense really change cosmology for the field. Where we only had approximate ideas of the way things worked, to suddenly having a really precise science in which precise measurements could be made, and cleared up the issue of the radiation. It wasn't just a mess like this, or a terrible fit like that, but a fantastically good fit. Really nailing the radiation as having a thermal spectrum. So the history is that John Mather presented this data at the January 1990 American Physical Society meeting, and was given a standing ovation. And he later won the Nobel Prize for this work. He was the head of the team that brought this data. He won the Nobel Prize in 2006 along with George Smoot, who was responsible for one of the other experiments on the COBE satellite. Yes? AUDIENCE: So do we know what happened with the other measurements? PROFESSOR: To tell you the truth, I don't think the other measurements ever-- the other people ever really published what they think happened. But the widespread rumor, which I imagine is true, is that they were seeing their own rocket exhaust. And there were, I think, some arguments going on between the Americans and the Japanese, with the Americans more or less accusing the Japanese of not really telling them how the rocket was set up. Yes? AUDIENCE: Are the error bars plugged on those points, or is it just that good? PROFESSOR: Those are the error bars? AUDIENCE: OK. PROFESSOR: And even more spectacular, a couple years later, I guess it was-- this was actually just based on nine minutes of data or something like that. But a couple years later they published their full data set, where the size of the error bars were reduced by a factor of 10. And still a perfect fit. They didn't even know how to plot it, so I think they plotted the same graph, and said the error bars are a factor of 10 smalled than what's shown. It was gorgeous. So I think I forgot to tell you what the spectrum is supposed to look like exactly. 
And this is just a formula that I want you to understand the meaning of, but not the derivation of. As with the other stat mech results that we're relying on, we're going to relegate its derivation to the stat mech course that you either have taken or will take. But the spectrum is completely determined, because the principle of thermal equilibrium is sort of absolute in statistical mechanics. And in order for a black-body radiating object to be in thermal equilibrium with an environment at that temperature, it has to have not only the right emission rate but also the right spectrum. If the spectrum weren't right, you could imagine putting in filters that would trap in some frequencies and let out others. And then you would move away from thermal equilibrium if the spectrum were wrong, because you could arrange for the filters to trap in more radiation than they are letting out. So the spectrum is calculable. And in terms of-- I guess this is energy density. I have to admit, I usually call energy density u, and in these notes here it's called rho. We'll figure out the units after I write it down and make sure that it is energy density. Rho sub nu of nu, times d nu, means the total energy density-- energy per volume-- per frequency interval d nu; that is, if you multiply by d nu, this is the total energy density for frequencies between nu and nu plus d nu. And the formula is 16 pi squared h bar nu cubed, divided by c cubed, times 1 over e to the 2 pi h bar nu over kt, minus 1, d nu. OK. And actually, the units are not that transparent. I believe this is energy density and not mass density. But maybe I'll make sure of that and let you know next time. And this is what produces that curve that you saw on the slides. I've included the subscript nu here to indicate that it's the number which, when you multiply it by d nu, gives you the energy density between nu and nu plus d nu. If instead you wanted to know the energy density between lambda and lambda plus d lambda, there'd be a kinematic factor that you'd have to put in here-- the factor that relates d lambda to d nu. And you could imagine working that out. I might add that, in Weinberg's book, he actually plots rho sub lambda, the energy density per wavelength interval. So his curve looks somewhat different than the curves that I showed you. It's not exactly the same thing. Now, what this extremely accurate black-body curve proves is that the early universe really was very accurately in thermal equilibrium. And that can only happen if the early universe was very dense. And of course, our model of the universe goes back to infinite density. So the model predicts that it should be in thermal equilibrium. But in particular, with the numbers that we have here, if you ask how much you could change the model and still expect these curves, the answer is roughly that all of the important energy-releasing processes have to have happened before about one year after the Big Bang. Anything that happened after one year would still show up as some glitch in the black-body spectrum. So the big bang model really is confirmed back to about one year on the basis of this precise measurement of the spectrum of the cosmic background radiation. And the COBE measurement is still, by the way, the best measurement of the spectrum. We've had other very important experiments, that we'll talk about later, which measure the non-uniformity of the black-body radiation, which is very small, but nonetheless a very, very important [? effect. ?]
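[Editor's note: a numerical sketch, not part of the lecture, of the spectrum just written down, rho_nu = 16 pi^2 hbar nu^3 / c^3 times 1 / (e^(2 pi hbar nu / kT) - 1), evaluated at T = 2.726 K to locate the peak frequency.]

```python
# Evaluate the black-body spectrum at T = 2.726 K and find the peak frequency.
import numpy as np

hbar = 1.054572e-34    # J s
c = 2.997925e8         # m / s
k_B = 1.380649e-23     # J / K
T = 2.726              # K

def rho_nu(nu):
    x = 2.0 * np.pi * hbar * nu / (k_B * T)
    return 16.0 * np.pi**2 * hbar * nu**3 / c**3 / np.expm1(x)

nu = np.linspace(1.0e9, 1.0e12, 200_000)      # 1 GHz to 1000 GHz
print(nu[np.argmax(rho_nu(nu))])              # about 1.6e11 Hz, i.e. roughly 160 GHz
```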
So we've had WMAP and now Planck, which have been dedicated to measuring the anisotropies of the radiation. COBE also made initial measurements of the anisotropies. And we'll be talking about anisotropies later in the course. Yes? AUDIENCE: [INAUDIBLE]. PROFESSOR: Sorry? AUDIENCE: The units of the right-hand side are energy density. PROFESSOR: Energy density. OK. Thanks. OK. Good. So my words were right. I should have called it u, I think, to be consistent with my usual notation. Thanks. OK. Any other questions about the CMB? Because if not, we're going to change gears completely and start talking about one of the other crucially important observational discoveries in cosmology in the last 20 years. OK. So what I want to talk about next is the very important discovery, originally made in 1998-- also resulting in a Nobel Prize-- that the universe is accelerating. And this was a discovery that involved two experimental groups, and a total of something like 52 astronomers between the two groups. Which actually meant that-- I'm exaggerating slightly, I suppose-- but it really involved the majority of the astronomers of the world, and therefore there weren't a lot of astronomers to argue with them about whether or not the result was right. But there still was some argument. The announcement was initially made at an AAS meeting in January of 1998 by-- which group was first? I think that was the High-Z Supernova-- where are they? Yeah. That was the High-Z Supernova Search Team. And then there was also a group largely based at Berkeley. The High-Z Supernova Search Team was actually fairly diffuse, although based to some extent at Harvard. And the Supernova Cosmology Project was based rather squarely in Berkeley, headed by Saul Perlmutter. And they both agreed. And what they found was, by looking at distant supernovae of a particular type-- Type Ia-- they were able to use these supernovae as standard candles. And because supernovae are brighter than other standard candles that had been studied earlier in history, they were able to go out to much greater distances. And that means to look much further back in time than previous studies. And what they discovered was that the expansion rate of the universe today was actually faster, and not slower, than the expansion rate about five billion years ago. And that was a big shock, because until then everybody expected that gravity would be slowing down the expansion of the universe. And when these guys started to make these measurements they were just simply trying to figure out how fast the universe was slowing down. And they were shocked to discover that it was not slowing down, but instead speeding up. Initially there was some controversy about it. People did try to invent other explanations for this data. But the data has, in fact, held up for the period from 1998 to the present. And in fact, it has been strongly supported by evidence from these anisotropies in the cosmic microwave background radiation, which we'll be talking about later. But it turns out, you can get a lot of information from these anisotropies in the cosmic background radiation. So the picture now is really quite secure, that the expansion of the universe is actually accelerating, and not decelerating. And the simplest explanation for that is the one that-- well, it's certainly the most plausible, the one that most of us take seriously, and the only one that fits the data extraordinarily well.
So we've not seen any reason not to use this explanation. The simplest explanation is that there's a nonzero energy density to the vacuum, which is also what Einstein called the cosmological constant. So we should begin by writing down the equations that describe this issue. So we've learned how to write down the second order Friedmann equation, which describes how the scale factor of the universe accelerates. And on the right-hand side, once we included materials with nonzero pressures, we discover that we need on the right-hand side, rho plus 3 p, over c squared-- excuse me-- times a. Now when the cosmological constant was born, was when Einstein first turned his theory of general relativity to cosmology. Einstein invented the theory of general relativity in 1916. And just one year later, in 1917, he was applying it to the universe as a whole to see if he could get a cosmological model consistent with general relativity. Einstein at that point was under the misconception that the universe was static, as Newton had also thought, and as far as we know, as everybody between Newton and Einstein thought. If you look up at the stars, the universe looks pretty static. And people took this very seriously. In hindsight, it's a little hard to know why they took it so seriously, but they did. So when Einstein discovered this equation he was assuming that the universe consisted of basically non-relativistic stuff. Stars are essentially non-relativistic hunks of matter. So he thought that rho would be positive, the effective pressure would be zero. And he immediately noticed that this equation would imply that the scale factor would have a negative acceleration. So that if you tried to set up a static universe it would instantly collapse. And as we talked about earlier, Newton had talked himself out of that conclusion. And I think the real difference, as I think we also talked about earlier, was that Newton was thinking of the law of gravity as an action at a distance, where you determine the total force on something by integrating the forces caused by all other masses. And then things get complicated and divergent, actually, for an infinite, static universe. And Newton managed to convince himself that you could have a static universe of that type, a statement that we now consider to be incorrect even in the context of Newtonian mechanics. But this fact that it's incorrect even in the context of Newtonian mechanics was really not discovered until Einstein wrote down this equation. And then Einstein himself also gave a Newtonian argument showing that, at least with a modern interpretation of Newtonian mechanics. It doesn't work in Newtonian gravity either to have a static universe. But Einstein was still convinced that the universe was static. And he realized that he could modify his field equations-- the equations that we have not written down in this course, the equations that describe how matter create gravitational fields-- by adding a new term with a new coefficient in front of it which he called lambda. And this extra term, lambda, could produce a kind of a universal gravitational repulsion. And he realised he had to adjust the constant to be just right to balance the amount of matter in the universe. But he didn't let that bother him. And if he adjusted it to be just right, and the universe was perfectly homogeneous, he could arrange for it to balance the standard force of gravity. 
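Written out, the second-order Friedmann equation referred to in this discussion is

\[
\ddot{a} \;=\; -\,\frac{4\pi G}{3}\left(\rho + \frac{3p}{c^2}\right)a ,
\]

so for a universe filled with ordinary, non-relativistic matter (rho positive, pressure essentially zero), a double dot is negative, and a static configuration immediately begins to collapse-- which is why Einstein felt he needed to add the lambda term.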
We can understand what lambda does to the equations because it does, in fact, have a simple description in terms of things that we have discussed and do understand. That is, you could think of lambda as simply corresponding to a vacuum energy density. Einstein did not make that connection. And not being an historian of science, I can speculate as much as I want. So my speculation is that, the reason this did not occur to Einstein is that Einstein was a fully classical physicist who was not at this time or maybe never accepting the notions of quantum theory. And in any case, quantum field theory was still far in the future. So in classical physics the vacuum is just plain empty. And if the vacuum is just plain empty it shouldn't have any energy density. The quantum field theory picture of the vacuum, however, is vastly more complex. So to a modern quantum field theory-oriented theoretical physicist the vacuum has particle, anti-particle pairs appearing and disappearing all the time. We are now convinced that there's also this Higgs field that has even a nonzero mean value in the vacuum. So the vacuum is a very complicated state which, if anything characterizes it, it's simply the state of lowest possible energy density. But because of, basically, the uncertainty principles of quantum mechanics, the lowest possible energy density does not mean that all the fields are just zero and stay zero. They're constantly fluctuating as they must according to the uncertainty principle, which applies to fields as well particles. So we have no reason anymore to expect the energy density of the vacuum to be zero. So from a modern perspective it's very natural to simply equate the idea of the cosmological constant to the idea of a nonzero vacuum energy density. And there are some unit differences-- just constants related to the historical way that Einstein added this term his equations. So the energy density of the vacuum-- which is also the mass density of the vacuum times c squared-- is equal to Einstein's lambda times c to the fourth, over 8 pi G. And this is really just an historical accident that it's defined this way. But this is the way Einstein defined lambda. Now, if the vacuum has an energy density, as the universe expands the space is still filled with vacuum. At least, if it was filled with vacuum. If it was matter it would thin out. But we can imagine a region of space that was just vacuum, and as it expands it would have to just stay vacuum. What else could it become? And that means that we know that, for a vacuum, rho dot should equal zero. Now we've also learned earlier, by applying conservation of energy to the expanding universe, that rho dot in an expanding universe, is equal to minus 3 a dot over a. Or we could write this as h times rho plus p over c squared. This is basically a rewriting of d u equals minus p d v, applying it to the expanding universe. So I won't re-derive it. We already derived it. Actually, I think you derived it on the homework, was the way it actually worked. But in any case, this immediately tells us that if rho dot is going to be 0 for vacuum energy, this has to be 0. And therefore p vacuum has to be equal to minus rho vacuum times c squared. And if we know the energy density in the pressure of this stuff called vacuum, that's all we need to know to put it into the Friedmann equations and find out how things behave. Otherwise this vacuum energy behaves no differently from anything else. 
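Collecting the relations described in this paragraph in equation form:

\[
u_{\rm vac} \;=\; \rho_{\rm vac}\,c^2 \;=\; \frac{\Lambda c^4}{8\pi G},
\qquad
\dot{\rho} \;=\; -\,3\,\frac{\dot a}{a}\left(\rho + \frac{p}{c^2}\right),
\]

and requiring that rho-dot of the vacuum vanish forces

\[
p_{\rm vac} \;=\; -\,\rho_{\rm vac}\,c^2 .
\]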
It just has a particular relationship between the pressure and the energy density, with a very peculiar feature-- the pressure is negative. And that's an important feature, because we had commented earlier that a negative pressure can drive acceleration, and now we're in a good position to see exactly how that works. To keep things straight, I'm going to divide the mass density of the universe into a vacuum piece and a normal piece, where normal represents matter, or radiation, or anything else, if we ever discover something else. But in fact it will just be matter or radiation for anything that we'll be doing in this course, or anything that's really done in current cosmology. And similarly, I'm going to write the pressure as p vac plus p normal. "N" is for normal. But p vac I don't really need to use, because p vac I can rewrite in terms of rho vac. So in the end I can express everything just in terms of rho vac. And I can write down the second order Friedmann equation. It's just a matter of substituting that rho and that p into the Friedmann equation that we've already written. And we get minus 4 pi over 3 G, times rho normal plus 3 p normal over c squared. And the vacuum pieces-- there are two of them, because there's a vacuum piece there and a vacuum piece there-- can all be expressed in terms of rho vac and collected. And what you get is minus 2 rho vac, all times a. Showing just what we were talking about: because that minus sign multiplies the overall minus sign, vacuum energy drives acceleration, not deceleration. And that's why vacuum energy can explain these famous results of 1998. And we'll see later that, for the same reason, vacuum energy or things like vacuum energy can actually drive the expansion of the universe in the first place, in what we call inflation. Yes? AUDIENCE: So for the equation without the cosmological constant, if, let's say, rho and p are about constant, then wouldn't that be the equation for a simple harmonic function [INAUDIBLE] or the oscillation of a [INAUDIBLE] is some negative constant times a? PROFESSOR: That's right, except that you would probably not believe the equations beyond the bounds. AUDIENCE: OK. PROFESSOR: And when a went negative you wouldn't really have a cosmological interpretation anymore, I don't think. But it is, in fact, true that if rho and p were constants-- I'm not sure of any model that actually does that-- this would give you sinusoidal behavior during the expanding and contracting phase. Yes? AUDIENCE: [INAUDIBLE] the vacuum energy is constant over time-- does it also make sense [INAUDIBLE]? PROFESSOR: Are you asking, does it make sense for maybe the vacuum energy to change with time? I think, if it changed with time, you wouldn't call it vacuum energy. Because the vacuum is more or less defined as the lowest possible energy state allowed by the laws of physics, and the laws of physics, as far as we know, do not change with time. It's certainly true that, in a completely different context, you might imagine the laws of physics might change with time, and then things would get more complicated. But that would really take you somewhat outside the sphere of physics as we know it. You could always explore things like that, and it may turn out to be right. But at least within the context of physics as we currently envision it, vacuum energies are constant, pretty much by definition. Now, I should maybe qualify that: within the context of what we understand, there may, in fact, be multiple vacua.
For example, if you have a field theory one can have a potential energy function for one or more fields. And that potential energy function could have more than one local minimum. And then any one of those local minima is effectively a vacuum. And that could very likely be the situation that describes the real world. And then you could tunnel from one vacuum to another, changing the vacuum energy. But that would not be a smooth evolution. That would be a sudden tunneling. OK. So this is what happens to the second order Friedmann equation. It is also very useful to look at the first order Friedmann equation, which is a dot over a squared, 8 pi over 3 G. And in its native way of being written we would just have 8 pi over 3 G rho, minus k over-- kc squared over a squared. And all I want to do now is replace rho by rho vac plus rho n. And this is a first order Friedmann equation. And we can expand rho n if we want more details, as rho matter plus rho radiation. And rho matter, we know, varies with time proportional to 1 over a cubed. Rho radiation behaves with time as 1 over a to the fourth. So all of the terms here, except for rho vac, fall off as a grows. And that implies that if you're not somehow turned around firsts, which you can be-- you could have a closed universe that collapses before vacuum energy can take over. But as the universe gets larger, if it doesn't turn around, eventually rho vac will win. It will become larger than anything else because everything else is just getting smaller and smaller. And once that starts to happen everything else will get smaller and smaller, faster and faster, because a will start to grow exponentially. If rho vac dominates-- which it will, as I said, unless the universe re-collapses first-- so for a large class of solutions rho vac will dominate-- then you can solve that equation. And you have h, which is a dot over a, approaches, as a goes to infinity, the square root of 8 pi over 3 G rho vac. So h will approach a fixed value for a universe which is ultimately dominated by rho vacuum. And if a dot over a is a constant, that means that a grows exponentially. So we could maybe give this a name-- h vac. The value h has when it's completely dominated by the vacuum energy. And then we can write that a of t is ultimately going to be proportional to e to the h vac times t. Which is what you get when you solve the equation, a dot over a equals this constant. OK. Now one thing which you can see very quickly-- let's see how far I should plan to get today. OK. I'll probably make one qualitative argument and then start a calculation that won't get very far. I will continue next time. One qualitative point which you can see from just glancing at these equations is that the cosmological constant, when added to the other ingredients that we've already put into our model universe, will have the effect of increasing the age of the universe for a given value of h. And that's something that we said earlier in the course, we're looking forward to. Because the model of the universe that we're been constructing so far have always turned out to be too young for the measured value of h. That is, the oldest stars look like they're older than the universe. And that's not good. So we'd like to make universe look older. And one of the beauties of having this vacuum energy, as far as making things fit together, is that it does make the universe older. And the easiest way to see that-- at least a way to see that-- is to imagine drawing a graph of h versus t. 
Hubble expansion rate versus t. And if we look at the formula for h here, we see that the rho vac piece just puts in a floor as h evolves with time, instead of h going to 0 as it would in most models-- at least, as it would in an open-universe model. It stops at some floor. And certainly for the models that we've been dealing with, h just decreases-- this is supposed to represent the present time. So this is previous models. Now, as you might say, what I'm trying to describe here is not quite a theorem if you consider closed universes, where this k piece could be making a negative contribution to h, which is then decreasing with time. Things can get complicated. But for the models that we've been considering, which are nearly flat, that k piece is absent, and then we just have pieces that go like 1 over a cubed, 1 over a to the fourth, and constant, all of which are positive. Then in the absence of vacuum energy we would have h falling, and with the presence of vacuum energy it would not fall as fast, because we have this constant piece that would not be decreasing. So this is previous models. This is with rho vac. And I'm always talking about positive rho vac, because that is what our universe has. So these would be the two different behaviors of h for the model without vacuum energy and the model with vacuum energy. And if we're trying to calculate the age of the universe, we would basically be extrapolating this curve back to the point where h was infinite, at the big bang. And we can see that, since this curve is always below this curve, it will take longer before it turns up and becomes infinite. So the age will always increase by adding vacuum energy. With rho vac, h equals infinity is further to the left. And notice that I'm comparing two different theories, both of which have the same value of h today. Because that's what we're interested in: we've measured the value of h, and we're trying to infer the age of the universe. OK. Maybe I'll just say a couple of words about the calculation that we'll be starting with next time. We want to be able to precisely calculate things like the age of the universe, including the effect of this vacuum energy. And we'll be able to do that in a very straightforward way by using this first order Friedmann equation. We know how each term in this Friedmann equation varies with a. And we can measure the amount of matter, and the amount of radiation, and in principle the amount of curvature-- it's negligibly small-- in our current universe. And once you have those parameters, you can use that equation to extrapolate, to know what h was at any time in the past. And that tells you the value of a dot at any time in the past. And if you know the value of a dot at any time in the past, it's in principle just a matter of integration to figure out when a was 0. And that's the calculation that we'll begin by doing next time. We'll be able to get an integral expression for the age of the universe for arbitrary values of the parameters. We'll, at the end, express the matter density and the radiation density as omegas-- fractions of the critical density-- and we'll even express the curvature as an omega curvature, the effective fraction of the critical density that this term represents. And in terms of those different omegas, we'll be able to write down an integral for the total age of the universe. And that really is going to be state of the art.
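In outline, the calculation being previewed here is the following. For the nearly flat case, the first-order Friedmann equation gives

\[
H^2(a) \;=\; \frac{8\pi G}{3}\left(\rho_{m,0}\,\frac{a_0^3}{a^3} \;+\; \rho_{r,0}\,\frac{a_0^4}{a^4} \;+\; \rho_{\rm vac}\right),
\qquad
t_0 \;=\; \int_0^{a_0} \frac{da}{a\,H(a)} ,
\]

so the constant vacuum term puts a floor under H; for the same value of H today, H was smaller in the past than it would have been without vacuum energy, 1/(aH) is larger, and the age comes out larger.

As a preview of the state-of-the-art version of this calculation, here is a minimal numerical sketch. It uses the omega notation that will be set up next lecture, writing H squared as H_0 squared times F(x) over x to the fourth, with x = a/a_0, so that t_0 = (1/H_0) times the integral from 0 to 1 of x dx over the square root of F(x). The parameter values are assumed, roughly Planck-like numbers, not values taken from the lecture's slides.

```python
import numpy as np
from scipy.integrate import quad

# Assumed, roughly Planck-like present-day parameters (illustrative only)
H0_km_s_Mpc = 67.7
Omega_m, Omega_r, Omega_vac = 0.30, 0.0, 0.70
Omega_k = 1.0 - (Omega_m + Omega_r + Omega_vac)   # flatness constraint

# Convert H0 to inverse years: 1 Mpc = 3.0857e19 km, 1 yr = 3.156e7 s
H0_per_yr = H0_km_s_Mpc / 3.0857e19 * 3.156e7

def F(x):
    # F(x) = Omega_m*x + Omega_r + Omega_vac*x**4 + Omega_k*x**2, with x = a/a0
    return Omega_m * x + Omega_r + Omega_vac * x**4 + Omega_k * x**2

# t0 = (1/H0) * integral from 0 to 1 of x dx / sqrt(F(x))
integral, _ = quad(lambda x: x / np.sqrt(F(x)), 0.0, 1.0)
t0_Gyr = integral / H0_per_yr / 1e9

print(f"Age of the universe: {t0_Gyr:.2f} billion years")  # roughly 13.9
```

For these assumed parameters the integral evaluates to about 0.96/H_0, that is, roughly 13.9 billion years, consistent with the numbers quoted in the next lecture.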
That is what the Planck team uses when they're analyzing their data to try to understand what the age of the universe is according to the measurements that they're making. So we will finally come up to the present as far as the actual understanding of cosmology by the experts. So that's all for today. See you all next Tuesday.
MIT_8286_The_Early_Universe_Fall_2013
19_The_Cosmological_Constant_Part_II.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu PROFESSOR: OK, in that case I can begin by giving a quick review of last time. We began last time by talking about the data of measurements of the cosmic microwave background, starting with the data as it existed in 1975 which I advertised as being an incredible mess, which it was. You could easily believe that this data fit this solid line curve, which was what it's supposed to fit. But you could equally well believe that it didn't. Things got worse before they got better. There was the famous Berkeley-Nagoya Rocket Flight experiment of 1987 which had a data point which missed the theoretical curve by 16 standard deviations, which might seem fairly disappointing. It reminds one, by the way, of a famous quote of Arthur Eddington-- which you may or may not be familiar with-- but Eddington pointed out that while we always say that we should not believe theories until the confirmed by experiment, it's in fact equally true that we should not believe data that's put forward until it's confirmed by theory, and that certainly was the case here. This data was never confirmed by theory and turned out to be wrong. The beautiful data was achieved in 1990 by this fabulous COBE satellite experiment, which showed-- unambiguously, for the first time-- that the cosmic background radiation really does obey an essentially perfect blackbody curve, which is just gorgeous. We then went on, in our last lecture, to begin to talk about the cosmological constant and its effect on the evolution of the universe-- completely changing gear here-- and the key issue is the cosmological effect of pressure. Earlier we had derived this equation. The equation shows the significant role of pressure during the radiation-dominated era, but it also shows that pressure-- if it were negative-- could perhaps have the opposite effect, causing an acceleration of the universe. Furthermore, we learned last time that vacuum energy-- first thing we learned, I guess, is just that's synonymous with Einstein's cosmological constant, related to Einstein's cosmological constant lambda by this equation. That is, the energy density of the vacuum is equal to the mass density of the vacuum times C squared and is equal to this expression in terms of Einstein's original cosmological constant. And most important, in terms of physics, we learned that if we have a non-zero vacuum energy-- vacuum energy by definition does not change with time because the vacuum is the vacuum, it's simply the lowest possible energy density allowed by the laws of physics, and the laws of physics, as far as we know, do not change with time, and therefore vacuum energy density does not change with time-- and that is enough to imply that the pressure of the vacuum has to be equal to minus the energy density in a vacuum, and therefore minus that expression in terms of the cosmological constant-- which is exactly what will give us a repulsive gravitational effect, where we put that into the Friedmann equation. Now I should emphasize here that the effect of the pressure that we are talking about is not the mechanical effect of the pressure. 
The mechanical effect of the pressure is literally zero because the pressure that we are discussing here is a uniform pressure, and pressures only cause mechanical forces when there's gradients in the pressure-- more pressure on one side than the other. And if all this pressure is always balanced, the mechanical force of the pressure is zero. But nonetheless, that pressure contributes-- according to Einstein's equations-- as a contribution to the gravitational field, and a positive pressure creates an attractive gravitational field but a negative pressure produces repulsive gravitational fields. And a positive vacuum energy corresponds to a negative pressure which, in fact, would dominate this equation, resulting in a gravitational repulsion. So it's useful to divide the total energy density into a normal component plus vacuum energy, and similarly we can divide the pressure into a normal component plus the vacuum contribution to the pressure. The vacuum contribution to the pressure will instantly disappear from all of our equations because we know how to express it in terms of the vacuum mass density. It's just minus Rho vac C squared. So we can then re-write the Friedmann equations making those substitutions, and we conceive-- in the second-order equation-- that the vacuum contribution, negative, negative is a positive, produces a positive acceleration-- as we've been saying-- and a positive vacuum energy also contributes to the right-hand side of the first-order Friedmann equation. And in many, many situations-- although not quite all-- this vacuum energy will dominate at late times. It definitely falls off more slowly than any of the other contributions. The vacuum energy is a constant, and every other contribution to that right-hand side falls off with A. The only way that Rho vac can fail to dominate if it's non-zero is if you have a closed universe that collapses before it has a chance to dominate, which is a possibility. But barring that, eventually the vacuum energy will dominate, and once the vacuum energy dominates, we just have H squared-- A dot over A is H. H squared equals a constant, and that just says that H approaches its vacuum value, which is the square root of 8 pi over 3 G Rho vac, and with H being a constant we can also solve for A. The scale factor itself is just proportional to an exponential of E to the H T, where H is the H associated with the vacuum. So this will be a very easy to obtain asymptotic solution to the equations of the universe, and, in fact, we think that our real universe is approaching an exponentially expanding phase of exactly this sort today. We're not there yet, but we are approaching it. Yes? AUDIENCE: So does an exponential A of T mean that the universe just keeps expanding forever and just spins out into nothingness? PROFESSOR: Yes, an exponentially expanding A means the universe will continue expanding forever and ordinary matter will thin out to nothing, but this vacuum energy density will remain as a constant contribution. So the universe would go on expanding exponentially forever. Now, there is the possibility that this vacuum that we're living in is actually what might be called a false vacuum-- that is, an unstable vacuum-- a vacuum which behaves like a vacuum for a long time but is subjected to the possibility of decay. 
If that's the case, it's still true that most of our future space time will continue to exponentially expand exactly as this equation shows, but a kind of a Swiss cheese situation will develop where decays in the vacuum would occur in places, producing spherical holes in this otherwise exponentially expanding background. We'll be talking a little bit more about that later in the course. Any other questions? OK, well what I want to do now is to work on a few calculations which I'd like to present today. If all goes well, we might have as many as three calculations, or at least two calculations that we'll do and one that we'll talk about a little bit. The first thing I want to do-- and I guess we just talked about starting it last time-- is calculate the age of the universe in this context. How do we express the age of the universe in terms of measurable, cosmological parameters, taking into account the fact that vacuum energy is one of the ingredients of our universe along with radiation and non-relativistic matter, which we have already discussed. So to start that calculation we can write down the first-order Friedmann equation. A dot over A squared is equal to 8 pi over 3 G times Rho and now I'm going to divide Rho into all contributions we know about. Rho sub M, which represents non-relativistic matter, plus Rho sub radiation, which represents radiation, plus vacuum energy, which is our new contribution, which will not depend on time at all. And then to complete the equation there is minus KC squared over A squared. And the strategy here is really simply that because we know the time evolution of each of the terms on the right-hand side, we will be able to start from wherever we are today in the universe-- which will just take from data, the values of these mass densities-- and we will be able to integrate backwards and ask how far back do we have to go before we find the time when the scale factor vanished, which is the instant of the Big Bang. So what we want to do is to put into this equation the time dependents that we know. So Rho sub M of T, for example, can be written as A of T naught divided by A of T cubed, times Rho sub M zero. And all these zeroes, of course, mean the present time. So this formula says, first of all, that the mass density falls off as 1 over the cube of the scale factor. A of T is the only factor on the right-hand here that depends on T. The numerator depends on T sub zero, but not T. The other constant, T is zero in the numerator and Rho sub M zero. Rho sub M zero denotes the present value of the mass density. And the constants here are just rearranged so that when T equals T naught, you just get Rho is equal to its present value. And we can do the same thing for radiation, and here I won't write everything out because most things are the same. The quantity in brackets will be the same but this time it will occur to the fourth power because radiation falls off like the fourth power of the matter-- fourth power of the scale factor-- and then we have Rho radiation zero. And then finally, for the vacuum energy, we will just write on the blackboard what we already know, which is that's independent of time. So this gives us the time dependents of all three terms here. Now we could just go from there, but cosmologists like to talk about mass densities in terms of the fraction of the critical density, omega. So we're going to change the notation just to correspond to the way people usually talk about these things. 
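In symbols, the time dependence just written down is

\[
\rho_m(t) = \left[\frac{a(t_0)}{a(t)}\right]^{3}\rho_{m,0},
\qquad
\rho_{\rm rad}(t) = \left[\frac{a(t_0)}{a(t)}\right]^{4}\rho_{{\rm rad},0},
\qquad
\rho_{\rm vac}(t) = \rho_{{\rm vac},0}.
\]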
So we will recall that the critical density-- which is just that total mass density that makes little K equal to zero and hence the universe geometrically flat-- so Rho sub C, we learned, is 3 H squared over 8 pi G and then we will introduce different components of omega. So I'm going to write omega sub X here where X really is just a stand-in for matter or radiation or vacuum energy. And whichever one of those we're talking about, omega sub X is just a shorthand for the corresponding mass density, but normalized by dividing by this critical density. And then I'm just going to rewrite these three equations in terms of omega instead of Rho. So Rho sub M of T becomes then 3 H naught squared over 8 pi G times the same A of T zero over A of T cubed, but not I'm multiplying, omega sub M zero. And from the definitions we've just written, this equation is just a rewriting of that equation. And we can do the same thing, of course, for radiation. Rho radiation of T is equal to the same factor out front, the same quantity in brackets but this time to the fourth power, and then times omega radiation at the present time. And finally, Rho vac doesn't really depend on time-- but we'll write it as if it was a function of time-- it consists of the same factor of 3 H naught squared over 8 pi G, and no powers of the quantity in brackets, but then just multiplying omega sub vac zero. [ELECTRONIC RINGING] Everybody should turn off their cell phone, by the way. [LAUGHING] OK, sorry about that. OK, now to make the equation look prettier, I'm going to rewrite even this last term as if it has something to do with an omega. And we can do that by defining omega sub K zero to be equal to minus KC squared over A squared of T naught times H naught squared, which is just the last term that appears there [INAUDIBLE] of factor of H squared, which we'll be able to factor out. And doing all that, the original Friedmann equation can now be rewritten as H squared-- also known as A dot over A squared-- can be written as H naught squared-- oh, I'm sorry, one other definition I want to introduce here. This ratio-- A of T naught over A-- keeps recurring, so it's nice to give it a name, and I'm going to give 1 over that a name. I'm going to let X equal the scale factor normalized by the scale factor today. And I might point out that in Barbara Ryden's book, what I'm calling X is just what she calls the scale factor, because she chooses to normalize the scale factor so that it's equal to 1 today. So we haven't done that yet but we are effectively doing it here by redefining a new thing X. Having done that, the right-hand side of the Friedmann equation can now be written in a simple way. It's H naught squared over X squared times a function F of X-- which is just an abbreviation to not have to write something many times-- this is not, by any means, a standard definition. It really is just for today. It allows us to save some writing on the blackboard. So I'm going to, for today, be using the abbreviation F of X is equal to omega sub M zero times X plus omega sub radiation zero times no powers of X plus omega sub vac zero times X to the fourth, and finally plus omega sub K zero times X squared. And this just lists all the things that would occur in parentheses here if we factored everything out. Notice I factored out some powers of X squared, so the powers of X that appear here do not look familiar, but the relative powers do. That is, omega should fall like-- omega matter should fall like 1 over X cubed. 
Omega radiation should fall off like one power faster than that, and it does. This is one less power of X there than there, and omega vacuum should fall off like four powers of X different from radiation, and it does, et cetera. But there's no real offset here that makes the factors there not look familiar. OK, all of this was just to put things in a simple form, but there's one other very useful fact to look at. Suppose we now apply this for T equals T zero, which means for X equals 1. OK, it's true at any time, but in particular we can look at what it says for X equals 1 and it tells us something about our definitions that we could have noticed in other ways-- but didn't notice yet-- which is that we set X equal to 1 here. These just becomes the sum of omegas. The powers of X's all become just ones. And the left-hand side is just H zero squared, which matches the H zero squared here because at T equals T naught H squared is H zero squared, so these H zeros squareds cancel. So applying it to T equals T naught X equal to 1, what you get is simply 1 is equal to omega sum M zero plus omega sub radiation zero plus omega sub vac zero plus omega sub K zero, which gives us the simplest way of thinking about what this omega sub K zero really means. We defined it initially up there in terms of little K, but for this equation, we can see that omega sub K zero really is just another way of writing 1 minus all the other omegas. So you could think of this as being the definition of what I'm calling omega sub k zero. So it's a language in which you essentially think that the total omega has to equal 1, and whatever is not contained in real matter becomes a piece of omega sub k, the curvature or contribution to omega. OK, now it's really just a matter of simple manipulations and I-- the main purpose of defining F of x is to be able to write these simple manipulations simpler than they would be if you had to write out what F of x was every time. We're first just going to take the square root of the key equation up there-- the Friedmann equation-- and we get a dot over a is also equal to x dot over x. Note that the constant of proportionality there, a of t naught-- which is a constant-- cancels when you take a dot over a. So a dot over a is the same as x dot over x. And that-- just taking the square root of that equation-- is H naught over x. Hold on a second. Yeah, we're taking the square root of the equation, so yeah, we had H naught squared over x squared. Here we have H naught over x and then just times the square root of f. And these x's cancel each other. Wait a minute. Oh, I'm sorry. They're not supposed to cancel because I didn't write this quite correctly. That should have been x to the fourth. Apologies. And now here we have x squared. And this can just be-- by manipulating where the x's go-- rewritten as x times dx dt. So I multiply the whole equation by x squared to get rid of that factor on the right, and now on the right-hand side we just have H zero times square root of F. And now I just want to do the usual trick of separating the dx pieces from dt pieces in this equation. And we can rewrite that equation as dt is equal to 1 over H naught times x dx over the square root of F. And maybe I'll rewrite it as the square root of F of x to make it explicit that F depends on x. So this is just the rewriting of this equation, moving factors around, and in this form we can integrate it and determine the age of the universe. 
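Assembled, with the corrected prefactor of H_0 squared over x to the fourth, the definitions and the Friedmann equation read

\[
\rho_c = \frac{3H_0^2}{8\pi G},
\quad
\Omega_{X,0} \equiv \frac{\rho_{X,0}}{\rho_c},
\quad
\Omega_{k,0} \equiv -\frac{kc^2}{a^2(t_0)\,H_0^2},
\quad
x \equiv \frac{a(t)}{a(t_0)},
\]

\[
H^2 = \left(\frac{\dot x}{x}\right)^2 = \frac{H_0^2}{x^4}\,F(x),
\qquad
F(x) = \Omega_{m,0}\,x + \Omega_{r,0} + \Omega_{{\rm vac},0}\,x^4 + \Omega_{k,0}\,x^2,
\]

\[
1 = \Omega_{m,0} + \Omega_{r,0} + \Omega_{{\rm vac},0} + \Omega_{k,0},
\qquad
dt = \frac{1}{H_0}\,\frac{x\,dx}{\sqrt{F(x)}} .
\]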
The present age of the universe can be obtained just by integrating this expression from the big bang up to the present. And that will be the integral of dt from the big bang up to the present-- the sum of all the time intervals from the big bang until now. And it's just equal to 1 over H zero times the integral of x dx over the square root of F of x. And now-- just to think about the limits of integration-- what should the limits of integration be? AUDIENCE: Zero to one. PROFESSOR: I hear zero to one, and that's correct. We're integrating from the big bang up to the present. At the big bang, a is equal to zero and therefore x is equal to zero. And at the present time, t is equal to t naught and therefore x is equal to 1. So we integrate up to one if we want the present age of the universe. We could also integrate up to any other value of x that we want, and it will tell us the age of the universe when the scale factor had that value. So this is the final, state of the art formula for the age of the universe, expressed in terms of the matter contribution to omega, the radiation contribution to omega, the vacuum contribution to omega, and the value of H naught. Those are the only ingredients on the right-hand side there, and then you can calculate the age. And it is the completely state of the art formula. It's exactly what the Planck people did when they told you that the age of the universe was 13.9 billion years-- they used that formula. Now, as far as actually doing the integral, in the general case the only way to do it is numerically, and that's how it's usually done. Special cases can be done analytically. We've already talked about the case where there's no cosmological constant, no vacuum energy, but just matter and curvature. There's another special case which can be done, which is the case that involves vacuum energy and nonrelativistic matter. And it's the flat case, only, that can be done analytically. So it corresponds to omega radiation equals omega k equals zero, and that means that omega matter plus omega vac is equal to 1, because the sum of all the omegas is always equal to 1. So in this case I can write an answer for you. I don't intend to try to derive this answer, but it's worth knowing that it can be written analytically-- that's the main point, I guess. It does get divided into three cases, depending on whether omega matter is larger than, smaller than, or equal to 1. So the first case will be if omega matter zero is greater than 1. And if omega matter is greater than 1, that corresponds to omega vac less than 0, because the sum of the two is equal to 1 in all cases here. So this is not our real universe, but it's a calculation that you can do, and the answer is 2 over 3 H naught, times the inverse tangent of the square root of omega sub m zero minus 1, divided by the square root of omega sub m zero minus 1. So if you plug this integral into Mathematica, you should get that answer or something equivalent to it. For the borderline case, where omega matter zero equals 1, that's the special case where omega vac is equal to 0, because the sum of these is always one. So this special case in the middle is the case we already knew-- it's just the matter-dominated flat universe. So that's two thirds H inverse: it's 2 over 3 H naught. And then, finally, if omega matter zero is less than 1, then omega vac is positive.
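Written out, the three branches of the flat matter-plus-vacuum answer (with Omega vac zero equal to 1 minus Omega m zero) are

\[
t_0 = \frac{1}{H_0}\int_0^1 \frac{x\,dx}{\sqrt{F(x)}} =
\begin{cases}
\dfrac{2}{3H_0}\,\dfrac{\tan^{-1}\sqrt{\Omega_{m,0}-1}}{\sqrt{\Omega_{m,0}-1}}, & \Omega_{m,0}>1,\\[2ex]
\dfrac{2}{3H_0}, & \Omega_{m,0}=1,\\[2ex]
\dfrac{2}{3H_0}\,\dfrac{\tanh^{-1}\sqrt{1-\Omega_{m,0}}}{\sqrt{1-\Omega_{m,0}}}, & \Omega_{m,0}<1,
\end{cases}
\]

and these three branches are values of one smooth analytic function of Omega m zero.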
And this [INAUDIBLE] approximation is our universe, that is, that our universe has possibly zero curvature-- in any case, unmeasureably small curvature-- and very, very small radiation for most of its evolution. So this last case is our universe except for cases that are near the radiation-dominated era, and the formula here is 2 over 3 H naught times the inverse hyperbolic tangent of 1 minus-- square root, excuse me-- of 1 minus omega sub m zero over the square root of 1 minus omega sub m zero. OK, so this is just a result obtained by doing that integral for the special case that we're talking about. Now, I don't know any simpler way to write it except as these three cases. It is, however, a single analytic function, and when you graph it-- and I'll show you a graph-- it is one smooth function right across the range of these three cases, which is similar to what we found the earlier for the flat, matter-dominated case. So let's see. This is not that yet. This is the case that we did a long time ago, actually, the case of a matter-dominated universe with nothing but nonrelativistic matter and possibly with curvature. And I can remind you, here, that what we found for that model is that we tended to get ages there were too young. So if we take a reasonable value for H of 67 to 70 kilometers per second per megaparsec-- which is in this range-- and take a reasonable value for omega-- which is somewhere between, say, 0.2 and 1 depending on what you consider reasonable, this model doesn't work anyway-- but if you take omega anywhere between 0.1 and 1, you get numbers for the age which are in the order of 10 billion years, which is not old enough to be consistent with what we know about the ages of the older stars. And especially if you think that omega should be one, you get a very young age of more like 9 billion years, which is what we found earlier. This is just a graph of those same equations. But, if we include vacuum energy, it makes all the difference. So this now is a graph of those equations. What's shown is the age, T naught, as a function of H and for various values of omega sub m, the same omega sub m that's called omega sub m zero on the blackboard. And shown here are the Barbara Ryden a benchmark point, which is the left-most of these two almost overlapping points. And also shown here is the favored point from the WMAP satellite seven-year data. They lie almost on top of each other. I didn't get a chance to plot the Planck point, which is the one that we would consider the most authoritative these days, but I'll add that before I post the lectures. It lies almost on top of these, and it corresponds to a Hubble expansion rate of a little under 70, and a vacuum energy contribution of about 0.7, and therefore a matter contribution of about 0.3. This curve. And it gives an age of 13.7, 13.8 billion years-- perfectly consistent with estimates of the age of the oldest stars. So this age problem which had been, until the discovery of the dark energy, a serious problem in cosmology for many, many years goes away completely once one adds in the dark energy. So that's it for the age calculation. Are there any questions about the age of the universe? Yes? AUDIENCE: So when you say dark energy, are you using that synonymously with vacuum energy? PROFESSOR: Sorry, yes. I used the word dark energy there and I've been talking about vacuum energy, and what's the relationship? When I said dark energy I really meant vacuum energy. 
In general, the way these words are used is that vacuum energy has a very specific meaning. It really does mean the energy of the vacuum, and by definition, therefore, it does not change with time, period. We don't know for sure what this stuff is that's driving the acceleration of the universe, and hence the name dark energy, which is more ambiguous. I think the technical definition of dark energy is it's whatever the stuff is that's driving the acceleration of the universe. And the other conceivable possibility-- and observers are hard at work trying to distinguish, experimentally, between these two options-- the other possibility is that it could be a very slowly evolving scalar field of the same type that drives inflation that we'll be talking about later. But this would be a much lower energy scale than the inflation of the early universe, and much more slowly evolving. So far, we have not yet found any time variation in the dark energy. So, so far, everything we' have learned about the dark energy is consistent with the possibility that it is simply vacuum energy. Question. AUDIENCE: Is the amount of dark energy related to the amount of dark matter? PROFESSOR: Is the amount of dark energy related to. the amount of dark matter? No. They're both numbers and they differ by a factor of 2 and 1/2 or so, but there's no particular relationship between them that we know of. AUDIENCE: But doesn't dark matter imply that they have a certain attraction to bodies around it, which is a form of energy? PROFESSOR: Yeah, well let's talk about this later. AUDIENCE: Do we have any idea what dark energy is at all? PROFESSOR: OK, the question is, do we have any idea what dark energy is at all? And the answer is probably yes. That is, I think there's a good chance it is vacuum energy. Now if you ask what is vacuum energy, what is it about the vacuum that gives it this nonzero energy, there we're pretty much clueless. I was going to talk about that a little more at the end of today, if we get there. But whatever property of the vacuum it is that gives it its energy-- we know of many, it's just a matter of what dominates and how they add up-- the end result is pretty much the same as far as the phenomenology of vacuum energy. So we understand the phenomenology of vacuum energy, I would say, completely. The big issue, which I'll talk about either at the end of today or next time, is trying to estimate the magnitude of the vacuum energy, and there we're really totally clueless, as I will try to describe. OK, that's it for my slides. OK, I wanted to now talk about another very important calculation, which is basically the calculation which led to the original evidence that the universe is accelerating to begin with. OK, discovery that the universe was accelerating was made, as I said earlier, by two groups of astronomers in 1998, and the key observation was using a type 1a supernovae as standard candles to measure the expansion rate of the universe versus time, looking back into the past. And basically what they found is that when they look back about 5, 6 billion years, the expansion rate then was actually slower than expansion rate now, meaning that the universe has accelerated. And that was the key observation. So the question for us to calculate is, what do we expect, as a function of these parameters, for redshift versus luminosity? These astronomers, by using type 1a supernovae as standard candles, are basically using the luminosity measurements of these type 1a supernovae as estimates of their distance. 
So what they actually measured was simply luminosity verses redshift, and that's what we will learn how to calculate, and the formula that will get will be, again, exactly the formula that they used when they were trying to fit their data-- to understand what their data was telling them-- about possible acceleration of the universe. So the calculation we're about to do is really nothing new to you folks because we have calculated luminosities in another contexts. Now we will just write down the equations in their full glory, including the contribution due to vacuum energy. So we'd like to do these calculations in a way that allows for curvature, even though-- in the end-- we're going to discover that the curvature of our universe is-- as far as anybody can tell-- negligible. But people still look for it and it still very well could be there at the level of one part in 1,000 or something like that. But at the level of 1 part in 100, it's not there. So we begin by writing down the Robertson-Walker metric, ds squared is equal to minus c squared dt squared plus a squared of t times dr squared over 1 minus little k times r squared plus r squared d theta squared plus sine squared theta d phi squared, end curly brackets. OK, so this is the metric that we're familiar with. We're going to be interested, mainly, in radial motion, and if you're interested mainly in radial motion, it helps to simplify the radial part of this metric by using a different radial variable. And we've done this before, also. At this point, we really need to pick whether we're talking about open or closed. If we're talking about flat, we don't need to do anything, really. If you eliminate k, here, the radial part is as simple as it gets. But if we want talk about open or closed, it pays to use different variables, and the variable that we'd use would be different in two cases. So I'm going to consider the closed-universe case. And I'm going to introduce an angle, sine of psi being equal to the square root of k-- which is positive in this case-- times little r. And this psi is, in fact, if you trace everything back, the angle from the w-axis that we originally used when we constructed the closed Robertson-Walker universe in the first place. But now we're essentially working backwards. We've learned to know and love this expression, so we're going to just rewrite it in terms of the new variable, sin of psi equals the square root of k times r. And from this, by just differentiation, you discover that deep psi is equal to the square root of k times dr over cosine psi. And that is equal to the square root of k times dr over the square root of 1 minus kr squared. So this, then, fits in very nicely with the metric itself. The metric is just the square of this factor, and therefore it is just proportional to deep psi squared all by itself. And rewriting the whole metric, we can write it as ds squared is equal to minus c squared dt squared plus a new scale factor-- which I'll define in a second in terms of the old one-- times deep psi squared plus-- now, the angular term becomes nonstandard instead of just having an r squared here, we have sine squared of psi. Which is, of course, proportional to r squared. And that multiplies d theta squared plus sine squared theta d phi squared, end curly brackets. And a tilde is just equal to our original a divided by the square root of k. So we scaled it. And I should mention that I'm putting a tilde here because we've already written an a without a tilde there, and they're not equal to other. 
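Written out, the change of radial variable being described is

\[
ds^2 = -c^2\,dt^2 + a^2(t)\left\{\frac{dr^2}{1-kr^2} + r^2\left(d\theta^2 + \sin^2\theta\,d\phi^2\right)\right\},
\qquad
\sin\psi \equiv \sqrt{k}\,r,
\]

and since d psi = sqrt(k) dr / sqrt(1 - k r squared), the metric becomes

\[
ds^2 = -c^2\,dt^2 + \tilde a^2(t)\left\{d\psi^2 + \sin^2\!\psi\left(d\theta^2 + \sin^2\theta\,d\phi^2\right)\right\},
\qquad
\tilde a \equiv \frac{a}{\sqrt{k}} .
\]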
If you want to just start here, you can, and then there's no need for the tilde. You could just call this the scale factor and it doesn't need a tilde. The tilde is only to distinguish the two cases from each other. AUDIENCE: [INAUDIBLE] PROFESSOR: Sorry? AUDIENCE: Do a and a tilde have different units? PROFESSOR: Didn't hear you? AUDIENCE: Do a and a tilde have different units? PROFESSOR: They do have different-- yes, a and a tilde do have different units. That's right, and that's because in what one might call conventional units here, r is some kind of a coordinate distance. So in my language I'd measure it in notches, and then a has units of meters per notch. On the other hand, here psi is an angle. It is naturally dimensionless. So one doesn't introduce notches in this case, and therefore a just has units of length-- a tilde, rather-- just has units of length. OK, now we want to imagine that some distant galaxy is radiating-- or a distant supernova, perhaps-- and we want to ask, what is the intensity of the radiation that we receive on earth? And we'll draw the same picture that we've drawn at least twice before, if not more. We'll put the source in the middle. We'll imagine a sphere surrounding the source, with the source of the center, and we'll imagine that the sphere has been drawn so that our detector is on the surface of the sphere. This will be the detector. And we'll give a symbol for the area of the detector. It will be a. And we'll imagine drawing this in our co-moving coordinate system where psi is our radial variable. So the sphere here will be at some value, psi equals psi sub d-- where d stands for detector-- and psi equals zero at the center. OK, I'm going to make the same kind of arguments we've made in the past. We say that the fraction of light that hits the sphere-- which hits the detector-- is just equal to the area of the detector over the area of the sphere. Now, the area of detector is, by definition, a. The area of the sphere we have to be a little bit careful about because we have to calculate the area of the sphere using the metric. Now, the metric is slightly nontrivial, but the sphere is just described by varying theta and phi. And if we just vary theta and phi, this piece of the metric is what we're used to-- it's the standard Euclidean spherical element-- and the coefficient that multiplies is just the square of the radius of that sphere. So the radius of our sphere is a tilde times sine psi. That's the important thing that we get from the metric. The thing that multiplies d theta squared and d phi squared, et cetera, is just the square of the radius of the sphere that determines distances on the surface of the sphere. So what goes here is 4 pi times the radius squared. So it's 4 pi times a tilde squared of t naught times sine squared of psi sub d. It's t naught because we're interested in what happens when we detect this radiation today. Our detector is detecting it today and has area a today, and we want to compare it with entire sphere that surrounds this distant source as that sphere appears today, so that all of the distances are measured today, and therefore can be properly compared. The other thing we have to remember is the effect of the redshift. The redshift, we've said earlier, and it's just a repetition, it reduces the energy of each photon by a factor of 1 plus z, the redshift, and similarly it reduces the rate at which photons are arriving at the sphere by that same factor-- 1 plus z. 
It basically says that any clock slows down by a factor of 1 plus z, and that clock could be the frequency of the photon-- which affects its energy proportionally-- or the arrival rate of the photons. That's also a clock that get time dilated in the same way. So we get two factors of 1 plus z sub s, I'll call it. s for z of the source, the z between the time of emission at the source to a time where it arrives at us today. So 1 plus z is equal to a of t naught divided by a of t emission. I'll just put it here to remind us. One from redshift of photons and one from arrival rate. OK, putting that together we can now say that the total power received is equal to the power originally emitted by the source-- p will just be the power emitted by the source-- divided by 1 plus the redshift z of the source squared, and then just times the fraction. A over 4 pi a twiddle squared sine squared psi d. And then, finally, what we're really interested in is J-- the intensity of the source as we measure it-- which is just the power received by our detector divided by its area. So from this formula we just get rid of the A there. We can write it as the power emitted by the source, capital P, divided by 4 pi 1 plus z sub s squared a twiddle squared of t naught times sine squared psi sub d. Now that effectively is the answer to this question except that we prefer to rewrite it in terms of things that are more directly meaningful to astronomers. a twiddle is not particularly meaningful to the astronomer. The redshift is, that's OK. But a twiddle is not particularly meaningful to an astronomer, nor is sine squared of psi sub d. Now, many astronomers who know general relativity can figure this out, of course, but it's our job to figure it out. We would like to express this in terms of things that are directly measured by astronomers. So to do that, first of all, a tilde-- to get a tilde related to other things, it really just goes back to the definition that we gave for omega sub k sub 0. And if you look back at that definition, you'll find that a of t naught tilde is just equal to c times the inverse of the present Hubble expansion rate times the square root of minus omega sub k comma zero. And this is for the close-universe case. The closed-universe case, little k is positive. But if you remember the definition of omega sub k naught-- maybe I should write it back on the blackboard, or is it findable? It's not findable. The original definition big A for this omega sub k naught was just minus Kc squared over a squared of t naught H naught squared. So this is just rewriting of that, and for our closed-universe case, k is positive, omega sub k naught is negative, this is then the square root of a positive number with that extra minus sign, so everything fits together. So that takes care of expressing a tilde in terms of measurable things. We use this formula. Expansion rate is measurable, omega sub k comma zero is measurable. And then, in terms of sine squared psi sub d, we obtain that by reminding ourselves that we know how to trace light rays through this universe. Light waves just travel locally at the speed c, they travel locally on null geodesics. So if we're looking at a radial light ray, this metric tells us-- if we apply it to a radial light ray where ds squared equals minus dc squared-- where ds squared has to be zero-- that says that minus c squared dt squared plus a twiddle squared of t d psi squared equals zero. This is just the equation that says we have a null line, a null radial line. 
That implies that deep psi dt has to equals c over a twiddle of t, which is a formula that, in other cases, we try to motivate just by using intuition. But, in that case, we were probably not talking about curved universe where the intuition is a little bit less strong. But you see it does follow immediately from assuming that we're talking about a null geodesic in the Robertson-Walker metric. Now, the point is that the Friedmann equation, which we've been writing and rewriting, tells us what to do with that. The Friedmann equation basically allows us to integrate that because it allows us to express a in terms of x, and we know some things about x. So let me try to get that on the blackboard, here. We know that H squared-- which is x dot over x squared-- can be written as H zero squared over x to the fourth times this famous function F of x. And psi of a given redshift, according to this equation, could just be obtained by integration of the time that the source emits the radiation up to the present time of c over a twiddle of t times dt. And now to rewrite this in terms of redshift, we can use the fact that 1 plus z is equal to 1 over x because we know how to relate 1 plus z to the scale factor. 1 plus z is just the ratio of scale factors and it's precisely the ratio that we called 1 over x. And we can then differentiate this equation and find that dz is equal to minus a twiddle of t naught over a twiddle of t squared times a twiddle dot times dt, rewriting x in terms of a of t naught over a of t. And this, then, is equal to minus a twiddle of t naught times H of t times, oops, times dt over a tilde of t. And this allows us to replace the dt that appears there and the final relationship is that psi of s is equal to 1 over a tilde of t naught times the integral from zero up to z sub s of c over H of z dz. Yeah, I think that looks like it works. So it really is just a matter of changing variables to express things in terms of H and integrating over z instead of integrating over t. And the usefulness of that is simply that z is the variable that astronomers use to measure time. And this then can be written in more detail, and it really finishes the answer more or less. Psi of z sub s can be written just-- writing in what a tilde is according to our definition here-- square root of the magnitude of omega comma k zero-- this could also have been written as minus omega of k comma 0 because we know it's a negative quantity-- and then times the integral from 0 to z sub s and integral dz. Now I'm just writing H as a function of z. Earlier we had written H-- it's no longer on the screen, I guess-- earlier we had written H in terms of F of z, oh, excuse me, F of x. x is related to z simply by this formula. So since the integral was written with z as the variable of integration, I'm going to rewrite the integrand in terms of z, but it really is just our old friend F of x. So it would be the square root of omega sub m zero 1 plus z cubed plus omega sub radiation zero times 1 plus z to the fourth, all inside the square root, here, plus omega sub vac zero plus omega sub k zero times 1 plus z squared. And this, then, is the answer for psi of z. And then we put that into here and replace a twiddle by this, and we get a formula for what we're looking for, an expression for the actual measured intensity of the source at the Earth in terms of the parameters chosen here-- the current values of omega and the redshift of the source. And that's all that goes into this final formula. 
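Putting the pieces together, the final result of this calculation is

\[
J = \frac{P}{4\pi\,(1+z_s)^2\,\tilde a^2(t_0)\,\sin^2\!\psi(z_s)},
\qquad
\tilde a(t_0) = \frac{c}{H_0\sqrt{-\Omega_{k,0}}},
\]

\[
\psi(z_s) = \sqrt{-\Omega_{k,0}}
\int_0^{z_s}\frac{dz}{\sqrt{\Omega_{m,0}(1+z)^3 + \Omega_{r,0}(1+z)^4 + \Omega_{{\rm vac},0} + \Omega_{k,0}(1+z)^2}}\,,
\]

where the two factors of (1 + z_s) come from the photon redshift and from the reduced arrival rate, and the closed-universe case (Omega k zero negative) is assumed throughout.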
So if you know the current values of omega and the redshift of the source, you can calculate what you expect the measured intensity to be in terms of the intrinsic intensity. And that's exactly what the supernova people did in 1998 using exactly this formula-- nothing different-- and discovered that, in order to fit their data, they needed a very significant contribution from this vacuum energy, namely a contribution on the order of 60 or 70%. So we will stop there for today. We will continue on Thursday to talk a little bit more about the physics of vacuum energy.
[MIT 8.286 The Early Universe, Fall 2013. Lecture 2: Inflationary Cosmology: Is Our Universe Part of a Multiverse? Part II.]
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: So I want to begin by reviewing a little bit what I said last time in terms of this overview lecture. And then we'll finish the overview lecture. So summary of last lecture is actually on five slides. It's not all on this one slide. We started by talking about the standard Big Bang, by which I mean the Big Bang without thinking about inflation. And I pointed out that it really describes only the aftermath of a bang. It begins with the description of the universe as a hot, dense soup of particles which more or less uniformly fills the entire available space, and the entire system is already expanding. Cosmic inflation is a prequel to the conventional Big Bang story. It describes how repulsive gravity, which in the context of general relativity, can happen as a consequence of negative pressure. This repulsive gravity could have driven a tiny patch of the early universe into a gigantic burst of exponential expansion. And our visible universe would then be the aftermath of that event. As this happened, the total energy of this patch would be very small and could even be identically 0. And the way that's possible is caused by the fact that the gravitational field that fills the space has a negative contribution to the energy. And as far as we can tell in our real universe, there are about equal to each other. They could cancel each other exactly as far as we can tell. So the total energy could in fact be exactly zero, which is what allows one to build a huge universe starting from either nothing or almost nothing. Inflation. The next item is evidence for inflation. Why do we think there's at least a good chance that our universe underwent inflation? And I pointed out three items. The first was that inflation could explain the large scale uniformity that we observe in the universe and that large scale uniformity is seen most clearly in the cosmic microwave background radiation, which is observed to be uniform to one part in $100,000, that is same intensity all across the sky no matter what direction you look, once you account for the Earth's motion, to an accuracy of one part in 100,000. Secondly, inflation can explain a rather remarkable fact about this quantity omega, where omega is defined as the actual mass density of the universe rho divided by rho critical, the critical mass density which is the density that would make the universe precisely flat. The statement that that ratio is equal to 1 we know is accurate to about 15 decimal places at one second after the Big Bang. And prior to inflation, we didn't really have any explanation for that at all. But inflation drives omega to one and gives us an explanation for why, therefore, started out so extraordinarily close to 1. And in fact, it makes a prediction. We'd expect that if inflation is right, omega should still be one today. And we now have measured omega to be 1.0010 plus or minus 0.0065, which I think is fabulous. Finally, inflation gives an explanation for the inhomogeneities that we see in the universe. 
It explains them as quantum fluctuations which happened during the inflationary process and, most importantly really, as inflation was ending, the quantum fluctuations cause inflation to go on for a little bit longer in some regions than others. And that sets up these inhomogeneities. Today, we can observe these inhomogeneities most accurately in the cosmic background radiation. Inhomogeneities, of course, are huge at the level of galaxies, so they're obvious. But it's hard to connect them to the early universe. So we can make our most precise comparison between what we observe and theories of the early universe by making careful observations of the cosmic background radiation itself, which has these ripples in the intensity; it is not quite uniform. There really are ripples at the level of one part in 100,000, which can now be observed. And inflation makes a clear prediction for the spectrum of those ripples, how the intensity should vary with wavelength. And I showed you this graph last time from the Planck satellite. The agreement between the prediction and the data is really marvelous. So we'll be coming back to that near the end of the course. Finally, in the last lecture, I began to talk about inflation and the possible implications for a multiverse, the idea that our universe might be embedded in a much larger thing consisting of many universes, which we call a multiverse. And the key point is that most inflation models tend to become eternal. And that is, once inflation starts, it never stops. And the reason for that, basically, is that the metastable material, this repulsive gravity material that's causing the inflation, decays, but it also exponentially expands. And for typical models, the exponential expansion completely overpowers the decay. So even though it's an unstable material that decays, the total volume of it actually increases exponentially with time rather than decreases. Decays happen, however, and wherever a decay happens, it forms what we call a pocket universe. We would be living in one of those pocket universes. And the number of pocket universes grows exponentially with time as the whole system grows and goes on, as far as we can tell, forever. And that is the picture of the multiverse that inflation tends to lead to. Finally, this is my last summary slide and then we'll start new material. At the very end of the lecture, I talked about a problem, which is very important in our present day thinking about physics and cosmology, and that is the nightmare that this discovery of dark energy leads to. What was discovered in about 1998 is that the expansion of the universe is not slowing down under the influence of gravity as one might expect, but instead, it's actually accelerating. The universe is expanding faster and faster. And that indicates that space today is filled with some repulsive gravity material, which we call the dark energy. And the simplest interpretation of the dark energy is that it is simply vacuum energy, the energy of empty space. A vacuum energy density would have exactly the properties that we observe, so it seems natural to draw that connection. Vacuum energy, at first, might seem surprising. If a vacuum's empty, why should it have any mass density? But in a quantum field theory, it's really not surprising because in a quantum field theory, the vacuum is really not empty. In a quantum field theory, there's no such thing as actual emptiness. Instead, in the vacuum, one has constant quantum fluctuations of fields.
And in our current theory of particle physics, the standard model of particle physics, there's even one particular field called the Higgs field, which has a non-zero mean value in the vacuum besides fluctuations. So the vacuum is a very complicated state. What makes it the vacuum is simply that it's alleged to be the state of lowest possible energy density, but that doesn't have to be zero and doesn't even look like there's any reason why it should be zero. So there's no problem buying the fact that maybe the vacuum does have a non-zero energy density. The problem comes about though when we try to understand the magnitude of this vacuum energy. If it was going to have a vacuum energy density, we'd expect it to be vastly larger than what is observed in the form of the expansion acceleration of the universe. So a typical order of magnitude in the particle physics model for the vacuum energy is, in fact, about a full 120 orders of magnitude larger than the number that's implied by the acceleration of the universe. So that is a big problem. I began to talk about a possible resolution to that problem. It's only a possible resolution. Nobody has really settled on this. But there's a possible resolution which comes out of String theory, and in particular from this idea, which is called the landscape of String theory. Most String theorists believe that String theory has no unique vacuum, but instead, there's a colossal number, perhaps something like 10 to the 500, different metastable states, which even though they are metastable, are incredibly long-lived, long-lived compared to the age of the universe as we know it. So any one of these 10 to the 500 different states could serve as effectively the vacuum for one of these pocket universes. And the different pocket universes would presumably fill the whole set of possible vacua in the landscape, giving reality to all these possibilities that come about in String theory. And in particular, each different type of vacuum would have its own vacuum energy density. And because there are both positive and negative contributions-- I think I didn't read that out loud-- but there are both positive and negative contributions that arise in quantum field theories. So the vacuum energy of a typical state could be either positive or negative. And what we would expect of these 10 to the 500 different vacua is that they would have a range of energy densities that would range from something like minus 10 to the 120 to plus 10 to the 120 times the observed value. So the observed value would be in there, but would be an incredibly small fraction of the universes. Yes? AUDIENCE: Does this mean that so many pocket universes could be closed and opened as well in terms of their geometry? Or-- PROFESSOR: They're actually predicted to be open due to complications about how they form, which I'm not going to go into. But they should all be open, but very close to flat for the ones that under a lot of inflation. So they'd be indistinguishable from flat, but technically, they'd be open. Yes? AUDIENCE: Is the minus 10 to the 120 plus 10 to the 120 just chosen because we're off 520 orders of magnitude, or is it predicted somewhere else? PROFESSOR: Well, when we say we're off by 120 orders of magnitude, the more precise statement is that the estimate of what a typical range of the energy should be is 10 to 120 times the observed value. So this is basically just a restatement of that. 
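To attach a number to that mismatch, here is a small back-of-the-envelope check in Python. None of this is from the lecture: I am using the Planck density, c to the fifth over h-bar times G squared, as a stand-in for a "natural" particle-physics scale, and round illustrative values for the Hubble rate and the dark-energy fraction, so the result should only be read as an order of magnitude.

import math

G    = 6.674e-11    # Newton's constant, m^3 kg^-1 s^-2
hbar = 1.055e-34    # reduced Planck constant, J s
c    = 2.998e8      # speed of light, m/s
H0   = 2.2e-18      # illustrative Hubble rate, 1/s (roughly 68 km/s/Mpc)

rho_crit   = 3*H0**2 / (8*math.pi*G)   # critical density, about 9e-27 kg/m^3
rho_vac    = 0.7 * rho_crit            # observed dark-energy density, roughly 70% of critical
rho_planck = c**5 / (hbar * G**2)      # Planck density, a crude "natural" scale, about 5e96 kg/m^3

print(math.log10(rho_planck / rho_vac))   # about 123

That factor of roughly 10 to the 123 is the "120 orders of magnitude" being referred to.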
And you might wonder why I didn't put 5 times 10 to the 120, but in fact, the 120 itself is only accurate to within a few orders of magnitude, so 5 times that wouldn't have made any difference in the way one actually interprets those numbers. 10 to the 123 is probably slightly more accurate number actually. But this is good enough for our purposes. Yes? AUDIENCE: Just a general question about inflation properties. We think of attractive gravity as driving the motion of objects through space. So why do we think of repulsive gravity like to drive the expanse of space itself? PROFESSOR: Well, for one thing, it does actually behave differently. Repulsive gravity, repulsive gravity that appears in general relativity, is not just ordinary gravity with the opposite sign. Ordinary gravity has the property that if I have two objects to attract each other with a force proportional to the masses of those objects. This repulsive gravity is actually an effect caused by the negative pressure in the space between. So if I have two objects, they will start to accelerate apart by the amount that's totally independent of the masses. This is not really the masses that's causing it. So the whole force was completely different, so we can't really just compare them. In either case, when everything is moving apart, it's really a matter of viewpoint when you think of the whole space as expanding or whether you think of the particles as moving through space. In relativity, there's no way to put a needle on space, put a pen in it and say this is stationary. So we really can't say that the space is moving or not. In cosmology, we usually find that the simpler picture and the one that we will generally use is that space expands with the matter. It gives a much simpler description of how things behave. Good question. Yes? AUDIENCE: I have a question going back a few slides. PROFESSOR: Sure, you want me to go back. AUDIENCE: How the energy of the early universe seemed to be close to zero. Are there theoretical models that would explain or that would say it should be exactly 0? PROFESSOR: Yeah, there are. I didn't mention it. But if the universe is closed, which is a possibility. Even if it's very nearly flat, it could still be closed. If it were closed, it would have exactly zero energy. Yes? AUDIENCE: So the cosmic background, microwave background, picks ups that it's pretty similar in all directions once correct for it. And this leads to the thought that the cosmological principle all over the universe is pretty identical. Is it possible that we are actually located in just a smaller like circular pathway and it may be different than [? allowed. ?] And there's many of these patches, so we-- there's actually like a speckled form. PROFESSOR: OK. So if you didn't hear the question, I was asked if it's possible that the universe is not really homogeneous on a very large scales, but really speckled, just that speckles are large and our speckle might look very different from other speckles that are far away. And that was the question. And the answer is certainly if the multiverse picture is right. That is exactly the case that's being predicted. These other pocket universes could be viewed as other speckles in your language, and they'd be very different from what we've observed. So inflation actually changes one's attitude about this particular question. Back in the old days, before inflation, the uniformity of the universe had no explanation, so it became a postulate. 
And nobody postulates that something is uniform out to some particular scale. If you are going to make a postulate, you just postulate that the universe is uniform. So that was the postulate that was in use. But now that we think of the homogeneity of the universe as being generated by a dynamical process, inflation, then it's a natural question to ask, what is the scale of the homogeneity that it generates. And it's certainly a scale that's much larger than what we can observe. So we don't really expect to see inhomogeneities caused by different pockets of inflation, but the model seems to make it very plausible that that is what we would see if we could see far enough. Any other questions while we're on a little break here? Yes? AUDIENCE: If the universe is expanding, then I think we are expanding as well, so how can we observe the change if everything is increasing in scale? PROFESSOR: OK. That's a very good question. The question was if the universe is expanding, then the universe is everything. So everything is expanding. And if everything's expanding, when you compare things with rulers, they have the same length. So how would you even observe that everything was expanding? And the answer to that is that when we say the universe is expanding, we're not really saying that everything is expanding. When we say the universe is expanding, we really are saying that the galaxies are getting further apart from each other, but individual atoms are not getting bigger. The length of a ruler, determined by the number of atoms and the size of those atoms in their ground state, does not expand with the universe. So the expansion is now partially driven by the repulsive gravity that exists now, which is causing the universe to accelerate. But most of the expansion is really just a residual velocity from the Big Bang, whatever caused it then. I would assert inflation. And it's just a matter of coasting outward, not being pulled outward, and that coasting outward does not cause atoms to get bigger. Yes? AUDIENCE: Is the current idea that the expansion, like the acceleration, is indefinite or are we going to reach a stop point? PROFESSOR: OK. What will be the ultimate future, I'm being asked here. And the answer, as you might guess, is nobody really knows. But in the context of the kind of models I'm talking about, there is a pretty definite answer, which is that our pocket universe-- I'll answer at the level of our pocket universe and I'll answer at the level of the multiverse as a whole. At the level of our pocket universe, our pocket universe will thin out. Life will eventually become impossible because the matter density will be too low. It will probably decay. Our vacuum is probably not absolutely stable. Very few things in String theory are, if something like String theory is the right theory. But even though it will be decaying, it will be expanding still faster than it decays. So the decay will cause holes in our universe. It will become like Swiss cheese. But the universe, as a whole, will just go on exponentially expanding, perhaps forever, as far as we can tell, forever. The multiverse is a more vibrant object. The multiverse, as I always said, would continue to generate new pocket universes forever. So the multiverse would forever be alive even though each pocket universe in the multiverse would form at some time and then ultimately die, die of thinning out into nothingness. Yes? AUDIENCE: Just to add to that. Do you believe that maybe it's a cyclic process?
So it expand and decay and then come back [? yet again ?] and then happen all over again? PROFESSOR: OK. The question is could it be a cyclic process that expands, reaches maximum, comes back and crunches, and expands again. That is certainly a possibility, and there is some people who take it very seriously. I don't see any evidence for it. And furthermore, there never really was and still really isn't a reasonable theory of the bounce that would have to be a part of that theory. Yeah? AUDIENCE: But would it be the expansion overtaking the decay in our own vacuum that our universe exists in, our own little pocket vacuums of ultimate decay within our system create more little pocket universes-- [INTERPOSING VOICES] PROFESSOR: Within. Yes. Yeah, that's correct. They would not be a big fraction of the volume of our universe, but, yes. The pieces in our universe that might decay in the future would produce new pocket universes. Most of them would be very low energy pocket universes that would presumably not create life, but some of them could nonetheless have a high enough energy to create life. So we would expect new, thriving universes to appear out of our own pocket universe as it reaches this expansion death. Yes? AUDIENCE: What does distinguishes different vacua besides the cosmological constant? PROFESSOR: The question is what distinguishes the different vacua besides the cosmological constant. And the answer is that they can distinguish in many, many ways. What fundamentally distinguishes them is the rearrangement of the innards within the space, maybe a little bit more precise without trying to get into details which I probably don't understand either. But what's going on is that String theory fundamentally says that space has nine dimensions, not the three that we observe. And the way the nine becomes three is that the extra dimensions get twisted up into tiny little knots, so they occupy too small a length to ever be seen. But there are many different ways of twisting up those extra dimensions, and that's really what leads to these very large numbers of possible vacua. The extra dimensions are twisted up differently. So that means that as far as the low energy physics in these different vacua-- practically everything could be different, even the dimension of space could be different. You could have different numbers of dimensions compactified. And the whole particle spectrum would be different because what we view as a particle is really just the fluctuation of vacuum. And if you have a different structure to the vacuum itself, the kinds of particles that exist in it could be totally different. So the physics inside these pocket universe could look tremendously different from what we observe even though that we're predicating the whole description on the idea that, ultimately, it's the same laws of physics that apply everywhere. Other questions? Yes? AUDIENCE: [INAUDIBLE]? PROFESSOR: OK. I think you're asking about if we have a small patch, then that goes inflation and the rest doesn't, how does the patch end up dominating because it started out with just a small fraction of the particles. Doesn't it still have the same small fraction of particles? Is that what you're asking? AUDIENCE: Well, I guess. If you start out with the smooth particles being the excessive matter, and one of the particles behaves and the other two particles [INAUDIBLE] even if it's still just two particles? PROFESSOR: Right. 
It isn't the number of particles conserved, basically, as all this happens, is I think what you're asking. AUDIENCE: Well, even if it eventually [? is called ?] expanded wave because the second part will [INAUDIBLE] PROFESSOR: Well, let's see. I'm having a little trouble hearing you. But let me make a definite-- let me make a broader statement, and you can tell me if I've answered what you're asking about or not. When one of these patches undergoes the exponential expansion of inflation, the energy is really not very well described as particles at all. It's really described in terms of fields. And fields sometimes behave like particles, but not always. And in this case-- in principle, there's a particle description too, but it's not nearly as obvious as the field description. So you have energy stored in fields and the region grows. The energy stored in those fields actually increases as the region goes. The energy density remains approximately constant. And that sounds like it would violate the conservation of energy, but we discussed the fact that what saves conservation of energy and allows this to happen in spite of conservation of energy is that as the region expands, it is filled by a gravitational field, which is now occupying a larger and larger volume, and that gravitational field has a negative energy density. So the total energy, which is what has to be conserved, remains very small and perhaps zero, and the region can grow without limit while still having this very small or zero total energy. Then, eventually it decays and when it decays, it produces new particles, and the colossal number of new particles, and those would be the stuff that we would be made out of. And that number is vastly larger than the number of particles that may have been in this region when the inflation started. Yes? AUDIENCE: So does the emergence of [INAUDIBLE] just purely a conservation of energy? Like, what do you need to make these [? an organism ?], the negative energy, zero [INAUDIBLE] I guess. PROFESSOR: Are you saying the conservation of energy maybe controls the whole show, and that this is really the only thing consistent with conservation of energy? I think that's probably an exaggeration because if nothing happened, that would conserve energy too. So I think one needs more than just the conservation of energy to be able to describe how the universe is going to evolve. OK. Let me continue. Get back to the beginning there, back to the end. OK. So I just finished talking about the landscape of String theory and how it offers all these possible vacua. So in particular, and this is now the new stuff, if there are 10 to the 500 vacua of String theory, for example. We don't really know the number, but something crazy like that. And if only one part in 10 to the 120 of them have this very small energy, thus the energy densities are spread from plus 10 to the 120 times what we observe to minus 10 to the 120 times what we observe. That would mean that what we observed would be a narrow slice in the middle there occupying about 10 to the minus 120th of the length of that spread. We would then expect-- and all this, of course, is very crude estimates. It's not really the numbers that's important, it's whether or not you believe the ideas. But we'd expect then that about 10 to the minus 120 of the different vacua would have an acceptably low vacuum energy density. But that's still a colossal number because 10 to the minus 120 times 10 to the 500-- you add the exponents-- is 10 to the 380. 
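Just to make that counting explicit, with the rough numbers quoted above treated purely as illustrative inputs:

log10_total_vacua   = 500    # rough number of String theory vacua quoted above
log10_good_fraction = -120   # rough fraction with an acceptably small vacuum energy
print(log10_total_vacua + log10_good_fraction)   # 380, i.e. about 10**380 acceptable vacua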
So we would still predict that even though they'd be very rare, there might be 10 to the 380 different kinds of vacua, all which would have a vacuum energy density as well as what we observe. So there's no problem finding, in the landscape, vacua whose energy density is as low as what we observe. But then there's the question if they're so incredibly rare, wouldn't it take a miracle for us to be living in one of these incredibly unusual vacua with such an extraordinarily low vacuum energy density. That then leads to what is sometimes called Anthropic considerations or perhaps a selection effect. And to see how that works and make it sound not as crazy as it might sound otherwise, I want to begin by giving an example where I think one could really say that this effect happens. And that is suppose we just look at our own position in our own visible universe and look at, for example, the mass density. Where we're actually living is incredibly unusual in many ways, but one of the ways we could talk about, which is just simple and quantitative, is the mass density. The mass density of the things around this room is on the order of one gram per centimeter cubed give or take a factor of 10. The factor of 10 is not very important for I'm talking about here. The point is that the average mass density of the universe, the visible universe, is about 10 to the minus 30 grams per centimeter cubed. It's really unbelievable how empty the universe is. It's actually a far lower mass density than is possible for us to achieve in laboratories on Earth with the best vacua that we can make in our laboratories. So where we're living has a mass density of 10 to the 30 times the average of the visible universe. So we're not living in a typical place in our visible universe. We're living in an extraordinarily atypical place within our visible universe. And we can ask how would we explain that. Is it just a matter of chance that we're living in a place that's such a high mass density? Doesn't seem very likely if it's a matter of chance. Is it luck? Is it divine providence, whatever? I think most of us would admit that it's probably a selection effect. That that's where life happens. Life doesn't happen throughout most of the visible universe, but in these rare places, like the surface of our planet, which is special in more ways than just the mass density, but the mass density alone is enough to make it extraordinarily special. We're off by a factor of 10 to the 30 from the average of our environment. So if we're willing to explain why we live in such an unusual place within our visible universe and explain that as simply a requirement for life, then it doesn't seem to be such a stretch to maybe imagine-- and it was Steve Weinberg who first emphasized this in 1987. Certainly not the first person to say it, but the first person to say it and have people sometimes believe him. He pointed out that may be the low energy-- the low vacuum energy density could be explained the same way. If we're not living in a typical place within our visible universe, there's no reason for similar ideas to expect that we should be living in a typical place in the multiverse. Maybe only a small fraction of these different types of pocket universe's can support life. And maybe the only way to have life is to have a very small value for the vacuum energy density. And there is some physics behind that. Remember this vacuum energy density drives expansion-- acceleration, I should say. 
So if the vacuum energy density were significantly larger than what we observe, the universe would accelerate incredibly rapidly and would fly apart before there'd be any time for anything interesting to happen like galaxies forming. Weinberg based his arguments here on the assumption that galaxies are a necessity for life. Yes? AUDIENCE: So that's what I was going to ask. Why do we assume that our universe is the only one that could have like-- why couldn't just all the multi-universes have like-- PROFESSOR: Right. Right. Well, that's OK. That is what I am talking about. I'm trying to answer it. So if the vacuum energy density were significantly larger than what we observe, the universes would fly apart so fast that there could never be galaxies and therefore never planets, none of the things that we think of as being associated with life as we know it. Conversely, if the vacuum energy density were negative, but had a magnitude large compared to what we observed, that would be a large negative acceleration, an implosion. And those universes would just implode, collapse, in an incredibly short amount of time, much too fast for life, of any type that we know of, to form. So there is a physical argument which suggests that life only forms when the vacuum energy density is very low. And Weinberg and his collaborators-- and this is the same Steve Weinberg who wrote the First Three Minutes that we're reading-- calculated what the requirements would be for galaxy formation. And they decided that, within about a factor of 5 or so, the vacuum energy density would have to be about the same as what we observe or less in order for galaxies to form. So it seems like a possible explanation. It's certainly not a generally accepted explanation. These things are very controversial. I guess that's, in fact, what I was going to talk about on my next slide. Some physicists buy this selection effect idea. I tend to buy it. But a number of physicists regard it as totally ridiculous, saying you could explain anything if you accept arguments like that. And there's some truth to that. You can explain a lot of things if you're willing to just say, well, maybe that's needed for life to happen. So because of that, I would say that these selection effect arguments or anthropic arguments should always be viewed as the arguments of last resort. That is, until we actually understand the landscape of String theory, which we do not in detail, and until we actually understand what it takes to create life, we really can't do more than give plausibility arguments to justify these anthropic explanations. But these anthropic arguments do sound sensible. I think there's nothing illogical about them, and they could very well be the explanations for some things. As I pointed out, I think it is the explanation for why we are living in such an unusual place within our own visible universe. And it means that these selection effect arguments become very attractive when the search for more deterministic explanations has failed. And in the case of trying to explain the very small vacuum energy density, I think other attempts have failed. We don't have any calculational, deterministic understanding for why the vacuum energy should be so small. So is it time to accept this explanation of last resort, that the vacuum energy density is small because it has to be for life to evolve? Your guess is as good as mine. I don't really know.
But I would say that, in the case of the vacuum energy density, people have been trying very, very hard for quite a few years now to try to find a particle physics explanation for why the vacuum energy has to be small, and nobody's really found anything that anybody has found-- that any large number of people have found to be acceptable. So it is certainly a very serious problem. And I think it is time to take seriously the argument of last resort, that maybe it's that way only because in the parts of the multiverse where it's not that way, nobody lives there. So I would say it's hard to deny, as of now, that the selection effect explanation is the most plausible of any explanation that is known at the present time. To summarize things-- I'm actually done now, but let me just summarize what I said to remind you where we're at. I've argued that the inflationary paradigm is in great shape. It explains the large scale uniformity. It predicts the mass density of the universe to better than about 1% accuracy and explains the ripples that we see in the cosmic background radiation, explaining them as a result of quantum fluctuations that took place in the very early universe. The picture leads to three ideas that at least point towards the idea of a multiverse. It certainly doesn't prove that we're living in a multiverse. But the three ideas that point in that direction are, first of all, the statement that almost all inflationary models lead to this feature of eternal inflation, that the exponential expansion of the inflating material, generally speaking, out runs the decay of that material so that the volume grows exponentially forever. Second point is that, in 1998, the astronomers discovered this rather amazing fact that the universe is not slowing down as it expands, but in fact, is accelerating. And that indicates that there has to be some peculiar material in the universe other than what we already knew was here, and that peculiar material is called the dark energy. And we don't have any simple interpretation of what it is, but it seems to most likely be vacuum energy. And if it is, it leads immediately to the important question of can we understand why it has a value that it has. It seems to be much smaller than what we would expect. And then three, the String theorists give us an interesting way out here. The String theorists tell us that maybe there's not unique vacuum to the laws of physics, but maybe there's a huge number, which seems to be in fact what String theory predicts. And if there is, then of the many different vacua you expect there to be, in fact perhaps even a large number, that would have this very small vacuum energy density, a tiny fraction of the total different vacua, but nonetheless a large number of vacua that would have this property. And then this selection effect idea can provide a possible explanation for why we are living in one of those very unusual vacua which has this incredibly tiny vacuum energy density. So finally, I'd just like to close with a little sociological discussion here. Do physicists really take this seriously? And I'd like to tell you about a conversation that took place at a conference a few years ago. Starting with Martin Rees, who I don't know if you know the name or not, but he's an Astronomer Royal of Great Britain, former president of Royal Society, former master of Trinity College as well, a very distinguished person, nice guy, too, by the way. And he said that he is sufficiently confident in the multiverse to bet his dog's life on it. 
Andrei Linde, from Stanford, a real enthusiastic person about the multiverse, one of the founders of inflation as well, said that he's so confident that he would bet his own life on it. Steve Weinberg was not at this conference, but he wrote an article commenting on this discussion later which became known. And I always considered Steve Weinberg the voice of reason, which is why we're reading the First Three Minutes. And he said that I have just enough confidence in the multiverse to bet-- guess what's coming-- the lives of both Andrei Linde and Martin Rees' dog. That's it for the summary, or the overview. Anymore overview type questions before we get back to the beginning, actual beginning of the class? Yes? AUDIENCE: You said-- so selection effect argument says that it's because life exists within these certain constraints, omega being one and low energy larger than it generally is allowed, that life could exist in this way. But we're considering carbon-based life. What if there's some other [INAUDIBLE] life forms out there that gives us different energies and radiation and stuff like that? PROFESSOR: Yeah, what you're pointing toward is certainly a severe weakness of these selection effect arguments, that we really know about carbon-based life, life that's like us, and we can talk about what conditions are needed to make life like us, but maybe there's life that's totally different from us that we don't know anything about that might be able to live under totally different circumstances. That is a real weakness. However, I would argue-- and this is also controversial. Not everybody would agree with what I'm about to say. But I would argue that if we're willing to explain the unusual features of the piece of the universe that we live in by selection effect arguments-- the fact I used, the example is simply that we're living a place where the mass density is 10 to the 30 times larger than the mean. If we're willing to use the anthropic arguments to explain that, then I think all those same issues arise there also. If life was really teem-- if the universe was really teeming with a different kind of life that thrived in vacua, then we'd be much more likely to be one of them, extremely unusual creatures living on the surfaces of planets. So I think it's a possible weakness that one has to keep in mind, but I don't think it should stop us from using those arguments completely. But it is certainly a cause for skepticism. Yes? AUDIENCE: Isn't the point of the selection principle just the fact that exist-- the universe selected for us. Does it matter for the general of just for like carbon-based [? organisms? ?] Is the fact that we exist [INAUDIBLE] that we've been selected for [INAUDIBLE]? PROFESSOR: You're asking about, I think, how peculiar to carbon-based life should we expect these selection effect arguments to be. AUDIENCE: Doesn't the selection affect where [INAUDIBLE]? PROFESSOR: Now that's an important point, and certainly one that's not settled among philosophers, probabilist, physicists, or anybody. What you're asking-- if I'm summarizing it right-- is when we're thinking of the selection effects, should we may be only talk about carbon-based life because, after all, we know that we are carbon-based life. So what does it matter if there's other kinds of life out there? That's one way of looking at it, certainly. Or, maybe we should think about all kinds of life. That's something else that people say. 
The problem I would-- I tend to be, by the way, the kind of person that thinks that all life is relevant, not just carbon-based life. Because we happen to be carbon-based, and we happen to have fingernails that have a certain length, and we happen to have hair that's a certain length or a certain thickness, does that mean we should only think of those things as being relevant when we're thinking about selection effects? And I would say that they're not. If our hair had a different thickness, we would still be able to make measurements and so on. So from my point of view, when one thinks about these issues of selection effects, one should precondition only on the elements that are necessary to ask the question in the first place. And what I would like to think-- and as I point out, this is controversial, not everybody agrees with me-- is that a good theory should be a theory in which you could say that most of the people who ask this particular question will get the answer that we say. If only a tiny fraction of people who ask that question will get that answer, but that same tiny fraction happens to have hair of a certain color and you have hair of that color, to me that's still not an explanation because you don't know why you have hair of that color or why you're living in such an unusual place. OK, that strikes up a lot of conversations. Yes? AUDIENCE: You mentioned last time that the different pocket universes that comprise the multiverse are disconnected from each other though they start out as patches within the preceding vacua. What starts to disconnect them fundamentally from the vacuum in which they formed? PROFESSOR: The question is what is it that separates these different pocket universes. If they start out all in the same space, don't they remain all in the same space? And the answer is they do, but the space they started out in was expanding at a very rapid rate. So in most cases, but not all actually, two pocket universes will form far enough apart from each other that they will never touch each other as they grow because the space in between will expand too fast to ever allow them to meet. However, collisions of pocket universes will occur: if two pocket universes form close enough to each other, the expansion of space in between will not be enough to keep them apart, and they will collide. How frequent one should think of that as being is an incredibly tough question to which nobody knows the answer. There is actually at least one astronomical paper in the literature by a group of astronomers who have looked for possible signs of a collision of bubbles in our past. They did not find anything definitive. But it is something to think about, and it's something people are thinking about. There really are quite a few papers about collisions of bubbles in the literature. Yes? AUDIENCE: How long is long-lived? So if the energy density was too large and too negative, would that still be long-lived if it were to collapse upon itself? PROFESSOR: Talking about the lifetime of these universes that I said would collapse very quickly. How quickly do I mean? AUDIENCE: Like the metastable long-lived. PROFESSOR: I used the word long-lived at least twice in what I've talked about-- I talked about the long-lived metastable vacua. And there, by long-lived I mean anything that's long compared to the age of our universe since the Big Bang. Long means long compared to 10 to the 10 years in that context.
I also said that if the vacuum energy of a universe were large and negative, it would very rapidly collapse. That could be as fast as 10 to the minus 20 seconds. It could be very fast depending on how large the cosmological constant was. Yes? AUDIENCE: So I have read that there's an effect such that if you're vacuum can be seen differently by different observers. For example, inertial-- there's something that I read in effect it says that if one inertial observer sees vacuum, another observer that's accelerating with respect to that observer would see like a number of particles [INAUDIBLE] a warm gas. So how much of the effect we observe are due to the fact that perhaps we believe the universe is accelerating, and we're accelerating perhaps with respect to some vacuum and we're just observing that. That's just a fact of our motion not necessarily the-- PROFESSOR: You're touching on something that is in fact very confusing. What is your name? AUDIENCE: Hani. PROFESSOR: Hani? What Hani said was that he had heard-- and this is correct-- that if one had simply a region of ordinary vacuum-- and I am now going to talk about special [INAUDIBLE] vacuum. You don't even need relativity. You don't general relativity, you just need this. If you had an accelerating observer moving through that vacuum, the accelerating observer would not see something that looked like vacuum, but rather would see particles that in fact would look like they had a finite temperature which you can calculate, a temperature that's determined by the acceleration. So the question is, how much of what we see should we think of as really being there and how much might be caused just by our own motion. And there's not a terribly great answer to that question that I know of except that we-- when these questions come up, we tend to just adopt the philosophy that an observer who's freely moving, which really means moving with the gravitational field, a geodesic observer as the word phrase is sometimes used, essentially defines what you might call reality and then if you calculate what accelerating observers might see in terms of that reality. And we are almost geodesic observers. The Earth is exerting a force on our feet, which violates that a little bit, but by the overall cosmic scale of things where the speed of light is what determines what's significant, we are essentially inertial or geodesic observers. Yes, Aviv? Aviv first and then the one in front. AUDIENCE: So I'm wondering about the philosophical approach to this discussion and why the very-- by the definition, we can't possibly observe another universe. And so maybe we have a theory that makes a lot of great predictions like inflation. But it may also make predictions about multiverse. We can't possibly empirically determine whether that's true or not, so a nonfalsifiable question. And so I feel like [INAUDIBLE] who [INAUDIBLE] essentially never going to be answered. And if we're going to be strict empiricists, should we not be concerned with this question? PROFESSOR: The question is if we could never see another pocket universe, is it even a valid question to discuss whether or not they exist, a valid scientific question. That is also a question which is generally debated in the community, and people have taken both sides. There certainly is a point of view, which I think I tend to take, which is that we never really insist that every aspect of our theories can be tested. 
If you think about any theory, even Newtonian gravity, you can certainly imagine implications of Newtonian gravity that you can calculate that nobody's ever measured. So I think in practice we tend to accept theories when they have made enough measurements that we've tested so that the theory becomes persuasive. And when that happens, I think we should, at the same time, take seriously whatever those words mean, the implications that the theory has for things that cannot be directly tested. As far as the other pocket universes, some people think it's important, and maybe I do too, that even though it's highly unlikely, incredibly unlikely, unbelievably unlikely that we'll ever acquire direct observational evidence for another pocket universe, it's not really in principle impossible because of the fact that pocket universes can, in principle, collide. So we could, in principle, describe with evidence that our universe has had contact with another pocket universe in the past. Yes. AUDIENCE: What determines the stability of a particular vacuum state? Is it simply things with higher vacuum energies are less stable and things with lower vacuum energies are more stable? PROFESSOR: The question is what determines the stability of the different vacua. Is it simply that higher energy ones are more unstable and lower energy ones are more stable or is it more complicated than that? And the answer, as far as I know, is that there is a trend for higher energy ones to be more unstable and lower energy ones to be more stable. But it's not as simple as that. There are also wide variations that are independent of the energy density. AUDIENCE: If the one that we're living in is incredibly is really ridiculously close to zero in a city that seems to make it incredibly unlikely that we would pay anything else I soon PROFESSOR: Right. The question is if our universe has such has such a small energy density relative to the average. Wouldn't that mean that we should also expect to be much more long-lived than average? And the answer is I guess so. But as far as the effect on the Swiss cheese picture that I described for the ultimate future, it doesn't change the words that I used. It just changes how frequent those decays would be. But since the future of this pocket universe, if this picture is right, will be infinite, decays will happen no matter how small the probability is. An infinite number of decays will happen in fact. OK we should probably go on now even if there are more questions. We have a whole term to discuss things like this. The next thing I want to do is handle some housekeeping details. I'd like to arrange office hours. And the problem sets are due on Friday, so what [? Tsingtao ?] and I thought was that a good time for office hours would be on Wednesdays and Thursdays. One of us on each of those days. It turns out that I can't really do Thursdays, so one of us on each of those days ends up meaning that I'll probably be having office hours on Wednesdays. This is all provisional depending on how it works with you folks. And [? Tsingtao ?] will probably be having office hours on Thursdays. Generally speaking, if one wants to have an office hour that most people can come to, I think it should be in the late afternoon. So maybe we'll start by discussing my office hours since it comes before [? Tsingtao's, ?] Wednesday versus Thursday. 
So on Wednesday, I can do an office hour in the late, normal afternoon, which might mean 4:00 to 5:00 I think after five some people have sports activities and things. We're told to try to avoid those hours. So 4:00 to 5:00 would be a reasonable possibility for my office hour on Wednesday. If that doesn't work, I could stay and have the office hour in the evening. That's actually what I did two years ago. I had an office hour from 7:30 to 8:30. It was also Wednesdays-- I forget. But it was in the evening, and that's a possibility. So let me ask if I have my office hour from 4:00 to 5:00 on Wednesdays, how many of you who might be interested in coming would not be able to come? 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12. A significant number, but most of you can come at least. Let me ask the corresponding question for the evening. Suppose I made the office hour from 7:30 to 8:30 in the evening on Wednesdays. In that case, how many of you who might want to come would not be able to come? 1, 2, 3 5, 6, so it's a smaller number, but not vastly smaller. OK I think I'll do it in the evening for the benefit of the difference between those two groups. And the evening also has the little slight advantage that it can be a little more open-ended if people still have questions after the normal time is over. So I will make my office hour on Wednesday's from 7:30 to 8:30. Is that particular hour as good an hour as any on Wednesday evening. Would people want to move it earlier or later? Any suggestions for moving it earlier or later? AUDIENCE: I know people have sports til technically at least 7:00, but if it's 6:30 to 7:30 might be a little-- PROFESSOR: 6:30? You'll be starting at 6-- starting at 6:30 versus-- 6:30 to 7:30, starting at 6:30. Well, I'd be happy to do that, but I suspect we might run into problems with people who have sport activities, but let's see. How many of you would be inconvenienced if I started at 6:30 instead of 7:30? 3, 4, 5, 6, a number. So I think we'll honor that and start at 7:30. I assume 7 is also a bit of a problem for those people. We'll say 7:30. Now, I have to announce that this week is going to unfortunately have to be an exception because I already have plans for Wednesday night. So for this week, I think the best thing-- the only possible thing, probably the best-- it's almost the only possible thing would be 4:00 to 5:00 on Wednesday. Wednesday's bit tomorrow. I'll send you all an email when I find a room for that. I think I'll probably not have it in my office, but maybe it will be in my office. Comment up there? AUDIENCE: Oh, I was just going to ask where, but you-- PROFESSOR: Where? OK, I guess then the fourth option's my office. I was hoping to put a sign of my office if we're someplace other than my office. So should put this on the board too. Tomorrow 4:00 to 5:00 PM. So be at my office or I'll send email. Yes? AUDIENCE: How will we be turning in the Thursday problem set? PROFESSOR: We're going to talk about that now. For Thursday, for [? Tsingtao, ?] I remember you had some constraints. So what was possible? TSINGTAO: Yeah. So I usually leave around 7:00 PM, so I have appointment [? meeting ?], and today is probably not very good at 4:00 PM. PROFESSOR: So 4:00 to 5:00 is a possibility for [? Tsingtao ?] on Thursday, and I guess later than that. But should be over by 5:00-- by 7:00 either or do you want it to be over before then. TSINGTAO: Oh, 6:00 to 7:002 Oh, 6:00 to 7:002 I guess. PROFESSOR: 6:00 to 7:00 would be OK? TSINGTAO: Yeah, that's OK. 
PROFESSOR: OK, so let's start with 4:00 to 5:00. If [? Tsingtao ?] was to have an office hour from 4:00 to 5:00 on Thursdays, how many of you think you might want to go would be unable to? Wow, tons! OK, that seems more than half of you I think. So I guess we try to avoid that. This impact's probably an athletic region, but maybe we'll have to do that for lack of an alternative. Suppose it were 5:00 to 6:00. How many of you who would be interesting in coming-- who might be interesting in coming, I should say I guess because it'll vary from week to week-- but how many of you who think you might be interested in coming would not be able to come from 5:00 to 6:00 on Thursdays. OK, a small group. 1, 2, 3, 4, 5, 6, 7. Looks to me like 7. And let's say, I said 4:00-- That was at 6:00-- that was 5:00 to 6:00. So maybe we should next try 5:30 to 6:30 in smaller increments here. If we're 5:30 to 6:30, how many of you would not be able to come? Looks like pretty much the same people. And if it were 6:00 to 7:00, how many of you would not be able to come? Same people, I think it is literally the same people. OK. So it looks like 4:00 to 5:00 is very bad. And all other times are about equivalent. So I think if all other times are bad equivalently, we probably might as well make it 5:00 to 6:00. And that way [? Tsingtao ?] can get off to an earliest possible start to wherever he's going at 7:00, and it also means a little more flexibility in the end if there are more questions. AUDIENCE: Where is that located? PROFESSOR: That also, I think, will require us to get a room which will be announced. So I will try to arrange rooms tomorrow morning and send it by email, and I guess I'll post it on the website as well. Any other organizational-- and questions limited to organizational questions now? Get back to physics later. Any organizational questions before we start on Doppler shifts? Yes? AUDIENCE: If I can't make a single office hour, how should I field questions when I have questions? PROFESSOR: A good question. Yeah, there may be some people, and apparently there is at least one who cannot make either of these times, even though we tried to optimize things. So by all means, don't feel like you don't have a channel for questions. If you have a question, send an email to either me, or [? Tsingtao, ?] or both. And we'll either answer it together with you or answer you by email depending on what the question is and what seems useful. And that goes for everybody. In that case, if everybody's on board, we will now start the actual material for the term. Well, the overview is an overview of the material for the term, but not at the standard pace and the standard level of detail. So what I want to talk about this week-- and I guess I'll only get to start today and finish on Thursday-- I had planned to tell you everything you need to know for the problem set by today, but that's not going to happen. So I don't-- if people complain, we could consider postponing the due date of the problem set, so consider that an option. But probably you could do the problem set anyway because it is all described in lecture notes. But if any of you have difficulties meeting that deadline, it will be a somewhat flexible deadline this week because of the fact that I'm not covering the material today as I had planned. And I'll admit that's not necessarily a good thing to do in terms of problem set. 
So we're going to begin the course, in principle, by talking about Hubble's law, although Hubble's law will rapidly lead us to the question of the Doppler shift, which is what I'll mainly be talking about for the rest of today and for most of Thursday. Hubble's law itself is a simple equation, v is equal to H r, where v is the recession velocity of any typical galaxy. Hubble's law is not an exact law, so individual galaxies will deviate from Hubble's law. But in principle, Hubble's law tells you what the recession velocity is of a galaxy, at least to reasonable accuracy. Here H is what is often called Hubble's constant. Sometimes, it is called the Hubble parameter. What I actually like is to call it the Hubble expansion rate. The problem with calling it Hubble's constant is that it's not really a constant over the lifetime of the universe. It's a constant over the lifetime of an astronomer, but not a constant over the lifetime of the universe. And we'll be talking about universes, not astronomers, at least for the most part. And even over history, it's not a constant because the estimate of Hubble's constant has actually changed by a factor of about 10 or so since Hubble's original estimate. And the r that appears here is the distance to the galaxy. And if you look at the lecture notes from two years ago, they start out by saying that Hubble's law was discovered by Hubble in 1929. When I looked at that first sentence in my notes, and when I started to revise them for this year, I remembered that I had heard that that statement has become controversial. Almost everything in cosmology is controversial, so even that statement is controversial. There are claims that Lemaitre really deserves credit for Hubble's law rather than Hubble. And there's some validity to that claim. There's also some intrigue in the story, if you want to read about this. It was discovered by several people-- amateur historians, I think, is what they are often referred to as in the press-- that we, we being the Western English-speaking world, know mainly of Lemaitre's work through a 1931 translation of a 1927 paper he wrote about the foundations of cosmology. And it turned out that several significant-seeming paragraphs in the 1927 French article somehow didn't make it to the 1931 English translation, paragraphs about the Hubble constant. And for a while, that seemed like dirty play and there were accusations that Hubble, or friends of Hubble, had suppressed those paragraphs when the article was translated. The truth finally was discovered a couple years ago by a physicist named Mario Livio, who actually was on the Daily Show a couple nights ago by the way. He has a book out now, not about this, but about other things. But anyway, he discovered the answer by going through the archives of the Monthly Notices of the Royal Astronomical Society, which is where the article was published in English. And it turned out it was Lemaitre himself who removed those paragraphs. The paragraphs basically gave a numerical estimate of the Hubble constant, but by 1931 Hubble's paper had already been published, so Lemaitre felt that it was only a less accurate estimate of the same quantity that Hubble had estimated, so he cut it out of his translation. What certainly is true is that Lemaitre knew about Hubble's law on theoretical grounds. Lemaitre was building a model of an expanding universe.
I don't know if he is really the first person to know that an expanding universe model gave rise to a linear relationship between velocity and distance, but he certainly did know about it and understood Hubble's law, and gave an estimate of it based on data. What he did not do, however, is try to use data to actually show that there was a linear relationship. What Lemaitre did, in those paragraphs that were not translated, was simply to look at a large group of galaxies, figure out an average value for v and an average value for r, and determine h by dividing those two averages. And he admitted that there was not really good enough data to tell if the relationship is linear or not. I think it is definitely fair to say that Hubble is the person who deserves credit for arguing-- first with a fairly weak argument, but one that got stronger over time-- that there really is astronomical evidence for this linear relationship between velocity and distance. So probably it will continue to be called Hubble's law. If you look in Wikipedia, it tells you either one is acceptable at the moment, but Wikipedia articles change rapidly, so we'll see what it says next year. I should also mention that we should probably root for Lemaitre, since Lemaitre, it turns out-- well, he was a Belgian priest, as he is often described, but he was also an MIT student, with a Ph.D. from MIT, which he received in 1927. You can actually read his thesis. When I was writing my [INAUDIBLE] book, I remember going to the MIT archives and actually picking up his thesis and reading it. It's not that well-written, actually, but it's interesting. Although he got his Ph.D. from MIT, it also turns out that he did most of his work down Mass Ave at the Harvard College Observatory, but the Harvard College Observatory did not give degrees in those days. It was just an observatory. So he wanted to get a degree, so he signed up at MIT for the Ph.D. program, wrote a thesis, and received a Ph.D. Onward: what I really want to talk about, after mentioning Hubble's law-- so Hubble's law is an indication that the universe is expanding. And we'll talk more about the history of all this later, and it actually is very well described in Steve Weinberg's book. But initially, Einstein proposed a model of the universe that was static, and it was really Hubble who convinced Einstein that observationally the universe does not appear to be static, but does appear instead to obey this expansion law. So that gave rise to the theory of the expanding universe. But what I want to talk about today is how one measures the v that appears here. There's also a big discussion about how one measures r, the distance. And that is, I think, rather well done in Steve Weinberg's book, and I'm going to pretty much leave it to your reading of Steve Weinberg's book to learn about how distances to distant galaxies are estimated. Roughly speaking, I might just say that they are estimated by finding objects in those distant galaxies whose brightnesses you think you know, by one means or another. And it's a complicated story what objects there are whose brightnesses we think we know. But once you find an object whose brightness you think you know-- those go by the general name of standard candles, a standard candle being an object whose brightness you know-- then you can tell how far away the object is by how bright it appears. And that becomes a very straightforward way of estimating distances, and that is the only way we really have of estimating distances of distant galaxies.
So it's a much longer story than what I just said, and you'll read about it in Weinberg's book. The velocity is measured by the Doppler shift, and that's what lecture notes one are mainly about, and that's what I'll be talking about for the remaining few minutes of today's class. And what we want to do in the course of this set of lecture notes, this week of class I guess it will be, is understand how to calculate the Doppler shift both non-relativistically and relativistically, and we'll just work out the primary cases of observer stationary, source moving, and source stationary, observer moving, and all in a line, for both the relativistic and non-relativistic cases. So I think I'll launch into the first calculation, which we might even have time to finish. I'd like to consider a case where the observer is stationary and the source is moving, which is normally how we think of the distant galaxies. We work in our own reference frame, so we're stationary, the galaxy is moving. How do we calculate this redshift? I should say at the outset here, however-- I don't know if I said it in the lecture notes-- that the cosmological redshift is actually a little bit different from what we're calculating this week. This week, we're calculating the special relativity redshift. But cosmology is not controlled by special relativity, because special relativity does not describe gravity, and gravity plays a major role in cosmology. So the cosmological redshift we will talk about a little later in the course, in a more precise way. But for now, we, like Hubble-- Hubble didn't know any better-- are ignoring gravity, which is OK for the nearby galaxies, and the further away they are, the more important these gravitational influences are. And ignoring gravity, one could just use special relativity or even Newtonian kinematics to calculate the relationship between v and the redshift. And that's what we'll be talking about. So the first problem that we want to talk about-- and I guess I'll just set it up, and that's as far as we'll get-- will be a problem where there's a source of radiation, which is moving to the right in our diagram with a velocity, v, and an observer who is stationary. Now of course, all of these are frame-dependent statements, but we're working in a frame where the observer is stationary. And we're also going to assume, for the non-relativistic case-- we'll be talking about sound waves-- that the air is stationary in this frame. So the frame of the blackboard is not only the frame of the observer, but it's also the frame of the air when we're talking about the non-relativistic sound wave calculation. So to define our notation, we're going to let u be equal to the velocity of the sound wave. And that would normally be measured relative to the air, but the air will be at rest in this picture, so u will be the velocity of the sound wave relative to the diagram. v is the velocity of the source, already shown. And we'll be interested in two time periods: delta t sub s, where s stands for source, which will be the period of the wave at the source, which is the same as talking about the period of the wave as it would be measured by the source. And delta t sub O-- that's supposed to be a capital O, not a zero. It is the period of the wave at the observer, or as observed. And the important point, which is maybe obvious qualitatively, is that these two times, or time intervals, will not be equal to each other.
And the reason, basically, is that because the source is moving-- and I've defined positive v the way astronomers would, as moving away from us-- because the source is moving away from us, each successive wave that goes from the source to us has to travel a little bit further. And that means that each wave crest is slightly delayed from when it would have gotten here if everything were stationary. And if you delay each wave crest, it means the time between crests is larger. And that means that we expect here that delta t sub O will be larger than delta t sub s, because of this extra distance that each wave crest has to travel. And what we'll be doing next time-- I think I will just leave the calculations for next time-- is calculating that. And then doing the same thing for the case where the observer is moving and the source is stationary, and then talking a little bit about special relativity, and then repeating both calculations in the special relativity situation, where we'll be talking about light rays and velocities that might be comparable to the speed of light. So see you folks on Thursday, but maybe I'll see some of you at my office hour tomorrow. And I will send an email about where exactly that office hour will take place.
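For orientation, here is a brief sketch of where that calculation ends up; it anticipates the result derived in the next lecture, so treat it as a preview rather than the lecture's own derivation. Successive crests are emitted a time Delta t sub S apart, and each one starts from a point v Delta t sub S farther away, so it needs an extra travel time of v Delta t sub S over u:

```latex
% Non-relativistic Doppler shift, source receding at speed v, wave speed u:
\Delta t_O \;=\; \Delta t_S + \frac{v\,\Delta t_S}{u}
           \;=\; \left(1 + \frac{v}{u}\right)\Delta t_S,
\qquad
z \;\equiv\; \frac{\Delta t_O - \Delta t_S}{\Delta t_S} \;=\; \frac{v}{u}.
```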
MIT_8286_The_Early_Universe_Fall_2013
17_BlackBody_Radiation_and_the_Early_History_of_the_Universe_Part_III.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: OK. In that case, let's get started, first with our review of what we talked about last time. We began by talking about the dynamics of a flat radiation-dominated universe, which is very straightforward. We start with Friedmann's equation for a flat universe. For radiation, rho is proportional to 1 over a to the fourth. So that gets us a dot over a squared is equal to a constant over a to the fourth. Rearranging that, we can write it as a times da is equal to the square root of that constant times dt. And then we can just integrate both sides. So you get one half a squared is equal to the square root of the constant times t, plus a new constant, constant prime. OK, then we say that we can adjust this constant prime by resetting our clocks. And we haven't said anything yet that determines how our clocks are set. So the standard convention is to set your clocks so that t equals 0 is the time when the scale factor is zero. And that corresponds to constant prime equals zero, as one can see by just looking at that equation. So that's what we do, and the constant prime term disappears. And then we don't care about the constant of proportionality anyway. For a flat universe the constant of proportionality is completely meaningless. It just tells you how many meters per notch you're dealing with, so it defines your notch, but otherwise has no physical meaning whatever. So the important bottom line is that a of t is proportional to the square root of t for a flat radiation-dominated universe. And once one knows that, one can easily get quite a few other things. OK. As I was just saying, once you know how a of t behaves you can immediately calculate lots of other things. And in particular, h is a dot over a, and that is just 1 over 2t. So we know what h is as a function of time, even without putting in any more details about what kind of radiation we have. It doesn't matter. You still get h is equal to 1 over 2t. The horizon distance is given by the formula a of t times the integral from 0 to t of c over a of t prime, dt prime; the integral represents the total co-moving distance that light could travel between time 0 and time t. And then one multiplies that by the scale factor at time t to turn it into a physical distance, and that then becomes the horizon distance at time t. And that is two times ct. You might remember it was three times ct for the matter-dominated case. And then finally you even know exactly what the mass density is as a function of time. Because h squared is equal to 8 pi G over 3 times rho. And we know what h is. So that tells me what rho is also. We actually know the mass density as a function of time, independent of anything else. OK. Then we began talking about black-body radiation, which is basically a gas of massless particles at a temperature t-- that is, a gas of massless particles in thermal equilibrium. And it turns out that the temperature t determines almost everything. So the energy density of black-body radiation, u, which is the same as rho, the mass density, times c squared, is equal to a kind of a fudge factor g, times pi squared over 30, times kt to the fourth, over h bar c cubed. And this fudge factor g is equal to 2 for photons.
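As a quick numerical check of the energy density formula u = g (pi squared over 30) (kT) to the fourth over (h bar c) cubed, with g = 2 for photons, here is a minimal sketch; the present photon temperature of about 2.725 K is an assumed input, since no number is quoted at this point in the lecture.

```python
import math

# Physical constants in SI units
k_B  = 1.380649e-23    # Boltzmann constant, J/K
hbar = 1.054572e-34    # reduced Planck constant, J*s
c    = 2.99792458e8    # speed of light, m/s

def blackbody_energy_density(T, g):
    """u = g * (pi^2/30) * (kT)^4 / (hbar*c)^3, in J/m^3."""
    return g * (math.pi**2 / 30) * (k_B * T)**4 / (hbar * c)**3

# Photons today, assuming T ~ 2.725 K: roughly 4.2e-14 J/m^3
print(blackbody_energy_density(2.725, g=2))
```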
And the reason we gave it a letter instead of just writing a two in the formula in the first place is that we'll soon be talking about black-body radiation of other kinds of particles, and g will be different for different kinds of particles. The two for photons is given the number two because there are two spin states of photons. Photons are massless spin-one particles, and massless particles only have the maximum spin projections-- they don't have spins in the middle-- so the spin along one axis for a photon is either plus or minus 1, which is two spin states. That's the quantum mechanical description. It corresponds completely to the classical description, that you can separate any light wave into a left-circularly polarized and a right-circularly polarized part. Or equivalently, you could separate it into an x-polarized and a y-polarized part. And you can get x-polarization by superimposing left and right, so these are not alternative polarizations. They're just two different ways of describing the general polarization. The general polarization is a linear combination of either left and right, or a linear combination of x and y. In any case, the basis has two basis elements, and that's where the two comes from. Next item on our list is the pressure, which could be calculated from the statistical mechanics, and which we've already calculated by other means. But the answer, no matter how you calculate it, is p equals 1/3 u. Next we talked about the number density. And these are all calculations that we didn't really do. I'm just quoting standard calculations from statistical mechanics. And you can learn how to do them, presumably, in some statistical mechanics class. The number density involves a different constant, g star-- in general it's different; for photons it ends up being the same, but in general it's a different constant-- g star times zeta of 3, where zeta of 3 is the Riemann zeta function evaluated at argument three. And that's equal to 1 over 1 cubed, plus 1 over 2 cubed, plus 1 over 3 cubed, dot dot dot. And it adds up to 1.202 to three decimal places. And then the rest: divided by pi squared, then times kt cubed, over h bar c cubed. So the number density goes like the cube of the temperature, while the energy density went like the fourth power of the temperature. And the g star that appears here, as I mentioned already I think, is 2 for photons. And we'll learn more later about how to apply this to other kinds of particles. And finally the entropy density is the same g as we had for energy densities, times 2 pi squared over 45, times k to the fourth t cubed-- again it goes like t cubed-- divided by h bar c cubed. And entropy is a somewhat subtle concept. Fortunately for our purposes we will not need to know much at all about what entropy actually is. I might just say, for some sense of completeness, that entropy is a measure of, quote, the disorder of a state. And this "disorder" means some measure of the number of different microscopic quantum states that contribute to a given macroscopic classical description. The important thing for us is, well, first of all, that the second law of thermodynamics tells us that entropy never decreases, but we're going to make use of a much stronger statement, which holds very well for the early universe, which is that if the system stays close to thermal equilibrium, then the entropy doesn't change. And in the early universe, I think for every process that we're going to discuss, that condition holds. The system stays close to thermal equilibrium and entropy is conserved.
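The number-density and entropy-density formulas can be checked the same way; again the present photon temperature of roughly 2.725 K is an assumed input.

```python
import math

k_B, hbar, c = 1.380649e-23, 1.054572e-34, 2.99792458e8
zeta3 = 1.2020569                      # Riemann zeta(3)

def number_density(T, g_star):
    """n = g* * zeta(3)/pi^2 * (kT)^3 / (hbar*c)^3, particles per m^3."""
    return g_star * zeta3 / math.pi**2 * (k_B * T)**3 / (hbar * c)**3

def entropy_density(T, g):
    """s = g * (2 pi^2/45) * k^4 T^3 / (hbar*c)^3, in J/(K m^3)."""
    return g * (2 * math.pi**2 / 45) * k_B**4 * T**3 / (hbar * c)**3

T = 2.725                                   # assumed photon temperature today, K
print(number_density(T, g_star=2) / 1e6)    # ~411 photons per cubic centimeter
print(entropy_density(T, g=2))              # ~2.0e-14 J/(K m^3)
```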
The one exception will be inflation, which we'll learn about near the end of the course. At the end of inflation there's a humongous, entropy-producing phase transition. And if inflation is right, essentially all of the entropy that we have in the universe today was produced during that phase transition. Before that there was only negligible entropy. OK. That's it for my summary. Any questions? Yes? AUDIENCE: I know it's a law that entropy only increases. Is there anything behind that, or is it just a law of physics? PROFESSOR: Yeah, that's a subtle question, which I think if you ask 10 different physicists you'll likely get 10 different answers. But one thing that's for sure is that it does not follow as a consequence from the other laws of physics that we know. The other laws of physics that we know are essentially time-reversal invariant. So why entropy always increases is something of a mystery. The usual story is something like: the early universe, for reasons unknown, started in a state of peculiarly low entropy. And because it started low, it's approaching the equilibrium value, which is much larger. So that's a possible explanation. I actually am in the process of writing a paper about the growth of entropy and the arrow of time, and maybe I'll get a chance to tell you about that sometime before the end of the course, but I won't try to explain it now. But it's a slightly different idea about what could explain it. But it's something of a mystery. Nobody really knows what determines the arrow of time, why entropy only goes one way. Any other questions? OK. What I want to do now is to continue talking about black-body radiation. I think we've said everything that we wanted to say about photons, but now we want to apply it to other kinds of particles. And we're going to begin with neutrinos, because neutrinos certainly do account for a significant fraction of the entropy in the universe, even today. Neutrinos were, until around 1990 or so, thought to be massless. Now we know that they in fact have a small mass. I think I mentioned last time that we have never measured the mass of a neutrino, and largely for that reason, we don't actually know what the masses are. But what we have measured is something that's quantum mechanical and rather peculiar, which is that one type of neutrino-- and neutrinos come in three types, or flavors, and that is nu sub e, nu sub mu, and nu sub tau-- and what we've seen is that neutrinos of one flavor can turn into, just by traveling through space, a neutrino of another flavor. And that can only happen if there's a non-zero mass. And what in particular it measures is delta m squared between the neutrinos. And I think I even wrote the formulas on the blackboard for the known limits of that. So I won't write it again. But neutrino oscillation, which is what we call this, the conversion of one kind of neutrino to another, implies that delta m squared is not equal to zero. So it's still, in principle, possible that one of these neutrinos could be massless, but they can't all be massless. If one is massless, the other two have mass. And the delta m squareds are very small. So these would be smaller masses than any other particles that we know of in the universe. Now for purposes of cosmology it turns out that you can get by by thinking of the neutrinos as being massless. And we'll start by discussing it that way. It's not quite as trivial as just saying that the mass is a small number, so that if you have a formula that involves the mass, you can usually neglect it.
It's a little bit more subtle, because whether the neutrinos are massless or massive affects the number of spin states. And we're going to even treat the number of spin states as if the neutrino is massless, which obviously is a bit of a cheat and needs further explanation. But I'm going to do it first for massless neutrinos, and we'll come back later to discuss why you can get the same answers if the neutrino has a mass. So we're going to start out imagining that the nu's are massless. And in that approximation there is only one spin state for a neutrino, and it's left-handed. And that means that the spin points in the opposite direction from the momentum. So if the neutrino is spinning like this, which with my right hand corresponds to a spin in that direction, the momentum will always be in that direction. And I can do the same thing this way. If it's spinning like this, the spin is that way, the momentum would be that way. The momentum's always in the opposite direction of the spin. Now you might realize that that leads to a question about Lorentz invariance. If the spin and the momentum point in opposite directions in some frame, it's not obvious that they would also point in opposite directions in some other frame. But it turns out to be true. One can verify that, which we're not going to do, but it is a Lorentz-invariant statement, as long as the particle's massless. If the particle's not massless, it is clearly not a Lorentz-invariant statement. And that's easy to see. If the particle were not massless-- say it was spinning this way, so the spin is that way, and the momentum is that way-- if that were the situation and the particle had a mass, then I could change Lorentz frames by jumping into a rocket ship and shooting off in this direction, the same direction the particle is moving. And since the particle is moving with some finite velocity-- if it has a mass, it has to be moving slower than light-- in principle the spaceship can overtake it. And when the spaceship overtakes it, from the point of view of the spaceship, the momentum will now be pointing that way, but the spin doesn't change when the spaceship overtakes it. So this relationship between spin and momentum is manifestly not Lorentz invariant when the particle has a mass, which is why we are cheating in a somewhat big way here, by saying that we're going to treat the neutrinos as massless. We're going to even be ignoring this fact, that what we're saying about the spin and momentum is not even Lorentz invariant if the particle has a mass. And we'll make better excuses for that later, but for now we'll see the simple picture first, and then talk about the more complicated picture later. Anti neutrinos, which I'll write with a bar over the nu, also have one spin state. And as you might guess, since anti particles are kind of the opposite of particles, that is right-handed. Now in addition, neutrinos differ from photons in that neutrinos belong to a class of particles called fermions. Photons are bosons. How many of you already know about [INAUDIBLE] fermion or boson? OK. Most, but not quite all. Fair enough. So fermions are particles that obey the Fermi exclusion principle, which you very likely learned about in a chemistry class someplace. It says that no two particles can be in exactly the same quantum state. Bosons do not obey such a principle. In fact, bosons are even more likely to be found in the same quantum state. And that's the important difference.
In quantum field theory, and only in quantum field theory, you can't do this in just non relativistic quantum mechanics, but in quantum field theory you can prove something called the spin statistics theorem, which says that particles with half integer spins, that is 1/2, 3/2, et cetera, are necessarily fermions. And particles with integer spins are necessarily bosons. So we know whether a particle is a fermion or a Boson as soon as we know its spin. And in the case of neutrino, it's spin 1/2. So nu's have spin one half. And that makes them fermions. Now the statistical mechanics formulas that we just went over come about from counting states. Basically the underlying principle is that the system is likely to be in any state that you could imagine with the probability of e to the minus the energy of that state, divided by kt all in the exponent. So it's state counting that determines these formulas. And that means it's going to be different for fermions and bosons because for bosons you're going to count situations where there are many particles in the same state. And for fermions you're not. And in particular that means you're going to be counting more states for the bosons and to be counting fewer states for the fermions. So you would expect these constants g and g star to be smaller for fermions than they are for those bosons. And that indeed is true. So for fermions g gets an extra factor that is multiplied by 7/8s. And g star is multiplied by 3/4. So I think we could have predicted that these g's would get factors that are less than 1, simply because we're counting fewer states. I think we can also predict without doing any calculations that g should have a bigger factor than g star. Remember g star is the factor that appears in the number density. So if we want to calculate the average energy per particle at a given temperature, we would take a formula which has a g in it for the energy density, and divide it by a formula that has a g star in it for the number density. The energy density divided by the number density is just the energy per particle. And this tells us that we'll get a number that's bigger than 1 for the fermions. So for fermions, there's slightly more energy per particle than there is for bosons. And I think that's easily believable because fermions obey this exclusion principle. It means once you put one particle in the lowest energy state, you can't put any more there. You have to put them in higher energy states. So the expectation would be that you have more energy per particle with the fermions and that is indeed what the [INAUDIBLE] calculation indicates. So now we're ready to write down a formula for g, for the neutrinos. And this will be the overall g that occurs in the formula for the energy density and the entropy density, and the pressure for neutrinos. And it will, we'll write it out as a string of factors. We'll first of all have this factor 7/8 coming from the fact that they're fermions. Then we'll have a factor of three because there are three flavors. And we're just going to add them together here. So we get a factor of three. Then we get a factor of two because there's particle anti particle pairs. That is, we have to count both neutrinos and anti neutrinos, and the total energy density. For photons, a photon is the same as an anti photon, so you don't get an extra particle connected with the anti particle. But for neutrinos you do, so that gives you a factor of two here. 
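To put a number on that last claim, divide the energy-density formula by the number-density formula: the mean energy per particle is (g over g star) times pi to the fourth over 30 zeta(3), in units of kT, so only the ratio of prefactors matters. A minimal check:

```python
import math

zeta3 = 1.2020569

def mean_energy_in_units_of_kT(g_over_gstar):
    """Mean energy per particle divided by kT: (g/g*) * pi^4 / (30 * zeta(3))."""
    return g_over_gstar * math.pi**4 / (30 * zeta3)

print(mean_energy_in_units_of_kT(1.0))             # bosons (photons): ~2.70 kT
print(mean_energy_in_units_of_kT((7/8) / (3/4)))   # fermions: ~3.15 kT
```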
And then you always have the number of spin states, but in this case the number of spin states is one; I'll write it just so we can keep track of the general pattern. And when you do that multiplication, you get 21/4. So g sub nu for all three neutrinos put together is 21/4. And you could put this into all of our formulas and get the energy density, pressure, and entropy density for neutrinos. Then we're also interested in g star, which will give us the number density. So, g star sub nu. And it differs only by the first factor: 3/4 for the fermion nature of the particle. And the rest is just dittos. And what you get in the end is 9/2. OK, now this does not end our discussion of black-body radiation in the early universe. Because as we go back in time to earlier times, we come to a time when kT, the mean thermal energy, is large compared to the rest energy of an electron, m sub e c squared. So at these very high temperatures even e plus, e minus pairs contribute to the black-body radiation. So we'd like to write down a g for e plus, e minus pairs. Actually, maybe I shouldn't call them pairs, because they really are contributing individually, but g for e plus, e minus. Yes? AUDIENCE: So I know you said that we're assuming, even though it's a spin 1/2 particle, that there's just one spin state. I was wondering, is it possible for a spin 1/2 particle to be massless? I can't think of any offhand, because don't they have to have an m equals negative 1/2 state? [INAUDIBLE] in general. PROFESSOR: Right. No, it is perfectly consistent-- would be perfectly consistent-- for neutrinos to be massless. And then they would only have an m equals 1/2 state, in spite of the fact that they're spin 1/2. And the m equals minus 1/2 state would just be missing, but that's OK. It's similar to what happens with photons. Photons are spin one particles. And the m equals 0 state is missing. And that can only happen with massless particles. For massive particles, by basically the argument that I gave about catching up to the neutrino, it's not possible to have one helicity and not have the other. But for massless particles it is possible. The photon does it. We used to think the neutrino did it. It turned out the neutrino doesn't do it, but we can still describe the old theory, which is simpler than the new theory. And that's what I'm doing. AUDIENCE: Is there any reason they're missing those? PROFESSOR: OK. The question is, is there any reason why they're missing those spin states. The answer basically is that there's no solid reason, which is why we didn't know if it was going to be missing the states or not. But for the case of massless particles only, the m states are all completely disconnected from each other under Lorentz transformations. For any non-zero mass, all the spin states mix. And because they all mix under Lorentz transformations, you can't have any one without having all of them. But that statement disappears when the particle becomes massless. So when the particle becomes massless, essentially, each spin state is its own kind of particle. And it might be part of a spectrum of real particles, or might not. And we don't know any fundamental principle that answers that question, other than experiment. OK. So I was just going to write down a g for e plus, e minus pairs. Remember, this g determines the energy density, the pressure, and the entropy density. And it again has a factor of 7/8 because these are fermions. It has a factor of one-- to sort of follow the general pattern-- because there's only one species of electron.
For neutrinos we had three, but for electrons we just have one. We do get a factor of two for particles, anti particles, because an e plus is different from an e minus. And there are two spin states for the electron. It could be spin up or spin down-- there's spin equals 1/2, and there's spin equals minus 1/2. So there are two spin states. And that gives us a factor of 7/2 for g. And for g star e plus, e minus, the only difference, again, is in the first factor, which is 3/4 for fermions for g star. And then times dot, dot, dot. And that then is equal to just a nice round factor of three. And then if we go back to this earlier time, we have not only the e plus, e minus pairs, which started to exist when we crossed this threshold, we also have photons and neutrinos. So for kT large compared to m sub e c squared-- and I should say small compared to the next threshold, m sub mu c squared, the mass of a muon. The mass of the muon is 106 MeV. The mass of the electron is half an MeV. So there's a good long range here where this is the right number. So g total is equal to 2 plus 21/4 plus 7/2, which equals 10 and 3/4. So this number, 10 and 3/4, plays an important role in a long segment of the history of our universe. Another number that you might be interested in is the energy density of radiation in the universe today. And the e plus, e minus pairs are of course long since frozen out, because kT is a lot less than half of an MeV. So we just have neutrinos and photons. You might think that we could just add the formula for photons to the formula for neutrinos, but that turns out not to be right. And the reason it's not right is that we believe that the temperature of the neutrinos today is not the same as the temperature of the photons. And this is a problem that you'll be working out on the next homework set, so I won't give all the details here, because I want you to have the fun of working out those details. But I'll tell you the outline of what it is. The transition occurs when kT crosses this magic threshold of m sub e c squared, which means that the electron positron pairs are going to start to disappear from the thermal equilibrium mix. Now those electron positron pairs have both energy and entropy. It turns out that the energy is very hard to keep track of. And the reason why that is true is that we know that du dt is equal to minus 3h times the quantity u plus p-- the important point being that it involves the pressure as well as the energy density. And keeping track of what the pressure is doing as a function of time is complicated, because it depends on exactly what the mix of particles is. And the pressure is even given by a more complicated formula when you're very near a threshold-- that is, when kT is near the mass of a particle, there's a more complicated formula for the pressure. So the pressure is hard to keep track of. But the entropy turns out to be easy to keep track of, because entropy is simply conserved during this process. So on the problem that you'll be doing for the next problem set, you'll be looking at the entropy contained in these e plus, e minus pairs. And then there's an important assumption, which is valid.
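A tiny bookkeeping check of the g factors just assembled, following the pattern of fermion factor, times number of species, times particle/antiparticle factor, times number of spin states:

```python
from fractions import Fraction as F

def g_factor(fermion_factor, species, particle_antiparticle, spin_states):
    return fermion_factor * species * particle_antiparticle * spin_states

g_photon  = F(2)                          # bosons, 2 spin states
g_nu      = g_factor(F(7, 8), 3, 2, 1)    # 21/4
gstar_nu  = g_factor(F(3, 4), 3, 2, 1)    # 9/2
g_ee      = g_factor(F(7, 8), 1, 2, 2)    # 7/2
gstar_ee  = g_factor(F(3, 4), 1, 2, 2)    # 3

print(g_nu, gstar_nu, g_ee, gstar_ee)     # 21/4  9/2  7/2  3
print(g_photon + g_nu + g_ee)             # 43/4, i.e. 10 and 3/4
```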
We will not really try to justify it, but at the time when the e plus, e minus pairs go out of equilibrium-- when they disappear from the thermal equilibrium mix as kT falls below m sub e c squared-- at that point the photons are still interacting strongly with everything else, and with each other. But the neutrinos have pretty much decoupled. They're not really interacting with anything anymore. So when the e plus, e minus pairs disappear, they give essentially all of their entropy to the photons, and essentially none of their entropy to the neutrinos. And that means that you can calculate the entropy density of the photons and the entropy density of the neutrinos, and you can calculate from that the relative temperatures of the two. And the net effect is that the photons end up with a higher temperature than the neutrinos. So cosmology makes a clear prediction here. The universe today should be bathed in a thermal distribution of neutrinos. The temperature ends up being about 1.9 degrees Kelvin, a little colder than the photons. And it's really the great challenge of observational cosmology to try to measure those thermal neutrinos. Because neutrinos interact so weakly, nobody has come close to measuring the existence of those neutrinos. Everybody thinks they must be there; if anybody discovers they're not there, it'll be a major shift in our understanding of cosmology. But nobody really knows how to measure them. So you guys could try to do that sometime during your careers as physicists. So putting these things together, what you'll show is that the temperature of the neutrinos should be equal to 4/11 to the 1/3 power times the temperature of the photons. And given that, we can write down a formula for the energy density in radiation today. So u sub rad, 0-- for today-- is equal to g for photons, plus 21/4, which is g for the neutrinos, but then we have to correct for the fact that the neutrinos are at the lower temperature-- and remember, energy densities go like the fourth power of the temperature. So there's a correction here, which is the fourth power of the temperature ratio: the 21/4 multiplies 4/11 to the 4/3 power. And then the rest of the formula for energy density: pi squared over 30, times k times the temperature of the photons, all to the fourth power, divided by h bar c cubed. And if you plug numbers into this, you get a number which I wrote on the blackboard when we started talking about radiation, for the radiation density of today's universe. It's 7.01 times 10 to the minus 14 joules per meter cubed. So now we know how to derive this formula that I pulled out of a hat when we first started talking about the energy density of radiation. You might remember we used this to calculate when radiation-matter equality took place. OK. Any questions about this? OK. Well, in that case, now I'm ready to come back and talk about neutrino masses more realistically. OK. Neutrinos have a mass, we now know-- or at least two out of the three do. And still I argued, or told you, that the mass didn't matter for the calculations we just did. And that seems a little strange, because the calculation we just did depended on counting the numbers of spin states, and the number of spin states, of course, changes if the particles have a mass. Neutrinos could not have just one spin state if they had a mass. So the question then is, what happens to these right-handed neutrinos? Remember, the neutrinos we know and love are left-handed. OK. Nobody has ever seen a right-handed neutrino.
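Putting those pieces together numerically-- again with the present photon temperature of about 2.725 K as an assumed input-- reproduces both the neutrino temperature and the 7.01 times 10 to the minus 14 joules per meter cubed figure to the quoted precision:

```python
import math

k_B, hbar, c = 1.380649e-23, 1.054572e-34, 2.99792458e8

T_gamma = 2.725                             # assumed photon temperature today, K
T_nu = (4 / 11)**(1 / 3) * T_gamma
print(T_nu)                                 # ~1.95 K

g_eff = 2 + (21 / 4) * (4 / 11)**(4 / 3)    # photons plus colder neutrinos
u_rad = g_eff * math.pi**2 / 30 * (k_B * T_gamma)**4 / (hbar * c)**3
print(u_rad)                                # ~7.0e-14 J/m^3
```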
But if the neutrino has a mass, they must exist. And one way to see it is that argument I gave you about catching up to the neutrino and going faster than it. And then a left-handed neutrino would turn into a right-handed neutrino-- and all you did was change frames, and the world is Lorentz invariant. It means that right-handed neutrinos must in principle exist in any frame. OK. It turns out that there are two theories of how the neutrino mass works. And we don't know yet which of these theories is correct. And they are called the possibility of a Dirac mass or the possibility of a Majorana mass. And these are both named after people, so don't try to parse anything about the physical meaning of those words from the names. The Dirac mass is the easier one for us to understand, because it's the same kind of mass that an electron has. So if the neutrinos have a Dirac mass, it really would mean that there are right-handed neutrinos, which are a new species of particle that we haven't seen yet, but which would be implied by the existence of this mass. The catch is that, because of the very peculiar way that the neutrinos interact, it is possible for the right-handed neutrinos to interact vastly more weakly than the left-handed neutrinos. And that would be the explanation for why we've never seen them-- because they interact so extraordinarily weakly. And even in the early universe, where everything almost reaches thermal equilibrium, the cross-section for producing right-handed neutrinos would be so small that they would essentially never be produced. So right-handed nu's interact so weakly that they're not seen in the lab, or in the early universe. OK. The other type is more peculiar. And there are no other particles in nature that are known to have this type of mass, but it's a theoretical possibility. Majorana masses can only be possessed by particles which are neutral, and neutrinos are neutral. And if neutrinos have a Majorana mass, it would mean that the right-handed partner of the left-handed neutrinos that we see would in fact be a particle that we already know about, but it would be the particle that we have previously identified as the right-handed anti neutrino. So if the Majorana hypothesis is true, the anti neutrino and the neutrino would be the same particle. And one would be the right-handed version and the other would be the left-handed version. And that is also a possibility-- we just don't know. And in this case, clearly, what we said about the early universe still works, because we did the counting right. If it's the Dirac possibility, we would have to make a further argument, to justify the fact that these right-handed partners interact so weakly that we would never see them, even in the early universe-- but that is the case. And for the Majorana case, there clearly is no real difference from the calculations that we did. Any questions about that? Yes? AUDIENCE: What exactly do you mean when you say the Dirac mass is like the electron's? PROFESSOR: Right. What I mean is that it appears in the equations of motion in the same way that the electron mass appears, and therefore, like the electron, there's a right-handed electron and there's also a left-handed electron, which are just related to each other by a parity transformation. Yes? AUDIENCE: Is it not a problem in the Majorana case [INAUDIBLE] particle? Doesn't there have to be an [INAUDIBLE]? PROFESSOR: OK.
The question is, doesn't there have to be an anti particle-- how can the neutrino and the anti neutrino be the same particle? The answer is no. The photon does not have an anti particle. So particles that have a charge of any kind must have an anti particle, but if the particle is really neutral, then it can just be its own anti particle, and that would be the case here. Yes? AUDIENCE: Sorry, I have two questions. One is, what is the difference between a neutrino and an anti neutrino, besides-- I mean, how do we know they're the same, that they are the same particle [INAUDIBLE]. It's not just a matter of semantics? If the only difference that we can see is [INAUDIBLE]. Whether we call it the anti neutrino or say that it's the same particle, just [INAUDIBLE]. PROFESSOR: Right. OK. You're saying, how do we know whether the anti neutrino is the same particle as the neutrino, and what do we mean by that statement anyway. I guess the answer is really something that comes out of the fundamental equations in the context of the quantum field theory. But maybe I can say something more concrete, however. There's a quantum number called lepton number, and it actually divides into three kinds of quantum numbers, so there's an electron number, a muon number, and a tau number. And for all the interactions that we've seen, that number is conserved-- that is, the sum of the number of electrons, minus the number of anti electrons, plus the number of electron neutrinos, minus the number of anti electron neutrinos, is conserved. And if that were a rigorous conservation law, then it would really mean that the neutrino was not neutral. It would have this nonzero charge, the electron number, for the electron neutrino. So the Majorana possibility requires that that quantity is not really conserved. So that's an important fact about nature that we haven't really learned yet, whether or not it's exactly conserved or only highly approximately conserved. The belief is that it's only highly approximately conserved, but we don't know. Yes? AUDIENCE: Does a neutrino [INAUDIBLE]? PROFESSOR: Yeah. You are right. You're very quick. You are right that neutrino oscillations are certainly enough to prove that the individual lepton numbers are not conserved, but they still leave the possibility that total lepton number is conserved-- which we don't think is true, but I don't think we've ruled it out yet either. Any other questions? Yes? AUDIENCE: In the Majorana case, if they are the same particle, do they still come in left and right-handed forms, so as to keep the [INAUDIBLE] the same value, or do they only come in the left-handed phase, in which case the [INAUDIBLE] would be decreased [INAUDIBLE]? PROFESSOR: Well, the counting that we had before would be correct. And the only question is whether the factor of two that we put here is for particle, anti particle, or whether the particle's really neutral and the factor of two is the number of spin states-- but it's the same result either way. It's all in the question of whether the left-handed neutrino is the anti particle of the right-handed neutrino, or another spin state of the right-handed. I'm sorry. I'm saying this wrong. The question is whether the right-handed neutrino is the anti particle of the left-handed neutrino, or whether it's another spin state of the left-handed neutrino. But either way, the number of particles that we think exist would be the same in the Majorana description. OK? OK. Now we're ready to actually write down, for example, a formula for kT.
So, sticking to this time range where kT is much, much bigger than m sub e c squared, but kT is much, much less than m sub mu c squared-- we can write down a formula. I'm sorry. Actually the formula's more general. I didn't really look carefully at what I wrote. Forget this. As long as we know what little g is-- and this formula's going to have a little g in it-- you can fill in the right value for any time period you want. We can write down a formula for kT as a function of time. And that's because we've learned how to write the energy density as a function of time, and we've learned how to express the energy density in terms of the temperature. And putting those two together gives us a formula for the temperature as a function of time. And that formula has the form of 45 h bar cubed c to the fifth, divided by 16 pi cubed little g times capital G, Newton's constant, the whole thing to the 1/4 power, times 1 over the square root of t. So if we had nothing but radiation, and if little g were constant, this would be the formula for the temperature as a function of time. Now what actually happens is that little g changes as we go through these different thresholds for which particles are contributing to the black-body radiation. And that means that the exact formula is a little bit more complicated than this. But as long as you're well into any one of these periods-- as long as you're not near any of these borderlines-- this formula is in fact a very accurate approximation to the temperature as a function of time. And an interesting time period is about one second after the Big Bang. And that corresponds to this g equals 10 and 3/4 that we talked about earlier, where we have neutrinos, e plus, e minus pairs, and photons. And since this is before the e plus, e minus pairs have disappeared, the temperature of the neutrinos at this stage is still the same as the temperature of the photons. So everything is at the same temperature here. We just add up the g's. And the end result for that is that kT is equal to 0.860 MeV-- million electron volts-- divided by the square root of t in seconds. Those are the units that you could express it in; in the notes, other sets of units are given. So if 0.86 is about 1, which it is for many purposes, then roughly speaking we're saying that at 1 second after the Big Bang, kT was about one MeV, which means it's higher than the rest energy of the electron, which is 1/2 MeV. So the electron positron pairs were still pretty much present at one second after the Big Bang, but they start to disappear pretty much immediately after that. One can also write down what the temperature itself is. The temperature is 9.98 times 10 to the ninth Kelvin, divided by the square root of t in seconds. OK. The next item we want to talk about, following the history of the early universe, is recombination and decoupling. Until the temperature fell to about 4,000 Kelvin, the hydrogen in the universe-- and the universe was mostly hydrogen, about 20% helium and the rest hydrogen-- would be ionized. This is another stat mech calculation that we're not going to be doing. 4,000 degrees is not some magic temperature associated with hydrogen. The point at which the hydrogen ionizes depends on its density. But for the densities in the early universe, the ionization point is about 4,000 Kelvin. So at T equals 4,000 Kelvin the hydrogen recombines. Now the word recombine has somehow historically taken hold.
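The 0.860 MeV coefficient follows directly from the formula just written, with g equal to 10 and 3/4; a minimal numerical check:

```python
import math

hbar = 1.054572e-34    # J*s
c    = 2.99792458e8    # m/s
G    = 6.67430e-11     # Newton's constant, m^3 kg^-1 s^-2
k_B  = 1.380649e-23    # J/K
MeV  = 1.602177e-13    # J

g = 10.75
# kT = [45 hbar^3 c^5 / (16 pi^3 g G)]^(1/4) / sqrt(t)
coeff = (45 * hbar**3 * c**5 / (16 * math.pi**3 * g * G))**0.25

print(coeff / MeV)     # ~0.860, so kT ~ 0.860 MeV / sqrt(t in seconds)
print(coeff / k_B)     # ~1.0e10, matching T ~ 9.98e9 K / sqrt(t in seconds)
```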
So everybody calls it recombination, in spite of the fact that according to our theory it was never combined previously at any time. So the prefix 're' there has absolutely no meaning whatever, but nonetheless it is completely conventional. OK. To estimate when this happens we can use an important fact, which we haven't said yet. Because entropy is conserved, and the entropy density is equal to some constant times T cubed, then-- like the mass density in a matter-dominated universe-- as the universe expands, the entropy thins out. And therefore, just like the mass density in a matter-dominated universe, the entropy density should go down like 1 over a cubed, 1 over the volume. Now strictly speaking this is only true if g is fixed. Well, actually that's not right. It's always true. What I'm about to write next is only true if g is fixed. If g is fixed, then the entropy density is proportional to the temperature cubed. And if you put these together you can see that T is just proportional to 1 over the scale factor. So as long as little g-- as long as the number of degrees of freedom contributing to this black-body radiation-- is fixed, the temperature just falls like 1 over the scale factor. OK. OK. So we said that recombination occurs at 4,000 degrees. There's actually another number that's more interesting, which is decoupling, which is slightly colder. Now the definitions are that recombination is usually defined as the temperature at which half of the hydrogen has recombined, while some significant fraction of it is still not yet recombined. It doesn't happen suddenly. It happens gradually. So you have to pick some point along the way to say this is the temperature that defines recombination. We usually take it to be the halfway point, which I think is the number used to calculate that. But perhaps of more interest is the temperature at which the photons decouple, in the sense that, as the photons scatter between then and now, a typical photon has not scattered at all. And that's colder, because when you cross 4,000 K you still have half of the hydrogen ionized, which means there are still plenty of electrons around for these photons to scatter off of. So you have to cool to a colder temperature before the photons cease to interact with the electrons in any significant way. And that's why the decoupling temperature is somewhat lower than the recombination temperature. We can estimate at what time decoupling happened. Now this is only a crude estimate, but it will in fact be pretty accurate. Since we don't yet even know about dark energy, we're going to treat the evolution of the universe between the time of decoupling and now as being entirely matter-dominated. The time of decoupling is long after the time of matter-radiation equality. So the universe is matter-dominated at the time of decoupling, and we're going to assume it's matter-dominated up to the present day. And that means that we know that the scale factor will evolve like t to the 2/3, and therefore the temperature will evolve like 1 over t to the 2/3. And that will be enough to tell us how much time is needed to go from 3,000 Kelvin to the present temperature, 2.7 Kelvin. So the time of decoupling, making this approximation, is just the ratio of the temperatures, t zero over t decoupling, to the 3/2 power, times the time today. [INAUDIBLE] this fraction in the past. And plugging in numbers, that's 2.7 over 3,000, to the 3/2 power, times 13.8 billion years. And that's about 380,000 years.
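The 380,000-year estimate is a one-line calculation worth checking:

```python
T_now = 2.7        # K, present photon temperature (rounded, as in the lecture)
T_dec = 3000.0     # K, approximate decoupling temperature
t_now = 13.8e9     # years, approximate age of the universe

# Matter-dominated: T ~ 1/a ~ t^(-2/3), so t_dec = (T_now / T_dec)^(3/2) * t_now
t_dec = (T_now / T_dec)**1.5 * t_now
print(t_dec)       # ~3.7e5 years, i.e. about 380,000 years
```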
And in fact this is pretty much exactly the number that people quote when they calculate this more accurately. So we really hit it just about on the nose. So the time of decoupling was about 400,000 years after the Big Bang. And I should add-- and then we'll stop-- that when we look at the cosmic background radiation, what we are really seeing is an image of the universe at this time, at the time of decoupling, because from this time onward photons have pretty much just travelled in straight lines. And that means that when we look at the cosmic background radiation, we're really seeing an image of the early universe, in exactly the same way as you're seeing an image of me when you observe the photons that are traveling from my face to your eyes along straight lines. It's the same principle. So this determines what we're actually seeing in the cosmic background radiation. And therefore it's a very, very important number. And we'll stop there now and continue next time.
MIT_8286_The_Early_Universe_Fall_2013
12_NonEuclidean_Spaces_Open_Universes_and_the_Spacetime_Metric.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation, or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: OK. Good morning, everybody. Welcome to lecture 12 of 8.286. I can't think of any announcements for today, but let me begin by asking if there are any questions, either about logistics or about physics. OK. In that case, let's get started. I want to begin by having a rapid run-through of the things we talked about last time, just to firm everything up in our minds and get us ready to go on. So, last time we were talking about non-euclidean geometry in a serious way. We began by considering the surface of a sphere, just a two-dimensional sphere embedded in a three-dimensional space, described by the simple equation x squared plus y squared plus z squared equals R squared. We said that if we wanted to talk about the surface itself, we'd want to have coordinates for the surface and not just speak of things in terms of x, y, and z. So we introduced the standard polar coordinates-- theta and phi, which are related to x, y, and z by these fairly well-known equations. Then we wanted to know the metric in terms of our new variables theta and phi, which is the main goal-- to figure out the metric. So we first considered varying the two variables one at a time. By varying theta, we see that the point described by theta, phi would sweep out a circle whose radius is R, and the angle subtended is d theta. So the arc length is just R times d theta. So for varying theta, the arc length is given by that simple equation. Similarly, we went on to ask ourselves what happens when we vary phi. As we vary phi, the point described by theta comma phi again sweeps out a circle, but this time it's a circle in the horizontal plane, whose radius is not R but whose radius has this projection factor-- it's R times sine theta. So the angle is again-- excuse me, the arc length is again d phi, the angle, times the radius, but the radius is R sine theta. So ds, the total arc length, is R times sine theta d phi. Then, to put them together, we notice that these two variations are orthogonal to each other, which you could see pretty directly from the diagram. So if we do both of them at the same time and ask what's the total length of the displacement, it's just a simple application of the Pythagorean theorem. And we get the sum of the squares. So, varying theta gives us R d theta. Varying phi gives us R sine theta d phi. And putting them together, we just get ds squared is the sum of the squares of those, which is R squared times d theta squared plus sine squared theta d phi squared. And that's the standard metric for the surface of a sphere in polar coordinates. So that was a warm-up. What we really want to do is to elevate that problem by one more dimension, and then we have a model for the universe. We can use the same method to construct a three-dimensional space, which is a three-dimensional surface of a sphere embedded in four euclidean dimensions, and that becomes a perfectly viable homogeneous, isotropic, non-euclidean metric that can describe a universe and, in particular, describes the type of universe called a closed universe. So to do that, we introduce one more axis, w. And we consider the sphere described by x squared plus y squared plus z squared plus w squared equals R squared.
So it's a three-dimensional surface of a sphere in four dimensions. We then need to introduce one more variable to describe points on the surface, and we introduce this in the form of a new angle. The new angle I chose to call psi. And we measure that angle from the new axis, from the w-axis. So the new angle psi is simply the angle from the w-axis, which means that the projection of our vector from the origin to the point in the w direction is just R cosine psi, and the projection into the x, y, z subspace is R times sine psi. And the four equations that describe x, y, z, and w are shown there. And all we did is we set the w-coordinate equal to R times cosine psi, which is just the statement that psi measures the angle from the w-axis. Nothing more. And then we multiplied x, y, and z by a factor of sine psi, so that we still have maintained the condition that x squared plus y squared plus z squared plus w squared is equal to R squared, which you can prove directly by manipulating this, using the famous identity sine squared plus cosine squared equals 1. Nothing more profound than that. So, we're now ready to go ahead and find the new metric, and this time it'll really be something nontrivial, something you didn't already know from high school. The new displacement is to vary psi. If we vary psi, it's really the same story as we've seen before, except in a different plane. ds is just equal to R times d psi. I guess, as we vary psi, the point described by these coordinates makes a full circle of radius R. OK, now what we want to do is put all this together. If we vary psi, we know that ds is equal to R times d psi. If we just vary theta or phi, it's the same thing we had before. We don't need to rethink it. All we need to do is remember there's an extra factor of sine psi in front of all displacements in the x, y, z subspace. So if you vary theta or phi, ds squared is just equal to what we had before for the metric, multiplied by the extra factor of sine squared of psi. Then to put them together, if we assume for the moment that they are orthogonal to each other, then we just add the sum of the squares. And that is the right answer. But I'll justify it in a minute. But jumping ahead and making the assumption that these separate displacements are always orthogonal to each other, ds squared is then just the sum of the squares, and we get this metric to describe our closed universe in terms of the variables psi, theta, and phi. To prove this orthogonality, which is crucial for believing that result, I gave an argument last time, and I'll outline it again on the slides here. We can consider the two displacement vectors that we're trying to show to be orthogonal. dR sub psi is a four-dimensional vector, which represents the displacement of the point being described by these coordinates when psi is changed to psi plus d psi, an infinitesimal change in the psi coordinate. Similarly, I'm going to let dR sub theta be the displacement vector that the point described by these coordinates undergoes when theta is varied by an infinitesimal amount, d theta. And what we're trying to show is that these two are orthogonal to each other. So if we do them both, the magnitude of the change is just the square root of the sum of the squares. So, first looking at dR sub theta, we notice that dR sub theta has no w-component. And to make that clear, we should go back a couple slides and look at how w is defined. w is defined as R times cosine psi. So if we vary theta, w doesn't change.
It doesn't depend on theta. So if dR sub theta has no w-component, it means that when we take the dot product of dR psi with the dR theta, we want to show that this is 0 to show that they're orthogonal. The w-components won't enter, because one of the two w-components is 0, and the dot product is a sum of the product of the x-components plus the product of the y-components plus the product of the z-components plus the product of the w-components. So w-compontents only enter as a product of the two w-components. So as long as one of them is 0, there's no contribution there. So the four-dimensional dot product reduces to a three-dimensional dot product. And here I'm introducing a peculiar notation when I put a subscript-- a superscript, rather-- 3 in a vector. I just mean take the first three components and ignore the fourth and think of it as a 3 vector. So the dot product that we're trying to calculate now is just the dot product of two 3 vectors-- the one that we get when we vary psi and the one that we get when we vary theta. Next thing to notice is that we can look at the properties of these two vectors. And dR psi, the vector we get when we vary psi, I claim is in the radial direction in this three-dimensional subspace. And we can see that by looking again at these formulas that relate the angles to the Cartesian coordinates. When we vary psi, sine psi changes, but sine psi multiplies x, y, and z all by the same amount. So sine psi changes. It changes x, y, and z proportionally. And if you change x, y, and z proportionally, it means you're moving in the radial direction in this three-dimensional subspace. On the other hand, dR theta is what we get when we vary theta. And from the beginning, theta was defined in a way that parametrized the sphere. So varying theta only moves you along the sphere. It does not change your distance from the origin. So varying theta is purely tangential. So we have a dot product between a radial vector and a tangential vector, and those are always orthogonal to each other. So we get a dot product of 0, as claimed. So the two original four vectors are orthogonal to each other, which is what we're trying to prove. OK, everybody happy with that? It is a crucial step. You haven't really gotten the results unless you know these vectors are orthogonal. OK, almost done now. We then later in lecture talked about the implications of general relativity, and here we didn't prove what we were claiming. We just admitted that there are some things in general relativity that we're just going to have to assume, and this really is almost the only one. General relativity tells us how matter causes space to curve. And it does that in the form of what are called the Einstein field equations. And we're not going to learn the Einstein field equations. That's the subject of a general relativity course. So we're just going to have to assume what general relativity tells us about how space curves, and in particular in this instance, what it tells us is that the radius of curvature R-- this R that we've introduced into our metric-- is the radius of curvature of the space, is related to the matter and motion by R squared being equal to a squared of t divided by k. And we did argue last time that, that k in the denominator really is necessary just to make the units turn out right. So we really know by dimensional analysis that this formula has to hold up to some factor. The fact that the factor is 1 is a fact about general relativity, which we're not showing at this point. 
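Before putting in that general relativity relation, here is a quick symbolic cross-check of the closed-universe metric and of the orthogonality argument just completed. This is only an illustrative sketch: the embedding formulas are the ones given above, but the code and variable names are my own.

```python
# A minimal sympy sketch: embed the three-sphere in four euclidean dimensions
# exactly as described, pull back the euclidean metric, and confirm that the
# result is diagonal (the orthogonality claim) with the quoted entries.
import sympy as sp

R, psi, theta, phi = sp.symbols('R psi theta phi', positive=True)

embedding = sp.Matrix([
    R * sp.sin(psi) * sp.sin(theta) * sp.cos(phi),   # x
    R * sp.sin(psi) * sp.sin(theta) * sp.sin(phi),   # y
    R * sp.sin(psi) * sp.cos(theta),                 # z
    R * sp.cos(psi),                                 # w
])

J = embedding.jacobian([psi, theta, phi])   # columns: d(x,y,z,w) / d(psi, theta, phi)
g = sp.simplify(J.T * J)                    # induced metric: g_ij = dR_i . dR_j

print(g)
# Expected: Matrix([[R**2, 0, 0],
#                   [0, R**2*sin(psi)**2, 0],
#                   [0, 0, R**2*sin(psi)**2*sin(theta)**2]])
```

The off-diagonal entries come out zero, which is the orthogonality statement, and the diagonal entries reproduce R squared times d psi squared plus sine squared psi times the two-sphere metric, as quoted above.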
When one puts this back into the metric to express the metric in terms of a of t, we find finally that the metric can be written as shown in the box here, and this is the last equation, where I've made a substitution of variables. I replaced the angle psi by a radial coordinate little r, which is defined to be sine of psi divided by the square root of k. And this form of the metric is what's called the Robertson-Walker metric. And it's a famous form of the metric. This is what people normally use. So that finishes everything we said last time, I think. Any questions? Yes. AUDIENCE: What is the motivation for saying that the-- where you can describe space as a three-dimensional sphere in a four space. Is it because it's only real geometry that, where there's isotropian [INAUDIBLE]? PROFESSOR: Yes, that's right. I was going to be saying that shortly, but yes. This metric and its open universe counterpart and flat space together make up the most general possible metric, which is homogeneous and isotropic. At this stage, I'm really not going to claim that anyway, but at this stage, what we do know is that this metric is homogeneous and isotropic. And certainly what we're trying to construct is metrics, which are homogeneous and isotropic. But this also is actually the only possibility within that small class. Any other questions? OK. In that case, we will continue on the blackboard. So what we have derived so far is the metric for a closed universe. Maybe I'll start by getting on the blackboard the same formula that's up there, just so I can see it better. Even though you can probably see equally well either way. A closed universe is described by ds squared is equal to a squared of t times dr squared over 1 minus kr squared plus r squared d theta squared plus sine squared theta d phi squared. And to relate this variable r to our previous definitions, little r is equal to the sine of psi divided by the square root of k. OK. Question back there? AUDIENCE: Psi is-- sorry this is just [INAUDIBLE] think. Psi is which angle? PROFESSOR: OK, the question is psi is which angle. Psi is the angle we introduced when we went from two-dimension sphere embedded in three dimensions to one dimension higher. Psi is the angle from the new axis. The angle from the w axis. AUDIENCE: OK. PROFESSOR: OK. So, we've covered a lot of ground here. We have our first non-euclidean metric that's visibly important. But we know from our work on the Newtonian model of the universe that this little k doesn't have to be positive. It can be positive, negative, or 0. If k is 0, the metric is actually just the metric of a flat space. But when k is negative, it's a case that we haven't talked about yet. So we want to know, what will we write for a metric if k were negative? And the answer to that turns out to be perfectly simple. This formula has a k in it. Lots of times in our experience-- I'm sure we all know-- when we write an equation for one sine of a variable, we find that the same equation works even if the variable has the other sine. So if you're buying and selling stocks, if the stocks go up and the stocks go down, you can use the same equations. The price today is the price yesterday plus the increment, and the increment could be positive or negative. But that equation-- price today equals price yesterday plus income-- it still works. And same thing here. If k it happens to be negative, there's nothing wrong with this formula. It, in fact, describes an open universe just as well as it describes a closed universe. 
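Writing out the substitution that leads to that boxed form: with $r = \sin\psi/\sqrt{k}$,

$$dr = \frac{\cos\psi}{\sqrt{k}}\,d\psi
\quad\Longrightarrow\quad
d\psi^2 = \frac{k\,dr^2}{\cos^2\psi} = \frac{k\,dr^2}{1 - k r^2},$$

since $\cos^2\psi = 1 - \sin^2\psi = 1 - k r^2$. Using $R^2 = a^2(t)/k$, the closed-universe metric $R^2\left[d\psi^2 + \sin^2\psi\left(d\theta^2 + \sin^2\theta\,d\phi^2\right)\right]$ becomes

$$ds^2 = a^2(t)\left[\frac{dr^2}{1 - k r^2} + r^2\left(d\theta^2 + \sin^2\theta\,d\phi^2\right)\right],$$

which is the Robertson-Walker form written above.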
Now notice, however, that things are a little bit tricky. If you look at the equation that I wrote immediately below here, if k is negative, we have the square root of a negative number here, so the denominator would be imaginary, and what would that say about r and psi? It would obviously confuse us. So you really have to write the metric and the correct form before you can just change the sign of k. If we had written the metric in terms of psi and not made the substitution, we could just as well have written the metric for a closed universe as a squared of t divided by k times d psi squared plus sine squared psi d theta squared plus sine squared theta d phi squared. This is an alternative metric for the closed universe. It's, in fact, where we started. We then made a substitution, replacing sine psi by little r. If we had the metric in this form and we said, well, let k be negative instead of positive, then notice that a squared is certainly positive, so we'd have a negative number out here times things which are also manifestly positive. We would have a negative definite metric instead of a positive definite metric. So we could not change the sign of little k in this formula and get what we want. So you have to careful. It doesn't always work. But it does work when you write the metric in this form. Now since it doesn't always work, and since we haven't really made any sound arguments yet, I'd like to spend a little time describing-- I'm not going to do the calculation because it's too messy, but I'd like spend little time describing how you would show that this metric works for an open universe. So first thing to recognize is-- what do we actually mean when we say it "works"? Can somebody tell me what I probably mean when I say that? Yes. AUDIENCE: It doesn't have any glaring contradictions? PROFESSOR: Doesn't have any glaring contradictions. Yeah, that's good, but we can be more specific, especially since I'm going to try to describe how we would actually show it, and it's a little hard to show that something doesn't have glaring contradictions. What do we actually care about in constructing these metrics? Yes? AUDIENCE: Does the goal that they hold well in limits, i.e., the flat universes? PROFESSOR: That the physics will hold well in certain limits, like they should approach the flat universe and limit. We certainly do want that to happen, but there is something else that we want that doesn't involve taking limits, because you have different things which all approach the same limit, of course. Making sure an answer approaches the right limits is a good way to test the answer, because most wrong answers will not have the right limits. But merely knowing you have the right limits does not prove that you have the right answer. Yes? AUDIENCE: It could also reflect an isotropic and homogeneous non-euclidean? PROFESSOR: Exactly. Exactly. What we're looking for is a homogeneous and isotropic non-euclidean space, because that's what we know about our universe. It's homogeneous and isotropic, and we're trying to build a mathematical model of those facts. So we want homogeneity and isotropy. If we limit isotropy to isotropy about the origin, which is enough if we're going to later prove homogeneity, which will prove that all points are equivalent, isotropy about the origin is obvious here, because the angular part is just exactly what we had for a sphere in three euclidean dimensions. So it behaves on angles exactly like a euclidean problem, so we know that it's isotropic. 
If you point out that algebraically as you look at it, it's not obvious that it's isotropic. We just know where that expression came from. It came from the sphere. In terms of theta and phi, it's not manifestly isotropic. And that's because in choosing theta and phi, we chose a special point, the North Pole, to measure our angle theta from. And the choice of that special point for our coordinate system broke the isotropy. But we know that deep down it is isotropic. And that idea, that you can have such isotropy without having manifest isotropy is also crucial to how homogeneity plays out in this metric. I claim, and we know really, that this metric is homogeneous. At least we know where it came from. It came, again, from the surface of a sphere one dimension higher than just the angular part. And then spherical picture--it's obviously homogeneous. But nonetheless, in building our coordinate system, we had to break the homogeneity. We chose a special point-- again, what we might call the North Pole-- in this case, the point where w has its maximum value and made that point special. The point where w had its maximum value in the x, y, z, w space is the point which is now the origin of this coordinate system. So, if we wanted to prove that this metric really is homogeneous, we would like to prove it for the k equals minus 1 case. But let's first, imaging, what would we do if we wanted to prove that it was homogeneous for the k equals plus 1 case or k positive case? The case that we really think we do understand. The closed universe. For the closed universe, this does not look homogeneous. It looks like the origin is special. r equals 0 is special. But we know that it came from the sphere, and if somebody asked us to prove that, that metric was homogeneous-- more particularly, somebody might, for example, challenge us to construct a coordinate transformation, which would preserve the metric and map some arbitrary point r 0 theta 0 phi 0 to the origin. We might undertake that challenge. It's a lot of work. We're not going to actually do it. And I promise I'll never ask you to do it. But I want to talk a little bit about how we would do it, because we do have a method, which we know will work. And knowing that we have a method that works is all we really need to know. So suppose we wanted a coordinate transformation that preserves the form of the metric and maps some arbitrary point, and I'll give the coordinates to this arbitrary point a name. I'll call it r 0, theta 0, and phi 0. These are coordinates of a point. So we're going to map this arbitrary point to the origin. Notice that this is a concrete statement about homogeneity. If we can map an arbitrary point to the origin while preserving the metric, we're really proving that an arbitrary point is equivalent to the origin. And if an arbitrary point is equivalent to the origin, then all points are equivalent to the origin and equivalent to each other. We're done. That proves homogeneity. So, suppose we wanted to do this. How would we do it? The point is that knowing how we got this metric from the sphere allows us to go back to sphere and rotate the sphere and rotate back and derive the coordinate transformation that we want. And I'll just describe that in slightly more detail. What we would first do is-- I claim we can do it in three steps, each of which we know how to do although they're messy. I guess I'll start over here. So step one is to find the x, y, z, w-coordinates that go with this point. 
Because once we have the x, y, z, w-coordinates, we're in our four-dimensional space where we know how to do rotations. So the first thing we do is we just find the corresponding x, y, z, w-coordinates where x0 is just the x-coordinate that goes with r0, theta0, and phi0. And y0 is the y-coordinate that goes with r0, theta0, and phi0. And z0 is the z-coordinate, and w0 is the w-coordinate. And these are just points we had before. I'm writing them symbolically, but we know how to express the Cartesian coordinates in terms of the angles. Yes? AUDIENCE: Do we have a psi0 as well? PROFESSOR: r0 replaced psi0. r0 is the sine of psi divided by root k. So we only need three coordinates. We could have different choices of what we call them. We could've used psi here. The reason I'm using little r is that I want to, when I'm done, describe what we would do if k were a negative. And if k were a negative, we already said that psi does not actually work. We have to use different coordinates in order to smoothly write an open universe coordinate system. Yes? AUDIENCE: How is that not like cheating? Because, I mean, you did define r with the root k, and now we're just kind of ignoring them. PROFESSOR: Well that's exactly-- the question is why is this not cheating? And the reason it's not cheating is because of what I'm about to show you. What I'm saying really is that just setting k negative is something you might expect to probably work. I think you have good grounds to expect it to probably work. Now what we're talking about is how to actually show that it works. Yes? AUDIENCE: Professor, what's the necessity of defining w as opposed to, I don't know, t? The traditional, like, t-- PROFESSOR: OK. The question is why did I call the fourth variable w and not t. The answer is that the variable we're talking about here is not time. It's another spatial coordinate. So, for that reason I think it's better to call it w than to call it t. Of course, needless to say, the name of a variable doesn't have any actual significance. So I certainly could have called it t equally well, but I think that would have caused some confusion by people thinking it was time, which it's not. Any other questions? OK. So, what I'm outlining is the steps that you would use to prove that this is homogeneous. I'm doing it for the closed case. Now the point is that if we can do it for the open case, we'll prove that the open metric is what we want it to be-- homogeneous and isotropic. And that's the only criteria that we have for goodness of a metric. So, continuing with the closed case in mind, the first step of doing this mapping, to map some arbitrary point to the origin, is to first find its x, y, z, w-coordinates, its Cartesian coordinates. Once we have the Cartesian coordinates, we know that in the four-dimensional space we can perform ordinary euclidean rotations. Rotations in four dimensions [INAUDIBLE], not three dimensions. And even rotations in three dimensions are not all that simple, but nonetheless in principle, we know how to do rotations in four dimensions. And we know what we're trying to do in the four-dimensional picture. We're trying to rotate this point to the origin of our coordinate system. And the origin of our coordinate system-- the way we've done this mapping-- is to make w equals capital R, w equal to its maximum value, the center of our coordinate system. That's where psi was equal to 0, and now where our new variable little r is equal to 0. 
So we'd like to map this point, whatever it is, to the point where w has a maximum value, and the other coordinates all vanish. So we can do that. We can find a rotation that does that. It's not even unique, because you can always rotate about the final axis. But in any case, we imagine that we can do that. And that's step two-- is to find the right rotation. And a general rotation is a linear transformation, so you can write it as x prime, y prime, z prime, w prime, as a four vector, is equal to some 4 by 4 rotation matrix times the original four coordinates. So an equation of this form would describe a four-dimensional rotation. And we in particular want the four-dimensional rotation which maps-- maybe I'll write it as a similar matrix equation-- what we want is that the matrix, when it operates on x0, y0, z0, and w0, which remember are just the four coordinates that correspond to our original point, r0, theta0, phi0, we want this to map into 0, 0, 0, r, which is the four-dimensional description of the origin of our new coordinate system. And then finally, step three is the obvious one. Once we've found the transformation that maps to the origin, now we just go back to our original angular coordinates. So, now we set r prime equal to the radius function of x prime, y prime, z prime, and w prime. And this just means the r-coordinate-- that corresponds to those four euclidean coordinates and similarly for the other variables. Theta prime is the theta function of x prime, y prime, z prime, and w prime. And phi prime is equal to the phi function. All these are functions that we know. I just don't want to write them out explicitly, because that's a lot of work. So it's phi of x prime, y prime, z prime, w prime. So now with the three steps, we have our mapping. We could start with an arbitrary point, perform the rotation, and then calculate the angular variables again. Ok, do people understand what I'm talking about here? Oh good. OK. And the good things is I promise I won't make you do it. And I've never done it either, to be honest. But it's obvious that we can do it. And if we can do this, this would prove and especially demonstrate homogeneity. It would be a mapping that would map an arbitrary point to the origin, proving that, that arbitrary point was equivalent to the origin as far as the metric is concerned. And now my claim, I mean, I'm not really going to prove this either, but it's a claim that could be verified by going through things. And it also seems highly plausible-- that if you looked at each of these steps-- these are all just algebra steps, these are all just algebraic equations-- that they would work just as well for negative k as they will for positive k. And by doing these same series of manipulations for negative k, you would prove that the Robertson-Walker metric for negative k is homogeneous, which is our goal. We already know it's isotropic. If we can prove it's homogeneous, we're home free. It's [INAUDIBLE]. Yes? PROFESSOR: Does the necessity of placing the origin at r, like big R, on w is that just because that thing is expanding? Or-- I'm kind of having trouble understanding why it wouldn't be just straight 0s. Like, why there's that [INAUDIBLE] value [INAUDIBLE]? PROFESSOR: OK, the question is why does the origin look like this as opposed to just being all 0s. The answer is that all 0s is not even in our space. Because, remember the space we're interested in is the surface of the sphere. 
And the surface of the sphere obeys x squared plus y squared plus z squared plus w squared equals r squared. So if all the coordinates were 0, it's not part of our space at all. So we're using this four-dimensional space, the embedding space, to make things simple. But in the end, we're only interested in the three-dimensional surface. So the origin of our coordinate system for that three-dimensional surface had better be in the three-dimensional surface. So of course, other choices we could have made-- we could have put it anywhere we want on the surface. Choosing to put it where w has its maximum value is just an arbitrary convention. Yes? AUDIENCE: So you were saying that most metrics are not homogeneous? PROFESSOR: Oh yeah. Sure. Most metrics are not homogeneous. Most objects are not around. AUDIENCE: When we're doing this math to show that the Robertson-Walker was homogeneous, it didn't seem that we use the exact form of the Robertson-Walker metric at all in it. We just said, it is possible to do these-- PROFESSOR: Well, no. We did use the form when we made this rotation. We used the form that, in the euclidean formulation, we knew that it was rotationally invariant. And the rotational invariance in the euclidean formulation is homogeneity and guarantees homogeneity, but it's a special property. If it was ellipsoidal shaped instead of spherical, when you rotated it, it would not be invariant. If it had any bumps or lumps when you rotated it, it would not be invariant. Yes, Aviv. AUDIENCE: I feel like there should be a fourth step where you show that the metric doesn't change forms [INAUDIBLE]. PROFESSOR: Yeah. We do need to know the metric does not change form, but I think we do have a guaranteed-- maybe we should've said some words. I don't think it's really a fourth step in the sense that I don't think it requires anymore algebra. But the point is that we know that the metric is invariant, that the four-dimensional metric is invariant under this rotation, which was really the only non-trivial step. And otherwise, besides the rotation, all we did is we went from the r theta phi variables to the utility euclidean variables, and then we went back from the euclidean variables to the r theta phi variables. But we already know how to go from euclidean variables to r theta phi variables, and it results in that metric. And it will still result in that metric when, if a prime is on all of the coordinates. AUDIENCE: So you don't have to say, like, suppose we have a [INAUDIBLE] byproduct. And do the mapping, figure out what the ds squared is in terms of r theta phi to show that it's the same? Do we have to do that [INAUDIBLE]? PROFESSOR: OK, the question is do we have to explicitly show that ds squared is the same for the new variables as it was for the old variables. We certainly want to be convinced that that's true, and we certainly want to have an argument which convinces us. But I would claim that if you think about what underlies these steps, I only gave a schematic description of them. If you think about what underlies these steps, I think it's implicit that the form of the metric is what we had. The form of the metric that we had was completely dictated by the transformation, which expressed r theta and phi in terms of x, y, z, and w. And as long as you know the metric in x, y, z, and w, and that's the euclidean metric both before and after our rotation, then when you use the same equations to go from x, y, z, w to r, theta, and phi, you'll always get the same metric. 
So I think we are guaranteed by this process to get a metric for our new variables, r prime, theta prime, and phi prime, which has exactly the same form as the original metric. Because it's the same calculation again. The only difference is that this time the variables all have primes on them. And the crucial step, the step that was nontrivial, is the fact that this rotation did not change the metric. That's where the homogeneity was built in, that we started with a sphere that we knew was rotationally invariant. And this whole calculation just extracts that homogeneity that we built in from the beginning. Yes? AUDIENCE: For k negative, it's r squared over k, so is r negative? PROFESSOR: No. r is still positive when expressed in terms of x, y, z, and w. Let me think if I can show that. I'll show that next time. It's a little involved, but it will be positive. I might add that if we look at this formula and ask what's going on, what's going on-- we don't necessarily need to know this-- but what's going on is that in going from the closed case to the open case, k goes from positive to negative and therefore square root of k becomes imaginary. But psi also becomes imaginary. To describe the relationship between the closed metric and the open metric, if you're using psi, you have to say that for the closed metric, you'll use real values of psi, and for the open metric, you'll use imaginary values of psi. And that makes r real and makes this formula work. And you could also then see how this formula works. If psi is assigned imaginary values, then deep psi squared is negative, so this negative sign cancels that negative sign, and you again get a positive definite metric. Yes? AUDIENCE: So how can we choose the imaginary [INAUDIBLE] of sine and psi. Is that just to reflect or is that a choice that we're making for our model? PROFESSOR: OK. Yeah. The question is why do we choose to use the imaginary value of psi. And the answer is perhaps that I failed to state all the conditions we're interested in when I said what properties we want this metric to have. We want the metric that we're seeking to be homogeneous and isotropic, as we said. What we didn't say, but what I kind of had in the back of my mind as an assumption, is that the metric should also be positive definite. You can construct other metrics which are homogeneous and isotropic but not positive definite. In fact, you would if you let psi be real and let k be negative. You'd have a negative definite metric, which would still be homogeneous and isotropic. So to enforce all three properties, you have to use some imagination. And the easiest way to do it is to write the metric in the magical form where you can just let k go to minus k, and that's the Robertson-Walker form here. And if you write it this way, when you let k go to minus k, it becomes negative definite, and you have to scratch your head. And if you really clever, you might say, well, if I assign negative-- excuse me, if I assign imaginary values to psi, it'll become positive definite again. That works, but it's less straightforward. Yes? AUDIENCE: Do we want it to be positive definite so that it's visible? PROFESSOR: Exactly. We want it to be positive definite so it's visible. AUDIENCE: Is there any way we can have a negative definite for a multidimensional space that's reflects a 3D positive definite space? PROFESSOR: Is there anyway in higher dimensional space or something that we can have it maybe have mixed signs or be negative. 
Well in fact, we will see shortly, because we're going to add in time, time will occur with the opposite sign, and it will not be positive definite anymore. But that will nonetheless correspond to real physics. AUDIENCE: Right. So why do we have to force that ds squared into spaces? PROFESSOR: Because now we are talking only about space, and certainly for our universe, space is positive. Now, I might add since you brought this up, that in relativity, there's no clear distinction between space and time, so you might wonder why should I be saying that space is positive and time is negative. And perhaps I'm oversimplifying a bit when I say that space is positive and time is negative. But what is a requirement for general relativity to match our universe, and therefore a requirement that we impose on the general relativity theory, is that the metric have three positive eigenvalues and one negative eigenvalue. And that's how it's described, and that's called the signature of the metric-- the number of positive eigenvalues and the number of negative eigenvalues. And reality clearly has-- well, I shouldn't say clearly. String theory is more dimensions-- but to describe our macroscopic world, clearly, we have quantities that we intuitively identify as three space dimensions in one time dimension. And the metric that describes that is a metric whose signature is three positive eigenvalues and one negative eigenvalue. Although, some people reverse the sign conventions and say it's three negative and 1 positive, which works just as well. You can define the metric with either sign. But this is the one that we're using, the one that corresponds to space being positive. OK, any other questions? OK. In that case, let's move onward. What did I want to talk about next? OK. I wanted to write on the blackboard a statement which came about earlier due to questioning but hasn't been written on the blackboard yet, which is that there's a theorem which says that the most general possible three-dimensional metric, which is homogeneous and isotropic, is this space. So any three-dimensional, homogeneous, and isotropic space can be described by the Robertson-Walker metric. We are not going to prove this, but it is a theorem. If you want to see a proof, there's a proof, for example, in Steve Weinberg's gravitation and cosmology textbook. And in a lot of other books, I'm sure. But we'll take it for granted. These are certainly the only homogeneous and isotopic spaces that we know how to construct. And in fact, it's the only ones that exist. Now, I emphasize that if the space is homogeneous and isotropic, obviously the metric does not have to look exactly like that, because you can choose different coordinates. We could take these coordinates and make some arbitrary transformation and make the metric look incredibly ugly. It would still be a homogeneous and isotropic space. But the claim is that any homogeneous and isotropic space can be written in metric that looks exactly like this with a proper choice of coordinates. OK. Next thing I want to discuss is the size of these universes. Size in the generalized sense of, actually the question of whether it's positive-- whether it's infinite or finite. Notice that our closed universe, one can see from the embedding in four euclidean dimensions, is finite. The surface of a sphere is finite. And one can also see it from the form of the metric, although one has to think a little bit about how exactly it works. 
If we start at the origin and let r get bigger, clearly something funny's going to happen when r is equal to 1 over the square root of k. That is, when kr squared is equal to 1, this metric will become singular. If one goes back to the angular description in terms of psi, it's clearer what's going on there. When kr squared is 1, that's exactly where sine psi is one. And that just means you've reached the equator of your sphere. Quick picture. Size measured from the w-axis. The equator corresponds to sine psi equals 1. And then if you continue, psi gets bigger up to pi, but r starts getting smaller again. So r is double valued in the sense that there are two latitudes at which r has the same value-- one of the northern hemisphere and one in the southern hemisphere. But in any case, r starts from 0, goes to some maximum value, and then goes back to 0. Everything is finite. And one could integrate and find the total volume, and it's finite. And I think it was or will be a homework problem where you do exactly that integral at the volume of a closed universe. On the other hand, if k-- let's first set it equal to 0. If k is 0, we have here just the euclidean metric and polar coordinates describing flat space. And there's no limit on r. r can become as large as you want. You can hypothesize that space somehow ends, but we don't believe-- we don't know of any end to space. And there are-- in any case, a precise way you could describe what it would mean for space to end in general relativity and the usual postulates of general relativity is that , that doesn't happen-- that space doesn't just have an arbitrary end. So the flat case is an infinite space when k is equal to 0. It's just an infinite euclidean space. Similarly, if k is negative, then nothing funny happens as r increases. So there's no reason not to let r increase to infinity. Anything else would just be putting in an arbitrary wall into space without any motivation for believing that such a wall is there. And one has to remember that r is not physical distance. So the fact that r can go to infinity doesn't necessarily make the space infinite, but you can calculate physical distance. We can calculate the physical distance from the origin to the radius r, and we get that just by integrating the metric. The metric tells us what the actual physical length is for an infinitesimal segment. That's what the metric meant in the first place. So the integral that we'd be doing of an a of t out front, and then we'll be integrating dr prime over the square root of 1 minus kr prime squared from 0 up to r. Remember k is negative. So this is a positive quantity under the square root. It's not going to cause any problems by vanishing on us. And this is an integral, which is in fact, doable. And it's just equal to, still the a of t in front, which I think I left out in the notes, times an inverse hyperbolic cinch of the square root of minus kr over square root of minus k. Remember k is negative. See these are all square roots of positive numbers. And that cinch function, the inverse cinch can get to be as large as one wants by letting r be as large as one wants. It grows without bound. And that means the physical distance grows without bound as r grows to infinity. And it grows faster than linear, I think. I take that back. I'm not sure. Yes? 
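As a quick numerical aside on the integral just evaluated: the values of a, k, and r below are arbitrary choices, and the code is only an illustration of the closed-form answer against direct quadrature of the metric.

```python
# Numerical check (illustrative values only) of
#   ell_p(r) = a * arcsinh(sqrt(-k) * r) / sqrt(-k),   with k < 0,
# against direct quadrature of  a * dr' / sqrt(1 - k r'^2)  from 0 to r.
import numpy as np
from scipy.integrate import quad

a, k = 2.0, -0.5                      # k < 0: open universe; numbers are arbitrary
for r in (0.5, 3.0, 50.0):
    numeric, _ = quad(lambda rp: a / np.sqrt(1.0 - k * rp**2), 0.0, r)
    closed_form = a * np.arcsinh(np.sqrt(-k) * r) / np.sqrt(-k)
    print(f"r = {r:6.1f}   quadrature = {numeric:10.5f}   closed form = {closed_form:10.5f}")
```

For large r the inverse hyperbolic sine grows only logarithmically, so the physical distance is unbounded but grows more slowly than r itself.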
AUDIENCE: I guess I'm still confused because r is sine of psi over square root of k, so sine psi is bounded by 1 and negative 1, so if we're letting r go to infinity, how-- PROFESSOR: That formula only works for the closed case. r equals sine psi over root k. We can apply it to the open case if we let psi become imaginary. But then the bounds that you said no longer apply. The sine of an imaginary variable is, in fact, the cinch of a real variable. OK? Next thing I want to point out is that the Gauss-Bolyai-Lobachevski geometry that you did a homework problem about or it was an extra credit problem, so some of you did not. But we talked about the Gauss-Bolyai-Lobachevski geometry. That really is just an open Robertson-Walker, RW Robertson-Walker metric, but in two space dimensions. But it's completely analogous to the Robertson-Walker metric in three space dimensions that we're talking about here. So you might recall that Felix Klein construction looked very complicated. That's because of the coordinates that he used. Those coordinates might be simple from some point of view, but from the point of view of illustrating homogeneity, they're very complicated coordinates. And anyway, to physicists, the Robertson-Walker open coordinate system is familiar, and the Felix Klein coordinate system is not. OK. If there are no further questions about these spatial metrics, the next thing I want to talk about is adding time to the picture. Because in the end, we're going to be interested in a spacetime metric, because that's what general relativity is all about-- spacetime metrics. OK. Everything is going to hinge on an important fact from special relativity, which we are going to assume but not prove, because most you have had courses about special relativity elsewhere. And for those you who have not, you can either, if you wish, read an appendix to lecture notes five, in which the fact that I'm about to show you is derived, or you could just assume it. Whichever you prefer, depending on how much time you have. This is not a course about special relativity. You're not required to learn how to derive the fact that I'm about to write. And what I'm about to write starts with a definition given any two events. An event is a point in spacetime. Snapping my finger-- this clearly speaking event. It happens at a certain place in a certain time. And every real events occupies some small volume of spacetime. An ideal event is a point in spacetime. And we'll be talking about ideal events. Which, our model, as I said, has points in spacetime. So given any two events, one can talk about a separation between those events. And they will be separated in both space and time, although either one of those could be 0. But they're not both 0, or it's the same event. And it's possible to define an interesting quantity, which is the difference between the x-coordinates of the two events. This will be the separation between two events, which I'll call a and b, and xa and xb are the x-coordinates of those two events. And probably you all have enough imagination to guess that ya and yb are the y-coordinates of those two events. And za and zb are the z-coordinates of those events. And now here's a surprising one, if you haven't already seen it. We're going to have minus c squared times ta minus tb squared. Now, this is all in special relativity. I maybe should clarify. We haven't gotten to general relativity or cosmology yet. But we need to understand something from special relativity first. 
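Collecting the definition just spelled out in words into a single formula:

$$s_{ab}^2 \;\equiv\; (x_a - x_b)^2 + (y_a - y_b)^2 + (z_a - z_b)^2 \;-\; c^2\,(t_a - t_b)^2 .$$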
So in special relativity, it's natural to define that interval between two events. And the magical property, which is why we define this integral in the first place, is that if we had two different inertial observers, we could calculate how the coordinates as seen by one observer are related to the coordinates as seen by the other observer. And that's called the Lorentz transformation. And one finds that this particular quantity will have exactly the same value to both observers always. The two observers will, in general, find different values for every one of the four quantities here. But when the four quantities are added up with a minus sign in front of the time term, the calculations would show that you get the same value for inertial observers. So this quantity is called Lorentz invariant, meaning it's invariant under Lorentz transformations. And it makes it a very important thing to talk about because in the end, physical things have to be essentially Lorentz invariant because the laws of physics are Lorentz invariant. The laws of physics are the same in all Lorentz frames, so ultimately they have to involve quantities, which have some simple relationship from one Lorentz frame to another. OK now, we'd still like to have a clearer notion, I think, of what this quantity means. It's defined by that equation and principle, but it would be nice if we had some understanding of what it means. And I think the easiest way to describe what it means is to look at special frames, even though the important feature of this quantity is that it has the same numerical value in all frames. So the numerical value is the same in all frames, but some frames make it easier to interpret it. That's what I'm claiming. So, what that frame is depends on the value of s squared, or at least the sine of it. For sab squared greater than 0, which means it's dominated by the spatial terms there, because those are the positive ones. And therefore, the separation is called spacelike. Some books put a hyphen between space and like and some don't. I don't. And there's a theorem that says that if the separation between two events is spacelike, there always exists a Lorentz frame, an inertial frame, in which the two events happen simultaneously. Backwards E is a There Exists symbol. Then there exists an inertial frame in which a and b are simultaneous. In that frame, we could look at what that formula tells us sab squared is. Since they're simultaneous, ta equals tb, and therefore the last term does not contribute. So in that frame, s squared is just xa minus xb squared plus ya minus yb squared plus za minus zb squared. And we know what that is. That's just the euclidean length, the euclidean distance between the two points. So, in this frame, sab squared is just equal to the distance between events squared. Or you take the square root because they're positive numbers. You could say sab is equal to the distance between the two events. So when sab squared is positive, sab is just the distance between the two events in the frame in which they're simultaneous. If sab squared is not positive, it could be negative or 0. Let me go to the negative case first. For sab squared less than 0, for it to be less than 0, it means that this expression is dominated by the time term because that's the negative term. And therefore the separation is called timelike. And again, there's a theorem. The theorem says that if the separation between two events is timelike, there exists a frame in which it happened in the same location. 
If they're at the same location, we could again look at that equation that defines sab squared and ask what form does it take in this special frame where the two events are the same location. That means the first three terms are all 0 because-- same location. It means in that frame, sab squared is negative, and it's just minus c squared times the time separation squared. So in that frame, sab squared is equal to minus c squared tau ab squared, where tau ab is just equal to the time separation. So when sab squared is negative, its meaning is as minus c squared times the square of the time separation in the frame where the two events happened at the same place. Now, this notion of the two events happening at the same place has a particularly simple intuition if the two events that we're talking about happen on the same object, like two flashes of the same strobe if that strobe is moving at a constant velocity. Otherwise, all bets are off. But if that strobe is moving at a constant velocity so that the frame of the strobe is an inertial frame, then the frame of the strobe is, in fact, the frame in which the two events happened at the same place. They both happened at the bulb of the strobe light, which in the frame of the strobe is just one point. So this time interval, which the sab squared measures, is simply the time interval as measured by the object itself, is measured by an observer following the strobe. And if we place the strobe by a person with a wristwatch, this notion of time, which is called proper time, is simply the time measured by the person's wristwatch. A clock that follows the object so that anything that happened to that object happens at the same location. So if events happen to the same object, ta ab is just the time interval measured by that object. And as you give these things names for the spacelike case, sab is often called the proper distance between the events, and ta ab is the proper time interval between the events. OK. One more case to do, which is, if it's not positive or negative, there's only one remaining choice, which is it's got to be 0. If sab squared is 0, then again looking back at the original definition, it means that the spatial piece is equal to minus c squared times the time piece so that they all cancel. If you think about it, that's precisely the statement that these two events are located in just the right situation so the light beam that leaves one will just arrive at the other. Because it says that some of the first three terms, which is the distance squared, is equal to c squared times the time interval squared. And that just says that something travels at the speed of light. It could travel that distance in that time and go from point a to point b or vice versa. Only one or the other, not both. But it's always one or the other. So, for that reason, the interval is called lightlike. And it means that a light pulse can travel from a to b, or I could have interchanged a or b. Everything is squared. It doesn't matter which is which. Now, there's a peculiar thing here. You would think that if a light pulse can travel from a to b, there would still be some relevant measure to how far apart a and b are. However, what we're basically seeing here is that if a and b are lightlike separated, in any given reference frame you could talk about what the time interval is and that will be equal to the space interval up to a factor of c. 
But if we imagine looking at this at different frames, different inertial frames, these two points can get arbitrarily close together or arbitrarily far apart, depending on what frame we look at them in. There is no Lorentz invariant measure of how far apart they look. The Lorentz invariant-- the only Lorentz invariant measure simply tells us that they're lightlike related to each other. And this leads to some very peculiar issues when you try to prove rigorous theorems about relativity. You can't really say whether two lightlike points, two lightlike separated points are close or far. Because there's no real meaning for them to be close or far. OK. Let me just say one more fact about special relativity, and then we'll quit for today and come back on Thursday and then talk about how to extend this into general relativity. OK, there's a question. Yes? AUDIENCE: Just really quick. For the Lorentz invariant to equal to 0, does that mean that the objects should be moving at the speed of light relative to each other? Is it like that? PROFESSOR: OK. The question is, if the separation is lightlike, s squared is 0. Does that mean that these two objects are moving at the speed of light relative to each other or something like that? No, it does not. It only talks about their positions. It doesn't say anything about the motion of these objects. It's only a statement about their x and t-coordinates at some instant. OK, let me still write one more equation on the blackboard to kind of finish the special relativity part of the discussion. In the end, we are interested in the metric. And what makes a metric a little bit different from a distance function is that metrics refer to infinitesimal distances. So we're going to want to know the infinitesimal form of that. And it's obvious, so it's nothing to make a big deal about. But I think it's worth writing on the blackboard. The infinitesimal form of that equation is that ds squared is equal to dx squared plus dy squared plus dz squared minus c squared dt squared where dxdy, dz and dt are the infinitesimal coordinate differences between two events. And it's in that form that we'll be beginning from and taking off into the world of general relativity and the metric of general relativistic spacetimes. So we'll continue with this on Thursday.
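Since the Lorentz invariance of this interval is being asserted rather than derived here, a small numerical illustration may help. It is not a derivation; units are chosen with c = 1, and the event coordinates and boost velocities below are arbitrary numbers picked for the example. Boosting both events along x leaves s squared unchanged, whether the separation is spacelike or timelike.

```python
# Numerical illustration (not a proof) that the interval defined above is
# unchanged by a Lorentz boost along x.  Units with c = 1; the events and
# boost velocities are arbitrary example values.
import numpy as np

def interval_sq(ev_a, ev_b):
    """s_ab^2 = (dx)^2 + (dy)^2 + (dz)^2 - (dt)^2, with c = 1; events are (t, x, y, z)."""
    (ta, xa, ya, za), (tb, xb, yb, zb) = ev_a, ev_b
    return (xa - xb)**2 + (ya - yb)**2 + (za - zb)**2 - (ta - tb)**2

def boost_x(ev, v):
    """Standard Lorentz boost with velocity v along the x-axis (c = 1)."""
    t, x, y, z = ev
    gamma = 1.0 / np.sqrt(1.0 - v**2)
    return (gamma * (t - v * x), gamma * (x - v * t), y, z)

pairs = {
    "spacelike": ((0.0, 0.0, 0.0, 0.0), (1.0, 5.0, 1.0, -2.0)),   # s^2 > 0
    "timelike":  ((0.0, 0.0, 0.0, 0.0), (4.0, 1.0, 0.5,  0.0)),   # s^2 < 0
}
for label, (ev_a, ev_b) in pairs.items():
    for v in (0.3, -0.8, 0.99):
        boosted = interval_sq(boost_x(ev_a, v), boost_x(ev_b, v))
        # The two values agree up to rounding, for every boost velocity.
        print(label, v, interval_sq(ev_a, ev_b), boosted)
```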
PROFESSOR: OK, in that case, let's go on. At the end of last class, we were beginning to talk about the Doppler shift. And we defined our terms. And I guess I'll repeat the definitions on the blackboard here. We are talking initially about the case where the observer is stationary and the source is moving with velocity v. We are talking initially about sound waves, waves which have a fixed speed relative to some medium. And the speed relative to the medium will be called u. v will be the velocity of recession, as shown on the diagram. Delta t sub s, s for source, will be the time interval between wave crests as measured at the source-- so period of wave at source. And delta t sub o will be the period of the wave at the observer. And what we're trying to calculate is the relationship between delta t sub o and delta t sub s. OK, at this point, I'd like to go to the screen and go through the different stages of what happens as this process takes place. We start in frame one with an observer in some location, a source at some different location. Source is moving to the right. Source is emitting the first wave crest in this slide number one. Nothing too interesting so far. Next picture, the source emits the second wave crest. But meanwhile, the source has moved. The time between wave crests as seen by the source is delta t sub s. So the distance that the source will move during that time interval is v times delta t sub s. And we'll call that delta l. And this really is the important slide. I think I have it highlighted here. That picture is what really counts for the whole Doppler shift. It says that the second wave crest has to travel a little further than the first wave crest by this amount delta l. So delta l will be the crucial quantity that will control the answer. Third slide, the waves have traveled. Now the first crest in this third frame has just hit the observer. Next frame, the last crest, the second crest, has hit the observer. And to figure out the Doppler shift from those images, all we have to do is realize that if both objects were stationary, there'd be no time difference between observation and source. Each could occur later by some fixed amount, the amount of time it would take the sound wave to travel. But they will occur later by the same amount if there was no motion. So if there's no motion, delta t sub o would be equal to delta t sub s, if there was no motion. But because there is motion, we said that the wave, the second crest, is going to have to travel further by the amount delta l. So it'll be delayed by the amount of time it takes for the wave to travel delta l. And that's just delta l divided by v-- so plus delta l divided by v. And we know what delta l is. Delta l is just v times delta t sub s. I'm sorry, this is divided by u. u is the speed of the wave. v is the speed of the source. So here we have in the numerator v times delta t sub s, and the denominator is u. This is just what we said delta l is. So this really is our result. It tells us what delta t sub o is in terms of delta t sub s. And we can solve for the ratio. It tells us that delta t sub o over delta t sub s is equal to 1 plus v over u. Solve that equation.
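Collecting the steps just described:

$$\Delta t_o = \Delta t_s + \frac{\Delta l}{u},\qquad \Delta l = v\,\Delta t_s
\quad\Longrightarrow\quad
\frac{\Delta t_o}{\Delta t_s} = 1 + \frac{v}{u}.$$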
Now there's a standard definition that's used to describe redshifts, which is that this ratio, which is also the ratio of wavelengths, which is how I usually think of it, the wavelength at the observer divided by the wavelength at the source-- the wavelength being proportional to the delta t. The wavelengths are just the wave speed times delta t. This is defined to be 1 plus z where z is what's called the redshift. And the astronomers take out the one so that if both objects are stationary, z is equal to 0. That corresponds to no redshift. That means the wavelength is the same at the source and at the observer. So the ratio of the wavelength as it's observed to the wavelength of the source is what's called 1 plus z. So in this case, it follows immediately that the redshift for this case is just v over u. So maybe I'll just write that again in a box and label it. z is equal to v over u corresponds to the nonrelativistic case, or the sound wave case, with the source moving. OK? Everybody on board? Any questions? OK, straightforward enough, I think. OK, now we will go on to do the alternative simple case, where the observer is moving and the source is stationary. So keep the source on the right and the observer on the left. But this time it's the observer that's moving again with speed v. So v is always for these two cases in my notation the relative velocity between the source and the observer. So now we have a new sequence of pictures. So first picture again is fairly trivial. The first wave crest is being emitted by the source. Picture number two, the second wave crest is emitted by the source. Picture number three, the first wave crest arrives at the observer. And picture number four now, the second wave crest arrives at the observer. And for this case, it is this last frame where all the action is. The point is that between the time when the first crest hits the observer and the time when the second crest hits the observer, that is the time between the third and fourth frame, the observer has moved. And it's moved by distance which is v times the time between those images. And the time between these images is just the time that the observer experiences between the receipt of the two waves. And it's therefore what we've been calling delta t sub o, the time interval as measured by the observer. So the distance traveled is just v times delta t sub o. And it's that last frame inside that box where all the action happens for determining the answer for this problem. So we can put that into equations also. This time it's slightly more complicated. It starts basically with the same idea. Delta t sub o is equal to delta t sub s, which is what it would be if there was no motion. But it's a little bit longer because of the extra distance the second pulse travels. And that extra distance is again called delta l. So the time delay is again delta l divided by u, the wave speed. But this time we have a different formula for delta l. This time delta l is v times delta t sub o, instead of last time it was v times delta t sub s. So this time the equation is ever so slightly more complicated, because delta t sub o appears on both sides of the equation. Last time we immediately wrote down an equation for delta t sub o. But nonetheless, this is one equation with one unknown, so of course it's trivial to solve it for delta t sub o. And when we do that, we get delta t sub o. I'll maybe divide by delta t sub s. By just doing a little bit of algebra on that, you discover it's 1 minus u over v inverse.
And then z, the redshift, is delta t sub 0 over delta t sub s minus 1 by definition. AUDIENCE: Is that 1 minus v over u? PROFESSOR: Oh, do I have it wrong here? Yes, it's 1 minus v over u. Thank you very much. Right, coming from the v over u here, obviously, as you noticed. OK, so to write down a final equation for z, it's delta t0 over delta t sub s minus 1. So it's 1 minus v over u inverse minus 1. And then just putting things over our common denominator, it ends up being v over u divided by 1 minus v over u. And that then is our final answer. So this now is the answer for the nonrelativistic case-- again, we haven't done relativity yet-- and the observer moving. OK, now it's worth pointing out that when velocity is small compared to the wave speed, as is often the case if we're going to be talking about light which we'll be talking about in a minute, but could be the case with respect to sound as well. Then these two formulas are almost the same. They're both proportional to v over u, when I'm talking about the case where v over u is a small quantity, where the motion is slow compared to the sound speed or wave speed. And the only difference is that denominator. Here we have a denominator of 1 minus v over u. And here the v over u is the whole show. There's no denominator. And if v over u is small, the denominator is close to 1. So the two answers are going to be almost the same. And one can describe that a little bit more succinctly perhaps, by saying that z with the observer moving minus z with the source moving is equal to v over u quantity squared times 1 minus v over u in the denominator. Just a little bit of algebra to get that result. And what that shows explicitly is that second order in v over u. It's proportional to v over u squared, not to v over u. So if v over u were a part in 1,000, the difference would be a part in a million. So for slow velocities, it doesn't matter whether it's the source that's moving or the observer that's moving. But these answers can, of course, be very different if the velocity v is comparable to u. OK, any questions about that? Yes? AUDIENCE: Does this violate Galilean relativity? PROFESSOR: Does it violate Galilean relativity? Not really. Although, one has to maybe be careful about how one defines Galilean relativity. What makes it legitimate as a classical mechanical calculation is that unshown, but crucially important on the blackboard, is the air that the sound wave is moving through. So in one of these pictures, the air was at rest. The other picture, the air was moving. Or rather, well, I said that wrong. The air was at rest in both pictures. But if you want to make a Galilean transformation to relate one picture to the other, then after you've made the Galilean transformation, the air would be moving and it would not really be the same picture. So it is all consistent with Galilean relativity. And one has to remember that the air is playing a crucial role here. When we say that the observer or the source is at rest, the full sentence really is that it's at rest relative to the medium that's transporting the wave. Any other questions? Yes? AUDIENCE: This is not really a question, just kind of an observation. I just find it interesting that in the case if v is greater than u that this one is always positive, no problem. But if v is greater than u in the other case, then you have a negative number, which, I don't know if that-- I just find that weird. PROFESSOR: Right, let me think. 
If v is greater than u, then the observer moving case becomes negative. And it presumably means the wave never reaches it. If the observer is moving faster than the wave speed, the wave never reaches them. And that's why the answer is peculiar. If the source is moving faster than the wave speed, the wave still reaches the observer. So there is a nontrivial and believable answer in that case. OK. What we're going to move on to now is the relativistic case. And as I think I described in the course handouts, relativity has sort of an odd role in this class. We need a little bit of relativity. But at the same time, there are other courses in relativity. So I don't want this to be a course in relativity. In the old days, when there weren't so many other courses in relativity, we did in fact spend two weeks of this course doing special relativity. But I don't think that's worthwhile anymore. Well, I'll maybe ask for a show of hands. How many of you have had relativity in some other course? That's what I thought, most of you, perhaps not all. Well, maybe I should ask the other question. How many of you have not had relativity? OK, a number. So I do want to make the course completely intelligible to the people in that second group. Special relativity is not a prerequisite for this class. So my goal will be to tell you enough special relativity so that you'll be able to follow what comes next. But I will not be deriving those results. I'll just leave that for other courses for people who want to take them. And if you don't want to take them, that's fine too. But I want to make this course coherent. So what we're going to do is discuss the consequences of special relativity without trying to relate those consequences directly to the underlying ideas of special relativity. I will, however, mention where special relativity comes from. It arose in the mind of Albert Einstein, because he realized that the physics that we knew well, basically Newtonian physics at that time, possessed this property, Galilean relativity, which came up in a question just a minute ago. Galilean relativity says that if you look at any given physical process in a frame that's moving at a uniform velocity relative to the first frame that you use to describe it, it should also be consistent with the laws of physics in the second frame. Incidentally, I-- maybe I'm more ignorant about history than most-- only learned a few years ago what Galileo had to do with this. It did actually play a very important role in the history of Galileo and in the physics that he was debating about. A key issue in the time of Galileo was whether the earth moved around the sun or the sun moved around the earth. And that was something Galileo was intensely involved in. And one of the arguments that said that it must be the sun that moves-- it can't be the earth that moves-- was that if it's the earth that moves, it means we move at very high velocity. The velocity of the earth around the sun is high by ordinary standards. And obviously, we'd feel that motion, people thought. So it was a proof that it must be that the earth is stationary and the sun moves. Because otherwise, we'd feel the effect of this high velocity motion. So it was crucial to Galileo's point of view-- in which it really is the earth that moves-- that you don't detect uniform motion. If you're in uniform motion, the laws of physics are exactly the same as they would be if you were at rest. And that's basically what Galilean relativity is all about.
And it was in fact very clearly enunciated by Galileo in his writings. So that was the case for mechanics. But at the same time, in the 1860s, Maxwell invented Maxwell's equations, or completed them is maybe a more accurate description for what Maxwell really did. Most of those equations already existed. And a prediction of Maxwell's equations is that light travels at some fixed speed, which could be calculated in terms of epsilon naught and mu naught that appeared in those equations, a speed that we call c. Now if light travels at speed c, it would mean that if you got into a spaceship and chased a light beam, it'd say half the speed of light. The implication of the physics that was known at the time would have been that, from the point of view of that spaceship traveling at half the speed of light, the light pulse would only be receding at half the speed of light. You would have half caught up with it. But that would mean that from the frame of this rapidly moving spaceship, the laws of physics must somehow be different. Maxwell's equations must not hold in their standard form. So there was this tension between Maxwell and Newton, if you like. The tension was not a contradiction. It would be perfectly possible for there to be a fixed frame in which Maxwell's equations had their simple form. But Newton's equations could perhaps have the same form in all frames. And that in fact was what people thought at the time. To account for this situation, physicists invented the idea of an ether which was a medium through which light waves travelled, similar to the air in which sound waves traveled. And the frame in which Maxwell's equations had their simple form was the rest frame of the ether. And if you moved relative ether, the equations would be different. And that's what people thought in 1904. And it was a consistent point of view, but it meant that there was this dichotomy between electromagnetism and mechanics. So Einstein thought maybe physics is not so sloppy. Maybe there's a more elegant way which all this plays out. And he realized that if you modified the equations that are used to transform between one frame and another, you could make Maxwell's equations frame invariant. You could make it so that Maxwell's equations are valid in all frames. And if we go back to our example of the spaceship chasing the light beam, with these new transformation equations that Einstein suggested, it would turn out, even though it's very contrary to intuition, that when the spaceship measures the speed of the light pulse, it would still measure that the light pulse was moving away at speed c, even though it had moved at half c trying to catch up with the light pulse. So it's not obvious how that can happen. But it turns out it can happen. We'll be talking a little bit more about how it happens. And that was basically Einstein's proposal. It was a proposal that there is no ether, that the laws of physics both electromagnetism and mechanics are the same in all frames. And in order to do that, he had to say that the equations of transformation between one frame and another are different from what Galileo believed. So these are what we call the Lorentz transformations. We might write them down later in the course, but we're not going to write them down today. But what goes into them are three physical effects, which we will talk about. If we're talking about the time dilation, we only need one of those three effects. 
So I'm going to start by just discussing that for a minute, and then we'll come back at the end of the class, or perhaps next class, depending on how timing works out, to discuss the other two primary effects that are needed to make up the theory of special relativity and explain how it could be that the speed of light could look the same to all observers, even if we're talking about observers that might be moving. OK, so time dilation is the simple statement that if I were to watch a moving clock, the moving clock would appear to me to be running slower. I'll just mention for a moment now that I put the word appear in quotation marks. That means we're going to come back and discuss in detail what is meant by the word appear. But to finish the sentence first, the moving clock would appear to me in my reference frame to always be running slower. And so by a very predictable amount, a famous expression in special relativity, gamma, where gamma is 1 over the square root of 1 minus beta squared, where beta is just an abbreviation for v over c, the velocity of the clock divided by the speed of light. So as long as v over c is small, this is a small effect. Gamma is near 1. And running slowly by a factor of 1 means not running slowly at all. So the fact it was near 1 means it's a very small effect. But moving clocks will always appear to be running slower. That's one of these three effects of special relativity that we'll be discussing in the course of lecture notes one. Now let me come back now to talk about this word "appear," because that's a little bit subtle. Might just add that there was a series-- this is just an aside-- but I guess broadcast last year there was a four-part series of Brian Greene's Fabric of the Cosmos that was broadcast on PBS. And the interesting thing about that, which is relevant here, is that he tried to illustrate time dilation. And he did it by having sort of a parable of a person sitting in a chair and somebody else carrying a clock over his head, walking towards the person sitting in the chair. And the camera showed what the person sitting in the chair would see and showed the clock running slowly. That's wrong. It's not what he would actually see. And that's the crucial issue of the word "appear" here. When we say that the clock appears to run slowly, we're not talking about what an observer would actually see. The complication of literally seeing is that when you see something, what you're doing is you're measuring the light pulses as they arrive at your eyes at a given time. And since light has a finite travel time, it means that you're seeing different things at different times. In particular, if there's a large object coming towards you, say this laser pointer coming towards me, I would be seeing the front of it at an earlier time than I would be seeing the back of it. The other way around, actually. I'd be seeing the back at an earlier time, because the light that leaves here at an earlier time would take longer to reach my eye, reach my eye the same time as light which leaves the front of the object later. So the point is that as this is coming toward me, I'm seeing different pieces of it at different times in terms of the actual existence of this laser pointer. And that makes things complicated. So what you actually see when you take into account special relativity is fairly complicated. You can calculate it, but there's no simple expression for it. 
You really just have to calculate point by point what you'll be seeing for every part of the object at a given time, nothing very simple. So the simple expression, which just says, clocks run slowly by a factor of gamma, and we'll learn later expressions about how things contract and how simultaneity changes, those simple expressions are not based on what any observer would actually see. But rather, they're based on what ends up giving a simpler picture, a picture in which you imagine that what we're discussing is not what an individual sees but rather what a frame of reference sees. So we're talking not about what the observer sees but rather what is seen in the observer's frame of reference. And a frame of reference, I think you could think of it pretty concretely, as kind of a structure of rulers connected together to each other to form a grid of rulers and with clocks located everywhere along this grid. So all the observations are made locally. That is, if you want to measure a time in a given reference frame, you don't use a central clock, waiting for the light pulse to reach that central clock. Rather, the reference frame is filled with clocks, all of which have been synchronized to start with. And if you want to know what time an event happens, you look at the clock that's next to it. And that clock tells you what time that event happened. So that's typically what we draw when we draw coordinate systems and so on. It's really the way we normally think. The point is, though, if you really want to think about what one observer would see, it's more complicated. Then you have to take into account the light travel time. So it's only after you take out the light travel time and calculate how local clocks would compare that you see this time dilation in the simple form, that the clock always runs slower. So in particular, for this example of the person sitting in a chair and the clock coming towards them, that's exactly what we're going to be talking about. That is the Doppler shift. And what we'll find is when the clock comes towards them, he will see a blueshift, not a redshift. He'll see the clock running faster, not slower, the opposite of what was shown in the TV program. But the difference is that what causes it to look like it's going faster is the fact that each pulse travels a shorter distance if the clock is coming towards the observer. And that becomes a bigger effect than the fact that the clock itself, if it were measured relative to clocks that it passes, would be running slowly. Yes? AUDIENCE: So in that case, if the clock, let's say, was moving toward you really fast, could you measure it when it was directly perpendicular to you? And then that would be the special relativity time dilation? PROFESSOR: Well, if it was moving alongside you. If it was coming right at you, it would just hit you. And that was the case, Sonya, more or less. But yes, you're exactly right. If the clock were moving at right angles to the observer so that the observer saw it, what you really want for this to be the pure effect is you want to have the velocity of the clock to be perpendicular to the velocity of the photon that the observer is seeing, as measured in the observer's reference frame. That actually matters. Then you would see the pure time dilation effect. That's exactly right. OK, so I might just add that Brian Greene and a bunch of people here at MIT, actually-- I was involved and so were some others. There are a number of MIT people on this program.
So we ended up having a long conversation with Brian Greene about it by email. And everybody at MIT thought it was just plain wrong. Brian Greene actually took the position that it was very intentional on his part, and he was just trying to illustrate the time dilation effect, and he didn't want to talk about the Doppler shift. And since he didn't want to talk about it, he could ignore the fact that it was there. We all thought that was bad pedagogy. But we never convinced Brian, I should say. OK, now what we want to do is go through these Doppler shift calculations again, this time recognizing that moving clocks run slowly by a factor of gamma. So we're now going to be doing the relativistic case where the wave is sound waves-- excuse me, where the wave is light waves. I'll get this straight. And the velocities might be comparable to the speed of light. So this time dilation effect is large enough so that we want to take it into account. Now in this case, what we are hoping-- and everything would be wrong and inconsistent if we did not find this-- is that the two answers should be the same. It shouldn't matter whether the source is moving or the observer is moving. Earlier it did matter, and we said that was explicable, because we knew that air was involved. And if we made a velocity transformation to go from one picture to the other, from the picture where the source is moving to the picture where the observer was moving, the air would have a different velocity in the two pictures. It would be stationary in one and moving in the other. So we would not expect to get the same answer as we would have gotten assuming the air was stationary. But in this case, if anything has a different velocity when you go from the picture where the source is moving to the picture where the observer is moving, it would have to be the ether that has a different velocity. But the basic axiom of special relativity is that there is no ether, at least there are no physical effects coming from the ether. So you might as well pretend it does not exist. You can't really prove that it does not exist if it has no properties. It could still exist. But the basic axiom of special relativity is that there are no physical effects coming from this ether. So for special relativity, we should get the same answer, whether it's the source that's moving or the observer that's moving. It's really the same situation, just viewed in different reference frames. And special relativity says it cannot matter what reference frame we're doing the calculations in. So it's the same pictures, but this time we want to take into account the fact that moving clocks run slowly by a factor of gamma. So looking at these pictures, we can first start just glancing at them and saying, where's there a clock that's moving? That might be something we need to think about. And maybe I'll ask you-- which of these four frames shows a moving clock? AUDIENCE: All of them? PROFESSOR: Sorry? AUDIENCE: All of them? PROFESSOR: All of them, I guess so. But the time interval measured on the clock is only actually relevant to one of them. AUDIENCE: Two. PROFESSOR: Two, exactly, two. Here we said that the source is moving, and what matters is the time measured on the source's clock between the emission of these two wave crests. Incidentally, we're usually talking about continuous waves like light waves. And then we're talking about the time between successive wave crests.
Well, we can just as well imagine that the source is emitting a series of pulses where each pulse represents a wave crest, and somehow to me that sounds a little simpler to describe, because you don't have to think about the sine wave associated with the signal that the source is actually creating. So in any case, the time between these pulses, as I'll call them, as measured by the source clock, is what we call delta t sub s; it is the time that the source would actually measure. And the source is moving in this picture. So relative to our frame, we want to think of this entire series of pictures as all being consistent in our frame. It's very important, since transformations between frames are a little complicated in special relativity. It's very important when you're doing any problem to pick what frame you're going to use for your description and be sure to stick to it. If anything is initially described in another frame, you have to figure out what it looks like in your frame in order to fit it together with the other events that you're describing in your own particular reference frame. So for this problem, our frame will be the frame of the slide, the frame which is at rest relative to the observer. So we could also call it the observer's frame. And relative to that frame, the source is moving. And therefore, the clock-- a source is emitting a series of pulses. That's a clock. Anything that does anything at regular intervals is a clock. So the source represents a moving clock. And we need to take into account the fact that the source's clock will be running slowly by a factor of gamma. And otherwise, nothing changes. The observer has a clock also, which the observer is going to use to measure the time between crests. But the observer's clock is at rest in our frame. So there's no time dilation associated with the observer's clock, only a time dilation associated with the clock on the source. So again, the important issue is all inside that yellow box. And what I do now is look at the equations and see how the equations are modified. I guess I should start back at the beginning of the blackboards. Maybe I should turn the blackboard lights on when I work on the blackboard. So the time interval as seen by the observer will be equal to-- last time we just had delta t sub s as our first term, which would be what it would be if there was no velocity. And that's still true if there were no velocity. But if the clock is going to be running slowly by a factor of gamma, that would mean that the time interval that we would measure, even if there was no change in path length, which will be the next term, if there was no change in path length, the time that we would measure as the observer would be different from the time interval as measured by the source by a factor of gamma. But one has to figure out whether the gamma goes in the numerator or the denominator. And sometimes it's a little tricky. It helps a lot, I think, to just sort of imagine an example. Any example is clear, it turns out. But when you try to write down the answer in general, you sometimes get it wrong. That's my experience with myself and with other people. So this clock is running slower. And say we're talking about a one-second time interval. If this clock is running slower, it means it takes longer for it to tick off a second. We would see it as maybe running slower by a factor of two. It would mean that it would only tick off a second every two seconds.
So that means that the time interval that we would see-- and that's what we're trying to calculate, the time interval as the observer-- is going to be longer than delta t sub s by a factor of gamma. So the first term changes from what it was before. And what we're doing is we're rewriting this equation. So the first term changes from what it was before by putting in a factor of gamma in front of it. Because the source clock is running slowly. And then the second term is still delta l divided by u. But the equation for delta l changes also, because delta l is the time interval that it takes for the light to travel the extra distance, where the extra distance was because of the time between the clicks. And that is now changed because of the time dilation of the source clock. So the second term also changes by a factor of gamma. So it's gamma times delta t sub s plus v times gamma times delta t sub s divided by u. So the whole answer just changes by a factor of gamma. So it's gamma times 1 plus v over c times delta t sub s. And now if you do a little bit of arithmetic here-- gamma, remember, is 1 over the square root of 1 minus v squared over c squared. And a squared minus b squared-- I'm now thinking of the denominator 1 minus v squared over c squared. Maybe I'll write this on the side. 1 minus v squared over c squared-- it's worth remembering here-- can be written as 1 plus v over c times 1 minus v over c. And we have the square root of that appearing here in the denominator. Gamma is 1 over the square root of this. So this becomes the square root in the denominator, resulting the square root of 1 plus v over c appearing in the answer. And this remains in the denominator. And what we get simplifies to simply 1 plus beta over 1 minus beta inside the square root times delta t sub s. So this is the special relativity answer, the relativistic answer, and this is for the source moving. Now we expect that the answer won't depend on whether the source is moving or not, but certainly the calculations do. So this is what we got from the calculation, which we assumed that it was a source that was moving, and the observer was stationary. It's just corrected from the previous answer by this factor of gamma. Any questions about that? OK, so to summarize on the slide what's changed is that this time interval, this extra distance delta l for the relativistic problem, is not v times delta t sub s as it was. But rather what we said is that it's gamma times v times delta s, because the clock on the source is running slowly by a factor of gamma. And it was this difference that we just used on the calculation on the blackboard to get the new answer. OK now the next calculation, now we're going to do the same calculation again, which we already did for the nonrelativistic case. But this time it will be the observer moving. And we're going to try to do the relativistic case. So this time it's the clock carried by the observer that's running slowly. And remember, this is running slowly relative to us, relative to our frame of reference, where our frame of reference is by definition the frame of reference of the slide that we're looking at. Source is stationary, so delta t sub s is just an honest time as we would measure it. But times as measured by the observer, delta t sub o, are going to be different. So to correct the calculation for relativity, that's the crucial box as it was even in the nonrelativistic case. And we're going to have to change that equation by replacing delta l. 
Instead of v times delta t sub 0 is v times delta t prime. Now delta t prime is not exactly delta t sub 0 or gamma times delta t sub 0. It's a little trickier. Let's see. Hold on. The definition I want to use for delta t prime is that it's the time as measured between the third frame and the fourth frame. But it's the time as we measure it. I still want to describe everything in terms of the way we would measure it. So delta t sub prime is not measured either by the source or the observer. It's measured by us. And it's related to the time as the observer would measure it by a factor of gamma, because relative to us, the observer's clock is running slowly by a factor of gamma. AUDIENCE: So we're the source? PROFESSOR: Well, we're stationary relative to the source, because the source is stationary. But we're the frame of reference of the slide, is the way I like to think about it. But it is the frame of reference, same as the frame of reference as the source. That's right. So delta t sub 0 is related to delta t sub prime by a factor of gamma. And again, one has to think a little bit to make sure one gets the gamma in the right place, the numerator or the denominator. We're saying that the observer clock appears to be running slowly relative to us. And that means that during the amount of time it would take for it to say take off one second, since it's running slowly, should take more than a second relative to us. And that's what this formula would give, if we multiply by gamma. It would say that delta t prime is equal to gamma times delta t sub 0. So we would say that the time that the observer clock ticks off one second we might measure two seconds. That's the direction of the time shift, all given by that formula. And those are all the pictures. Now we just have to write down the equations that go with those pictures. And the hope is that we'll get the same answer as we got last time. I think I'll leave that on the board. Let's see. What do I do? No, I won't leave it on the board. OK, this time the calculation we're mimicking is the calculation up here. This was the calculation we had for the nonrelativistic case where the observer was moving. Now we want to put in the time dilation that would correct that calculation. And the key equation is that the time interval between the receipt of the two wave crests as we would measure it, which is why I call it delta t sub prime, so prime is not the source or the observer. It's us. It's the same frame of reference as the source, but the source isn't located there. So we think of it as the time interval as measured in the frame of reference. That's the frame of the slide, which is our frame. And it is equal to gamma times delta t sub 0, where delta t sub 0 is the time as it would actually be measured on the observer's clock. So, we're keeping their labels s and 0 to mean what would actually be measured on the clocks at the source and at the observer. That's what we're trying to establish a relationship between. So the basic formula that we have up top there would first become delta t sub 0 is equal to-- hold on. Now actually, we would not start by writing down an equation for delta t sub 0. Rather what we want to do is start by writing down the relationship as we would see it in our frame, which is going involve delta t sub prime. We'll worry about delta t sub 0 later. 
So our equation is going to become delta t prime is equal to delta t sub s plus v times delta t sub prime divided by the wave speed, which now we'll call c, since we're doing a relativistic problem. So this is the basic equation for the time delay as we would see it in our frame, where delta t sub prime is the time between the receipt of the first and second crests, the time between frames three and four as we would measure it. And now we can do the same thing that we did over there, first for delta t sub prime. And we discover that delta t sub prime can be written as 1 minus v over c inverse, just doing algebra on this equation, times delta t sub s. So that equation relates delta t prime to delta t sub s. And now we use the equation up top here to see what the observer himself would actually measure. And that becomes just 1 over gamma times delta t sub prime. So delta t sub observed is equal to 1 over gamma times 1 minus v over c inverse times delta t sub source. And now since it's important we get the same answer, I'm going to write in some intermediate algebra here just so we can all really see that it works out. The 1 over gamma, I am going to write as-- remember gamma is 1 over the square root of 1 minus beta squared-- so 1 over gamma is the square root of 1 minus beta squared. And I'm going to write that as the square root of 1 plus beta times 1 minus beta. So this is the factor 1 over gamma. Now we have explicitly here a factor of 1 over 1 minus beta. The inverse makes it 1 over. Did I write something wrong? AUDIENCE: From the second until the third time? PROFESSOR: Second to third, you're there? AUDIENCE: I thought it wasn't-- PROFESSOR: Yeah, this goes to the other side. It is minus. OK, finally, the 1 minus beta, you see occurs to the first power in the denominator. And the 1/2 power to the numerator. So we do indeed get exactly what we wanted. Delta t sub 0 is equal to the square root of 1 plus beta divided by 1 minus beta times delta t sub s. And now we know that this is the relativistic answer for either source or observer moving. And then if we want to write down what z is, this is delta t observer divided by delta t source minus 1. So that becomes just the square root of 1 plus beta over 1 minus beta minus 1. So we found what we expected, what we wanted to find to be consistent with the basic ideas of relativity. It's the same answer no matter which one is moving, because it doesn't matter what frame of reference we do the calculation in. OK, questions about either the calculations or the ideas behind them? OK in that case, let's move on. I do want to come back and talk about the other two kinematic effects of special relativity, which are Lorentz contraction and a change in the notion of simultaneity. But before I get to that, there's one other issue I want to discuss first, mainly because it's something you need to understand to do the problem set that's due tomorrow. And that is the situation that's needed to describe clocks which might be accelerating. Special relativity really only describes inertial reference frames and how things change from one inertial frame to another. So if you know how a clock behaves if it's at rest in one reference frame, special relativity completely dictates without any ambiguity what it would look like at a frame that was moving at a uniform velocity, relative to the original frame, which means it completely dictates how that clock is going to behave if it is moving at a constant uniform velocity. 
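Before going further with accelerating clocks, here is a quick numeric cross-check of the Doppler formulas derived above. This is a minimal sketch of my own, not something from the lecture; the function names and sample numbers are arbitrary choices. It checks that the two nonrelativistic answers differ only at second order in v over u, and that the relativistic answer, which is the same whichever one is moving, reduces to the same thing when beta is small.

```python
# Consistency check of the Doppler-shift formulas derived above.
# Nonrelativistic (sound-like) case, wave speed u, recession speed v:
#   source moving:    z = v/u
#   observer moving:  z = (v/u) / (1 - v/u)
# Relativistic case (wave speed c), either one moving:
#   z = sqrt((1 + beta)/(1 - beta)) - 1, with beta = v/c
from math import sqrt

def z_source_moving(v, u):
    return v / u

def z_observer_moving(v, u):
    return (v / u) / (1.0 - v / u)

def z_relativistic(beta):
    return sqrt((1.0 + beta) / (1.0 - beta)) - 1.0

if __name__ == "__main__":
    u = 343.0  # rough sound speed in air, m/s
    for ratio in (0.01, 0.1, 0.5):
        v = ratio * u
        zs, zo = z_source_moving(v, u), z_observer_moving(v, u)
        print(f"v/u = {ratio:4.2f}: z_source = {zs:.5f}, "
              f"z_observer = {zo:.5f}, difference = {zo - zs:.6f}")
    for beta in (0.01, 0.1, 0.5):
        # For small beta this approaches beta, matching both sound-wave answers.
        print(f"beta = {beta:4.2f}: z_relativistic = {z_relativistic(beta):.5f}")
```

The difference column indeed scales like the square of v over u, as claimed earlier. With that cross-check out of the way, back to the point just made: special relativity by itself dictates how a clock behaves at constant uniform velocity.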
But nonetheless, in the real world, we have very few clocks around that are completely inertial. Every clock that we see around us, from the clock on the wall, which is moving with the earth, to my wrist watch, which moves more, is constantly undergoing accelerations. So we want to be able to talk about clocks which are accelerating. So we need to say a little bit about how we would do that and how we do it if the clocks were moving at relativistic speeds, which also happens with satellites, for example. The GPS system, as you've probably been told, wouldn't actually work unless the calculations were done in such a sophisticated way that they even take into account the effects of general relativity as well as special relativity. So moving clocks and how they behave is a crucially important topic technologically. So what do we say about a clock that's accelerating? There's, I think, a common myth that to describe acceleration you need general relativity, and therefore we have to put off talking about accelerating clocks until we take a course in general relativity. That's actually totally false. What general relativity does is provide a theory of gravity which basically says that gravity and acceleration are intimately linked. And that's where acceleration gets pushed into general relativity. But special relativity alone is enough for us to describe any system that could be described by equations that are consistent with special relativity. Special relativity does not describe gravity. So in any situation where gravity is important, special relativity is at a loss to make real predictions about what should happen. But as long as gravity is absent, as long as we're only dealing with electromagnetic forces that we think we understand, there's nothing that prevents us from using the equations of special relativity to describe what happens. We have to use the dynamical equations of special relativity that talk about how things respond to forces. And whenever there's a force, there's an acceleration. But there really are such equations. We can combine, for example, electromagnetism with relativistic mechanics to describe a system of particles that are interacting electromagnetically, completely consistent with special relativity. And even if those particles are accelerating, we can say everything we want to say about them. So in particular, if there's a physical clock, to the extent that we could describe that clock as made out of particles whose physics we understand, special relativity will still tell us what that clock will do even when that clock accelerates. The answer, however, from that calculation, you might imagine, is very, very complicated. Because the physics of any actual clock, my wrist watch as an example, is pretty damn complicated. And we're not really going to write down the equations that describe my wrist watch to figure out how it's going to behave when it accelerates. So what are we going to do? Let me point out that you already have quite a bit of experience with accelerating clocks, because all of you-- well, many of you-- wear wrist watches like I do that are accelerating all the time. And they tend to work. You basically assume that even though they're accelerating they've been designed well enough so they can withstand the accelerations that your wrist gives them and still read the right time. On the other hand, one could imagine contrary situations.
Probably my wrist watch would survive this, but if you take a mechanical clock, a windup clock, and heave it against the wall and let it smash against the wall and come to a stop, as it smashes against the wall, it will undergo a very large acceleration. And if the acceleration is large enough, we can predict the effect it will have on the clock, even though it's a complicated interaction. If the acceleration is large enough, it'll simply break the clock and it will stop. And that's one possible effect that acceleration can have on a clock. And other effects are similar in nature. If there's any effect that the motion of my hand has on the wrist watch, it would be a mechanical effect that you'd calculate by understanding the mechanics of how the watch works, not by understanding any principles of general relativity. What's at stake underlying this-- you might wonder what the real difference is-- special relativity can make precise predictions about how a clock will behave if it moves at a uniform velocity, even without knowing anything about the details of that clock. Special relativity could make that prediction because there's a symmetry, Lorentz symmetry, which relates those two situations. And that's an exact symmetry of nature. So no matter what the clock is made out of, if it's moving at a uniform velocity, special relativity tells you there's no doubt it would run slowly by a factor of gamma. On the other hand, there's no such principle of any kind, either in special relativity or general relativity, about acceleration. So if you want to know the effect of acceleration on a clock, it really depends in detail on how large the acceleration is of course and the detailed physics of the clock and how much acceleration it takes to affect it and in what way it affects it, precisely. So what's the bottom line? The bottom line is that when we want to talk about an accelerated clock, which we really do all the time, what we always do is simply assume that the clock is built well enough so that the acceleration does not affect its speed. And that really can be said very precisely. The assumption that we're going to be making is that these are ideal clocks, meaning that they're built well. And when we say the acceleration does not affect the speed of the clock, what we're saying is that the clock will run at precisely the same speed as another clock that's, say, moving instantaneously alongside it with the same velocity but with no acceleration. So at any point in the motion of my arm here, my wrist watch will have some specific velocity. The velocity will affect in some tiny way the speed of the clock by this factor gamma, which will be very close to 1 for that case. But we're going to assume, if we call my watch an ideal clock, that at any given time, it will be running at exactly the same speed as a clock which is not accelerating, but which is moving with the same velocity as the wrist watch. Therefore, the factor of gamma will be there, but there'll be no effect of acceleration. The speed of the clock will be determined only by its velocity relative to our reference frame. OK, is that clear enough? And that's what you need to assume about some accelerating clocks that show up on the problem set. That's why I wanted to get it in today. OK, if there are no other questions about that, I'd like to come back and talk a little bit more about special relativity and its consequences.
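One compact way to state the ideal clock assumption just described, as a formula rather than in words (this integral form is a standard formalization and is not written out in the lecture itself): the rate of an ideal clock depends only on its instantaneous speed, so over any trip its accumulated reading is obtained by integrating the instantaneous time dilation factor.

```latex
% Time ticked off by an ideal clock moving with speed v(t) in our frame,
% between coordinate times t_1 and t_2 of that frame:
\Delta t_{\mathrm{clock}}
  \;=\; \int_{t_1}^{t_2} \frac{dt}{\gamma(t)}
  \;=\; \int_{t_1}^{t_2} \sqrt{\,1 - \frac{v^2(t)}{c^2}\,}\; dt .
```

The acceleration never appears in this expression; only the instantaneous velocity does, which is exactly what the ideal clock assumption asserts.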
Sometime later in the course, we'll talk a little bit more about what I would call the dynamical consequences of special relativity, which include well known equations like e equals mc squared, for example. But before one talks about energy and momentum, which are quantities which I will dub dynamical, there are kinematic effects of special relativity, of which this time dilation is one. And by kinematic I really just mean the consequences of special relativity for the measurements of times and distances. And if one limits oneself to discussing consequences for times and distances, kinematic consequences, there really are precisely three and no more consequences of special relativity. And really all of special relativity in some sense is embodied by these three statements that we're going to be talking about, the first of which was time dilation, which we've already seen. I'll just remind you. Time dilation says that any clock which is moving at speed v relative to a given reference frame will appear in quotation marks to an observer using that reference frame to run slower than normal by a factor denoted by the Greek letter gamma. Turn out the board lights in case they're distracting. And again, "appear" refers not to how it would actually look to any particular observer, because any particular observer in a particular location is going to be waiting for light rays to reach that observer. And they'll take different amounts of time depending on where they start. "Appear" refers to measurements made in the reference frame of the observer, where we assume that all the actual measurements are made by on the spot clocks and rulers which measure where the objects actually were at the time these events happened and not what it looks like at some time later when light rays reach some observer. OK, the second consequence, and again all these will involve the word "appear." And I'll always write it in quotation marks to remind you that it's not exactly what a person would see. The second one is another famous effect of special relativity, Lorentz contraction, or sometimes called Lorentz-Fitzgerald contraction. Any rod which is moving at a speed v along its length relative to a given reference frame will appear-- and again appear to an observer using that reference frame-- to be shorter than its normal length by the same factor, gamma. A rod which is moving perpendicular to its length does not undergo that change in apparent length. So these pictures kind of show it all. A rod, that bar is the rod, a rod which is moving at speed v will look contracted, will appear to be contracted, by a factor of gamma. And a rod which is moving perpendicular to its length, a rod which is like this moving this way, has no such effect; it will appear to have its natural length. And this is a very famous consequence of special relativity. It means that moving rocket ships look shorter and shorter the faster they go, and so on. And again, you should remember, it's not what you'd actually see. It's what you'd measure if you had on the spot local observers making these measurements, which you then later compile. OK, that's actually all I want to say about the contraction. Any questions? OK, next and last is an effect that gets talked about less because it's a little bit more intricate to describe. But the other crucially important effect-- these would not be consistent if you didn't have all three. The other crucially important effect is the changing simultaneity, the relativity of simultaneity.
And it takes more words to describe it, so there's more words on the slide than on the other slides. And the pictures are a little bit more complicated too. But the pictures do say it all, really. The point is that if you have a system consisting of two clocks which have been synchronized in their own rest frame-- they're both at rest with respect to each other and they've been synchronized in their own rest frame-- and they're connected by say a rod, which has some length which we'll call l sub 0 in the rest frame of the two clocks. If that whole system moves relative to us with speed v along the length, those clocks, even though they were synchronized in the rest frame, would not look synchronized to us but rather would look like they're out of synchronization. And in particular, it will look like the trailing clock, the one at the back of this combination of clocks, reads a little bit later in the day by an amount which is beta times l sub 0 divided by c. Beta remember is v over c. l sub 0 is the distance between the clocks as measured in the rest frame of the clocks. And c is of course the speed of light. On the other hand, if these two clocks were moving in a direction which is perpendicular to the line that joins them, then there's no change in the synchronization. Now I should mention that this really is crucially important to the consistency of the whole picture. And actually showing that the picture is consistent is more than we're going to do. It's not impossible to do at this level by any means. But it's more than we're going to do in this class, since we're not focusing on special relativity. But when you hear about special relativity and look at these postulates, you might realize that there seems to be a pretty obvious tension between the idea that moving clocks run slowly and that all observers experience the same laws of physics. Because it means that if you and I are moving with respect to each other, I would claim that your clock was running slowly. But at the same time, you would claim that my clock was running slowly. Because from your point of view, you're at some fixed velocity and therefore in an inertial frame. And I'm moving relative to you. So as you would describe it, I would be a moving clock. And my clock would be running slowly. So I think your clock is running slowly. You think my clock is running slowly. That seems like a contradiction. What happens if we hold the clocks next to each other and really just watch how they compare? Which one gets ahead? How can we disagree on that? Well of course, we can't hold the clocks next to each other and also have them moving relative to each other. That's part of how you get out of this conflict. But let's think in a little bit more detail about what we're really saying when I say that your clock is running slowly. Remember, I want to make all my observations not by watching you, because then there's this time delay effect which complicates things. I make all of my observations by having a family of local observers surrounding me all at rest relative to me. And they report back to me. And only when I receive those reports and piece them together do I get the simple picture of what happened where and when, which is the simple picture that is described by these "appear" relationships. So when I say your clock is running slowly, what I mean is that if I have a network of clocks all at rest relative to me, and your clock comes shooting by, I would measure what time it reads as it goes past all my local clocks.
Rather, they would measure what time your clock reads as it goes past each local clock. And then they would all report back to me. So if your clock is running slowly, for example, let's say by a factor of two, it would mean that when your clock passes my clock and my clock reads one second, your clock would only read half a second, because it's running slowly. It hasn't ticked off as much time. When it passes some later clock of my sequence of clocks, where my clock reads two seconds, your clock will read one second, and so on. So in that sense, I would say your clock was running slowly. Now that has to be consistent with you thinking that my clocks are all running slowly as well. So if you agree that my clocks were all synchronized, then you would conclude that my clocks must be running fast. Because when your clock reads a half second, my clock reads one second. When your clock reads one second, my clock reads two seconds. You would say that my clocks were reading fast if we just made that direct comparison. But at the same time, we know that's not the right answer. You should see the same physics as I see. If you and I are moving with respect to each other, you should see my clocks running slowly. So the way out of that is this question of simultaneity. From the point of view of your clocks going past all my clocks, if you just looked at the time on my clocks as you passed them, you would actually think that they were running fast relative to your clock. But you would also, however, not think that those clocks were synchronized with each other. So you don't determine what speed they're running at by looking at two different clocks. If you want to figure out whether my clocks are running fast or slow, you want to look at one of my clocks and see how it changes with time, not comparing different clocks. Because the different clocks would just be out of synchronism with respect to each other as you would see it. And we're not going to go through the details. But if you do look at my clocks consistently using a family of your clocks that are stationary relative to you, just as I thought about a network of clocks when I was trying to measure the speed of your clock, then everything's consistent. You would see all my clocks running slowly. I would see all of your clocks running slowly. And because we disagree on what's simultaneous, there are no contradictions. So simultaneity is absolutely crucial to get out of what would otherwise be a glaring contradiction in the whole system. OK, that's about all I really wanted to say today. But let me just give a preview of things that we'll talk about later in the course concerning relativity. So far I think we've said all we are going to say, unless there are questions. I think we've said all we're going to say about the kinematic consequences of special relativity. And we're not going to be trying to derive them, as I said. If you're interested, by the way, the notes recommend several references, including the lecture notes from 8.286 from earlier years when special relativity was included as a real topic. So certainly if you're interested in learning about this and you haven't already seen it, I'm happy to help you. But otherwise, it will not be part of this course to discuss how these three consequences of special relativity arise from the basic postulates of special relativity. But later we will be saying things that follow further along the line by pursuing the consequences of special relativity for momentum and energy, which will be important to us.
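Before the discussion turns to energy and momentum, the three kinematic effects described above can be collected in one place, in the lecture's notation (beta is v over c, l sub 0 is a rest-frame length, and "appears" has the frame-of-reference meaning explained earlier):

```latex
% 1. Time dilation: a clock moving at speed v appears to run slow by gamma.
\Delta t_{\mathrm{frame}} = \gamma\,\Delta t_{\mathrm{clock}},
\qquad
\gamma = \frac{1}{\sqrt{1-\beta^{2}}},\qquad \beta = \frac{v}{c}

% 2. Lorentz contraction: a rod moving along its length appears shortened;
%    lengths perpendicular to the motion are unchanged.
\ell_{\mathrm{appears}} = \frac{\ell_{0}}{\gamma}

% 3. Relativity of simultaneity: two clocks synchronized in their own rest
%    frame, separated by rest-frame distance l_0 along the direction of
%    motion, appear out of synchronization, the trailing clock reading ahead by
\Delta t_{\mathrm{desync}} = \frac{\beta\,\ell_{0}}{c}
```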
The connection to energy and momentum is the important and simple one that energy and momentum are only interesting to us if they're defined in a way which makes them conserved quantities. That's why energy and momentum are important in physics. Because for a closed system, the total energy and the total momentum do not change. Energy and momentum can be transferred from one part of a system to another. But energy and momentum can be neither created nor destroyed. Now if we took Newton's definitions of energy and momentum, and used relativistic kinematics, what we would find is that if we looked at, say, a collision of two particles, the Newtonian definitions of energy and momentum, if we took those seriously, would tell us what might happen in a collision. Usually there's an angle that's undetermined. But given an angle, it determines everything else. If we used special relativity to then describe what that same collision would look like in a different frame, we would find that these Newtonian definitions of energy and momentum would not be conserved in the other frame, if they were conserved in the first frame. The conservation laws would be dependent on what reference frame you're using. So what Einstein proposed were slightly modified definitions of energy and momentum, which are determined by the criterion that these slightly modified definitions of energy and momentum should, if they are conserved in one frame, be also conserved in any other frame, where the frames are related by the transformations of special relativity. So that's why it was essential, once one changed the kinematics of going from one frame to another, to also change the definitions of energy and momentum so that the conservation laws would hold in all frames, using the new transformation equations to get from frame to frame. And that's why later in the course we will be introducing slightly modified, slightly non-Newtonian definitions of energy and momentum for moving particles. OK, that's all for today. I will see folks next Tuesday.
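For reference, the modified definitions being previewed in the last paragraph are the standard relativistic expressions for a particle of mass m and velocity v; quoting them here is only a preview, and the course's own notation may differ in detail:

```latex
\vec{p} = \gamma\, m\, \vec{v},
\qquad
E = \gamma\, m c^{2} = \sqrt{(m c^{2})^{2} + |\vec{p}\,|^{2} c^{2}},
\qquad
\gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}
```

These reduce to the Newtonian momentum and, apart from the constant rest energy m c squared, to the Newtonian kinetic energy when v is small compared to c, and they are exactly the combinations that remain conserved in every frame related by Lorentz transformations.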
MIT_8286_The_Early_Universe_Fall_2013
23_Inflation.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: OK, in that case, let's get going. Today's lecture will be mainly on the blackboard. I have a few slides I want to show. And what we want to talk about is the inflationary universe model. So I'll start by describing the mechanism of inflation, how it happens. Inflation is based on the physics of scalar fields and gravity. As I think we've said, in present day particle theory, and by present day I mean not yet string theory, all particles are described as fields, quantum excitations of the field. The analogy that most people are at least qualitatively familiar with is the photon, which is a quantum excitation of the electromagnetic field. But in fact, to describe a relativistic theory of interacting particles, the only way we really know for any kind of particle is to introduce a field and describe the particle as a quantized excitation of the field. So when we talk about a scalar field, that's the quantum representation of some kind of a scalar particle. And scalar in this case means spinless, the same in all directions. A scalar field, then, is just a number defined at each point in space. The only scalar field that we've actually seen in nature so far is the Higgs field. And indeed, inflation is very much modeled on the Higgs field. Although the field that drives inflation, which is, by definition, called the inflaton, is probably not the Higgs field of the standard model. Although recently, actually, in the past few years, people have written a number of papers proposing that maybe the Higgs field of the standard model could, in fact, be the field that drives inflation. So we don't know. It's an open question. But in any case, the field that drives inflation is some kind of cousin, at least, of the Higgs field of the standard model and one that has many of the same properties. In particular, the properties of a scalar field are pretty much summarized by its potential energy function: the energy density as a function of the value of the field. And there are two kinds of potential energy functions that I like to talk about. One is the kind that is used in new inflationary models. Potential energy versus field value. And it has a plateau with a peak at someplace, which is usually assumed to be phi equals zero, and a potential energy function which may or may not be symmetric about phi equals zero, but I'll assume it is, just for simplicity. And then there's a second type, which I'd like to talk about mainly for comparison-- the first is really the one that will be interesting. But one could also imagine a potential energy function which really has a local minimum someplace, which is not the global minimum. And I'll draw it with the local minimum at the origin and the global minima, two of them degenerate, elsewhere. And this, again, is a graph of the potential energy versus phi. And the reason I'm drawing this is partly for historical interest. This was what was used in the original inflationary model. It is what appeared in my original paper. It does not work. But we'll want to talk about why it does not work. In both cases we're interested in a state, which can be called a false vacuum, which is a state where the scalar field is just sitting at phi equals zero.
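As a purely illustrative sketch of the second type of potential, here is a small numerical example; the sextic functional form and the coefficients below are my own arbitrary choices, not anything specified in the lecture, picked only so that phi equals zero is a local minimum (the false vacuum) separated by a barrier from deeper true-vacuum minima.

```python
# Illustrative potential with a false vacuum: a local minimum at phi = 0,
# a barrier on either side, and deeper (true-vacuum) minima beyond it.
# The functional form and the numbers are arbitrary, for illustration only.
import numpy as np

def V(phi):
    # Local minimum at phi = 0, barrier near |phi| ~ 0.37,
    # true-vacuum minima near |phi| ~ 1.09 with V below V(0).
    return phi**6 - 2.0 * phi**4 + 0.5 * phi**2

phi = np.linspace(-1.5, 1.5, 3001)
v = V(phi)

print("false-vacuum energy density V(0) =", V(0.0))
print("barrier height ~", round(v[np.abs(phi) < 0.8].max(), 4))
print("true-vacuum value ~", round(v.min(), 4),
      "at |phi| ~", round(abs(phi[np.argmin(v)]), 3))
```

Any potential with this qualitative shape would do; nothing that follows depends on the particular functional form.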
In the case of the second of these potentials, phi equals zero, being a local minimum, is classically completely stable. If one had a scalar field in some region of space just sitting in that minimum, there'd be no place where energy could come from that would drive it out of that minimum over the barrier. So in that case, the field is classically stable. But it's still possible for it to quantum mechanically tunnel through the barrier. And that process has been calculated and understood, originally by Sidney Coleman and collaborators. And an important feature of that tunneling is that it does not happen globally. You might think that there would be some probability that suddenly everywhere in the universe the scalar field would tunnel over the barrier and go down to the other side. The probability of that is zero, as you might realize if you thought about it a bit more. There's just no way that the scalar field way over there is going to know to tunnel at the same time as the scalar field way over here. So the tunneling happens locally, and it really happens in a very small region, which tunnels through the barrier, and the scalar field starts rolling down on the other side in this small region. And then that region grows. The scalar field, as it rolls over the barrier, pulls the scalar field nearby. And the region grows with a speed that rapidly approaches the speed of light. In the new inflationary potential, where we actually have a local maximum here, the situation is classically metastable, in the sense that the smallest possible fluctuation can start the field rolling down the hill. And in particular, quantum fluctuations will, if nothing else, start the field rolling down the hill in some finite amount of time. We're interested in the case where the amount of time it takes for quantum fluctuations to push the field off the hill is relatively long compared to time scales involved in the early universe. And the key time scale involved in the early universe is the Hubble time. And the Hubble time is just driven by the energy density. So one can calculate the Hubble time. And one is interested in the case, for building inflationary models, where the top of the hill is smooth enough, gentle enough, has a small enough second derivative, so that the amount of time it will take for the scalar field to roll off the hill is long compared to the Hubble time of the universe. So in both cases, for short times, the scalar field is just stuck at the origin. And that's what's important, as far as what we want to talk about next. So the characteristic of this state called the false vacuum is that the scalar field is pinned at a high energy state. The scalar field is pinned at a high energy density. And by pinned, I mean that its value just can't change quickly. In general, particle physicists use the word vacuum to mean the state of lowest possible energy density. When we call this a false vacuum, we're really using the word false in the sense of the word temporary. These states are temporary vacuums, in that for some period of time, which is long by early universe standards, the energy density can't get any lower. So it's acting like a vacuum. Now what are the consequences of that? The important consequence is that the pressure has to be large and negative, and in fact equal to the negative of the energy density. And there are two ways we can convince ourselves of that. The first is to remember the cosmological equation that we derived somewhere in the middle of the course for rho dot.
We learned that rho dot is equal to minus 3 a dot over a, where a is the scale factor, times rho plus the pressure over c squared. Now what we're saying here is that as the universe expands, the scalar field is just stuck at this false vacuum value. The energy density is stuck at the energy density associated with that value of the field, the potential energy density of the field itself. And therefore, rho dot will be zero as the universe expands. And if rho dot is zero, we can just read off from this equation what the pressure has to be. Rho dot equals zero implies that the pressure is just equal to minus the energy density times c squared, which is another way of saying it's minus-- Excuse me, the mass density times c squared, which is another way of saying it's minus the energy density. I'm using u for energy density and rho for mass density. And they just differ by a factor of c squared. So this is the straightforward equation method of seeing the answer here. But if you want to explain this to your roommates or somebody who is not taking this class, there's also a simple argument based on a thought experiment, which I think is worth keeping in mind. And we've used this argument before, actually, in similar contexts. We imagine a piston chamber in our thought experiment. And in our piston chamber, we're going to put false vacuum on the inside. And the false vacuum will have an energy density, I'll call it u sub f. And on the outside, we're going to have a zero energy vacuum. Now since 1998, we've known that our vacuum is not a zero energy vacuum. We seem to be seeing a non-zero vacuum energy in our universe. However, even if that's true, the vacuum energy of our universe is incredibly small compared to the false vacuum energy density that we're talking about in terms of the early universe. So you could still very well approximate it as zero and not worry about it. So that's what we'll be doing here. So we'll think of the outside as being either a fictitious vacuum, which by definition has zero energy density, and we can talk about it even if it doesn't exist. Or we could think of it as being the real vacuum in our universe, which has an energy density which is approximately zero on this scale. Now what we want to do is just imagine pulling out that piston. So we have now created an extra region on the interior of the piston chamber. And we're going to be assuming that we've somehow rigged the walls of the chamber so that the false vacuum will be stretched as we pull out the piston. The piston is attached to the false vacuum in some way. So this entire inside region is now false vacuum. And therefore, the volume of the false vacuum region has enlarged. And if we call the extra region here delta v, the volume of that region, we now have a situation where the energy has increased by the energy density of the false vacuum times delta v by changing the volume of the chamber. Now, energy has to be conserved. So this energy has to be equal to the work that was done by whatever force pulled out on this piston. We won't need to specify who was pulling on the piston, but the work done when one pulls out on a piston is just equal to minus p times delta v, the work done by the person pulling on the piston. So in the normal case, the pressure would be positive, the piston would be pushing out on the person holding the piston, and the interior would be doing work on the person pulling out. And that work would be positive if the pressure were positive.
That would be the work done on the person, but this is supposed to be the work done on the gas. And that's minus p times delta v. So if energy is conserved, the work done on the gas has to be equal to the change in energy of the gas. And the change in energy of the gas is that. So conservation of energy implies that delta e equals delta w, or u sub f times delta v equals minus p times delta v, which of course implies that p is equal to minus u sub f, as we said before. So the point is that if the energy inside the piston is going to increase, the person pulling out on the piston had better be doing work, had better be doing positive work. And if the pressure inside were positive, the person pulling out on the piston would be doing negative work. The piston would be pushing on him. So for it to make sense here, with the energy in the piston increasing, the person pulling out has to really be pulling against a suction. He has to do work to pull out. And a suction means a negative pressure, if we have zero pressure outside. The pressure inside has to be negative. So we could reach this conclusion either of two ways, and we get the same conclusion. The pressure is just equal to minus the energy density. Yes? AUDIENCE: I'm a little confused about why the energy's increasing inside. Because why couldn't you just say the energy density decreases with the increased volume? PROFESSOR: OK, the question is why couldn't you just say that the energy density would decrease with the increased volume. That certainly is what will happen if you have a normal gas inside. What makes this particular false vacuum odd is the origin of this energy density, which is the potential energy density of the field. So if we were talking about this situation, for example, which is the clearest cut, the only way the energy density here could go down is if the scalar field goes up over the barrier and then comes down over here. And there's no way to drive it there, except to wait for a quantum fluctuation, which is a very slow process. And similarly, here there's no barrier, so it can just roll down. But that takes a certain amount of time. And we're assuming that all the things we're talking about here are happening on a time that's fast compared to the amount of time it takes for the scalar field to roll. So what makes this peculiar false vacuum special is that it cannot lower its energy density quickly. And that's what the word false vacuum implies. And there are states like that. And then those states necessarily have a negative pressure, or pressure that's equal to good accuracy to minus the energy density. Or I say to good accuracy only because the energy density could change a little bit slowly, at least for the top case. But it's limited how much it can change. OK, now what are the consequences of this cosmologically? Well, we've also learned that we can write the second order Friedman equation, which is the equation that really tells us what the force of gravity is doing. A double dot is equal to minus 4 pi over 3, times G, times rho plus 3 p over c squared. Now for the false vacuum, p is equal to minus rho c squared, that is, minus the energy density. And that means that this term is negative and three times as big as that term. So for the false vacuum, this quantity, which we normally think of as being positive, becomes negative. I should write a factor of a here. And that means that instead of gravity slowing down the expansion of the universe, for a false vacuum the expansion is accelerated.
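For reference, here are the two board equations just described, written out explicitly. This is just a restatement of what was said in words, using rho for mass density, u_f = rho_f c^2 for the false vacuum energy density, and the sign conventions from earlier in the course:

\dot{\rho} = -3\,\frac{\dot{a}}{a}\left(\rho + \frac{p}{c^2}\right), \qquad \dot{\rho} = 0 \;\Longrightarrow\; p = -\rho_f c^2 = -u_f ,

\ddot{a} = -\frac{4\pi}{3}\,G\left(\rho + \frac{3p}{c^2}\right)a \;\Longrightarrow\; \ddot{a} = +\frac{8\pi}{3}\,G\,\rho_f\,a \;>\; 0 ,

so with the false vacuum equation of state the right-hand side changes sign and the expansion accelerates.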
And that's also what we're seeing today with the vacuum energy, which behaves the same way as this false vacuum and produces gravitational repulsion in exactly the same way. So false vacuum implies gravitational repulsion. OK, this basically is the mechanism of inflation. So we're sort of finished with this chapter. Are there any questions about how this gravitational repulsion arises before I go on? Michael. AUDIENCE: So, for the top potential that you've drawn up there, where there's no barrier, you just roll slowly, are we assuming that it takes a long time for it to begin to roll, or that after it's started rolling it also takes a very long time to reach the bottom? PROFESSOR: I guess I'd say both. Begin to roll is not that well defined, because it may have an infinitesimal velocity from the time you start discussing it. And then that infinitesimal velocity gets bigger and bigger. But I think what we're saying is that the whole process, however you divide it up, is going to take a long time compared to the time it takes for the exponential expansion to set in. OK. So now, I'd like to take this physics and just put a scenario around it. And we'll call it the new inflationary scenario, because that's what it is. Maybe now I should mention a little bit more about the history here. When I wrote my original paper, I was assuming a potential of something like this, because it seemed generic, and it created inflation, and I was able to understand that inflation would solve a number of cosmological problems, which are the problems that we've talked about. And we'll come back to talk about how inflation solves them. But one still has to end inflation. In this model, inflation would end only by the tunneling of the scalar field through the barrier, which, as I said, happens in small regions which then grow. Those regions are spherical, so they're called bubbles. And the whole process really is very much the way water boils. When you boil water, it forms very small bubbles initially, and the bubbles grow and then start colliding with each other and making a big frothy mess. And it turns out that that's exactly what would happen in the early universe if you had this model. When I first started thinking about it, I hoped that these bubbles could collide with each other while they're still small and merge into a uniform, hot region of the new phase, a phase where the scalar field is no longer at the false vacuum value but down at the true minimum. But that turned out not to be the case. It turned out that the bubble formation process produced horrible inhomogeneities that there did not seem to be any way to cure. And that then was the downfall of the original inflationary model. But a few years later, Andrei Linde in the Soviet Union, and independently, Albrecht and Steinhardt in the US, proposed what came to be called the new inflationary model, which started with a different assumption about what the underlying potential energy function for the scalar field was. Instead of assuming something like this, which might be called generic in some sense, they instead assumed something that's a little bit more special, a potential energy function with a very flat plateau somewhere, which, well, we normally put in the middle. And this has the advantage that inflation ends not by bubble nucleation through tunneling, but instead by small fluctuations building up and pushing the scalar field down the hill. And what makes it work, basically, is that those small fluctuations have some spatial correlations built into them.
So over some small region, which I will call a coherence region, the fluctuations are essentially uniform. And the other important feature is that once the scalar field starts to roll, it still has some nearly flat hill to roll on. So a significant amount of inflation happens after this homogeneous coherence region forms. So the initial coherence region can be microscopic, but it is then stretched by the inflation that continues as the scalar field rolls down the hill towards the bottom. So that process of stretching the coherence region after it has already formed is what makes this model workable, while this model was not. So that's the basic story of how new inflation succeeded in allowing inflation to end gracefully, which is the phrase that was used. The problems associated with this model came to be called the graceful exit problem. And this is the first solution to the graceful exit problem. There are now other solutions. But they're very similar, actually. So I'll just write here that it's a modification of the original inflationary model to solve this graceful exit problem. Now I should say a little bit about how inflation starts. But I can only say a little bit about it, because the bottom line really is we don't know. We still don't have any real theory of initial conditions for cosmology, whether it's inflationary cosmology or any kind of cosmology. The nice feature of inflation is that it allows a significantly broader set of initial conditions than is required, for example, in the standard cosmological model, where, as we discussed, the needed initial conditions are very precisely specified. I might say a few things, though, about ideas people have had. One idea, which I think sounds very reasonable, is due to Andrei Linde. And it's a vague idea, so it really needs to be made more precise before it could really be considered a theory. But this is just the idea that the universe started out with some kind of chaotic random initial conditions. And then the hope is simply that inflation will start somewhere. That somewhere in the initial chaotic distribution there'll be a place where the scalar field will have the right properties, the right configuration, to initiate inflation. There are also models by Vilenkin, Alex Vilenkin of Tufts, and independently, Andrei Linde, who by the way is at Stanford. They both worked on models where the universe could begin by a quantum tunneling process, starting from absolutely nothing. I wrote here absolutely nothing, and that's even more nothing than just plain nothing. You might think of nothing as just empty space. But from the point of view of general relativity, as you already know enough to understand, empty space is not really nothing. Empty space is really a dynamical system. Empty space can bend and twist and stretch and do all kinds of complicated things. It's really no different, in some basic sense, from a big piece of rubber. So nothingness really is intended to mean a state where there's not only no matter present, but also no space and no time, really nothing. One way to think of it, perhaps, is as a closed universe, in the limit as the size of the closed universe goes to zero so that there's nothing left. None of these theories are precise. We don't really know how to precisely formulate them. In this tunneling from nothing, one is talking about tunneling in a context where the structure of space itself changes during the tunneling process. So it's tunneling in the context of general relativity.
And we don't really have a successful quantum theory of general relativity. So these ideas are very speculative and quite vague. But they do indicate some possibilities for how the universe might have started. An idea closely related to this tunneling from nothing is the Hartle and Hawking proposal-- This is Jim Hartle, of UC Santa Barbara, who's also the author of a general relativity textbook now. And Stephen Hawking, who you must know from Cambridge University. They proposed something called the wave function of the universe. From their point of view, it's self contradictory to talk about the universe having an origin, because before the origin of the universe, space and time were not even defined. And therefore, you could not think of there being a time before the universe was created. And therefore the universe didn't actually get created. It just is, and has some earliest possible time. And that's what this wave function of the universe formalism reflects. But otherwise, it's pretty similar, really, to the idea of tunneling from nothing. The idea is that the universe had some kind of a quantum origin, which determined the initial state of the universe. In any case, for the purpose of inflation, what we really need to assume, and this could be an assumption which follows from any of these theories, we need to assume that the early universe contained at least a patch, and we don't know exactly how big the patch has to be, but greater than or about equal to the inverse Hubble constant times the speed of light, the Hubble length. And this initial patch also has to be expanding, or else it would just collapse. It really has to be expanding faster than a certain threshold. But I won't try to put that all into the one sentence. Oh, I didn't say a patch of what yet. Whoops. It's a patch where the average value of phi is about zero. And by average, I mean averaging over rapid fluctuations, if there are any. And if one has this, no matter where one got it from, inflation will begin. And once inflation begins, it doesn't matter much how it begins. To see what happens next, it's easiest to at least pretend that, to a good approximation, you can treat a small region of this patch as if it were homogeneous and behaving like a Robertson-Walker universe of the type we know about. Then we can write the first order Friedman equation, which is a little bit more informative than the second order one. I'm going to leave out the curvature term. We'll argue later that the curvature term becomes small quickly. But for our first pass, we'll just assume that the universe is described by something as simple as the Friedman Robertson-Walker equation for a flat universe. For rho, we're just going to put rho sub f. We'll assume that our space is just dominated by this false vacuum energy density. And this can easily be solved. It just says that the first derivative divided by the function itself is a constant. Just take the square root of this equation. And that is an equation which just immediately gets solved and gives you an exponential. So you find that for late times, you just get a of t behaving as a constant times an exponential of time, where the exponential expansion rate, which I'm calling chi, is just the square root of this coefficient, the square root of 8 pi G over 3 times rho sub f. So clearly this is the solution to that equation. And for late times, it is the solution that will dominate. Actually it is the only solution, the way I've already simplified this.
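Written out, the equation and solution just described are (restating the board work in symbols):

\left(\frac{\dot{a}}{a}\right)^2 = \frac{8\pi G}{3}\,\rho_f \quad\Longrightarrow\quad \frac{\dot{a}}{a} = \chi \equiv \sqrt{\frac{8\pi G}{3}\,\rho_f} \quad\Longrightarrow\quad a(t) = \mathrm{const} \times e^{\chi t} .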
But if we started with the full system of equations, there'd be other solutions with different initial conditions. But this is always what you would be led to for late times. The exponential expansion would dominate. So that's what inflation basically is. It's a period of exponential expansion. There are a few features of inflation which help us to understand why it is so robust. That is, why no matter how it starts, it leads to the same result. So one feature of inflation I'd like to mention is the cosmological no-hair theorem. Some people call it a theorem and some people call it a conjecture. I think the more precise statement about this theorem is that you can prove it as a theorem perturbatively. That is, if all initial deviations are small, you can really prove it, but people think it's true even beyond perturbation theory. And in that case it's a conjecture. But it's basically the statement that if one has a system with p equals minus rho c squared and rho is greater than zero, if that describes the matter, then essentially any metric that you start with will evolve into this exponentially expanding flat metric. Any system will evolve to locally resemble a flat exponentially expanding space time. And the word locally there is needed to make it true. If, for example, you start with a closed universe, just as a simple example, which has this kind of matter filling it, it will start to grow exponentially. It will always stay a closed universe. It will never become literally flat. But as it gets bigger and bigger, any piece of it will look flatter and flatter. And it will keep getting bigger and bigger exponentially fast forever. So it will rapidly approach a space which looks like an exponentially expanding flat space. Now this exponentially expanding flat space time has a name, which is de Sitter space, named after a Dutch astronomer. It was discovered early on in the history of general relativity. 1917, I think, was the date that de Sitter wrote his paper about de Sitter space. It has some very interesting properties, not all of which de Sitter noticed himself. In spite of the fact that I'm describing it as a flat exponentially expanding space time, that's not the only possible description. It turns out that the same space time, by changing what you call equal time surfaces, can be described as either an open or closed Robertson-Walker universe, completely homogeneous in both cases. So it's very weird. But the easy way to think of it, for most purposes, is as this flat exponentially expanding picture. OK. The next thing I want to point out about de Sitter spaces is that they have what are called event horizons. Now early in the course we talked about horizons and didn't really try to qualify the name. The horizons that we talked about then are technically called particle horizons. Those are horizons that have to do with the past history of the universe and are related to the fact that since the universe has only a finite past history, at least as a cosmological model, there's a finite distance that light could have traveled up until this time. And we have no way of seeing anything that's further away than that maximum distance that light could have traveled. That's the particle horizon. These event horizons are different. They're related, really, to the future of the universe, rather than the past.
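For reference, the flat exponentially expanding description being used here corresponds to the standard flat-slicing form of the de Sitter metric. This is not written out in lecture, but it is the a(t) that goes into the event horizon calculation that follows:

ds^2 = -c^2\,dt^2 + a^2(t)\left(dx^2 + dy^2 + dz^2\right), \qquad a(t) = \mathrm{const}\times e^{\chi t} .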
It's a statement that, because of the fact that these universes are exponentially expanding, if two events that happen at a particular time are separated from each other by more than a certain distance, then the light from one will never reach the future evolution of the other. And one can see that by looking at the total coordinate distance that light could travel between any two times. So I'm going to let delta r of t1, t2 be equal to the coordinate distance that light travels from t1 to t2. And I'm going to assume that a of t is given by exactly this formula. And I'll write out const because I don't want to write it too many times. I could give it a one variable symbol if I wanted to. This delta r of t1, t2 is just the integral of the coordinate velocity of light, the integral from t1 to t2 of c divided by a of t, dt. The coordinate velocity of light is just c divided by a of t. We've seen that formula before. And this can easily be done by putting in what a of t is. And we get c over the product of the constant that appeared in that formula, whatever it is, and chi, the exponential expansion rate, times e to the minus chi t1 minus e to the minus chi t2. And now, the question we want to ask ourselves is, suppose we let this light ray travel for an arbitrarily long amount of time, which means taking t2 to infinity. And the important feature of this expression is that as t2 goes to infinity, the expression approaches a finite value. The second term just disappears. And you're left with the first term. So no matter how long you wait, anything that started out with a coordinate separation larger than that value, that asymptotic value, will just never be reached by the light pulse that you've sent. And that's what this event horizon is. And it's easy to see what it actually amounts to numerically. If you want to know how far away the object has to be now, in physical terms, so that its coordinate distance is larger than the maximum we get here, we know how to do that. The maximum value can be written as just the limit as t2 goes to infinity of delta r of t1, t2. And we have the expression for it right here. It's just the first piece of this answer. And this is the coordinate distance. If we want to know the present physical distance of something which is at that coordinate distance, we would just multiply it by the present scale factor. And present here means t1-- t1 and t2 are the arguments here-- and we just want to multiply by a of t1 to get the physical distance of an object which is at this boundary, the boundary between what we'll be able to receive a light ray from and what we won't. So this is the event horizon distance, the physical distance, and it's just equal to c times chi inverse. When you multiply by a of t1, you cancel the constant in the denominator and you cancel the e to the minus chi t1. And you're just left with c times chi inverse, which is the Hubble length. It's the inverse Hubble constant times the speed of light. So anything that is further away than one Hubble length from us now, if that object emits a light ray, we will never receive it. And that's called the event horizon. Now the reason this is important is that nothing travels faster than light. And that means that in a de Sitter space, everything is limited in how far it can ever get.
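Written out, with a(t) = C e^{chi t} (C standing for the constant that wasn't given a symbol), the calculation just described is:

\Delta r(t_1, t_2) = \int_{t_1}^{t_2} \frac{c\,dt}{a(t)} = \frac{c}{C\chi}\left(e^{-\chi t_1} - e^{-\chi t_2}\right) \;\longrightarrow\; \frac{c}{C\chi}\,e^{-\chi t_1} \quad \text{as } t_2 \to \infty ,

and multiplying by a(t_1) gives the physical event horizon distance, a(t_1)\,\dfrac{c}{C\chi}\,e^{-\chi t_1} = \dfrac{c}{\chi} = c\,H^{-1}, the Hubble length.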
And an important implication of that is that if, in our full space, which may not be entirely de Sitter space, if we have a de Sitter region, but junk outside that, which we don't understand, don't know how to predict, could be anything, we would still know, even without knowing what's outside, that whatever's outside can never penetrate into the de Sitter region by more than one event horizon, by more than one Hubble length. So the interior of the de Sitter region is protected from anything on the outside. And that is a rigorous theorem of general relativity, this protection. And that means that once you have a sizable region of de Sitter space, no matter what's going on outside, it's never going to disappear. It will always be protected by this event horizon. I should give you now a few sample numbers associated with this scenario. And here I have to say that we don't really know very accurately what are the right numbers to give here. So I think the phrase sample numbers was well chosen there. What we don't know is what energy scale inflation actually happened at in the history of the universe. It turns out that the consequences are pretty much identical for most questions-- for all the questions that people have been able so far to investigate observationally-- regardless of what energy scale inflation happened at. Inflation was originally invented in the context of Grand Unified Theories. And I think that's still a very plausible context in which inflation might have happened. And the sample numbers I'll give you will be numbers associated with Grand Unified Theories. And what starts the whole story is the energy scale of Grand Unified Theories, which is about 10 to the 16 GeV, billion electron volts. And this number is arrived at by measuring, at accessible energies with accelerators, the interaction strengths of the three fundamental interactions of the standard model of particle physics. The standard model of particle physics is based on three different gauge groups, SU(3), SU(2) and U(1). Each one of those gauge groups has associated with it an interaction strength. And they can be measured. And that's where we start this discussion. Then once you measure them at accessible energies, which is like 100 GeV, or something like that, then you can theoretically extrapolate to much higher energies. And what is found is that to good accuracy, the three actually meet at one point. And that is the underlying basis, really, of Grand Unified Theories. That's what allows the possibility that all three interactions are really just a manifestation of one underlying interaction, where the one underlying interaction is made to look like three interactions at lower energies through this process called spontaneous symmetry breaking, which was talked about a little bit in the lecture I gave the time before last, I think, and probably in Scott's lecture. Now this meeting of the three lines is decent in the context of what is literally the standard model of particle physics. But one can modify the standard model of particle physics by incorporating supersymmetry, a symmetry between fermions and bosons, and that involves adding a lot of extra particles, because none of the particles that we know of make up a fermion-boson pair. So in a supersymmetric model, for every known particle you introduce a new unknown particle, which would be its supersymmetric partner. In that minimal supersymmetric extension of the standard model, the meeting of the lines works much better. So it's a piece of evidence in favor of supersymmetry.
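To give a sense of where that meeting of the lines comes from, here is a rough sketch of the one-loop extrapolation being described. This is not from the lecture: the coupling values at the Z mass, the one-loop coefficients, and the shortcut of running the supersymmetric coefficients all the way from the Z mass (rather than from a TeV-scale threshold) are standard textbook inputs used here purely for illustration.

import numpy as np

# One-loop running: alpha_i^-1(mu) = alpha_i^-1(M_Z) - (b_i / 2 pi) ln(mu / M_Z)
M_Z = 91.19                      # GeV
alpha_em_inv = 127.9             # 1/alpha_em at M_Z (approximate)
sin2_thetaW = 0.2312             # weak mixing angle at M_Z (approximate)
alpha_s = 0.118                  # strong coupling at M_Z (approximate)

alpha_inv_MZ = np.array([
    (3.0 / 5.0) * (1.0 - sin2_thetaW) * alpha_em_inv,   # U(1), GUT-normalized
    sin2_thetaW * alpha_em_inv,                          # SU(2)
    1.0 / alpha_s,                                       # SU(3)
])

b_SM   = np.array([41.0 / 10.0, -19.0 / 6.0, -7.0])   # Standard Model coefficients
b_MSSM = np.array([33.0 / 5.0, 1.0, -3.0])            # minimal supersymmetric extension

def alpha_inv(mu, b):
    """Inverse couplings at scale mu (GeV), run at one loop from M_Z."""
    return alpha_inv_MZ - b / (2.0 * np.pi) * np.log(mu / M_Z)

for mu in (1e14, 1e16):
    print(mu, "SM:", alpha_inv(mu, b_SM).round(1), "SUSY:", alpha_inv(mu, b_MSSM).round(1))

# With the supersymmetric coefficients the three inverse couplings come out
# nearly equal (around 24-25) near 10^16 GeV; with the plain Standard Model
# coefficients they only roughly converge -- the "decent" versus "much better"
# meeting of the lines described above.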
In any case, where the lines meet to good approximation, in either one of these two discussions, whether it's supersymmetric or not, is at about 10 to the 16 GeV. So that becomes the fundamental mass scale of the grand unified theories. Hold on a second. That's what I'm looking for. Now once one has this mass scale, one can figure out an appropriate mass density. And that's what we're really interested in, what would be an appropriate mass density for a false vacuum in a grand unified theory. And one can develop that, and we really don't know how to do any better, because as I've told you, we don't really know how to calculate vacuum energies anyway. But as a dimensional analysis answer, we can get the answer, because it is really uniquely determined by dimensional analysis up to factors. If one wants to make an energy density out of E gut plus constants of physics, the only way to do that is to take E gut to the fourth power and divide it by h bar cubed c to the fifth. And you can convince yourself at home that that gives you an energy density-- a mass density, excuse me. And you could even evaluate it numerically. And this is about equal to 2.3 times 10 to the 81 grams per centimeter cubed. So it's a fantastically high mass density, 10 to the 81 grams per centimeter cubed. And if one puts this into the formula for chi, the exponential expansion rate, chi turns out to be about 2.8 times 10 to the minus 38 seconds. And c times chi inverse, the Hubble length, turns out to be about 8 times 10 to the minus 28 centimeters. So all these numbers are off scale by human standards. And that's just a feature of the fact that Grand Unified Theories are off scale by human standards. AUDIENCE: [INAUDIBLE] PROFESSOR: Do I have this backwards? No, this is incredibly small. This is 10 to the minus 28. AUDIENCE: So then it's chi [INAUDIBLE] PROFESSOR: I'm sorry. Hold on. Yeah, no this-- AUDIENCE: Chi should be [INAUDIBLE] PROFESSOR: Chi inverse is a time. C times the time is a distance. So I think that's right. AUDIENCE: So is chi inverse 10 to the [INAUDIBLE] PROFESSOR: Yeah, if we're in cgs units, chi inverse by itself would differ by a factor of 10 to the 10. So it would be 10 to the minus 38. Hm. Hold on. This must be chi inverse. AUDIENCE: Oh, OK. PROFESSOR: There is an inconsistency here. You are right. Yes, that's chi inverse. This is a time. And then this just gets multiplied by c. OK, so the way this scenario would work is, we would start with the early universe with some patch of order of magnitude this size. Which, I might point out, is 14 orders of magnitude smaller than the size of a single proton, which would be about 10 to the minus 13 centimeters. So 15 orders of magnitude, maybe. And then we would need enough inflation so that at the end of inflation, the patch should be on the order of maybe one to 10 centimeters or more. It has to be at least about this big, but could be much bigger. There's no problem with it being much bigger. Much bigger would just mean there's more inflation than you minimally needed. There's no problem with having too much inflation. And then it's a matter of checking, a calculation which I'll tell you the answer of. If we want to go from some size at the end of inflation to the present universe-- And that's really what we're interested in, ultimately, getting the present universe.
--there'd be a further coasting expansion from the end of inflation until now, which can just be calculated by using the idea that a times temperature, scale factor times temperature, is a constant. So the increase in the scale factor is proportional to the decrease in the temperature. And the reheat temperature of this model-- Maybe I didn't describe reheating exactly; I'll describe it quickly in words. At the end of inflation, the scalar field is destabilized by these fluctuations and rolls down the hill, then oscillates about the bottom. And when it oscillates about the bottom, we need to take into account the fact that this field interacts with other fields. And it then gives its energy to the other fields, basically the standard model fields ultimately, heating them up, producing the hot soup of particles that we think of as the starting point for the conventional Big Bang Theory. So this reheating process at the end of inflation, as the inflaton field oscillates about its minimum, reproduces the starting point of the conventional Big Bang Theory. And it produces a temperature which is comparable to the temperature that you started at, which is the temperature scale of the theory. So if it's Grand Unified Theory scales, we would reheat to a temperature of order 10 to the 16 GeV. And then, to ask what will be the expansion factor between then and now, it would be 10 to the 16 GeV divided by the Boltzmann constant times 2.7 Kelvin. This is the ratio of the temperature then to the temperature now, both expressed as energies. And then we might want to multiply this by 10 centimeters, if we say that at the end of inflation, the universe was 10 centimeters across-- the size at the end of inflation. This, I worked out at home, is about 450 times 10 to the 9th light years. And we would want something like 40 times 10 to the 9th light years to explain the present universe. So this is about 10 times too big. And that's OK. It means that we could get by with one centimeter, and 10 centimeters is being a bit generous. So inflation would start with this tiny patch. At the end of inflation the patch would have grown to one, or maybe 10, or perhaps more centimeters in length. And then by coasting up to today it becomes something that's larger than the region that we now observe. And that's basically how the scenario works. Any questions about those numbers or the general pattern of what we're talking about here? OK. What I want to talk about next, and this will pretty much be where we'll stop, although there are a few other things we might mention if we have time-- I want to talk about how inflation solves the three cosmological problems of the conventional Big Bang model that we have discussed. And the explanations are actually quite simple. So we can go through them pretty quickly. First we had the horizon slash homogeneity problem. Remember that was caused by, or could be stated as, the problem that the early universe expanded so fast that the different pieces of it did not have time to talk to each other. And, in particular, when the cosmic microwave background was released, points at opposite sides of the universe were separated from each other by maybe about 50 horizon distances, we calculated. And that means there's no way they could have communicated with each other, and therefore no way we could explain how they turned out to have the same temperature at the same time. In this case, what we've done is, we've inserted into the history of the universe an extra phase of evolution, the inflationary phase.
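As a numerical footnote to the sample numbers above, here is a short check -- a sketch, using standard values of the physical constants; the lecture's figures are order-of-magnitude estimates anyway:

import math

# Constants (SI units)
hbar = 1.054571817e-34   # J s
c = 2.99792458e8         # m/s
G = 6.67430e-11          # m^3 kg^-1 s^-2
k_B = 1.380649e-23       # J/K
GeV = 1.602176634e-10    # joules per GeV
ly = 9.4607e15           # meters per light year

E_gut = 1e16 * GeV       # GUT energy scale, ~10^16 GeV, in joules

# Dimensional-analysis false vacuum mass density: rho_f = E_gut^4 / (hbar^3 c^5)
rho_f = E_gut**4 / (hbar**3 * c**5)                 # kg/m^3
print("rho_f =", rho_f * 1e-3, "g/cm^3")            # ~2.3e81 g/cm^3

# Exponential expansion rate chi = sqrt(8 pi G rho_f / 3)
chi = math.sqrt(8.0 * math.pi * G * rho_f / 3.0)
print("1/chi =", 1.0 / chi, "s")                    # ~2.8e-38 s
print("c/chi =", 100.0 * c / chi, "cm")             # ~8e-28 cm, the Hubble length

# Coasting expansion since reheating: a*T = const, so a grows by T_then / T_now
factor = E_gut / (k_B * 2.7)                        # ~4e28
print("size today of a 10 cm patch:",
      0.10 * factor / ly / 1e9, "billion light years")   # ~450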
And if we go back to the beginning of the inflationary phase, we see that that problem is just not there. And if it's not there, it doesn't develop later. At the beginning of the inflationary phase, by assumption, the region that we're starting to talk about was about a horizon length in size. And if we had enough inflation to produce 10 centimeters out of that-- that was 10 times more than we needed-- it would mean that the entire observed universe would be coming from a region that would be only about a tenth of the size of this Hubble length. So that would therefore be well inside the horizon at that time. And that means that if you allow a little bit of leeway with these numbers, by having a little bit of extra inflation, there can be plenty of time for the entire region that's going to become our presently observed region to come to a uniform temperature by the ordinary processes of thermal equilibrium, because they're much less than the horizon distance apart. And then once the uniformity is established, before inflation, when the region that we're talking about is incredibly tiny, inflation takes over and stretches that tiny region so that today, it's large enough to encompass everything that we see. And therefore everything that we see had a causally connected past and had time at the early stages to come to a uniform temperature, which is then preserved as the whole thing expands. So that gives a very simple explanation for the homogeneity problem. Basically, before inflation the region was tiny. Second on our list was the flatness problem. And the basis of that problem was the calculation that we did about how omega minus 1 evolves in time. And we discovered that omega minus 1 always grows in magnitude during conventional evolution of the universe. And that therefore, for omega minus 1 to be small today, it would have to be amazingly small in the early universe, as small as 10 to the minus 18 at one second after the Big Bang, to be consistent with present measurements of omega minus 1. The key element there was this unstable equilibrium and the fact that omega minus 1 always grew with time. And that depended on the Friedman equations. During inflation, the Friedman equations in some schematic sense are the same equations, but the rhos that go into them are different. So the equations basically are different. And if we look at the key equation, the first order Friedman equation, H squared equals 8 pi over 3 G rho, minus k c squared over a squared. This was the equation that we used to derive this flatness problem. We can see immediately, if we now think about it, that during the inflationary process things are completely reversed. Omega is driven towards 1, and exponentially fast. And the way I see that is to just ask what this equation does during inflation. And during inflation, we just replace rho by this constant value rho sub f; the energy density of the false vacuum is fixed. And that means that during inflation, this term is fixed. This term is falling off like 1 over a squared. And a is growing exponentially with time. So that means that this term is decreasing relative to that term by a huge factor, by the square of the expansion factor. So in our sample numbers over there, we were talking about an overall expansion from 10 to the minus 27 centimeters to 10 centimeters. That's expansion by a factor of 10 to the 28. In that case, during inflation, this term decreases by a factor of 10 to the 56 while this term remains constant.
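In symbols, the argument just made is (restating the board equation, with k the curvature constant):

|\Omega - 1| = \frac{|k|\,c^2}{a^2 H^2}\,, \qquad H \simeq \chi = \text{constant during inflation} \;\Longrightarrow\; |\Omega - 1| \propto \frac{1}{a^2} \propto e^{-2\chi t} ,

so the linear expansion factor of 10 to the 28 suppresses omega minus 1 by exactly the factor of 10 to the 56 just quoted.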
And that means that by the end of inflation, this is completely negligible, and this equation without this extra term means you have a flat universe. So during inflation, the universe is driven towards flatness, like one over a squared, which is 1 over the square of this exponential expansion factor, so very, very rapidly. And finally, the third of the problems that we talked about was the monopole problem. We argued-- originally Kibble argued-- that you'd expect approximately one of these monopoles to form per horizon volume, just because the monopoles are basically twists in the scalar fields. And there's no way the scalar fields can organize themselves on distances larger than the horizon distance. So you'd expect something on the order of-- It's a crude argument. --but something on the order of one knot in the scalar field per horizon volume. And that led to far too many magnetic monopoles, fantastically too many. And the formation of one monopole per horizon volume is hard to avoid. I don't know of any way of avoiding it. But what gets us out of the problem here is that we can easily arrange in our inflationary model for the bulk of the inflation to happen after the monopoles form. And that means the monopoles will be diluted by the exponential expansion that will occur after the monopoles form. The rest of the matter is not diluted, because when inflation takes place it's at a constant energy density, so the amount of other stuff that will be produced is not diminished by this extra expansion. But the monopoles, which were produced first, will be thinned out by the expansion. So the basic idea here is that the volume grows by a huge factor. Using our sample numbers, it's linear growth by a factor of 10 to the 28. Volumes go like cubes of linear distances. So 10 to the 28 cubed is 10 to the 84, I think. Probably right. And that means that we can dilute these monopoles by a fantastic factor and make everything work, if we just arrange for the monopoles to be produced before the exponential expansion sets in. OK. Finally, and I think this is probably the last thing we will talk about, another problem that we could have talked about, and we'll talk about the solution of it now even though we never really talked about it as a problem, is the small scale nonuniformities of the universe. And if we look out around the universe, we don't see a uniform mass distribution; we see stars, and stars collected in galaxies, and galaxies collected in clusters, and clusters collected in superclusters, a very complicated array of structure in matter. Those are all nonuniformities. And we think we understand how they evolve from early times, because we also see in the cosmic microwave background radiation small fluctuations, which we can now actually measure to a very high degree of accuracy. Those small fluctuations provide the seeds for the structure in the universe that happens later, because of the fact that the universe is gravitationally unstable. So at very early times, what we're seeing directly in the CMB, these nonuniformities were only at the level of one part in 100,000. But nonetheless, in regions where there was slightly more mass density, that pulls in slightly more matter, producing a still stronger gravitational field, pulling in more matter, and that amplifies the fluctuations. And that effect, we believe, is enough to account for all the structure that we see in the universe as originating from these tiny ripples on the cosmic microwave background.
But that still leaves the question of where do these tiny ripples come from. And in conventional cosmology, one really had no idea where they come from. One knew they had to be there even before they were seen, because we had to account for the structure in the universe and how it evolved. When they finally were seen, they were just right. Everything fit together. In inflationary cosmology, the exponential expansion tends to smooth everything out. And for a while, those of us working on it were very worried that inflation would produce a perfectly smooth universe and we'd have no way of accounting for the small fluctuations that were needed to explain the existence of stars and galaxies. But then it was realized that quantum theory can come to our rescue. Classically, inflation would smooth everything out and produce a uniform mass density everywhere. But quantum mechanically, because quantum mechanical theories are fundamentally probabilistic, the classical prediction of a uniform density turns into a quantum mechanical prediction of an almost uniform density, but with some places being slightly higher than that uniform density, other places being slightly lower. And qualitatively, that's exactly what we see in the cosmic microwave background radiation. And furthermore, we can do it quantitatively. One can actually calculate the effects of these quantum fluctuations. And that's what I want to show you now-- the actual data on that, which is just gorgeous. Shown here is the WMAP seven-year data, where what's being plotted is the amplitude of the fluctuations versus the angular wavelength. One is seeing these as a pattern on the sky, so the wavelength you see as an angle, not as a distance. And long wavelengths are at the left. Short wavelengths are at the right. It's really done as a multipole expansion, if you know what that means. And those numbers are shown on the top. And the data points are shown as these black bars with their appropriate errors. And the red line is the theoretical prediction due to inflation, putting in the amount of dark energy that we need to fit the data that we also measure from the supernovae. And it's absolutely gorgeous. So I have a little Eureka guy to show you how happy I was when I saw this graph. And with the help of Max Tegmark, we've also put on this graph what other theories of cosmology would give. So if we had an open universe, where omega was just 0.2 or 0.3, as many people believed before 1998, we would have gotten this yellow line. If we had inflation without dark energy, making omega equal one out of matter, out of ordinary matter, we would get this greenish line, which also doesn't fit the data at all. And there's also something called cosmic strings that we haven't talked about. It was for a time thought to be a possible source of the fluctuations in the universe. But once this data came in, that became completely ruled out. Now this is not quite the latest data. The latest data come from the Planck satellite. And it was released last March. And I don't have that plotted on the same scale, but this is the latest data which, as you see, fits even more gorgeously than the data from WMAP. The more accurately it gets measured, the better it fits the theoretical expectations. Now I should mention, for truth in advertising, that this data is to some extent fit to the model. It's actually a six parameter fit.
But of those six parameters, I don't have time to talk about them in detail, but four of them are pretty much determined by other features. Two of them are just fit to the data. And one of them is something that changes the shape a little bit. It's the opacity of the space between us and the surface of last scattering. An important parameter that's fit that you should know about, is the height of the curve. The height of the curve can, in principle, be predicted by inflation if you knew the full details of this potential energy function that I've erased for the scalar field. But we don't. We just have some qualitative idea about what it might look like. So the height of the curve is fit to the data. But nonetheless, the location of all these peaks and everything really just come out of the theory and it's just gorgeous. And it works wonderfully. So the bottom line is I think inflation does look like a very good explanation for the very early universe. It's kind of bizarre since it talks about times like 10 to the minus 35 seconds after the Big Bang which seemed like a totally incredible extrapolation from physics that we know. But nonetheless, marvelously, it produces data that agrees fantastically with what astronomers are now measuring. So we'll stop there. I want to thank you all for being in the class. It's really been a fun class to teach. I have very much enjoyed all of your questions and enjoyed getting to know you and hope to continue to see you around. Thank you.
MIT_8286_The_Early_Universe_Fall_2013
1_Inflationary_Cosmology_Is_Our_Universe_Part_of_a_Multiverse_Part_I.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: OK, in that case we can jump into the early universe. So on the opening slide here I have a picture of the Planck satellite, which is a satellite that was launched a few years ago dedicated to measuring the cosmic background radiation. Cosmic background radiation is really our biggest clue for the early history of the universe. The Planck satellite is actually the third satellite to go up completely dedicated to measuring the cosmic background radiation. The first was called COBE and then WMAP and now Planck. Planck is still in orbit. It actually is finished with its data-taking, although it's nowhere near finished with the analysis of that data. So they made one major data release last March, and there are still very important pieces of their data that I haven't looked at yet. And we'll be talking more about what exactly these satellites see. Onward. I want to begin by talking about the standard Big Bang, which will in fact be the main focus of this course. We'll probably spend about 2/3 of the course or so talking about the standard Big Bang and then move on to topics like inflation. That actually is, I think, a very sensible balance, because as you'll see once we get under studying inflation, it's a fairly straightforward thing once you know the basic equations coming out of standard cosmology. So I think spending about 2/3 of the term or so on the conventional cosmology before we get to inflation is very sensible, because that will set up all the principles that we'll be using later to discuss more advanced topics like inflation. The conventional Big Bang model is basically the theory that the universe as we know it began some 13 to 14 billion years ago. And now we even have a pretty precise number to replace this 13 to 14 billion years. This is based on the Planck satellite combined with a few other pieces of information. The number is 13.82 billion years, plus or minus 0.05. So it's pretty well pinned down now, the age of the universe since the Big Bang. I should add, though, I put in the qualifier "the universe as we know it." What that really means is that I want to leave it out, because we don't really know that the universe began with what we call the Big Bang. So we have a very good picture of the Big Bang, and we're very confident that it happened and that we understand what it looked like. But whether or not anything came before it is a much more open question which I think is really completely open. I think we should not act like we know that the universe began with the Big Bang. And in fact later at the very end of the course when we discuss some of the implications of inflation and the multiverse, we'll see that there are strong suggestions that the Big Bang was perhaps not really the beginning of existence, but really just the beginning of our local universe, often called a pocket universe. OK. In any case, what the Big Bang theory tells us is that at least our region of the universe 13.82 billion years ago was an extremely hot, dense, uniform soup of particles, which in the conventional standard Big Bang model filled literally all of space. And now we certainly believe it filled essentially all the space that we have access to uniformly. 
Now I should point out that this is contrary to a popular cartoon image of the Big Bang, which is just plain wrong. The cartoon image of the Big Bang is the image of a small egg of highly dense matter that then exploded and spewed out into empty space. That is not the scientific picture of the Big Bang. And the reason is not because it's illogical. It's hard to know what's logical or illogical in this context. But simply based on what we see, if there was a small egg that exploded into empty space, you would certainly expect that today you would see something different if you were looking toward where the egg was versus looking the opposite direction. But we don't see any effect like that. When we look around the sky, the universal looks completely uniform, on average, in all directions to very high degree of accuracy. I'll talk a little bit more precisely later. So we don't see any sign of an egg having happened anywhere. Rather the Big Bang seems to have happened everywhere uniformly. OK. The Big Bang describes a number of important things, and we'll be talking about this more as the course goes on. It describes how the early universe expanded and cooled-- and will be spending a fair amount of time understanding the details behind those words. The point is that the Big Bang really is a very precise model based on very simple assumptions. You basically assume that the early universe was filled by a hot gas in thermal equilibrium. And that this gas was expanding and being pulled back by gravity. And from those simple ideas you can really calculate-- and we'll learn how to calculate-- how fast the universe would have been expanding at any given instant of time, what the temperature would have been at any given instant of time, what the density of matter would have been at any instant of time. So all the details really can be calculated from some rather simple ideas, and we'll have fun exploring that. The Big Bang also talks about how the light chemical elements formed. And that actually is the main topic of Steve Weinberg's book "The First Three Minutes." Because that was more or less the time period during which the chemical elements formed. It turns out that most of the elements in the universe did not form in the Big Bang, but formed much later in the interior of stars. And those elements are then strewn out into space in supernova explosions and collected into later generation stars, of which our sun would be one. So the stuff that we're made out of was actually not produced in the Big Bang, but rather was produced in the interior of some distant star that exploded long ago. And maybe many stars, whose material collected to form our solar system. Nonetheless, most of the matter in the universe-- as opposed to most of the different kinds of elements-- did form in the Big Bang. Most of the matter in the universe is just hydrogen and helium. About five different isotopes of hydrogen, helium, and lithium were primarily formed in the Big Bang, and because we do have this detailed picture of the Big Bang that we'll be learning about, it's possible to actually calculate the predicted abundances of those different isotopes. And the predictions agree very well with the observations. And this is certainly one of the major confirmations we have that this picture of the Big Bang is correct. We can predict what the abundance of helium-3 should be, and we measure it, and it agrees. It's rather marvelous. 
Finally-- and this subject we're not going to talk about much because it goes beyond the level of complexity that the course is going to have-- but finally the Big Bang does discuss how the matter ultimately congealed to form stars, galaxies, clusters of galaxies. We'll talk about that a little bit, but we won't really follow that very far. That is still in principle a work in progress. People do not understand everything about galaxies. But the general picture of that-- it started out with an almost uniform universe, and then the lumps congealed to form the galaxies and the structures-- we say certainly seems to be a working picture. And one can understand a lot about the universe from this very simple picture. OK what I want to talk about next is what the conventional Big Bang theory does not talk about, where new ideas like inflation come in. First of all, the conventional Big Bang theory does not say anything about what caused the expansion. It really is only a theory of the aftermath of a bang. In the scientific version of the Big Bang, the universe starts with everything already expanding with no explanation of how that expansion started. That's not part of the Big Bang theory. So the scientific version of the Big Bang theory is not really a theory of a bang. It's really the theory of the aftermath of a bang. In addition, and maybe in a similar vein, the conventional Big Bang theory says nothing about where all the matter came from. The theory really assumes that for every particle that we see in the universe today, there was at the very beginning at least some precursor particle, if not the same particle, with no explanation of where all those particles came from. In short, what I like to say is that the Big Bang says nothing about what banged, why it banged, or what happened before it banged. It really has no bang in the Big Bang. It's a bangless theory, despite its name. Inflation, it turns out, fills in possible answers, very plausible answers, for many of these questions. And that's what I'll be talking about mainly for the rest of today. And as I said, in terms of the course, it's where we'll be aiming to get about 2/3 of the way through the semester. What is cosmic inflation? It's basically a minor modification, in terms of the overall scheme of things, of the standard Big Bang theory. And the best word to describe it is a word that I think was invented by Hollywood-- inflation is a prequel to the conventional Big Bang theory. It's a short description of what happened before, immediately before, the Big Bang. So inflation really is an explanation of the bang of the Big Bang in the sense that it does provide a theory of the propulsion that drove the universe into this humongous episode of expansion which we call the Big Bang. And it does it in terms of something that I like to think of as a miracle of physics. When I use the word "miracle" in this context-- referring to a miracle in the scientific sense-- simply something that's so surprising that I think it merits being called a miracle even though it's something that's a part of the laws of physics. There are just a few features of the laws of physics that are actually crucial to inflation-- I'll be talking about two of them-- which I consider miracles simply because-- well, mainly because when I was an undergrad nobody talked about them at all. They were just not part of physics then, even though they really were. They just weren't parts of physics people noticed and talked about. 
So the miracle of physics I'm talking about here is something which actually is known since the time of Einstein's general relativity-- that gravity is not always attractive. Gravity can act repulsively. And Einstein introduced this in 1916, I guess, in the form of what he called his cosmological constant. And the original motivation of modifying the equations of general relativity to allow this was because Einstein thought that the universe was static. And he realized that ordinary gravity would cause the universe to collapse if it were static. It couldn't remain static. So he introduced this cosmological constant term to balance the overall attraction of ordinary gravity to be able to build a static model of the universe. As you'll soon be learning, that's dead wrong. That's not what the universe looks like at all. But the fact that general relativity can support this gravitation repulsion, still being consistent with all the principles that general relativity incorporates, is the important thing which Einstein himself did discover in this context. And inflation takes advantage of this possibility, within the context of general relativity, to let gravity be the repulsive force that drove the universe into the period of expansion that we call the Big Bang. And in fact when one combines general relativity with conventional ideas now in particle physics, there really is a pretty clear indication, I should say-- not quite a prediction-- but a pretty clear indication that at very high energy densities, one expects to find states of matter which literally turn gravity on its head and cause gravity to become repulsive. In terms of the details which we'll be learning about more later, what it takes to produce gravitational repulsion is a negative pressure. According to general relativity, it turns out-- and we'll be talking more about this later-- both pressures and energy densities can produce gravitational fields. Unlike Newtonian physics, where it's only mass densities that produce gravitational fields. And the positive pressure produces an attractive gravitational field, which is what you would probably guess if somebody just asked you to guess. Positive pressures are sort of normal pressures, and attractive gravity is normal gravity. So normal pressures produce normal gravity. But it is possible to have negative pressures, and negative pressures produce repulsive gravity. That's the secret of what makes inflation possible. So inflation proposes that at least a small patch of this repulsive gravity material existed in the early universe. We don't really know exactly when in the history of the universe inflation happened, which is another way of saying we don't know exactly at what energy scale inflation happened. But a very logical, plausible choice-- I don't know if logical is the right word, but plausible is a good word-- very plausible choice for when inflation might have happened, would be when the energy scales of the universe where at the scale of grand unified theories. Grand unified theories-- we'll talk about a little bit later-- are theories which unify the weak, strong, and electromagnetic interactions into a single unified interaction. And that unification occurs at a typical energy of about 10 to the 16 GeV, where GeV, in case you don't know, is about the mass-- or the energy equivalent of the mass-- of a proton. So we're talking about energies that are about 10 to the 16 times the equivalent energy of a proton mass. 
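As an aside on the statement that negative pressure produces repulsive gravity, the condition can be written compactly (a sketch, quoted here without derivation; the course develops this carefully later). In general relativity the acceleration of the expansion obeys

\frac{\ddot{a}}{a} = -\frac{4\pi G}{3}\left(\rho + \frac{3p}{c^2}\right),

so the expansion accelerates only if \rho + 3p/c^2 < 0, that is, only if the pressure is more negative than -\rho c^2/3. Ordinary matter and radiation have non-negative pressure and always decelerate the expansion; the repulsive-gravity material of inflation, like the vacuum energy discussed later in the lecture, has pressure close to -\rho c^2 and therefore accelerates it.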
And at those energies we think that these states that create repulsive gravity are very likely to exist. And if that happened at that scale, the initial patch would only have to be the ridiculously small size of about 10 to the minus 28 centimeters across to be able to lead ultimately to the creation of everything that we see on the vast scale at which we see it. The patch certainly does not have to be the entire universe. And it could in fact be incredibly rare, because one thinks that outside of that patch essentially nothing will happen interesting. So we expect that the universe that we observe today would be entirely the consequence of such a patch. The gravitational repulsion created by this small patch of repulsive gravity material would be then the driving force of the Big Bang, and it would cause the region to undergo exponential expansion. And by exponential expansion, as you probably know, it means that there's a certain doubling time, and if you wait the same amount of time it doubles again. If you wait the same amount of time, it doubles again. And because these doublings build up so dramatically, it doesn't take very much time to build the whole universe. In about 100 doublings, this tiny patch of 10 to the minus 28 centimeters can become large enough not to be the universe, but to be a small marble size region, which will then ultimately become the observed universe as it continues to coast outward after inflation ends. So the doubling time would be incredibly small if this was all happening at the grand unified theory set of numbers-- 10 to the minus 37 seconds, which is pretty fast. The patch would expand exponentially by at least a factor of about 10 to the 28, which as I mentioned takes only about 100 doublings, and could expand to be much more. There's no cut off there. If there's more expansion than we need to produce our universe, it just means that the patch of universe that we're living in is larger than we see. But that's OK. Everything that we see looks uniform as far as we can see, and how much there is beyond that we really just have no way of knowing. So larger amounts of inflation are perfectly consistent with what we see. The amount of time it would take would only be about 10 to the minus 35 seconds, which is just 100 times 10 to the minus 37 if you can do that complicated arithmetic in your head. And the region that's destined to become our presently observed universe at the end of inflation would have been only about the size of a marble-- about one centimeter or so across. Now what ends inflation is the fact that this repulsive gravity material is unstable. So it decays, using the word decay in the same sense that a radioactive substance decays. It doesn't necessarily mean exactly that it rots like an apple decays, but it means that it turns into other kinds of material. And in particular, it turned into material which is no longer gravitationally repulsive. So the gravitational repulsion ends, and in fact the particles produced by this energy that's released at the end of inflation become the hot soup of the conventional Big Bang. And this is where the prequel ends, and the main feature begins-- the conventional Big Bang theory. The role of inflation is just to set up the initial conditions for the conventional Big Bang theory. Now there's a little caveat here. Inflation ends because the material is unstable, but it only ends almost everywhere, not quite everywhere. And this is basically the way exponentials work. 
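To make the doubling numbers quoted above concrete, here is a small arithmetic sketch (Python; the input figures are the grand-unified-scale values assumed in the lecture, not measured quantities):

import math

initial_size_cm = 1e-28      # size of the initial patch (assumed GUT-scale figure)
doubling_time_s = 1e-37      # doubling time of the exponential expansion (assumed)
expansion_factor = 1e28      # minimum expansion factor quoted above

n_doublings = math.log2(expansion_factor)           # about 93, i.e. roughly 100
duration_s = n_doublings * doubling_time_s          # about 1e-35 seconds
final_size_cm = initial_size_cm * expansion_factor  # about 1 cm, marble-sized

print(f"doublings needed:  {n_doublings:.0f}")
print(f"duration:          {duration_s:.1e} s")
print(f"final patch size:  {final_size_cm:.1e} cm")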
And we'll come back to this when we talk about the late time behavior and the idea of eternal inflation. This repulsive gravity material decays, but it decays like a radioactive substance-- which is also an exponential-- with a half life. But no matter how many half lives you wait, there's still a tiny little bit, a tiny fraction that remains. And that turns out to be important for the idea that in many cases inflation never completely ends. We'll come back to that. So I want to talk more about what goes on during this exponential expansion phase. There's a very peculiar feature of this inflation-- this exponential expansion driven by repulsive gravity-- which is that while it's happening, the mass density or energy density of the inflating material-- this repulsive gravity material-- does not decrease. You would think that if something doubled in radius, it would multiply by a factor of eight in volume. You would think the energy density would go down by a factor of eight. And that certainly happens for ordinary particles. It's certainly what would happen if you had a gas, an ordinary gas, that you just allowed to expand by a factor of two in radius-- the density would go down by a factor of eight, since volumes go as the cubes of distances. But this peculiar repulsive gravity material actually expands at a constant density. Now that sounds like it must violate conservation of energy, because it really does mean that the total amount of energy inside this expanding volume is increasing. The energy per volume is remaining constant, and the volume is getting bigger and bigger exponentially. So the claim is that I've not gone crazy, that this actually is consistent with the laws of physics as we know them. And that it is consistent with conservation of energy. Conservation of energy really is a sacred principle of physics. We don't know of anything in nature that violates this principle of conservation of energy, that energy ultimately cannot be either made or destroyed, that the total amount of energy is basically fixed. So it sounds like there's a contradiction here. How do we get out of it? What's the resolution? Well, this requires my second miracle of physics. Energy really is exactly conserved. I'm not going to tell you about any miracles which change that. But the catch here is that energies are not necessarily positive. There are things that have negative energies. And in particular, the gravitational field has a negative energy. This statement, by the way, is true both in Newtonian physics and in general relativity. We'll prove it later. I might just say quickly that if some of you have learned in an E&M course how to talk about and calculate the energy density of an electrostatic field-- probably many of you have, maybe all of you have-- the energy density of an electrostatic field is a constant times the square of the electric field strength. And you can prove that that energy is exactly the energy that you need to put into a system to create an electric field of a given configuration. If you think about Newton's law of gravity and compare it with Coulomb's law, you realize that it really is the same law, except they have a different constant in front of them. They're both inverse square laws in proportion to the two charges, where in the case of gravity it's the masses that play the role of charges. But they have opposite signs. Two positive charges, as we all know, repel each other, while two positive masses attract each other.
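For reference, the two energy densities being compared can be written out explicitly (a sketch in SI units, quoted rather than derived; the gravitational case is the one argued below and derived later in the course):

u_E = +\frac{\epsilon_0}{2}\,|\vec{E}|^2 \qquad \text{(electrostatic field)}

u_g = -\frac{1}{8\pi G}\,|\vec{g}|^2 \qquad \text{(Newtonian gravitational field)}

The two expressions have the same structure, and the only difference is the overall sign, which traces back to the sign difference between Coulomb's law and Newton's law of gravity.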
So in fact the very same argument which allows you to calculate the energy density of a Coulomb field can allow you to calculate the energy density of a Newtonian gravitational field-- still sticking to Newtonian physics-- and this change in sign of the force just carries through. It changes the signs of all the work that's being done, and you get the negative answer that is the correct answer for Newtonian gravity. The energy density of a Newtonian gravitational field is negative. And the same is true in general relativity in a more subtle way. So what that means in terms of conservation of energy is that we can have more and more matter, more and more energy building up in the form of ordinary matter-- which is what happens during inflation-- as long as there's a compensating amount of negative energy that's created in the gravitational field which is filling this ever larger region of space. And that's exactly what happens in inflation. The positive energy of this repulsive gravity material which is growing and growing in volume is precisely canceled by the negative energy of the gravitational field that's filling the region. So the total energy does remain constant, as it must, and there's certainly a good possibility that the total energy is exactly zero. Because everything that we know of is at least consistent with the possibility that these two numbers are exactly equal to each other or something very close. Schematically, the picture is that if one thinks about the total energy of the universe, it consists of a huge positive amount in the form of matter and radiation-- the stuff that we see, the stuff that we normally identify the energy of-- but there's also a huge negative amount of energy in the gravitational field that fills the universe. And as far as we can tell, the sum is at least consistent with being 0. In any case, what happens during inflation is the black bar goes up and the red bar goes down. And they go up and down by equal amounts. So certainly what happens during inflation conserves energy, as anything consistent with the laws of physics that we know of must conserve energy. I just remembered I was planning to turn out these blackboard lights. It probably makes it a little more comfortable to watch the screen. OK. So, onward. I want to talk some about the evidence for inflation. So far I've described what inflation is-- and I'm sort of done describing what inflation is for today. As I said, we'll be coming back and talking about all this during the coming semester. Now let's move on to discuss some of the reasons why we think that our universe may very likely have actually undergone this process called inflation I was just telling you about. So there are three things I want to talk about. The first of which is the large scale uniformity of the universe. Which is related to what I told you at the beginning, that if you look out in different directions in the universe, it really looks the same in all directions. And the object that can be measured with the most precision in terms of how things vary with angle, is the cosmic background radiation-- because we can measure it from all directions, and it's essentially a uniform background. And when that's been done, what's been found is that the radiation is uniform to the incredible accuracy of about one part in 100,000-- which really is a rather spectacular level of uniformity. So it means the universe really is rather incredibly uniform. I might mention one proviso here just to be completely accurate. 
When one actually just goes out and measures the radiation, one finds something-- one finds an asymmetry that's larger than what I just said. One finds an asymmetry of about 1 part in 1,000, with one direction being hotter than the opposite direction. But that 1 part in 1,000 effect we interpret as our motion through the cosmic background radiation, which makes it look hotter in one direction and colder in the opposite direction. And the effect of our motion has a very definite angular pattern. We have no other way of knowing what our velocity is relative to the cosmic background radiation. So we just measure it from this asymmetry, but we're restricted. We can't let it account for everything. Because it has a very different angular form, we only get to determine one velocity. And once we determine that, that determines one asymmetry and you can subtract that out. And then the residual asymmetries, the asymmetries that we cannot account for by saying that the Earth has a certain velocity relative to the cosmic background radiation, those asymmetries are at the level of 1 part in 100,000. And this is 1 part in 100,000 that we attribute to the universe and not to the motion of the earth. OK. So to understand the implications of this incredible degree of uniformity, we need to say a little bit about what we think the history of this cosmic background radiation was. And what our theories tell us-- and we'll be learning about this in detail-- is that in the early period-- Yes. AUDIENCE: I'm sorry. I'm curious. When they released WMAP and stuff, did they already subtract out the relativistic effect? PROFESSOR: Well, the answer is that they analyze things according to angular patterns and how they fit different angular patterns. So in fact, I think they don't even report it with WMAP, but it would be what they would call L equals 1, the dipole term. They analyze the dipole, quadrupole, octupole, et cetera. So it really does not contribute at all to anything except that L equals 1 term, which is one out of 1,800 things that they measure. So basically, I think they don't even bother reporting that one number, and therefore it's subtracted out. OK. Do feel free to ask questions, by the way. I think it's certainly a small enough class that we can do that. OK. So what I was about to say is that this radiation during the early period of the universe, when the universe was a plasma, the radiation was essentially locked to the matter. The photons were moving at the speed of light, but in the plasma there's a very large cross section for the photons to scatter off of the free electrons in a plasma. Which basically means that the photons move with the matter-- because when they're moving on their own, they just move a very short distance and then scatter, and then move in a different direction. So relative to the matter, the photons go nowhere during the first 400,000 years of the history of the universe. But then at about 400,000 years the universe cools enough-- this is all according to our calculations-- the universe cools enough so that the plasma neutralizes. And when the plasma neutralizes, it becomes a neutral gas like the air in this room. And the air in this room seems completely transparent to us, and it turns out that actually does extrapolate to the universe. 
The gas that filled the universe after it neutralized really was transparent, and it means that a typical photon that we see today in the cosmic background radiation really has been traveling on a straight line since about 400,000 years after the Big Bang. Which in turn means that when we look at the cosmic background radiation, we're essentially seeing an image of what the universe looked like at 400,000 years after the Big Bang. Just as the light traveling from my face to your eyes gives you an image of what I look like. So that's what we're seeing-- a picture of the universe at the age of 400,000 years, and it's bland-- uniform to 1 part in 100,000. So the question then is, can we explain how the universe got to be so uniform? And it turns out that if you-- Well, I should say first of all that if you're willing to just assume that the universe started out perfectly uniform to better than one part in 100,000, that's OK. Nobody could stop you from doing that. But if you want to try to explain this uniformity without assuming that it was there from the beginning, then within the context of the conventional Big Bang theory, it's just not possible. And the reason is that within the evolution equations of the conventional Big Bang theory, you can calculate-- and we will calculate later in the course-- that in order to smooth things out in time for it to look smooth in the cosmic background radiation, you have to be able to move around matter and energy at about 100 times the speed of light. Or else you just couldn't do it. And we don't know of anything in physics that happens faster than the speed of light. So within physics as we know it, and within the conventional Big Bang theory, there's no way to explain this uniformity except to just assume that maybe it was there from the very beginning. For reasons that we don't know about. On the other hand, inflation takes care of this very nicely. What inflation does is it adds this spurt of exponential expansion to the history of the universe. And the fact that this exponential expansion was so humongous means that if you look at our picture of the universe before the inflation happened, the universe would have been vastly smaller than in conventional cosmology, which would not have this exponential spurt of expansion. So in the inflationary model there would've been plenty of time for the observed part of the universe to become uniform before inflation started-- when it was incredibly tiny. And it would then become uniform just like the air in the room here tends to spread out and produce a uniform distribution of air, rather than having all the air collected in one corner. Once that uniformity is established in this tiny region, inflation would then take over and stretch this region to become large enough to include everything that we now see, thereby explaining why everything that we see looks so uniform. It's a very simple explanation, and it's only possible with inflation and not within the conventional Big Bang theory. So, the inflationary solution. In inflationary models the universe begins so small that uniformity is easily established. Just like the air in the lecture hall-- same analogy I used-- spreads uniformly to fill the lecture hall. Then inflation stretches the region to become large enough to include everything that we now observe. OK. So that's the first of my three pieces of evidence for inflation. The second one is something called the flatness problem. And the question is, why was the early universe so flat?
And the first question maybe is, what am I talking about when I say the early universe was flat? One misconception I sometimes find people getting is that flat often means two dimensional. That's not what I mean. It's not flat like a two dimensional pancake. It's three dimensional. The flat in this context means Euclidean-- obeying the axioms of Euclidean geometry-- as opposed to the non-Euclidean options that are offered by general relativity. General relativity allows three dimensional space to be curved. And if we only consider uniform curvature, which is-- we don't see any curvature, actually, but-- We know with better accuracy that the universe is uniform than we do that it's flat. So imagine in terms of discussion of cosmology three possible curvatures for the universe, all of which would be taken to be uniform. Three dimensional curved spaces are not easy to visualize, but all three of these are closely analogous to two dimensional curved spaces, which are easy to think about. One is the closed geometry of the surface of a sphere. Now the analogy is that the three dimensional universe would be analogous to the two dimensional surface of a sphere. The analogy changes the number of dimensions. But important things get capped. Like for example on the surface of a sphere, you can easily visualize-- and there's even a picture to show you-- that if you put a triangle on the surface of a sphere, the sum of the three angles at the vertices would be more than 180 degrees. Unlike the Euclidean case, where it's always 180 degrees. Question? AUDIENCE: Yeah. Is the 3D curving happening in a fourth dimension? Just like these 2D models assume another dimension? PROFESSOR: Good question. The question was, is the 3D curvature happening in a fourth dimension just like this 2D curvature is happening in a third dimension? The answer I guess is yes. But I should maybe clarify the "just like" part. The third dimension here from a strictly mathematical point of view allows us to visualize the sphere in an easy way, but the geometry of the sphere from the point of view of people who study differential geometry is a perfectly well defined two dimensional space without any need for the third dimension to be there. The third dimension is really just a crutch for us to visualize it. But that same crutch does work in going from three to four. And in fact when we study the three dimensional curved space of the closed universe, we will in fact do it exactly that way. We'll introduce the same crutch, imagine it in four dimensions, and it will be very closely analogous to the two dimensional picture that you're looking at. OK. So one of the possibilities is a closed geometry where the sum of the three angles of a triangle is always bigger than 180 degrees. Another possibility is something that's usually described as saddle shaped, or a space of negative curvature. And in that case the sum of the three angles-- they get pinched, and the sum of the angles is less than 180 degrees. And only for the flat case is the sum of the three angles exactly 180 degrees, which is the case of Euclidean geometry. The geometries on the surfaces of these objects is non-Euclidean, even though if you think of the three dimensional geometry of the objects embedded in three dimensional space, that's still Euclidean. But the restricted geometry to the two dimensional surfaces are non-Euclidean there and there, but Euclidean there. And that's exactly the way it works in general relativity. 
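A concrete version of the triangle statement, in the two-dimensional case, is the standard spherical-excess formula (quoted here without proof): on a sphere of radius R, a triangle of area A has angles that sum to

\alpha + \beta + \gamma = \pi + \frac{A}{R^2} \quad \text{(in radians)},

so the excess over 180 degrees grows with the area of the triangle and vanishes as R goes to infinity, which is the flat, Euclidean limit.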
There are closed universes with positive curvature and the sum of angles being more than 180 degrees. And there are open universes where the sum of three angles is always less than 180 degrees. And there's the flat case-- which is just on the borderline of those two-- where Euclidean geometry works. And the point is that in our universe, Euclidean geometry does work very well. That's why we all learned it in high school. And in fact we have very good evidence that the early universe was rather extraordinarily close to this flat case of Euclidean geometry. And that's what we're trying to understand and explain. According to general relativity this flatness of the geometry is determined by the mass density. There's a certain value of the mass density called the critical density-- which depends on the expansion rate, by the way, it's not a universal constant of any kind. But for a given expansion rate one can calculate a critical density, and that critical density is the density which makes the universe flat. And cosmologists define a number called omega-- capital omega-- which is just the ratio of the actual mass density to the critical mass density. So omega equals 1 says the actual density is the critical density, which means the universe would be flat. Omega bigger than 1 would be a closed universe, and omega less than 1 would be an open universe. What's peculiar about the evolution of this omega quantity is that omega equals 1 as the universe evolves in conventional cosmology behaves very much like a pencil balancing on its tip. It's an unstable equilibrium point. So in other words, if omega was exactly equal to 1 in the early universe, it would remain exactly equal to 1. Just like a pencil that's perfectly balanced on its tip would not know which way to fall and would in principle stay there forever. At least with classical mechanics. We won't include quantum mechanics for our pencil. Classical pencil that we're using for the analogy. But if the pencil leans just a tiny bit in any direction, it will rapidly start to fall over in that direction. And similarly if omega in the early universe was just slightly greater than 1, it would rapidly rise towards infinity. And this is a closed universe. Infinity really means the universe has reached its maximum size, and it turns around and collapses. And if omega was slightly less than 1, it would rapidly dribble off to 0, and the universe would just become empty as it rapidly expands. So the only way for omega to be close to 1 today-- and as far as we can tell, omega is consistent with 1 today-- the only way that can happen is if omega started out unbelievably close to 1. Unless it's this pencil that's been standing there for 14 billion years and hasn't fallen over yet. Numerically, for omega to be somewhere in the allowed range today, which is very close to 1, it means that omega at one second after the Big Bang had to be equal to 1 to the incredible accuracy of 15 decimal places. Which makes the value of the mass density of the universe at one second after the Big Bang probably the most accurate number that we know in physics. Since we really know it to 15 decimal places. So if it wasn't in that range, it wouldn't be in the [? lab manuals ?] today. We have this amplification effect of the evolution of the universe. So the question is, how did this happen? In conventional Big Bang theory, the initial value of omega could have been anything, logically. 
To be consistent with what we now observe it has to be within this incredibly narrow range, but there's nothing in the theory which causes it to be in that narrow range. So the question is, why did omega start out so incredibly close to 1? Like the earlier problem about homogeneity, if you want to just assume that it started out-- exactly the way it had to be-- at omega equals 1, you could do that. But if you want to have any dynamical explanation for how it got to be that way, there's really nothing in conventional cosmology which does it. But in fact, inflation does. In the inflationary model we've changed the evolution of omega because we've turned gravity into a repulsive force instead of an attractive force, and that changes the way omega evolves. And it turns out that during inflation, omega is not driven away from 1-- as it is during the entire rest of the history of the universe-- but rather during inflation omega is driven rapidly towards 1, exponentially fast, even. So with the amount of inflation that we talked about-- inflation by a factor of 10 to the 28 or so-- that's enough so that the value of omega before inflation is not very much constrained. Omega could have started out before inflation not being 1, but being 2 or 10 or 1/10 or 100 or 1/100. The further away you start omega from 1, the more inflation you need to drive it sufficiently close to 1. But you don't need much more inflation even if omega starts out significantly far away from 1, because of the fact that inflation drives omega to 1 exponentially. Which really means it's a very powerful force driving omega to 1. And that gives us a very simple explanation, therefore, for why omega in the early universe appears to have been extraordinarily close to 1. So I think that's-- Oh, I have a few more things to say. There's actually a prediction that comes out of this, because this tendency of inflation to drive omega to 1 is so strong that you expect that omega really should be 1 today. Or to within measurable accuracy. You could arrange inflationary models where it's, say, 0.2-- which is what people used to think it was-- but in order to do that, you have to arrange for inflation to end at just the right time before it makes it closer. Because every e-fold drives it another factor of 10 closer. So it's a very rapid effect. So if you don't fine tune things very carefully, most any inflationary model will drive omega so close to 1 that today we would see it as 1. That is not how things used to appear. Before 1998 astronomers were pretty sure that omega was only 0.2 or 0.3, while inflation seemed to have a pretty clear prediction that omega should be 1. This I personally found rather uncomfortable, because it meant whenever I had dinner with astronomers, they would always sort of snidely talk about how inflation was a pretty theory, but it couldn't be right because omega was 0.2, and inflation was predicting omega is 1. And it just didn't fit. Things changed a lot in 1998, and now the best number we have-- which comes from the Planck satellite combined with a few other measurements, actually-- is that the observational number for omega is 1.0010, plus or minus 0.0065. So the 0.0065 is the important thing. This is very, very close to 1, but the error bars are bigger than this difference. So it really means that to about a half a percent or maybe 1%, we know today that omega is 1, which is what inflation would predict. That it should essentially be exactly 1 today.
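The statement that inflation drives omega toward 1 exponentially can be summarized in a line or two (a sketch based on the Friedman equation, which the course works through in detail). With \Omega \equiv \rho/\rho_c and the critical density \rho_c \equiv 3H^2/(8\pi G), the Friedman equation can be rearranged to give

\Omega - 1 = \frac{k c^2}{a^2 H^2}.

During ordinary matter-dominated or radiation-dominated expansion, the product a H decreases with time, so any small departure from \Omega = 1 grows; that is the balancing-pencil behavior described above. During inflation, H is nearly constant while a grows like e^{Ht}, so

|\Omega - 1| \propto e^{-2Ht},

and omega is driven toward 1 exponentially fast.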
The new ingredient that made all this possible, that drove-- changed the measurement of omega from 0.2 to 1 is a new ingredient to the energy budget of the universe, the discovery of what we call dark energy. And we'll be learning a lot about dark energy during the course of the term. The real discovery in 1998 was that the universe is not slowing down under the influence of gravity as had been expected until that time, but rather the universe actually is accelerating. And this acceleration has to be attributed to something. The stuff that it's attributed to is called the dark energy. And even though there's considerable ignorance of what exactly this dark energy is, we can still calculate how much of it there's got to be in order to produce the acceleration that's seen. And when all that is put together, you get this number, which is so much nicer for inflation than the previous number. Yes. AUDIENCE: So, was the accelerating universe like the missing factor which they-- gave a wrong assumption which made them think that omega was 0.2 or 0.3? PROFESSOR: Yeah, that's right. It was entirely because they did not know about the acceleration at that time. They in fact were accurately measuring the stuff that they were looking at. And that does only add up to 0.2 or 0.3. And this new ingredient, the dark energy, which we only know about through the acceleration, is what makes the difference. Yes. AUDIENCE: And that data that they were measuring is really just sort of the integrated stuff in the universe that we see through telescopes? Very straight-forward in that way? PROFESSOR: That's right. Including dark matter. So it's not everything that we actually see. There's also-- not going into it here, but we will later in the course-- there is also stuff called dark matter, which is different from dark energy. Even though matter and energy are supposed to be the same, they are different in this context. And dark matter is matter that we infer exists due to its effect on other matter. So by looking, for example, at how fast galaxies rotate, you can figure out how much matter there must be inside those galaxies to allow those orbits to be stable. And that's significantly more matter than we actually see. And that unseen matter is called the dark matter, and that was added into the 0.2 or 0.3. The visible matter is only about 0.04. OK. So, so much for the flatness problem. Next item I want talk about is the small scale nonuniformity of the universe. On the largest scale, the universe is incredibly uniform-- one part in 100,000-- but on smaller scales, the universe today is incredibly lumpy. The earth is a big lump in the mass density distribution of the universe. The earth is in fact about 10 to the 30 times denser than the average matter density in the universe. It's an unbelievably significant lump. And the question is, how did these lumps form? Where did they come from? We are confident that these lumps evolved from the very minor perturbations that we see in the early universe, that we see most clearly through the cosmic background radiation. The early universe we believe was uniform in its mass density to about one part in 100,000. But at the level of one part in 100,000, we actually see in the cosmic background radiation that there are nonuniformities. And things like the Earth form because these small nonuniformities in the mass density are gravitationally unstable. 
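As a quick numerical check of the factor of 10 to the 30 just mentioned (a rough Python estimate; the Hubble constant and matter fraction used here are assumed round values, not numbers taken from the lecture):

import math

G = 6.674e-11                      # Newton's constant, m^3 kg^-1 s^-2
H0 = 70.0 * 1000.0 / 3.086e22      # assumed Hubble constant, km/s/Mpc converted to 1/s
omega_m = 0.3                      # assumed matter fraction, including dark matter

rho_crit = 3.0 * H0**2 / (8.0 * math.pi * G)   # critical density, roughly 9e-27 kg/m^3
rho_matter = omega_m * rho_crit                # mean matter density of the universe
rho_earth = 5.5e3                              # mean density of the Earth, kg/m^3

print(f"Earth density / mean cosmic matter density = {rho_earth / rho_matter:.1e}")

This comes out to about 2 x 10^30, consistent with the statement in the lecture.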
In regions where there's a slight excess of matter, that excess of matter produces a gravitational field pulling more matter into those regions, producing a still stronger gravitational field pulling in more matter. And the system is unstable, and it forms complicated lumps which are galaxies, stars, planets, et cetera. And that's a complicated story. But it all starts from these very faint nonuniformities that existed, we believe, shortly after the Big Bang. And we see these nonuniformities in the cosmic background radiation, and measuring them tells us a lot about the conditions of the universe then, and allows us to build theories of how the universe got to be that way. And that's what these satellites like COBE, WMAP, and Planck are all about-- measuring these nonuniformities to rather extraordinary accuracy. Inflation has an answer to the riddle of where the nonuniformities came from. In the conventional Big Bang theory, there was really just no explanation. People just assumed they were there and put them in by hand, but there was no theory of what might have created them. In the context of inflationary models where all the matter really is being created by the inflation, the nonuniformities are also controlled by that inflation, and where nonuniformities come from is quantum effects. It's a little hard to believe that quantum effects could be important for the large scale structure of the universe. The Andromeda galaxy doesn't look like it's something that should be thought of as a quantum fluctuation. But when one pursues this theory quantitatively, it actually does work very well. The theory is that the ripples that we see in the cosmic background radiation really were purely the consequence of quantum theory-- basically the uncertainty principle of quantum theory, which says that it's just impossible to have something that's completely uniform. It's not consistent with the uncertainty principle. And when one puts in the basic ideas of quantum mechanics, we can actually calculate properties of these ripples. It turns out that we would need to know more about the physics of very high energy-- the physics that was relevant during the period of inflation-- to be able to predict the actual amplitude of these ripples. So we cannot predict the amplitude. In principle, inflation would allow you to if you knew enough about the underlying particle physics, but we don't know that much. So in practice we cannot predict the amplitude. But inflationary models make a very clear prediction for the spectrum of these fluctuations. And by that I mean how the intensity varies with wavelength. So the spectrum really means the same thing as it would mean for sound, except you should think about wavelength rather than frequency because these waves don't really oscillate. But they do have wavelengths just like sound waves have wavelengths, and if you talk about the intensity versus wavelengths, this idea of a spectrum is really the same as what you'll be talking about with sound. And you can measure it. This is not quite the latest measurements, but it's the latest measurements that I have graphed. The red line is the theoretical prediction. The black dots are the measurements. This goes through the seven year WMAP data. We have a little Eureka guy to tell you how happy I am about this curve. And I also have graphs of what other ideas would predict. 
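The word spectrum can be made slightly more quantitative (a hedged summary of the standard result, not something derived in this lecture): simple inflationary models predict primordial fluctuations that are nearly scale-invariant, meaning the dimensionless fluctuation spectrum varies with wavenumber roughly as

\Delta^2(k) \propto k^{\,n_s - 1},

with n_s = 1 corresponding to exact scale invariance and typical models predicting n_s slightly below 1 (the measured value is close to 0.96). It is this nearly flat primordial spectrum, processed by the later evolution of the plasma, that produces the wiggly theoretical curve being compared with the data here; the alternative models mentioned next predict quite different shapes.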
For a while, for example, people took very seriously the idea that the randomness that we see in the universe-- these fluctuations-- may have been caused by the random formation of things called cosmic strings that would form in phase transitions in the early universe. That was certainly a viable idea in its day, but once this curve got measured, the cosmic strings were predicting something that looked like that, which is nothing at all like that curve. And they have since been therefore excluded as being the source of density fluctuations in the universe. And various other models are shown here. I don't think I'll take the time to go into, because there are other things I want to talk about. But anyway, marvelous success. This is actually the latest data. This is the Planck data that was released last March. I don't have it plotted on the same scale, but again you see a theoretical curve based on inflation and dots that show the data with little tiny error bars. But absolutely gorgeous fit. Yes. AUDIENCE: What happened to your theory of inflation after they discovered dark energy? Did it change significantly? PROFESSOR: Did the theory change? AUDIENCE: Or like, in the last graph there was a different curve. PROFESSOR: Well it's plotted on a different scale, but this actually is pretty much the same curve as that curve. Although you can't tell. AUDIENCE: Sorry. PROFESSOR: Oh. Oh, inflation without dark energy, for example. I think it's not so much that the theory of inflation changed between these two curves, but the curve you actually see today is the result of what things looked like immediately after inflation combined with the evolution that took place since then. And it's really the evolution that took place since then that makes a big difference between this inflationary curve and the other inflationary curve. So inflation did not have to change very much at all. It really did not. But of course it looks a lot better after dark energy was discovered because the mass density came out right, and gradually we also got more and more data about these fluctuations which just fit beautifully with what inflation predicts. OK. I want to now launch into the idea of the multiverse. And I guess I'll try to go through this quickly so that we can finish. We're not going try to understand all the details anyway, so I'll talk about fewer of them for the remaining 10 minutes of the class. But I'd like to say a little bit about how inflation leads to the idea of a multiverse. Of course we'll come back to it at the very end of the class, and it's certainly an exciting, I think, aspect of inflation. The repulsive gravity material that drives the inflation is metastable, as we said. So it decays. And that means that if you sit in one place and ask where inflation is happening, and ask what's the probability that it's still happening a little bit later, that probability decreases exponentially-- drops by a factor of two every doubling, every half life. But at the same time, the volume of any region that's inflating is also growing exponentially, growing due to the inflation. And in fact in any reasonable inflationary model the growth rate is vastly faster than the decay rate. So if you look at the volume that's inflating, if you wait for a half life, indeed half of that volume will no longer be inflating-- by the definition of a half life. But the half that remains will be vastly larger than what you started with. That's the catch. 
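The competition between growth and decay can be captured in a toy model (a sketch with made-up illustrative numbers, not parameters of any particular inflationary model). If the inflating region doubles in linear size every t_d while the inflating material decays with half-life t_h, the still-inflating volume scales as 2^(3t/t_d) * 2^(-t/t_h), which keeps growing whenever t_h > t_d / 3:

# Toy model of the volume of the still-inflating region.
t_d = 1.0   # doubling time of the linear size (illustrative units)
t_h = 2.0   # half-life of the inflating material (illustrative; greater than t_d / 3)

def inflating_volume(t, v0=1.0):
    """Relative inflating volume: growth by 2**(3*t/t_d), depletion by 2**(-t/t_h)."""
    return v0 * 2.0 ** (3.0 * t / t_d) * 2.0 ** (-t / t_h)

for t in range(6):
    print(t, f"{inflating_volume(t):.1f}")

# The surviving inflating volume grows without bound, even though any given
# point is ever less likely to still be inflating.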
And that's a very peculiar situation because it doesn't seem to show any end. The volume that's inflating just gets bigger and bigger even while it's decaying, because the expansion is faster than the decay. And that's what leads to this phenomena of eternal inflation. The volume that is inflating increases with time, even though the inflating material is decaying. And that leads to what we call eternal inflation. The word "eternal" is being used slightly loosely because eternal really means forever. This is forever into the future, as far as we can tell, but it's not forever into the past. Inflation would still start at some finite time here, but then once it starts, it goes on forever. And whenever a piece of this inflating region undergoes a transition and becomes normal, that locally looks like a Big Bang. And our Big Bang would be one of these local events, and the universe formed by any one of these local events where the inflating region decays would be called a pocket universe. Pocket just to suggest that there are many of them in the overall scale of this multiverse. They are in some sense small, even though they'd be as big as the universe that we live in. And our universe would be one of these pocket universes. So instead of one universe, inflation produces an infinite number, which is what we call multiverse. I might just say the word multiverse is also used in other contexts and another theories, but inflation, I think, is probably the most plausible way of getting a multiverse, and it's what most cosmologists are talking about when they talk about a multiverse. OK. Now how does dark energy fit in here? It plays a very important role in our understanding. To review, in 1998 several groups-- two groups of astronomers discovered independently that the universe is now accelerating, and our understanding is that the universe has been accelerating for about the last five billion years out of the 14 billion year history of the universe. There was a period where it was decelerating until five billion years ago. An implication of this is that inflation actually is happening today. This acceleration of the universe that we see is very much like inflation, and we really interpret it according to similar kind of physics. We think it has to be caused by some kind of a negative pressure, just as inflation was caused by a negative pressure. And this material that apparently fills space and has negative pressure is what we call dark energy. And dark energy is really just by definition the stuff, whatever it is, that's causing this acceleration. If we ask, what is the dark energy, really? I think everybody agrees there's a definite answer to that, which is something like, who knows? But there's also a most plausible candidate, even though we don't know. The most plausible candidate-- and other candidates are not that different, really, but we'll talk about the most plausible candidate-- which is simply that dark energy is vacuum energy. The energy of nothingness. Now it may be surprising that nothingness can have energy. But I'll talk about that, and it's really not so surprising. I'll come back to that question. But if dark energy is really just the energy of the vacuum, that's completely consistent with everything we know about, what we can measure, the expansion pattern of the universe. Yes. AUDIENCE: Why is it that only in the last five billion years has the universe started accelerating? PROFESSOR: To start accelerating. Right. Right. OK. I'm now in a position to say that. 
I wasn't quite in a position to answer when I made the first statement, but now that I've said there's probably vacuum energy, I can give you an answer. Which is that vacuum energy, because it is just the energy of the vacuum, does not change with time. And that's the same as what I told you about the energy density during inflation. It's just a constant. At the same time, ordinary matter thins out as the universe expands, falling off in density like one over the cube of the scale factor. So what happened was that the universe was dominated by ordinary matter until about five billion years ago, which produced attractive gravity and caused the universe to slow. But then about five billion years ago the universe thinned out enough so that the ordinary matter no longer dominated over the vacuum energy, and then the vacuum energy started causing repulsion. Vacuum energy was there all along causing repulsion, but it was overwhelmed by the attractive gravity of the ordinary matter until about five billion years ago. Does that make sense? AUDIENCE: Yes. PROFESSOR: OK. Good. Any other questions? OK. So. The first thing I want to talk about here is why can nothing weigh something? Why can nothing have energy? And the answer is that actually this is something the physicists are pretty clear on these days. The quantum vacuum, unlike the classical vacuum, is a very complicated state. It's not really empty at all. It really is a complicated jumble of vacuum fluctuations. We think there's even a field called the Higgs field, which you've probably heard of, which has a nonzero value in the vacuum on average. Things like the photon field, the electromagnetic field, are constantly oscillating in the vacuum because of the uncertainty principle, basically, resulting in energy density in those fluctuations. So there's no reason for the vacuum energy to be zero, as far as we can tell. But that doesn't mean that we understand its value. The real problem from the point of view of fundamental physics today is not understanding why the vacuum might have a nonzero energy density. The problem is understanding basically why it's so small. And why is smallness a problem? If you look at quantum field theory-- which we're not going to learn in any detail-- quantum field theory says that, for example, the electromagnetic field is constantly fluctuating. Guaranteed so by the uncertainty principle. And these fluctuations can have all wavelengths. And every wavelength contributes to the energy density of the vacuum fluctuations. And there is no shortest wavelength. There's a longest wavelength in any size box, but there's no shortest wavelength. So in fact, when you try to calculate the energy density of the vacuum in quantum field theory, it diverges on the short wavelength side. It becomes literally infinite as far as the formal calculation is concerned, because all wavelengths contribute, and there is no shortest wavelength. So what does this mean about the real physics? We think it's not necessarily a problem with our understanding of quantum field theory. It really is, we think, just a limitation of the range of validity of those assumptions. Quantum field theory certainly works extraordinarily well when it's tested in laboratory circumstances. So we think that at very short wavelengths, something must happen to cut off this infinity. And a good candidate for what happens at short wavelengths to cut off the infinity is the effects of quantum gravity, which we don't understand.
So one way of estimating the true energy density as predicted by quantum field theory is to cut things off at the Planck scale, the energy scale, length scale, associated with quantum gravity-- which is about 10 to the minus 33 centimeters. And if you do that, you can calculate the number for the energy density of the electromagnetic field of the vacuum and get a finite number. But it's too large. And too large not by a little bit, but by a lot. It's too large by 120 orders of magnitude. So we are way off in terms of understanding why the vacuum energy is what it is since our naive estimates say it should be maybe 120 orders of magnitude larger. Now I should add that there is still a way out here. The energy that we calculate here is one contribution to the total vacuum energy. There are negative contributions as well. If you calculate the fluctuations of the electron field, that turns out to be negative in its contribution to the energy. And it's always possible that these numbers cancel-- or cancel almost exactly-- but we don't know why they should. So basically there's a big question mark theoretically on what we would predict for the energy density of the vacuum. Let's see. What should I do here? I am not going to be able to finish this lecture. I think it's worth finishing, however. So I think what I'll do is I'll maybe go through this slide and then we'll just stop, and we'll pick up again next time. There are just a few more slides to show. But I think it's an interesting story worth finishing. But to come to a good stopping place here-- we still have a minute and a half, I think. I want to say a little bit about the landscape of string theory, which is going to be a possible explanation-- only possible, it's very speculative here-- but one possible explanation which combines inflation, the eternal inflation, and string theory produces a possible explanation for this very small vacuum energy that we observe. It's based on the idea that string theory does not have the unique vacuum. For many years string theorists sought to find the vacuum of string theory with no success. They just couldn't figure out what the vacuum of string theory would look like. And then a little more than 10 years ago many string theorists began to converge around the idea that maybe they could not find a vacuum because there is no unique vacuum to string theory. Instead, what they now claim is that there's a colossal number-- they bandy around numbers like 10 to the 500th power-- a colossal number of metastable states, which are long lived, any one of which could look like a vacuum for a long period of time, even though ultimately it might decay or tunnel into one of the other metastable states. So this is called the landscape of string theory. This huge set of vacuum like states, any one of which could be the vacuum that fills a given pocket universe, for example. When one combines this with the idea of eternal inflation, then one reaches the conclusion that eternal inflation would very likely populate all of these 10 to the 500 or more vacua. That is, different pocket universes would have different kinds of vacuum inside them, which would be determined randomly as the pocket universes nucleate, as they break off from this inflating backbone. And then we would have a multiverse which would consist of many, many-- 10 to the 500 or more-- different kinds of vacua in different pocket universes. Under this assumption, ultimately string theory would be the assumed laws of physics that would govern everything. 
But if you were living in one of these pocket universes, you actually see apparent laws of physics that would look very different from other pocket universe's. The point is that the physics that we actually see and measure is low energy physics compared to the energy scales of the string theory. So what we are seeing are just small fluctuations in the ultimate scheme of things about the vacuum that we live in. So the very particle spectrum that we see, the fact that we see electrons and quarks, quarks that combine to form protons and neutrons-- could be peculiar to our particular pocket. And in other pockets there could be completely different kinds of particles, which are just oscillations about different kinds of vacuum. So even though the laws of physics would in principle be the same everywhere-- the laws of string theory-- in practice the observed laws of physics would be very different from one pocket to another. And in particular since there are different vacua in the different pockets, the vacuum energy density would be different in different pockets. And that variability of the vacuum energy density provides a possible answer to why we see such a small vacuum energy. And we'll talk more about that next time on Tuesday. See you then.
MIT_8286_The_Early_Universe_Fall_2013
20_Supernovae_Ia_and_Vacuum_Energy_Density.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation, or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: OK, in that case, let's begin in our usual way by going through a review of last time's lecture. Last time, we talked really about two calculational problems. One was the calculation of the age of the universe, taking into account a universe model which has matter, radiation, vacuum energy, and curvature. And we got the general formula. And then for the same type of cosmological model, we also calculated how one finds the brightness of a distant source-- the energy flux in terms of the redshift of that source. So first, the age of the universe calculation-- that really just depends on the first-order Friedman equation, which I've rewritten here. We put three terms on the right hand side for the mass density-- a matter term, a radiation term, and a vacuum energy term. And we know-- and this is the important ingredient-- we know how each depend on the scale factor. Non-relativistic matter falls off like 1 over the cube of the scale factor. Radiation falls off like 1 over the fourth power of the scale factor. And vacuum energy is just constant. Next step that we did was just to rewrite this equation, where we put in the explicit time dependence in the form of this x which is the ratio of a of t to the present value of the scale factor-- a of t 0. And furthermore, we expressed the matter density in terms of the present contribution to omega. And rewriting equation in that language, it takes that form. And then, I pulled a fast one. I said we could also write this last term to look pretty much like the others. It just is a constant that falls off like 1 over a squared. So if you define omega sub k 0, which is exactly what you need to make this look like that, and in terms of omega sub k 0, all four terms have the same characteristic. They're just a constant times a power of x. So this is, then, the rewriting of the Friedman equation one more time, just using this new definition of how we're going to treat the curvature of the universe. And simply by looking at this formula and applying it to x equals 1, you can see that that becomes then 1 is equal to the sum of these omegas. And that can be thought of as a clearer, perhaps, definition of what omega sub k 0 is. It's just 1 minus all of the other contributions to omega. So it's how much the actual mass density of the universe differs from the critical density. Then once we have this equation, which is the equation which tells us what x dot is as a function of x, we could just rewrite that by bringing dt to one side of the equation and dx to the other and integrating both sides. And that leads to our final result. The age of the universe is simply given by that integral. And this is a very neat expression for the age of the universe in terms of the present value of the Hubble expansion rate and each contribution to omega in terms of its present value. And you just plug those into this formula. In general, you have to do the integral numerically, because the integral's a little too complicated to have an analytic expression. And that will give you the age of universe for any model that meets this description. So any questions about that calculation before we go on? OK, very good. 
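As a worked example of the age formula just reviewed, here is a short numerical sketch (Python with SciPy; the parameter values are illustrative round numbers rather than the official ones used in the course):

import numpy as np
from scipy.integrate import quad

# Assumed, roughly flat matter-plus-vacuum model:
H0_km_s_Mpc = 70.0
omega_m, omega_r, omega_vac = 0.3, 0.0, 0.7
omega_k = 1.0 - (omega_m + omega_r + omega_vac)     # zero for a flat model

H0 = H0_km_s_Mpc * 1000.0 / 3.086e22                # convert to 1/s

def integrand(x):
    # Equivalent to 1 / (x * sqrt(omega_m/x^3 + omega_r/x^4 + omega_vac + omega_k/x^2))
    return 1.0 / np.sqrt(omega_m / x + omega_r / x**2 + omega_vac * x**2 + omega_k)

# Lower limit slightly above zero to avoid dividing by zero at x = 0.
integral, _ = quad(integrand, 1e-12, 1.0)
age_Gyr = integral / H0 / (3.156e7 * 1e9)           # seconds to billions of years
print(f"t0 = {age_Gyr:.1f} billion years")          # roughly 13.5 for these parameters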
The next calculation we did last time was the calculation of radiation flux versus redshift. And this is exactly what the astronomers were measuring in 1998 when they concluded that the universe was accelerating. They were looking at distant supernova type 1a explosions. They made the assumption that all supernova type 1a explosions have the same intrinsic power output. That's based roughly on observation and guesswork. There's not really a good theory for it, so it's mostly a matter of being consistent with observations. But then they could calculate for any given model in terms of these different omegas what you expect in terms of received radiation as a function of redshift. And they compared their data with the models-- and I'll show you that data shortly-- and found that the models only fit if one had a significant component of vacuum energy causing universe to accelerate. So to do the calculation, we need a metric for the universe. And I considered only the closed universe case. There's also the flat case and the open case, which are similar. And you'll actually be asked to do those on the homework set. So the metric for a closed universe can be written this way, where sine psi is the square root of k times r, to relate it to the other way-- the more standard way-- of writing the Robertson Walker metric. But for our purposes for this calculation, it's easiest to do it this way, because we're going to be interested in radio trajectories of photons. And this metric simplifies the radial direction as much as it can be. It's just d psi squared. Oh, it's the computer that froze. You never know with Windows. I think we're in business now. Back to where we were. We have the metric. Now what we want to do is imagine a light source being received by a detector. And we put the light source in the center of our coordinate system. We put the detector at some distance corresponding to psi equals psi sub D, where psi is our radial coordinate and psi sub D is the radial coordinate of the detector. We imagine a whole sphere with the same radius as the detector, because we expect the source to be spherically symmetric. And therefore, the light emitted by the source will be uniformly spread over that sphere. And that will allow us to calculate how much of it will hit the detector. The fraction hitting the detector will just be the area of the detector divided by the area of the sphere. The area of the detector is whatever it is. We call it capital A. The area of the sphere is 4 pi times the radius of the sphere. And the radius of the sphere in physical coordinates is the scale factor squared times the sine squared of psi sub D, coming from the metric. It's the radius that appears in the angular part that counts, because it's the angles that we're integrating over to get the area of the sphere. So the radius is just a tilde squared times sine squared is the radius squared. Then we also need to remember something we've said a number of times previously in this class, which is that when the photons travel from the source to the detector, their intensity is suppressed by two powers of 1 plus z, two powers of the redshift. And one of those factors in 1 plus z comes from redshifting each photon. The frequency of each photon is redshifted, and that means that the energy of each photon is redshifted-- goes down by a factor of 1 plus z. But in addition, the rate of arrival of the photons is essentially a clock which is also time dilated. 
So the rate of arrival of the photons as seen by the observer is suppressed by another factor of 1 plus z. So putting all that together, the received energy flux, which is the power received divided by the area, is just the power emitted by the source divided by 4 pi. We get this factor of 1 plus z squared, due to what we just discussed. And then the a squared of t sine squared psi sub D. So it's just the total power times that fraction that we receive times the two factors of 1 over 1 plus z. And this then is essentially the final answer, except we want to know how to evaluate a tilde squared of t 0 and sine squared of psi sub D in terms of things that we more directly measure. So to do that, a tilde of t 0 turns out to be easy, because it really is just related by the definition of omega sub k 0 to omega sub k 0. So this formula is just a rewriting of the definition of omega sub k 0. To figure out what psi is, we want to integrate along the line of sight to be able to figure out the time of emission in terms of psi. And that time of emission could then be related to the redshift, because the redshift is just the ratio of the scale factors between reception and emission. So we look first at the metric. And say we're going to be looking at null geodesics in the radial direction. And null means ds squared equals 0, and that's minus c squared dt squared plus a tilde squared of t times d psi squared. And that implies immediately that d psi dt is just equal to the speed of light divided by a tilde. And then we can get the total increment in psi between the source and us by integrating between the time of emission-- the time of the source-- to the present time-- t sub 0. And then it's just a matter of changing variables to express the variable of integration. Instead of t, we could express it as z-- the redshift itself. And that brings in a factor of h, because h is a dot over a. And I showed the manipulations last time, but it brings in a factor of h. But we know what h is as a function of z. It comes from the Friedman equation. And that then gives us an expression for psi of z sub s as an integral over z. And writing in what h of z is and what a tilde of z is from that expression, the expression for psi of z becomes the equation that's boxed. Just a matter of algebraic substitutions involving the Friedman equation, which determines what h of z is. And then putting everything together, J is just given by this expression, where all I've done is to substitute the expression for a tilde of t 0. And sine squared psi is still here, but it gets evaluated according to that formula. And putting these together, we have a complete calculation of the received radiation flux as a function of cosmological parameters-- the omegas and the h 0-- and the redshift of the source. And that's the end of the calculation. And that's where we finished last time. So any questions about that calculation? OK, fine. In that case, moving on, the next thing I wanted to show you was some real data. So here are some real data from one of those two teams that made the original announcements in 1998. This is from the High-Z Supernova Search Team. And I should write some definitions on the blackboard. The vertical axis there is essentially brightness. But you wouldn't expect the astronomers to just call it brightness, because they like to use fancier words. So they write it as little m minus capital M-- measured in magnitudes, they put in parentheses.
And little m minus capital M has the name, it's called the distance modulus, meaning it's a way of measuring distance. They think of brightness as a way of measuring distance, which indeed is what it's being used for. And it's defined as 5 times the logarithm base 10 of d sub L over 1 megaparsec, which means the luminosity distance-- I'll define this in more detail in a second-- d sub L is the luminosity distance-- distance as inferred from the luminosity. And they're measuring it in megaparsecs and taking the logarithm base 10. And then, by convention, there's an offset here of 25. Why not? So this is the definition of the distance modulus. And d sub L is defined by the relationship of what J would be in a flat Euclidean universe if you were receiving that luminosity. So J is equal to the actual power output of the source divided by 4 pi d sub L squared. This defines d sub L. So d sub L is the distance that that source would have to be at in a static Euclidean universe for you to see it with the brightness that you actually see. This, I guess, completes the definitions, but we can put these together. And m minus M is then equal to minus 5/2 times the logarithm base 10 of 4 pi J times 1 megaparsec squared, divided by the actual power output of the source, and then, of course, plus 25. So this relates this distance modulus to the energy flux and the power output of the original source. There's also on this slide the acronym MLCS. MLCS stand for multi-color light curve shape. And what that refers to is the High-Z Supernova Search Team invented a method of compensating, to some extent, for small variations in the actual power output of the supernovae type 1a. Instead of assuming that they all have exactly the same brightness, they discovered by looking at nearby supernovae of this type that there's a correlation between the absolute brightness of the supernovae and the shape of the light curve-- that is, light versus time. So they were careful to measure the light versus time for the supernovae that they used in this study. And they used that as a way of applying a small correction to what they interpreted as the intrinsic brightness of each supernova. And the results are these points. [INAUDIBLE] the top are the raw points, and three different curves for three different models. And they characterize the models in the same way we would-- in terms of different contributions to omega. So the top model is the cosmological constant dominated model, where omega sub lambda, which is what we've been calling omega sub vac is 0.76. And it's a flat model, so 0.24 for omega matter. And radiation is ignorable. They compared that with the middle model of these three, which was a model that had no vacuum energy, and omega matter of 0.2. That was essentially the dominant model at the time, the belief that the universe was open then had about a critical density of 1/5 or 1/4. And then they also compared it with a model where omega was 1-- entirely of matter with no vacuum energy. And that was this dashed curve, which is the lower of these three curves. And when the data is just plotted, it's a little hard to see how much difference there is between the three curves. So they re-plotted the data, plotting the middle curve as a straight line by construction. And then they plotted deviations from that line. And they did that for both the theoretical curves and the data. And in this magnified picture, you can see a little bit better that this top curve fits things the best. 
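To make the model comparison in this figure more concrete, here is a rough Python sketch that evaluates the distance modulus m minus M = 5 log10(d_L / 1 Mpc) + 25 for the three models being plotted. The closed-universe branch implements the result from last lecture, d_L = (1 + z) a-tilde(t0) sin(psi); for the flat and open cases I have assumed the standard replacements of sin(psi) by psi or sinh(psi), which is what the homework set asks you to derive. The helper names and the value of H0 are my own illustrative choices (H0 largely cancels when two models are compared at the same redshift).

import numpy as np
from scipy.integrate import quad

C_KM_S = 2.998e5   # speed of light in km/s

def luminosity_distance_Mpc(z, H0, omega_m, omega_vac, omega_r=0.0):
    # d_L for a matter + radiation + vacuum + curvature model, in Mpc.
    omega_k = 1.0 - omega_m - omega_r - omega_vac

    def inv_E(zp):   # 1 / (H(z)/H0), from the Friedman equation
        return 1.0 / np.sqrt(omega_m * (1 + zp)**3 + omega_r * (1 + zp)**4
                             + omega_vac + omega_k * (1 + zp)**2)

    chi, _ = quad(inv_E, 0.0, z)      # dimensionless line-of-sight integral
    d_hubble = C_KM_S / H0            # c/H0 in Mpc

    if omega_k < -1e-8:               # closed: a_tilde(t0) * sin(psi)
        root = np.sqrt(-omega_k)
        comoving = d_hubble * np.sin(root * chi) / root
    elif omega_k > 1e-8:              # open: sinh in place of sin (homework)
        root = np.sqrt(omega_k)
        comoving = d_hubble * np.sinh(root * chi) / root
    else:                             # flat
        comoving = d_hubble * chi

    # The two factors of (1+z) in J leave one factor of (1+z) in d_L.
    return (1 + z) * comoving

def distance_modulus(z, H0, omega_m, omega_vac):
    return 5.0 * np.log10(luminosity_distance_Mpc(z, H0, omega_m, omega_vac)) + 25.0

H0 = 65.0   # km/s/Mpc, an assumed illustrative value
for z in (0.5, 1.0):
    lcdm = distance_modulus(z, H0, 0.24, 0.76)        # Lambda-CDM model of the plot
    open_matter = distance_modulus(z, H0, 0.20, 0.00)
    flat_matter = distance_modulus(z, H0, 1.00, 0.00)
    print(z, round(lcdm - open_matter, 2), round(lcdm - flat_matter, 2))

The output shows the vacuum-dominated model sitting a few tenths of a magnitude higher (dimmer) than the two matter-only models at redshifts of order 0.5 to 1, and each 0.1 magnitude corresponds to a factor of 10**(0.1/2.5), roughly 10 percent, in brightness -- which is the size of the effect being discussed.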
And that's what they call the lambda CDM model. It corresponds to omega m equals 0.24. Omega lambda equals 0.76. So it's the model with a cosmological constant, with a vacuum energy. And lambda CDM stands for lambda and cold, dark matter. And cold, dark matter is just what we've been calling non-relativistic matter. So the claim is that these data points, even though there's a fair amount of scatter, fit the top curve much, much better than they fit the middle curve or the bottom curve. And statistically, that's true. It really is a much better fit, even though by eye, it's not that clear what's going on. I think by eye it looks clear that the top one fits it better than others, but it's not that clear how important the difference is. But nonetheless, the astronomers were thoroughly convinced that this was a real effect. There was considerable discussion about possible systematic errors. And I guess next, I'll say a few words about that. First of all, I should maybe just clarify a little bit better what's being seen. What's being seen is that for a given redshift, this curve, which basically shows brightness in a funny, funny way, where dimmer is upward, larger values of little m minus M-- there's a minus sign in this formula-- means a dimmer galaxy, one that looks further away. And basically, when astronomers see this, they think distance. So larger values means further away. So what's being seen is that these distant supernovae are a little bit dimmer than what you would expect in either of the other two models, either of the models that do not have vacuum energy. And the amount by which they're dimmer is a few tenths of a magnitude. And each tenth of a magnitude corresponds to about 10% in brightness. So what they're saying is that these distant if supernovae, if we assume they really fit this curve, are 20% to 30% dimmer than you would have expected in other models. It might be worth saying a little bit about why dimmer is the right sign to correspond to acceleration, which is by no means totally obvious, I don't think. So we're plotting-- AUDIENCE: What year is this? PROFESSOR: What year? This was old data that was published in 1998. It has gotten better. Now it's much more unambiguous that this works. So this is distance as inferred by brightness. So this is basically what's being plotted. If one thinks about a fixed z in which way that you go-- up or down-- I find that totally cryptic. I don't really parse that very well in my own head. But it's much clearer if you think about the other way. You could think about a galaxy-- or a supernova in this case-- at a fixed distance, and ask, suppose I compare different models-- ones that accelerate and models that don't accelerate. So if we fix the distance and say, what would we expect for the redshift of a given galaxy, in an accelerating model versus a non-accelerating model-- remember, the redshift is basically a measure of the velocity, or at least it's strongly influenced by the velocity of the object. So if the universe is accelerating, it means that the universe was expanding slower in the past than you would have thought otherwise. It's speeded up to reach its present expansion rate. So an accelerating universe is a universe that was expanding slower in the past. And slower in the past means that a galaxy at a given distance would have been moving slower, and hence would have had a lower value of z. So the effect of acceleration for a given distance-- we'll fix the distance-- should be to move the line that way, towards lower z. 
And by moving the dot that way, it puts it above the curve. So it's the same as shifting things up, which is the more natural way of describing what's seen in the graph. The points are higher than the curve. So the bottom line, though, is that what they're saying is distant supernovae are 20%, 30% dimmer than you might have thought. And from that, they want to infer that the universe is accelerating, which is a rather dramatic conclusion. So naturally, you want to ask, are there other things that can cause supernovae to look dimmer? And of course, there are other things that can cause supernovae to look dimmer than you might have thought. And there are two main ideas that were discussed at the time. One of them is just plain dust. If you're looking at something through a dusty atmosphere, it looks dimmer than it would otherwise. And that is a genuine possibility that was strongly considered. The arguments against dust were mainly twofold. The first is that dust very rarely absorbs uniformly across the spectrum. Dust usually-- depending on the size of the dust grains-- absorbs more blue light than red light, leaving more red light coming through. So the effect of seeing something through dust is normally to cause it to look more red. And this reddening was not seen. The spectrum of the light from the existing supernovae was analyzed very carefully. And the spectrum of the distant ones looked just like the spectrum of the nearby ones-- appropriately redshifted, of course, but otherwise not distorted in any way. There was no sign of this reddening. Now, it's possible to have what the astronomers refer to as gray dust, which is by definition dust that absorbs uniformly across the spectrum of what you're looking at. But the grains have to be unusually large. And nobody was ever able to figure out a source for dust grains of that sort. So based partly on theoretical grounds and partly on what nobody has ever found, there's no evidence for dust grains that would possibly cause dimming that would look this way, that would be dimming that was uniform across the spectrum. Yes? AUDIENCE: How do you tell the difference between reddened light from dust and redshifted light? PROFESSOR: OK, how do you tell the difference between reddened light from dust and plain old redshift? The difference is that the plain old redshift uniformly shifts everything by the same factor. So the whole spectrum is just moved down uniformly towards the red. This reddening effect really means that the blue part of the spectrum is depressed relative to the red part. So the shape of the spectrum is changed. So one argument is that we don't see reddening, and we don't know any way to make dust that would be gray. The second argument is that if dust was a major factor, presumably most of the dust that would be relevant would be dust in the same galaxy as the supernova explosion itself, because there's not that much dust in intergalactic space. And if dust in the galaxy of the supernova itself were relevant, then-- let me draw a little picture here. So if dust in what's called the host galaxy-- the galaxy which has the supernova in it-- then you would have a picture where there would be a ball of dust filling the galaxy. And the supernova that you're looking at might be there, or it might be there. And let's say we're looking from over here. So depending on where the supernova was in the galaxy, we would see very different amounts of intervening dust. 
And if dust were causing this dimming, it would mean we would be seeing a significant scatter in the amount of dimming depending on where the supernova happened to be in its host galaxy. And that spread was not seen. The spread that one sees in that curve could be measured and calibrated against known uncertainties in the brightness of supernovae and then the detection apparatus. And the spread that was seen was just what you expect without any additional spread associated with a dusty galaxy acting as the host. So no evidence for the spread of brightnesses that would be expected from a dusty host. Another item that was considered-- these are the main arguments against dust-- another argument that was considered, another possible source of dimming, is galactic evolution. And there, the main effect that people worried about was the production of heavy chemical elements during the life of a galaxy. As you've certainly learned about from your reading-- I don't know if we've talked about it in class or not-- the early universe was essential all hydrogen and helium. Heavier elements were made later in stars that produce supernovae explosions. And these supernovae explosions gradually cause galaxies to become more and more enriched with heavy elements. And by heavy, I mean anything heavier than helium. And that could affect, in principle, the behavior of supernovae explosions. So the evidence against that was simply that every other characteristic that astronomers could measure of these supernovae in the distant galaxies looked exactly like what was seen for nearby galaxies. So no evidence for any kind of evolution was seen. And there are many properties you could measure that are independent of distance, like the shape of the spectrum and things like that, and the pattern of the light curve versus time. So all those characteristics that astronomers can measure seem to be exactly the same for the very distant supernovae which happened billions of years ago, and the more nearby ones that happened recently. And furthermore, among the nearby ones, there's a big spread of abundances of heavy chemical elements, just because different galaxies have had different histories. So among the nearby ones, you could look for is there an effect caused by the relative abundance of heavy elements, and astronomers didn't find any. So there was no sign that galactic evolution could be playing a role here, even though one does need to worry about it. So the point is that distant supernovae 1a look like nearby ones. I'll call that a in my outline. And b is that among the nearby 1a's, heavy element abundance had no perceptible effect. So the dominant opinion gradually shifted, and now I think it's almost 100% that this acceleration is real. The acceleration, by the way, is further confirmed by measurements of fluctuations in the cosmic background radiation measurements that have been done by some ground-based experiments, and also the satellite experiments of WMAP and now Planck, which measure the anisotropies-- the ripples-- in the cosmic background radiation. It's hard to see what those ripples would have to do with the amount of vacuum energy. But it does turn out-- and we'll talk more about this a little bit later-- that we really do have a detailed theory of what makes these ripples. We can calculate what the spectrum of those ripples should look like. And the calculations depend on parameters which include the amount of vacuum energy. 
And in order to make things work, one does have to put in essentially exactly the same amount of vacuum energy as has been detected in these supernova 1a observations. So everything fits together very tightly. And I think now, just about everybody is convinced that the universe really is accelerating. The acceleration could, in principle, have at least two different causes that we can talk about. One is vacuum energy, which is the one that I'm focusing on, which is the simplest explanation. The other possibility that is discussed in the literature is something called quintessence, which is a made-up word. And what it refers to is the possibility that the acceleration of the universe today could be caused by a mechanism which is really in principle exactly the same as what we talk about for inflation in the early universe and will be talking about later. Specifically, there could be a slowly evolving scalar field which is essentially uniform throughout the universe, and changing slowly with time so it looks like it's a constant. And it could be the energy density of that scalar field that is looking to us as if it were vacuum energy. But that's the minority point of view. And that introduces extra parameters that don't seem to be necessary. But it's up for grabs. Nobody really knows. OK, any questions about what we just talked about? In that case, let me go on to my next topic, which is I want to talk a little bit more about the physics of vacuum energy. What is it that we understand about it, and why is it that most physicists say it's the least understood issue in physics? We really don't understand vacuum energy, even though we do understand why it might be nonzero. Where we're totally at a loss is trying to make any sense out of the value of the energy density that is actually observed. So where does vacuum energy come from in a quantum field theory? There are basically, I would say, three contributions. Maybe I should say in quantum field theory. The other context in which this might be discussed would be string theory. I may or may not say something about string theory, but I won't say much. But in quantum field theory, there are basically, I think, three contributions. The first is the easiest to understand, which is quantum fluctuations in bosonic fields, where the best example is the photon, or the electromagnetic field. Now, in a classical vacuum, e and b-- the the electric and magnetic fields-- would just be 0, because that's the lowest possible energy density. But just as you are probably aware that there's an uncertainty principle in quantum mechanics which tells you that the momentum and position of a particle cannot be well-defined at the same time, it is also true that e and b cannot be well-defined at the same time. So the uncertainty principles applied to the field theory imply that e and b cannot just be 0 and stay 0. E and b are constantly fluctuating. And that means that there's energy associated with those fluctuations. And the mathematics of it is actually incredibly simple. If one imagines the fields inside a box, to be able to at least avoid the infinity of space, the fields inside a box could be described in terms of standing waves, where each standing wave is either a half wavelength or a full wavelength across. And by the way, you'll be doing a homework problem on this. And each standing wave has the physics of a harmonic oscillator. It oscillates sinusoidally with time, the wave. 
And when one works out the mathematics, and even the quantum mechanics, it's exactly the same as a harmonic oscillator. So each standing wave has a zero-point energy. You may know that the zero-point energy of a harmonic oscillator is not 0, but it's 1/2 h bar omega, or 1/2 h nu, depending on whether you're using nu or omega to describe the frequency of the oscillator. So each standing wave contributes 1/2 h bar omega. And then the problem is how many standing waves are there? And the answer is, there's an infinite number of them, because there's no limit to how short the wavelength can be. So there's no limit to how many ups and downs you can have in your standing wave from one end of the box to the next. So the answer you get is infinite. It diverges. Now, the fact that it diverges at short distances can be used as an excuse for getting the problem wrong. Obviously, it's wrong. The answer's not infinite. But we have an excuse, because we certainly know there are wavelengths that are short enough that we don't understand the physics at those length scales anymore. We're basing everything on extrapolating from wavelengths that we can actually measure in the laboratory. So one could imagine that there's some wavelength beyond which everything we're saying here is nonsense, and we don't have to keep adding up 1/2 h bar omega anymore, because the arguments that justify the 1/2 h bar omega no longer apply. So we can use that as a cutoff for the calculation. And a typical cutoff-- by typical, I mean typical in arguments that physicists talk about, so typical in physics speak. So a cutoff that's often invoked here is the Planck scale, which is the square root of h bar times G divided by c cubed. And that has units of length, and it's equal to about 1.6 times 10 to the minus 33 centimeters. And what makes the scale significant is it's the scale at which we expect the effects of quantum gravity to start to be important. And we know that this quantum field theory that we're talking about does not include the effects of gravity. And we don't really even know how to modify it so that it would include the effects of gravity. So the quantum effects of gravity are still something of a mystery. So it makes sense to cut the theory off, if not earlier, at least at the Planck scale. Yes? AUDIENCE: So I would imagine what we're doing, in order to say that we have a standing wave, we have to have a box. And then in order to realize the fact that the universe may be large, you just take the limit as the box gets large. But is it really OK to do that? I mean, to treat an infinite system as the limit of a finite system? PROFESSOR: OK, the question is-- what we're going to be doing here is I talked about putting the standing waves in a box. And then at the end, we're going to take the limit as the box gets bigger and bigger. And the question is, is that really a valid way of treating the infinite space? And the answer is, in this case, it is. I'm not sure how solid an argument I can make. Certainly what one does find is what you'd expect, that as you make the box bigger and bigger, the energy that you get is proportional to the size of the box. So you're calculating an energy density. And probably the most precise thing I can say at the moment is that if it were not true, if the answer you got really depended on the way in which the space was infinite, then you'd be learning something about the infinite universe by doing an experiment in the lab, which is a little far-fetched. 
That is, if you do an experiment in a lab, it really doesn't tell you anything about whether the universe is infinite or turns back on itself and is closed. And calculations certainly do show that you get the same-- you could do, for example, a closed universe without a box. And you get the same energy density, as long as the universe was big, as we're getting this way. So I think there's pretty solid calculational evidence that what you get does not depend on the box. Yes? AUDIENCE: Going off that question, do we use the maximum size of our box as the size of our observable universe, then? PROFESSOR: OK, the question is, what do we use as the maximum size of the box? Is it the size of the observable universe? The answer really is that what you find is that you get an energy density that's independent of the size of the box, as long as the box is big. And it's that energy density that we're looking for. We don't claim to know anything about the total energy. And we don't really need to know anything about the total energy. Everything that we formulated here is in terms of energy densities. Now, the catch is that if one puts in this cutoff and takes into account only the energies of 1/2 h bar omega going up to this cutoff and stopping there-- or down to the cutoff if one's thinking of length as the measure-- you could then ask, do we get an energy density that's in any way close to what the astronomers tell us the vacuum energy actually is? And the answer is emphatically no. We don't get anything close. We're in fact off by about 120 orders of magnitude, which even in cosmology is a significant embarrassment, which is why physicists consider this question of the vacuum energy density to be such an incredible mystery. We really have no idea how to get a number as small as what we observe. Let us go on to talk about other contributions, because they are certainly important in the way we think about things. So far I have number one, right? So next comes two. And that is the quantum fluctuations of Fermi fields, where the best-known example here is the electron. Now, in quantum field theory, I should point out that all particles are described by fields, not just the photon. The electron is described by a field also. It's called the electron field. And because the electron is a fermion and not a boson, the electron field has somewhat different properties than bosonic fields, reflecting the fact that the fermions themselves obey the exclusion principle. It turns out that for fermions, there are also quantum fluctuations. They're also of order 1/2 h bar omega. But actually, it's a little bit different. They're in some sense h bar omega and not 1/2. But what's peculiar is that for electrons, the contribution is negative. And the origin of this negativity I think has a fairly simple explanation, although the explanation is not ever given, actually. The explanation that's used in quantum field theory books involves looking at equation 47 and seeing that there's an anticommutator there. And because the fields anticommute, there's a minus sign. And that means the energy is negative. And that is basically the way it's described in textbooks. That certainly says where the minus sign appears in which equation. But I don't think it's really an explanation of what the minus sign is talking about. But I think there is an explanation of what the minus sign is talking about, which goes back to the old picture that Dirac himself introduced when he first invented the Dirac equation.
When Dirac first invented the Dirac equation, he was trying to interpret it more or less in the same language as the Schrodinger equation. We don't quite do that anymore. But in doing that, Dirac discovered that his Dirac equation, which was the natural relativistic generalization of the Schrodinger equation to a particle which has spin 1/2, which I'm not sure how Dirac knew it had spin 1/2, but in any case, it's the equation for a particle of spin 1/2. And what he found was that if you just look at the energy the spectrum that the equation itself gives you, it's symmetric about 0. So if we plot energy going this way, if there's a state here, there's also a state there at negative energy. And if there's a state there, there's another state there, exactly opposite it. It's completely symmetric up and down. Now, the interpretation that Dirac gave to that was not that there are a lot of ways of making negative energy. He realized that the vacuum is by definition the state of lowest possible energy. And if you can lower the energy by adding a particle to these negative energy states, that would mean that there'd be a way of lowering the energy, and the state would not be the vacuum. So the vacuum, Dirac proposed, is the state in which all of these negative energy levels are filled. And the action of putting all these x's on the picture is often called filling the Dirac sea. S-E-A-- sea, where sea refers to this ocean of negative energy states, which is infinite. It just keeps going down. You can imagine filling all of them to describe the vacuum. Then if you ask what is the physics after you've done that-- what are the possible excitations of the vacuum, what states does this theory contain other than the vacuum? And the answer is that there could be occupations of these positive energy states, and those are called electrons. It's also possible to remove-- if you put in the right amount of energy-- one of the negative energy states, which is filled, but we could take away the particle that's there. And the absence of a particle there-- a hole in the negative energy sea-- is a positron. So electrons are there. The e plus is a hole in the Dirac sea. Now, the difficulty with this picture, and the reason why it's not often use these days, is that it makes it look like there's an intrinsic difference between electrons and positrons. Nonetheless, Dirac was perfectly aware that when you went through the math, they were completely symmetric. The fact that you described it this way is just really a feature of your description, but it doesn't make any measurable difference. So a positron really is just a perfect image of an electron, but with the opposite charge, with otherwise all the same physical properties. And there's ways of describing it where you don't make this distinction between particles and holes. But the particle hole way is I think the easiest way of understanding where the negative energy is coming from. The negative energy came by saying that the energy was 0 before we filled any of these levels. And as you fill the negative energy sea, you're lowering energy all the time. And it's that contribution which makes up the infinite negative contribution coming from the Fermi fields. And the algebra is certainly exactly right. The energy that people write down for the negative energy of the Fermi fields-- what they get by anticommuting two operators in equation 37-- is exactly the expression you get for what it takes to fill the Dirac sea. Yes? 
AUDIENCE: Are we pretty confident that the smallness of the vacuum energy can't come from cancellations between the bosonic and the Fermi fields? PROFESSOR: OK, the question is, are we confident that the cancellation cannot come from the cancellations between the Fermi fields and the bosonic fields. No, we're not at all confident that it cannot come from that. It very likely does come from that. But we are confident that we have no idea why that happens. And therefore, it's a big mystery. Certainly our ignorance allows for any answer. Because we have a positive infinite contribution, we're just going to cut off and make it large. And we're going to have a negative contribution, which we're going to cut off and make it large in magnitude but negative. And then we're going to add them, and we have no idea what we're going to get. But the fact that we get something that gets incredibly close to 0, and not something that's at all the same magnitude as the pieces you're adding together-- the positive piece or the negative piece-- means there's something going on that we don't understand. There's a cancellation that's happening that we cannot explain. Now, I should maybe add that there's one context where we would expect a cancellation. And that is, there are theories that are what are called supersymmetric, which have a perfect symmetry between bosons and fermions, which would relate the positive energy from the photon to the infinite negative energy you would get from particles called photinos, which would be the supersymmetric partner of the photons-- a spin 1/2 particle that's a mirror image of a photon but has a fermionic character. So in an exactly supersymmetric theory, you would get an exact cancellation between the positive and the negative contributions. And the answer has to be 0 in an exactly supersymmetric theory. However, the world is clearly not exactly supersymmetric. This photino has never been seen. And there'd be a particle called the selectron, which would be the scalar partner of the electron, which also has not been seen. And every known particle would have a partner, which has not been seen. There are no supersymmetric pairs which are known. So supersymmetry is still a possibility as a broken symmetry of nature. And a lot of people think-- for pretty good reasons, I think-- that it's very likely that the world does have an underlying supersymmetry. But as long as the supersymmetry is broken, it no longer guarantees this cancellation. And you could estimate what the mismatch is. And it does make things a little bit better here. If we just take this Planck scale cutoff, we miss an energy density by a factor of about 10 to the 120. If we apply supersymmetry and make an estimate of what the supersymmetry breaking scale is and what effect that has on the mismatch of these calculations, then it gets reduced. Instead of being a 120 order of magnitude problem, it's a 50 order of magnitude problem, which is a lot better, but not good enough. Now, I do want to mention a third contribution here for completeness. The third one is likely to be finite, so it's not as problematic as the other two. AUDIENCE: [INAUDIBLE] PROFESSOR: Same thing. Planck scale. AUDIENCE: Oh, OK. PROFESSOR: The third contribution is that some fields are believed to have nonzero values in the vacuum. And the famous example of that is the Higgs field, for which the particle associated with the Higgs was discovered a year ago at CERN, after over 50 years of looking for it.
And the Higgs field is maybe the only field that's part of the standard model that has a nonzero expectation value, a nonzero value in the vacuum. But in more sophisticated theories like grand unified theories, there are many more fields that have nonzero values in the vacuum. So that's a likely extension of our standard model of particle physics. So the bottom line is that it's easy for particle physicists to understand why the vacuum energy should be nonzero, but damned hard to have any idea of why it has the value that it has. We'll talk maybe at the end of the course about the possibility that the value of the vacuum energy density is, quote, "anthropically selected." That is one possible explanation, which maybe shows how desperate physicists are to look for an explanation here. One possible explanation begins with ideas from string theory, where string theory tells us that there isn't just one kind of vacuum, but in fact, a huge number of different types of vacuum, perhaps 10 to the 500. And that would mean that if there were sort of random values for these infinite numbers that get cut off, that get cut off with different values-- and there are other ways of looking at the vacuum energy in string theories-- you'd expect coming out of string theory that the typical vacuum energy would be about the same as what you get when you cut off the quantum fluctuations of the electromagnetic field at the Planck scale. That is, the typical vacuum energy coming out of a string theory would be at the Planck scale, which is this huge number compared to what we observe. But string theory would predict that there would be a spread of numbers going essentially from plus the Planck scale to minus the Planck scale, with everything in between. There'd be a tiny fraction of those vacua that would have a very small vacuum energy like what we observe. That's what you'd expect from string theory-- a large number, but a tiny fraction of vacua that would be in that interval. And then the only problem would be to explain why we might likely be living in such an unusually small fraction of the set of all possible vacuums. And the answer to that that's discussed is that it may be anthropically selected. That is, life may only form when the vacuum energy is incredibly small. And that is not built entirely from whole cloth. There is some physics behind that. We know that this vacuum energy affects the Friedman equation, which means it affects the expansion rate of the universe. So if we had a Planck scale vacuum energy, that would cause the universe to essentially blow apart at the time scale of the Planck scale, which is about 10 to the minus 40 something seconds, due to the huge repulsion that would be created by that positive vacuum energy. And conversely, if there was a huge negative vacuum energy on the order of the Planck scale, the universe would just implode on a time scale of order of the Planck scale-- 10 to the minus 40 something seconds. So assuming that life takes billions of years to evolve and assuming nothing else about life, one can conclude that life can only exist in the very narrow band of possible vacuum energy densities which are incredibly small, like the one that we're living in. So it could be that we're here only because there isn't any life anyplace else. So all living things see a very, very small value of this vacuum energy density, even though if you plunk yourself down at a random place in this multiverse, you'd be likely to see a vacuum energy that's near the Planck scale.
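Before leaving the topic, here is a back-of-the-envelope Python version of the mismatch described above: it compares a vacuum energy density of roughly one Planck energy per Planck volume (the kind of number the sum of 1/2 h-bar omega gives when cut off at the Planck scale, up to numerical factors) with the observed vacuum energy density. The observed value used below is an assumed round number, about 70 percent of the critical density for H0 near 67 km/s/Mpc, not a figure quoted in lecture, and the precise exponent depends on exactly how the cutoff sum is done -- which is why the honest statement is "about 120 orders of magnitude."

import math

hbar = 1.055e-34   # J s
c = 2.998e8        # m/s
G = 6.674e-11      # m^3 kg^-1 s^-2

planck_length = math.sqrt(hbar * G / c**3)            # in meters; about 1.6e-33 cm, as quoted above
planck_energy_density = hbar * c / planck_length**4   # ~ one Planck energy per Planck volume, J/m^3

H0 = 67.0 * 1000.0 / 3.0857e22                        # Hubble rate in 1/s
rho_crit = 3.0 * H0**2 / (8.0 * math.pi * G)          # critical mass density, kg/m^3
observed = 0.7 * rho_crit * c**2                      # assumed observed vacuum energy density, J/m^3

ratio = planck_energy_density / observed
print(f"Planck-cutoff estimate / observed ~ 10^{math.log10(ratio):.0f}")
# prints roughly 10^123 with these rough inputs -- the "about 120 orders of magnitude"

# Rough anthropic counting: ~10^500 vacua spread over roughly
# [-Planck scale, +Planck scale] leaves a fraction ~1/ratio in a band as
# narrow as the observed value, which is still an enormous number of vacua.
print(f"vacua expected in that narrow band ~ 10^{500 - round(math.log10(ratio))}")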
OK, I'm done talking about this for now. Any further questions about it before we leave the topic? I had suggested that we go on to talk about problems with the conventional big bang model, but, there is actually something else I wanted to do. I don't know how long it will take exactly, but I have a little historical interlude to talk about here. We've been talking about the Friedman equations and how they're modified by the cosmological constant, which of course is an item that was very dear to Einstein's heart. So I'd like to tell you a little history story about Albert Einstein and Alexander Friedman, which I think is very interesting. The punchline of the story is that Einstein made pretty much of a fool out of himself on this. And the reason why I like the story is maybe twofold. One is, I find it very comforting to know that even perhaps the greatest physicist of all time can make dumb mistakes just like the rest of us make dumb mistakes. I think that's a very comforting thing to keep in mind. And the other moral of the story is, I think, the importance of trying to be open-minded about issues in physics. Einstein was very much convinced that the universe was static, and so convinced that, in fact, he really made stupid mistakes trying to defend his static universe. So this will be a story of such a mistake. So those are the two people. Friedman was a Russian natural-- he was really a meteorologist. They didn't really have that many theoretical physicists back in those days. But as a meteorologist, he was an expert in solving partial differential equations, and got himself interested in general relativity, which was a new theory at this point. And in 1922, he published an actual physics paper, I think the first physics paper he ever published, and one of two. He wrote basically two papers about the Friedman equations-- one for closed universes, and one for open universes. So the first of those papers was published in June 29, 1922 in the premier physics journal of the day-- the Zeitschrift fur Physik, a German journal. And almost immediately-- or a few months later, when Einstein noticed this article-- Einstein submitted a comment about the article claiming that the article was entirely wrong, just mathematically wrong. And the article was titled "Remark on the Work of A. Friedmann 'On the Curvature of Space' " by A. Einstein, Berlin, received September 18, 1922. Looking at these dates-- the original article was received in June 1922, and Einstein was responding by September 18, a few months later. And this is a translation, which comes from a book called Cosmological Constants, which is basically a book of famous articles in cosmology, like Friedman's, and all these original articles. It's a great book if you can still get a copy of it. It's no doubt out of print. It was written by Jeremy Bernstein and Gary Feinberg. And I'm taking the translation from there, because this was written in German. I don't know German. "The works cited contains a result concerning a non-stationary world which seems suspect to me. Indeed, those solutions do not appear compatible with the field equations." And I guess A is the label of the field equations as they appeared in Friedman's paper. "From the field equation, it follows necessarily that the divergence of the matter tensor Tik vanishes." That is, energy momentum is conserved as a four-vector quantity. 
"This along with ansatzes C and D"-- equations from the paper-- leads, according to Einstein, to an equation which we can all recognize the meaning of-- the partial of rho with respect to x sub 4-- time-- is 0. Einstein convinced himself that the equations of general relativity led to the conclusion that rho cannot change with time. And he then goes on to say "which together with 8 implies that the world radius R"-- that's the scale factor. That's what we call a of t-- "is constant in time. The significance of the work, therefore, is to demonstrate this constancy." All Friedman does once you correct his equations, according to Einstein, was prove that the only cosmological solution is rho equals a constant, which was Einstein's static solution. This was entirely wrong-- no basis whatever in mathematics. But it took a while before Einstein got himself straightened out. And he did actually publish this. The sequence of events was, June 29, Friedman submits his paper. September 18, Einstein submits his rebuttal to the paper. Friedman didn't learn about this until the following December. Friedman had a friend who played a key role in this story-- Yrui Krutkov, who was visiting in Berlin during this time. And Friedman actually learned from Krutkov that Einstein had submitted a rebuttal. So Friedman apparently was able to track it down and read it. And he wrote a detailed letter to Einstein explaining to Einstein what he got wrong, which is a gutsy thing to do, but Friedman was right in this case. But Einstein was traveling and actually never read the letter, at least not until much later. Then the following May, Krutkov and Einstein are both at a conference in Leiden, a conference that they were both attending, which was a farewell lecture by Lorentz, who was retiring at that time. So they met and started talking, and continued talking. And we know most about it from a series of letters that Krutkov wrote to his sister back in Saint Petersburg. And according to those letters-- and I'm now quoting from a rather lovely book called Alexander A. Friedmann-- The Man who Made the Universe Expand, by Tropp, Frenkel, and Chernin. Krutkov wrote to his sister that on Monday, May 7, 1923, "I was reading, together with Einstein, Friedman's article in the Zeitschrift fur Physik. And then on May 18, he wrote, "I defeated Einstein in the argument about Friedmann. Petrograd's honor is saved!" Petrograd is what we now call Saint Petersburg, and where they were all from-- that is, Friedman and Krutkov. And then shortly after that, on May 31, Einstein submitted a retraction of his refutation of Friedman's paper. And the retraction is-- again, I'm quoting from Cosmological Constant, which translates all these nice papers into English. Einstein wrote, very briefly, "I have in an earlier note criticized the cited work-- Friedmann 1922. My objection rested however, as Mr. Krutkov off in person and a letter from Mr. Friedmann convinced me, on a calculational error. I am convinced that Mr. Friedmann's results are both correct and clarifying. They show that in addition to the static solution to the field equations, there are time varying solutions with a spatially symmetric structure." Anyway, the expanding universe that we now talk about. Einstein did have to admit, ultimately, that algebra is algebra, and you can't really futz with algebra. And the Einstein equations do not imply that rho cannot change with time, and that Friedman was right. There's an interesting twist on this retraction letter. 
This is just a photo of Einstein at this time period, and Krutkov. There's an interesting twist on the retraction letter, which is that the original draft still exists. I forget what museum it's in. But it's quoted in another marvelous book about this history called The Invented Universe, by Pierre Kerzberg. And I Xeroxed this from the book. And this is the original draft. And notice there are some cross-outs. And the last cross-out, which followed this explanation that there is this expanding solution-- in Einstein's original draft, he wrote but then crossed out "a physical significance can hardly be ascribed to them." So his initial instinct, even after having been convinced that these were a valid solution to the equations, was to say that they couldn't possibly be physical, because they're not physical. The universe is static. But somehow, before he submitted it, he did realize that there wasn't actually any solid logic behind that reasoning. So logic did prevail, and he decided that he really had no right to say that the solution has no physical significance, which is a good thing, because now, of course, it is the solution that we consider physically significant-- the expanding solution of Friedman. So [INAUDIBLE] is a mystery, I think. OK, we have just a couple minutes left in the class. So I think that is nearly enough time for me to at least introduce what I want to talk about next. What we'll be talking about next time-- and I'll just introduce it now-- are a set of two problems associated with the conventional big bang theory. And by the conventional bang big bang theory, I mean basically the theory we've been talking about, but in particular, the big bang theory without inflation, which we will be talking about later. But so far, we've been talking about the big bang theory without inflation. And the two problems that we'll talk about are called the horizon or horizon homogeneity problem, and the flatness problem. Both of these are problems connected with the initial conditions necessary to make the model work. So this horizon homogeneity problem is a problem about trying to understand the uniformity of the observed universe, which we've just put in as part of our initial conditions. The model that we've constructed was just completely homogeneous and isotropic from start to present. The evidence for the uniformity of the universe shows up most strongly, as I think we said before, in the cosmic background radiation, which can be measured to fantastic precision. And this radiation is known to be uniform in all directions to an accuracy of one part in 100,000, which is really a phenomenal level of accuracy. Now, what makes this hard to understand in the conventional big bang theory is that if instead of just putting it in as an assumption about the initial conditions, you try to get it out of any kind of dynamics, that turns out to be impossible. And in particular, a calculation that we'll do next time is we'll imagine tracing back photons from the cosmic background radiation arriving at the Earth today from two opposite directions in the sky. Now, the phenomenology is that those photons come with exactly the same temperature to an accuracy of one part in 100,000, and that's what we're trying to explain. Now, we all do know that systems do come to a uniform temperature. If you heated the air in this room in a corner and then let the room stand, the heat would scatter throughout the room, and the room would come to a uniform temperature. 
If you take a hot slice of pizza out of the oven, it gets cool, as everybody knows. So there is this so-called zeroth law of thermodynamics which says that everything tends to come to a uniform temperature. And it's a fair question to ask, can we perhaps explain the uniformity of the universe by invoking this zeroth law of thermodynamics? Maybe the universe just had time to come to a uniform temperature. But one can see immediately when one looks at details that that's not the case. Within the context of our conventional model of cosmology, the universe definitely did not have time to come to a uniform temperature. And the easiest way to drive that home will be a calculation that we will do first thing next time, which is that we will trace back photons coming from opposite directions in the sky and ask, what would it take for them to have been set equal to the same temperature when they were first emitted? And what we'll find is that when we trace them back to their emission sources, those emissions took place at two points which were separated from each other by about 50 horizon distances. So assuming that physical influences are limited by the speed of light-- and according to everything that we know about the laws of physics, that's true-- there is no way that the emission of that photon coming from that direction could have had any causal connection with the emission of the photon coming from the other direction. So if the uniformity had to be set up by physical processes that happened after the initial singularity, there's just no way that that emission could have known anything about what was going on over there, and no way they could have arranged to be emitting photons at the same energy at the same time. Now, everything does work if you're willing to just assume that everything started out uniform. But if you're not willing to assume that, and want to try to derive the uniformity of the universe as a dynamical consequence of processes in the early universe, there's just no way to do it in the conventional big bang theory because of this causality argument. And later, we'll see that inflation gets around that. Yes? AUDIENCE: How do we know that the homogeneity wasn't just created when the universe was smaller, in such a way that the speed of light limit wouldn't be violated, and that it would just maintain [INAUDIBLE]? PROFESSOR: OK, the question is, how do we know that the uniformity wasn't established when the universe was very small, and then the speed of light might not have to be violated? Well, the point is that if the dynamics is the conventional big bang model, what we'll show is that there's not really enough time. No matter how early you imagine it happening, it still is 50 horizon distances apart. And there's no way that those points could've communicated, no matter how close you come to t equals 0. Now, you are of course free to assume anything you want about the singularity at t equals 0. So if you want to just assume that somehow the singularity homogenized everything, that's OK. But there's no theory behind it. That's just speculation. But it is satisfactory speculation. There's nothing it contradicts. But the beauty of inflation is that it does, in fact, provide a dynamical explanation for how this uniformity could have been created, which, at least to many people, is better than just speculating that somehow it happened in the singularity. OK, I think that's it for now. I will tell you about the other problem next time.
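Since the 50-horizon-distances figure will only be derived next time, here is a crude numerical stand-in for it in Python, assuming a flat matter-dominated universe all the way back (a proportional to t to the 2/3) and assumed round values for the present age and the time of last scattering. It is only meant to show that the answer comes out in the tens, not to reproduce the careful calculation.

# In matter domination:
#   horizon distance at time t:  l_hor(t) = 3 c t
#   comoving distance out to the last-scattering photons:  about 3 c t0
#   physical separation of two OPPOSITE emission points at t_ls:
#       2 * (a(t_ls)/a(t0)) * 3 c t0 = 6 c t0 * (t_ls / t0)**(2/3)
# The factors of c cancel in the ratio, so they are omitted below.

SECONDS_PER_YEAR = 3.156e7
t0 = 13.8e9 * SECONDS_PER_YEAR       # assumed present age
t_ls = 3.8e5 * SECONDS_PER_YEAR      # assumed time of last scattering

separation_over_horizon = 6.0 * t0 * (t_ls / t0)**(2.0 / 3.0) / (3.0 * t_ls)
print(round(separation_over_horizon))   # about 66 -- same order as the ~50 quoted

The careful version, which uses the actual expansion history rather than pure matter domination throughout, is what produces the number quoted in lecture.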
And I will see you all next Tuesday.
MIT_8286_The_Early_Universe_Fall_2013
13_NonEuclidean_Spaces_Spacetime_Metric_and_Geodesic_Equation.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: OK, in that case, let's start-- I want to begin by giving a quick review of where we were last time. And then we'll pick up from there. Today's lecture, the main subject will be the-- see if this works. The main subject will be the spacetime metric, which is what we'll begin by talking about. And later, I hope we'll be doing the geodesic equation. Last time, we began by talking about open universes. And we got to open universes by way of closed universes. And we started with this Robertson-Walker form of the closed universe metric, which holds for k greater than 0 describing a closed universe. And then we said if we want to describe an open universe, we can use the same equation, but let k be less than 0. And this metric has a name, the Robertson-Walker metric, which applies for k being positive, negative, or 0 as a special case. When k is 0, this just becomes the Euclidean metric written in polar coordinates. So it's a flat space for k equals 0. Then we addressed the question of, why should we believe this as a proper description of an open universe? We know how to write it. And we could make it look a little better perhaps by introducing a kappa, which is minus k. So that kappa could be positive when k is negative. To answer that question, we had to define what criteria we had in mind for the metric we were looking for. And what we're trying to do is write down a metric that will describe a homogeneous and isotropic universe. Because from the beginning of course, we said that those are the key properties that describe, to a good approximation, the universe that we live in. So what we want to know is that this metric is homogeneous and isotropic when kappa is positive and k is negative, the new case. For the closed universe case, we already knew those things because in the closed universe case, it's obvious because we know we got it from a sphere. And a sphere is clearly homogeneous and isotropic from the start. So looking at this metric for kappa positive, we see immediately that it's obviously isotropic, at least about the origin. Because if we just sit at the origin and looked at different angles-- theta and phi-- the theta phi dependence of this metric is simply given by that expression. And this is exactly the metric of the surface of a sphere whose radius happens to be a of t times little r. And we know that that sphere is isotropic. It doesn't look manifestly isotropic because when you put theta phi coordinates on the surface of a sphere, you choose a north pole and measure everything from that. So your coordinates break the isotropy. But you know perfectly well that the sphere itself is completely isotropic. So isotropy is settled about the origin. And if we're soon going to prove homogeneity, it's enough to know that it's isotropic about the origin because homogeneity will demonstrate that all points are equivalent. So if it's isotropic about the origin, we will ultimately know that's isotropic about all points. So homogeneity is the hard thing. How do we convince ourselves that this metric is homogeneous? Now, if we look at it, it doesn't look homogeneous. It certainly looks like the origin is special. That was the case for the closed universe Robertson-Walker metric as well. 
So it's certainly not decisive. But it also doesn't prove that it's homogeneous by itself. So we have to figure out how we would prove that it's homogeneous. And here, we only sketched an argument. We didn't really go through it in detail because it gets messy. But I think logic of the argument is clear. And it's, I think, rather persuasive that this argument would work if you wrote out all the equations. So let me go through the argument again. We start by thinking about how we would demonstrate that the closed universe was homogeneous in a mathematical way using algebra rather than words. And if we wanted to prove that the closed universe was homogeneous using algebra, we would set out with the goal of trying to show that any point-- and we'll let r sub 0, theta sub 0, pi sub 0 denote an arbitrary point. What we'd like to show is that any point is equivalent to the origin. And by equivalent to the origin, what we really mean is that we could define a new coordinate system where this arbitrary point would become the origin, and the metric would look just like it looked to start with. And then this new arbitrary point will be playing exactly the same role that the origin played in the first place. So we're looking for a transformation which will map r0, theta 0, and phi 0 to the origin while maintaining the form of the metric. And we really know how to do that, because we know from the beginning that this universe is homogeneous because of the way we constructed it as the coordinization of the surface of a sphere. And the sphere is manifestly homogeneous. You can rotate any point on the sphere into any other point just by performing a rotation, which certainly does not change anything about the metric on the sphere. So we want to basically take advantage of that fact. And we could imagine-- and we can even carry it out if we have to, but I only want to imagine it. I want to imagine constructing a map from r, theta, and phi to some new r prime, theta prime, phi prime. We want to map the entire space. But we want it to have the property that the special point-- r0, theta 0, and phi 0-- gets mapped to the origin. So we want to construct this general mapping which has the property that our special point is mapped to the origin. And we can do that in three steps. And they're shown schematically here. We first simply transform from our r, theta, and phi coordinates to the four coordinates x, y, z, w that we, in fact, started with, the four coordinates that describe the euclidean four dimensional space in which this three dimensional sphere is embedded. Once we have the four dimensional space, we can perform euclidean rotations in that four dimensional space. And we can perform any rotation we want. And we, in principle, know how to write those out in detail. And we can choose the rotation, which maps r0, theta 0, and phi 0, keeping track of where it went, to the values of x, y. Z, w that will ultimately correspond to the origin of our coordinate system when we get back to r prime, theta prime, and phi prime. So we can arrange for that to happen by choosing the right rotation here. And once we've rotated, we can then define r prime, theta prime, and phi prime in terms of x prime, y prime, z prime, and w prime in exactly the same way as we did in the first place when we didn't have primes. We just used the same formulas again. 
And that will ensure that the metric in r prime, theta prime, and phi prime will be just the metric that we've had, because it's determined from the Euclidean metric in the four dimensional space in exactly the same way. So this does it. And we could, in principle, do it all in detail. And we would get a concrete expression for r prime, theta prime, and phi prime in terms of r, theta, and phi that would have the property that we want of mapping the arbitrary point that we chose to become the origin of the new system. So the point is that once you've written out all those equations, you know that they work for k positive. But in the end, you'd just have a set of equations that define r prime, theta prime, and phi prime in terms of r, theta, and phi. And those equations are just as valid for k negative as they are for k positive. And the fact that the metric will be unchanged is also just as valid for k positive and k negative, because the metric is really just determined by derivatives of the new coordinates with respect to the old coordinates. And those are all just algebraic expressions. And if an algebraic expression is an equality for one sign of k, it will be an equality for the other sign of k. So I think we have good faith-- although it'd be more convincing perhaps to actually write out all the equations-- but I think we have good faith that the same map will work for k less than 0. And by "work," it means that it will show that any point could be mapped to the origin by a metric-preserving transformation, which is the key thing, which is what we need to show that the space is actually homogeneous even though when we write it in these coordinates, it doesn't look homogeneous. So does that make sense to everybody? Are there any questions at this point? The next thing we did-- ah, I'm sorry. I guess we should point out that we're not going to actually show this explicitly because the algebra involved in the steps does get very complicated. I wanted to mention a few other facts about this Robertson-Walker metric. One important fact, which we will not show-- to show it would take approximately another lecture. It's not an incredibly deep mathematical fact, but it requires establishing some formalism to handle descriptions of curved spaces without yet knowing what the metric is going to be. But in any case, it can be shown-- and we're going to store this in the back of our heads-- that any three dimensional, homogeneous isotropic space can be described by this Robertson-Walker metric. Now, it's important to realize that that does not mean that the Robertson-Walker metric is the only way to write down a metric for a homogeneous and isotropic space. You could choose different coordinate systems that would make things look different. But the point is that for any homogeneous and isotropic three dimensional space, it is always possible to assign coordinates that make the metric the Robertson-Walker metric, which means that if you understand the Robertson-Walker metric, you don't need to understand anything else. Any homogeneous and isotropic space can be described this way. Next, we pointed out last time-- and did a short calculation to demonstrate for ourselves-- that for k greater than 0, the universe is finite. And that's clear from the beginning, because it was described as a surface of a sphere and the surface of a sphere is finite. But for k less than or equal to 0, the variable little r of the Robertson-Walker metric can become arbitrarily large.
That by itself does not imply that the space is necessarily infinite. But you could also calculate the distance from the origin as a function of little r. And that, you can show, becomes arbitrarily large. And that does mean that the space is literally infinite in size. So for the flat case or the open case, the Robertson-Walker metric describes an infinite universe. And next, so we mentioned-- and this is a homework problem, or an optional homework problem on the current set-- that the Gauss-Bolyai-Lobachevsky geometry is actually simply the open universe in the two dimensional case rather than three dimensional case. So in the language of the Robertson-Walker metric, I think it's much easier to describe than in the coordinate system that Felix Klein invented. But it's the same space. And on the homework set, you can work out the mapping between the Klein coordinates and the Robertson-Walker coordinates to see that they're the same space. Any questions? OK. Next, we changed subjects and started talking about the topic that we'll be continuing now because we did not finish this discussion-- the discussion of how to go from this space metric that we now understand to the spacetime metric, which is the fundamental quantity of general relativity and which we'll be using to describe our model universes. Time ends up not playing an important role in what we'll be talking about. But nonetheless, it is an important part of the basic formalism of general relativity. And for some questions, it's crucial how time enters the metric. So we will discuss how time enters the metric. So we want to generalize the metric from a spatial metric to a spacetime metric. And the first thing that means is that we want to understand the relativistically invariant interval between two points in spacetime. A point in spacetime is also called an event. And we begin with special relativity because that's what all this is a generalization of. In special relativity, one can define the Lorentz invariant distance between two events. Here, the events are A and B. And the coordinates of those events are xA, yA, zA, and tA for event A. And as you would obviously guess, xB, yB, zB, and tB for event B. And the Lorentz invariant interval between those events is s squared sub AB. And the first and most important thing about this interval is that it is Lorentz invariant. That is, you can compute it in any inertial frame, in any Lorentz frame, and it will have the same value even though the different pieces of it will have different values. The differences will cancel out. And in the end, when you calculate the sum of these four terms to make s squared sub ab, you will find they'll have the same value in any Lorentz frame. And we're not going to show that fact. But we're going to use that fact, a very well known fact to anybody who studied special relativity in a reasonably way. It's important to think a little bit about what the meaning of this peculiar quantity is that mixes space and time. And I think the easiest way to think about the meaning of it is to think about special cases. And it's a real number which could be positive, negative, or 0. And those are the special cases I want to think about-- positive, negative, or 0. So if s squared is positive, it means that the separation between the two events is what's called spacelike. It's dominated by the spatial term. And if that's the case, it's always possible to find a frame where the two events are simultaneous. 
And in that frame, s squared is just the square of the distance between the two events, so has a clear interpretation. It's just the distance in the Lorentz frame in which they occur simultaneously. Similarly, if s squared is negative, it means it's dominated by the negative time term. And in that case, the separation is called timelike, as you'd guess. And it has the property that it's always possible to find a Lorentz frame in which the two events occur at exactly the same point in space. And in that frame, s squared is equal, up to a factor of minus c squared, to the time separation between the two events. So s squared measures the time separation between the two events in the frame at which they happen in the same place. And s squared is actually minus c squared times the time separation squared. And finally, if s squared is 0, that's called a lightlike separation. And it means that the two events are separated by just the right distance so that if a light pulse left one, it would just arrive at the other location at exactly the time of that event. So the two events could be joined by a light pulse that travels at the speed of light. And that's the significance of s squared being 0. OK, any questions? OK-- actually, I don't want to that slide. We'll get back to that. OK, that finishes the review of last lecture. Now, what we want to do is continue talking about spacetime intervals and how we fit that into the metric, which will ultimately describe distances in both space and time. So I'm going to work on the blackboard for awhile now. So the formula that we're starting with-- and we'll put dot dot dot here and minus c squared tA minus tB squared. The dot dot dot means the y term and the z term which you can probably imagine are there without my writing them. From this expression, knowing that what we want to do is to take advantage of these geometrical ideas introduced by people like Gauss which described distances in terms of infinitesimal distances between infinitesimally close points, we can write the analogous equation for an infinitesimal distance. And that becomes ds squared is equal to dx squared plus dy squared plus dz squared minus c squared dt squared. So this would be the Lorentz invariant separation between infinitesimally separated events. And this is what we're going to try to generalize to our curved space situation. So this would be the metric for special relativity. It's called the Minkowsky metric. OK, now I want to move into the general relativity generalization of this idea. And general relativity makes use of the idea that Gauss originally suggested that distances should always be quadratic functions of the coordinate differentials. So we're going to keep that. Einstein kept that. Now, in talking about coordinate differentials, we should emphasize here that in general relativity, unlike special relativity, coordinates are just arbitrary labels for points in spacetime. In special relativity, coordinates actually measure distances and times directly, which is why the metric is so simple. You don't really need the metric in special relativity. The coordinates themselves will tell you the distances and the times. But in general relativity, that will not be the case. There's no way to do that for a curved space or a curved spacetime. So in general relativity, the coordinates are just arbitrary labels of points in spacetime. And to know anything about actual distances, you have to look at the metric. The coordinates themselves don't tell you the actual distances. 
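For reference, the special relativity expressions just quoted, written out as a reconstruction from the spoken description (the notation here, including the Greek spacetime indices in the last line, is a gloss rather than a copy of the slide; the lecture only introduces index notation later, for the two dimensional case):

$$ s_{AB}^2 = (x_A - x_B)^2 + (y_A - y_B)^2 + (z_A - z_B)^2 - c^2 (t_A - t_B)^2 $$

$$ ds^2 = dx^2 + dy^2 + dz^2 - c^2\, dt^2 \qquad \text{(the Minkowski metric)} $$

In general relativity, following Gauss's idea that ds squared should be quadratic in the coordinate differentials, the line element takes the general form

$$ ds^2 = \sum_{\mu,\nu} g_{\mu\nu}(x)\, dx^\mu dx^\nu , $$

and all of the information about actual distances and times sits in the metric g, not in the coordinate labels themselves.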
This immediately implies something about the kinds of coordinate transformations that you might want to think about. In special relativity, we have a privileged set of coordinates, namely the coordinates of Lorentz frames, of inertial frames. And the physics is simple when described in terms of those coordinates. In principle, you can use any coordinates you want, even in special relativity. But you never do, because the physics is so much simpler in the inertial coordinates that there's never any motivation for using any other coordinate systems. But in general relativity, there is no privileged coordinate system. And it's very common to make all kinds of transformations of coordinates in the context of general relativity. And the formalism is set up so you could make any coordinate transformation you want, and it's just thought of as a relabeling of the points in space and time. And the formalism of general relativity works for an arbitrary labeling of points in spacetime. So in general relativity, any coordinate transformation is allowed. But there's an important feature of these coordinate transformations-- is that when we make a coordinate transformation, we're always going to readjust our metric so that ds squared between any two nearby spacetime points has the same value in the new coordinate system that it had in the old. We will always change our metric to reflect our changes of coordinates. So ds squared must have the same value in any coordinate system. So the statement is that ds squared is coordinate-invariant. OK, to define what we mean by ds squared, which if you notice, I haven't quite done yet. It's going to be, of course, the analog of what we've been talking about in special relativity. In special relativity, we did have this special class of observers, inertial observers, observers whose measurements of length and time really corresponded to the inertial frames and whose observations are related to each other by Lorentz transformations. It's important to start out by asking is there any class of observers in general relativity which might play the same roles-- the observers that sort of define the measurements that you want to talk about. And it's clearly a little bit more complicated in general relativity. The inertial observers of special relativity are characterized by the statement that there are no forces acting on them. So they just travel at a constant velocity. And you can always go to a frame where that velocity is 0 and you an talk about the rest frame of any inertial observer. In general relativity, we need to distinguish to some extent between non-gravitational forces and gravitational forces. Non-gravitational forces, like say, electrical forces, are treated in general relativity in a way that's fundamentally similar to the way that such forces are treated in special relativity. But gravity is treated totally differently. Gravity is really just going to be described by the metric of spacetime-- by the distortion of spacetime. And we already know, by way of simple examples, that if general relativity actually works to describe the universe that we've been talking-- which it'd better or we'd be in trouble-- we have a system where if we just look at the co-moving observers, each co-moving observer has no non-gravitational forces acting on him. He's just sitting still as far as he's concerned. 
But nonetheless, these co-moving observers are accelerating relative to each other as the universe expands and as that expansion changes its expansion rate, which we've already calculated. So if there's going to be any observers that are going to play the role of inertial observers, it's presumably going to be a class that includes these co-moving observers. And the question of whether or not there are gravitational forces acting on the co-moving observers ends up depending on your point of view. Each co-moving observer would think that there's no gravitational forces acting on him. He would just be standing still. But he would see all these other co-moving observers accelerating relative to him. So he would say that there are gravitational forces acting on these other observers. So gravitational forces in general activity becomes coordinate-dependent ideas. And the Hubble expansion is one example of that where every co-moving observer would consider himself to be unaccelerating but would see all the other co-moving observers accelerating. The other famous example, which is part of the original motivation of general relativity, is the famous Einstein elevator, which is also discussed in Ryden's textbook. If we have an elevator box, we could imagine letting the elevator fall. There was a rope there, but somebody cut it. And the elevator's now falling. And we have a person in it. And the person in my-- the version of the story that I have in the lecture notes, the person's holding a bag of groceries. And if the elevator is falling freely and we ignore any air resistance or any other kind of friction so the elevator's falling at exactly the freefall rate, inside the elevator, everything will be falling with the same acceleration. The person could lift his feet up off the floor, and he would just hover there. He would feel no gravity pushing him towards the floor. And similarly, he could let go of the bag of groceries, and they will just appear to float in front of him as long as he's undergoing this freefall. So the effects of gravity have been completely removed. On the other hand, from outside the elevator, if we use the frame of reference of the Earth, we said that there very definitely is a force of gravity acting here. It's just acting the same on all the objects. And this gets elevated into the equivalence principle of general relativity. And maybe before I annunciate that, I should consider the other case here. This is one example of how things work. A similar situation can involve the same elevator, but this time, let's have it just be sitting on the floor of the building that it's located in. In that case, the person inside would feel himself pushed against the floor by gravity. If he was holding his bag of groceries, he would notice he has to apply force to the bag of groceries to stop the groceries from falling to the floor. He would say that he's being acted on by the force of gravity. That would be the natural description. But we can consider an analogous case where we have the same elevator in empty space with a rocket ship up here that I didn't allow myself room to draw tied by cables to the elevator. And if the rocket ship accelerated with acceleration little g, the person inside the elevator would feel himself pressed against the floor in exactly the same way as you would here. So again, we have a situation where there's gravity in one case and no gravity in the other case, but no difference in what the person inside would feel. 
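A minimal way to put the elevator argument into an equation -- this is an added sketch, not something written out in the lecture, and it assumes an idealized, perfectly uniform gravitational field of strength g pointing downward: in the Earth's frame, every object inside the falling elevator obeys

$$ \ddot z = -g . $$

If we define a coordinate tied to the freely falling elevator, say $\zeta = z + \tfrac{1}{2} g t^2$, then

$$ \ddot{\zeta} = \ddot z + g = 0 , $$

so in the freely falling coordinates every object inside moves exactly as if there were no gravity, which is what the person and the bag of groceries experience. The cancellation is exact only because the field was taken to be uniform.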
And that is what becomes this principle of equivalence, which says that the physics of the accelerating frame of the elevator in that analogy-- so this is accelerating frame but with no gravity-- is equivalent to feeling the gravitational field of the Earth. In short, if you were living inside the elevator, you cannot tell which of those two pictures describe the world that you're actually part of. And this is a very deep principle. It has very strong implications. It really does mean that everything you'd ever want to know about how gravity affects physical systems can be described by understanding how accelerations affect the physical systems. So it reduces the questions of what gravity does to just understanding what happens when you're in an accelerating coordinate system. OK, this also opens the door for the question I began with-- is there a special class of observers here? And we can identify a special class of observers. But the special class is not observers which have no forces acting on them, which is what we would have said in special relativity. But rather, the special observers are the observers who have no non-gravitational forces acting on them, like the co-moving observers in our model of the universe. But gravitational forces, you could never say if they're there or not because they're always there in some frames and not there in other frames. So we have no control or no way of making any frame-invariant statements about the force of gravity. So what we'll be interested in as our primary observers and which we're going to use to define things in this class of observers with no non-gravitational forces-- and those will be called free-falling observers. And they're called free-falling because you have no way of knowing whether they're just observers for which there is no gravity, which would be an example of a free-falling observer by our definition. But the situation is indistinguishable from this one, where free-falling has its obvious meaning-- that the guy there is falling relative to the Earth's frame. But he's freely falling. And therefore, he does not feel, relative to his environment, any forces whatever. Now, I should emphasize that this equivalence principle holds only in small regions. In principle, it only holds in infinitesimal regions because there are, in gravitational systems, what we call tidal effects, where a tidal effect simply means that the gravitational field is never completely uniform. And if the gravitational field is not uniform, you do not completely cancel it by going into the accelerating frame of the elevator. But in any infinitesimal region, you can always cancel the effects of gravity by going into a properly accelerating frame. And that's what the equivalence principle says. OK, are there any questions about that? OK, this being said-- it was a long prelude-- we can now define what ds squared is supposed to represent. And the answer is simply that ds squared has the same meaning as in special relativity except that inertial observers are replaced by free-falling observers. OK, so let's review what exactly that means. It means that if ds squared is positive, it means that there will always be a class of free-falling observers for whom those two events will occur at the same time. And ds squared will be the distance between those two events as measured by those inertial observers-- bah, I said "inertial"-- free-falling observers. 
And similarly, if ds squared is negative, it means there will be a class of free-falling observers for whom those two events will occur at the same location. And ds squared will measure, up to a factor of minus c squared, the time separation squared between those two events. And it will again be the case that if ds squared is 0, it will mean that the two events are separated by just the right distance so that a light pulse can travel from one to the other. OK, any questions about that? It's was kind of a long-winded discussion. But I think it does pay to actually understand what df ds means rather than just to write down a formula for it and say that's like special relativity. OK, having said all this, our next goal is to figure out how time enters the Robertson-Walker metric to give us a spacetime metric instead of just a spatial metric, which we already have written down. And I'm going to write down the answer and then describe why that has to be the right answer. I think it's the easiest way to handle it here. So the right answer is that when we incorporate time and think of this as a metric for spacetime, ds squared is going to be minus c squared dt squared plus a squared of t times dr squared over 1 minus k r squared-- right now, it's just the same spatial part that we had before-- plus r squared d theta squared plus sine squared theta d phi squared. End parentheses. End curly brackets. So all I've done is I've added a minus c squared dt squared term to the metric. Now, why is this the right metric? I'm going to first consider two special cases, which will verify some of the terms there. And then I want to also discuss why there aren't any other terms besides the ones that we know have to be there. So first, let's just consider the case-- case one will just be dt equals 0. If there's no time separation between the two events, then we're only interested in spatial separations. And we've already talked about how to describe spatial separations in a way which makes the description homogeneous and isotropic. And we said that this is the most general way of describing spatial separations that are in a space which is homogeneous and isotropic. So from what we said previously, this has to be the answer. That's how we describe homogeneous isotropic spaces. So when dt vanishes, it just reduces to the case we've already discussed. Simple enough. Case two, which involves a little bit of new thinking-- suppose dr equals d theta equals d phi equals 0 so that only time changes. OK, this describes the situation about our co-moving observers. They're sitting at fixed spatial coordinates and evolving in time. And we've already said that the thing that we call cosmic time is simply time as measured by the wrist watches of our co-moving observers. So t, if you want t to be cosmic time, which we do-- we're trying to describe a metric for our spacetime as we've already described it. We're now just trying to write a metric for it. So t should be cosmic time. So t should be the time as measured on the wrist watches of the co-moving observers. And that's exactly what this metric says. It says that if ds squared defines the measurements of our free-falling observers, which is what we said is the definition of ds squared, that it is just equal to minus c squared times the change in the coordinate time. And coordinate time means cosmic time because that's the coordinate system we're using. 
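For reference, the spacetime metric just written on the blackboard, transcribed into symbols from the spoken description (the content is as stated above; only the typesetting is added here):

$$ ds^2 = -c^2\, dt^2 + a^2(t) \left\{ \frac{dr^2}{1 - k r^2} + r^2 \left( d\theta^2 + \sin^2\theta\, d\phi^2 \right) \right\} . $$

In case one, dt = 0, this reduces to the spatial Robertson-Walker metric already discussed; in case two, dr = d theta = d phi = 0, it reduces to ds squared = minus c squared dt squared, so the wristwatch time of a co-moving observer advances at exactly the rate of the coordinate t.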
So putting in the minus c squared dt squared term is the only way that it can be so that the wrist watches of our co-moving, free-falling observers measure the same thing that the coordinate measures, which is what we defined cosmic time to be in the first place. So I think that justifies this term. And notice, if you had any coefficient here other than c squared, there'd be a multiplicity offset between what the wrist watches of your observers are measuring and what cosmic time is ticking off. And we're not allowing that, because we define cosmic time to be the time measured by the wrist watches. OK, so that takes care of these two cases. And I think it implies that these terms have to be here in exactly the form that we've written. And we could stop now and pretend that we've solved the whole problem. But I always like to be what I consider to be thorough. So I like to sort of imagine the questions that could pop up if people were inquisitive. So you might imagine that you have some difficult roommate who says, why can't I put some other term, and what else could there be? The only thing that we've left out here are terms that involve products of dt with either dr, d theta, or d phi. So what about terms like the product of dr times dt or d theta times dt or d phi times dt? Question mark. So one possible answer is I looked in a book and it wasn't there. But that's not the best of all possible answers. It's good to understand why things are in books and why other things or not. So you might want to construct an argument of why these terms have to be absent. And the reason why those terms have to be absent is because if they were there, they would violate isotropy. Roughly speaking, the notion is that if you have a dt times some d spatial coordinate, that singles out a certain direction in spatial coordinates space because dr is not the same as minus dr. Dr points in a certain direction. To be more explicit about that, in the notes, I discuss a thought experiment which basically gives a concrete realization of the asymmetry that I just discussed. So to see how those terms explicitly violate isotropy, we can imagine a thought experiment where we start by thinking about some particular point in space. And we'll give it coordinates r, theta, and phi. And I'll assume r is non-zero. We could then imagine that two people sitting at this point-- and in the Lewis Carroll spirit, I call them Tweedledee and Tweedledum-- can decide to do an experiment by first synchronizing their clocks. And they might as well synchronize t cosmic time, let's say. And then, one of them can go off in the direction of positive r, and the other can go off in the direction of negative r at the same coordinate velocity, which I'll call v. And by coordinate velocity, I mean dr dt, because that's the simplest thing to talk about here. It many not be the same as the physical velocity, but we don't care. It'll be the same for both of them. And the experiment will be that they will each travel until there's some-- until cosmic time-- they're passing a lot of cosmic time clocks as they travel. And they agree to travel until cosmic time ticks off until some chosen final time. And when they each finish the experiment by noticing that the cosmic time clocks now read t sub f, they will look at their own watches and see how much time elapsed. So they're basically measuring time dilation-- how do their wrist watch times, when they move, differ from cosmic time. 
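As a sketch of the comparison they are about to make -- added here for concreteness, with a hypothetical cross term whose coefficient I'll call g_tr, a label not used in the lecture -- suppose the metric contained a piece 2 g_tr dr dt. Tweedledee has dr = + v dt and Tweedledum has dr = - v dt, with dt the cosmic time interval, so each would compute

$$ ds^2 = -c^2\, dt^2 \pm 2\, g_{tr}\, v\, dt^2 + \dots , $$

and the two wristwatch (proper time) readings would differ purely according to whether the traveler went toward + r or toward - r.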
And the point is that if we have a dr dt term in the metric, these people get different values because the time that they measure will be what they call ds squared, up to a factor of minus c squared. And that will include this dt dr term. And dr for one of them will be the coordinate velocity they chose times d cosmic time, the amount of cosmic time interval they travel for. They've agreed on that before they take off. And for the other, dr will be negative v times d cosmic time. So this term will give a different contribution to the ds squared that each of these two entities, Tweedledee and Tweedledum, will measure. And therefore, they'll be measuring different ds squareds, and that means they'll be measuring different things on their wrist watches. And that means they have an asymmetry in directions. By going one direction or the other, they could determine whether their time dilation will be increased or decreased. And that, if our universe is isotropic, should not be possible. And therefore, if we want to write a metric which describes an isotropic universe, we have to omit the dr dt term. And a completely identical argument implies that we have to also omit d theta dt and d phi dt. So isotropy implies the dt dr term is not allowed. Because otherwise, we would have a Tweedledee Tweedledum time dilation asymmetry, which we're not allowed to have in an isotropic universe. OK, any questions about that? OK, if you have no questions about that, now we're ready to go on to our next topic. Now that we've described the metric of our universe, there it is-- the full Robertson-Walker spacetime metric for a homogeneous and isotropic universe. The next thing I'd like to talk about is how do we calculate motion in a metric in the context of general relativity. And our treatment here will be completely general. We'll learn how to calculate motion in an arbitrary metric. And we'll in fact use the Schwarzschild metric, which describes spherically symmetric objects like stars or even black holes, as an example. But our real purpose is to understand things like motion in the universe. And I guess at this point, I am going to-- load up the screen. OK, good. OK, I'm going to just use the equations from the lecture notes. This particular calculation involves a lot of long equations, so I think doing it on the blackboard would probably be a bit too tedious. So I'm instead going to just lift the equations and talk about them directly from the lecture notes themselves. So what we're interested in is thinking about a geodesic in some arbitrary metric. And we're going to start with the simplest possible example of the two dimensional spatial metric of the same kind of spaces that Gauss and Bolyai and Lobachevsky were talking about, using this notation of differential geometry of thinking of a metric in terms of coordinates, which in this case, we'll initially call x and y. It will generalize perfectly straightforwardly to spacetimes because all the ideas are the same. But it's easiest to start out by simply talking about measuring distances in a two dimensional space. A geodesic is defined as a line between two points in space which has the property that the length of that line is stationary with respect to any variations. Stationary means the first derivative vanishes. Now, in this two dimensional space example, our stationary lines will also always be minima. That is, you can minimize the distance between two points by finding the shortest possible line. There is no longest possible line.
And there aren't any saddle points either, I don't think. So I think in this case the minima-- the stationary points will always be minimum I believe. But in general, when we have spacetime metrics, especially when things are not even positive definite, these geodesics, the stationary lines, can be either maxima or minima or saddle points. So you should imagine that all those possibilities are there. However, the equations we'll derive will really just be the equations that say that the first order difference vanishes. If you vary the path a little bit to first order, the length does not change. And that will be true for maxima, minima, or saddle points. We won't have to care in deriving the equations. So we start by imagining a metric like that. And the first thing we want to do is just adopt a better notation for the metric. And there are two improvements. The first is to number the coordinates instead of thinking of them as different letters. So instead of talking about x and y, we're going to talk about x super one and x super 2. Now, these 1's and 2's have the danger of possibly being confused with a power. We probably never write x to the first power, but you might write x super 2 and think of if as x squared. Many times we, of course, do that. So one always has to hope that the context will make it clear what that index refers to. But here, these upper index objects-- those superscripts are just indices. They're not powers. You might wonder why we tolerate such a crazy notation when we could have written them as subscripts. And then there would not be this confusion. But the answer is that in general relativity, one does make use of both subscripts and superscripts in a slightly different way. And to some extent, you'll see that in what we'll be doing. So it's useful in general relativity to have two kinds of scripts. And the only places that seem to exist are up and down. So they're superscripts and subscripts. And one simply hopes that there's no confusion with powers. OK, so step one is number your indices and to number your coordinates. And then instead of writing that the sum of three terms that we had-- and of course, it gets to be much more. If you have four coordinates, it would be 16 terms or 10 terms, depending on how you collected them. But instead of writing that mess, you can write it using the summation notation. Sum from i equals 1 to 2, sum j equals 1 to 2 of g sub ij of the x-coordinates times dx i dx j. And when you sum over i and j, you're summing over 1 and 2, which means you're summing over x and y. And the sum includes the x, x term, which is now called the 1, 1 term. And the y, y term which is now called the 2, 2 term, and the x, y term, which is now called the 1, 2 or the 2, 1 term. And those are identical to each other. And they just get added. So that shortens the notation considerably. But then there's one further simplification that was actually introduced by Einstein himself. And it's always called the Einstein summation convention. Notice that in this equation, the letter i appears twice as an index-- as an upper index there and as a lower index on the metric, g sub ij. And the Einstein convention is that whenever you have a repeated index where one is upper and one is lower, you automatically sum over them without writing the summation sign. The summation sign is implied. So then this equation get simplified to that equation, which is the form that we'll actually be using. And that's as about as simple as it gets. OK, so far, that's just notation. 
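Putting the notation just described into symbols -- a reconstruction, since the slide itself is not reproduced in the transcript -- the two dimensional metric reads

$$ ds^2 = \sum_{i=1}^{2} \sum_{j=1}^{2} g_{ij}(x)\, dx^i dx^j \;\equiv\; g_{ij}\, dx^i dx^j \;=\; g_{11} (dx^1)^2 + 2\, g_{12}\, dx^1 dx^2 + g_{22} (dx^2)^2 , $$

where the middle form uses the Einstein summation convention (a repeated upper and lower index is summed automatically) and g_12 = g_21.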
OK, now what we want to do is to talk about a path between two points. And we want to discuss how we're going to describe the path and how we're going to derive the equations that will tell us that this path has the minimum possible length, which is what we're trying to do. We're trying to find the equations that tell us when a path has an extreme value of the length. So to describe the path itself, we're going to imagine parameterizing it, which means we're going to think of a function xi of lambda, where xi-- remember, xi means 1 and 2. It means specifying one both x and y as a function of lambda. So this is really two functions of lambda. And this function of lambda is going to show us how the point that traces out this path varies from point A to point B. And we'll allow it to vary as a function of this parameter that we've introduced, lambda. And we'll adopt the convention that lambda is 0 at one end and lambda sub f at the other end. So xi of 0 will be required to be the coordinates of point A If we want to start at some specified point A, we want to end up at some specified point B. So we'll insist that the coordinates evaluated at lambda sub f are xi sub B. And as long as xi of lambda is a continuous function, which we will also insist on, then xi of lambda will describe a path from A to B, which is what we're trying to do. OK, to apply the metric and write down an expression for the length of this path, that's what we want to do next. And then we want to figure how to extremize that expression for the length. First step, we need to get an expression for the length. So the metric is written in terms of infinitesimal separations. So we want to imagine dividing this path up into little segments, each corresponding to some d lambda. Each little segment goes from some lambda to some lambda plus d lambda, where d lambda is infinitesimal. And the change in the coordinates over that interval are then just the derivative of xi with respect to lambda-- remember, we have this function, xi of lambda-- times d lambda. This will give us the differential coordinates between any two neighboring points along the line. Then ds squared is defined in terms of d xi. And we just plug this formula into the expression for ds squared in terms of the infinitesimal separations. So we have the metric. And then where we had previously just d xi, now we have d xi d lambda times d lambda. And similarly, where we previously had just d xj, now we have the derivative of xj with respect to lambda, again, times d lambda. So we have two powers of d lambda appearing in this expression. ds itself will be the square root of ds squared. In this case, we are talking about positive, definite distances. So we can take the square root. So we put a square root sign over it. And now, we have only one power of d lambda. And this describes the length of the segment that goes from lambda to lambda plus d lambda, the length as defined by the metric. The full length of the line is obtained just by integrating that from 0 to the final value of lambda. So equation 540 here is what we're looking for-- the expression for the length of the line in terms of the parametrization that we've chosen. OK, any questions about that formula? OK, next step-- and here's where things get kind of complicated with the algebra, although I think the ideas are still pretty simple. The next step is to figure out how we determine when that expression is at its minimum value. 
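For reference, the path length expression just described -- written out here from the spoken description, with S as my shorthand for the length of the path from A to B:

$$ S = \int_0^{\lambda_f} \sqrt{ \, g_{ij}\big(x(\lambda)\big)\, \frac{dx^i}{d\lambda} \frac{dx^j}{d\lambda} \, } \; d\lambda , $$

where each infinitesimal segment from lambda to lambda + d lambda contributes the integrand times d lambda, and the endpoints satisfy x^i(0) = x^i_A and x^i(lambda_f) = x^i_B.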
How do we determine when the path has the right properties that we found the minimum length? So to do that, we want to imagine varying the path. We want to consider comparing the length of the path that we're thinking about to the length of an arbitrary nearby path. And to do that, we can introduce a little bit of extra notation here. Here's the point xA. Here's the point xB. x of lambda is the path that we're thinking about. And we're asking the question, is this the path of minimum possible length? And to do that, we're going to compare it with an arbitrary nearby path. So the arbitrary nearby path is what's called x tilde in this diagram. It starts at the same point xA and ends at the same point xB. But along the way, it deviates by an infinitesimal amount from the original path. And we're going to parametrize that by equation 541a here. The tilde path will be equal to the original path plus a parameter that I'm going to introduce called alpha times a function wi of lambda, where wi of lambda is really an arbitrary function. And I've introduced this extra parameter alpha just so I could say in a simple way what it means for these paths to be infinitesimally close, which just means that if alpha has an infinitesimal value, the two paths are infinitesimally close. And the function wi, we'll think of being a perfectly finite function with values like 2 and 5, not values that are infinitesimally small. OK, with this parametrization of our paths, we want to impose one important criterion, which is that the two paths are supposed to start and end at the same points, A and B. And that means that this w super i that describes the deviation between the two paths has to vanish at those two endpoints. Or else, the paths aren't going from the same starting point to the same ending point. And certainly, if you move the endpoints, you can always find a shorter path. There's no geodesic if you allow yourself to move the endpoints. So we insist therefore that wi of 0, which is wi at the first endpoint, is 0. And similarly, wi of lambda sub f at the other endpoint is also 0. That's what we insist on. OK, now having set up this formalism, we can now write down a very simple equation that says this path is an extremum. The path is an extremum if dS d alpha is equal to 0 for all choices of wi of lambda. OK, if it's an extremum, it means any small variation-- and small variations are proportional to alpha-- any small variation produces 0 derivative. So dS d alpha should equal 0. And that should be the case for any possible deviation if we really have found the minimum possible length. OK? OK with everybody? OK, now, it's mostly just a lot of gore to get the answer. The key step will be a crucial integration by parts that you'll see in a minute. But let's just go through the algebra together. I'm going to define an auxiliary quantity a of lambda and alpha, which is just the metric times the derivatives of the functions. The path length of the deviated path-- these are tilde functions here. So a is the integrand for the length of the perturbed path, the tilde path. So S of x tilde is just the integral of the square root of a, d lambda. OK, now we need some pieces to carry out our derivative. So I've introduced a few auxiliary calculations here that we can then put into the big calculation. We're going to need the derivative of the metric with respect to alpha. Now, the metric does not depend directly on alpha. But the metric does depend on x tilde. It's evaluated at the point x tilde for any given value of lambda.
And x tilde depends on alpha, because remember, x tilde was equal to the original path plus alpha times this wi, the deviation. So it's a chain rule problem to figure out what the derivative of gij is with respect to alpha. So it's the derivative of gij with respect to xk times the derivative of xk tilde with respect to alpha-- just straightforward chain rule. And the derivative of x tilde with respect to alpha is just this function, w super i, or in this case, w super k. That's what defines the deviations. So this is our result, then, for the derivative of gij with respect to alpha. Then, we apply that to differentiating S itself, finding all the alphas inside that square root. And scroll up a little bit so we can see the definition of a. a consists of gij times the dx's themselves. And the dx's themselves depend on the alphas. So we're going to get terms coming from differentiating those with respect to alpha. And we get a term coming from differentiating the gij with respect to alpha. So the whole quantity in the integrand here has a square root operating on it. So the derivative of the square root of a quantity is 1 over the square root of the same quantity times the derivative of the quantity, just differentiating the 1/2 power of a. So that gives us a 1/2 and 1 over the square root of a. And then inside here, we have the derivative of a with respect to alpha. And one of those terms, we've already calculated. It's this multiplied by dx i d lambda dx j d lambda, which come along for the ride. And they lose their tilde because we're trying to calculate the derivative at alpha equals 0. So once we differentiate one factor with respect to alpha, we evaluate the other factors at alpha equals 0. So that's what we've done. We've evaluated the other factors at alpha equals 0. And then, when we differentiate dx i tilde d lambda with respect to alpha, we just get d wi d lambda, the derivative of wi with respect to lambda. And dx j d lambda comes along, now evaluated at alpha equals 0. And similarly, the second term is where we differentiated the second factor here with respect to alpha. And when we differentiate with respect to alpha, we bring down the w. So this becomes dx i d lambda dx j d lambda. So this is the expression. And now we want to simplify it a little bit and figure out how to write down an equation which tells us when it's actually 0. So first, we want to simplify it a little bit. And I guess I want to-- do they fit? Almost. What I want to argue is that these last two terms are really equal to each other up to just rearranging the indices. Remember, i and j are just being summed over. So we could have called them any letter we wanted, and they would still just be summed over 1 and 2. And in particular, we can interchange what we call i with what we call j. And then, these two terms would become identical. And we're allowed to do that because these are just what are called dummy indices. They're just names of indices that are being summed over. And you get the same sum no matter what you call the index you're summing over. So those terms can be combined, giving us just 2 times either one of those two terms we can keep. And now, we only have two terms in our expression, which is not bad. But things are still a little complicated. And what makes them complicated at this point, which is what we have to get rid of, is the fact that w occurs as a multiplicative factor in the first term. But w is differentiated with respect to lambda in the second term.
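Collecting the pieces just described into one formula -- again a reconstruction from the spoken derivation rather than a copy of the slide, writing A(lambda) = g_ij (dx^i/d lambda)(dx^j/d lambda) for the quantity under the square root, evaluated at alpha = 0:

$$ \left. \frac{dS}{d\alpha} \right|_{\alpha=0} = \int_0^{\lambda_f} \frac{1}{2\sqrt{A}} \left[ \frac{\partial g_{ij}}{\partial x^k}\, w^k\, \frac{dx^i}{d\lambda} \frac{dx^j}{d\lambda} + 2\, g_{ij}\, \frac{dw^i}{d\lambda} \frac{dx^j}{d\lambda} \right] d\lambda , $$

where the factor of 2 in the second piece is the result of combining the two identical terms by relabeling dummy indices.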
And when it's written that way, there's no direct way you could see what properties w has to have or the other terms have to have so that the expression vanishes. But the crucial trick for handling that particular issue-- and it's the only real issue in this problem. The rest is just straightforward or sometimes tedious manipulations. The key step is to integrate by parts to turn the derivative of w expression into an expression that is just multiplicative in w. And the miraculous thing is that for the situation we have, there are no boundary terms that arise from that integration by parts. So here's integration by parts spelled out in gory detail. We're going to use the famous formula that says that the integral of u dv is equal to minus the integral of v du plus the product U times V evaluated at the two endpoints and subtracted. So this is just the standard formula that defines integration by parts. The U is the term that starts out not having derivatives and later acquires derivatives. So that's the 1 over root a times gij dxj d lambda piece of this-- this is the quantity we're trying to calculate. And the du will just be-- I'm sorry-- dv will just be the factor which is a differential in the original expression, d wi d lambda times d lambda, which we're going to let be equal to dv. So this u and this dv give us the original integral. This integral is the same as the integral we're trying to evaluate. Now when we integrate by parts, we apply a derivative to U to write down dU, which means it's just the derivative with respect to lambda of the quantity in brackets times d lambda. That's the du. And dv is just easily integrable. V is equal to wi. Now, the important thing is to look at the boundary term. Because if the boundary term were nonzero, we might not have accomplished anything. But the boundary term is 0 because the boundary term is the product of U times V, and V is wi. And wi vanishes at both boundaries. That, remember, was just the condition that the path goes between A and B. When you vary the path, you don't vary the points A and B. You only vary the path in between. So wi vanishes at the endpoints. So that means that v vanishes at the endpoints. And that means that the product of U times V vanishes at the endpoints. And that means our boundary term, our surface term, does not contribute. So we turned the original integral into another integral where now wi appears as a multiplicative factor and it's no longer differentiated. Lots of other things get differentiated in the process. But wi gets to sit by itself. And that now makes it easy to combine these two terms and see under what circumstances the sum vanishes. So the integral, after we make this integration by parts on one of the two terms, becomes this expression where now, the w's are always multiplicative. And by rearranging the names of these dummy indices-- as we initially have it, w has a superscript k in the first term and a superscript i in the second term. But one could rearrange these dummy indices-- we could name them anything we want-- so that in both cases, w has the same index. And then you can factor it out. And then you get this marvelous equation, which is now very close to being an equation that we're prepared to deal with. The question we want to address now is under what circumstances does this vanish for every wi? Now, if we only know that it vanished for some particular wi, then we would not be able to say very much.
Because it's very easy for an integral to be nonzero all over the place, literally everywhere except maybe at some isolated points and still integrate to 0. It could be positive in some places, negative in other places and zero only at crossing points and still integrate to 0. But if this is going to vanish for any wi, then the claim is that the quantity in curly brackets has to vanish identically. And I think the best way to prove that is to say that if the quantity in brackets did not vanish identically, then you could let wi be equal to the quantity in brackets. Remember, if this is non-zero for any wi, we have a contradiction because we're going to require this to vanish for any wi. So if the quantity in brackets were nonzero, we could let wi be the same quantity. And then we would just have the integral of a perfect square. And then clearly, the integral would not vanish. So that shows that if the quantity in brackets does not vanish, the integral does not vanish, at least for some wi. And that means that if the integral's going to vanish for every wi, which is what we're trying to impose, the quantity in curly brackets has to vanish identically. And that's our conclusion. So that implies-- and I guess here is where we're going to stop. But we get to the famous boxed equation. And this really is-- we'll simplify it a little bit afterwards next time. But this really is the result. The geodesic equation is that equation, which is just the equation that the quantity that we had in curly brackets vanishes. So if the path that we've chosen has the property that these derivatives are equal to each other-- and notice it depends on the metric and it depends on the path. Because you have dx d lambda appearing everywhere. And x of lambda is the path. But if this equation holds, then that path is a stationary point. It's an if and only if statement as long as paths are continuous. That is the geodesic equation. It tells us whether or not our path is the minimum. And next time, we will simplify it a bit. And we'll look at examples and understand how the formula works. So I'll see you folks again next Tuesday.
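For reference, the boxed result described at the end, reconstructed from the derivation above (so the notation is a gloss on what appears in the lecture notes): demanding that the integral vanish for every w^i forces the quantity in curly brackets to vanish, which gives

$$ \frac{d}{d\lambda} \left[ \frac{1}{\sqrt{A}}\; g_{ij}\, \frac{dx^j}{d\lambda} \right] = \frac{1}{2\sqrt{A}}\; \frac{\partial g_{jk}}{\partial x^i}\, \frac{dx^j}{d\lambda} \frac{dx^k}{d\lambda} , \qquad A \equiv g_{jk}\, \frac{dx^j}{d\lambda} \frac{dx^k}{d\lambda} . $$

This is the geodesic equation in this parametrization; the simplification promised for next time presumably amounts to choosing lambda to be the path length itself, so that A = 1 and the square roots drop out.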
Business_Strategy_Lecture
Business_Strategy_12_Strategy_Implementation_Strategic_Projects_Initiatives.txt
[Music] by now we're pretty far advanced in our strategic planning process we have covered the key strategic inputs the external environment internal environment we also covered the key aspects of the strategy development and today we're actually going to move on to the strategy implementation so we will become very practical and a lot of the things that i am going to share with you today are actually my own experiences as head of strategy in various companies so i want to start with this right away by sharing with you what a typical strategic planning cycle in a company looks like there are basically three phases the first phase is the preparation phase which basically covers the strategic inputs the second phase is then the actual strategy development and the third phase is then moving on to implementation so in the first phase that typically happens around july where the strategy team sits down looks at the external environment looks at trends and then also studies the internal environment the strengths and weaknesses of the company and uses that as the baseline and the input for the strategy development then the month of august is typically spent on strategy development so here we develop a top-down corporate strategic direction that is then presented to all the business units towards the end of august or the beginning of september the months of september and october are then used for the business units to come up with their strategic plans bottom up based on what they have heard in the strategic plan presentation in the first draft around the end of october beginning of november all the business units then have their strategic plans and also a first draft of the budget and this is then presented to the board of directors towards the middle of november with the input from the board of directors there's some final revision which brings us to the beginning of december and that's the time when the strategic plan typically is complete and the budgets are being set for the next year and then we move on to the strategy implementation in december we develop the project plans and then from january onwards all the plans are being executed and implemented so in a nutshell here are the key steps from strategy to implementation so the first big step is to develop a strategy paper from the strategy paper we can then derive strategic initiatives so key strategic projects that will help us implement the strategy then there's an implementation planning step and road mapping that then leads to the implementation and measurements here we put in place kpis and key measurements to implement the strategy and ideally if you do everything right that leads to the desired impact which of course ideally should be above average returns so i want to go now step by step through this process and start with the strategy paper so the strategy paper is a comprehensive well-structured document and at its core it summarizes the findings and the recommendations that come out of the strategic planning process now there are different time horizons for this strategy paper but typically it's either a three-year or a five-year planning period or time horizon very occasionally there are also longer term visions that might be considered for example an agenda 2030 or agenda 2050 but that happens quite rarely typically it's a three or five year planning cycle now this paper the strategy paper is typically structured like this so i took here a 2022-26 paper with a five-year planning cycle we
would start in the first chapter with the current state of the business which typically includes some facts figures and tables on the current performance as well as a full year landing what we call a landing so the forecast for the remainder of this current year and the financial outlook for this year the second chapter is then the environment so the economic environment and the competitor environment industry trends and so on the third part is a high level vision typically coming from the ceo or from the chairman or from the board of directors which is the high level five year vision for the company so if we are in the year 2021 that would be the vision 2026 so five years from now chapter four is then the core of the document which is probably about seventy percent of the document which is the strategic plan the actual strategic plan 2022-26 so the five-year plan in this chapter four this is then followed by a financial forecast that is based on chapter four so chapter five directly flows from chapter four and gives a financial forecast for the next five years and then also the budget for the next year in this case 2022 and then typically there's an appendix with lots of details and lots of additional information about the strategic plan now these strategy papers can be very short so i've seen strategy papers as short as three to five pages but typically they consist of about 50 to 100 powerpoint slides or 50 to 200 pages with a lot of details on the strategies as well as the strategic initiatives that come out of it now i would like to provide you with a couple of useful frameworks to structure the strategy paper and basically in my past couple of years in strategy and strategic planning i have used one of those four frameworks so the simplest one is the one on the left which is a simple list of strategies or strategic initiatives so it's very simple typically one to five key strategies that the company wants to implement and then a few lines of details this is ideal for single business unit strategies for simple businesses if the business is a bit more complicated like for example it has several business units then a structured list is the next level of complexity where it takes basically that simple list from the very left and breaks it down into different areas so for example in a hypermarket business you could talk about the actual grocery and hypermarket operations as well as the real estate operations so the real estate that happens around the hypermarket so basically the real estate strategy could be two areas to cover in a structured list now for businesses that use different time horizons especially when there's a crisis and you need a short-term and a long-term set of actions it might make sense to use a time-based structure so in different horizons you talk about the strategies in the short term in the medium term and in the long term for example short term could be the next year medium term could be the next three to five years and the long term could be five years and above the most systematic and comprehensive framework is the one on the right which is the strategy house this is basically applicable to any business any type of business but is ideal for businesses that are more sophisticated in the strategic planning process and for this one i would like to just explain how this typically looks so in the strategy house the roof basically is the vision so this is the high level vision the
five-year vision coming from top management from the board and this roof this vision is carried by the strategic pillars so strategic pillars could be for example an international expansion pillar so pillar a for example could be international expansion pillar b could be a digitalization strategy for example and then pillar c could be a revamp or a revitalization of the core business so different pillars that carry this five-year vision within these pillars you see a bunch of bricks and these bricks basically are the strategic initiatives so a1 a2 a3 are initiatives under the first pillar so if the first pillar is internationalization or international expansion a1 might be expansion within southeast asia and a2 might be expansion to south asia bangladesh india and a3 might be expansion into china and north asia for example you see that the house is standing on a platform on a strong base and these are the enablers now enablers are typically things like a strong human resource strategy putting in place it infrastructure or digital infrastructure sustainability is typically also an enabler that supports the entire strategy house and ensures that all of these strategic pillars actually are stable and happen as they should and these enablers are core to make this happen so you see the strategy house gives a very clear structure to the strategy and basically covers all the elements from the vision to the strategic pillars through to the enablers so once we have the strategy paper written we need to move on to strategic initiatives and translating basically what is in the strategy paper into specific initiatives and here i took an example from a strategic plan that i had written and this is something that is very public so i can share this here which is the 2019 plan for minor food and in this plan we listed each strategy and then later it was translated into specific initiatives so here in this example the strategy is to set up an innovation engine so a dedicated team that would take care of innovation new meals new menus new concepts new formats and on this page that you see here we highlighted examples of other companies that implemented a similar initiative and we explained how this initiative could look for minor food and out of this we established three initiatives so they were directly derived out of this single page and these three initiatives in this case were to establish a dedicated team of three to four people the second one was to build a facility a kitchen facility that would basically be the home of this innovation team an innovation lab so to say and the third was then to change the processes and implement systematic work processes that would incorporate this team and this facility into the day-to-day work of minor food so these three initiatives were actually implemented so the result and again this is very public was that the minor food innovation team mfit was set up in april 2020 also the facility the 340 square meter mfit innovation facility was opened a couple of months later at riverside plaza in bangkok the team consists of three chefs that work across all brands of minor food and in terms of the work process initially this was led by myself the chief strategy officer and later was handed over to the chief marketing officer once it was successfully completed and the process had been established so this is a typical case example of where a strategy is
translated into initiatives and then those initiatives are being implemented a useful tool for strategy implementation is a simple excel table like this so you see here a simple list of the key pillars and the initiatives being derived from them and then in the fourth column you see the action items that come out obviously standardized here for confidentiality reasons so the what is actually sitting in that action item column and then each action item has a due date and an owner so by when does it need to be completed and who owns this and this is a very useful tool to have regular project updates and then follow up because that increases the likelihood of making the implementation successful and frankly this is where most companies fail it's relatively easy to put in place a strategy in the strategy paper but then to make it happen is a totally different game and tools like this like a simple tracker help to make this implementation happen a small illustrative sketch of such a tracker is included at the end of this session now out of this implementation plan could come specific projects and specific initiatives and for those it might make sense to develop specific project plans so this is the example of the repositioning of the coffee club thailand another case example from minor food that was implemented in 2020 and this strategic initiative was translated into a basically 10-week project that was also led by the strategy team and after the 10-week implementation project it was then handed over to the business in terms of ongoing impact tracking and implementation now what is typically helpful is after a strategic planning process you have a lot of activities a lot of strategies and initiatives that come out of those strategies it can be very useful to map those out on a single gantt chart like here and this gives the big picture of the strategy implementation it also shows the resource requirement and sometimes helps to recalibrate and say okay in year one maybe we are a little bit heavy or in this case towards the end of year one we are quite heavy in terms of strategic initiatives and we might have to delay some of them because of resource constraints on the other side it could also help to put the resources in place in the first place and plan for recruiting and so on to make those initiatives happen so once this roadmap is in place and the strategic plan is in place and the initiatives are in place then we can move on to the actual implementation and this needs to be accompanied by impact tracking and this is very very important tracking the key performance indicators and checking the performance is a very important element of making the strategy happen and why this is so important is basically reflected here in some of these quotes from senior executives from the companies that are shown here one of the vps at tetra pak said you can't manage without measuring and only what is measured gets really done so measurement is very important to make things tangible and it is important to communicate the priorities and helps to get things done one of the senior executives of procter and gamble also warned and this is a very important warning here that if you put in place wrong measurements then it might drive undesired behaviors and have unintended consequences so this measurement process is very important for example if you give a kpi like a measurement of sales the team might be single-mindedly focused only on sales and might totally leave profitability out of the picture and this is certainly from a top management perspective not
something that might be very desirable so it is important to balance those measurements the balanced scorecard as a term comes to mind here and make sure that we have the right measurements in place that support the strategy now when we put these measurements in place we need to differentiate between key performance indicators and these are the key measurements that we would like to put in place key result indicators which show the result and the basic performance indicators and to make this a little bit tangible for you i put this into an example of a weight loss program here so key result indicators are the ones on the top so they tell you what you have done for example the before and after picture that is a key result indicator the actual weight the weight target is a key result indicator it's the result of many actions that you take in order to lose weight also your muscle mass or your body mass index all of these are key result indicators that come out of many actions that you take to lose weight key performance indicators tell you what to do to really increase the performance and achieve those results that you desire a key performance indicator for a diet for example would be to have less than 1,200 calories of intake per day it could be working out and burning 400 calories per day or lifting 10 more pounds every month very specific things that help you achieve the key results now besides those kpis the key performance indicators there's also a bunch of performance indicators they tell you what to do so you see here for example walk more or don't eat burgers don't eat chocolate don't eat junk food eat more vegetables and do more activities like swimming and so on now all of these are very small initiatives and help you to achieve your kpis they contribute to the kpis they are important in terms of the process but they are not so critical overall to be tracked on a regular basis sometimes too detailed for top management so to get an idea of how to do the performance tracking and how to set up these performance indicators it's easy to get overwhelmed with too many performance indicators so a couple of academics and practitioners have put rules in place here and i cited a few of those rules here so kaplan and norton who are basically the inventors of the balanced scorecard they say no more than 20 kpis so 20 kpis is basically the maximum that people can keep track of hope and fraser even say fewer than 10 kpis an approach i pretty much like and that's why i put the picture of the book here as well david parmenter wrote a book that i can highly recommend called key performance indicators a little bit dated from the year 2007 but still highly relevant now 14 years later he put in place a rule of thumb the 10/80/10 rule so the rule of thumb is that the performance scorecard should have 10 kpis and 10 kris so 10 key performance indicators and 10 key result indicators and then on the side you can track up to 80 or so performance indicators but these are typically not part of the performance scorecard of the balanced scorecard so basically going pretty much in line with kaplan and norton who say no more than 20 kpis so parmenter follows a similar rule and says 10 kpis and 10 kris so once all these measures are in place then we can start implementing and tracking the performance and if we do things right then we get to the desired target which is to achieve above average returns meaning we beat the competition in terms of the returns and the results of the company and
basically we could stop the strategic planning process here but in the next three sessions i would just like to give a few more details of this implementation process and a few tools and frameworks and processes of how to make this successful [Music]
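as a purely illustrative aside, here is a minimal sketch of what the simple implementation tracker described above (pillar, initiative, action item, due date, owner) could look like in code; the column names, dates, owners and initiatives are assumptions for illustration, not the actual company template.

```python
# illustrative sketch only -- a minimal version of the implementation tracker
# described in this session (pillar / initiative / action item / due date /
# owner); all field names and entries here are assumptions, not real data
from datetime import date

tracker = [
    {"pillar": "A - international expansion", "initiative": "A1 - southeast asia",
     "action": "sign franchise partner", "due": date(2022, 3, 31), "owner": "BD team", "done": False},
    {"pillar": "B - digitalization", "initiative": "B2 - delivery platform",
     "action": "launch pilot in bangkok", "due": date(2022, 1, 15), "owner": "IT", "done": True},
]

def overdue(items, today=None):
    """Return open action items whose due date has already passed."""
    today = today or date.today()
    return [i for i in items if not i["done"] and i["due"] < today]

# regular project update: list what is overdue and who owns it
for item in overdue(tracker, today=date(2022, 2, 1)):
    print(f'{item["action"]} (owner: {item["owner"]}, due {item["due"]})')
```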
Business_Strategy_Lecture
Business_Strategy_07_Competitive_Rivalry_Competitive_Dynamics.txt
[Music] in the last session we kicked off the strategy formulation process by looking at business level strategy which is the most basic type of strategy specifically we looked at porter's generic strategies which include cost leadership differentiation strategies and segmentation strategies now in today's session we're going to dive a little bit deeper into business level strategies by looking at competitive rivalry and competitive dynamics we're going to look at what drives competitive attacks and responses and how we can possibly predict those attacks and responses of our competitors to illustrate what i mean by competitive rivalry here are a few examples of big famous competitive rivals for example coca-cola and pepsi adidas and nike mcdonald's and burger king and here in thailand specifically pizza company and pizza hut or lotus and big c in the hypermarket segment so these are companies that fight very hard for space and for market share in their respective industries now let's kick off with some basic definitions as usual competitors are firms that are operating in the same market offering similar products and targeting similar customers so for two companies to be identified as competitors they would have to meet basically these three requirements what is competitive rivalry it is the fight of those competitors for market share and an advantageous position in the market and it consists of competitive actions and the respective responses and then finally competitive dynamics are all those actions and responses put together of all the firms in the market now why do we need to study competitive rivalry in the end we want to understand our competitor we want to understand the competitor's way of thinking and eventually we want to predict the competitor's behavior we want to see whether the competitor is going to attack us or how the competitor is going to react and respond if we are attacking the competitor now before jumping into the analysis of the competitor we first need to take a step back and need to be very clear about what is actually a market and what is the competitive environment that we are playing in and a good way to start is to look at economics and economics identifies competitors by looking at products that can substitute our product and the way to measure this is cross-price elasticity both elasticity formulas are written out at the end of this session what does it mean it means how does the quantity that we can sell change if the competitor changes its price so basically what happens here is if this equation is positive then the two products are substitute products or competitive products meaning for example if the competitor increases their price the quantity for our product will go up so both sides of the equation are positive and therefore the entire equation will be positive as well our products are substitute products or competitive products it can also happen that we have a complementary product that is closely linked to ours and then this equation would be negative but the larger eta the elasticity is the more competitive are the two products now these are two products now how can we define the whole market economics also has an answer to this one and that is collective price elasticity collective price elasticity is a bit of a thought experiment it is defined as the change in quantity when all firms in the market change the price of their products so basically we are saying that if this equation is small meaning we change the price of all firms in the market and the quantity remains basically unchanged
that means that the customer doesn't have a choice there are no substitute products and therefore we have actually covered in our analysis the entire market so all the companies that we have considered actually make up the market now if we missed a company then this change in quantity will be large so if all the companies that we have in our analysis change the price and the quantity drops because of the price increase that means that customers go to another company so there must be another element another company in the market that we have forgotten in our analysis but if this is small then we have covered the full market now you can see already from me talking about this that this is very theoretical and it's very difficult in practice to observe and study this and to actually make it meaningful for our strategic analysis so we need a simpler way and luckily business studies has a way to do that in a very qualitative manner and this is based on two factors the first factor that we look at is market commonality so the question here is how many markets are two competitors jointly involved in and what is the degree of importance that each of the companies is assigning to this individual market that's the market commonality and the second aspect is resource similarity so how similar are the resources tangible and intangible resources in terms of type and amount and this is illustrated in a 2x2 matrix as you can see here on the x-axis we have resource similarity and on the y-axis market commonality let's talk through the four different fields that we have here our two companies are depicted as blue and red for company x and company y and resources that are similar resources a are shown as rectangular shapes and the triangle would be a different type of resources b so let me talk through an example here in this field we would have high resource similarity and high market commonality so going back to the example that we looked at earlier lotus vs big c both of them are hypermarket players both of these players have very similar resources the building looks very similar both have shopping malls attached to the hypermarket the pos systems the number of people the number of resources that are used the shelving the lighting everything is very very similar between the two companies and the target market in the lower to mid customer segment is also very similar so there's a very high overlap and therefore they are fierce competitors in the market now here is an example where the resource similarity is still high but the market commonality is not that high anymore market commonality is low an example for this one would be lotus and makro so lotus again is a hypermarket player here in thailand siam makro is a cash and carry format that targets mostly b2b customers now the resources that they use are quite similar the number of people that they hire the qualifications that they hire for the pos system the layout of the store all of these are fairly similar however the market is not lotus is targeting individuals end consumers and families makro on the other side is targeting business customers so restaurants hotels catering companies now they have some overlap in terms of large families so the large families might actually buy from either of the companies but that overlap is quite limited the next example is a situation where we have low resource similarity but very high market commonality an example for this one could be lotus again and gourmet market which is
a premium supermarket here in thailand now in terms of resources gourmet market has much more services attached it is more premium it has different types of products the shelving the store layout everything looks quite different from what lotus has but the target market is fairly similar so there's a big overlap especially in the middle and upper middle class segment and therefore a high market commonality but low resource similarity and finally this is an example where we have both low market commonality and low resource similarity the example that comes to mind here is maybe lotus again and fuji super which is a japanese supermarket that targets specifically japanese customers so it has japanese speaking staff the products are imported from japan it is quite high end and it's very much targeting the niche of japanese customers or thai customers that would like to shop for very specific japanese products so therefore the resource similarity is not very high and the market commonality is also not high so this is a way of looking at the market and understanding who are actually our main competitors and who are secondary competitors now the next step is to understand what are actually drivers of competitive actions and competitive responses so what drives the competitive rivalry and there are three drivers the first and most basic driver is awareness so let's talk through an example here for example we have lotus and big c and let's assume that lotus is lowering their prices for a specific period on grocery products which is also big c's main selling category so the first step is if lotus lowers their prices in order for big c to take any action on this they need to be aware of what is actually happening in the market so that's the prerequisite of any competitive action now if big c misses that and is not aware of what is happening it might lead to a severe negative impact for the company so awareness is the first key driver for competitive actions once we have awareness and once we know what is going on in the market and with our competitor then there's still a question of whether we're actually motivated to react so in our case where lotus has lowered the prices in the grocery category because it is such an important category for big c as well it is very likely that big c has enough incentive to take action and respond because the market commonality in this area is very high also it's a core category for both lotus and big c in the grocery segment and therefore it's quite likely that big c will take actions now if this attack would come in an area that is not important or maybe big c has very little sales they might not react and might not have a motivation to take any action in response finally a company can only take strategic action if it has the ability to do so so in the hypermarket example it could be simply that big c has run out of stock for a certain category and therefore it doesn't even have the ability to respond so the market share will automatically go to lotus anyway there is no competitive response that big c could offer if it is running out of stock however if big c has sufficient stock then for sure there is a motivation to respond and the ability is there as well it has enough resources and the flexibility to react and therefore it will very likely take a competitive response here it is important to mention that the greater the imbalance of the resources the smaller is also the likelihood of a response next i would like to look a little bit deeper into
what actually drives this likelihood of an attack or a response so starting off with the basic definitions first an attack or competitive action is there to build or defend competitive advantage and improve the company's market position and then the response in return is to counter the effects of the original action of the attack taken by a competitor and there are three factors that influence the likelihood of an attack the first one is timing there is something called the first mover incentive that is the expectation of a company to have higher returns to have more customer loyalty and gain a bigger market share and a reputation as an innovator if they move first a classic example for this one would be apple who has the reputation of being the innovator and moving first bringing new gadgets and new functionality into the market basically all the time there's also a second mover incentive so companies might decide not to be the first mover and to wait first especially if there's high uncertainty about the technology they might first want to study the customers save on r d as r d becomes cheaper as time passes by they might also try to take the opportunity to improve the process see the mistakes that competitors make and then avoid those mistakes and make it better there's also something called the late mover incentive and that is if you miss the boat then it's better to move late than never so timing is a big factor in the likelihood of an attack depending on what you believe in whether you want to be the first mover or you want to be the second mover that influences the likelihood of an attack second is organization size and that can go either way so there are advantages for small companies small companies are usually more flexible they have a big range of actions that they can take and if anything goes wrong the impact usually is rather small in case of a failure now large companies have much more resources and can therefore maybe act more easily and any negative impact that comes from the action that they take is easier to absorb because of the sheer size of the company so it can play out either way the third factor is quality which is defined as meeting or exceeding customer expectations and it can come as product quality dimensions like performance features durability serviceability aesthetics and so on or service quality dimensions like timeliness courtesy consistency convenience completeness and accuracy of the service provided the key here is that we need to understand what is our position and what is the competitor's position in terms of quality because a company that has quality issues is less likely to launch a competitive attack and is also more vulnerable and might not be able to respond so timing company size and quality are the three factors that influence the likelihood of a strategic attack now briefly talking about response there are also three factors that influence the likelihood of a response the first one is the type of action and the expectation of the potential impact and a famous example here is coca-cola that saw pepsi in the 1920s changing their pricing to five cents basically a nickel for the 12 ounce bottle and didn't react because they didn't expect pepsi to gain such a big advantage and gain back market share through this action so they didn't take any action because the expectation was different the second is the actor's reputation so if the actor has a very strong reputation and is likely to have a big
impact on us then we are more likely to respond to the attacker and finally the dependence on the market so if the attack happens in a market that is very important to us where we have a big market share which is core to our business then we are more likely to respond if it happens in a market that is not important to us or where we have only a very small stake we might actually let it go and not respond next i would like to look briefly at competitive dynamics and a great way to study competitive dynamics is mckinsey's scp or structure conduct performance model how does this model work the idea behind this is that changes in the structure of a market for example consolidation lead to a change in conduct of the players in the market and then consequently to a change in performance as well and we can study this chain from changing structure to the conduct and to the performance however the model is a dynamic model it also works the other way around so sometimes changes in performance maybe because the profitability is dropping maybe customer trends are changing are forcing a change in conduct of the companies and as companies maybe consolidate or price less aggressively the structure might change as well so this model works forward and backward it's a very dynamic model important here to mention is that these changes in structure are often triggered by external shocks for example a change in government policy a change in regulation or maybe a natural disaster or anything else happening in the outside environment that forces an industry to have some structural changes and just to show how this model works and how this looks in action this is an example from mckinsey who studied the australian beer industry from 1985 to 2001 and basically looked at all the external shocks that happened during that time and how structure conduct and performance changed accordingly and you see there's a lot of dynamics built in here a lot of back and forth and the principle of this model is to try to actually look forward now this is a backward looking analysis but the idea is to try to look forward and study what happens now if i acquire a competitor if i start to consolidate the industry what my remaining competitors are going to do in terms of conduct and how the performance of the entire industry our performance and the competitor performance is changing and then how that will in turn again affect the conduct and then the structure of the industry so we try to model a complete cycle forward looking for an industry that's competitive dynamics one last point to mention here is that markets don't move at the same speed there are different types of markets the basic one is a standard cycle market typically it takes about five to seven years for a cycle to be completed there are some competitive advantages and companies are shielded for a limited amount of time for some time imitation is possible but it will take some effort an example here for me is the automotive industry where cycles like new models are being launched in about a five year cycle much faster are technology markets for example so here the cycle time is three to four years or maybe even faster one or two years here the competitive advantages are not shielded imitation happens very fast and is very inexpensive and companies continuously have to innovate and come up with new products and new ideas to retain the competitive edge but there are also slow cycle markets and
here for me the steel industry comes to mind these are typically industries where we have very heavy capital investment they are very capital intensive and therefore shielded from new entrants or from competition coming into the market imitation is costly it takes time to build a new factory a new plant and therefore the cycles for these markets are typically eight to ten years so markets are moving at different speeds so this was a brief introduction into competitive rivalry and competitive dynamics we looked at some frameworks and some ways to study competitive dynamics and to forecast and predict what the competitors are doing so this concludes our session on business level strategies and competitive behavior [Music]
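as promised earlier, here are the two elasticities from the beginning of this session written out as formulas; the notation is my own shorthand for what was described verbally, not taken from the original slides.

```latex
% cross-price elasticity of product A with respect to the price of product B
% (positive -> substitutes / competitors, negative -> complements)
\[
  \eta_{A,B} \;=\; \frac{\%\Delta Q_A}{\%\Delta P_B}
            \;=\; \frac{\Delta Q_A / Q_A}{\Delta P_B / P_B}
\]

% collective price elasticity: all firms we currently count as "the market"
% change their prices together; if the total quantity barely moves, those
% firms cover the whole market (customers have nowhere else to go)
\[
  \eta_{\text{collective}} \;=\; \frac{\%\Delta Q_{\text{total}}}{\%\Delta P_{\text{all firms}}}
\]
```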
Business_Strategy_Lecture
Business_Strategy_13_Budgeting_Financial_Planning.txt
[Music] by now we are almost at the end of our strategic planning process we have come up with our strategy we have formulated the strategy in a strategy paper we have developed some high level initiatives and also some targets and kpis for those initiatives today we are bringing it all together in a financial plan and budget now unfortunately this step in the process is also the step that is quite tedious and that many students are struggling with and also many newcomers to the strategic planning process so i would like to spend some time on it i want to make it as practical as possible by sharing the approach that i have been taking in my past experience for budgeting but as usual we start off with a definition so budgeting or building a budget is an estimation of revenue and expenses over a specific future period of time and is usually compiled and re-evaluated on a periodic basis so we look at the revenues and the expenses and we plan this for the upcoming period normally the next year or the next three or five years and every year this process is repeated and the budget is re-evaluated now just to share how this fits into our planning cycle that we discussed in one of the previous sessions it pops up here a couple of times normally in the finalization of the strategic plan there is also a budgeting step that is the first draft and then after the presentation to the board of directors there's a second step which usually requires a revision or some additional inputs into the financial plan and budget as well as into the strategic plan and in the end which is typically the period around the beginning of december we have a final strategic plan and a final budget and that becomes then the plan for the next year or for the next three years or the next five years now what are the objectives of budgeting why do we have to do this why is this so important first the budget translates the management's operating plans for the upcoming periods for the next couple of years into quantitative terms into financial terms that's very important the second objective is once this quantification has happened it allows management to allocate and to control the company's resources according to the plan and ideally if we control it well and if we allocate it well we can achieve our objectives the third objective is that we can predict cash flows by forecasting the revenues expenses and the capital expenditure and so we can plan our cash in future periods the fourth objective is to coordinate all the financial activities across all parts of the company so that means also to coordinate financial plans across departments and to manage departments in each area and the final objective is to ensure that the strategic plans are actually financially feasible and that we can achieve all our objectives with the financial resources that we have at our disposal now the approach that i would like to introduce here is a seven step approach to budgeting and as i said this is the approach that i have been applying in the past couple of years in real life in my day job so first we start with a historic baseline and we use revenues for that baseline and then we extrapolate using economic and industry indicators to get a baseline revenue forecast for the next couple of years and from this revenue forecast we can then in the third step build the baseline p and l so fill in all the other lines of the profit and loss statement and that gives us a baseline so steps one two and three are kind of the first steps in the budgeting process
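before walking through the example that follows, here is a rough sketch of the kind of arithmetic behind steps one to three; the revenue numbers are invented, and the multiplicative reading of the "outperformance" step is my own assumption about how the historic and forward-looking indicators are combined, not the exact model from the slides.

```python
# rough sketch of budgeting steps 1-3: historic baseline -> extrapolated
# revenue forecast -> one example baseline P&L line; all numbers are made up
historic_revenue = [22.0, 22.8, 23.5, 22.9, 24.5]   # five historic years, e.g. in billions

years = len(historic_revenue) - 1
cagr = (historic_revenue[-1] / historic_revenue[0]) ** (1 / years) - 1   # ~2.7% here

# historic market growth proxy = inflation + industry trend; outperformance is
# read as a ratio of growth rates ("grew about x percent faster than the market")
hist_market = 0.005 + 0.020
outperformance = cagr / hist_market

# forward-looking indicators (the kind of figures one might pull from reports)
fwd_market = 0.0148 + 0.006
fwd_growth = fwd_market * outperformance          # lands around 2.3% with these inputs

forecast = [historic_revenue[-1]]
for _ in range(5):
    forecast.append(forecast[-1] * (1 + fwd_growth))

# step 3: derive a baseline P&L line from the revenue forecast, e.g. keep
# selling expenses at a fixed percent of sales (an assumed ratio)
selling_expense_pct = 0.12
baseline_selling_expenses = [rev * selling_expense_pct for rev in forecast]
print([round(r, 1) for r in forecast])
```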
then we come to the second phase which is to translate our strategic plan into financials and include it in the budget and these are steps four five and six so in step four we basically build business cases for each of our strategic projects in the next step we map these on the timeline and identify which spend which revenue which benefit which cost comes at what point in time and then we bring it all together and create a first draft of the budget so these are steps number four five and six once we have all of this we have a first draft and then we need to check whether this first draft actually meets the objectives that we have set in our long-term vision and that's what i call a gap analysis so we look at our first draft budget and our long-term vision and our plan and see if there are any gaps if there are any gaps then we have to revise we have to come up with additional initiatives and come up with a new plan or additional plans so now let's get into the details and let's get technical so the first step is to start with a historic baseline revenue and as an example and this is purely a hypothetical example here i have taken tf mama the noodle manufacturer here in thailand and i started off with the revenues that i found in the annual reports so 2017-2021 now for 2021 i don't have the complete fourth quarter yet i have only three quarters so nine months available and i had to forecast or extrapolate the remaining three months and that's why this is shown here as nine plus three so nine months actuals and three months forecast so what we can see here is the revenues have been growing on average by 2.8 percent per year that's the cagr or the compound annual growth rate and we have some ups and downs and obviously in 2020 because of the covid crisis a little bit lower compared to 2019 but then in 2021 we expect to recover that basically fully so that's the baseline revenue for our company now in a second step what we try to do is we try to extrapolate and build a baseline revenue forecast so let me explain how i did this here and i did this in a very simplified way there are more complicated more sophisticated ways of doing this but basically we take the first part that you have already seen the years 2017 to 2021 so the past five years basically including the current one as a base and you will see here again the 2.8 percent cagr compound annual growth rate and then i looked at what has happened in the past four to five years in terms of economic indicators and i just took for simplification here inflation per year and food industry trends and i looked at whether there were any new competitors coming in that would have had a major impact or any other factors that could have had a major impact on our budget and plan in the past five years so what i found is the average inflation in thailand over the past five years or so was about 0.5 percent 0.49 percent to be exact and the food industry which tf mama is a player in grew by about 2.0 percent now what you can notice here is that tf mama outperformed the food industry and the inflation over the past five years so if we add the two together tf mama outperformed that combined rate by about 12 percent per year in relative terms now if i look at the forecast and again these are economic indicators or industry indicators that i can get from reports here from statista from euromonitor i see a forecast inflation of 1.48 percent so inflation slightly increases and the food industry trend on the other side declines slightly so it still grows but it grows by 0.6 percent
so slower than the two percent that we have seen in the past couple of years so if i apply the same kind of performance that tf mama had in the past and apply this to those new indicators then i come up with a growth rate of about 2.3 percent and then i have applied this growth rate over the next couple of years and i have just extrapolated and grown the revenues by that amount so basically the idea here is if we don't do anything if we don't change anything if we don't react then just based on the inflation that we can pass on to the consumers and the overall food industry trend we should see a growth rate in revenues of about 2.3 percent and that is what i have modeled out here again this is strongly simplified there are much more sophisticated forecast modeling approaches that can be applied but for our purpose here this is enough then once i have the forecast revenues i can build the rest of the profit and loss statement and you see here on the right side i have made some assumptions so i started off with a sales forecast as per the previous page for other income i just assumed that other income will increase with inflation so i built that over the years 2022 to 2026 based on the inflation forecast then i made assumptions for other lines for example cost of sales we see right now that we have a raw materials squeeze around the globe so i believe that in the next two years this difficulty in getting raw materials will continue the supply chain difficulties will continue and therefore we are likely to see a cost increase of about 10 percent in just those two years and after that the global supply chains will catch up and the raw material cost will come down again to previous levels but at least for 2022 and 2023 i have to assume that the raw material costs are increasing now for selling expenses i just assume that i can maintain this as a percent of sales and for administrative expenses which is basically our back office i assume that we see about four percent salary increase per year which is kind of in line with what is happening in thailand at this point in time so four percent increase of salaries per year so i applied this over the next five years for other components of the p l i just assume that i can maintain them as a percent of sales and for corporate income tax i assumed a corporate income tax of 20 percent so all the lines add up and that gives us a baseline p and l and if i go to the next page you see how this p and l plays out visualized so the first part the upper part of the graph we have already seen is the revenues and then i just visualized the net profit as well put it in a graph and there you see already that we will have some challenges because compared to the year 2021 which sees a net profit forecast of about 5 billion thai baht 5,000 million thai baht we see a big drop in 2022 because of the cost increase both personnel costs as well as raw material costs and that of course hurts the profit and the profit will recover after those two years but it will not go back to previous levels or pre-covid levels so there is definitely something that we have to do but again this assumes this is the baseline if we don't take any strategic action if we don't do anything differently compared to what we are currently already doing so with this we have a baseline that's a starting point now what we have to do is we have to add on top our strategic plan and the impact of our strategic plan and here it becomes interesting so in step number four what we have to do is we have to build business
cases for each of our strategic initiatives we talked a little bit about this already in previous sessions but we basically have to translate our strategy our strategic idea our strategic project or initiative into a business case and i've used a template here that i like to use for business cases in the company that i work in and it's a very simple model i would like to share how this works so first i have to estimate what the capex is and that is number one here in the chart i try to find out what i have to invest in terms of capital expenditure and what i took here is an energy savings case so an investment in new chillers or air-conditioning units or something like this in a factory and here in this case the number 52.4 that's the investment that is required and in the second step i have to enter the depreciation period so this depends on accounting rules internal standards as well as the country's accounting rules and these kinds of chillers and air-conditioning units are depreciated over a period of 10 years which is their assumed useful life so these 52.4 million baht have to be depreciated over 10 years then the third step is to estimate the benefits of this initiative so for example here installing new chillers new air-conditioning units it can be extra revenues that are created it can be cost savings like in this case or any other benefits that might be generated by the project itself then in step number four we have to calculate the depreciation cost so in this case it is basically the 52.4 million divided by 10 over 10 years so that's 5.24 million each year and the last step here is to calculate the corporate income tax impact from the benefit that we get so of course we have higher savings therefore we also need to pay higher tax which we have to deduct here from the net savings so once we have this we can then look at the overall business case so we see here the cash flows we see 52.4 million expenditure in year one that's the capex the capital expenditure required and then we see here savings coming in as well as a deduction of income tax which takes away from the savings which gives us our annual cash flows and then i can discount those cash flows at the cost of capital in this case i assumed eight percent cost of capital which depends on the company and on the industry that you are in but for this case i assume just eight percent and that gives me annual discounted cash flows in the lower part of the chart i have mapped them out in a graph and from here i can already see that the business case is positive so i will have a positive net present value of about 31.3 million from this initiative of installing new chillers and air-conditioning units and this means that we have a payback break-even point for this initiative in year six to be exact after five years and six months which we see here in the chart it happens basically in the middle of year six now i have many of these kinds of initiatives in my strategy paper so now i have to map these out and see when they are happening so when are the capex outflows happening these are marked here with the red triangles what is the project implementation period during which i work on the project but nothing really has happened yet in terms of benefits and then when do i get the benefits from the cash outlay that i have and you see here that for some initiatives there might be a big capex required and we see the benefits only one or sometimes even two periods later and other initiatives
like project four here the energy savings that we talked about in the factory we will see the impact basically immediately or at least in the same year in the same period that we have the capex outlay so finally we bring it all together and that is the last two steps so we take the baseline you see here on the top left the baseline p and l and add to this baseline p and l all the various business cases that we have come up with and that gives us a first draft of the budget and i have laid this out here so this is how it can look visualized a toy illustration of this layering step is also included at the end of this session we have the baseline here the base case in red and then each of the projects is an add-on both in terms of revenues now please note that not every one of those projects has a revenue impact but every project has at least a net profit impact cost savings initiatives for example would not have a revenue impact so we wouldn't see them in the first chart but we would see the impact in the second chart and then in the third chart here on the bottom the third graph is the capex outlay over the next couple of years so we see that this is not equally distributed heavy capex outlay will happen in the year 2022 and then again a very big amount is needed in the year 2025 so that's when we have planned for a major acquisition and that's why there's this big capex outlay now we are basically done with the first draft of the budget but we see already from the graph that we do have a challenge so first of all what we see is that previously the net profit was actually declining over the next five year period and this one we have already turned around so it's already increasing by 3.3 percent per year which is positive however we see a big gap in the next two years and of course we talked about it already earlier on this is because we have a cost increase in those years from supply chain costs as well as personnel expenses that we can expect so that hurts our profitability and we see a gap here and that is a challenge so probably management will not accept this kind of drop and the shareholders also will not accept this kind of drop so we need additional initiatives to fill this gap and the second thing that we can notice here is that we have a high capex very high it's almost as high as our net profit for the year so this probably requires additional financing so we need to come up with a plan of how we finance this additional capex which happens in the year 2025.
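going back to the business case arithmetic from the chiller example above, here is a minimal sketch of how it could be computed; the capex (52.4 million), 10-year straight-line depreciation, 20 percent tax and 8 percent cost of capital come from the example, while the annual saving figure is purely an assumption picked so that the output is in the same ballpark as the numbers mentioned (roughly a 31 million NPV with payback during year six).

```python
# minimal sketch of the business-case arithmetic: capex, straight-line
# depreciation, tax on the benefit, discounting at the cost of capital;
# the annual_benefit value below is assumed, it is not given in the transcript
def business_case(capex, annual_benefit, years=10, tax_rate=0.20, wacc=0.08):
    depreciation = capex / years
    cash_flows = [-capex]                                   # capex outlay up front
    for _ in range(years):
        tax = tax_rate * (annual_benefit - depreciation)    # depreciation reduces taxable benefit
        cash_flows.append(annual_benefit - tax)
    discounted = [cf / (1 + wacc) ** t for t, cf in enumerate(cash_flows)]
    npv = sum(discounted)
    # discounted payback: first year in which cumulative discounted cash flow turns positive
    cumulative, payback_year = 0.0, None
    for t, dcf in enumerate(discounted):
        cumulative += dcf
        if payback_year is None and cumulative > 0:
            payback_year = t
    return npv, payback_year

npv, payback = business_case(capex=52.4, annual_benefit=14.3)   # 14.3m/yr saving is assumed
print(round(npv, 1), payback)    # roughly 31 and year 6 with these inputs
```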
now there are a few years to go but as we are writing our strategic plan for the next five years we need to include the financing for this capex in the plan and this happens very often that you come up with the first draft of the budget and realize that there are gaps and that we have to go back to the drawing board and come up with new plans additional plans that fill those gaps and that's what we would do here after we do a number of iterations then we would come up with our final budget so that's basically it these are the seven steps of how we get to a budget and financial plan now we can do our resource planning for the years to come to meet our strategic objectives and to be in line with our budget now i showed you one approach and that is the broad approach that i have been taking over the past couple of years when building a budget i would like to at least share with you a few alternative approaches of how a budget can be built so there are five types that i want to share with you the first type is incremental budgeting now we have done a little bit of this already in what i have shown you we have actually started out with that approach incremental budgeting basically means that we use the past performance and we make small changes to that past performance so we do just some tweaks here and there we adjust what has been there in the past couple of years and we basically just adjust the existing budget with some increments to reflect the growth or decline of the company based on industry trends so that's called incremental budgeting another approach is to look at it the other way around from the end goal for example if you want to achieve a certain valuation or a certain sales number then we work it backwards and say okay what does it really take to get there this is the approach we would have taken in our iteration remember that we paused and said okay there's a gap in the next two years so we need to do something to fill those gaps so this would be activity-based budgeting so in the second iteration we would move to that approach and we would look at how we can achieve our target for 2022 and 2023 that our shareholders expect another approach is the value proposition budgeting approach and that looks at each line one by one and asks for each line what is the value that this line creates for the customer for our staff for stakeholders and shareholders for the business overall so basically each line item in the budget each profit and loss statement line needs to be justified and anything that does not bring value is basically cut or reduced and things that bring a lot of value are then increased and we try to maximize the budget in those areas that deliver strong results so that's value proposition budgeting approach number four is probably the most radical approach which is zero-based budgeting so compared to the incremental approach that we had before and also compared to the value proposition approach zero-based budgeting assumes that every business unit every part of the business has zero budget and needs to justify every single spend every single line item based on the objectives that the business unit wants to achieve so basically that means that every year we would build the budget again from zero from a blank sheet of paper and we would build it in a way that would justify why we need money to invest why we need money to spend here and there and we would basically look at our business completely fresh every single year now this is a lot of effort of course a very radical approach but
one that has become increasingly popular especially as we have had crises in the past couple of years especially with covid this approach can help to look at the business with a completely different set of eyes from a completely fresh perspective and typically results in very significant savings because things that have little value are just cut out and some things that have been spent on for the past couple of years are just basically eliminated or no longer spent on now all of these approaches are called static approaches to budgeting static because we assume that we come up with a budget for the next couple of years and this is basically it that's the budget that's the amount of money that management can work with and that becomes the north star the orientation point for all the stakeholders for shareholders for staff for management overall in managing the company now there are also methods that are called flexible budgeting methods because they can be adjusted or they allow the budget to be adjusted based on circumstances so one for example is cash flow budgeting so here we assume that we have certain lines of credit certain cash flows that come in and out of the business and if those cash flows change then the budget will also change accordingly this is quite similar to surplus budgeting so here we assume that we might not have enough money and enough budget for some of the initiatives that we want to do for example an acquisition but we might be surprised maybe we have some unexpected surplus coming in some unexpected savings a windfall basically happening in the industry a competitor exiting for example and we end up with a big surplus at some point and in the surplus budgeting method we would just say okay once we have that surplus then we make the acquisition or once we have enough money in cash then we buy this machine so it's quite a flexible approach which can be used for example as a top-up to an existing budget or it can also be used completely standalone so these were the basic budgeting approaches and a short introduction to budgeting and financial planning again it was a little bit technical and i tried to keep it as practical as possible not very academic i hope it was still useful and this leads us to the end of this session and then we will in the next session complete our strategic planning process with the organization structure and organizational planning [Music]
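as mentioned above, here is a toy illustration of step six, layering the net-profit impact of each strategic initiative on top of the baseline p and l to get the first draft of the budget and to spot the gaps for step seven; all of the figures, initiative names and impact patterns below are invented for illustration only.

```python
# toy illustration of step 6: draft budget = baseline net profit + sum of the
# per-year net-profit impact of each strategic initiative (all numbers invented)
baseline_net_profit = {2022: 3.9, 2023: 4.1, 2024: 4.6, 2025: 4.8, 2026: 5.0}   # e.g. billions

# per-initiative net profit impact by year (benefits often lag the capex outlay)
initiatives = {
    "energy savings":     {2022: 0.0,  2023: 0.2, 2024: 0.2, 2025: 0.2,  2026: 0.2},
    "new product line":   {2022: -0.1, 2023: 0.1, 2024: 0.3, 2025: 0.4,  2026: 0.5},
    "acquisition (2025)": {2022: 0.0,  2023: 0.0, 2024: 0.0, 2025: -0.3, 2026: 0.6},
}

draft_budget = {
    year: round(baseline_net_profit[year] + sum(init[year] for init in initiatives.values()), 2)
    for year in baseline_net_profit
}
print(draft_budget)   # compare against the long-term target year by year to find the gaps
```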
Business_Strategy_Lecture
Business_Strategy_03_The_External_Environment_of_a_Firm.txt
[Music] in today's session we will talk about the external environment of a company the external environment of a company is one of the very important input factors and understanding the external environment is very important before we even start the strategic management process so in today's session we will talk about what are those elements of the external environment we will talk about methods and ways of analyzing the external environment and i will give you a couple of frameworks and tools that you can use as part of the strategic management process when analyzing the external environment before we go into the theory i would like to show you a very practical example of where the external environment played a fundamental role in a strategic outcome of companies in a certain industry and for this i would like to take you back to the year 2010 when bp's deepwater horizon platform in the gulf of mexico had a major accident a major explosion and caused a major oil spill into the gulf so what happened back then was that the company bp was being sued for damages that happened to the people that lived in the area and as a consequence as you can see here the share price dropped dramatically it basically made bp lose half of its market value at that time now that this would happen to bp you could say that's not really the external environment because arguably bp could have prevented the accident there was a big argument after the fact about who was actually to blame whether it was one of the contractors or whether it was bp themselves but what is really interesting about this case is that the drop of bp's share price and the accident itself not only hurt bp but it also hurt shell and chevron and other competitors who had actually nothing to do with this accident itself so you can see here that the share price of shell and chevron also dropped at the same time by 20 or 15 percent respectively so in the oil and gas industry in this specific case the external environment like something that happened to another company in the industry had a fundamental impact on the competitors on other companies in the industry and what is also interesting about the oil and gas industry is that generally the share prices of oil companies and of course also with the share price the financial performance of oil and gas companies correlate with the oil price so the oil price is something that the companies cannot control and therefore the external environment the oil price has a very heavy influence on their financials and eventually also on their strategy so let's go into what are the elements of the external environment so looking at the elements again the external environment is defined as opportunities and threats so things that are outside of the control of the company and the three elements that we will look at when we study the external environment are first the general environment which influences the industries and the firms within it that's the first element the second element is the industry itself and the third is very specifically the firms within the industry so the competitive environment and the competitors within the industry so when we look at the external environment these are the three elements that we typically look at so before we dive into each of them and before i give you a couple of frameworks and tools for each of them for external environmental analysis we have to talk first about the methodology of how companies actually track and analyze the external environment and there
are four methods that companies typically use and normally in that order in that sequence so first we start by scanning scanning you can imagine as just monitoring on a daily basis on a weekly basis the news that come up in the industry basically the news that come up about the economy about the industry and about competitors specifically and the goal here of this scanning is to spot any early signals of potential changes um any any trends that might come up the nature of this data is very incomplete and unconnected so basically what it means for me and my day-to-day job is to look at newspaper articles newspaper clippings social media internet blogs anything that can give any indication of upcoming trends once we have spotted something and have roughly an idea of what is happening we start to monitor so we pick out two or three themes that we identified through scanning and start to look at these in more detail so we try to detect some meaning in those events and those sporadic news that come up that we spotted during the scanning process we gather more specific data and might use customer surveys we might purchase some some data from databases and also employee surveys to understand what's really going on in the industry or in the market once we have a thorough understanding of a certain fringe or a certain series of events we try to do some forecasting so we start to develop some models and give projections ideally as reliable as possible projections of what could happen in the future that typically involves quantitative models predictive studies for example forecasts of new technology trends compared to their behavior and so on and once we have a good forecast then we go one step deeper and try to assess for this specific forecast what might be the timing and the magnitude the order of impact that we could expect from this now the nature of of this data is more informed guesses to be frank there's a certain level of quantitative analysis but a lot here has to do with scenario scenarios scenario analysis because here it's very hypothetical and often quite difficult to get reliable data so scanning monitoring forecasting and assessing are the four methods that we typically use to assess the external environment so now i would like to jump in to the first element of the external environment which is the general environment and give you a framework that i like to use when assessing the external environment the general environment and this framework that i like to use is called d-step d-e-s-t-e-p it consists of six elements that we typically look at when we start the analysis of the general environment the first one is demographics so we are looking at trends of population size trends of age structure geographic distribution the ethnic mix of people and income distribution and a couple of other indicators that we study on an annual basis just to pick out one age structure for example is fundamentally important if you have an aging society that has certain consequences on the type of food for example that people consume the type of services that you have to provide for that aging population and we look there at long term trends five to ten year trends of what is going on in the demographics the second element e is economy quite straightforward here we look at the gdp growth we look at inflation interest exchange rates things that impact our financial performance in some way and that gives us give us an idea of what is happening in the larger economy the third element is the social cultural 
segment here it gets a little bit more sophisticated and a little bit more qualitative we look at things like religious values language communication education trends the values and the value system of the country itself and also customs and traditions so that's the social cultural segment d e and s the first three steps in the d-step model then next we look at technology and that starts very broadly with technology but goes then very specific into the technology that is coming up in the industry so we look at the technology penetration rates of certain technologies that affect our industry we look at investment in r d what our competitors doing what are other companies doing in the marketplace any innovations that come up and more specifically also communication technologies always also with the aspect of looking at new marketing methods or new marketing channels number five is the environment the physical environment so here we look at things that are important to us like energy consumption energy rates uh renewable energy that might come up in in countries natural disasters that might threaten our business water supply which is in the food industry where i come from very important so things like that and what the impact might be on our business going forward in the future and finally the political and legal segment so we're looking at global and local political events any changes in tax or regulations labor laws educational policies any state enterprise policies that might affect us all of these always with the aspect of what are those events that are happening and how do they impact us as a company and our industry now one note here i introduced the d-step model and some of you might be familiar with another model which is called pastel and i would like to just on a side note explain why i use the d-step versus the pastel model so pastel looks at political economic social technological environmental and legal factors so you see quite a significant overlap or similarity here with a d-step model but pastel does it splits up political and legal factors which in the bested model are combined in one area but it has two major shortcomings compared to step and which is the reason why i prefer the d step model over the pastel model the first of these shortcomings is that pastel does not mention demographics specifically and in my view this is one of the core inputs into a good strategy to understand the demographics and demographic trends customer trends especially if you work in the consumer phasing sector and second pastel starts with a political factor and when i do the strategy for my company i don't normally start my environmental analysis with politics i usually start with the customer base so demographics first and therefore the order of d-step seems to be more practical for me so that is the reason for that so after having looked at the general environment now we go one level deeper and look at the industry environment and the most well-known and very powerful model for understanding the industry environment is michael porter's five forces model and the model looks like this so we start off first by understanding the competitive rivalry in the industry in the very first step we need to identify actually what is the market that we look at what is the industry that we look at we need to identify the boundaries of that industry whether we define it very narrow or whether we define it broad and then we look at who are the competitors is that competitive rivalry very high or is it low in 
that case few competitors then we look at the bargaining power of suppliers so how strong are our suppliers again the rating here is from high to low the bargaining power of customers on the other side so how strong are our customers when they negotiate with us and how much can they determine or drive the price then the threat of new entrants so how likely is it that new players will come into the market and the threat of substitute products now not to confuse this with the industry itself substitute products are alternative products that people could go to if they don't want to use the products that we produce an example is a car and a motorcycle so if we define the car industry as the industry that we look at motorcycles would be a substitute so would be public transport for example you can buy the car or you can choose to use public transport instead and not own a car now these are the traditional five forces in recent years there is a sixth force that has been added to it and that is complementors so these are companies that are supportive of our industry that help our industry thrive so that is the sixth force here so going quickly through the five forces and explaining each of them first the competitive rivalry in the industry we would look at the number of competitors we would look at growth of the industry what is the cost structure of the industry fixed costs variable costs any factors of differentiation anything that sets apart one company from the other and any strategic stakes that different companies have in the industry and of course also exit barriers so how difficult or how easy is it to exit this industry then looking at suppliers again we would look at the number of suppliers any substitute products that would be supplied to us how important are we as a company as a customer to those suppliers so what is our power versus theirs what is our importance for their business how critical are the goods for our business so is the supplier very important or can i easily switch suppliers including looking at the switching cost and looking at whether those suppliers might actually forward integrate and become a competitor basically go into our field and compete against us head to head and similar for the bargaining power of customers we would look at the significance of the customer so how much of our total industry output is purchased by one customer switching cost of the customer to other companies then differentiation and standardization of products and is there any threat of backward integration meaning that one of our customers actually backward integrates meaning he becomes a competitor in our industry and competes directly with us these are the first three forces that we look at then looking at threat of new entrants so here we would look at barriers to entry things like economies of scale any capital requirements switching costs that are there any government policies that exist as well as expected retaliation so what happens if a new entrant comes into the industry how would the incumbents react and retaliate and fight that new entrant and finally we will look at the threat of substitute products again like switching costs how attractive is the substitute is it becoming more attractive than it was before and how unique is our current product versus the potential substitutes that begs the question is any factor of these five more important than others and greenwald and kahn in their book competition demystified
from 2005 given the answer to this that is quite quite clear what they are saying is that the factor that we really have to look at is barriers to entry that is the most important factor and they go as far as saying that we can ignore all the other factors and just look at barriers to entry as the most important part because if there are barriers to entry then it's very difficult for new firms to enter and that also means that we are protected as an incumbent we are protected and they say this is actually the fundamental element of a competitive advantage to being able to do something what rivals cannot so if rivals cannot enter our industry that means that we have somehow a competitive advantage therefore this barriers to entry is out of the five forces the most important force so after having studied the external environment in terms of general environment as well as the industry environment now we go to the last element which is the competitive environment and i would like to give you a framework of thinking about analyzing the competitor so the first question that we have to ask ourselves is what are actually the objectives of the competitor versus ours so what are our goals and what are the competitors goals how much is the competitor future oriented and how much risk is he likely to take versus are we likely to take so the future objectives that's the first step um it goes a little bit into psychology here so we try to understand what the competitor's plan might be then we go into the strategy so first our strategy our current strategy how are we currently competing and if there are any changes in the competitive structure does our strategy support those changes like are we prepared for those changes third we have to make some assumptions so we have to make assumptions about volatility how volatile is the future going to be are we assuming that things will more or less stay the same in the status quo or are there major changes major forces in the industry that drive change and against those assumptions that we are making what assumptions are our competitor likely making finally we have to try to get an understanding of the capabilities of our competitors so what are what are our strengths and weaknesses and how do we rate compared to our competitors a little bit more about strength and weaknesses will come in the next session so once we have analyzed those four elements then we can look into what the response to those is so what will our competitors do in the future where do we hold an advantage where does a competitor hold an advantage and how will any action that we take and the response that we get will change the relationship between us and our competitors so that is the first step and the framework of trying to understand competitors now going one level more specific there's an interesting tool that i like to use occasionally which is the strategy canvas the strategy canvas has three elements first we have to identify the critical success factors and you see them listed on the left side of the chart so here we talk about cost after sales service delivery reliability technical quality testing services design advice all of these are success factors for example in an electrical components business the second is the perceived performance and we see here value curves of the perceived performance so how do customers perceive the performance of the different companies so if you follow company a for example company a is very strong in cost and also quite strong in delivery reliability 
still strong in after sales service and technical quality um not so strong in testing services company b is basically trailing company a on all of these factors and company c is quite a bit behind now where it becomes interesting here is in element number five and number six the testing services and the design advice because that is something that company a and company b which seem to be the market leader in the industry are not really strong at and in the design advice not even really present and this is what we call a blue ocean so once company c has identified this blue ocean they can go full into this area and dominate this area and even though they are third and a laggard in the industry so quite behind the other companies they can still beat the others by offering these services and if they have the right approach to it and make these two services relevant to customers they might even win in this industry a great example of a company that has identified this and has changed their business model is actually dell adele was the first computer company that changed to a direct marketing model so directly selling to the customer and allowing the customer to customize their laptop or pc and that was very revolutionary so even though um they they had formidable competitors by focusing on this blue ocean element they managed to become one of the market leaders in the personal computer segment so this was a short introduction into external environmental analysis as i mentioned external environmental analysis always stands at the beginning of the strategic planning process it is the first thing that we do when we launch the process um in our company and it gives us a very good round view of what is going on in the industry in the next step and in the next session we will then look inwards and internally and look at the our own performance and our own strengths and weaknesses [Music]
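as a small illustration of the strategy canvas idea described above here is a minimal sketch in python of how the value curves and the blue ocean logic could be represented the companies the success factors and the scores are made-up assumptions that loosely mirror the electrical components example not data from the lecture

# illustrative strategy canvas: perceived performance scores (1 = weak, 5 = strong)
# all names and numbers below are assumptions for demonstration only
SUCCESS_FACTORS = [
    "cost", "after sales service", "delivery reliability",
    "technical quality", "testing services", "design advice",
]

perceived_performance = {
    "company a": {"cost": 5, "after sales service": 4, "delivery reliability": 5,
                  "technical quality": 4, "testing services": 2, "design advice": 1},
    "company b": {"cost": 4, "after sales service": 3, "delivery reliability": 4,
                  "technical quality": 3, "testing services": 2, "design advice": 1},
    "company c": {"cost": 2, "after sales service": 2, "delivery reliability": 2,
                  "technical quality": 2, "testing services": 4, "design advice": 4},
}

def blue_ocean_candidates(scores, leaders, threshold=2):
    """Return factors where every market leader scores at or below the threshold,
    i.e. underserved areas that are open for differentiation."""
    return [factor for factor in SUCCESS_FACTORS
            if all(scores[leader][factor] <= threshold for leader in leaders)]

if __name__ == "__main__":
    # print a simple text version of each company's value curve
    for company, scores in perceived_performance.items():
        print(company, " ".join(f"{f}:{scores[f]}" for f in SUCCESS_FACTORS))
    # company a and company b are treated as the market leaders in this example
    print("blue ocean candidates:",
          blue_ocean_candidates(perceived_performance, ["company a", "company b"]))

running this sketch would list testing services and design advice as the underserved factors which is exactly the kind of gap that company c in the example above could turn into a blue ocean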
Business_Strategy_Lecture
Business_Strategy_14_Organization_Structure_Controls.txt
[Music] bye [Music] now if you have followed this lecture series until here you will know that we are almost at the end of the sessions we have done all the hard work we have done the analysis we have thought about the strategy we have carefully applied the frameworks we have written our strategy paper and in the last session we also developed a financial plan and a budget now there's only one element left that we have to look at and that is the element of organizational design getting the organization ready for the strategy that we have prepared and this is what we are going to talk about in today's session now um in this session we will cover a couple of elements the first is we will talk about why is it so important that we think about organization design and getting the organization ready then we will look at some of the organization designs and org structures that help facilitate the strategy implementation third we will look at change management because a lot of time strategy involves major change for the organization so we will look at approaches to change management and then i want to end with a short note about ethics and ethical implementation of strategies we have a whole separate session dedicated to the topic of ethics so i only want to touch this very briefly today so let's go into it to explain why organizational configuration is so important i would like to look at mckinsey's 7s framework so first of all what is organizational configuration organizational configuration is the set of organizational design elements that interlink in order to support the intended strategy so we design an organization that in the end helps us implement our strategy and make the strategy successful the goal of course is to ensure that the organization's goal strategy structure systems and other important elements are aligned and are compatible with the strategy and a couple of decades ago by now mckinsey came up with a framework that outlines all these important elements and this is mckinsey 7s framework the starting point is in the middle the superordinate goals these are the vision the mission and the objectives that the organization has set for themselves and that is supported by the six elements that are surrounding those super ordinate goals so the strategy which is the major part of this um lecture series anyway then also the structure of the organization the systems that the organization has in place the style which refers to the leadership style of the top management and the management at large skills and capabilities of the organization and then the staff people in the organization a little explanation of some of these elements superordinate goals as i mentioned already are the overarching goals or the purpose of the organization as a whole so this includes the vision the mission and the objective all of these form the organizational purpose and these superordinate goals are at the center of the 7s framework all other elements should support these superordinate goals another one that might need a little bit more explanation is style so style is as i also mentioned already the leadership style of the top managers and what is important here is that the style leadership style of management should fit the other aspects of the 7s framework for example if you have a very highly directive or coercive style it might not fit very well with a matrix organization so the organization design and leadership style have to match each other staff is the kind of people that are in the organization staff 
here touches on most of the elements of the hr system of the human resource system that includes recruitment onboarding training and rewards and the key question here is does the organization have the right people that actually match with the strategy the last element that i would like to touch upon a little bit in more detail is skills so skills here refers actually to organizational capabilities so how are basically the individual talents and the individual capabilities and skills of the people in the organization used and transformed into organizational capabilities that are aligned with the strategy so with this 7s framework there are a couple of implications on how we do the organizational design the first is that organizing and building an organization design involves a lot more than just getting the structure right so we need to think about all the other elements of the 7s framework in order to design the right organization that fits our strategy and our superordinate goals the second implication is that all elements need to fit with each other they need to be aligned with each other and they are connected together and the implication of that is that if we change one element of the 7s framework then it's very likely that we have to make changes to other elements as well for example if we change the strategy we need to look at all the other six elements and need to see whether everything is still in line with the strategy and whether the strategy actually can be successful so we can't look at one element in isolation we always need to look at all seven together now this should highlight why it is so important that we talk about organization design in the context of strategy implementation next i would like to touch upon some of those organization designs and a couple of years ago this is now already 35 years ago there has been some research about how large organizations actually develop how they grow and how the organization structure changes accordingly most organizations start with a simple structure a simple structure where the owner surrounds himself or herself with a number of people that support the organization in the very early stages now as the organization grows in revenues and as more employees join the organization accordingly a different structure is required and that is typically a functional structure now the organization grows further and typically what happens then is there's diversification into related products so more products are being launched or maybe an internationalization where other markets are being explored that then leads to a divisional structure and from that divisional structure there might be additional international expansion which can lead to an international structure now i will explain the structures on the next slides that follow but just to explain this chart here the bold arrows are typically the strategic paths that organizations in the us have taken over the past couple of years and the thinner arrows are the strategies that can be applied in order to get from one of the structures to the other structure now what happens sometimes is that the organization does not move directly from a functional structure to a divisional structure by diversification or by adding more products sometimes there's also vertical integration so along the supply chain for example we acquire a supplier or we integrate other parts of the supply chain that also might lead to a functional structure and the
functional structure can be maintained in that case and if we diversify further then that leads to a divisional structure so for a divisional structure to happen typically we need some level of diversification for it to make sense and of course international expansion can also happen after a functional structure has been put in place and that then would lead to a worldwide functional structure that assumes that there's no further diversification happening and there's also another pathway which is taken when the company decides to diversify into unrelated areas so into a kind of conglomerate structure then that leads to a holding company structure and with international expansion that might lead into a worldwide holding company structure so these are the different pathways that an organization can take in order to grow and the organizational structure that follows accordingly now i would like to look at the middle path and the structures that are highlighted here in bright blue in a little bit more detail let's start with a simple structure when the organization is still young basically just a startup typically we have a very simple structure where there's an owner or top executive and this owner surrounds himself or herself with a number of employees so this is most common in organizations that have simple product lines maybe just one product line and typically not more than 15 employees in this case the owner manager or the top executive makes most of the decisions and controls all the activities the staff that we have here the employees serve basically as an extension of the top executive now as the organization grows we need more specialized resources and that is happening here so here we have a chief executive at the top and then we have functional managers specialized managers that report into the chief executive and one level below we have lower level managers specialists operating staff that support those managers and report into those managers now in a functional org structure those major functions are grouped internally and this typically happens when we have closely related products and services so no major diversification yet a vertically integrated structure is also possible the chief executive's most important responsibility here is to coordinate and integrate the different functional areas in order to avoid functional silos now the next structure once we diversify we grow into different types of businesses we have some related diversification or also international expansion we might come to a divisional structure so in this case we have a division general manager and he has the same functions reporting into him or her as the chief executive on the previous slide but with the difference that here we have other divisions as well so this structure is basically replicated across the different business lines so if you look at division a it could be one business line and division b a totally different business line so each division basically has its own functional specialists who are organized into their departments as in the previous functional structure and in this case also each operating division is relatively autonomous all the functions are there you see it here production is there engineering marketing personnel hr accounting all report into the general manager and they're fairly independent in their decision-making but the overall governance still sits with a central corporate office
with the chief executive and the corporate staff that coordinates across the divisions so that's the divisional structure finally for more complex situations where we have different geographies different product lines we might end up in a matrix structure a matrix structure basically combines functional and divisional structures now some individuals here will report to two different managers to their functional managers along the y-axis and to the project managers in this case the geographic managers along the x-axis so as i indicated already there are two main use cases for this kind of structure one is projects so staff from functional departments work on a project and therefore for that time in addition to their functional managers they might report into a project manager for that specific project the same can also be applied for geographical units so in this case we have product managers that have global responsibility for their product line and at the same time we also have regional managers and they have responsibility for the profitability of the business in their region now the beauty of this structure is that it provides quite a good deal of flexibility in terms of working on projects and in terms of working on geographical assignments while at the same time maintaining this kind of functional structure so you get the best of both now briefly about the advantages and disadvantages of each of those org structures starting with the functional structure a functional structure is great because you have a pool of specialists and that also helps coordinating those specialists and controlling them you have centralized decision making that helps with giving an organizational perspective across all the functions and career paths are very easily defined you have professional development in the specialized areas you grow in your function basically now the disadvantages of the functional structure are that sometimes we end up in functional silos and that might prevent good communication and coordination across those functions also sometimes these specialists in each of the functions have a tendency to develop a short-term perspective and focus only on their function see only the narrow picture rather than the bigger picture of the company sometimes also there might be conflicts between different functions and this might put a burden on the top level decision makers and it's quite difficult to establish performance standards across the different functions because they're quite different in their tasks and how they work now the divisional structure here we have increased control and we allow the corporate executives to address strategic issues and control the divisions a divisional structure is also quite quick in responding to environmental changes we have increased focus on products and markets at the same time resources are relatively easily shared across the functional areas and also divisional structures facilitate the development of general managers because in each division we have a strong general manager and if we do proper succession planning and coaching and training of the people that report into this general manager we have a constant pipeline of good general managers for the company now the downside of a divisional structure is that you have quite a bit of cost duplicated because we basically have the same kind of personnel operations finance people duplicated in each of the divisions sometimes this might
also lead to dysfunctional competition among divisions so they compete against each other rather than working together and this might actually distract from the overall performance of the company and from the overall goal that the company is pursuing with very different divisions it might sometimes be very difficult to maintain a uniform corporate image especially when there's a great level of diversification and the corporate divisions are quite different in their field of business that might create a different culture and a completely different image and divisions might over emphasize the short-term performance rather than the longer-term performance of the company overall finally with the matrix structure here we have increased market responsiveness because we have collaboration we have a lot of synergies we have quite good communication also resources are used quite efficiently because we always look at the two dimensions the functional and the divisional structures and bring them together and also in terms of professional development it gives a lot of possibilities and opportunities to people to shape their own careers the downside of the matrix structure is that this dual reporting can be confusing at times it can bring some uncertainty with regards to who is accountable sometimes we have power struggles especially if there is a divisional leader and a functional leader and they both want to give directions to the team underneath them and if these directions are in conflict with each other you can imagine that there's a lot of conflict in the organization and difficulty for the staff that has to report to two managers that are in conflict with each other working relationships are obviously much more complicated especially with two bosses that everyone has and also the human resource process is a little bit more complicated and we have to rely on teamwork we have some coordination that is needed between the functional and the divisional managers and that sometimes might slow down decision making now these were the main organizational structures there are other structural designs that can be applied but the ones that we touched upon here are the main ones and as the organization grows obviously the structure becomes more complex as well so next i would like to talk a little bit about change management because very often when we implement a strategy we take the company into a different direction and this requires a level of change and i would like to touch upon this for a bit now balogun and hope hailey have come up with a framework that talks about four generic types of strategic change and they look at it in two dimensions on the one axis the y-axis here what is the nature of that change is the change that is required incremental so are there small changes to be made or is it a big bang so major changes required as part of the strategy implementation and then the extent of that change so is it just a realignment that is needed so is the current situation already quite good and quite stable so we can just realign and reshape a little bit here or is it a transformation like a real big change if the nature of the change is incremental such as small changes and it basically allows us to use the existing culture the existing structure then we just need an adaptation so that's a very small incremental change and basically building on what we already have if we cannot use what we already have if we need to move on from the current organizational
culture and the current organization design then we need some kind of evolution now the nature of the change is still incremental so we will make small steps towards our goal but the goal itself is transformative so it's a major change to where we stand today so that's an evolution if we can use the current structure the current culture but we need to do it in a big bang in a big way because for example the company is in financial trouble then we need what is called reconstruction or also called a turnaround we need a major turnaround and finally if we cannot use the existing culture we need to change the culture and we need to make a major change in terms of strategy then we need a revolution so let me talk about three of those four elements a little bit in more detail starting with evolution so evolution means we have an incremental transformation and this incremental transformation can be achieved in two ways the first is organizational ambidexterity what does it mean ambidexterity means we exploit the existing capabilities and at the same time we also need to explore new capabilities so a change is required but we still can rely on the existing capabilities for example to help fund major initiatives that bring us to new capabilities so a couple of elements under this organizational ambidexterity the first is structural ambidexterity here we maintain the core but we create separate small teams so we change the organizational structure by creating several small teams that help with the exploration of new areas another aspect of this approach is that we need diversity rather than conformity so here the top management has to ensure that we have a diversity of managerial experience and that we encourage debate about future strategies so we want the diversity and we want people to express their opinions now the role of leadership in this approach is that leadership has to permanently balance encouraging new ideas and experiments with having the authority legitimacy and the recognition that is needed to really steer the organization and that's a fine balance that we have to strike here and finally we need tight and loose systems so exploiting the current existing capabilities needs a tight system and the exploration the experimentation needs a looser system and the glue between the two is a clear strategic intent and that has to be expressed by the top management now another way to think about evolution is by looking at the stages of strategic change so that includes defining stages of transition so small interim stages that bring us from where we are today to the transformation that we need in the future another element here that might be helpful is to define irreversible changes so make some changes for example a new it system that is very difficult to reverse we might not see an impact immediately because it takes time to build this it system but once the it system is in place these changes will be irreversible and there will be long-term impact from this this definitely requires sustained top management commitment and requires that everyone is aligned and supports the transformation and it also involves winning hearts and minds so culture change which is needed in this case but that culture change should happen through persuasion and participation so that's the element of evolution then the second is the reconstruction or the turnaround strategy here we talk about change that may be very speedy very fast very rapid and that involves an upheaval in the
organization but it does not fundamentally change the culture or the business model of the organization so it's not transformative in that sense this typically happens when there is a crisis situation and some elements of this turnaround strategy is first the crisis stabilization so for management to regain control over the system for example focusing on cost reduction focusing on revenue increases focusing on productivity improvement tight financial control with the focus of rapid execution so fast execution the next is management changes so typically in these turnaround situations management changes are required management in top management sales marketing management and finance management and the reasons for this is that well for one the old management may actually have been the cause of the problems that ex that requires the turnaround in the first place second um typically these situations this turnaround situations require to bring in some management with the necessary experience and finally having a new management in place also brings in new perspectives new approaches that will help the organization to move forward and change the way the organization operates another element that is important to turn around strategies is to gain stakeholder support so it's very critical that key stakeholders like employees shareholders banks are kept in the loop of what's going on and especially as improvements are made that they are informed about those improvements eternal strategy also might require to clarify again what the target market is and the core products are one of the key elements here is to get closer to the customer and also improving the flow of the marketing information marketing communication and as this is critical it might need some clarification of or some reminder of what the target markets are and what the core products in those markets are to refocus the organization and finally turnaround strategies typically require financial restructuring so this might include the capital structure it might include raising additional funds it might also include renegotiating some of the terms and conditions and the agreements that we have with creditors with banks and so on so that's a turnaround again a turnaround is a rapid change a major change but it doesn't change the organization organization culture doesn't change the business model so in that far it is not transformative in that sense finally the most radical and most transformative type of change which is revolution so revolution is also the most challenging type of strategic change because it involves rapid cultural change and at the same time the need for change may not be evident to the people in the organization therefore cultural major cultural changes is required a cultural shift is required and leading this kind of revolution may involve things like setting a new clear strategic direction and articulate very clearly what the direction is and here's where the ceo can really shine and can really make a major difference then again a revolution might require top management changes maybe because the existing management might be linked to the past culture and there might be networks of colleagues customer suppliers that are in place and that need to be disrupted and this change in top management might also send a very clear signal of the significant of the change to the organization might actually help to drive the cultural change revolution typically also needs multiple styles of change management definitely a revolution needs 
a directive style which is a prevalent style in this kind of strategic change but it also needs to be mixed with elements of persuasion and participation in order to overcome the resistance to change again as i mentioned already a cultural change is required so we need to look at the forces that are in place that are supportive of the existing culture and those elements that are hindering the change that we want to drive and we need to do something about it which i will talk about on the next slide and finally this revolutionary change needs to be monitored so constantly we need to set goals we need to set very clear unambiguous targets and monitor those targets and ensure that people actually do achieve those targets now with regards to culture change just a quick note a useful tool here is the so-called force field analysis so force field analysis gives us a perspective on the forces that are at work in an organization and that either help or prevent the change that we want to drive in the organization and there are three questions that we need to ask ourselves in this force field analysis the first question is what aspects of the current situation are actually blocking change and what do we need to do to overcome those blockage points the second question is what aspects of the organization actually are helpful and how can we reinforce these helpful elements of the current culture in the organization and finally what else do we need to add what needs to be introduced what needs to be added to achieve the change and here's an example of a force field analysis so the elements here highlighted in green are elements that are pushing so positive elements that help the change the red ones are the ones that are resisting and the blue ones are additional elements that might help in driving the change so that's a framework that helps us especially in situations where we need a revolution and where the strategy that we set is revolutionary to the organization now finally i would like to talk a bit about ethics again there's a whole separate session dedicated to the aspect of business ethics but i would like to end this session on organization design and organizational controls with a quick explanation of ethics starting off with the definition so ethics is a system of right and wrong that assists individuals in making decisions and deciding whether what they are doing is moral or immoral if it's good or bad if it's socially desirable or not now organizational ethics are the ethical guidelines for an organization so the values the attitudes the behavioral patterns that define what the operating culture of the organization is and that help us determine what an organization holds as acceptable behavior so it's a clear definition of which behaviors are acceptable in the organization and which behaviors are not acceptable in the organization and ethical orientation these are the practices that the firm uses to promote an ethical business culture now there are two types of approaches in ethics management one is called the compliance-based approach and the other one is called the integrity-based approach let me talk about the compliance-based approach first now the ethos behind this approach is that all employees need to conform with standards that are imposed externally for example by lawyers or by society in general so the standard here for the organization comes from external sources the objective is we want to
prevent any misconduct criminal misconduct for example we want to prevent the organization and people in the organization from getting into trouble the leadership in this case the ones who define what is ethical and what is not are typically lawyers or external advisors the methods that we can use here is of course educations or training we limit the amount of discretion the decision power that people have we do auditing we do controls and we have strict penalties in place the assumption behind this compliance-based approach is that we have autonomous being so people that act on their own but they need to be guided by material self-interest which means like avoiding penalties and maybe getting rewards for proper ethical behavior now um that approach is quite directive and quite controlling that is opposed to the integrity-based approach in the integrity-based approach the ethos behind the approach is that people are self-governed and they manage to conduct themselves according to standards that are chosen by the organization not externally but by the organization from within the objective here is to enable a conduct that is responsible and in line with the expectations that the organization has developed leadership here is really management driven lawyers might aid this process hr might aid this process others external advisors and so on might be supporting this but it's really management driven the methods here are again like educations or training and information leadership but also accountability organizational systems and and decision processes we also have auditing and controls and penalties but it is much more self-controlling and people have more accountability and the behavioral assumptions behind this is that people in the organization are social beings and they can be guided of course by material self-interest but also by values by ideals by peers by role models [Music] and this approach is integrity based approach is basically the basis for the next session so i don't want to talk about it in this session i want to talk a little bit about the compliance based approach at least so there's four elements that need to be in place in order to ensure that this compliance-based approach works and to be frankly most industry organizations that i have worked in this compliance-based approach is applied an example of the other approach would be consulting companies for example mckinsey that applies a more self-responsible approach now in order for a compliance-based approach to work we need those four elements are listed here first we need strong role models so it works only if the top management also acts in the same way that is expected from their staff then the second element is corporate creators codes of conduct so it needs to be very clearly defined what is acceptable and what is not it needs to be written down and made available typically this comes at the beginning of an employee relationship when the employment contract is signed the code of conduct is typically an addendum to the employment contract then it also needs a reward and positive and negative so also a penalty system an evaluation system that controls that behavior and takes actions positive if the behavior is positive and negative like penalties if the behavior has been negative and finally the organization needs clear policies and procedures of what to do if there's a transgression meaning if there's a misbehavior misconduct what will happen what are the policies in this case and what procedures what actions do we 
take maybe from an hr perspective in order to correct the situation so this concludes the lecture on organizational structure organizational design and controls and we ended with the aspect of ethics so in the next session we will go much deeper into this last aspect of ethics i think it's really important for companies to behave ethical and this is getting more and more important as we have social media and much more much more scrutiny also from the public on how organizations behave and i'm a big believer into the importance of ethical behavior especially in the long term in the long run but in the short run organizations might get away with a lot of things in the long run they typically do not so behaving ethically and making this part of the strategy is very important and that's why a full session dedicated to this in the next session [Music]
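as a small addition to the force field analysis discussed in this session here is a minimal sketch in python of how the driving and restraining forces could be captured and compared the forces and their weights are made-up assumptions for illustration only not taken from a real change programme

# illustrative force field analysis: weights from 1 (weak) to 5 (strong)
# all force names and weights below are assumptions for demonstration only
driving_forces = {          # aspects of the current situation that help the change
    "top management commitment": 5,
    "new incentive system": 3,
    "customer pressure for change": 4,
}

restraining_forces = {      # aspects of the current situation that block the change
    "attachment to old working practices": 4,
    "fear of job losses": 3,
    "functional silos": 2,
}

def net_force(driving, restraining):
    """Positive result: driving forces outweigh resistance; negative: change is blocked."""
    return sum(driving.values()) - sum(restraining.values())

if __name__ == "__main__":
    print("driving total:    ", sum(driving_forces.values()))
    print("restraining total:", sum(restraining_forces.values()))
    balance = net_force(driving_forces, restraining_forces)
    if balance > 0:
        print(f"net force +{balance}: reinforce the helpful forces and keep the momentum")
    else:
        print(f"net force {balance}: remove the blockage points before pushing the change")

in this made-up example the driving forces add up to 12 and the restraining forces to 9 so the net force is +3 which would suggest reinforcing the helpful elements while still working on the strongest blockers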
Business_Strategy_Lecture
Business_Strategy_11_Cooperative_Strategy.txt
foreign [Music] in this session we will cover the last set of strategies which are cooperative strategies so basically strategies where companies enter into arrangement with other non-related companies to collaborate on certain business issues so in the session we will cover reasons for why companies would enter into such arrangements we will talk about the actual strategies that companies can pursue when they want to collaborate with other companies and finally we will talk about some of the risks and pitfalls and challenges that come along with cooperative strategies and how to manage those risks but as usual we will start off with definitions and some terminology so first of all a collaborative cooperative strategy or strategic alliance which is the main type of cooperative strategy that is defined as a strategy in which firms combine their resources and capabilities for the purpose of creating a competitive advantage and that combination of resources and capabilities can come in various forms it can be a complete joint venture as we will see on the next slide or it can be just a contractual arrangement where companies collaborate and work together so types of those kind of strategic alliances are first of all the joint venture a joint venture is defined as two or more firms come together and create a legally independent company typically joint venture partners own equal percentages and they contribute equally to the ventures operations and the reasons why companies come together or the benefit of creating such a joint venture is to improve a company's ability to compete especially when there is uncertain environments it establishes a long-term relationship so this is you can imagine this is kind of a marriage where you have a common child which is the joint venture and that binds the two companies together and typically in joint ventures you also have the transfer of some tacit knowledge so knowledge that cannot be easily codified and that has to be learned through experience so this is one of the big advantages of forming a joint venture that because of that marriage and that investment into the joint company tacit knowledge can be transferred or shared between the companies um similar but different in terms of the holding structure is so-called equity strategic alliance so the difference is that companies don't hold equal percentage and don't contribute equally to the joint venture but in equity strategic alliance two companies just own different percentages for example 70 30 or 80 20 or something like 60 20 20 if there's a case of three partners once the new company is formed but it still involves equity so it means like a new company is being set up the third type of strategic alliance is a non-equity strategic alliance so in this case it is really a contractual relationship that two or more firms come together and form a contractual relationship they share their resources and capabilities under this contract that means that in such a relationship there is no separate independent company the original companies stay and collaborate just on that contractual basis and based on research this has shown to create good value but if it's very complex then a structure like this like a contractual structure might not be suitable anymore examples for these non-equity strategic alliances are licensing agreements like franchising distribution agreements between companies supply contracts but also outsourcing agreements between two companies so these are contractual relationships and 
partnerships that companies go into so next let's look at some of the reasons why companies might pursue a cooperation strategy and i want to separate the reasons by the speed of the market so the first is a standard cycle market in this case the main reason actually is to gain market power so reducing industry overcapacity for example by working together with a company through collaborations you might gain access to new resources to complementary resources that help you bring your company forward and might be of mutual benefit economies of scale might be established through a joint venture or through a contractual relationship with another company if it is a cross-border partnership then a cooperative strategy might help to overcome trade barriers maybe there is a major competitor and smaller competitors might come together in a collaborative cooperative strategy to meet the competitive challenges of the bigger competitor pooling resources could be a reason especially when it's about very large capital projects so it's easier to carry the burden of a large capital project if the burden is distributed over many parties and also learning of new business techniques new tacit knowledge as we already called it might be another reason in a standard cycle market now if you have a slower cycle market so where changes occur at a much slower pace one of the reasons to launch into a cooperative strategy might be to gain access to restricted markets where it's very difficult to enter because it's very capital intensive or where regulation restricts a company from entering then having a local partner and going into a local partnership might be a good way into a new market and also especially in slow cycle markets cooperative strategies might help to maintain market stability for example establishing standards across different markets and entering into an agreement on a certain standard the most common and most famous example is the a4 paper size which works with all copying machines around the world now in a fast cycle market where it is imperative to have very fast innovation and very fast product development cooperative strategies might help to speed up product development you might be able to speed up new market entry especially when it's a partnership across borders maintaining market leadership by using the capabilities and skills of a partner might play a big role here then forming an industry technology standard so it's easier or more predictable to innovate also linked to that sharing risky r d expenses r d expenses in fast cycle markets are typically very high because companies are permanently forced to do something new and therefore partnering with other companies and jointly sharing the burden of those r d expenses can be very helpful and very beneficial and then overcoming uncertainty also if you imagine you work in an industry where you have to permanently innovate and come up with new standards and new technologies then getting alignment on these new technologies with related companies can help to reduce the risk and reduce the uncertainty these are typically the main reasons for launching into cooperative strategies now moving on to the actual strategies so what can companies do how do these cooperative strategies look like starting with business level strategies there are four types of cooperative strategies the first one is a complementary strategic alliance so in this
case companies that go into this partnership have somehow complementary resources this can happen on a vertical level so for example a supplier alliance which means you cover different stages in the value chain you work together with your supplier you collaborate you enter into a long-term partnership with one single supplier and therefore you have certainty of the supply and the supplier also has certainty of the uptake of the their goods and you can jointly develop new technologies and new research and development on the horizontal level that's also possible so firms in the same stage in the value chain also can join and can use the resources combine their resources to create more benefit overall for the industry and maybe having complementing complementary assets help defend or fend off some of the competitors the second type of strategy for cooperative strategies is the competition response strategy that that's a defensive strategy and basically helps to defend against a competitive action that is taken by major competitors so by working with by combining the power of like smaller competitors they might be able to fend off a major dominant competitor in the market so these are competition response strategies a third set of strategies is uncertainty reducing strategies so in this case companies work together for example on risky r d or big capex projects capital expenditure projects um new product development and so on to reduce the uncertainty one of the areas here for example is in the car industry a couple of years ago car companies worked on different technologies so some companies worked on hybrid technologies some companies were working on an electric engine and some companies were working on hydrogen engines so by just aligning across the industry like aligning with the other companies in the industry on certain standards and a certain way forward these expensive r d efforts can be just reduced and by having an industry association for example and collaborating in that industry association you can reduce the uncertainty around those new product developments or research and development costs the last one is actually a negative one which is a competition reducing strategy now this is also called collusion there's two types of collusion there's explicit collusion so these are direct agreements about the pricing or about the amount of production um here opec comes to mind obviously on a country level countries oil producing countries come together and agree both on the basically they agree on the output quantities that are being produced and therefore they determine uh the global market prices for oil um but i should mention here that these explicit collusion strategies are illegal in most countries and uh there are regulators that monitor those behaviors very closely and if they see illegal activities like companies working together um illegally or against the law they will take action against those companies um it becomes a little bit more difficult if there's tacit collusion so if there's indirect coordination of production and pricing decisions by just looking at each other and observing each other's actions and responses then it's quite difficult for regulators to intervene example for this one would be for example gas stations and i'm thinking about the european market where consumer protection groups have highlighted this that there's tested collusion between the gas station operators adjusting prices as um following each other's prices and therefore basically providing no 
point of differentiation to the consumer and this is to the detriment of the consumer so they claim so important so some research on those strategies so the first set of strategies complementary alliances have actually the greatest potential to generate competitive advantage especially if they are vertical alliances so working with a supplier for example on the development of new technologies and collaborating on that level the last set of strategies competition reducing strategies usually have the lowest probability of really creating competitive advantage they usually come at the expense of the customer and the companies focus on collusion rather than developing a true competitive advantage so this is typically to the detriment not only of the customer but also of innovation and progress overall now moving on to corporate level cooperative strategies quick definitions and talking about some of the benefits so corporate level cooperative strategies are strategies where you collaborate for the purpose of expanding the business operations into new areas obviously on the corporate level so benefits of those kinds of corporate level cooperative strategies are the opening of new markets especially when you can't acquire a company so when m a is prevented for example by the regulator or by the government then cooperative strategies offer a great potential to enter new markets even though from a merger and acquisition perspective this is not possible also cooperative strategies require fewer resources and are less costly compared to an acquisition and they give quite a high level of flexibility in terms of the efforts to diversify the partners' operations i should highlight here that even if a merger or acquisition is allowed companies might still decide to first collaborate and cooperate with a partner just to test the market maybe test the company test the partnership and then follow this by a real m a so very often we see a cooperative strategy first that is then followed maybe two or three years later by an acquisition so there's three types of corporate level cooperative strategies the first one is a diversifying strategic alliance so here companies share their resources their capabilities to engage in diversification either on a product level so coming up with new products in new markets and new areas or on a geographic level so finding new markets for the existing products the second one is a synergistic strategic alliance so here it's all about creating economies of scope using the synergies across multiple functions so for example sharing services like hr services finance services corporate finance services and therefore bringing down the cost through the economies of scope of all the partners an example for this are shared services companies or shared service centers which are companies that operate for example in big industrial parks and provide services like hr like canteen services catering services maybe even accounting services and so on and a third type of corporate level cooperative strategy is franchising so franchising very specifically is a contractual arrangement between completely independent firms typically the franchisor the franchise owner and the franchisee which is typically a smaller company often an sme to basically put the products under the franchisor's trademark in a certain place and for a certain time frame and the franchisor controls the sharing of the resources and capabilities with the partner and the franchisee provides those
resources so obviously in the restaurant industry this is very common that you have a brand owner such as rbi restaurant brands international for the case of burger king for example and you have a franchisee that operates the burger king restaurant using their resources and their own local capabilities to run the burger king restaurant in the geography that has been contractually agreed so again based on research corporate level cooperative strategies are typically broader in scope and more complex and therefore they are also more challenging and typically also more costly so this is something to look out for finally we have to look at some of the risks and some of the challenges that come along with corporate level or even business level cooperative strategies so some of the risks are related to the management of those contracts of those contractual relationships so definitely one of the risks is that the contract is inadequate so maybe it doesn't cover all the areas maybe there are loopholes and this can pose a legal risk from a contractual perspective the second risk is that maybe companies enter into those agreements and misrepresent their competencies what does that mean either they overstated their competencies from the beginning and then when it comes to the actual collaboration the actual competencies are disappointing or they don't send their best resources and all their competencies into the joint venture or into the collaboration effort and therefore the joint venture or the collaboration effort falls short of the expectations and maybe also the expectations of the different business partners are not clearly represented in the contract not clearly stated in the contract and therefore there are discrepancies between the expectations of the management and the actual outcome so these are all risks related to the contract itself and then there are also risks related to the behavior of the partners so sometimes partners just fail to use their complementary resources as i already mentioned maybe there's the right agreement already in the contract but the companies just don't send their best people sometimes it's also a way to just hold the alliance partner's specific investments hostage and well just misbehave and make use of the fact that the alliance partner has already invested a large amount of money while actually not contributing to the joint purpose but just trying to benefit one-sidedly from the agreement maybe there are different spending philosophies maybe one company is quite generous in the spending the other company is quite careful in the spending and therefore there's a clash when it comes to the joint venture and there might also be different working standards safety standards for example where one company has very high standards is very sophisticated and is very careful in not making mistakes or causing any safety incidents and the other company has much lower standards and that can of course create conflict but it can also create a reputational risk to the company with the higher standards so all of these are risks that have to be considered when entering into a collaboration or a cooperative strategy with a company so how to manage this there are two ways to think about this one is to just minimize the cost and therefore minimize the exposure and minimize the risk so in this case you might put in the contract specific clauses and be very specific about how the strategy should be monitored how the behavior is controlled putting a
lot of restrictions and controls in place and therefore well creating a level of mistrust and being overly cautious on each of the contributions a tit-for-tat strategy where you carefully watch what the other party is doing and only if the other party moves you also move the result of this cost minimization strategy is that it typically results in both parties just trying to minimize the cost and the risk from the joint activities and therefore not living up to the full potential of the collaboration a much better and a much more trusting approach is the approach of opportunity maximization so in this case the contracts are much less formal and have fewer constraints on the partners there is a possibility to jointly explore how the resources the capabilities can be shared always with the purpose of creating the best value the overall focus is less on controlling but much more on the business activities themselves and it's really always very top and bottom line focused not cost focused at all but creating the biggest benefit and the result of this is typically that there's a much higher likelihood of value creation from this opportunity maximization now it needs to be mentioned that this needs trust and trust takes time to establish it takes a certain management approach and here a certain level of business friendship if you can talk about friendship in business terms but a certain level of friendliness towards each other is definitely required here and based on research this works better on a domestic level and is much more difficult to establish in international cooperative strategies right or wrong it seems we trust our own countrymen more than people in other countries according to research at least but the big takeaway is that cost minimization is always the worst approach and opportunity maximization with a trusting relationship between the partners is a better approach and can create higher value so that brings us to the end of this session and it also brings us to the end of the strategy formulation process so we have covered the most important strategies and with the next set of sessions we will move into strategy implementation [Music]
Business_Strategy_Lecture
Business_Strategy_10_International_Strategies.txt
foreign [Music] in this session we are talking about a set of strategies that typically becomes relevant after the company reaches a certain level of maturity and these strategies are international strategies so at some point in the lifetime of the company once the company grows it might decide to look for markets outside the domestic market and in this session we will talk about reasons why the company might do so then we will talk about the actual strategies that the company can apply and finally we will talk about some of the pitfalls and some of the risks and challenges that come with internationalization but before we do so we will start with a definition so international strategy is defined on the basis of the domestic market so it's a strategy through which the firm sells its goods or services outside its domestic market and the relevance of the domestic market is something that we will cover shortly in this session as well next let's look at some of the reasons why a company might decide to internationalize and the first reason is increased market size so the company might at some point decide that the domestic market is getting too small or growth opportunities are limited so international markets offer an opportunity to extend the reach to new customers but it's also an opportunity to extend the product life cycle maybe in the domestic market the product life cycle is coming to an end or is reaching a plateau and the time lag in consumer behavior that we see for example between the u.s and emerging markets or between the european markets and emerging markets is something that companies can take advantage of also there might be areas where the specific product that a company is in is not yet available and therefore there's unsatisfied demand in some countries that the company can leverage and therefore increase its market size the second reason and all of these are ordered by importance and relevance based on research so the second reason companies typically use to internationalize is return on investment so big investments for example in plants in r d in new products they come with a certain fixed cost and international markets referring back to number one the increased market size give the opportunity to spread this fixed cost over a larger market and over larger output quantities and therefore there are higher returns on the investment if the products can be sold in a larger or in a global market the third reason and you see all of these are somehow linked is economies of scale so with higher output comes a decrease in the unit cost typically by increased capacity utilization of the existing facilities and also the opportunity to integrate some of the operations on a global scale an example for this is airbus where basically the production base is split across many countries and the parts are being brought together to assemble the final airplane or another example is of course apple i think on the back of the iphone it says designed in california but produced or made in china so apple is a company that uses those economies of scale and the global operations and global cost advantages to its own advantage reason number four is a little bit different that's learning so companies sometimes decide to go outside of their home market to learn about new consumer trends new opportunities to learn about new evolving technologies in global markets and so on so an example for this was the telecom sector in the early 2000s so some european companies actually
ventured outside of europe went to japan and korea knowing very well that they wouldn't have a chance to get a big market share in those countries but they used the opportunity to be in those countries and to learn from the incumbents and from the consumers about new trends and then bring those trends and those technologies to europe so learning is definitely an important factor when companies think about internationalization fifth location advantages so some locations have just lower cost for example lower labor cost but maybe also lower cost raw materials easier access to raw materials sometimes companies internationalize because their supply base is already international so they see the opportunity to follow their suppliers and a big reason as mentioned already is also the access to consumers in emerging markets especially next briefly some trends in the international and global market environment and the trends point in two directions actually let me start with the negative one and it's marked here in red there is something that we can observe which is called the liability of foreignness let me elaborate a little bit on what that is liability of foreignness is defined as a set of costs that is associated with certain issues certain problems certain challenges that come about when entering a foreign market so what can those costs be for example when you enter a new country you might be unfamiliar with the operating environment and therefore you might incur additional costs costs that the local incumbents might not incur economic differences cultural differences might play a role here which might lead to some difficulties at the beginning and some losses or again increased costs and increased difficulties that companies face administrative differences for example here we are talking about licenses or approvals regulations that might exist in certain countries that new incoming companies are not familiar with and therefore they have a disadvantage compared to incumbents and also the challenges that come with coordinating over distances time difference being probably the smallest problem to solve but also delays in decision making delays in transport of goods from one country to the other and so on that is called the liability of foreignness and that is something that we can observe is getting more and more challenging especially as we see some nationalistic tendencies in some countries just referring to the trade war between china and the us that we saw a couple of years ago so this is part of the liability of foreignness if you're a foreigner in another country on the other side we see a positive trend towards regionalization an example here for this is the european union or the asean economic community or aec so there's an increased focus on the regional markets rather than just country based markets and that helps companies to expand outside of their domestic market so we see these two trends kind of clashing with each other and definitely contradicting each other if you would have asked me a couple of years ago i would have said there's definitely a stronger trend towards regionalization but over the last three four years you can see quite a strong trend especially also with the corona crisis that the liability of foreignness still plays a role and is still an important factor for companies so next let me talk a little bit about the strategies that companies can apply when internationalizing and i want to start with talking about international
business level strategies so specifically strategies for a single business focus and let me start off with two propositions that i want to make here the first one is on an international level the home country of production so the original country where the company is from is often the most important source of competitive advantage meaning if your industry is strong in your home country then you have a real opportunity to take this advantage and bring it to other countries so that's the first proposition the second proposition is the stronger the position that you have in your home market the better are the chances for success in the international market so if you're already the market leader in your home country then you have a better chance to be also the market leader in international markets i should mention though that exceptions exist and one exception that comes to my mind here is yum yum actually yum yum is a producer of instant noodles here in thailand and in thailand definitely it is not the market leader it is probably the number two or number three in the market behind mama which is by far the more famous and more popular brand here in thailand but if you go to europe actually yum yum is the one that is mostly on the shelves so in europe in the international markets it is the predominant product so there are some exceptions but in general the company that leads in the domestic market is typically also the leader in international markets now when you think about international business level strategies a good framework and a good way of thinking around those strategies is michael porter's competitive advantage of nations so these are factors that are relevant for having a competitive advantage internationally and the first one is factors of production so if the factors of production are in place meaning qualified labor for example or raw materials that are available if you have an advantage in these ones then you probably have an advantage over your competitors overall so meaning if in your home country there are strong factors of production that play towards that advantage for example for the automotive industry a very strong engineering mindset and engineering knowledge in germany or in the chemicals industry also state of the art chemical engineering capabilities then once these companies internationalize once mercedes or bmw comes to international markets then they will bring along these competitive factors and it will give them a strong competitive position the second is demand conditions so think of the automotive industry in germany you have very strong customer demand or customer expectations towards a car you have no speed limit on the roads at least for now and consumers just love their cars and i can remember as a child we made a big round with a bicycle around cars just to not scratch the car because that was seen as a very big problem if you would do that so there's a very big automotive culture in germany and therefore the german car makers have to meet those demands of the consumers and have to innovate on a permanent basis and therefore they also bring these capabilities to international markets and take a leading position there related and supporting industries so if you have in your home country those supporting industries for example for the watchmakers in switzerland all the supporting suppliers of these very small parts and also the training and the education of the watchmakers is all concentrated in a small area around
geneva in switzerland and the surrounding area so there's a very strong focus on these things and therefore that also helps when they go international it gives them an advantage an edge and finally the strategy the structure and the rivalry in the industry which also fosters the growth so think of the car industry in germany again here you have a very strong rivalry between bmw mercedes volkswagen porsche they all fight for the leading position every year and because of the very demanding customers coming back to point number two they are forced to permanently innovate and be state of the art in their technology and in what they bring to the car and therefore that gives them a very strong position when they go into the international market so meaning the more competitive your domestic market is and if you are able to compete in a very competitive domestic market then it makes it easier to go into international markets because you have an advantage there and that's the big takeaway here as well so companies that operate in very country specific industries so in industries that your country is well known for they have a higher chance of succeeding in international markets as well so that's business level strategies now let's talk about internationalization philosophies so when companies internationalize and this is maybe also quite relevant for corporate level strategies what are the philosophies that they can use when they expand and in this 2x2 matrix you see the two axes so on the x-axis it's the pressures for local adaptation what does it mean how big is the pressure to really be considered as a local company or very well embedded locally is that high or low and on the y-axis you see the pressures for lower cost so for more standardization and scale and cost optimization and the three relevant strategies three plus one i would say the three relevant strategies are a global strategy if you have high pressure on low cost and relatively low pressure or unimportance of being considered as local then the multi-domestic strategy which is the opposite so multi-domestic means you pretend basically to be local or you try to be as local as possible in your approach and you have almost no pressure or no intent to use global scale and i have examples of this in a second if a company faces both so the need to be considered local and at the same time also the pressure to have lower costs and use the scale then that's called a transnational strategy let me say upfront that this is very difficult to do because these two typically contradict each other but there are companies that have attempted this and have done quite well in doing so there's one empty field so there's another strategy which is simply called international strategy and you will see if you read the footnote here that these strategies were relevant maybe after world war ii when companies really started to look outside their home country for the first time an example here is coca-cola the companies went outside of their home country and they basically set up production bases or supply bases in each of the countries without really looking at global scale but these times where you don't need to focus on costs are basically over and even though coca-cola didn't fundamentally change their strategy what changed for them is that they have much more power in terms of marketing and a much stronger brand and therefore they basically moved to a global strategy
because they use their brand strength to really bring down the cost for them now let me give you the definitions quickly so global strategy means that you use a standardized approach standardized products across the markets and often the competitive strategy here is dictated by the home office multi-domestic again the opposite is the strategy where the decisions are decentralized so you let each of the business units in each of the countries make the key decisions and the home office the original country where the company is from basically has just a facilitating role and then what that leads to is that brands and products are tailored to each of the markets and then the third strategy as i mentioned transnational strategy tries to achieve both the global efficiency and the local responsiveness it is very difficult to do because these two are conflicting as i already mentioned let me give you an example for each the first example that came to my mind is tesco so tesco is a company that uses an approach where basically in each country they go to and you see here poland hungary and the uk for example they use a standardized approach and leverage best practices so what you see here for example is the packaging of the value line you see the store design including the decoration and everything looks very similar actually if you just look at the pictures here from afar and you don't read the advertisement slogans you wouldn't know which country you are in just now and even the signage of course they use the local currency with the pound the forint and the zloty the local currency signs but the idea the concept is basically the same so just leveraging the same ideas so leveraging best practices across the globe and the strategy the signage the campaigns and so on often being dictated by the home office the complete opposite and i stay within retail here is groupe casino so groupe casino is the company for thai listeners that used to own big c here in thailand and also big c in vietnam the group owns a portfolio of companies around the globe and this is the current portfolio in the year 2021 they still have argentina brazil cameroon in africa colombia and france obviously their home market and uruguay and you see that each of the brands is different first of all and if you go into the country and ask people about those brands they would probably tell you that those brands are local brands so libertad is an argentinian brand pão de açúcar is a brazilian brand casino is a french brand and disco is a brand from uruguay and same for thailand at that time a few years back if you would have asked people what big c is people would have probably told you that it's a thai company a thai brand so they intentionally try to be local try to be considered as local and in the back office they still use certain policies and certain best practice sharing to really get some of the benefits of being a global company but otherwise the companies in each of the markets are fairly standalone and fairly free to make decisions now finding an example for a transnational strategy is a little bit more challenging but the one i found a couple of years ago is hsbc so the bank in 2003 2004 they drove a campaign a global campaign where the slogan was hsbc the world's local bank and that's pretty much describing what a transnational strategy is so trying to use the global scale and using local knowledge or positioning the company as a company that knows the local
geography very well and you see here some of the examples of how they try to portray themselves as being knowledgeable in different markets and this was really a global campaign that was run in all the countries so that's a transnational strategy so next let's talk a little bit about entry modes so once a company decides to move into international markets what are the options to enter the market so the first entry mode and the most simple one is exporting so once a company decides i want to sell my goods in another market i can find opportunities to export this comes at a very low cost but then also relatively low control especially if you don't have operations in the market it's quite difficult to control and see how consumers react and so on the next level of entry mode is licensing that includes also franchising for example basically finding a local partner and then giving the local partner a license to operate on your behalf so this is something that for example beer companies do they find local partners and they provide the recipe for the beer and allow the company to brew on their behalf same for coca-cola for example which uses local bottlers and gives them the license to bottle and basically fill the bottles and the cans and then sell the cans under the coca-cola brand now this comes at quite low cost also relatively low risk but the challenge here is also little control so companies like coca-cola or heineken or carlsberg and so on when they use these strategies they have relatively little control over what their bottlers do and they need to put extra resources in place to make sure that the marketing and the quality and everything is to their standards otherwise it's damaging to their brand it gives you low returns but again it comes at a low cost so it can be a very profitable model to do licensing then the next level of sophistication is strategic alliance that includes joint ventures for example where you find a local partner and you jointly invest into the new venture of course here you have shared costs you share your resources as well the risk is also shared but finding a good partner a trustworthy partner is often a challenge and then also the problem of the integration so you really have two different cultures country cultures and company cultures that meet each other so that can often be a challenge one step further is acquisition so that means you buy a company in the market that you want to enter with an acquisition you get very quick access to the new market well the negotiations are often complex and the whole strategy of acquisition usually comes at a higher cost and you might still have problems with merging the domestic operations and the overseas operations and as we discussed in one of the previous lectures acquisition strategies often fail so it's definitely challenging but it gives you of course also immediate control over the business if you acquire the company and finally the highest level of complexity and sophistication is to have a new wholly owned subsidiary which is also called a greenfield venture so basically coming to a new country and building a completely new facility and entering basically from scratch this is very complex it can be costly and very time consuming to get familiar with the local regulations and acquire all the permits and so on to open the business it comes at a higher risk but it gives you also maximum control so these are the five entry modes and across these five entry modes from number
one to number five there's increasing complexity but also increasing control there are increasing risks as we move down but also increasing returns so all of these strategies are viable but have to be assessed very carefully one note here typically there's something like a dynamic of entry modes so meaning you start with exporting you start with licensing and as the business matures and as the market becomes more relevant then you move down the hierarchy down the levels and increase the complexity but also the level of control there are also other examples for example starbucks used to operate most of its markets directly and recently moved a step back and focuses only on core markets and those markets that are not core for them they license out so there are some dynamics going back and forth depending on the strategy of the company now having talked about the entry modes finally we need to talk about some of the risks that come along with international strategies and there are two major types of risks the first one is political risks so obviously if you enter a new geography you need to be sure that the country is politically stable there's the risk of war military engagements unknown outcomes if there's local conflicts between different parties if there's political protests or political instability and that can all pose a security threat so that is definitely something that companies need to monitor and need to be careful about when they invest money in a foreign market same with regulations regulations might change there might be protectionist political trends for example the government might decide to ban foreign companies from doing certain activities or taking back some of the assets that foreign companies hold so you always have to be aware that this is a risk that you are exposed to and changes of law can always occur again like the nationalization of assets here as a main risk besides the political risk there's also economic risks so macroeconomic risks for example debt levels can play a role here prices can be volatile there is risk of inflation currency risk is a second risk to be aware of so especially if a market gains in importance but then the currency fluctuates heavily that means also that back in the home country the total overall profits of the company translate accordingly so this is something also to be aware of and to manage carefully trade agreements might play a role so sometimes advantages that you might have by being in the local country might go away if the country enters into a trade agreement with another country and then the export and import of goods becomes basically duty free so free from any duties or any major trading costs and that might change the economics of industries entirely there's also the danger of counterfeit products and also the fight against counterfeit products and then finally we might have some financial risks so we had the 2008 financial crisis that affected global markets a credit crunch and financial instability that comes with it that is something to watch out for so that brings us to the end of this session so internationalization strategy is definitely something to look at once a company reaches a certain level of maturity and something that can help to expand the market and open up for new opportunities [Music]
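as a small aside the 2x2 matrix of internationalization philosophies discussed earlier in this session with the pressure for local adaptation on one axis and the pressure for lower cost on the other can be summarized as a simple classification rule the sketch below is only an illustration of that framework and the function name and the boolean inputs are assumptions made for demonstration rather than part of the original lecture material

```python
# illustrative sketch of the internationalization-philosophy matrix from this session;
# the two pressures are qualitative judgments, modeled here simply as booleans

def internationalization_philosophy(high_pressure_for_local_adaptation: bool,
                                     high_pressure_for_lower_cost: bool) -> str:
    if high_pressure_for_lower_cost and high_pressure_for_local_adaptation:
        return "transnational strategy"   # both pressures high: hardest to execute
    if high_pressure_for_lower_cost:
        return "global strategy"          # standardized products, home office driven
    if high_pressure_for_local_adaptation:
        return "multi-domestic strategy"  # decentralized, tailored to each market
    return "international strategy"       # largely historical, low pressure on both axes

# e.g. a retailer that must appear local but has little need for global scale
print(internationalization_philosophy(True, False))  # multi-domestic strategy
```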
Business_Strategy_Lecture
Business_Strategy_01_Introduction_to_Strategic_Management.txt
[Music] foreign [Music] hello and welcome to my business strategy course my name is oliver gotcha and over the course of the next 15 sessions i will be giving you an introduction into strategic planning and strategic management thank you so much for joining me on this journey in today's introductory session i would like to answer three questions the first is what actually is strategy the second is why do we need a strategy why do companies need a strategy and the third one is who is actually the audience of a good strategy towards the end of the session i will be giving you an overview of what to expect for the rest of this course so let's jump right into it and start with the definition of strategy and a good starting point for this is always to go back to the roots of the word or the origin of the word the word strategy comes from two ancient greek words the word stratos which means the army and agein which means to lead so literally strategy means to lead an army and obviously we want to lead the army to victory in the war so you will see that we are in the realm of war terminology here and a lot of the terminology that we will use is actually coming from that war terminology for example to attack a counterattack or when we talk about the moat that protects us from competition so all of this is war terminology but as we talk about business strategy we need a bit of a more peaceful definition as well and here i would like to refer to the definition of hitt ireland and hoskisson they define strategy as the integrated and coordinated set of commitments and actions designed to exploit core competencies with the aim of gaining a competitive advantage so there are really three parts to this definition and if you allow me to take them apart briefly the first part is a set of commitments and actions and that is nothing else but a plan so a strategy is a plan integrated and coordinated here means that the plan must be integrated across all the functions of a company and also coordinated across all the different departments of a company the second point focuses on the weapons that we will use to put our strategy in place and that is our strengths our core competencies that we have so we will try to use our core competencies as weapons against the competition and finally we talk about the goal of the strategy and that is to gain a competitive advantage so basically to be better than the competition there are a couple of other definitions of strategy but you will see they are all fairly similar chandler in 1963 he focuses on long-run goals so a strategy is something that is long-term and defines the objectives of an enterprise and the courses of action to achieve those goals and objectives porter talks about strategy as being different from the competition and choosing those activities that deliver a unique mix of value to the customer for mintzberg in 2007 strategy is simply a pattern in a stream of decisions and johnson defines strategy as the long-term direction of an organization so all of these definitions are fairly similar the next question now is why do we actually need a strategy and if you allow me to rephrase this why do companies need a strategy today more than at any time before what has changed over the past decades that makes having a strategy so important now there have been four forces two of them have been there for a very long time basically forever and two of these forces are fairly recent and fairly new and here are the four forces or the four challenges that
businesses face today two of these challenges in the lighter color here are economic cycles and product life cycles well economic cycles are the ups and downs of the economy and this has always been there and product life cycles basically go from the product development to the boom of a product to the product decline where sales go down and the product is not so interesting for customers anymore so companies have to manage through these economic cycles and they have to manage their product life cycles and come up with new products and with innovation to revive their product portfolio now the things that are relatively new here are globalization and technological advancement and these two forces actually make strategy so important for a company nowadays and let me elaborate a little bit why that is talking about globalization first there are pros and cons of globalization there's benefits positive impacts on companies and negative impacts on companies and i would claim that both the positives and the negatives are important to be addressed in a good strategy when we talk about the benefits briefly benefits of globalization obviously are that companies now have access to global product markets so a much larger customer base trade barriers are reduced so both the shipping cost but also of course the customs are being reduced and therefore it becomes cheaper to trade internationally companies now have access to a global supplier base and therefore also to the skills and also the resources of those suppliers and this can also lead to cost reduction for example by leveraging global labor markets with maybe lower labor rates than in the home country and of course finally the opportunity for international cooperation or cooperating with international partners that maybe didn't exist some 50 60 years ago challenges of course are that companies now have to face global competition you cannot just look across the street at your local competitor you have to compare yourself with global companies with very different resources capabilities and also the economic context they are in also there's higher exposure to global risks and very specifically to political and economic risks so both benefits and challenges but both of them have an impact on strategy and need to be addressed by a good strategy now moving on to technological advancement looking at household penetration rates of products i think this is a good indicator to see how fast technological advancement is moving nowadays compared to a couple of decades ago take the automobile for example it took 60 years from the invention or the advent of the automobile to reaching 25 percent of u.s households the telephone was a little bit faster there it took a little bit less than 40 years to reach 25 percent of u.s households but look at what happened recently tv already took only 20 years but then more recent technologies like internet mobile phone pc took less than 20 years to reach 25 percent of u.s households and it's getting faster and faster and that is also represented on the next page which looks at technological advancement over time so starting from the 1970s to the early 2000s and you see that in the 1970s for color tv or basic cable the growth rates were fairly flat and as we move into the 1990s and into the 2000s the growth rates of the new technologies that come in are much much faster so they spread much faster and have a much wider reach in a quicker time horizon so it becomes more and more paramount for companies to be on top of the
technology and to have solid plans in place to address competition that brings in new technology finally we have to ask ourselves are there companies that are more in need of a strategy than others and greenwald and kahn in their 2005 book competition demystified answer this question and they start out with all markets and ask the first question do you have a competitive advantage are there competitive advantages in the market if there's no competitive advantage and all companies basically have the same resources and have the same kind of capabilities and compete for the same customer base then there's only one way and that is to be operationally effective that means efficiency efficiency efficiency and reducing cost and gaining ground over the competition by being just the most efficient company in the market but if there are competitive advantages then the next question is is there something like a single dominant firm a single dominant firm for example could be microsoft in the software and computer systems market if there is a single dominant firm then the question is is it you or is it others if it's you then that's fantastic because then you just have to manage your competitive advantage you are in a very advantageous position already so you just have to manage it and make sure that no one else catches up to that if you are in the other buckets and you don't have that competitive advantage then greenwald and kahn propose that the best thing is to exit the market gracefully because if there's a single dominant firm and it's not you then it will be very very difficult to gain economic advantage or strategic ground against your competitors now if there's no single dominant firm then that becomes interesting that becomes the interesting playing field of strategists because here we face very difficult strategic decisions so we have many companies that compete and there are competitive advantages but the question is how to get those and the bulk of this course will be focused on these situations where we have no single dominant firm but we are fighting for gaining a competitive advantage next we have to ask ourselves so who cares who cares about strategy who is actually our audience who should we focus on when we develop our strategy and here we have to look at stakeholders of the company stakeholders are defined as individuals and groups who can affect or are affected by the strategic outcomes achieved and who have enforceable claims on a firm's performance so who are those stakeholders there are four large groups of stakeholders the first group are capital market stakeholders so these are basically the people and companies that provide capital to our company for example the shareholders but also major suppliers of capital the banks and the debt holders for our business that's the first group the second group are product market stakeholders so these are our customers our suppliers the host communities we are in but also labor unions the third group are organizational stakeholders employees managers and non-managers for example board members who govern the company and the fourth group are institutional stakeholders governments and non-governmental organizations and you see from this chart that there's a number of stakeholders that we have to keep in mind when we write our strategy and that also means writing a good strategy that addresses the needs and the wants of all these stakeholders is quite complex now this course is about teaching strategy and when you start
teaching strategy you have to ask yourself is it actually possible to teach or is strategy something that is just there from birth is it an art or is it a science and for me when i thought about this question the answer is it's actually neither it's not really an art 100 percent it's not really a science 100 percent it's a bit of both and the best way i could come up with to describe this is that strategy is a craft it is something that you can learn it is based on a process it is based on tools and frameworks that you can actually learn that you can follow and yes there is a little bit of art in it and there's a little bit of science in it as well but more than anything strategy can be learned and strategy can be practiced and this leads me to the final slide which is the overview of what you can expect as part of this course and this entire course is structured around the strategic management process so the process starts with the goal and the aspiration that you set for the company then you go out and you study the external environment and the internal environment in the external environment we talk about the economy we talk about the industry and the competition and for the internal environment we talk about our core competencies our strengths our weaknesses and understand where we stand as a company based on this and based on the goal and aspiration that we have we develop a vision and a mission for the company and that becomes the input for our strategy then we move on to the actual strategy formulation and there's a couple of elements that we have to look at we start off with the business level strategy and competitive rivalry and competitive dynamics then we move into corporate level strategy and here we look specifically at acquisitions and restructuring strategies finally we look at two more specific strategies which are international strategy and cooperative strategy then once we have formulated that strategy and have produced a proper plan in a proper document we move on to strategy implementation and here are four areas that we have to look at one is to translate our strategy into concrete strategic projects and initiatives then these projects also have a consequence on the organization structure and the controls that we put in place of course we need to talk about budgeting and the financial planning because all of our strategies have an impact on the finances of the company and finally more and more important nowadays becomes the aspect of sustainability and business ethics so making sure that our strategy is also in line with our long-term sustainable goals and based on proper ethical principles now if we do everything right here then this will earn us above average returns it will help us to beat most of the competition be better than the average and give us good solid returns and then the process starts over again and with the next cycle we need to set an aspiration and a goal again and go through the strategic management process again so this is what we will cover as part of this course thank you so much for joining today's session and i see you again soon [Music]
Business_Strategy_Lecture
Business_Strategy_02_Measuring_Success_in_Strategic_Management.txt
[Music] how do we actually measure success in strategic management at the end of the last session we looked at the strategic management process and at the beginning of this process we said we would set a goal or an aspiration of what we want to achieve and hopefully at the end after we go through the entire strategic management process we would end up with above average returns so hopefully with achieving our goal and our aspiration in today's session i would like to look at ways of how we can measure strategic success we will look at eva or economic value added as one potential measure for strategic success and towards the end of the session we will talk about drivers of above average returns and how we can achieve above average returns according to two academic models that we will look at so let's start by looking at the ways of measuring success in strategic management and there's really a number of ways of how we can measure and i would like to introduce a couple of those that i could think of the first and probably easiest also to measure is sales or revenues so we would just take from the profit and loss statement or the income statement of the company the total sales or the total revenues generated by the company in a given period this one is easy to do it's easy to compare across firms but it doesn't really say much about the profitability of those sales it's easy to get sales quickly by just dumping the price but whether it's profitable or not we don't know so maybe not a great measure for strategic success now how about profit so operating profit ebit or ebitda that's also easy to measure we can see that from the profit and loss statement but ebit and ebitda can be manipulated with creative accounting measures so they can vary quite significantly in different years depending on the accounting treatment of specific items now the next one net profit after tax is a little bit clearer it represents the net results of the business after all taxes and after all financing costs but still there's something missing it doesn't give a clear indication of what investments were needed to achieve those results that were reported as npat so still some missing aspects now we could also look at market share which is basically the share of the company's revenues as a proportion of the total revenues generated in the market it's a good indicator for the relative power vis-a-vis other companies but it largely depends on how wide or how narrow we define the market we could define the market as very narrow and then we would be market leader or we could define it as wider and then there would be other companies coming into play and we might no longer be the market leader if we take a wider definition again there is no indication of the profitability same as if we look at sales or revenues and market share is sometimes not so clear sometimes difficult to measure especially if we deal with a lot of non-listed companies that don't follow the same reporting standards as publicly listed companies now when we talked about npat we mentioned that there's no indication about the investment now what about roic so the return on invested capital that's slightly better because it takes the invested capital into account and puts it into relation to the net return of the company it's also easy to compare across companies and easy to compare across industries because here it's a ratio that we look at but still there's no indication of the cost at which the capital resources have been acquired so still a
missing aspect so there's a couple of other aspects well before we go into more details of the cost what about the share price that certainly takes everything into account right well it's easy to measure it's available on a daily basis fully transparent it's a key metric for the owners of the company who can look up the share price every day at any point in time however the challenge with share price is that share price is quite dependent on performance relative to market expectations rather than the absolute performance so if the expectations are very low and the company outperforms those expectations the share price might go up and might make a big jump if the company still performs very strongly but the expectations were just much higher then the share price might actually drop despite a very good reported result plus in addition to that the share price is often subject to speculation and therefore maybe not a good measure of strategic success total return to shareholders is quite similar it takes the dividends into account and also the dividend or the interest on the dividends paid over time so the good thing about total return to shareholders is it looks at a longer period of time and it takes the relative performance as well as the absolute performance into account but we haven't taken care of the subject of speculation as well as of the expectations and the management of expectations by the market some companies are measured by completely different measures like the number of customers or the number of subscribers especially e-commerce companies or subscription-based companies so that can be quite a useful metric in those cases especially if the company doesn't have any significant revenues or profits yet but again because there is no indication of revenues or profitability of these customers that come in it doesn't really measure strategic success either or is only partially a good measure for strategic success for these companies now to go completely out of the box we could look at company reputation as well so we would measure customer sentiment through surveys or through pulse opinion polls that can be actually a pretty good medium and long-term indicator for the sustainability of the company and the sustainability of its customer base but again there's no direct relationship to sales or profit and this requires a thorough research methodology to be really credible and to give a good indication of where the company really stands now one measure and this will be the one that i would like to look at in more detail is economic value added it's also referred to as economic profit and that is a well-rounded metric of profitability and invested capital and it also takes into account opportunity cost the challenge with this one is while it is a very good measure for strategic success it is not very straightforward to assess and we have to go through many steps to get to economic value added so in the next step i would like to introduce economic value added a little bit further and would like to look at how to actually calculate economic value added so economic value added or economic profit can be measured as the revenues minus the explicit costs minus the opportunity costs so it also takes opportunity cost into account so if you look at this example here of opening a restaurant let's assume we have revenues of 100 000 and then after deducting all the costs cost of goods and labor and space rental marketing expenses we end up with an accounting profit of 10 000
us dollars so it's a 10 percent profit ratio not too bad but if you look at what else we could do during the time of opening and running this restaurant for example having a full-time salaried job we might earn forty thousand dollars a year in that job so actually our economic profit from operating this restaurant is minus thirty thousand so it's a negative economic profit if you take into account the alternative or the opportunity costs that we have from operating this restaurant full time looking at how companies look at economic value added eva eva is defined as the net operating profit after tax or nopat minus the invested capital times the cost of that capital so the weighted average cost of capital or wacc here in this example maybe we have a company that has an operating profit of 1 million dollars and then with 30 percent tax in the u.s the company would be left with a net operating profit after tax of 700 000 now that seems quite a good number but if you look at that it took 10 million dollars of investment to come up with these seven hundred thousand dollars net operating profit after tax and if this investment comes maybe at a cost of eight percent then the total cost of capital would actually be eight hundred thousand and with a net operating profit after tax of 700 000 we would have an economic value added or an economic value loss i should say of 100 000 so given the opportunity cost of 800 000 dollars of the invested capital it would have been wiser to invest the capital in something else finally if we come up with the economic value added we still have to compare this number to other companies in the industry to find out if we are above average or not the consulting company mckinsey did a study of 2 393 firms globally and they measured that economic profit so they looked at the invested capital and for these 2393 firms the average invested capital stood at 9.3 billion us dollars and with this invested capital the companies produced a net operating profit of 921 million dollars on average so the roic or the return on invested capital was about 9.9 percent but to achieve these 9.9 percent there was a capital charge of 741 million or on average eight percent during the period of 2010 to 2014 and that left the companies with an economic profit of 180 million or 1.9 percent of the invested capital now next i would like to look at ways of how we can actually measure this and make this very practical of how we can measure economic profit in practice so here's the formula the economic value added is operating profit after tax minus the invested capital times the weighted average cost of capital or wacc so if you break this down the net operating profit after tax is fairly easy to assess so we take the operating profit that can be found in the income statement or the ebit we get this from the p l the income statement for the tax rate we use the marginal tax rate the company normally reports an effective tax rate so the actual tax that the company pays this can vary depending on the year and depending on tax savings measures that the company puts in place so it might very well be lower than the marginal tax rate the marginal tax rate is the tax rate that is officially announced or that is law in the country and we use in this case the official corporate income tax rate of the country so that is called the marginal tax rate in thailand that would be 20 percent that's the corporate income tax rate here in thailand so and then we take the operating profit and deduct the tax and that gives us the net operating profit after
now for invested capital the easiest way of assessing this is to look at the balance sheet and taking the book value of equity and the book value of debt now i would like to propose here that a better measure for the invested capital is to take actually the market value of equity and the market value of debt because that reflects the cost and the magnitude of the capital that would be needed at this point in time not some years back when the actual debt and the actual equity was raised so the market value of equity is calculated by the share price times the number of shares outstanding both we can find in the annual report of the company the market value of debt is a little bit more complicated to assess because first of all it includes all the interest bearing debt and also all the lease obligations so we need to look in the annual report and need to assess both the interest bearing debt and the lease obligations the market value of the interest bearing debt can be assessed by looking at the book value of the debt and then applying the formula that is shown here i don't want to go into the details in any corporate finance course you will talk about the bond valuation formula but here's the formula and i will show you some examples of how to calculate this and then for the value of operating leases you would have to look at the lease commitments and discount them by the pre-tax cost of debt and the pre-tax cost of debt is the risk-free rate plus the default spread of the country plus the default spread of the company which i will also show you later in the example now in the formula we have not looked at the wacc yet so we have to look at this separately because that's also another step and a more complicated step to assess so the weighted average cost of capital is calculated as the cost of equity times the weight of that equity as a proportion of the total capital plus the cost of debt times the weight of that debt as a proportion of the total capital invested so for the cost of equity it is basically the risk-free rate which is for example the 10-year government bond for triple a rated countries such as the u.s or germany that's the risk-free rate that we have in the market and then plus the beta of the company times the equity risk premium so the beta of the company can be found on platforms like yahoo finance and the equity risk premium is something that we can actually look up it's calculated on an annual basis by professor damodaran from nyu stern school of business and we can look it up on his website he calculates that on an annual basis usually in january or february at the beginning of the year for every country now for the cost of debt that is the risk-free rate again the same triple a rated country risk-free rate a 10-year government bond typically plus the default spread for the country that can also be looked up in the database of professor damodaran from nyu it's available online for free and then the default spread for the company now again here we need to go through another hoop which is to look at the synthetic credit rating of the company and assess what is the default spread of the company and for this one we need to take two further steps we need to look at the interest coverage ratio which is the operating profit of the company divided by the interest expenses so it basically says how many times we can cover the interest expenses through our operating profits
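to tie these pieces together, here is a hedged python sketch of the wacc build-up just described: cost of equity as risk-free rate plus beta times equity risk premium, pre-tax cost of debt as risk-free rate plus country default spread plus company default spread, weighted by the market values of equity and debt; the risk-free rate, country spread, company spread and tax rate mirror the thai example that follows, while the beta, the equity risk premium and the market values are made-up placeholders

def cost_of_equity(risk_free: float, beta: float, equity_risk_premium: float) -> float:
    # capm-style cost of equity: rf + beta * equity risk premium
    return risk_free + beta * equity_risk_premium

def pre_tax_cost_of_debt(risk_free: float, country_spread: float, company_spread: float) -> float:
    # rf + default spread of the country + default spread of the company
    return risk_free + country_spread + company_spread

def wacc(mv_equity: float, mv_debt: float, ke: float, kd_pre_tax: float, tax_rate: float) -> float:
    # weight each cost by its share of total capital; the cost of debt is taken after tax
    total_capital = mv_equity + mv_debt
    kd_after_tax = kd_pre_tax * (1.0 - tax_rate)
    return ke * (mv_equity / total_capital) + kd_after_tax * (mv_debt / total_capital)

ke = cost_of_equity(risk_free=0.0135, beta=1.1, equity_risk_premium=0.07)  # beta and erp are placeholders
kd = pre_tax_cost_of_debt(risk_free=0.0135, country_spread=0.0155, company_spread=0.01)
print(f"wacc: {wacc(mv_equity=8e9, mv_debt=2e9, ke=ke, kd_pre_tax=kd, tax_rate=0.20):.2%}")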
so we get a multiple and then again professor damodaran from nyu stern publishes a table every year with a synthetic rating based on that interest coverage ratio so basically if we have an interest coverage ratio of 10 then in this case the rating would be a double a for a low market cap firm and that would result in a default spread of one percent and low market cap firms here are defined as companies with a market cap of less than two billion now all of this seems very complicated and i would like to end this explanation with a quote from charlie munger who calls the desire for having this kind of false precision in finance the physics envy so basically saying that academics in finance envy their physics colleagues and would like to produce something that is as accurate as precise and as complicated as some of the formulas in modern physics but he also states that this doesn't happen in economics typically the economic system is too complex and therefore all these calculations that we do give us overconfidence in the accuracy of the numbers so i would like to put this here to say you can do all these complicated calculations for example for estimating the market value of equity and the market value of debt or you can also just take the book value from the balance sheet which is much much easier to assess now after these complicated calculations i would like to show you an example to make this a bit more practical and here i looked at the basic materials industry in thailand so first i calculated and you see here six companies that are in this basic materials industry and listed in the set 100 so in the 100 biggest companies in the stock exchange of thailand we can find the information on finance.yahoo.com about the operating income with the corporate tax rate the marginal tax rate of 20 percent which is the same for all these six companies we can calculate the net operating profit after tax then the market value of equity can be calculated by the share price times the number of shares outstanding which both we can also find on finance.yahoo.com then we can use the formula that we have just discussed to calculate the interest coverage ratio so again interest payments are also stated on finance.yahoo.com for listed companies i looked up the risk-free rate which is the 10-year u.s treasury bond right now in august 2021 it stands at 1.35 percent thailand has a credit rating of baa1 and with that according to professor damodaran from nyu stern this leads to a default spread of 1.55 percent and all of this together with the default spread of the company gives us the pre-tax cost of debt i looked up in the annual report the remaining lifetime or the maturity of the majority of the outstanding debt so basically when do the companies have to pay this debt back so you see here this ranges from 1.1 to about 3.6 years so this is a time measured in years and with this using the formula we can calculate the market value of the debt which differs slightly from the book value of debt the beta and the equity risk premium we can look up for thailand the beta can be looked up on finance.yahoo.com it's reported there for each company and that gives us the cost of equity so basically the return that shareholders of that company would demand given the risk profile of the company and from the pre-tax cost of debt we can also calculate the after-tax cost of debt
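a short sketch of the two extra steps for the company default spread that were just walked through: compute the interest coverage ratio and then map it to a synthetic rating and default spread; only the bracket with a coverage ratio of around 10 mapping to a double a rating and a one percent spread comes from the example above, the other brackets and the operating profit and interest expense inputs are placeholders, and the real damodaran table for low market cap firms has many more brackets and is updated every year

def interest_coverage_ratio(operating_profit: float, interest_expense: float) -> float:
    # how many times the operating profit covers the interest expenses
    return operating_profit / interest_expense

# (lower bound of the coverage ratio, synthetic rating, default spread) -- illustrative only
SMALL_CAP_BRACKETS = [
    (12.5, "aaa", 0.0075),      # placeholder bracket
    (9.5, "aa", 0.0100),        # roughly the example above: coverage around 10 -> aa -> 1%
    (7.5, "a+", 0.0125),        # placeholder bracket
    (0.0, "below a", 0.0400),   # placeholder catch-all
]

def company_default_spread(coverage_ratio: float):
    for lower_bound, rating, spread in SMALL_CAP_BRACKETS:
        if coverage_ratio >= lower_bound:
            return rating, spread
    return "d", 0.2000

icr = interest_coverage_ratio(operating_profit=500_000_000, interest_expense=50_000_000)
rating, spread = company_default_spread(icr)
print(icr, rating, spread)  # 10.0 aa 0.01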
so doing all the calculations in a simple spreadsheet we can calculate the net operating profit after tax by using the formula we can calculate the wacc the weighted average cost of capital and we can calculate the invested capital we see here now that the economic value added for those six companies varies quite significantly so pttgc for example has as of august 2021 an eva of about minus 13.3 billion thai baht siam cement group scc has a positive economic value added of 3.7 billion thai baht and tasco which is tipco asphalt an asphalt company has an economic value added of 3.178 billion thai baht now the number itself is interesting but it becomes more interesting if we look at this in proportion to the actual invested capital and look at it as a percentage and then we see here that the seemingly big negative number of eva for pttgc ptt global chemical is negative three percent while the 3.2 billion of tasco are actually representing 9 percent economic value added so now if we look at above average or in this case above the median the companies scc tasco and toa would be above average and the other three companies epg indorama and ptt global chemical would be below average so you have three companies above average of course and three companies below average and we get a fairly good view of the industry here now finally i would like to conclude this session by looking at two models of how success in strategic management so how above average returns can actually be achieved the first model i would like to look at is called the industrial organization model and basically the industrial organization model in very simple terms says that the external environment is much much more important than managerial choices and the model itself makes four assumptions the first is that the external environment limits strategic choices the second one is that firms control very similar relevant resources and pursue similar strategies so the resources across companies are actually quite similar the third assumption is that resources are actually mobile between firms and therefore any advantage that a company has in terms of resources is only short term and we can think of this as managers for example that can move from one company to the other if they get a better offer at the competitor company so those short-term advantages that a company might have from having a stronger management team might only be short-lived and finally the fourth assumption is that decision makers are rational they act in the firm's best interest which is profit maximization and therefore those managerial decisions those managerial choices are basically all tending towards the same level across companies so the basic idea of this model is look for an attractive industry to operate in and the academics that follow this kind of framework of thinking the industrial organization model say that first you have to study the external environment that's the very first important step then you look at an attractive industry that you want to play in and if that industry is not your industry right now then you should put an effort in to change your industry if there's another industry that is more attractive then you formulate your strategy for that industry you develop the skills you acquire the resources and then you implement your strategy so you take action on this one so this one was the industrial organization model the first model that we will look at the second model is the resource-based model it's the absolute opposite
of the industrial organization model the resource-based model suggests that resources are key for gaining a competitive advantage so it's all about the resources that you have available in your company and there are three assumptions the first assumption is that resources and capabilities are really unique think of the engineering talent that steve jobs had in apple they created something really unique and advantageous for the company that competitors took actually years to get even close to the second assumption is that organizations are simply a collection of resources and capabilities and therefore it's all about those capabilities because they really make up the company and the third assumption is that those core capabilities become actually core competencies and these are fundamentally strategically important and therefore the basic idea of those who propose the resource-based model is get the best resources and these are the steps first we have to identify what these resources are then we need to get those resources in and build the right capabilities we need to determine what is really the competitive advantage of the resources that we have in place and then we look at the industry and then formulate and implement our strategy so the starting point for this resource-based model is fundamentally different from the industrial organization model in so far as it starts out with the resources and the attractive industry only plays a secondary role so now that leaves us with the question of which of these models actually works better so if we would have to choose which model should we actually look at and i would like to give you two data points here the first one is from the same mckinsey study that we already mentioned earlier the 2 393 companies that mckinsey looked at for the period of 2010 to 2014.
mckinsey here looked at the average annual economic profit within each industry and as you can see here industries vary significantly in terms of the economic profit that they generate so for example if you are in the software industry the likelihood that you have very high positive economic profit is much greater than if you are for example in the electric utilities industry or in the construction materials industry now still of course as a company you can outperform in the electric utilities or the construction materials market but it is much easier to generate positive economic profit if you are in the software industry or in the telecom services industry or in the automobile industry during that time so what this suggests is that maybe the industrial organization model has a stronger impact on the economic value added created by companies than managerial choices so even the best managers in the electric utilities or in the construction materials industry will struggle to create positive economic profit now another study slightly older is by mcgahan and porter from 1997 who studied the influences on profitability across multiple companies as well and in their research it came out that actually the resource-based model maybe outweighs the industrial organization model so if you look here at the corporate parent effect and the business unit effect together which is maybe an indication of the resources and the management skills and capabilities of those companies it by far outweighs the industry effect and the year effect together which would be external influences we have to note though that a big portion in this study came out as unexplained variation which are things like economic crises for example that cannot be controlled or natural disasters changes in consumer taste and so on so you could also call it good luck or bad luck so a big portion is unexplained but if you look at the explained portion it seems like the resource-based model actually has a point and seems to outweigh the industrial organization model so now that we looked at ways to measure strategic success we can now look at the industry that our company is in and we can compare as a starting point the current state of the company in terms of the economic value added compared to its peers and that builds the basis for the aspiration that we want to set and then we can dive into the strategic analysis [Music]
Business_Strategy_Lecture
Business_Strategy_04_Internal_Factors_Resources_Capabilities_Core_Competencies.txt
foreign [Music] in the last session we talked about the external environment we looked at some tools and frameworks that helped us to analyze the external environment for example the general environment the industry environment and our direct competition this was the first important input factor into our strategy today we are going to cover the second important input factor in our strategy which is an analysis of our internal capabilities and processes and what we are going to do today is we are going to look at how to get from resources all the way through core competencies and these core competencies then creating a competitive advantage secondly we are looking at the value chain we are trying to understand the value chain and trying to see where we are having strength and weaknesses across the value chain then we are also going to study some financial indicators that help us to benchmark whether we are on par or ahead or lagging behind our competition in terms of some of the financial indicators and finally i would like to talk about some of the challenges that come along with internal analysis because it's not that straightforward and not that easy to have an honest conversation about internal capabilities and internal weaknesses especially but before we are going to talk about the resources and the core competencies we have to understand one important aspect first and that aspect is value and especially value to customers customer value is defined as a product's performance characteristics and attributes for which customers are willing to pay so this willingness to pay is a very important factor that we always have to have in mind when we talk about internal analysis is what we are doing internally is that something that the customer is willing to pay for and a great way to illustrate this is the so-called value stick this is from an article of harvard business school and basically it shows that the value that is being created somewhere lies between the willingness to sell for the willingness of a supplier to sell wts here and the willingness to pay which is the customer's willingness to pay for our product so the best way of creating and the easiest way of creating value for our company is if we can increase that point where the customer is willing to pay so the maximum price that the customer is willing to pay for a product that is what creates value for us and for the customer so next i would like to highlight a few resources that we have to look at and understand and how these resources can create value for us we differentiate resources into two types the first type is tangible resources and there's basically four types the first type is financial resources under financial resources we understand both the borrowing capacity so how easy or difficult is it for us to get money from external sources mainly from from banks or from debt funding and also the generation of funds internally so equity funding the second tangible resource is called organizational resources and that's basically the reporting structure the systems the processes so the tangible things about our hr system physical resources are planned equipment access to raw materials so everything that we need to produce products or produce services and finally the last tangible resource is technological resources and not so straightforward but these ones are patents trademarks and trade secrets and what is meant here specifically is basically the piece of paper that says that this is our patent this is our intellectual 
property and we can use this intellectual property and we are protected from competition using the same intellectual property so these are the tangible resources in terms of intangible resources the first that comes to mind is human resources now you might say hey human resources that's tangible that's a person but what is meant here by human resources is not the person itself because that would sit in the structure on the previous page human resources here means the knowledge that comes with the people the managerial capabilities that people have the organizational capabilities that we build and of course the trust that comes from clients and from connections and from relationships that our human resources have with customers and with suppliers the second intangible resource is innovation resources this is for example ideas scientific capabilities that we might have and our capacity to innovate and to come up with new products and new services and finally reputational resources so here definitely coca-cola comes to mind as a company that has a very big stronghold in reputational resources the brand name and the brand proposition the reputation with customers of course then the customers' perception of product quality and durability as well as reputation with suppliers and interactions and relationships that we have across the value chain so these are the intangible resources now most companies have these resources in some shape or form but the key to having a competitive advantage is how you use these resources and the capabilities that come with these resources and make them a true core competency that differentiates us from the competition and to get to a core competency we have to look at five attributes and these are the five the first attribute of a core competency is whether this competency is valuable so we talked about value just now valuable means that this capability can help to exploit opportunities neutralize threats and really add value to the customer secondly the core competency has to be rare so that means that only few competitors possess it if any maybe no competitor possesses it and this is unique to us or quite unique to us the third element is that this core competency is inimitable or at least costly to imitate so difficult to imitate which means that other firms cannot easily develop this capability usually this happens when the capability is somehow what we call causally ambiguous meaning it's not easy to identify where this core competency or this capability really comes from in many companies it's deeply rooted in the firm historically and the example i use here is the consulting company mckinsey now many companies are in the consulting business but somehow the big consulting companies like mckinsey boston consulting group bain have quite a unique standing and that is very difficult to explain and it's also very difficult to imitate and for other competitors to copy and that is because a lot of the culture and the way these companies conduct their business is deeply rooted and maybe 50 maybe 60 in the case of mckinsey even 100 years old and this creates a social complexity that is very difficult for others to imitate and to copy so that's the third element the fourth factor is that this core competency has to be non-substitutable so meaning there is no alternative there is no strategic equivalent to the capability which means that any competitor who would like to copy us on this or attack us on this would
have to have exactly that capability finally the fifth element is organizational support so this is the organizational structure the formal and informal management control systems that need to support the core competency and make sure that it is being used to really create value so after we have understood what our resources are and how these resources can be translated into core competencies we now have to look inside and outside our firm and understand where across the value chain we have strengths and where we have weaknesses and so we have to do some basic value chain analysis and i want to start with the industry value chain before we go into the internal value chain so this is a typical industry value chain i took the automotive industry so it starts from raw materials and in the case of the automotive industry that's for example iron ore mining or aluminium mining then it goes into a steel mill or aluminium factory where the primary manufacturing happens then the third step is fabrication this would be the car part manufacturers or the car part suppliers that produce for example the engine parts or the engine itself which are important input materials into the car then the fourth step is the actual assembly line so this is the automotive assembly that sits with the car companies the mercedes bmw toyota and so on after that it goes to a distributor for example if you're here in thailand it goes to the local subsidiary of the car company and from the local subsidiary it is then distributed to a retailer in this case it's a car dealership and the customer actually the end consumer has a relationship with this car dealership so to understand the value chain i took a simple example and i want to take you through the steps that it takes to really understand and analyze the industry value chain and the example i took here is the vietnamese tea market i found some data from 2015 that is pretty straightforward and explains in a simple way the concept of how value chain analysis works so the first step is we need to understand what are the steps in the value chain here it goes from the farmer to the processor then there's a dried tea trader the wholesaler and finally the retailer and from there it goes to the end consumer so we can analyze first and understand the steps that it takes from the early beginning to the end consumer the next step then is to research the cost prices at each step and the best way is to go from the back to the front so if we start on the right side with the consumer that's fairly easy we can go to the retailer and we can look on the shelf what the price is and in this case the researchers found out that the price is eight dollars and 35 cents for one kilogram of tea the retailer actually pays 7 dollars 41 the wholesaler 5 dollars 53 and so on the most tricky part in this analysis is the first step which is the farmer so what is actually the cost price for the farmer and in this case the biggest part of the cost price for the farmer is actually the farmer's own labor so using some assumptions we assess that the cost of labor for one kilogram of tea is about one dollar so actually 96 cents to a dollar so after we have understood the cost prices then we have to look at what the margins are at each step sometimes it's easy to research and in some cases we have to estimate so starting from the farm this time the farmer sells to the processor at one dollar and 32 cents so there's a profit margin in there of about 36 cents and then the processor sells on to the dried tea trader for 2 dollars 20 so the processor makes another 56 cents of margin and the whole chain is added up in the short sketch below
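here is a small python sketch that tabulates this vietnamese tea example: the margin each player earns per kilogram as given in this walkthrough and each player's share of the total margin along the chain; the cost figure is simply backed out so that cost plus margin adds up to the 8.35 retail price so treat these as rough illustrative numbers

margins_per_kg = {          # usd per kg of tea, taken from the example
    "farmer": 0.36,
    "processor": 0.56,
    "dried tea trader": 1.28,
    "wholesaler": 1.48,
    "retailer": 0.62,
}
retail_price = 8.35
total_margin = sum(margins_per_kg.values())   # about 4.30 of margin along the chain
total_cost = retail_price - total_margin      # about 4.05 of cost along the chain
print(f"total margin {total_margin:.2f}  total cost {total_cost:.2f}")

for player, margin in margins_per_kg.items():
    print(f"{player:18s} margin {margin:4.2f}  share of total margin {margin / total_margin:5.1%}")
# the farmer ends up with roughly 8 percent of the total margin while the dried tea
# trader and the wholesaler together take roughly two thirds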
the trader actually has quite a big margin and as he continues to sell on the margin is 1.28 the wholesaler makes another 1.48 and the retailer 62 cents so if we add up the entire cost of the value add and the margin we see here we get to the 8.35 which is the retail price and out of these 8.35 roughly 4 dollars is the actual cost of the product and roughly 4 dollars 30 is the margin that is generated along the way along the value chain so in the fourth step we just map out the profits to see the industry power distribution and we see here that out of the total profitability that is generated in this value chain only eight percent go to the farmer the biggest winners or the strongest players here are the dried tea trader and the wholesaler those two players make up about two-thirds of the total margin that is generated along the way and that is also shown in the last step visualizing this profit distribution in a simple pie chart you can see that actually the farmer and the processor the ones who have most of the physical work to do with the tea have relatively small shares of the profit and a huge share goes to the powerful players here in this chain the traders and the wholesalers so this is the first part of the value chain analysis the external value chain now in the next step we're looking at the internal value chain and trying to understand how we create value within our company and this is porter's framework for the internal value chain it consists of the primary activities at the bottom which are basically the steps from inbound logistics the purchasing to the operations the outbound logistics to the customer marketing and sales of the organization and then the after sales service the services that are delivered along with the product and on top are the support activities this is the management the firm infrastructure hr technology the it department and so on and the procurement and inventory management team so what we would do when we analyze a corporation's value chain is we would look at each of these elements and would try to analyze and understand how are we performing versus the best practice or the best competitor and whether we have gaps in our capabilities and one approach to do this and this is an example from mckinsey specifically looking at supply chain analysis in this case is to do this along a grid of a five-point scale where five is best practice and one is lagging behind basically and we would try to honestly rate ourselves along these criteria whether we are one two three four or five and this then would be translated into a spider web diagram where we can see where we are lagging behind or where we are performing strongly compared to the best practices so the further we are in towards the center the more we are lagging behind in this case packaging and shipment are lagging behind the best practices while purchasing is close to best practice in this case so this helps to visualize where we stand and how we compare to best practice or to our best competitor in terms of our internal value chain now as a next step we need to look at some financial indicators because all of this eventually has to translate into financial results and here is a list of a few financial indicators that we could look at so i like to start usually with the market value indicators market value or pricing ratios because they're easy to research and easy to find at least for listed companies so this is the price earnings ratio market
to book ratio then a second factor that is very interesting to see the competitor's strength is definitely solvency short-term and long-term solvency ratios so this is the current ratio quick ratio cash ratio and for long-term solvency total debt ratio debt to equity ratio equity multiplier interest coverage ratio that shows us um how much or how often the firm could cover their interest expenses through their operating earnings and the cash coverage ratio which is a slightly more stringent version of the interest coverage ratio then we would look at profitability ratios these are very good indicators especially if we have very direct indicators to see whether we are in line with the profit margins and the return on assets and return on equity of our direct competition and finally some operational metrics are just asset utilization matrix that we could also look at we already covered in the second session the concept of economic value added that's more complex but for me also more powerful additional indicator to look at but we covered this at length in the second session and same as last time i have done a bit of benchmarking across thailand's basic material industry using the same six companies that we looked at last time and what we see here for example i try to color code based on the strength and weaknesses across this bench so basically the bench strength green meaning it's relatively strong compared to the other five companies and red meaning it's relatively weak compared to the other five companies and we see here that toa for example is very strongly positioned in terms of the solvency we see that indorama ventures ivl is challenged in terms of the solvency indicators compared to the competition and the profitability tusco is very strong so typical asphalt is leading in compared to its peers closely followed by by toa and in asset utilization pdt global chemical pdhc is actually quite strong as well to be fair on this analysis the companies don't have exactly the same products for example in the rama ventures i would assume that their product portfolio is more geared towards more common products that are more competitive in the market meaning they have probably lower pricing powers or commodities for example while tusco toa have more specialty chemical products or specialty products that can command a better pricing power so these were some basic analysis that we can do to understand our internal strength and our internal weaknesses however it is not always that easy to have an honest and straightforward internal conversation on this one so strengthen weaknesses uh sounds very straightforward but the challenges nut in 1999 has done some research across some 300 to 400 companies and his finding was that actually 50 of organizational decisions fail and um if you would have asked management at that time how many of your decisions fail i think no one would have admitted that half of the decisions are off why do organizational decisions fail so often it's because of uncertainty complexity and inter-organizational conflicts so uncertainty meaning that we make decisions based on assumptions that we have about the future but we don't know if the assumptions will fall in place exactly as we assume the complexity and we covered this in the very first session business is getting more and more complex technology is evolving faster we talk about business disruption new players coming in new industries are being created so the to foresee the future is ultimately more complex and a big factor is 
also intra-organizational conflicts so an example for this one is the conflict between the marketing and sales department and the supply chain department so supply chain might have a kpi or a target to keep the inventory as low as possible to keep the cost low the cost of warehousing etc while the sales and marketing team of course would like to have as much inventory as possible so they can fulfill the sales of the organization so there is an intra-organizational conflict and that sometimes prevents close collaboration and sometimes also prevents us from having an honest conversation within the organization about our analysis nevertheless having a very clear picture of what our core competencies are how we add value across the value chain where our strengths are and where our weaknesses are all of this is very very important as we go into our strategy process so if we have clarity on this one this is a very very important input factor into our strategy [Music]
Business_Strategy_Lecture
Business_Strategy_15_Sustainability_Business_Ethics.txt
[Music] so with this session we are going to conclude our business strategy course but i don't want to end this course without touching upon one subject that is very often not part of the business strategy curriculum but that is very much at my heart and that i think should be part of any business strategy discussion and that is the topic of business ethics and sustainability so in today's session i'm going to cover why business ethics and sustainability has become so important in business leaders strategic choices why they can't ignore this topic anymore i'm going to cover a couple of key issues ethical issues that business leaders are facing and we're going to conclude with a few theories ethics theories and guiding principles that should help business leaders in their decision making but as usual before we go into the details we start off with the definition and i want to start very broadly with the definition of the word ethics the word ethics comes from ancient greek which literally means the character so when we talk about ethics and business ethics we talk about a question of character and in this case the character of the decision maker but more formally ethics is defined as the study of moral value of human conduct and of the rules and principles that are to govern them so we're talking about human conduct so human behavior and then about some of the guiding principles that help people in their you in their behavior and in their decision making so let's talk about why is it so important that we cover this topic and a couple of years ago the consulting company mckinsey and company conducted some research among business leaders globally with regards to how they see societal expectations evolving and the results were quite striking so the first very clear indication from this survey was that expectations are changing societal expectations are becoming much much more important and have become much more important over the last five years and this has been said by more than 90 percent of the participants and no matter if they came from public or private companies no matter what geography they were coming from and no matter whether they worked in small medium or large enterprises and more than half of the participants said that it will further change over time so expectations are further changing and further growing in importance now which stakeholders are most influential in the decision making it is employees and consumers these two stakeholder groups are by far the most influential customers are actually gaining an influence so over the next five years which is shown here on the right side consumers are actually becoming more important in the way companies think about and manage societal expectations now top of mind are environmental concerns so the increasing environmental concern and greater demand and limited supply of natural resources which are two topics that are closely linked are really top of mind and are really shaping how businesses conduct their their operations conduct their business and what society is expecting from them furthermore governance issues besides climate change governance issues are also becoming very important for example corruption so these are things that companies have to look out for as they conduct their business and as they make their decisions going forward and finally there is a very strong impact and influence from societal expectations on company strategy so being asked about whether they will incorporate environmental social governance esg issues 
into their strategy 93 percent of the participants say that they will more and more incorporate these issues into their strategic planning process 59 percent say much more and 34 percent say somewhat more only seven percent say it's about the same or less going forward so societal expectations matter and they matter more and more for how people conduct their business now next i would like to cover a few aspects of business ethics that are particular issues for how people conduct their business and fritzsche in his book on business ethics in 2005 has defined five main issues in business ethics and he says that basically these are the five that we have to look out for and they're listed here so the five are bribery coercion deception theft and unfair discrimination and i'm going to cover briefly each of these five so the first issue is bribery and bribery is very simply defined as the offering giving receiving or soliciting of something of value for the purpose of influencing the decision of a decision maker what this leads to is that there's a conflict of interest between the decision maker and his or her organization and bribery is usually used to gain sales to get a new contract enter into new markets change others' decisions bribery can take many forms for example cash payments cars services that are provided gifts mobile phones so various forms of bribery the second issue is coercion now coercion means compelling someone or convincing someone by physical force by threat and often the threat to use physical force which has the effect that it makes people act in a certain way that is actually against their will and certainly against the benefit or the interest of their company it creates benefits for those using the coercion and it creates harm for the ones that are being forced to act in a certain way a special form of this is extortion so blackmailing threatening someone maybe to disclose some information or using power over that person forms of coercion can be tampering physical harm or extortion the third issue is deception and that is the most common ethical transgression and deception is defined as knowingly and willingly making a false statement or representation so an example here could be false advertisement so claiming that a product has certain benefits that it actually doesn't have effects of deception are that it lures persons customers consumers into an action that the person actually didn't intend and it usually causes damage to the person that has been deceived forms of deception can be dishonest behavior falsifying research falsifying data distorting data creating misleading advertisement or also misrepresenting a product or a service making a false statement about that kind of service then issue number four is theft and theft is very widely defined here it means taking something that doesn't belong to you without the owner's consent now very important i mentioned that theft has a very wide definition here but it doesn't refer to losses to the competitive forces so if all the actors play according to economic rules and one company loses out and the other company gains that is of course not theft i just want to highlight this here effects of theft are that it's a sudden change of rules in the market so property ownership changes and this usually leaves insufficient time for other players to take action and can cause significant harm to some people so for example the seizure of operating sites of foreign companies we saw this a couple of decades ago in bolivia
so the property the equipment was lost and that was obviously a form of theft so other forms of theft are conceptual properties of intellectual property as well counterfeit products price collusion is also a form of theft that's theft of buyer's money dishonesty in contract is also considered as a form of theft here cheating over selling customers and unfair pricing so that's why i said like pricing decisions are also part of this here so theft is a very wide definition here the fifth and last ethical issue is unfair discrimination and this is defined as the unfair treatment or the denial of privileges to people because of their race their age their gender nationality or religion or other irrelevant factors now what is important here and that is stated here clearly at the bottom it's called unfair discrimination we need to be aware that we discriminate all the time when we make a job offer for example and hire one person over another that is discrimination for sure but that's based on qualification or when we pay out bonuses at the end of the year someone will get more and someone will get less bonus and that is obviously based on performance so we discriminate based on performance but these are acceptable criteria when we talk about discrimination as an ethical issue here we explicitly talk about unfair discrimination so effects of unfair discrimination are for example inefficiencies so when we hire a less qualified person because we do the person a favor over others that is unfair but it leads to inefficiencies be because the person is less qualified and of course it creates a disadvantage for the person who is affected who has been discriminated against forms of unfair discrimination can be favoring someone based on non-relevant criteria making decisions based on racial or sexual or religious prejudices and can be denial of jobs promotions or any other benefits so these were some of the very common ethical transgressions ethical issues that business leaders need to be aware and that we come across and as we conduct business next i want to talk about some of the solutions some of the way of thinking about business ethics ethical theories that help our decision making that help us in the strategy setting process and that also helps to help to explain why the same situation sometimes might lead to different decisions by different people why some people would go one way if they face the same definition the same situation and other people would decide differently and in a nutshell before we talk about these guiding principles just a short definition of business ethics business ethics is the process of evaluating decisions either before or after so pre or post with respect to moral standards of the society's culture so basically what i want to discuss here are guiding principles that help us in the evaluation of decisions either before if you have to make a decision or after if you have to judge whether a decision was good ethical or bad non-ethical and this page shows the overview of the ethical theories that we are going to cover today so this is two groups consequentialist theories and non-consequentialist theories they can subdivide it into four subgroups and in total we are covering nine ethical theories and these are the ones that are listed here in bullet points but i explain them one by one so the first group that we are going to discuss are consequentialist theories now the word consequentialist already suggests that these theories look at the consequences of a decision so the 
outcome of a decision they are also called by the greek term teleological theories telos is the end so basically looking at the end result of a decision now again a decision is judged or evaluated based on its consequences and in very simple terms on this slide well if the consequences are good then the act is right and it's ethical according to consequentialist theories if the consequences are bad then the act is wrong or unethical now the challenge is as you can already guess from this chart the consequences for whom different people might face different consequences out of the same decision and therefore the evaluation highly depends on who evaluates the decision but let's go a little bit into detail so the first ethical theory that we are going to discuss is called egoism now there's a colloquial usage of the word egoism we call a person egoistic for example if the person looks only after himself or herself but in this sense egoism is the academic term for this theory and the meaning is very close very similar as you will see so egoism is the moral standard that focuses on self-interest and this can have different levels it can be on a personal level on an organizational level so for my company for example or on a community level and the example that comes to mind here obviously was donald trump's america first policy that's an egoistic policy based on a community level a whole country level now there are two groups of egoism one is short-run self-interest which is linked to the decision at hand immediately and long-run self-interest which is also called enlightened self-interest which takes a longer perspective into account and between the short-run self-interest and the long-run self-interest maybe some of the decisions might change if you take a longer-term perspective a note here that is important egoism is actually reflected in market theory so a market allocation mechanism has embedded egoism in it so when we make a decision we usually look after our own or our company's interest and are not so much interested in the interest of the other person that we are dealing with and that is part of market theory and also important to note that egoist decision making is often judged unethical by everyone except the one who makes the decision now this is egoism looking immediately at the decision and the impact on us on our company or on our society or on our country there is a wider group of consequentialist theories which is called utilitarianism and again as the name suggests the word utility is in the term so this looks at the net utility the net benefit that is derived from a decision across a group of stakeholders so the key belief here is that a decision is ethical if it provides greater net utility net benefit than any other alternative decision so in consequence of that statement a decision maker must evaluate all alternatives and then choose the one that brings the biggest net benefit and with net benefit we mean different stakeholders' involvement and with that benefit we mean well there's an impact on us there's an impact on a counterpart maybe an impact on society and if we net the negatives and the positives out then we should take the decision that brings the highest net utility for all people involved not necessarily for ourselves so there are again two types act utilitarianism which looks at the specific act the specific decision at hand and provides a short-term perspective while rule utilitarianism tries to come up with
a longer-term perspective it tries to come up with rules that apply to any similar situation and that guide decision making when someone gets into a similar situation so in summary consequentialist theories can be short term or long term and they can have a narrow focus which is on ourselves and our organization so that is egoism short-term egoism and long-term egoism and if we really look at the wider community and the net utility that is provided for the wider community then we have utilitarianism act utilitarianism and rule utilitarianism for the long term now a final thought on this one there are several difficulties with consequentialist theories and these are listed here the first difficulty is that it is quite a challenge to foresee what all the consequences of a business decision will be however according to consequentialist theories this is required to make the decision in the first place so there's certainly a contradiction there secondly consequences of decisions are often not easily measured so for example how do we measure if as a consequence of our decision someone gets injured or how do we measure the value of life of a person so these are very difficult to measure especially in economic terms so that also makes it difficult to make a decision based on consequentialist theories third even if you look at net utility for the society at large that may cause significant harm to a few people so an example of this one is if a government builds a road or a big infrastructure project and needs to disown some landowners so there's definitely a net benefit of the project for example if you build a road and millions of people will travel on this road every day but there is significant harm caused to the families that live on this ground maybe have grown up there for generations and that lose their home their family home to this road project and finally similar to statement number three the utility obtained is not constant across members of a group so some people will benefit significantly if they travel on this road every day for example and others that don't travel on this road very frequently might not benefit from that same decision so that's important to keep in mind now there's another group of ethical theories which provides maybe a little bit more straightforward guidance but it's also much stricter in terms of the guidance that it provides and this is the group of non-consequentialist theories now as the name suggests non-consequentialist theories don't look at the consequences they are also called deontological principles from the greek word deon which means duty or obligation and non-consequentialist theories basically say hey it doesn't matter what the outcome of a decision is there is a set of rules that must be followed and all we have to think about are those rules it doesn't really matter what the decision is it's completely irrelevant whether the outcome is good or not and the evaluation whether something is ethical or not is based on the reason for acting and not the consequences the first group here is rights principles and the idea behind this is that people are granted certain moral and human rights because we are human beings and we all have to follow those rights and those guidelines you don't violate people's rights and in exchange they don't violate yours this philosophy and this concept is attributed to immanuel kant who lived in the 18th century and he said very clearly that rules are valid irrespective of the circumstances so it doesn't matter
if there's an emergency state or whatever the circumstances are you follow the rules no matter what now going into a little bit more detail on kant's perspective he says an act is right if you want everyone to act in the same way in a similar situation and it doesn't matter which side you're on so whenever you have to make a decision you have to look at all the stakeholders and ask yourself hey if i would be in their shoes would i still be happy with my decision with the decision that i'm about to make or do i cause harm to someone and if that is the case then well i wouldn't want to be harmed so then i shouldn't make that decision and the key statement is to treat people as ends in themselves never only as a means to an end so you have to respect them as human beings and help them realize their own ends and goals and according to kant there is no individual exception to these rules personal benefit consequences and the result of the decision do not have any impact on the moral value w d ross a scottish philosopher offers a little bit of an alternative perspective to kant so he says okay there are certain duties that we have but they are prima facie which means that they might sometimes contradict each other and they might be in conflict with each other so we have to take them at face value and sometimes we have to decide which one is more important some of these are more important than others an example here if you make a promise to your friend to help him with moving houses for example and then your parents urgently need your help now obviously you have a moral conflict here but if helping your parents is more important then according to ross you're allowed to break the promise to your friend there's a good reason for it and there's a very noble reason for breaking that promise and you can make up for it however there are certain basic rights that cannot be violated and these are the six listed here life and safety truthfulness privacy freedom of conscience freedom of speech and private property so these are things that cannot be violated no matter what now the last group of ethics theories that i want to discuss is justice principles so justice principles basically say that a just act respects rights fairness and equality and the idea behind this is that sometimes we make mistakes sometimes we have ethical transgressions so we act unethically but we can make up for it maybe the most complicated of these justice principles is distributive justice but the idea behind this in simple terms is that well benefits and burdens are not equally distributed across members of a society so some people for example are richer wealthier have it easier have a better life and some people have it more difficult which is considered as unfair but we can make up for this by redistributing something and that's why we have taxes for example so taxes are a way to redistribute from maybe wealthier people to the less wealthy people and with that redistribution or this distributive justice we can make up for many unjust unfair situations that we face in our day-to-day life retributive justice is very straightforward it's part of our normal punishment system of our legal system it basically refers to punishment for wrongdoing so if you make a mistake if you violate a law or you violate an ethical rule you can make up for it by being punished for example taking a jail term but then once you finish that jail term then justice is done and then everything is forgotten and equal
again and you can restart from a moral perspective from from an ethical perspective and the last one is compensatory justice so compensatory justice is basically compensation if you have to do an ethical transgression and the example that i had already given is the disowning the land where a government builds an infrastructure project or a new road now that's obviously against ethical standards because you violate the the ownership rights the property rights of a person but you can make up for it according to rights principles through compensation so basically compensation should be equal or better than the law suffered but that leaves con that leaves difficulties when it's about like loss of life for example so so how do you compensate um for this one so just this principle again um are giving an opportunity to people to make up for any wrongdoing and to compensate for that in some shape or form now this brings us to the end of our discussion of ethical theories so again in summary these are the theories that we have discussed and as you can see this explains why in the same situation some people might act differently take take the example that we discussed about advertising misleading advertisement now some business leaders might decide that this is okay and that would be taken or referring to an egoist or maybe even a utilitarianist um decision making if you claim that well if we sell the product if we sell more of the product we can create employment we do good for society we pay taxes and so on so you could argue from a utilitarianist by the society perspective why misleading advertisement would be okay now in the same situation people who follow the rights principles and who argue from that from that perspective might strictly say no you don't want to be deceived by advertisement and if you don't want to be deceived by advertisement then you shouldn't do the same to others as well that would be or ros perspective so these ethical theories might help us to guide our decision making or at a very minimum they might help to explain why people take certain decisions so this is this brings us to the end of this session and also to the end of our business strategy course thank you so much for being part of this journey thank you for following the videos um i hope that some of the content in these sessions was useful to you i hope that maybe you can apply it in your day-to-day business life thank you again for watching and good luck [Music] you
Business_Strategy_Lecture
Business_Strategy_06_Business_Level_Strategy.txt
[Music] over the last five sessions we did a basic introduction into strategic planning and after that we discussed at quite some length the strategic inputs we looked at the external environment we looked at the internal environment and then we discussed vision mission statements and values and you might have asked yourself why does it take so long until he actually gets to the strategic planning itself and if i look at reality at real life and at the strategic planning process that we drive in our companies it usually takes about that much time it takes about one third of the time to set up the entire strategic planning process to study the external environment look at strengths and weaknesses and then set up the vision and the mission statement for the company in order to really get started with the strategy process so in terms of the length of the lectures and the duration in real life it's pretty much aligned but nevertheless today we will finally start with the strategy formulation process and we will talk about the first element which is business level strategy so in today's session we will start talking about the customer because that's the starting point of every good strategic planning process then we will talk about some basic business level strategies and their pros and cons or the opportunities and potential pitfalls and then i will end the session with a relatively new framework the strategy clock and i will explain what that is but before we go into this let's start first with a basic definition and as you will see it pretty much follows the definition of business strategy in general but here this is the definition of business level strategy so the first three bullet points are the same as for business strategy it's an integrated and coordinated set of commitments and actions with the aim of gaining competitive advantage by exploiting core competencies but now with a specific addition in specific product markets so business level strategy looks at specific products in specific product markets the implications are number one it's a single product focus secondly we look at something here that every firm needs so every firm needs a business level strategy why because every firm has a product or a service that it has to sell and that makes business level strategy the core strategy for every business so it's quite substantial and quite important in terms of our strategic planning process but as i mentioned at the beginning of this journey stands the customer so let's look at the customer as a starting point and to kick this off i would like to look at the example of a luxury shopping mall operator so this is a company that has built luxury shopping malls and operates those shopping malls meaning rents out the space but also operates department stores within the shopping malls so what are the customer needs that this luxury shopping mall operator has to look at understand and then satisfy you might say of course it's products so you need lots of luxurious products for people to buy but it's actually a little bit more sophisticated than this when people go into luxury shopping malls maybe they look less for the products but more for the experience so a luxurious shopping experience is very important to be trendy to have a strong brand recognition of course a beautiful atmosphere is very important quality products of course need to be there it's very clear but other elements come in as well like social status social recognition or just
the time that is spent with friends or the fun while shopping so all of these are actually needs that customers bring along when they come to a luxury shopping mall and it's very important for the operator of such businesses to be aware of all of these and find ways to satisfy those needs a good framework to help us understand our customer and also how to satisfy the needs is the three-part who what and how framework let me talk through it briefly the first element is the who so here we have to clearly identify who is our customer and good tools for this are market segmentation and after a basic market segmentation a clear definition of who the target customer groups are the second element is the what here we are talking about what are the needs that we have to satisfy so this starts with a basic customer needs analysis and then goes into a product feature analysis so which products do we have that satisfy those needs and finally the third part is the how so here we are talking about the core competencies that we have or that we need to build in order to satisfy customer needs and then the strategies for building those competencies or strengthening the core competencies that we have let me go briefly into each of those segments to explain a little bit further so the first element is the who so we can do a basic market segmentation and the basic one is starting off with a b2c market so the end consumer market segment this is for businesses that sell directly to end consumers like retailers or restaurants the first and most widely used one is demographics so for example going by gender or by age group socio-economics or by income level could be another way to segment customers geographic for example urban customers versus up-country customers could be another way a fourth option is psychological so looking at factors that play a role when people buy something then consumption patterns for example in the restaurant business we would look at weekend customers versus weekday customers or lunch customers versus dinner customers and then finally perceptual factors as well a little bit more complicated is market segmentation for b2b customers more complicated simply because we as end consumers might not be so familiar with those b2b segments so let me explain briefly one way to segment customers in a business to business environment in industrial segments is by end user so we could look at what is the final usage of the product that we produce and segment our customers in that way we could of course a little bit simpler segment our customers by product so certain product groups that we produce would be one customer segment and another product group would be another customer segment the geographic segment is very similar to the b2c it's by the location of the customer common buying factors here for example one group could be the quality seekers so customers that don't care much about the price but look for ultimate quality or those who value just in time production or those who are simply looking for the best bargain or the cheapest option so this could be another way of segmenting b2b customers by customer size so is our customer a large company or a medium-sized company or a small company and finally by type of stakeholders here for example is it a government-owned company or a public listed company or a privately held company so these are ways to describe the who and to identify who our market target groups are the second step is
the what and here we need to analyze customer needs now this usually causes some debate because there are a lot of myths about customer needs and many people say customers don't really know what they want so we have to tell them what they want and the point here and i took this from the strategy.com website the point here is that yes customers usually don't know what solution they need but they know very well what their needs are so a good market researcher can actually find out what the customer needs are but then it is the task of the company to identify what is the solution for those needs the third and last step is the how so building the core competencies that help to satisfy the customer needs and we discussed core competencies in the previous session but these are basically the resources and capabilities that serve as a competitive advantage and a great example that i want to bring here is 7-eleven's ready-to-eat strategy so the customer need is quite straightforward it's inexpensive quality food that customers can get any time of the day that is what the customer needs now 7-eleven brings a number of core competencies to this game and that's why they are so successful with their ready-to-eat meals here in thailand the first is recipe development so 7-eleven found a way to develop good recipes in good portion sizes that fulfill those needs quite standard thai dishes that are very popular at any time of the day the second is food production at a standard quality a very decent quality for the price that is offered then of course a distribution network across thailand with far over 10,000 stores by now an ample store network anywhere in the country so it's easy for customers to access then good service the staff is very systematically trained and provides good service to the customer very friendly service among thai retailers and finally with its sister company cp food it's a full integration from production to the end consumer and that ensures the lowest possible cost and this is passed on to the customer as the lowest price so you see how 7-eleven identified the need and then used a number of their core competencies to satisfy those needs in the best possible way so after we have understood the customer it's time to discuss the actual strategic thinking process and a great framework for this is to start off by looking at building customer relationships and again it's a three-part framework and this starts off with reach so the first step in strategic thinking is to look at how can we actually build a strong reach to our customers now one of the challenges with building reach for consumer companies is that someone can go into 7-eleven or into any restaurant and you as the business owner will never know that that person was there why because the person maybe pays by cash and leaves no footprint behind so it could be anyone of course the cashier could say this person is probably around 30 to 40 years old and is male or female but that's pretty much all we can know about the customer maybe we can identify the nationality by the language spoken or something but not much more so to have reach what companies do is they build loyalty programs loyalty programs are really there to collect customer data and to learn about your customers to understand who your customers are and to be able to communicate with your customers by having their mobile phone number for example or their address and being able to send direct communication to those
customers or through email so reach is the first step and companies do that by building loyalty programs or similar incentives that would have customers sign up to them now online retailers for example don't have this problem because everyone who orders online needs to leave a delivery address behind a telephone number and an email address so online businesses have pretty good customer information even without loyalty programs but for others for more traditional businesses they need loyalty programs to gather customer information now once you have the basic reach and the basic information about the customer then you can build richness and this is built by studying in more detail the information flow between the firm and the customer so for example how often does the customer come and shop with you what items does the customer usually buy what is the basket size how much does the customer spend per visit per month per year so by having this information we build deeper and deeper knowledge about the customer and then we can build affiliation and affiliation means facilitating useful interactions with the customers so meaning sending them useful offers communicating with them in the right amount in the right frequency and at the right time when the customer actually wants it without creating spam or a nuisance or annoyance to the customer so in the strategic planning process thinking about reach richness and affiliation is a very important step now once we have established who the customer is and once we have established how to build reach richness and affiliation of the customer to our business then we can go into the actual strategic planning process and the perfect starting framework for this is porter's generic strategies so in this framework we see two dimensions on the x-axis we see the basis for customer value which can be either something unique and distinctive or low cost and on the y-axis we see the strategic scope the target market is it a broad target market with a big number of customer segments or is it very narrow very segmented and only one segment or a small number of segments that we want to serve and this gives us three generic strategies the first one is cost leadership so a cost leadership strategy is a low cost strategy serving a broad market segment an example of a cost leadership strategy let's take the automotive industry would be toyota so toyota is clearly a cost leader very cost conscious and delivers a decent quality automobile at a relatively low cost now more towards the uniqueness and distinctiveness this is a differentiation strategy so again it serves a broad market but it's quite unique and distinctive in its features so an example for this one would be mercedes bmw porsche maybe lexus as well so the luxury car companies would sit in this segment in the differentiation strategy now serving a more narrow segment would be a segmentation strategy so examples for this one would be ferrari lotus lamborghini these luxury or as we call them here supercars so they have a very small customer base they're very expensive few people can afford them and they fill a specific need for a specific segment now i have to highlight that some textbooks split this strategy number three further into a focused cost leadership and a focused differentiation strategy but for simplicity and using the original framework from porter i would like to stick to the three generic strategies maybe just to give an example of a
focused cost leadership strategy it could be for example a mobile phone provider that provides a low-cost mobile phone specifically for elderly people maybe with a bigger keyboard a bigger screen easier to read easier to navigate but at very low cost so this could be an example of a focused cost leadership strategy so these are the generic strategies let me give at least the short definitions most of it i mentioned already but just to have the official version of the definition so the cost leadership strategy is defined as a strategy that is focused on providing goods and services with features that are acceptable to customers at the lowest cost relative to competitors so acceptable features at the lowest cost differentiation on the other side is a strategy focused on producing goods and services at an acceptable cost that customers perceive as different in ways that are important to them so a couple of things here first of all it needs to be an acceptable cost still and then second it needs to be features that are perceived as different so perception is important here it doesn't actually have to be very different but the perception has to be different and it needs to be ways that are important to them so let's take the mercedes for example right the difference is not that big actually it's a car that brings you from a to b the driving experience in its very basics is pretty much the same it has the same gears the same blinkers and lights and all the basic features are the same but there are a number of ways that mercedes uses to differentiate the car from a toyota or a honda and that is for one the design for sure which is for some customers very important it's of course also the branding the luxury branding and the luxury style and then of course the quality of the seats and the quality of the interior and so on there are many small elements to the differentiation at its core it provides the same service as a toyota or honda but mercedes bmw porsche they managed to differentiate their product in ways that are important to their customers finally the segmentation strategy is defined as providing goods and services that serve the needs of a particular competitive segment so these are the basic strategies and it was a quick overview of these strategies i would like to go quickly one level deeper and just highlight the opportunities that come with each strategy using the five forces model and then look at some of the risks that also come with each strategy starting with cost leadership so for sure for the cost leader it is very difficult for other companies to compete because in order to offer the same price and keep in mind we're talking about cost here not about price so in order to give the same price or maintain the same margin as the cost leader the other companies would have to catch up and become at least close to the cost leader so in terms of competitive rivalry there's definitely a benefit in terms of bargaining power of suppliers cost leaders can usually absorb price increases and also because of the scale effect cost leaders might be quite powerful compared to others so they might have the upper hand over suppliers in terms of bargaining power anyway customers of cost leaders often have limited bargaining power especially when cost leaders pass on the price impact to the customer then they are also the price leaders in the market and then it is difficult anyway for customers to negotiate any further new entrants will find it quite difficult to enter a market
where there is a strong cost leader because they would have to have the same type of efficiency and that would make it very difficult for those new entrants to come in and survive and then also substitutes if there's a cost leader and he prices accordingly substitutes will have it similar to the competitors quite difficult to get a foothold now this strategy doesn't come without risks so for sure one of the risks is that innovations by other competitors might make the low cost product actually obsolete over time so customers switch to the more innovative product cost cutting might eliminate some features of the product that are actually important to the customers so you can overdo it in terms of cost cutting and then there's always a risk that the cost cutting strategy can be imitated by other players moving on to the next one the differentiation strategy again using the five forces so looking at competitive rivalry in a differentiation strategy taking mercedes or bmw for example we usually have a high loyalty of customers and that helps to defend against competition so people stick to the brands that they usually buy and that they are used to in terms of suppliers even if suppliers have strong bargaining power the differentiator usually has some ways to pass on the higher cost to the consumers because of the loyalty that they bring along for the customer if the product is unique that makes the customer typically less price sensitive and one example i like to give here is luxury brands like louis vuitton or hermès they usually don't discount they don't have to discount because their product is unique if you want to buy a handbag from hermès or a handbag from louis vuitton then there's only one place where you can buy it so it's unique and therefore customers are less price sensitive if they really want to buy these items for new entrants first of all they face the loyalty of the customers to the incumbents so that makes it challenging for them and the new entrant would also have to overcome the uniqueness of the product itself so both are big challenges for new entrants to come into the market and finally substitutes again branding and loyalty also help to fend off any substitute products risks here especially during economically difficult times customers might switch to lower cost products so this is definitely one of the challenges the means of differentiation might cease to provide value over time so as a differentiator you need to constantly reinvest in r&d and in new product development take the example of apple the iphone for example there's a new iphone coming out basically every year or so in order to always stay on top and not give the competitors a chance to catch up with the new developments then sometimes the experience of other products and again especially cheaper products might cause customers to switch but it could also be competitive products other differentiated products an example is a mercedes driver who test drives a bmw likes the driving experience and then switches and of course counterfeiting higher end products is a big challenge for differentiators so any branded product is definitely subject to being copied in an illegal way so that is definitely a challenge briefly about the segmentation strategy as well so in terms of competitive rivalry one of the pros of a segmentation strategy is that large firms maybe sometimes overlook the niches so they don't go into the niches even though they could because it's not interesting enough for them it's too small in terms of bargaining
power of suppliers well higher prices in a niche segment can typically be passed on to customers for customer bargaining power first of all niche customers are often maybe less price sensitive or you can offer targeted customer service better tailored to the needs of the customer better tailored to the customers themselves so offering a point of differentiation here then about new entrants so niche markets may not be large enough to be interesting for big new entrants and for smaller new entrants it might still be challenging to go into the same market when there's already a strong incumbent you might capture only a portion of the already small market share which might not be attractive enough and then substitutes those substitute products would have to address the same niche needs of the target segment so it is often very challenging to find substitute products that address exactly the same needs now about the risks of this strategy definitely there is a challenge of competitors out-focusing each other so going into ever narrower segments and therefore taking market share from the other and making their own market share also quite small to the point where it's not attractive anymore the second challenge competitors that have a broad market scope might actually discover the niche and then decide to go into that niche an example of this is mercedes with its maybach which went into the ultra luxury segment and then the third the needs of niche customers might assimilate over time and get similar to those of a broader segment and therefore the segmentation strategy itself might no longer be relevant so now we have looked at some of the basic generic strategies and looked at the mechanics of why those strategies work why they are relevant and why there are very successful players like for example toyota or bmw and mercedes who for the past decades have followed these strategies and maintained leading market positions and been quite successful finally i would like to close with a relatively new framework which is called the strategy clock and the special thing about the strategy clock is that it actually combines porter's generic strategies and puts the price element in the framework as well whereas previously it was only the cost that we looked at but here the price comes into play and i would like to start at the 12 o'clock point and go clockwise around and explain a little bit what strategies there are the two dimensions are on the x-axis the price and on the y-axis the perceived product benefit or the perceived service benefit so at 12 o'clock we have a strategy that is specifically designed to quickly gain market share you see it's the center point so in terms of price there's no premium there's also no discounting it's pretty much the market price so what happens at the 12 o'clock point is we offer differentiation without a price premium so this is there to quickly gain market share now this makes a product very attractive and you can gain market share with this strategy but also at this point you don't make a very big margin so maybe you give away some opportunity of what could actually be possible now a better point is pretty much the two o'clock point which is differentiation with a price premium so this is pretty much the optimum so here we have a clearly perceived benefit of the product or benefit of the service and we are charging a price premium on it and that is the classical differentiation strategy then as we pass through basically two o'clock to seven
o'clock this is the non-competitive area you see as we go down the clock the perceived product and service benefits get smaller and smaller but still we are charging a price premium for this product and this is typically something that doesn't work so these are non-competitive strategies and as we move downward so as we move towards the three o'clock point it will become less and less interesting and attractive for customers because they pay a premium but don't get any big benefits over what is already in the market and therefore they will probably jump off so the feasible strategies start again at around the seven o'clock point and this is pretty much the counterpoint to the differentiation strategy this is the classical low cost and then also low price strategy so we have the lowest cost possible and we offer the lowest price and that's called the no frills strategy like airlines such as ryanair which offer the absolutely lowest price but also operate at the lowest cost and offer no specific additional benefits to the customer except bringing them from point a to point b now at nine o'clock we have a low price strategy so basically we have the lowest price possible but we offer a product that is pretty much standard in the market so that would mean that we probably have a very slim very small margin but could probably get decent traction now there's one segment left which goes from 9 to 12 which is a hybrid strategy this one is interesting but very very difficult to do here you have a low price and at the same time you offer a product that is at least perceived as having a lot of benefits and the one company that comes to mind here is ikea the swedish furniture company they offer furniture at the lowest price possible but because of the design and the entire way the product is distributed and marketed it is perceived as somehow having higher benefits than other furniture offers that are in the market so that's a very smart strategy and a very successful strategy if you can pull it off but there are very few companies that actually do so so this is the strategy clock again combining porter's generic strategies and bringing price into the game so in this session we got a brief overview of the basic strategies business level strategy which is the most basic starting point of a strategic thinking process the next session will go a little bit deeper into business level strategies and talk about competitive rivalry and how we use the business level strategy to specifically fight off competition [Music]
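as a small supplement to this session here is a minimal illustrative sketch of the strategy clock logic in python. it is not taken from the lecture slides: the numeric scores and zone boundaries are assumptions added purely for illustration, with price and perceived benefit expressed relative to the market average (1.0 meaning in line with the market), and the example companies at the end are made up.

# a minimal sketch of the strategy clock, assuming price and perceived benefit
# are simple scores relative to the market average (1.0 = in line with market).
# the zone boundaries below are illustrative assumptions, not lecture values.

def strategy_clock_zone(relative_price: float, perceived_benefit: float) -> str:
    """Map a (price, benefit) position to a rough strategy-clock zone."""
    if relative_price > 1.0:
        # a price premium only works if the perceived benefit is also high
        return ("differentiation with a price premium (around 2 o'clock)"
                if perceived_benefit > 1.0
                else "non-competitive zone (roughly 3 to 7 o'clock)")
    if relative_price < 1.0:
        if perceived_benefit > 1.0:
            return "hybrid strategy (between 9 and 12 o'clock)"
        if perceived_benefit < 1.0:
            return "no frills (around 7 o'clock, lowest price and lowest cost)"
        return "low price strategy (around 9 o'clock, standard product)"
    # roughly market price
    return ("differentiation without a price premium (12 o'clock, gain share)"
            if perceived_benefit > 1.0
            else "undifferentiated at market price")

if __name__ == "__main__":
    # made-up example positions for illustration only
    for name, price, benefit in [("budget airline", 0.6, 0.8),
                                 ("luxury car brand", 1.5, 1.4),
                                 ("flat-pack furniture chain", 0.8, 1.2)]:
        print(f"{name}: {strategy_clock_zone(price, benefit)}")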
Business_Strategy_Lecture
Business_Strategy_08_Corporate_Level_Strategy.txt
[Music] in the last two sessions we talked about business level strategy so these are strategies with a single product market focus in today's session we will cover corporate level strategies so we will expand our focus to more than just one business to a portfolio of businesses and we will talk about diversification reasons for diversification as well as portfolio management and we'll end the session by looking at what the stock market actually says about diversification but before we do that as usual we have to start off with a definition corporate level strategy is an integrated and coordinated set of commitments and actions a firm takes to gain a competitive advantage by selecting and managing a group or a portfolio of different businesses competing in different product markets at this point i would also like to clarify a few terms this is an example of a value chain so i took the example of the automotive supply chain here starting from raw material production to part manufacturers then to the car manufacturers which are the brands like mercedes or toyota to the distributors the car importers then the dealers eventually going to the car buyers the end consumers now when we talk about integration at the same level of the supply chain then we talk about horizontal integration so this would be for example the merger between daimler and chrysler to daimlerchrysler back in the early 2000s that would be a horizontal integration we talk about vertical integration when we integrate along the supply chain so for example a car manufacturer acquiring or merging with a supplier or forward integration taking over the dealerships for example or the car importers so just to clarify these terms next we will talk about different levels of diversification we start off with low levels of diversification and here we have the single business which we already know from the business level strategy in this case 95 percent or more of revenues come from one single business an example for this would be starbucks which is basically the coffee chain business a slightly higher level of diversification is when we have dominant business strategies which means that 70 to 95 percent of revenues come from a single business but there's also another business so here in thailand we would have carabao as an example for this kind of level of diversification carabao has its branded energy drinks but it also has a distribution business so energy drinks in 2020 were about 82 percent of the total revenues and the distribution business was sitting at 14 percent and there were some other businesses involved as well but of course the energy drink business for carabao was dominant so that would be a dominant business diversification moving on to moderate to high levels of diversification we have a level of diversification which is called related constrained diversification in this case less than 70 percent of revenues comes from a single business but all the business units share product linkages technological linkages or distribution linkages so an example for this would be cp all which has the convenience store business with 7-eleven and then makro cash and carry the split here was around 64 percent in 2019 for the convenience store business and 36 percent for cash and carry so about two-thirds and one-third that would be related constrained because both are retail businesses obviously a slightly higher level of diversification is the related linked
diversification here less than 70 percent of revenues come from a single business and there are some links but only very limited links between the businesses an example for this one would be minor international or mint here in thailand where about 60 percent of the business comes from hotels roughly 35 percent comes from food and another 5 percent or so comes from the lifestyle business which is a retail business between hotels food and the retail business there are very limited links close to none at all and finally we have very high levels of diversification this is also called a conglomerate diversification where the businesses have basically no common links with each other an example for this one would be siam cement group which has a cement business a chemicals business and a paper business so apart from the fact that they are all kind of in the industrial sector there's close to no linkage between those three businesses and therefore it's unrelated diversification so why do businesses actually diversify and does this diversification actually create value this is a big strategic question that we have to ask ourselves so let's look at it and i'll give you the big picture and the overview first there are basically based on research 11 or so reasons for diversification and some of them are value creating again on average and based on research some of them are value neutral and some of them are value reducing so what it means is if businesses for example diversify because of economies of scope which is in the first group then this diversification on average based on the research creates value so looking at those one by one the first group of reasons is value creating and the first and foremost is economies of scope so economies of scope are defined as cost savings that a firm creates by sharing some of its resources or by transferring competencies between the businesses so examples of those activities are combining the purchasing power for example if two businesses have similar suppliers or it could also be a technology transfer between the different business units and examples for transferring core competencies could be if one business unit does not need to develop its own resources so we have capacity savings or we transfer some intangible resources like knowledge or patents from one bu to the other so that one bu gets competitive advantage over its rivals from that diversification from those economies of scope so if companies diversify for this reason it creates value on average the second reason that creates value is market power so this is when a company diversifies to sell products at a price level above the competitors or if it's able to reduce its cost below the competitor cost level one area where market power occurs is multi-point competition so for example when a company follows its competitors and competes in the same product markets or geographical areas as a competitor so for example if a competitor moves to cambodia or to laos then we also do the same and we have multi-point competition and therefore we keep the market power of the competitor in check and maybe expand our own market power and here we also have to talk about vertical integration so backward and forward integration as we just discussed so backward integration if we absorb the supplier margin by producing our own input products in-house or forward integration by creating our own outlet for our output by basically taking on the next step in the value chain so all of these
activities increase market power and as i said on average they create value the last value creating reason is financial economies financial economies are defined as cost savings through improved allocation of financial resources and that can come from basically two sources the first one is efficient internal capital market allocation what does that mean it means that when we invest in the outside market in the stock market for example in companies where we don't have full disclosure or full information then we run somewhat blind we are reliant on the information that is provided to us so we have a very limited information flow and we also have the same information as any competitor has so therefore there is limited value creation however if we use the internal capital market then we know exactly how each business unit is performing so we can allocate our capital in the most efficient way to those business units and those areas that have the highest growth potential or the highest profit potential and can therefore get the best return on our capital that's efficient internal capital market allocation the other thing that occurs when companies look for financial economies is restructuring of assets so for example the private equity model you buy a company you restructure it and you sell it off with a profit we will talk more about this in the next session so these are financial economies that can occur and again on average they're value creating now there are a number of reasons that are neither value creating nor value destroying so basically value neutral and the first one of these is anti-trust regulation so companies might diversify because they are forced to do that by law so competition laws might prevent you from growing within your own industry and therefore all you can do is to look outside of your own industry if you want to grow and these anti-trust laws typically foster conglomerate diversification because businesses are forced to look outside their own industry another reason for businesses to diversify sometimes is tax laws at two levels here actually first at the shareholder level shareholders may actually prefer that companies reinvest into diversification or into other businesses because of personal tax reasons if the company pays out the excess cash as dividends then this is subject to income tax in most countries so shareholders might actually prefer that you reinvest that money in other companies or diversify the business because that share value increase is not a taxable event in most countries the other reason is corporate tax so if a company invests in new diversified assets this may lead to higher depreciation allowances which are a non-cash expense and are tax deductible and might give the company a financial benefit again anti-trust regulation and tax laws are both value neutral reasons so on average based on research they do not add value to the business another one is low performance so companies might diversify because their current performance is poor so companies might hope to improve the overall performance of their business by diversifying and based on research you see here the chart and the reality is that it really depends on what type of diversification happens on average it's value neutral as discussed but there's a higher chance to get a benefit for the business if we are talking about a related constrained diversification so where the businesses have strong linkages for unrelated diversification the opportunity of benefit
is quite a bit lower another reason is uncertain future cash flows and basically risk diversification so diversifying into different types of businesses because of risk or risk management interestingly enough while this is a very frequent reason this is also value neutral so on average companies don't create value when they follow a diversification strategy for this reason so uncertain future cash flows what does it mean the core businesses might be subject to market movements for example the oil and gas industry moves with the oil price so you might aim to smooth the business volatility by diversifying the business but again on average this is value neutral synergies are another reason to diversify or rather the belief in synergies now the reality is that synergies are very very hard to realize synergies occur if companies work together and by working together the value created is higher than when the companies work independently but again the reality is it usually doesn't occur so value neutral on average and finally the acquisition of tangible and intangible resources so diversifying into another business to get a benefit from the other business's tangible or intangible resources now from my personal experience unless the resources were underutilized previously it's very hard to actually realize benefits across the businesses for example the value created by sharing in one business unit often comes at the expense of the other business so one business will benefit and the other business will get less attention or will be neglected and therefore on average again for the total combined business this is value neutral now there are also reasons that on average destroy value let's look at this and there are basically two one is diversifying managerial employment risk so from a management perspective the larger the business the more growth opportunities or career opportunities exist within the business and that increases the likelihood for any underperforming manager to find a new role within the same company and it basically extends the lifetime of those underperforming managers so no wonder that if this is the reason for diversification the strategy actually destroys value and another one is increasing managerial compensation again from a management perspective any diversification any acquisition or enlargement of the business is very attractive it's exciting and it gives opportunities for new roles like for example board seats or an increased span of control and that typically comes with a higher level of compensation so if this is the motivation for diversification then again on average it destroys value now let's look at some strategies on how to think about managing a portfolio and the most famous tool or the most famous framework for doing this is the bcg matrix the boston consulting group matrix from the 1970s and 1980s first talking about portfolio management so portfolio management is a method of first assessing the competitive position of a portfolio of businesses within the corporation so looking at how each business is performing the second is suggesting strategic alternatives for each business so based on the performance what do we do with each business and how can each business move forward basically defining a business level strategy for each of these businesses and then identifying priorities for the allocation of resources again financial economies come to mind here of how to best allocate the internal capital to maximize the
benefit and the profit for the business so the boston consulting group matrix works like this on the x-axis you have the relative market share and this is measured relative to the largest competitor so in the middle at one we would basically be as large as our largest competitor in size at the very extreme on the left side of the x-axis we would be 10 times larger than the largest competitor so a very dominant position and on the right side of the x-axis we would be like one-tenth of the largest competitor so relatively small compared to the dominant player on the y-axis we have market growth or industry growth and this is just measured in percent so in the middle you have roughly 10 percent industry growth and that ranges from zero to 20 percent the circles that you see here are the business units so each circle represents one business one product portfolio and you see here the bigger ones make up a bigger share of the total portfolio of the company while the smaller ones are relatively unimportant i would say or small revenue shares for the company so looking at the different fields the stars for example these are the high growth businesses where we have a relatively strong market position a dominating market position over the next competitor so these are the ones that we want to really push where we want to invest capital and keep them growing then the cash cows in the bottom left are those where we have a really dominant market position so bigger than the next largest competitor but limited growth so these are the areas where we would just like to maintain our market share continue to grow the business at the low rates and just as the name of the field suggests use this business as a cash cow to fund other ventures now the dogs are businesses where most consultants and managers recommend to get out of the business so basically to divest this kind of business because first of all they have low market growth and we are a relatively small player we have a giant competitor in the same industry so the best thing is to get out of these kinds of businesses and the question marks again as the name suggests are not so clear we see high market growth but we are relatively small compared to competitors so either we see the potential to grow so we invest heavily and make sure that we maintain or retain in the end a strong market position at least as big as the next competitor so we move it towards the star area or if we are really a dwarf compared to the rest of the industry then we better divest and move out so this is one way of thinking about managing the portfolio of the business by using the bcg matrix now a way to look at the bcg matrix is also to look at it from a product life cycle perspective so you see here the product life cycle with time on the x-axis and the sales volume on the y-axis and the product usually goes through an introduction stage a growth stage a maturity stage and a decline stage and very often not always but very often the product in the bcg matrix fits to this product life cycle so at the very beginning of the introduction phase products are typically question marks and we don't know yet where the journey is going we see high growth but we are also relatively small compared to competitors then once we gain in scale the product becomes a star and eventually as the growth subsides it moves towards a cash cow and in the end if the demand is really slowing down it might move to the dog stage or maybe a bigger
competitor enters the market and beats us in terms of scale then the product might move to the dog stage now this relationship between the product portfolio category and the stage of the product life cycle is a possibility but it's not a given so it's not guaranteed that it always has to be like this finally i would like to look at what the stock market says about diversification strategies and actually the stock market does not like diversification so much particularly if it's a conglomerate diversification so meaning unrelated businesses in the stock market there's a so-called conglomerate discount and that is roughly based on research about 20 percent so what does it mean if you add up the value of the businesses one by one the sum of the parts gives you 100 percent but the stock market value of the conglomerate of those different businesses is actually quite significantly lower about 20 percent lower sitting at around 80 percent of the actual possible value now some reasons and these are my personal opinions for one i believe that many investors are actually aware of the complexity that comes with managing these conglomerates management time is spread very thin and therefore each of the businesses doesn't get enough management time and therefore investors believe that the business is not managed to the optimum and as the stock price depends on the perception of the investors that is reflected in the price so that's one potential reason another reason that i learned during my time in investor relations analysts always want to build a model they're building a model to try to evaluate or value your business and when the business gets too complex with too many different business units then they struggle to build a comprehensive financial model and therefore they assign a discount to their valuations simply because they cannot be so sure of what the actual value creation will be in the future for each of the business units and therefore they discount something to be on the safe side and the third is there might actually be diseconomies of scale where a conglomerate becomes so big and so complicated and so complex that the economies of scale subside and maybe turn negative and if you would split up the company then those diseconomies of scale would actually disappear so interesting to see that the stock market does not value diversification so this brings us to the end of this session in the next session we will look a little bit deeper into mergers and acquisition strategies [Music]
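as a small supplement to this session here is a minimal sketch of the bcg classification logic in python. it assumes the common textbook cut-offs mentioned above (relative market share of 1.0 and market growth of about 10 percent), and the business units and figures in the example are made up purely for illustration.

# a minimal sketch of the bcg matrix classification described in this session,
# assuming the cut-offs discussed above: relative market share of 1.0 (as large
# as the largest competitor) and market growth of 10 percent. the portfolio
# below is hypothetical and only there to show how the rules apply.

def bcg_category(relative_market_share: float, market_growth_pct: float) -> str:
    """Classify a business unit into star, cash cow, question mark or dog."""
    high_share = relative_market_share >= 1.0
    high_growth = market_growth_pct >= 10.0
    if high_share and high_growth:
        return "star"          # invest and keep it growing
    if high_share and not high_growth:
        return "cash cow"      # maintain share, use the cash to fund other ventures
    if not high_share and high_growth:
        return "question mark" # either invest heavily or divest
    return "dog"               # usually a candidate for divestment

if __name__ == "__main__":
    # hypothetical portfolio: (name, relative market share, market growth in %)
    portfolio = [("convenience stores", 1.8, 6.0),
                 ("ready-to-eat meals", 1.2, 14.0),
                 ("online grocery", 0.4, 18.0),
                 ("printed catalogues", 0.3, 1.0)]
    for name, share, growth in portfolio:
        print(f"{name}: {bcg_category(share, growth)}")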
Business_Strategy_Lecture
Business_Strategy_09_Acquisition_Restructuring_Strategies.txt
[Music] in our last session we talked about corporate level strategy and specifically diversification now one way to achieve diversification is to build a business from scratch however this is often lengthy and it comes with a lot of risk because we don't know if our business actually makes it so one way of making this faster and getting a higher level of certainty is by actually acquiring a business so today we will talk about acquisition and restructuring strategies specifically we will talk about reasons for acquisitions why so many acquisitions fail and how to make them successful and finally about restructuring strategies so after we acquire a company what do we do with that company to get the best value out of it but before we do that we first have to talk about some definitions the first definition is a merger so a merger is an agreement between two firms to integrate their operations on a relatively co-equal basis so a merger actually is not an acquisition an acquisition is when one firm buys a controlling interest so more than 50 percent of another company and the intent and the result of that acquisition is to make the acquired firm a subsidiary a takeover is a special type of acquisition where the target firm does not solicit the acquiring firm's bid what does it mean in a takeover situation the acquiring company buys out the target firm without the target firm's consent it's also often called a hostile takeover so these three terms are often mixed up and confused so it's important that we have a clear definition at the beginning i would like to talk about some of the reasons why companies buy other companies why do acquisitions happen the first reason is increased market power so this can come from a couple of areas in the case of horizontal acquisitions meaning if one company buys a competitor or a company in the same industry this can be cost-based synergies so basically reduced costs from the combination of the two businesses or revenue-based synergies so finding new revenue pools or new revenue sources after combining the two businesses for vertical acquisitions for example when a company buys into their supplier then we have control over additional parts of the value chain and therefore we have margin control and that gives us additional sources of market power as well if we have related acquisitions then we might have an increase in customer base and because we might use similar or even the same suppliers we have an increase in bargaining power with those suppliers also we might use an acquisition to prevent competitors from growing so either by growing our own market share and leaving no room for the competitor or by buying the company before the competitor buys the company and therefore we have a competitive edge over our competitor the second reason is overcoming entry barriers so very often companies buy other companies with cross-border acquisitions so they get immediate market access especially when there are economies of scale in place or differentiated products this cross-border acquisition will make it very fast to enter a new market the third reason is linked to new product development so sometimes companies buy other companies in order to get access to their new products or to their innovations and the reason for this is just to give you some statistics internal product development is actually costly and risky eighty-eight percent of innovations fail and sixty percent of the remaining twelve percent face imitation within four years so
this is particularly costly in high-tech and pharma industries so by buying the innovations or by buying a company that already has innovations and registered products there's higher predictability and we might get better returns on investment the fourth reason is increased diversification so in the previous session we already talked about the reasons for diversification and acquisitions can foster related or unrelated diversification depending on which type of business we buy the fifth reason is reshaping the firm's competitive scope so acquisitions might help to lessen the firm's dependence on one product and therefore diversify and reduce the risk for the business and finally an important reason is also learning and developing new capabilities so for example by gaining access to the intangible resources of a company we might gain capabilities that we currently don't possess and if we need to build those capabilities internally it might take very long and be costly so by acquiring a company we might gain these capabilities in a faster way now one of the challenges with acquisitions is shown in this chart most acquisitions actually fail only about 20 percent of acquisitions are successful another 20 percent are clear failures where it's a real catastrophe and the remaining 60 percent deliver results that are disappointing there's a great video about the biggest disasters in acquisitions and mergers and i will link this video for you here now why do so many acquisitions fail let's look at some of the reasons the first and foremost reason is integration difficulty corporate cultures can be quite diverse maybe financial systems don't link up the relationships between the management or even at the working level don't really work out and very often there are fights over the status of executives of the acquired firm and this can lead to a fallout so integration difficulties are one of the most frequent reasons why acquisitions fail the second most frequent reason is that there was actually not enough due diligence and with due diligence we mean the evaluation of the target firm before the acquisition happens so in this due diligence process the company tries to assess the true value of the target firm how the transaction is financed the differences in culture the tax consequences and so on so all the aspects of the acquisition are assessed during the due diligence process this is typically conducted in collaboration with investment bankers accountants lawyers and management consultants but sometimes it happens that companies don't do this due diligence process very carefully miss out on important aspects and therefore the acquisition later fails as a consequence the third reason is that after the acquisition the acquiring company ends up with large extraordinary amounts of debt this leads to increased cost of debt so typically you face a downgrading of your credit rating and then the cost of debt increases and therefore there's an increased likelihood of bankruptcy the whole cost of capital increases along with this acquisition and being in a difficult debt position might lead to not being able to make required investments in r&d in hr in training in marketing and so on and this postponing of r&d investments often has a long term negative impact on the business so therefore in the long term the acquisition also fails because of the extraordinary amount of debt reason number four is that there is too much diversification so especially in conglomerate diversification when you buy a company that is not related to
your existing business it is very difficult for management to be knowledgeable enough in the different industries and therefore often there's a lack of attention to detail and it can make it difficult to control the performance and therefore the business performance slips and the acquisition fails reason number five is that synergies that are forecasted or that are built into the acquisition models are often not achieved very often it's the main motivation for the board of directors to make the investment or for the management to do an acquisition but actually it's very challenging and very often synergies are overestimated just to justify the acquisition in the first place and after the fact it doesn't turn out that way and therefore the acquisition also fails reason number six is that an acquisition takes a very large amount of management time i have to say that deal making like making these acquisitions is fun it's exciting you deal with new people you deal with a new environment a new business a new culture for management this is really fun and it involves all kinds of management activities like searching for the target doing the due diligence which is very time consuming also preparing and executing the negotiations and then after the deal is done managing the integration so what happens in that process is that a lot of focus goes into the acquisition and there's a big risk that the managers lose the focus on the fundamental problems of their core business and therefore the whole business together with the acquisition also fails and finally there might be diseconomies of scale especially for unrelated acquisitions so these diseconomies occur when the business just becomes too large it's too difficult to manage and there are additional costs of managing this big company and they outweigh the benefits of the economies of scale and actually turn them negative so this might lead to higher costs and then eventually the acquisition fails because the synergies are not achieved or the economies that were hoped to be achieved from the acquisition don't come through now not all acquisitions fail about 20 percent of acquisitions are actually successful so what are some of the elements of those successful acquisitions the first attribute is when the acquired firm has complementary assets to those of the acquiring firm if that is the case then there's a high probability of synergies including economies of scope and increased purchasing power and so on and therefore competitive advantage so related acquisitions with complementary assets are always a very important attribute to make acquisitions successful the second one is that an acquisition should be friendly so statistically hostile takeovers don't turn out so well if the acquisition is friendly then the integration is typically faster and more effective and also it's more likely that the company pays lower premiums on the acquisition price and therefore the acquisition becomes more affordable the third attribute of an effective acquisition is to do an effective due diligence to evaluate the target firm's health so financial health cultural health hr if this is done properly then you acquire a firm with the strongest capabilities and don't overpay the fourth attribute is when the acquiring firm is in a financially favorable position before the acquisition happens if that is the case if there's some cash in the business then the financing either debt or equity is easier and less costly and the firm
doesn't risk going into too much debt which is the next attribute so the merged firm also needs to maintain a low or moderate debt position again this will lead to lower financing costs and therefore lower risk in the long run attribute number six is that the acquiring firm needs to be focused on r d and innovation now this is needed to maintain a long-term competitive advantage in markets and actually from the research it is not enough for just the acquired firm to have those r d resources and this knowledge this culture needs to be in the acquiring firm already and finally the acquiring firm needs to manage the change well this also means that the acquiring firm needs to be flexible and adaptable this is not a given because the acquiring firm usually has the upper hand because it takes ownership of the new company but the more flexible and open-minded the acquiring firm is the faster the integration will be and the higher also the likelihood to achieve synergies between the firms so these were the seven attributes of successful acquisitions finally i would like to look at what happens after the acquisition so here we are talking about restructuring strategies and there are three types of restructuring strategies the first is downsizing so after you acquire the firm you might decide to reduce the number of employees reduce any redundancies between the two companies or even reduce entire operating units so shut down entire businesses and gain synergies from this so that's downsizing a second different strategy is down scoping so here it is about divesting unrelated businesses and maybe doing spin-offs so this is to reduce the complexity of the business you sell off those parts that don't fit into the portfolio unrelated businesses and the third strategy is a leveraged buyout so in this case the company might decide to buy all the stock of the company and make it private so after having acquired a majority for example 60 percent the company might decide to also acquire the remaining 40 percent take it off the stock market for example make it private and well this can lead to a large amount of debt and might require some sales of assets but it also means that you have full control over the business so let's look at how these strategies typically play out starting with downsizing now in the short term downsizing brings some positive impact because you have reduced labor cost and you might achieve some short-term synergies in the long term there's a risk of the downsizing strategy that it leads to loss of human capital so when good people leave the business or you lose good people as part of the labor reduction and therefore this might lead to lower performance for the downscoping strategy there are two effects on one side there's reduced debt and reduced financing costs because you spin off the company and on the other side there's also a positive impact on strategic controls because the remainder of the business the core business is actually strengthened by having more focus on it and this typically leads both together to higher performance so down scoping strategies on average lead to higher performance in the long term now a leveraged buyout an lbo also leads to higher strategic control because you take full control of the business but it also leads to high debt costs because you basically need to acquire 100 percent of the equity of the business and as a result this can lead to higher performance so some leveraged buyouts actually turn out pretty well in the long
term but it can also lead to higher risk because of the high debt levels so we have looked at corporate level strategies diversification and specifically acquisitions in the next sessions we will talk about more specific strategies like international strategies as well as cooperative strategies [Music] you
Business_Strategy_Lecture
Business_Strategy_05_Vision_Mission_Values_and_a_few_Loose_Ends.txt
[Music] in the last two sessions we covered two of the inputs into our strategy process first we covered the external environment and then we discussed analyzing the internal environment some of our strengths and weaknesses in today's session i would like to cover the third of the strategic inputs which is the vision and the mission of companies as well as the core values now this is a factor that is often underestimated and overlooked and in this short session i would like to highlight the importance of this factor and i would like to start by looking at the hierarchy of these elements and basically highlighting what we are going to cover today first we are going to look at the vision what is a good vision statement what is it there for what is its purpose the second then is the mission statement of the company then we look at strategic objectives and finally we look at the importance of having core values so let's start off with the vision statement a vision basically is defined as an organizational goal and this goal should evoke powerful and compelling mental images so for example it should cover what is the future that the organization seeks to create it should be an aspiration that will enthuse gain commitment and stretch performance so it should be highly motivating and it should encourage the employees and stakeholders to support the vision of the organization and to get excited about it one way i like to think about it is to ask myself what do i want to achieve what should the organization look like in 5 10 20 years or even further out then looking at vision statements it is actually not so easy to find good vision statements or good examples of vision statements the reason is that many companies seem to mix up vision and mission statements we will see this in the next example but here i found seven examples of vision statements that i found inspiring and that i found particularly good statements so the first one is actually from the south african department of education whose vision statement is a south africa in which all our people have access to lifelong learning that's a fantastic vision it's aspirational it's inspirational that is really great mcdonald's our vision is to be the world's best quick service restaurant very clear very straightforward everyone in the organization will easily understand what we are there to work for and maybe to take a third example ge where the statement is to be number one or two in every single market that we serve it's very straightforward very clear easy to remember gives a strong sense of purpose and a strong vision for the employees next is the mission statement and the mission statement is basically an operationalization of the vision so in the definition it aims to provide employees and stakeholders with clarity about what the organization is fundamentally there to do which means the purpose of the organization its scope of operations and what the basis of its competitive advantage is supposed to be a couple of examples of great mission statements tesla first to accelerate the world's transition to sustainable energy very clear another one from ted very straightforward also simply to spread ideas it's actually built into the branding of ted already which you see in the logo ted ideas worth spreading and the purpose the mission statement is to spread ideas and for linkedin the purpose is to connect the world's professionals to make them more productive and successful so this is the mission statement
directly derived from the vision statement and next in the hierarchy of the goals are the strategic objectives so these are statements of very specific outcomes that are supposed to be achieved so they further operationalize the mission statement and provide guidance of things that the organization can do to achieve its vision and mission and those strategic objectives should be smart so specific measurable achievable relevant and time-bound and just some textbook examples of those strategic objectives i just pick one financial and one non-financial objective so that could be for example to increase sales growth by six percent to eight percent and accelerate core earnings growth from thirteen percent to fifteen percent per share in each of the five years so you see it's measurable it's very specific it's probably achievable relevant to the organization and you see with the number five years it's also time bound so there's a time horizon set for it or a non-financial objective reduce volatile air emissions 15 percent by 2015 from the 2010 base year indexed to net sales that is a 3m example of non-financial objectives again very specific measurable achievable relevant for what the company wants to achieve and with a five-year horizon again 2010 to 2015 it is also time-bound next an element again that is often overlooked many companies of course have core values but i tend to think that companies don't spend enough time on their core values and don't spend enough time really internalizing those core values and bringing them across to their people so why is it so important corporate values communicate the core principles and these core principles should guide the organization's strategy that's why it's so relevant to the strategic thinking and the strategic planning process now why core values for me the ultimate test always is do these values change with circumstances and if the answer is yes they would change with circumstances then the values are not core they are not enduring and not core to give you an example if you say we always set the highest professional standards like here in the example of mckinsey but if times get tough and if the economy is doing bad or the company is doing badly financially then we are willing to take shortcuts then this is definitely not a core value because it comes with a condition the case of mckinsey here three core values adhere to the highest professional standards improve clients' performance significantly and create an unrivaled environment for exceptional people at my time at mckinsey it was very clear we knew these values and every year we were reminded of the values through values day which was one day set aside to specifically think about and talk about the values and promote them among staff and it was made very clear at any point in time that these were non-negotiable and non-compromisable no matter what these three principles were always to be adhered to and therefore these are core values finally we are now at the end of the session about strategic inputs and i have some what i like to call loose ends that we have to catch up on they didn't really fit into any of the three input factors but are more like general strategic inputs and i would like to talk about at least two of these briefly the first one is a framework that summarizes basically the contents of the external environment and the analysis of the internal strengths and weaknesses and that is the swot framework so swot stands for strengths weaknesses opportunities and threats and
basically it combines both elements the internal and the external environment s and w so strengths and weaknesses come from the internal environment and from internal factors from within the organization and within the control of management and opportunities and threats are outside of management's control they stem from the external environment of course strengths and opportunities are helpful to managing and achieving the goals and weaknesses and threats are harmful now the only point that i would like to make here that i would like to highlight here is when using this framework please be careful not to mix up strengths and opportunities and weaknesses and threats the key criterion really is written here on the left side for strengths and weaknesses these are elements that are purely within the control of management so they can be influenced by managerial decisions opportunities and threats are there to be managed to be defended against or to utilize for the benefit of the company but they are outside the control of management for example an oil price increase is outside of the control of anyone in the oil and gas industry and this is likewise for a decrease in oil prices while quality or safety is well within the realm of management and can be controlled by management as a strength or as a weakness and the weakness of course needs to be fixed and defended against and same for a threat the second framework that i would like to introduce as the last part of the strategic inputs is the growth strategy framework the three horizons of growth i find this a very good framework at the outset of the strategic thinking process because it links directly many of the elements that we discussed the external environment the internal environment the core competencies that we need to build as well as it links directly back to the vision and the mission statement because it looks forward at least five years and so this framework helps to build a strong vision for the company and the idea behind this framework is that each company needs to think in three horizons in order to sustainably grow the first horizon concerns the core business this is the existing business the business that is there today and here the idea is that for the next one or two years the goal is to defend and expand the current business as a core business and the focus here is on immediate gains in revenues and profits that's the start of the growth journey now in order for the business to grow beyond that beyond the next one or two years it needs to go to horizon two where we need to think about initiatives that build momentum and that give rise to emerging new businesses that might not be big right now but that might be up and coming in the future so this will drive growth within the next three to five years and will become relevant by then so the focus here is not on profitability but it is purely on revenues and market share and turning some future ideas into future profitability drivers that's horizon 2.
now going out five years and longer this is what i would call seed money so in horizon 3 we would see some early ideas that might seem out of scope right now a little bit crazy outside the box and we try new things that could turn into growth ideas in the future but here the focus is neither on revenues nor on profitability but this is about trying new things about innovation here for example a kpi could be the number of new ideas or the number of new initiatives launched and the idea is that these have the highest value potential in the future but right now they are still small so their value will come in five years or longer or they might even break away and fall out completely so this is the area to take a little bit of risk and to try new things with limited capital investment but seeding ideas for the future so as we set out our vision i find the three horizons a great framework to look forward and to identify factors that will grow the business so this brings us to the end of our strategic input factors and by now we are ready to actually and finally start our strategic thinking process and our strategic planning process [Music] you
MIT_RES6012_Introduction_to_Probability_Spring_2018
S015_Infinite_Series.txt
This will be a short tutorial on infinite series, their definition and their basic properties. What is an infinite series? We're given a sequence of numbers ai, indexed by i, where i ranges from 1 to infinity. So it's an infinite sequence. And we want to add the terms of that sequence together. We denote the resulting sum of that infinity of terms using this notation. But what does that mean exactly? What is the formal definition of an infinite series? Well, the infinite series is defined as the limit, as n goes to infinity, of the finite series in which we add only the first n terms in the series. However, this definition makes sense only as long as the limit exists. This brings up the question, when does this limit exist? The nicest case arises when all the terms are non-negative. If all the terms are non-negative, here's what's happening. We consider the partial sum of the first n terms. And then we increase n. This means that we add more terms. So the partial sum keeps becoming bigger and bigger. The sequence of partial sums is a monotonic sequence. Now monotonic sequences always converge either to a finite number or to infinity. In either case, this limit will exist. And therefore, the series is well defined. The situation is more complicated if the terms ai can have different signs. In that case, it's possible that the limit does not exist. And so the series is not well defined. The more interesting and complicated case is the following. It's possible that this limit exists. However, if we rearrange the terms in the sequence, we might get a different limit. When can we avoid those complicated situations? We can avoid them if it turns out that the sum of the absolute value of the numbers sums to a finite number. Now this series that we have here is an infinite series in which we add non-negative numbers. And by the fact that we mentioned earlier, this infinite series is always well defined. And it's going to be either finite or infinite. If it turns out to be finite, then the original series is guaranteed to be well defined, to have a finite limit when we define it that way, and furthermore, that finite limit is the same even if we rearrange the different terms, if we rearrange the sequence with which we sum the different terms.
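To make the limit-of-partial-sums definition concrete, here is a small numerical sketch (my own illustration, not part of the lecture). It uses the series with terms a_i = 1/i^2, which are non-negative, so the partial sums are monotonic and the limit exists; this particular limit is known to equal pi^2/6. The function name and the choice of series are illustrative assumptions.

# Illustration (not from the lecture): the infinite series sum_{i=1}^inf a_i
# is defined as the limit of the partial sums S_n = a_1 + ... + a_n.
# Here a_i = 1/i^2; all terms are non-negative, so the partial sums are
# monotonic and the limit exists. It is known to equal pi**2 / 6.
import math

def partial_sum(n):
    # S_n = sum of the first n terms of the series with a_i = 1/i^2
    return sum(1.0 / i**2 for i in range(1, n + 1))

for n in [10, 100, 10000]:
    print(n, partial_sum(n))
print("limit:", math.pi**2 / 6)

Running this shows the partial sums increasing toward the limit, which is exactly the sense in which the infinite sum is defined.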
MIT_RES6012_Introduction_to_Probability_Spring_2018
L043_Die_Roll_Example.txt
We will now use counting to solve a simple probabilistic problem. We have in our hands an ordinary six-sided die which we are going to roll six times. So this is our probabilistic experiment. And we're interested in the probability of a certain event, the event that the six rolls result in different numbers. So let us give a name to that event and call it event A. So we wish to calculate the probability of this event. But before we can even get started answering this question, we need a probabilistic model. We need to make some assumptions, and the assumption that we're going to make is that all outcomes of this experiment are equally likely. This is going to place us within a discrete uniform probabilistic model so that we can calculate probabilities by counting. In particular, as we discussed earlier in this lecture, the probability of an event A is going to be the number of elements of the set A, the number of outcomes that make event A occur, divided by the total number of possible outcomes, which is the number of elements in our sample space. So let us start with the denominator, and let us look at the typical outcomes of this experiment. A typical outcome is something like this sequence, 2, 3, 4, 3, 6, 2. That's one possible outcome. How many outcomes of this kind are there? Well, we have 6 choices for the result of the first roll, 6 choices for the result of the second roll, and so on. And since we have a total of 6 rolls, this means that there is a total of 6 to the 6th power possible outcomes, according to the Counting Principle. And by the way, since we have so many possible outcomes and we assume that they are equally likely, the probability of each one of them would be 1 over 6 to the 6th. Incidentally, that's the same number, the same answer, you would get if you were to assume, instead of assuming directly that all outcomes are equally likely, to just assume that the different rolls are rolls of a fair six-sided die, so the probability of getting a 2 is 1/6, and also that the different rolls are independent of each other. So in that case, the probability, let's say, of this particular sequence would be the probability of obtaining a 2, which is 1/6, times the probability that we get a 3 at the next roll, which is 1/6, times 1/6 times 1/6 and so on, and we get the same answer, 1 over 6 to 6th. So we see that this assumption of all outcomes being equally likely has an alternative interpretation in terms of having a fair die which is rolled independently 6 times. Now, let us look at the event of interest, A. What is a typical element of A? A typical element of A is a sequence of 6 rolls in which no number gets repeated. So, for example, it could be a sequence of results of this kind, where each number appears just once. So all the numbers appear exactly once in this sequence. So what we need here is basically to have a permutation of the numbers 1 up to 6. So these 6 numbers have to appear in an arbitrary order. In how many ways can we order 6 elements? This is the number of permutations of a set of 6 elements and, as we discussed earlier, this is equal to 6 factorial. So we have now counted the number of outcomes that make event A happen, which is 6 factorial. And by calculating this ratio, we have obtained the probability of the desired event. You can now pause and try to solve a problem of a similar kind.
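As a quick sanity check (my own addition, not part of the transcript), the probability 6!/6^6 can be computed directly and compared with a simple Monte Carlo simulation of six independent rolls of a fair die. The trial count below is an arbitrary illustrative choice.

# Exact answer: P(all six rolls different) = 6! / 6**6
import math, random

exact = math.factorial(6) / 6**6
print("exact:", exact)            # about 0.0154

# Illustrative Monte Carlo check: roll a fair die 6 times, independently,
# and count how often all six results are distinct.
trials = 200_000
hits = sum(
    1 for _ in range(trials)
    if len(set(random.randint(1, 6) for _ in range(6))) == 6
)
print("simulated:", hits / trials)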
MIT_RES6012_Introduction_to_Probability_Spring_2018
L088_Normal_Random_Variables.txt
We now introduce normal random variables, which are also often called Gaussian random variables. Normal random variables are perhaps the most important ones in probability theory. They play a key role in the theory of the subject, as we will see later in this class in the context of the central limit theorem. They're also prevalent in applications for two reasons. They have some nice analytical properties, and they are also the most common model of random noise. In general, they are a good model of noise or randomness whenever that noise is due to the addition of many small independent noise terms, and this is a very common situation in the real world. We define normal random variables by specifying their PDFs, and we start with the simplest case of the so-called standard normal. The standard normal is indicated with this shorthand notation, and we will see shortly why this notation is being used. It is defined in terms of a PDF. This PDF is defined for all values of x. x can be any real number. So this random variable can take values anywhere on the real line. And the formula for the PDF is this one. Let us try to understand this formula. So we have the exponential of negative x squared over 2. Now, if we are to plot the x squared over 2 function, it has a shape of this form, and it is centered at zero. But then we take the negative exponential of this function. Now, when you take the negative exponential, whenever this thing is big, the negative exponential is going to be small. So the negative exponential would be equal to 1 when x is equal to 0. But then as x increases, because x squared also increases, the negative exponential will fall off. And so we obtain a shape of this kind, and symmetrically on the other side as well. And finally, there is this constant. Where is this constant coming from? Well, there's a nice and not completely straightforward calculus exercise that tells us that the integral from minus infinity to plus infinity of e to the negative x squared over 2, dx, is equal to the square root of 2 pi. Now, we need the PDF to integrate to 1. And so for this to happen, this is the constant that we need to put in front of this expression so that the integral becomes 1, and that explains the presence of this particular constant. What is the mean of this random variable? Well, x squared is symmetric around 0, and for this reason, the PDF itself is symmetric around 0. And therefore, by symmetry, the mean has to be equal to 0. And that explains this entry here. How about the variance? Well, to calculate the variance, you need to solve a calculus problem again. You need to integrate by parts. And after you carry out the calculation, then you find that the variance is equal to 1, and that explains this entry here in the notation that we have been using. Let us now define general normal random variables. General normal random variables are once more specified in terms of the corresponding PDF, but this PDF is a little more complicated, and it involves two parameters-- mu and sigma squared, where sigma is a given positive parameter. Once more, it will have a bell shape, but this bell is no longer symmetric around 0, and there is some control over the width of it. Let us understand the form of this PDF by focusing first on the exponent, exactly as we did for the standard normal case. The exponent is a quadratic, and that quadratic is centered at x equal to mu. So it vanishes when x is equal to mu, and becomes positive elsewhere.
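For reference, the standard normal PDF referred to above is f(x) = (1/sqrt(2*pi)) * exp(-x^2/2). The following numerical check is my own sketch, not part of the lecture; it uses a plain Riemann sum on an arbitrarily chosen grid to confirm that this PDF integrates to 1 and has mean 0 and variance 1.

# Standard normal PDF: f(x) = (1 / sqrt(2*pi)) * exp(-x**2 / 2)
# Numerical check (illustrative) that it integrates to 1, has mean 0 and variance 1.
import numpy as np

x = np.linspace(-10, 10, 200_001)   # fine grid; tails beyond +/-10 are negligible
dx = x[1] - x[0]
f = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)

print("integral of f:", np.sum(f) * dx)          # ~ 1.0
print("mean:         ", np.sum(x * f) * dx)      # ~ 0.0
print("variance:     ", np.sum(x**2 * f) * dx)   # ~ 1.0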
Then we take the negative exponential of this quadratic, and we obtain a function which is largest at x equal to mu, and falls off as we go further away from mu. What is the mean of this random variable? Since this term is symmetric around mu, the PDF is also symmetric around mu, and therefore, the mean is also equal to mu. How about the variance? It turns out-- and this is a calculus exercise that we will omit-- that the variance of this PDF is equal to sigma squared. And this explains this notation here. We're dealing with a normal that has a mean of mu and a variance of sigma squared. To get a little bit of understanding of the role of sigma in the form of this PDF, let us consider the case where sigma is small, and see how the picture is going to change. When sigma is small, and we plot the quadratic, sigma being small means that this quadratic becomes larger, so it rises faster, so we get a narrower quadratic. And in that case, the negative exponential is going to fall off much faster. So when sigma is small, the PDF that we get is a narrower PDF, and that reflects itself into the property that the variance will also be smaller. An important property of normal random variables is that they behave very nicely when you form linear functions of them. And this is one of the reasons why they're analytically tractable and analytically very convenient. Here is what I mean. Let us start with a normal random variable with a given mean and variance, and let us form a linear function of that random variable. What is the mean of Y? Well, we know what it is. We have a linear function of a random variable. The mean is going to be a times the expected value of X, which is mu plus b. What is the variance of Y? We know what is the variance of a linear function of a random variable. It is a squared times the variance of X, which in our case is sigma squared. So there's nothing new so far, but there is an additional important fact. The random variable Y, of course, has the mean and variance that we know it should have, but there is an additional fact-- namely, that Y is a normal random variable. So normality is preserved when we form linear functions. There's one special case that's we need to pay some attention to. Suppose that a is equal to 0. In this case, the random variable Y is just equal to b. It's a constant random variable. It does not have a PDF. It is a degenerate discrete random variable. So could this fact be correct that Y is also normal? Well, we'll adopt this as [a] convention. When we have a discrete random variable, which is constant, it takes a constant value. We can think of this as a special degenerate case of the normal with mean equal to b and with variance equal to 0. Even though it is discrete, not continuous, we will still think of it as a degenerate type of a normal random variable, and by adopting this convention, then it will always be true that a linear function of a normal random variable is normal, even if a is equal to 0. Now that we have the definition and some properties of normal random variables, the next question is whether we can calculate probabilities associated with normal random variables. This will be the subject of the next segment.
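The closure of normal random variables under linear transformations can also be illustrated numerically. The sketch below is my own addition with arbitrary parameter values: it samples X from N(mu, sigma^2), forms Y = aX + b, and checks that the empirical mean and variance of Y match a*mu + b and a^2*sigma^2, as stated in the segment.

# General normal PDF, for reference:
# f(x) = 1/(sigma*sqrt(2*pi)) * exp(-(x - mu)**2 / (2 * sigma**2))
# Illustrative check: if X ~ N(mu, sigma^2) and Y = a*X + b,
# then Y ~ N(a*mu + b, a^2 * sigma^2). Parameter values are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 2.0, 3.0          # X ~ N(2, 9)
a, b = -0.5, 4.0              # Y = -0.5*X + 4

x = rng.normal(mu, sigma, size=1_000_000)
y = a * x + b

print("empirical mean of Y:    ", y.mean(), " expected:", a * mu + b)
print("empirical variance of Y:", y.var(),  " expected:", a**2 * sigma**2)

A histogram of y would also show the familiar bell shape, consistent with the claim that normality is preserved under linear functions.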