Time & Space Complexity - DSA Series by Shradha Ma..
Transcript
00:00:00Hi everyone and welcome to the complete DSA series, in which we are going to cover
00:00:09Time Complexity and Space Complexity.
00:00:11Now we are working really hard to make this the best DSA series on YouTube for placements.
00:00:16Apart from this, if you want to read any other concept in DSA, you can find it on this channel
00:00:20in this playlist and study it from there.
00:00:23Now before the lecture starts, a small disclaimer that in today's lecture we have covered time
00:00:27and space complexity in depth, which means that we will discuss such algorithms like sorting
00:00:32algorithms and recursion in today's chapter, which we will discuss later in the DSA series.
00:00:38So those students who already have the idea of sorting and recursion, they can watch the
00:00:43entire lecture.
00:00:44And those students who are following the series from the beginning, they don't have to worry.
00:00:47Today we have definitely seen the time complexity of sorting and recursion, but we will be covering
00:00:51these algorithms later in the series.
00:00:53You will find the timeline of the entire lecture below.
00:00:55If you have already done recursion, then you should watch the recursion part as well.
00:00:58If you have not done recursion yet, then I will advise you to watch the entire
00:01:03chapter anyway, especially the practical usage part, which we have covered at the end.
00:01:07It is compulsory for all students.
00:01:10So now let's get started.
00:01:16Now let's move on to our concept of time complexity and space complexity.
00:01:21And why is it one of the most important concepts in programming?
00:01:24Basically, even if we read any data structure, even if we read any algorithm, we can use
00:01:29the concept of time and space complexity with all of them.
00:01:32When we are solving questions on a platform, then we will use time and space complexity.
00:01:37When we are sitting in interviews, then whatever solution we give, no matter what question
00:01:42comes, in 99% of cases, our interviewers can ask us about the time and space complexity
00:01:46of the solution.
00:01:47Along with this, we should have the concept of time and space complexity clear in our online
00:01:52assessments,
00:01:53so that we can quickly move in the direction of an efficient solution.
00:01:58So this concept is related to almost every other concept of programming.
00:02:03And it helps us a lot in finding a solution to a problem.
00:02:09And that is why it is one of the most important chapters that we are going to do in our DSA
00:02:14series.
00:02:15And we will do this chapter with a complete practical approach.
00:02:18So from the point of view of placement and interview, we are going to study this concept
00:02:21so that we get practical knowledge.
00:02:23So first of all, let's talk about what exactly is time complexity?
00:02:26The most important thing is that time complexity is not the actual time taken.
00:02:31Time complexity does not mean that we have written a code and measured how long this
00:02:36code runs.
00:02:37Sometimes the time comes out to be 0.05 seconds; a time measured in this way is not
00:02:42time complexity.
00:02:43The same C++ code can take a different time on Windows, it can take a different time on
00:02:48Mac and it can take a different time on Linux.
00:02:50In fact, if we submit the same code online on different platforms, then it may take a
00:02:56different time on LeetCode, it may take a different time on Codeforces, it may take a different
00:02:59time on CodeChef and it may take a different time on other websites.
00:03:02So the actual execution time of a code is also machine dependent.
00:03:06And when we talk about different online platforms, it is also server dependent.
00:03:10So time complexity is not the actual time taken.
00:03:13The time complexity of the algorithm, the time complexity of the code does not differ
00:03:17according to the different machines, it always remains the same.
00:03:20So what is time complexity?
00:03:22Time complexity is the amount of time taken as a function of input size.
00:03:27Basically, in time complexity, we measure the amount of time taken in terms of operations.
00:03:33Operation means whatever statement we write in our code, it is an operation.
00:03:37For example, I wrote integer x is equal to 10.
00:03:40This will be an operation.
00:03:41I put a for loop.
00:03:43The more times this loop runs in this for loop, the more operations will be performed.
00:03:46So in a code, how many operations are performed as a function of input size?
00:03:54Means when the input size n increases, how many operations will be performed?
00:03:58And if it is less, how many operations will be performed?
00:04:03How does this input size affect my number of operations, that is, the amount of time taken?
00:04:09It changes.
00:04:10We call that behavior time complexity.
00:04:13Let's understand this with a very simple example.
00:04:16We read the concept of arrays.
00:04:18We read about linear search in arrays.
00:04:21Linear search is one of the most simple algorithms of DSA.
00:04:24What happens in linear search?
00:04:26We have an array in linear search.
00:04:28Along with this, we are given a target.
00:04:30In this array, we have to search our target one by one on each index.
00:04:35This is a linear search algorithm.
00:04:37Now in this algorithm, let's suppose the size of our array is n.
00:04:42Because what is an array?
00:04:44The array is our input.
00:04:45And its size n can vary.
00:04:47It is possible that sometimes we get an array with only one single element.
00:04:50It is possible that tomorrow we get an array with many elements.
00:04:53So in this way, the size is variable.
00:04:55Now the pseudocode of our linear search algorithm looks like this.
00:04:58We run a loop from i equal to 0 to i less than n,
00:05:00with i plus plus.
00:05:02Whenever the value of array of i is equal to the value of our target,
00:05:07then we return this i as the answer.
00:05:10So this is our for loop.
00:05:12If we do not find anything, then we can return minus 1.
00:05:16So this is our overall concept of linear search.
00:05:18It is very easy.
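As a reference, here is a minimal C++ sketch of the linear search just described (the function name and signature are illustrative):

```cpp
#include <vector>
using namespace std;

// Returns the index of target in arr, or -1 if not found.
// Worst case: the loop runs n times -> O(n) time.
int linearSearch(const vector<int>& arr, int target) {
    int n = arr.size();
    for (int i = 0; i < n; i++) {
        if (arr[i] == target) {
            return i;  // found on index i
        }
    }
    return -1;  // target not present
}
```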
00:05:19Now here, let's suppose n is the size of my array.
00:05:23Consider the case where the value of n is 1;
00:05:26how many operations will I need in this case?
00:05:28Basically, the number of operations will be the same as the number of times this loop will run.
00:05:31So if I assume each run of this loop to be one operation,
00:05:34then if my n is 1, then my loop will run only once.
00:05:37So approximately, we can assume that we need only one operation.
00:05:41Now if tomorrow the size of my array increases to 10,
00:05:44then my loop will run 10 times.
00:05:46And what operation am I performing by running 10 times?
00:05:49I am performing the operation of checking inside the loop again and again.
00:05:52In the worst case, the worst case means that I have an array of size 10.
00:05:56And in the worst case, I will traverse this entire array
00:05:59and I will get my target on the last index.
00:06:01This is my worst case.
00:06:03If I get it in the beginning, then it is the best case.
00:06:05But if I get it on the last index, it is the worst case in linear search.
00:06:07So in the worst case, my loop will run 10 times.
00:06:10So in this also, approximately, how many operations will I need?
00:06:1210 operations will be needed.
00:06:14If tomorrow the size of my array increases to 100,
00:06:17then I know that my loop will run 100 times and I will need 100 operations.
00:06:20And if tomorrow the size of my array increases to 100,000, i.e. 1 lakh,
00:06:23then in that case, approximately, I will need 100,000 operations
00:06:27because my loop will run 100,000 times.
00:06:30So here, I can see the behavior of my operations according to the input size.
00:06:34How my operations are changing when I am increasing the input size.
00:06:38And this change is called time complexity.
00:06:41As you can notice, as my input size is increasing,
00:06:44similarly, my number of operations are increasing.
00:06:46Now with what speed are they increasing?
00:06:49To find out the exact speed, we can plot it in the form of a graph.
00:06:53Plotting in the form of a graph means,
00:06:55one we will take our y-axis,
00:06:57one we will take our x-axis.
00:06:59This is the y-axis.
00:07:01On the x-axis, we will keep the input size,
00:07:03which is going to be n.
00:07:06On the y-axis, we will keep the number of operations.
00:07:09Now I know that when I have approximately 1 size,
00:07:12then I have 1 operation.
00:07:14So if 1 lies here,
00:07:16then my 1 point will come here.
00:07:19When there are 10, then there are 10 operations.
00:07:21Similarly, if 10 lies somewhere here,
00:07:24then I will get this value.
00:07:27If we talk about 100,
00:07:28I am not making a scale graph,
00:07:30but let's suppose 100 lies here,
00:07:32and 100 lies here,
00:07:33then my point will come here.
00:07:34That is, I am combining both the points and plotting.
00:07:37Tomorrow, if we want to plot 1 lakh,
00:07:40then let's suppose 1 lakh comes here,
00:07:42and 1 lakh of this comes here,
00:07:44then my point will come here.
00:07:46So in this way,
00:07:47my graph will be plotted for all the points.
00:07:50Here, we have given x-axis to n,
00:07:52and y-axis to the number of operations.
00:07:54So when I have a lot of points in the graph,
00:07:56how does their function come out?
00:07:57Their function comes out
00:07:58by drawing a straight line through all the points.
00:08:00So here, a straight line like this will be drawn for me.
00:08:04In fact, if we want,
00:08:05we can align these points in this way.
00:08:08So all these points are going to fall on this straight line.
00:08:12And what is the equation of this straight line in math?
00:08:14The equation of this straight line in math is
00:08:16y = x,
00:08:17or we can say f(x) = x.
00:08:20So what is my function?
00:08:21My function is this function.
00:08:22Now what is there on the x-axis?
00:08:24There is n on the x-axis.
00:08:25So we can say,
00:08:26my function is going to be f(n) = n.
00:08:29And from here, the value of my time complexity comes
00:08:31that now my time complexity is equal to O of n.
00:08:34We write the symbol of O,
00:08:36then write the parenthesis,
00:08:37and whatever is the value of the function on the right side,
00:08:40this becomes my time complexity.
00:08:42We call this time complexity as our linear time complexity.
00:08:46By saying linear,
00:08:47it means that as my input size increases,
00:08:51my number of operations increase in the same proportion.
00:08:54That is why it is called linear.
00:08:56Because whatever n is,
00:08:58f(n), which means the number of operations, will also be the same.
00:09:01So from time complexity,
00:09:03we do not get the actual time,
00:09:04that we got 1 second, 2 seconds.
00:09:06From time complexity,
00:09:07we get the behavior
00:09:08of how our algorithm will perform.
00:09:11Now the question arises,
00:09:12what is the need to get the behavior?
00:09:14Basically, whenever we talk about algorithms,
00:09:17we always try to calculate the time complexity
00:09:20for the worst case scenarios.
00:09:22By saying worst case scenarios,
00:09:24it means when our input size,
00:09:26that is our n,
00:09:28is very big.
00:09:30Because if we are designing Facebook,
00:09:32if we are designing Instagram,
00:09:33if we are designing WhatsApp,
00:09:34then whenever we make systems in real life,
00:09:37and write algorithms for them,
00:09:38then we want to make such systems
00:09:40that can handle millions of,
00:09:41billions of users.
00:09:43So our algorithm,
00:09:44whether it is running on 5 size input,
00:09:46or 100 size input,
00:09:48or 1000 size input,
00:09:50it does not matter.
00:09:51Because these are very small values.
00:09:53Our algorithm,
00:09:55on 1 lakh,
00:09:56on 1 million,
00:09:57on 10 million,
00:09:58on 1 billion,
00:09:59how it is performing,
00:10:00that matters more.
00:10:01Because that saves us from worst case scenarios.
00:10:04If an algorithm is performing well in worst case scenarios,
00:10:06then how it is performing on small values,
00:10:08it does not matter.
00:10:09Because it will perform in an optimized way.
00:10:12That is why whenever we talk about time complexity,
00:10:15we generally focus on worst case scenarios.
00:10:18Because in real life scenarios,
00:10:20or if we look at it from an interview placement point of view,
00:10:23then worst case scenarios are the most important.
00:10:25So there are two reasons behind finding time complexity.
00:10:28First is that we can build better algorithms,
00:10:30better systems.
00:10:31The second reason is that we can compare two algorithms.
00:10:34For example,
00:10:35one algorithm's time complexity is big of n,
00:10:37and the other algorithm's time complexity is big of n square.
00:10:40So we should know which algorithm is faster between the two.
00:10:43So that is what we are learning in this chapter.
00:10:46Now let's talk in detail about this big O.
00:10:49What exactly have we written for big O?
00:10:52Big O is basically a notation for our time complexity or space complexity.
00:10:57Notation means symbol.
00:10:59We can imagine it as a symbol.
00:11:01For example, if we write 5,
00:11:03then the symbol will tell us whether it is 5 rupees or 5 dollars.
00:11:06Similarly, if I have written n,
00:11:08then I am talking about the time complexity n.
00:11:11We get to know this by putting big O.
00:11:13It tells us that this is not a normal function;
00:11:14it is a function of time complexity,
00:11:16a behavior of time complexity.
00:11:18That is what big O tells us.
00:11:19So big O is a notation,
00:11:21a symbol,
00:11:23which gives us worst case scenario or worst case complexity.
00:11:28This worst case is also called our upper bound.
00:11:32If any algorithm runs,
00:11:34if its time complexity is big O of n,
00:11:36then there will be only n operations in the worst case.
00:11:38Our algorithm will not behave worse than that.
00:11:42Yes, it is possible that there will be some other scenario in the best case.
00:11:46For example, if I have this array,
00:11:48my numbers are 1, 2, 3, 4, 5.
00:11:50If my target is 1,
00:11:54then linear search will give me my answer on the zeroth index itself, in a single operation.
00:12:00This is the best case.
00:12:01But in the worst case,
00:12:03I will have to do n operations at most.
00:12:05It can be better in the best case,
00:12:06but it will not be worse than this.
00:12:08Whenever we find a function of time complexity,
00:12:10like we found a function here,
00:12:12it is not necessary that the function is always straightforward.
00:12:15We can also find a combination of many different values.
00:12:19For example, I got a function 4n square plus 3n plus 5.
00:12:24We got a function like this.
00:12:26This is my fn.
00:12:28So how do I find time complexity from this?
00:12:30If we have a function like this,
00:12:32to find time complexity from it,
00:12:34first of all, we have to ignore our constants.
00:12:36Anything which is a mathematical number is a constant.
00:12:40For example, 4 is a constant, 3 is a constant, 5 is also a constant.
00:12:44After ignoring the constants,
00:12:46the value I will get is n square plus n plus 1.
00:12:49The constant 5 becomes just 1.
00:12:52The second step is to consider only the largest term.
00:12:56Largest term means,
00:12:58if I take a very large value of n,
00:13:00for example, if I take a value of n as 1 lakh,
00:13:02which term among these three will be the largest?
00:13:05If the value of n is 1 lakh, i.e. 10 to the power 5,
00:13:07then the value of n square will be 10 to the power 10.
00:13:09Similarly, the value of n will be 10 to the power 5.
00:13:12And the value of 1 will be 1, no matter what happens to n.
00:13:14So which is the largest term?
00:13:16The largest term is this term.
00:13:18We will ignore the rest of the terms.
00:13:20And the overall time complexity will be
00:13:22big O of n square.
00:13:24If we have a complex function like this,
00:13:26then we do not write everything in time complexity.
00:13:28By performing these two steps,
00:13:30we try to get the final small time complexity.
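Compactly, the two steps applied to the example above can be written like this:

```latex
f(n) = 4n^2 + 3n + 5
\;\longrightarrow\; n^2 + n + 1 \quad \text{(step 1: ignore the constants 4, 3, 5)}
\;\longrightarrow\; O(n^2) \quad \text{(step 2: keep only the largest term)}
```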
00:13:33If we want, we can take another example.
00:13:35Let's suppose we have got 100
00:13:37plus 5 n square
00:13:39plus root n.
00:13:41We have got a value like this.
00:13:43Now we can take a large value of n.
00:13:45If we want, we can take a value of 1 million.
00:13:47For 1 million,
00:13:49first of all, we have to ignore the constants.
00:13:51We will ignore 5 and 100.
00:13:53Even if 100 is a large value,
00:13:55it is still a constant.
00:13:57So this is going to be 1 plus n square
00:13:59plus under root of n.
00:14:01We will get to know which term is the largest
00:14:03among these three.
00:14:05But we can also substitute the value of n.
00:14:07For 10 to the power 6,
00:14:09under root n will be 10 to the power 3.
00:14:11n square will be 10 to the power 12.
00:14:131 will be 1.
00:14:15This is the largest term.
00:14:17We will ignore the rest of the terms.
00:14:19In this case also,
00:14:21we can calculate our final time complexity.
00:14:23Now this notation,
00:14:25like we have big O,
00:14:27we have two more notations.
00:14:29In fact, we have a lot of notations in math.
00:14:31The notation of worst case scenario,
00:14:33worst case time complexity,
00:14:35is big O.
00:14:37We call it our upper bound.
00:14:39Similarly,
00:14:41we have average case time complexity
00:14:43which we represent as theta.
00:14:45We call it theta.
00:14:47Similarly,
00:14:49the best case scenario also has a time complexity
00:14:51which we represent as omega.
00:14:53We call it omega.
00:14:55Generally, we call omega our lower bound.
00:14:57But in interviews,
00:14:59in coding tests,
00:15:01while practicing questions on coding platforms,
00:15:03we will be using only big O.
00:15:05No one will ask you about the other two in interviews.
00:15:07Definitely,
00:15:09to find out these two values,
00:15:11we have a lot of calculus formulas.
00:15:13We can use limits and derivatives.
00:15:15In fact, in colleges,
00:15:17their equations are asked in mid sems
00:15:19and end sems.
00:15:21But we will not cover it here.
00:15:23Because from a placement, internship, and interview point of view,
00:15:25that theory is not practical for us as software engineers.
00:15:27If you are interested,
00:15:29you can read the book by Cormen, i.e. Introduction to Algorithms.
00:15:31It covers a lot of algorithms.
00:15:33You can read it if you want.
00:15:35If you want,
00:15:37you can read about the Master Theorem.
00:15:39These are different resources
00:15:41from which you can go and study for your semester exams.
00:15:43Here, we are studying the practical
00:15:45side of time complexity.
00:15:47These notations also exist.
00:15:49We have big O, little o, little theta,
00:15:51little omega,
00:15:53loose bounds, tight bounds.
00:15:55There are a lot of mathematical concepts
00:15:57which we can study.
00:15:59But from a placement point of view,
00:16:01big O is what you have to calculate.
00:16:03We have seen the calculation process.
00:16:05Your interviewers
00:16:07will also be asking you questions
00:16:09related to big O.
00:16:11No theory or formula will be asked in interviews.
00:16:13Similarly,
00:16:15we have the concept
00:16:17of space complexity.
00:16:19Space complexity is not the actual space
00:16:21taken by the program.
00:16:23It is the amount of space taken by an algorithm
00:16:25as a function of input size.
00:16:27For example, whenever we write a code,
00:16:29space is occupied in that code
00:16:31in two ways.
00:16:33One is our actual input.
00:16:35This input can be a vector,
00:16:37this input can be an array,
00:16:39this input can be a string.
00:16:41And the other is
00:16:43our auxiliary space.
00:16:45Auxiliary means extra space
00:16:47which we take
00:16:49separately from the input.
00:16:51For example, we performed a linear search.
00:16:53There must be an array in the linear search.
00:16:55Only then can the linear search
00:16:57be performed on it.
00:16:59This array which exists,
00:17:01we call it our input.
00:17:03We do not count the input in space complexity.
00:17:05In space complexity,
00:17:07we only consider this extra auxiliary space.
00:17:09For example,
00:17:11we are given a question.
00:17:13We are given an array in the input.
00:17:15What you have to do is
00:17:17you have to write another array
00:17:19in the output.
00:17:21We will call it our square array.
00:17:23In this square array,
00:17:25we have to store the original elements
00:17:27of the array as a square.
00:17:29To solve this problem,
00:17:31we will have an original array.
00:17:33There will be n number of elements in the original array.
00:17:35Its size will be n.
00:17:37To store the squares,
00:17:39we will make another array.
00:17:41We will give the same size to this array
00:17:43which is the size of the input array.
00:17:45Because if there are elements
00:17:471, 2, 3, 4, 5, 6, 7,
00:17:49we are trying to store the squares of these elements.
00:17:51In this way,
00:17:53the total size will be same.
00:17:55Here, this n,
00:17:57which is our array,
00:17:59is our input which we will not count in space complexity.
00:18:01But the second array which we made,
00:18:03we have occupied the extra space.
00:18:05This is our auxiliary space
00:18:07which will give the value of space complexity.
00:18:09Here also, we will see the same relationship.
00:18:11As our input size increases,
00:18:13our auxiliary space also increases.
00:18:15Because the size of auxiliary space
00:18:17is going to be exactly n,
00:18:19here our space complexity
00:18:21will be big O of n.
00:18:23This is space complexity.
00:18:25If we plot the graph of this,
00:18:27we will get the same straight line
00:18:29which we got in the case of time complexity.
00:18:31Because the function of big O of n
00:18:33is always represented
00:18:35as a straight line in the graph.
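A minimal C++ sketch of the squares question discussed above; the extra result vector is the auxiliary space, so the space complexity is O(n) (names are illustrative):

```cpp
#include <vector>
using namespace std;

// Builds a new array holding the square of each element.
// The input arr is not counted; the result vector of size n
// is auxiliary space -> space complexity O(n).
vector<long long> squareArray(const vector<int>& arr) {
    int n = arr.size();
    vector<long long> result(n);  // extra O(n) space
    for (int i = 0; i < n; i++) {
        result[i] = 1LL * arr[i] * arr[i];
    }
    return result;
}
```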
00:18:37Let's see another example.
00:18:39For example, we have an array in the input.
00:18:41What do I have to do in the output?
00:18:43In the output, I have to return the sum
00:18:45of all the elements of the array.
00:18:47To calculate this sum,
00:18:49we will make some sum variable
00:18:51which we will initialize with 0.
00:18:53Then we will run a loop
00:18:55i is equal to 0 to i less than n i plus plus.
00:18:57Every time, we will add the value of array of i
00:18:59in the sum.
00:19:01In the last, we will print the sum.
00:19:03It will be a simple logic.
00:19:05In the input, the size of the array
00:19:07which is n,
00:19:09is already given.
00:19:11Apart from this, we have used
00:19:13some sum variable.
00:19:15The size of the array may be n is equal to 10,
00:19:17n is equal to 100,
00:19:19n is equal to 1 lakh,
00:19:21n is equal to 1 million.
00:19:23Whatever may be the size of the array,
00:19:25the sum variable will be the same.
00:19:27It will occupy the same space every time.
00:19:29The space of the variables or auxiliary space
00:19:31will be constant.
00:19:33Our function will be
00:19:35big O of k.
00:19:37In a way, it is equal to constant.
00:19:39Let's suppose
00:19:41all the variables of this code
00:19:43are occupying
00:19:45k constant space.
00:19:47The space complexity will be
00:19:49equal to big O of k.
00:19:51As we ignore constant in big O,
00:19:53we call constant space complexity
00:19:55as big O of 1.
00:19:57Why is it constant?
00:19:59As the size of the input
00:20:01is increasing,
00:20:03there is no extra space
00:20:05occupied in the code.
00:20:07This is called constant space complexity.
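A small C++ sketch of the sum logic just described; the only extra memory is the single sum variable, so the auxiliary space is constant:

```cpp
#include <vector>
using namespace std;

// Sums all elements of arr.
// Only one extra variable (sum) is used, regardless of n
// -> auxiliary space O(1) (constant).
long long sumOfArray(const vector<int>& arr) {
    long long sum = 0;  // constant extra space
    for (int i = 0; i < (int)arr.size(); i++) {
        sum += arr[i];
    }
    return sum;
}
```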
00:20:09How do we visualize
00:20:11constant space complexity in a graph?
00:20:13In a graph, it is visualized
00:20:15as a horizontal straight line
00:20:17which is parallel to the x-axis.
00:20:19This means that
00:20:21no matter how much the size of the input
00:20:23increases,
00:20:25the auxiliary space
00:20:27will remain constant.
00:20:29This line represents big O of 1.
00:20:31This line represents big O of n.
00:20:33In this way, we have many different
00:20:35space and time complexities.
00:20:37Both space and time complexities are important.
00:20:39But in majority of cases
00:20:41or from an interview point of view,
00:20:43we mostly focus on time complexity.
00:20:45Because in our modern systems,
00:20:47it is comparatively easier
00:20:49to bring extra space
00:20:51as compared to making an algorithm
00:20:53which is more efficient in terms of time.
00:20:55This is why,
00:20:57for the algorithms and codes
00:20:59we use in modern systems,
00:21:01space complexity is definitely also important,
00:21:03but more importance is given
00:21:05to time complexity,
00:21:07whether it is in interviews
00:21:09or in practical life.
00:21:11It is important to save time
00:21:13because nowadays,
00:21:15storage in computers has become cheap
00:21:17but everyone wants a faster
00:21:19and more efficient experience.
00:21:21Majority of the examples we will see
00:21:23will be of time complexity.
00:21:25But the fundamentals of calculating time complexity
00:21:27are similar to those of calculating
00:21:29space complexity.
00:21:31Next, we will see some common time complexities
00:21:33and where they occur in programming.
00:21:35For example,
00:21:37we can plot different time complexities
00:21:39in the same graph
00:21:41where we have input size
00:21:43and number of operations.
00:21:45This is the graph of big O of 1.
00:21:47This is the graph of big O of n.
00:21:49In this graph,
00:21:51the best time complexity is
00:21:53equal to big O of 1.
00:21:55Because it is a constant time complexity
00:21:57no matter how big the input size is,
00:21:59the number of operations will be the same
00:22:01and we will occupy the same space.
00:22:03That is why it is called the best time complexity.
00:22:05As we move away from big O of 1,
00:22:07our time complexity
00:22:09moves towards worse.
00:22:11The time complexity of big O of log n
00:22:13is slightly worse than that of big O of 1,
00:22:15but log n is also a good time complexity.
00:22:17Linear time complexity, i.e. big O of n,
00:22:19is slightly worse than
00:22:21log n time complexity.
00:22:23N log n is slightly worse than
00:22:25linear time complexity.
00:22:27Here, we can simply imagine
00:22:29x square as n square.
00:22:31n square moves towards
00:22:33worse time complexity.
00:22:35n cube is
00:22:37even worse time complexity.
00:22:39Even worse time complexity is
00:22:412 to the power n.
00:22:432 to the power n is called exponential time complexity
00:22:45and it is generally seen in recursion.
00:22:47We are going to see
00:22:49how to calculate it.
00:22:51Even worse time complexity is
00:22:53n factorial.
00:22:55As the function of time complexity
00:22:57moves towards y axis,
00:22:59the time complexity moves towards worse.
00:23:01These are the best time complexities
00:23:03and these are the worst time complexities.
00:23:05There are some time and space
00:23:07complexities which are seen
00:23:09again and again
00:23:11in many algorithms;
00:23:13these are the common time complexities.
00:23:15First, let us see these common time complexities.
00:23:17Where do they occur?
00:23:19What do they mean?
00:23:21We are going to solve a lot of questions based on time complexity.
00:23:23Then, we will learn
00:23:25how to calculate time complexity in recursion.
00:23:27The most common time complexity
00:23:29is big O of 1,
00:23:31which is also called constant.
00:23:33It is called constant time complexity.
00:23:35This complexity is generally seen in those questions
00:23:37where we do not have to do any loop or recursion.
00:23:39We have to apply a simple formula
00:23:41and get our answer.
00:23:43For example, we have to calculate
00:23:45the sum of numbers from 1 to n without using any loop.
00:23:47We will simply apply the formula
00:23:49n into n plus 1 by 2.
00:23:51This is the formula of sum from 1 to n.
00:23:53We simply took an integer n
00:23:55and read it as input.
00:23:57After that, we calculated the answer
00:23:59n into n plus 1 by 2.
00:24:01This is also a constant operation.
00:24:03Let us suppose
00:24:05all of them are k operations.
00:24:07The value of k is always constant.
00:24:09Whatever the input size n is,
00:24:11it does not make
00:24:13any difference to this formula.
00:24:15For a small n, we need k operations.
00:24:17For a big n, we also need only k operations.
00:24:19Here, our time complexity
00:24:21will be equal to big O of k.
00:24:23As we ignore constants in big O,
00:24:25our overall time complexity
00:24:27will be equal to big O of 1.
00:24:29This is called constant time complexity.
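A minimal C++ sketch of this O(1) idea, reading n and applying the formula as described:

```cpp
#include <iostream>
using namespace std;

int main() {
    long long n;
    cin >> n;  // read the input n
    // Sum of 1..n by formula: a fixed number of operations,
    // no loop -> time complexity O(1).
    long long sum = n * (n + 1) / 2;
    cout << sum << "\n";
    return 0;
}
```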
00:24:31Generally, this time complexity
00:24:33is also seen in other cases.
00:24:35For example, we have a given array.
00:24:37We always need to
00:24:39print the first element of this array
00:24:41i.e. the element on array of 0.
00:24:43We already know
00:24:45that the first element lies at index 0.
00:24:47We need to print this value every time.
00:24:49There is no need to run any loop.
00:24:51When we print this value,
00:24:53this will be our constant time complexity function.
00:24:55Similarly, if we have a sorted array
00:24:57with numbers 1, 2, 3,
00:24:59we need to print the largest value
00:25:01in the sorted array.
00:25:03In ascending order sorted array,
00:25:05the largest value will always be on the last index
00:25:07i.e. in array of n-1.
00:25:09When we print this value,
00:25:11there is no need to run any loop.
00:25:13Here also, we need big O of 1 time.
00:25:17In fact, later in our series,
00:25:19we are going to study about
00:25:21hash tables.
00:25:23In hash tables,
00:25:25we read unordered map
00:25:27in C++.
00:25:29Its Java version is HashMap.
00:25:31The time complexity of
00:25:33the insert, search, and delete operations
00:25:35of a hash table is assumed
00:25:37to be big O of 1.
00:25:45However, it is not a pure
00:25:47constant time complexity.
00:25:53It is amortized
00:25:55constant time complexity.
00:25:57Amortized means that
00:25:59the average is big O of 1
00:26:01time. It is practically
00:26:03constant but theoretically
00:26:05it is amortized.
00:26:07We will study this in detail
00:26:09in the concept of hashing.
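For reference, a small C++ sketch of the hash table operations being described, using unordered_map (illustrative usage, not code from the lecture):

```cpp
#include <unordered_map>
#include <string>
#include <iostream>
using namespace std;

int main() {
    unordered_map<string, int> freq;
    freq["apple"] = 3;                       // insert: average O(1)
    freq["banana"] = 5;                      // insert: average O(1)
    if (freq.find("apple") != freq.end()) {  // search: average O(1)
        cout << freq["apple"] << "\n";
    }
    freq.erase("banana");                    // delete: average O(1)
    return 0;
}
```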
00:26:11Basically, wherever in our program
00:26:13there is no loop or recursion,
00:26:15just simple statements, if conditions,
00:26:17and a fixed number of
00:26:19operations,
00:26:21we assume that number of
00:26:23operations to be
00:26:25constant.
00:26:27On the graph, the constant time complexity
00:26:29is shown as a horizontal line
00:26:31parallel to the x-axis.
00:26:33The next common time complexity
00:26:35is linear.
00:26:37We call big O of n as linear.
00:26:39We get to see this in many places.
00:26:41For example,
00:26:43we have seen linear search.
00:26:45The work inside the loop of linear search is constant.
00:26:47If we are taking n factorial,
00:26:49then the loop of n factorial will be like this.
00:26:51What is happening internally in this loop?
00:26:53It is a constant operation.
00:26:55We can call it k.
00:26:57What is variable?
00:26:59Basically, the number of times
00:27:01my loop will run is variable.
00:27:03If my input size is n,
00:27:05I have to find n factorial,
00:27:07then how many times will my loop run?
00:27:09It will run n times.
00:27:11How many times is the loop running?
00:27:13It is running n times.
00:27:15And what is it doing each time it runs?
00:27:17It is performing constant work.
00:27:19So n times,
00:27:21constant work is being performed.
00:27:23Since we ignore the constants,
00:27:25the time complexity will be equal to big O of n.
00:27:27That is why the time complexity of n factorial
00:27:29is big O of n.
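A minimal C++ sketch of the factorial loop being discussed; the loop body is constant work and it runs n times, so the time is O(n):

```cpp
// Iterative factorial: the loop runs n times and each
// iteration does constant work -> time complexity O(n).
long long factorialLoop(int n) {
    long long fact = 1;
    for (int i = 1; i <= n; i++) {
        fact *= i;  // constant work per iteration
    }
    return fact;
}
```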
00:27:31Similarly, if we look at Kadane's algorithm,
00:27:33we have already done this.
00:27:35In Kadane's algorithm,
00:27:37this is a constant work.
00:27:39If I look at the work inside the loop,
00:27:41this is also a single line.
00:27:43This is also a constant work.
00:27:45This part inside the loop
00:27:47is a constant work.
00:27:49How many times is the loop running?
00:27:51It is running from 0 to n.
00:27:53It is running n times.
00:27:55If we ignore the constants,
00:27:57the time complexity will be equal to big O of n.
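Since Kadane's algorithm is referenced here, a compact C++ sketch (illustrative; it assumes the standard maximum subarray sum version and a non-empty array):

```cpp
#include <vector>
#include <algorithm>
using namespace std;

// Kadane's algorithm: maximum subarray sum.
// Single loop over n elements, constant work inside -> O(n).
int maxSubarraySum(const vector<int>& arr) {
    int currSum = 0;
    int maxSum = arr[0];
    for (int i = 0; i < (int)arr.size(); i++) {
        currSum += arr[i];              // extend current subarray
        maxSum = max(maxSum, currSum);  // record the best so far
        if (currSum < 0) currSum = 0;   // reset on negative sum
    }
    return maxSum;
}
```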
00:27:59Similarly,
00:28:01when we do dynamic programming,
00:28:03we will write the code for the nth Fibonacci number.
00:28:05Inside nth Fibonacci,
00:28:07this is a constant work.
00:28:09If we look outside,
00:28:11the loop is running from 2 to n.
00:28:13Whether the loop runs from 0 to n,
00:28:15whether it runs from 1 to n,
00:28:17whether it runs from 2 to n,
00:28:19whether it runs from 5 to n,
00:28:21there are big values like 1 million,
00:28:231 billion.
00:28:25In such big values,
00:28:27whether the loop starts from 0,
00:28:29runs from 1, 2, 5, 10,
00:28:31it does not matter.
00:28:33On average, the loop will run n times.
00:28:35That is why the time complexity
00:28:37of this entire loop
00:28:39is going to be big O of n.
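A minimal C++ sketch of the iterative nth-Fibonacci code referred to above (illustrative, not necessarily the exact code from the DP lecture); the loop runs from 2 to n, which is still O(n):

```cpp
#include <vector>
using namespace std;

// nth Fibonacci, bottom-up: the loop runs from 2 to n,
// i.e. about n times -> time complexity O(n).
long long fib(int n) {
    if (n <= 1) return n;
    vector<long long> dp(n + 1);
    dp[0] = 0;
    dp[1] = 1;
    for (int i = 2; i <= n; i++) {
        dp[i] = dp[i - 1] + dp[i - 2];  // constant work
    }
    return dp[n];
}
```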
00:28:41Similarly, if we run a loop
00:28:43and calculate the sum of n numbers,
00:28:45that is also a code of linear time complexity.
00:28:47Or if we want to solve
00:28:49the majority element problem,
00:28:51we use Moore's voting algorithm
00:28:53which we used in the last lecture.
00:28:55That is also a question of linear time complexity.
00:28:57Linear time complexity is such
00:28:59that we get to see it in many places.
00:29:01This is a good time complexity;
00:29:03an algorithm with linear time complexity
00:29:05is not considered slow.
00:29:07Next common time complexities
00:29:09are n square and n cube.
00:29:11Where do we get to see n square?
00:29:13There are some sorting algorithms
00:29:15which are not optimized.
00:29:17For example, selection sort,
00:29:19insertion sort.
00:29:21These are not very good sorting algorithms
00:29:23because their time complexity
00:29:25is equal to big O of n square.
00:29:27For example, here we have a bubble sort code.
00:29:29In bubble sort code,
00:29:31there is a for loop
00:29:33and we have an if statement in it.
00:29:35Now this swap work
00:29:37is a constant work.
00:29:39We can assume it as K.
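Here is a minimal C++ sketch of the bubble sort being described (the swap inside the if is the constant work K); the loop bounds below are the ones analyzed next:

```cpp
#include <vector>
#include <algorithm>
using namespace std;

// Bubble sort: outer loop runs ~n times, the inner loop shrinks
// each pass; total operations ~ n(n-1)/2 -> O(n^2).
void bubbleSort(vector<int>& arr) {
    int n = arr.size();
    for (int i = 0; i < n - 1; i++) {
        for (int j = 0; j < n - i - 1; j++) {
            if (arr[j] > arr[j + 1]) {
                swap(arr[j], arr[j + 1]);  // constant work K
            }
        }
    }
}
```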
00:29:41Now if I see my outer loops,
00:29:43my outer loop runs
00:29:45from 0 to n-1.
00:29:47On average, my outer loop
00:29:49runs n times.
00:29:51This is my outer loop.
00:29:53Now if my outer loop runs once,
00:29:55let's suppose this loop
00:29:57runs once,
00:29:59how many times will the inner loop run?
00:30:01How can we find it?
00:30:03For that, we can take a small value of n.
00:30:05Let's suppose we took value of n equal to 4.
00:30:07If value of n is 4,
00:30:09then outer loop i is equal to 0
00:30:11less than n-1.
00:30:13Then value of i will be 1
00:30:15and value of i will be 2.
00:30:17In this way, my outer loop will run 3 times
00:30:19for n is equal to 4.
00:30:21Now when my outer loop will run
00:30:23for the first time for i is equal to 0,
00:30:25then how many times will my inner loop run?
00:30:27In the inner loop, the value of j will start from 0
00:30:29and run while it is less than n minus i minus 1, i.e. 4
00:30:31minus 0, minus 1,
00:30:33i.e. less than 3.
00:30:35In this case, value of j will go first 0,
00:30:37then 1 and then 2.
00:30:39Similarly, for i is equal to 1,
00:30:41value of j will go first 0, then 1.
00:30:43For i is equal to 2,
00:30:45value of j will go only up to 0.
00:30:47As i increases,
00:30:49inner loop will run less.
00:30:51For n is equal to 4,
00:30:53my inner loop is running this many times.
00:30:55Now listen carefully to what I am going to tell you.
00:30:57If my outer loop runs for the first time, i.e. for i is equal to 0,
00:30:59then the inner loop will run 3 times
00:31:03and perform k operations each time.
00:31:05So that is 3 into k operations.
00:31:13When the outer loop
00:31:15runs for the second time, i.e. for i is equal to 1,
00:31:17then the inner loop runs 2 times
00:31:19and performs k operations each time.
00:31:21So plus 2 into k.
00:31:23When the outer loop runs for the third time,
00:31:25i.e. for i is equal to 2,
00:31:27then the inner loop runs 1 time
00:31:29and performs k operations.
00:31:33So plus 1 into k.
00:31:35So 3k plus 2k plus 1k is the total
00:31:37number of operations when n is equal to 4. Now, if I take any arbitrary value of n,
00:31:41let's say n stays as n itself, then what pattern do I see in it?
00:31:45I see n minus 1 into k, plus n minus 2 into k, plus n minus 3 into k, and similarly,
00:31:54in this way, at last 2k plus k. In this way, I will get a formula that
00:31:58this should be my total number of operations for any n.
00:32:01I already got to know this because we are going from 1 to n minus 1.
00:32:05So, here also we are going from 1 to n minus 1. Now, the total number of
00:32:08general operations that I got, what is the final value of it?
00:32:12If we get the sum of all these values, then what will be the sum?
00:32:16We can take k as common, inside it will be n minus 1
00:32:19plus n minus 2 plus n minus 3 and in the same way, we have 2 plus 1.
00:32:24So, here basically, I have to calculate the sum of all numbers from 1 to n minus 1.
00:32:29From 1 to n minus 1, the sum of all numbers is n into n minus 1 divided by 2.
00:32:34And this value is being multiplied with k.
00:32:36If we open this, then it will be k n square by 2 minus k n by 2.
00:32:42And because we have to ignore the constants,
00:32:44in this time complexity, k will be ignored and 2 will be ignored.
00:32:49I will have the remaining terms, n square minus n.
00:32:52Which is the bigger term among these? n square.
00:32:53We obviously know, so we will also remove this n.
00:32:56So, the final time complexity that will come, that will be O of n square.
00:32:59So, I hope we have understood how, for nested loops,
00:33:03we can calculate their time complexity by looking at their number of operations.
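Compactly, the derivation above is:

```latex
\text{total ops} = k\big[(n-1) + (n-2) + \dots + 2 + 1\big]
                 = k \cdot \frac{n(n-1)}{2}
                 = \frac{kn^2}{2} - \frac{kn}{2}
                 \;\Rightarrow\; O(n^2)
```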
00:33:08Now, generally, in a nested loop, it is not that we have to calculate the time complexity for every nested loop.
00:33:13For many loops, as soon as we see, we get to know that this outer loop will also run n times.
00:33:18The inner loop will also run n times.
00:33:21The outer loop is running n times and for that, the inner loop is running n times every time.
00:33:25So, the time complexity becomes O of n square.
00:33:27So, in this way, when we do a lot of algorithms,
00:33:29we will get to know that which is the nested loop of n square and which is not the nested loop of n square.
00:33:33Along with this, along with sorting algorithms,
00:33:35all the questions that we have about patterns,
00:33:38like when we do patterns and the nested loop that runs in patterns,
00:33:42the time complexity in the questions of patterns is also O of n square.
00:33:47In fact, as a homework problem, we can go to our patterns class and see that lecture.
00:33:52For every single pattern, we can try to calculate the time complexity.
00:33:55Now, as we have n square, we also have n cube time complexity.
00:34:00In n cube time complexity, we generally have three nested loops.
00:34:04Our outer loop is also running n times.
00:34:07There is another loop inside it that is running n times.
00:34:09And there is another loop inside it that is running n times.
00:34:12We had already seen an example of this that if we have a given array
00:34:16and we have to print all possible sub-arrays of it.
00:34:25So, for any array, when we print all its sub-arrays,
00:34:28we have to write three nested loops in its code whose time complexity is O of n cube.
00:34:33We have already studied this in the last lecture.
00:34:36So, this is an example of our n cube loop.
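A minimal C++ sketch of the three nested loops for printing all sub-arrays (as covered in the previous lecture); each loop can run up to n times, giving O(n cube):

```cpp
#include <vector>
#include <iostream>
using namespace std;

// Prints every sub-array: start index i, end index j,
// and an inner loop to print arr[i..j] -> O(n^3) time.
void printSubarrays(const vector<int>& arr) {
    int n = arr.size();
    for (int i = 0; i < n; i++) {          // start of sub-array
        for (int j = i; j < n; j++) {      // end of sub-array
            for (int k = i; k <= j; k++) { // print the sub-array
                cout << arr[k] << " ";
            }
            cout << "\n";
        }
    }
}
```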
00:34:39Apart from this, another common time complexity is log n.
00:34:42Now, those students who do not know about log,
00:34:44you can pause here and learn about the basics of log.
00:34:47You can once revise the basics of log that we learned in 6th, 7th, and 8th class.
00:34:53The log n time complexity is considered to be very good.
00:34:56And where does log n time complexity come?
00:34:58If this is my graph and this was our constant time complexity,
00:35:02then the linear time complexity on this same graph comes here.
00:35:06This is O of n.
00:35:07The n square time complexity comes here.
00:35:10O of n square equals.
00:35:12The n cube comes here.
00:35:15O of n cube equals.
00:35:16And log n time complexity comes here.
00:35:19O of log n equals.
00:35:21So, the log n time complexity is very close to the constant.
00:35:25Because log of n is a very small value.
00:35:28What is log?
00:35:29Log is a power.
00:35:30Whatever the base is, like here generally the base is 2,
00:35:32log n asks to what power we should raise 2 so that we get the value n.
00:35:36So, this power is a very small value.
00:35:38That is why log n is generally a very small value.
00:35:41And this time complexity is very optimized.
00:35:43Now, the log n time complexity is generally seen in binary search.
00:35:47We will be doing the code of binary search in the coming DSA chapters.
00:35:50We will also learn many algorithms.
00:35:52So, binary search is done.
00:35:54Or like our binary search trees are done.
00:35:56So, we get to see log n time complexity in the operations of BST.
00:36:00Let's see how the log n time complexity of binary search is seen.
00:36:03Now, this is the algorithm of binary search.
00:36:04My assumption is that you already know the binary search.
00:36:07If you don't know, then you can come back
00:36:09and see this section when we read binary search later in the series.
00:36:11So, to find the time complexity of binary search,
00:36:13basically we have two pointers S and E.
00:36:15So, how does binary search work?
00:36:17We apply binary search on a sorted array.
00:36:20This array is already sorted.
00:36:22Like, it has 1, 2, 3, 4, 5, 6 elements like this.
00:36:25We find the start of our array.
00:36:28We find the end of our array.
00:36:29And the fundamental idea of binary search says that
00:36:31every time I have to find the mid of my array.
00:36:34And always compare my target with the mid.
00:36:37So, every time we find our mid.
00:36:39And compare the target with the mid.
00:36:40Depending upon the target, whether it is bigger or smaller than the mid.
00:36:43our search space, i.e. the portion
00:36:45of the array that I still have to search, shrinks.
00:36:47Like, let's suppose the target is equal to 5.
00:36:49Now, I know that the target is bigger than the mid.
00:36:51So, I have to search in the second half only.
00:36:53The first half is completely ignored.
00:36:55So, every time our search space is halved.
00:36:58Next time, the search space will be equal to 4, 5, and 6.
00:37:02After that, we will subtract the mid.
00:37:03So, it is possible that the search space will be smaller next time.
00:37:06So, if in the beginning, my search space,
00:37:09i.e. how big was the array in which I have to search.
00:37:11If it is n, then next time it will be n by 2.
00:37:15After that, next time it will be n by 4.
00:37:17After that, next time it will be n by 8.
00:37:19And in the worst case,
00:37:20in binary search,
00:37:22the search space keeps reducing until it reaches 1.
00:37:25So, if I have to find out how much time complexity I will get.
00:37:29So, basically, I have to find out that
00:37:31if I divide n by 2 every time,
00:37:33divide it by 2, divide it by 2,
00:37:35then how many times will n be equal to 1?
00:37:39The number of times n will be equal to 1 by dividing by 2,
00:37:41that will be my time complexity.
00:37:43Because what are these values?
00:37:44These values are n divided by 2 to the power 0.
00:37:47Then n divided by 2 to the power 1.
00:37:49Then n divided by 2 to the power 2.
00:37:51Then n divided by 2 to the power 3.
00:37:53And in this way,
00:37:55where will that value come?
00:37:56When n divided by 2 to the power something k or something x,
00:38:00whose value will be equal to 1.
00:38:02So, basically, I have to find out this x.
00:38:04Because here I have divided it by 0 times,
00:38:06here I have divided it by 1 time,
00:38:07here I have divided it by 2 times,
00:38:08here I have divided it by 3 times,
00:38:09and here I have divided it by x time.
00:38:10So, whatever value of my x will come,
00:38:13this x is my time complexity.
00:38:15Now, how will we find the value of x?
00:38:17We can find the value of x from this.
00:38:19I know that my n divided by 2 to the power x is equal to 1.
00:38:23So, can I say that n is equal to 2 to the power x?
00:38:25So, if we take log on both sides,
00:38:27what value will we get?
00:38:28For this, generally, we should have basic log knowledge,
00:38:30which we can read a little.
00:38:31So, log base 2 n will be equal to x.
00:38:34So, what will be the value of x?
00:38:35The value of x will be equal to log n base 2.
00:38:38By the way, generally, in programming,
00:38:39we ignore the base.
00:38:40Because even if it is log base 3 of n,
00:38:42still our overall time complexity is called log n.
00:38:46So, what is the value of x here?
00:38:47It is equal to log n.
00:38:48So, as we said earlier, what is x?
00:38:50It is equal to the time complexity.
00:38:51So, what is the time complexity?
00:38:52Time complexity is equal to big O of log n.
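For reference, a minimal C++ sketch of the binary search whose complexity was just derived (the variables s and e follow the two pointers S and E mentioned above):

```cpp
#include <vector>
using namespace std;

// Binary search on a sorted array; returns index or -1.
// The search space halves each iteration -> O(log n) time.
int binarySearch(const vector<int>& arr, int target) {
    int s = 0, e = (int)arr.size() - 1;
    while (s <= e) {
        int mid = s + (e - s) / 2;  // mid of the search space
        if (arr[mid] == target) return mid;
        else if (arr[mid] < target) s = mid + 1;  // search right half
        else e = mid - 1;                         // search left half
    }
    return -1;
}
```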
00:38:55In any algorithm, in computer science,
00:38:58if you see such a situation,
00:39:00that we have some value in the beginning,
00:39:02at the next level, that value becomes half.
00:39:04At the next level, it becomes half again,
00:39:05and half again at the level after that.
00:39:07So, generally, in those algorithms,
00:39:09time complexity comes out to be big O of log n.
00:39:12The same logic follows in the binary search tree as well,
00:39:14which we will study
00:39:16when we reach trees.
00:39:18So, this is big O of log n time complexity,
00:39:20which is a very optimized time complexity.
00:39:22Generally, beginner students get scared by seeing this.
00:39:24But this time complexity is going to be your best friend.
00:39:27Because if you can bring this into the questions,
00:39:29then that is a really, really optimized code.
00:39:31So, you should be happy at any time
00:39:33if you see log n in your final answer in time complexity.
00:39:36Similarly, we have another general time complexity,
00:39:39which is called n log n.
00:39:41The time complexity of n log n
00:39:43is seen in optimized sorting algorithms.
00:39:46For example, if we have merge sort,
00:39:49or if we have average case of quick sort.
00:39:52In worst case of quick sort, n square is there.
00:39:54But in average, n log n is there.
00:39:56We will cover this later in our entire DSA series.
00:40:00Or generally, in our greedy algorithms,
00:40:02because sorting is also applied in greedy algorithms,
00:40:05there also, we see equal time complexity of n log n.
00:40:08So, if we talk about the graph,
00:40:09where does n log n lie on the graph?
00:40:12So, I know that my log n time complexity,
00:40:15big O of log n, is small as compared to big O of n.
00:40:19Being small means that it is a better time complexity.
00:40:22So, if I multiply both sides by n,
00:40:25then can I say that my n log n time complexity
00:40:29is better as compared to n square time complexity.
00:40:33So, n log n is bigger than n, but smaller than n square.
00:40:36So, my n log n time complexity will lie somewhere here.
00:40:39That is equal to big O of n log n.
00:40:42So, n log n is slightly worse than linear, but much better than n square.
00:40:44Apart from this, we have another common time complexity,
00:40:47which we will study in detail later,
00:40:48which is called exponential.
00:40:51We call this exponential time complexity.
00:40:53And this is generally seen in recursion.
00:40:56In recursion, our brute force recursion,
00:40:59in which there is no optimization,
00:41:01in that, we call big O of 2 to the power n,
00:41:03or big O of 3 to the power n,
00:41:05or big O of 4 to the power n,
00:41:07we call all of these exponential.
00:41:09This is considered to be a very bad time complexity.
00:41:12And generally, where does it lie?
00:41:14It lies somewhere here in our chart.
00:41:16Big O of 2 to the power n.
00:41:18If we see 3 to the power n,
00:41:19then 3 to the power n will be worse than this.
00:41:21If we see 4 to the power n,
00:41:22then it will be worse than this.
00:41:23So, we are going to see 2 to the power n
00:41:24in more detail in recursion.
00:41:27Apart from this, there is another time complexity
00:41:29related to recursion,
00:41:30which is equal to big O of n factorial.
00:41:31This is not that common,
00:41:33but you will get to see it in some places.
00:41:34For example, in recursion,
00:41:35we do a very classical question called n queens.
00:41:38So, the brute force approach of n queens
00:41:40by using recursion,
00:41:41has a time complexity of big O of n factorial.
00:41:43Just like the question of n queens,
00:41:44there is another question called Knight's tour.
00:41:46In Knight's tour,
00:41:47which is our approach without any optimization,
00:41:50there is basic recursion.
00:41:52So, there we get to see n factorial time complexity.
00:41:54If we have a question that we have a given string,
00:41:58we have to find all possible permutations of that string.
00:42:03So, the approach of finding all possible permutations
00:42:06is our recursive approach generally.
00:42:08And even in that,
00:42:09we need n factorial equal time complexity.
00:42:12For example, if we have a string abc.
00:42:15So, what will be the permutations of string abc?
00:42:17One permutation will be abc,
00:42:19one permutation will be acb,
00:42:21one permutation will be bac,
00:42:23one permutation will be bca,
00:42:25one permutation will be cab,
00:42:27and one permutation will be cba.
00:42:28So, how many permutations will be there in total?
00:42:30If the value of n is 3,
00:42:31that is, the size of the string is equal to 3,
00:42:33then there will be 6 permutations.
00:42:35What does 6 mean?
00:42:37It is 3 factorial.
00:42:38And what is 3 factorial?
00:42:39It is n factorial.
00:42:40So, if I have to find n factorial permutations,
00:42:43then it will take me at least big O of n factorial time to find them all.
00:42:47That is why this question also has a time complexity of big O of n factorial.
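An illustrative recursive C++ sketch for printing all permutations of a string (not necessarily the exact code from the recursion lecture); it produces n! permutations, so the time is at least O(n!):

```cpp
#include <iostream>
#include <string>
#include <utility>
using namespace std;

// Prints all permutations of s by fixing one character at
// index idx and recursing; n! permutations -> O(n!) time.
void permutations(string s, int idx) {
    if (idx == (int)s.size() - 1) {
        cout << s << "\n";
        return;
    }
    for (int i = idx; i < (int)s.size(); i++) {
        swap(s[idx], s[i]);        // choose a character for position idx
        permutations(s, idx + 1);  // recurse on the remaining positions
        swap(s[idx], s[i]);        // undo the choice (backtrack)
    }
}

// Usage: permutations("abc", 0); prints all 6 = 3! permutations.
```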
00:42:52So, in this way, n factorial time complexity exists.
00:42:54But it is also very bad.
00:42:56In fact, it is worse than 2 to the power n.
00:42:59Because what is 2 to the power n?
00:43:012 to the power n is that we are multiplying 2 only.
00:43:04How many times are we multiplying?
00:43:05n times.
00:43:07But what is n factorial?
00:43:09We are multiplying n, then n minus 1,
00:43:11then n minus 2, and so on,
00:43:13all the way down
00:43:14to 3 into 2 into 1.
00:43:16This is also n multiplications.
00:43:17Now, here only 2,
00:43:19that is, a small number is being multiplied.
00:43:20Here, bigger than 2 is 3,
00:43:21bigger than 3 is 4,
00:43:22bigger than 4 is 5.
00:43:23Then numbers are being multiplied up to n.
00:43:25So, this time complexity is way worse as compared to exponential.
00:43:29This time complexity is worse than exponential.
00:43:32And if we look at it on the chart,
00:43:34then my exponential comes here,
00:43:35and my n factorial time complexity comes here.
00:43:38So, I hope we have understood
00:43:39how our time complexity is going from good to worse.
00:43:44And generally, what are the use cases
00:43:46where we get to see time complexities in the entire DSA.
00:43:50So, this was an overview
00:43:52that which time complexities generally exist in programming,
00:43:55where we get to see them.
00:43:57Now, we are going to ask some questions based on time complexity.
00:43:59Then we will talk about how we calculate time complexities in recursion.
00:44:03Now, we have to solve a question for prime number.
00:44:05We have this code for prime number
00:44:07in which we have to calculate time complexity.
00:44:09So, my assumption is that you already know this code, this logic.
00:44:13There is a small error in this code
00:44:15that it should be less than or equal to.
00:44:17Not just less than, it should be less than or equal to.
00:44:19So, if we are given a code like this,
00:44:21I know that there is a loop running in this code.
00:44:24The work inside the loop,
00:44:26every statement of it,
00:44:28is a constant work.
00:44:29So, overall, how much?
00:44:30k constant work is being done.
00:44:31So, I just have to see how many times my loop is running.
00:44:34I know that my loop will always start from 2.
00:44:36i is equal to 2 is starting.
00:44:38And until the value of i square is less than or equal to n,
00:44:42my loop is running.
00:44:43So, in the worst case, how many times will my loop run?
00:44:45In the worst case, my loop will run until the value of i square is equal to n.
00:44:50And the value of i square is equal to n means
00:44:52that when the value of i becomes root n.
00:44:54So, can I say that my loop is running from i is equal to 2 to under root n?
00:45:00Now, whether the loop runs from 2 to root n or runs from 0 to root n,
00:45:04the time complexity remains the same.
00:45:06Because n is a very big value.
00:45:08It is a value of millions or billions.
00:45:09So, whether the loop starts from 0,
00:45:11whether it starts from 2,
00:45:12whether it starts from 5,
00:45:13it will not make much difference.
00:45:14Because there is no value of 2 or 4 in front of 1 million.
00:45:17And that is why,
00:45:18what will be the time complexity?
00:45:20I know that if my loop runs from 0 to root n,
00:45:23then the time complexity will be big O of root n.
00:45:26So, this prime number logic loop
00:45:29will have a time complexity of big O of root n.
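A minimal C++ sketch of the corrected prime-check loop (with the less-than-or-equal-to condition mentioned above):

```cpp
// Checks whether n is prime. The loop runs while i*i <= n,
// i.e. up to about sqrt(n) times -> O(sqrt(n)) time.
bool isPrime(int n) {
    if (n < 2) return false;
    for (int i = 2; i * i <= n; i++) {  // note: <=, not just <
        if (n % i == 0) {
            return false;  // found a divisor
        }
    }
    return true;
}
```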
00:45:32Now, my next question will be for you.
00:45:33You have to find out whether big O of root n is a bigger time complexity
00:45:37or big O of n is a bigger time complexity.
00:45:40First is this.
00:45:41Second, is big O of root n a better time complexity
00:45:44or is big O of log n a better time complexity?
00:45:47I am giving you two homework problems.
00:45:49You have to tell who is better between these two and who is better between these two.
00:45:55The next question that we will solve is selection sort.
00:45:58So, my assumption is that you already know selection sort.
00:46:00So, this is my general code for selection sort.
00:46:03In selection sort, this loop is running.
00:46:05In the loop, this constant k operation is being performed.
00:46:08Then another loop is running.
00:46:10Inside this inner loop, there is an if condition,
00:46:11and that too is constant k work,
00:46:13and so is everything else inside it.
00:46:17So overall, inside one loop,
00:46:19another loop is running which performs constant work.
00:46:22Now, I know how many times my outer loop will run.
00:46:24Outer loop will run from 0 to n-1.
00:46:27So, how many times outer loop is running?
00:46:29Outer loop is running n times.
00:46:33If outer loop runs once,
00:46:36then how many times my inner loop is running?
00:46:37Inner loop goes from i plus 1 to n.
00:46:41For example, when i's value is 0,
00:46:43then where will j's value go?
00:46:45j's value will go from 1 to n.
00:46:47When i's value is 1,
00:46:48j's value will go from 2 to n.
00:46:50When i's value is 2,
00:46:51j's value will go from 3 to n.
00:46:53So, can I say that in first time,
00:46:55my loop will run n times?
00:46:56I am talking about inner loop.
00:46:58In second time, it will run n-1 times.
00:47:00In third time, it will run n-2 times.
00:47:02And I can see this pattern,
00:47:03then n-3, n-4.
00:47:05How many times will my inner loop run the last time?
00:47:07It will run once.
00:47:08Because in the last iteration,
00:47:12i's value will finally be n-2.
00:47:14So, for i equal to n-2, j's value will start from n-1.
00:47:17And j should run less than n.
00:47:19So, my loop will run once.
00:47:21So, while running outer loop,
00:47:23my inner loop,
00:47:24for first time, it will run n, n-1, n-2 like this.
00:47:27So, how many times will my inner loop run?
00:47:29Technically, the first run should be n-1,
00:47:33and then it should go n-2, n-3, n-4, and so on.
00:47:37But whether you write n or n-1 here,
00:47:40a difference of 1 out of 1 million
00:47:41or 1 out of 10 million
00:47:43doesn't make much difference.
00:47:45So, that is why,
00:47:46whether we are writing n or n-1 here,
00:47:47it won't make any difference on time complexity.
00:47:50So, if I have to take the sum n + (n-1) + (n-2) + ... + 1,
00:47:57then the sum of numbers from 1 to n is
00:47:59n(n+1)/2.
00:48:01And on each of these inner-loop iterations,
00:48:03it is performing k constant operations.
00:48:07So, my total time complexity is k multiplied by n(n+1)/2.
00:48:09And I already know its value.
00:48:11It is equal to big O of n square.
00:48:13That is why, time complexity of selection sort is equal to big of n square.
00:48:16So, this is how we calculate time complexity.
00:48:18By the way, we could have calculated this by estimating.
00:48:20I know that this is a constant work.
00:48:22I know that outer loop will run on average n times.
00:48:26Inner loop, if outer loop has run once,
00:48:29then how many times inner loop will run in worst case?
00:48:31In worst case, it will run n times.
00:48:33So, the outer loop will run n times.
00:48:35And for each single run of it, the inner loop will also run up to n times.
00:48:37So, when the outer loop runs n times,
00:48:39then for every run of it, the inner loop runs n times,
00:48:41and on each of those runs it performs k operations.
00:48:43So, in this case,
00:48:45time complexity will be big of k n square.
00:48:48And if we ignore constant,
00:48:50then time complexity will be big of n square.
00:48:52So, we can calculate in this way also.
00:48:54We don't have to do calculation every time.
00:48:56But calculating it in detail like this
00:48:58will be useful for practice.
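As a reference, here is a hedged C++ sketch of selection sort matching the structure described above (the exact code on the slide may differ slightly):

```cpp
#include <vector>
#include <utility>
using namespace std;

// Outer loop runs ~n times; for index i, the inner loop runs from i+1 to n-1,
// giving (n-1) + (n-2) + ... + 1 = n(n-1)/2 comparisons -> O(n^2).
void selectionSort(vector<int>& arr) {
    int n = arr.size();
    for (int i = 0; i < n - 1; i++) {
        int minIdx = i;                 // constant work per outer iteration
        for (int j = i + 1; j < n; j++) {
            if (arr[j] < arr[minIdx]) { // constant work per inner iteration
                minIdx = j;
            }
        }
        swap(arr[i], arr[minIdx]);      // constant work
    }
}
```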
00:49:00Next, we will learn how to calculate
00:49:02time and space complexity in recursion.
00:49:04So, before starting this section,
00:49:06my assumption is that
00:49:08you already know recursion.
00:49:10You know what a recursion tree is.
00:49:12You know what a call stack is.
00:49:14If we haven't learned recursion yet,
00:49:16then we can come back to this section after covering recursion later in the series.
00:49:18So, to calculate time complexity of recursion,
00:49:20we have taken a simple example
00:49:22of factorial calculation
00:49:24using recursion.
00:49:26Now, this factorial code is a fairly easy example
00:49:28of what recursion looks like.
00:49:30We have a function named factorial
00:49:32which has integer n.
00:49:34If value of n is 0, then we will return 1.
00:49:36Otherwise, we are going to return n multiplied by
00:49:38factorial of n minus 1.
00:49:40In this way, our recursion will work.
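A minimal C++ version of the factorial code being described (assuming the slide uses this standard form):

```cpp
// Base case returns 1; otherwise n is multiplied by factorial(n - 1).
long long factorial(int n) {
    if (n == 0) return 1;                    // base case: constant work
    return (long long)n * factorial(n - 1); // one recursive call + constant work
}
// factorial(4) = 4 * 3 * 2 * 1 * 1 = 24
```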
00:49:42Now, in recursion codes,
00:49:45there are two ways.
00:49:47The first way is
00:49:49to calculate time complexity
00:49:51using recurrence relation.
00:49:53We write the equation of recurrence relation
00:49:55and solve it mathematically
00:49:57to calculate time complexity.
00:49:59The second way to calculate time complexity
00:50:01is that we find out
00:50:03the time complexity
00:50:05as total number of
00:50:07recursion calls
00:50:09multiplied by
00:50:11work done in each call.
00:50:13In this way,
00:50:15we get the same value of time complexity.
00:50:17Now, generally,
00:50:19the method of recurrence relation
00:50:21is a little more mathematical.
00:50:23In fact,
00:50:25it is not possible
00:50:27to calculate recurrence relation
00:50:29for every recursion code.
00:50:31So, generally, in practical cases,
00:50:33we prefer the second method.
00:50:35For example,
00:50:37let's talk about this factorial function.
00:50:39In this method,
00:50:42we will try to find out the answer
00:50:44using recurrence relation.
00:50:46What will be the recurrence relation
00:50:48of this code?
00:50:50It can be fn is equal to
00:50:52constant plus f of n minus 1.
00:50:54It means that
00:50:56if this function is called
00:50:58for n,
00:51:00then this function is doing
00:51:02constant work of multiplying
00:51:04plus it is calling the same function
00:51:06for a value n minus 1.
00:51:08So, fn
00:51:10is equal to
00:51:12f of n minus 1
00:51:14plus some k
00:51:16which is doing
00:51:18constant work
00:51:20in other parts of the function.
00:51:22So, this is the recurrence relation.
00:51:24Now, how do we find the value of time complexity?
00:51:26Basically, in recursion,
00:51:28we will call factorial for n.
00:51:30So, fn is a measure of
00:51:32time complexity.
00:51:34To solve this equation,
00:51:36we have to find f of n minus 1.
00:51:39So, this is going to be
00:51:41k plus f of n minus 2.
00:51:43Then, we have to find f of n minus 2.
00:51:45So, it will be k plus f of n minus 3.
00:51:47In this way, our values will continue.
00:51:49What will be the value at the end?
00:51:51At the end, we will get
00:51:53f of 1 is k plus
00:51:55f of 0.
00:51:57And for f of 0,
00:51:59when the value of n is 0,
00:52:01we simply return 1.
00:52:03So, when the value of n is 0,
00:52:05only constant work is done.
00:52:07Let's suppose this is
00:52:09some other constant
00:52:11and replace f of 0 with k2.
00:52:13Now, if we add all the terms
00:52:15on the left and right side,
00:52:17many terms will cancel out.
00:52:19For example, f of n minus 1 will cancel out.
00:52:21f of n minus 2 will cancel out.
00:52:23f of n minus 3 will cancel out.
00:52:25Similarly, all the values down to f of 0 will cancel out.
00:52:27Finally, which term will survive on the left side?
00:52:29Only f of n will survive on the left side.
00:52:31So, we can say
00:52:33f of n is equal to
00:52:36whatever survives on the right side.
00:52:38In fact,
00:52:42many k constant values
00:52:44will survive on the right side.
00:52:46So, let's take k as common for them.
00:52:48But how many k constant values are there?
00:52:50We have come from n to n minus 1,
00:52:52n minus 2 and so on till 1.
00:52:54So, whether we go from 1 to n or
00:52:56from n to 1,
00:52:58we got n equations.
00:53:00So, how many times did we get k?
00:53:02We got k n times.
00:53:05And since k2 also survives on the right side,
00:53:07we add k2 as well.
00:53:09So, this will be our overall time complexity.
00:53:11In time complexity,
00:53:13we ignore the constants on the right side.
00:53:15So, k will be ignored and k2 will be ignored.
00:53:17And the time complexity is going to be big O of n.
00:53:19So, in this way,
00:53:21the time complexity of this particular code
00:53:23will be a linear time complexity, big O of n.
00:53:25So, we found out this time complexity
00:53:27from our recurrence relation.
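Written out, the telescoping looks like this, with k the constant work per call and k2 the base-case constant:

```latex
\begin{aligned}
f(n)   &= f(n-1) + k \\
f(n-1) &= f(n-2) + k \\
       &\;\;\vdots \\
f(1)   &= f(0) + k = k_2 + k
\end{aligned}
\quad\Longrightarrow\quad
f(n) = nk + k_2 = O(n)
```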
00:53:29Now, there are many questions
00:53:31for which the recurrence relation is intuitive.
00:53:33But like Fibonacci,
00:53:35there are also many recursion codes
00:53:37for which solving the recurrence relation is not so intuitive.
00:53:39So, for those cases,
00:53:41we prefer our second method
00:53:43which is finding out
00:53:45the total number of recursion calls
00:53:47by drawing our recursion tree.
00:53:49Now, how can we draw our recursion tree
00:53:51for this particular code?
00:53:53Let's suppose our n is equal to 4.
00:53:55For n is equal to 4,
00:53:57how will the factorial tree look like?
00:53:59For n is equal to 4,
00:54:02f of 4 will call f of 3,
00:54:04f of 3 will call f of 2,
00:54:06f of 2 will call f of 1,
00:54:08f of 1 will call f of 0.
00:54:10So, can we say that
00:54:12every level is calling its smaller level?
00:54:14So, if n is equal to 4,
00:54:16for that, 1, 2, 3, 4, 5 calls
00:54:18are being called in this code.
00:54:20Similarly, if n is equal to 5,
00:54:22then 6 calls will be made,
00:54:24and if n is equal to 6, then 7 calls will be made.
00:54:26So, can we say that for any n,
00:54:28we are calling total n plus 1 calls?
00:54:30As before, we can ignore this plus 1.
00:54:32If n has any value,
00:54:34for that, in general,
00:54:36we are calling approximately n calls.
00:54:38So, how many recursive calls are there?
00:54:40These are n calls.
00:54:42This call will calculate its answer.
00:54:44It will return 1 to the upper level.
00:54:46Here, 1 will be multiplied by 1
00:54:48and will return 1 to the upper level.
00:54:50Here, 1 will be multiplied by 2
00:54:52and will return 2 to the upper level.
00:54:54Here, 2 will be multiplied by 3 and will return 6.
00:54:56Here, 6 will be multiplied by 4
00:54:59and will return 24 to the upper level.
00:55:01Finally, what is 4 factorial?
00:55:034 factorial is equal to 24.
00:55:05In this way, our recursion will work.
00:55:07So, if approximately n calls are being made,
00:55:09then the total time will be equal to n calls
00:55:11multiplied by
00:55:13the work in each call.
00:55:15If we talk about a call,
00:55:17it means a function call.
00:55:19In this function, there is an if statement.
00:55:21That is a constant operation.
00:55:23The multiplication by n is constant.
00:55:25Making the call itself is also a constant operation.
00:55:27So, overall, in one function call,
00:55:29only constant work is being done.
00:55:31We can take that constant as k.
00:55:33So, the time complexity will be
00:55:35big O of n into k, or we can call it
00:55:37big O of nk, which is just big O of n.
00:55:39So, here also the same answer has come.
00:55:41So, whether we get the answer
00:55:43from the recurrence relation or
00:55:45from the formula,
00:55:47the answer is always going to be same.
00:55:49So, the time complexity of this recursive code
00:55:51is going to be linear.
00:55:53Compared to time complexity,
00:55:56calculating space complexity in recursion
00:55:58has a small and interesting difference from normal code.
00:56:00And the difference is that
00:56:02whenever we do recursion,
00:56:04we have a call stack.
00:56:06Now,
00:56:08when we calculate factorial
00:56:10by applying a for loop,
00:56:12then we count
00:56:14how many extra variables
00:56:16we have made in our code,
00:56:18in the program
00:56:20that we have written with the for loop,
00:56:22and whether we have made any extra array;
00:56:25we count such things.
00:56:27But in recursion,
00:56:29if we are not making any extra variable
00:56:31in this code,
00:56:33then also there is an extra stack
00:56:35which is used in memory
00:56:37and we have to count that also
00:56:39in our space complexity.
00:56:41The call stack of recursion
00:56:43in which all the data of calls are stored,
00:56:45we count that also in our space complexity.
00:56:47For example, here we have written
00:56:49the code of factorial.
00:56:51We know that for n is equal to 4,
00:56:53we can show the call of f of 4 like this.
00:56:55First, the call for n equal to 4 will go onto the stack.
00:56:57Then, on top of the call for n equal to 4,
00:56:59the call for n equal to 3 will be put.
00:57:01Then, on top of f of 3, the call for f of 2 will be put,
00:57:03and this call will also occupy
00:57:05memory in the stack.
00:57:07Then when f of 1 is called,
00:57:09one more call will occupy memory in the stack.
00:57:11Then when f of 0 is called,
00:57:13one more call will occupy memory in the stack.
00:57:15Even though we have not made any extra variable here,
00:57:17when recursion is happening internally,
00:57:19then in the call stack,
00:57:21stack frames occupy memory, which contributes to our space complexity.
00:57:25Now what does this space complexity come out to be?
00:57:28In recursion, in space complexity, we calculate this auxiliary space which we will use as
00:57:34extra variables here.
00:57:35Plus, we add the memory of this call stack to it.
00:57:39And this basically becomes depth of recursion tree multiplied by memory occupied in each
00:57:47call.
00:57:48Or in fact, by changing this, we can write that space complexity is equal to height of
00:57:55call stack multiplied by memory in each call.
00:58:01So, whether we talk about depth of recursion tree, i.e., from F4 to 3, 2, 1, 0, the depth
00:58:08of recursion tree will be equal to n.
00:58:11Or whether we talk about height of call stack in worst case scenario.
00:58:14In worst case scenario, maximum height of call stack will be approximately equal to n.
00:58:20Whether it is n plus 1, n plus 2, or n minus 1, it doesn't matter.
00:58:23Approximately n.
00:58:24So, whether we look at the height of the call stack or the depth of the recursion tree,
00:58:30the values of both are going to be the same.
00:58:33Because both describe the same recursion working:
00:58:35the recursion tree is our way of visualizing it,
00:58:37while the call stack is what is actually happening in memory.
00:58:41So, in both the cases, total height is going to be equal to n.
00:58:44And how much memory are we occupying in each call?
00:58:47In each call, we haven't defined any variable as such.
00:58:50And even if we had defined a variable, it would have been a constant memory.
00:58:53So, here, this is also constant.
00:58:55So, space complexity becomes equal to O of n.
00:58:58So, here, even though we haven't defined any extra variable in the recursion,
00:59:02our code still takes O of n space because of this call stack.
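To make the contrast concrete, here is an illustrative loop-based factorial; unlike the recursive version, it grows no call stack, so its auxiliary space is O(1):

```cpp
// Iterative factorial: only a couple of local variables -> O(1) extra space.
long long factorialIterative(int n) {
    long long result = 1;
    for (int i = 2; i <= n; i++) {
        result *= i;
    }
    return result;
}
// The recursive version computes the same value, but at its deepest point the
// call stack holds about n frames: f(n), f(n-1), ..., f(0) -> O(n) space.
```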
00:59:06So, we can remember three things for recursion.
00:59:09These are the two ways of time complexity in recursion.
00:59:12The second way is generally easier and most preferred.
00:59:15For space complexity, we can use either formula, depth of recursion tree or height of call stack, and calculate our space complexity.
00:59:20And if you know how to make a recursion tree,
00:59:22if we understand recursion well,
00:59:24then it becomes very easy for us to find space and time complexity.
00:59:27Now, let us solve a question based on recursion.
00:59:30Let's suppose we are given a recursive Fibonacci code.
00:59:33My assumption is that you have already seen this question while doing recursion.
00:59:37Now, if we want to find its time and space complexity, how do we do it?
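For reference, a standard brute-force recursive Fibonacci in C++ (the base-case convention f(0) = 0, f(1) = 1 is an assumption, since the slide isn't shown here):

```cpp
// Each call branches into two smaller calls until a base case is hit.
int fib(int n) {
    if (n <= 1) return n;           // base cases: f(0) = 0, f(1) = 1
    return fib(n - 1) + fib(n - 2); // two recursive calls + constant work
}
```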
00:59:40First of all, let's talk about time complexity.
00:59:42For time complexity, let's draw our recursion tree.
00:59:45Let's suppose for n is equal to 4.
00:59:47For n is equal to 4, Fibonacci's first call will be for F4.
00:59:50What will F4 do? It will check the base case.
00:59:52If it doesn't meet, it will call for F3 and F2.
00:59:55So, F4 called for F3 and F2.
00:59:59Now, what will F3 do?
01:00:01If the base case doesn't meet for F3, it will call for F2 and F1.
01:00:04So, F3 will call for F2 and F1.
01:00:08Similarly, F2 will call for F1 and F0.
01:00:11Similarly, F2 will call for F1 and F0.
01:00:15Here, the base case met.
01:00:17This is the base case. Base case. Base case. Base case.
01:00:19So, overall, these are the total calls.
01:00:22Generally, when an incomplete tree is formed in a recursion tree,
01:00:26we try to form branches and balance them out
01:00:30so that our last level is completely filled.
01:00:32Now, if the last level is completely filled,
01:00:34what was the second way to calculate time complexity?
01:00:37Time complexity was the total number of calls
01:00:40or total calls as we call them,
01:00:43multiplied by the work done in each call.
01:00:46So, what is the total number of calls?
01:00:49We know that there is one call on our first level,
01:00:51there are two calls on our second level,
01:00:53there are four calls on our third level.
01:00:55Similarly, since we have balanced out,
01:00:57there will be eight calls on our last level.
01:00:59Can we say that 1 is equal to 2 to the power of 0?
01:01:012 is equal to 2 to the power of 1.
01:01:034 is equal to 2 to the power of 2.
01:01:058 is equal to 2 to the power of 3.
01:01:07So, in this way, our calls are increasing at every level.
01:01:09If instead of n is equal to 4,
01:01:11we had an arbitrary n,
01:01:13some other value of n,
01:01:15then how would the total number of calls look like?
01:01:17The total number of calls would be: on the first level,
01:01:192 to the power of 0, plus 2 to the power of 1,
01:01:21plus 2 to the power of 2, plus 2 to the power of 3, and so on.
01:01:23And on the last level,
01:01:25I know that the power would be n minus 1,
01:01:27because here n is equal to 4
01:01:29and the last level
01:01:31has 2 to the power of 3 calls.
01:01:33Similarly, for a general n, the last level
01:01:35has 2 to the power of n minus 1 calls.
01:01:37Now, we have to add these total number of calls.
01:01:39So, these total number of calls together
01:01:41are making a geometric progression.
01:01:43So, we have read in math that
01:01:45in a geometric progression,
01:01:47the total sum of all the values
01:01:49is equal to a into (r to the power n, minus 1),
01:01:51divided by (r minus 1).
01:01:53In this formula, a is the starting value,
01:01:55r is the common ratio, and n is the total number of terms.
01:01:57We have read this in math;
01:01:59you can look this formula up.
01:02:01Here, a is the first value,
01:02:03which is equal to 2 to the power of 0, i.e. 1.
01:02:07What is the common ratio?
01:02:09All terms have a common ratio of 2.
01:02:11So, the sum is 1 into (2 to the power n, minus 1),
01:02:13divided by (2 minus 1).
01:02:15This value becomes
01:02:172 to the power of n, minus 1,
01:02:19divided by 1.
01:02:21Here, this value becomes
01:02:23equal to 2 to the power of n minus 1.
01:02:25The final value is 2 to the power of n minus 1.
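In symbols, with first term a = 2^0 = 1 and common ratio r = 2:

```latex
\sum_{i=0}^{n-1} 2^{i}
= a \cdot \frac{r^{n}-1}{r-1}
= 1 \cdot \frac{2^{n}-1}{2-1}
= 2^{n}-1
```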
01:02:27I hope we have understood
01:02:29how this value is obtained.
01:02:31This value is obtained by
01:02:33drawing the recursion tree
01:02:35and taking the sum of all the calls.
01:02:37Now, when we have the sum of all the calls,
01:02:39we have the total number of calls.
01:02:41Our total number of calls is
01:02:432 to the power of n minus 1.
01:02:45How much work is being done in each call?
01:02:47In each call, only constant work is being done.
01:02:49So, the total time complexity becomes
01:02:51(2 to the power n, minus 1) multiplied by k,
01:02:53and ignoring the constants,
01:02:55it comes out to be big O of 2 to the power n.
01:03:21So, this is quite a bad time complexity for the Fibonacci code.
01:03:23But in brute force recursion,
01:03:25we commonly get this same time complexity.
01:03:27To make the calculation simple,
01:03:29we balanced out our tree.
01:03:31If we don't balance it out,
01:03:33the accurate time complexity of
01:03:35recursive Fibonacci
01:03:37is about 1.618 to the power of n.
01:03:39This value 1.618 is called the golden ratio.
01:03:43Deriving this exact value is not easy
01:03:45with just pen and paper;
01:03:47generally, people simply remember it.
01:03:49In interviews, we don't have to tell this exact value;
01:03:51we can simply say 2 to the power of n,
01:03:53because mostly we keep things simple,
01:03:55and no one is expecting you to give the exact value.
01:03:59Whether you take 1.6 or 2,
01:04:01both are constants.
01:04:03Also, if we have to find the space complexity,
01:04:05how will we do it?
01:04:07For space complexity,
01:04:09we have the value
01:04:11of the depth
01:04:13of the recursive tree
01:04:15or the height of the call stack
01:04:17multiplied by space and memory
01:04:19in each call.
01:04:21What is the depth of the recursive tree?
01:04:23We know the depth.
01:04:25It goes from 0 to n-1.
01:04:27How many levels are there in total?
01:04:291st, 2nd, 3rd.
01:04:31We have n levels.
01:04:33How much memory is being occupied in each level?
01:04:35We are not using any extra variable in the function.
01:04:37We can consider it as a constant
01:04:39or 1.
01:04:41The total space complexity
01:04:43will be O of n.
01:04:45So, that is the space complexity of Fibonacci.
01:04:47We can also find the time complexity
01:04:49of Fibonacci using a
01:04:51recurrence relation.
01:04:53I am giving you that as a homework problem.
01:04:55The time complexity of Fibonacci
01:04:57is going to be equal to
01:04:59time complexity of n-1
01:05:01plus
01:05:03time complexity of n-2
01:05:05plus O of 1.
01:05:07Rest of the work remains constant.
01:05:09If I have to find the time complexity of this function,
01:05:11I am calling a function of n-1
01:05:13and a function of n-2.
01:05:15Rest of the things are constant
01:05:17which I have already compared in O of 1.
01:05:19This is our recurrence
01:05:21relation
01:05:23for time complexity
01:05:25for Fibonacci.
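In symbols, the recurrence and its known growth rate are:

```latex
T(n) = T(n-1) + T(n-2) + O(1)
\quad\Longrightarrow\quad
T(n) = \Theta(\varphi^{n}),\;\; \varphi = \tfrac{1+\sqrt{5}}{2} \approx 1.618,
```

which we loosely upper-bound by 2 to the power n.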
01:05:27I am giving you the solution of this recurrence relation
01:05:29as a homework problem.
01:05:31The solution of this will come out to be around 2 to the power n,
01:05:33matching our overall time complexity.
01:05:35This is how we calculate
01:05:37the time and space complexity
01:05:39in recursion.
01:05:41Let's see another good example
01:05:43which is called Merge Sort Algorithm.
01:05:45My assumption is that
01:05:47you must have already done Merge Sort.
01:05:49If not, we will do it in DSA series.
01:05:51You can watch
01:05:53this explanation after doing it.
01:05:55For Merge Sort, this is our
01:05:57overall recursive function.
01:05:59In this recursive function, Merge Sort is called
01:06:01along with Merge function
01:06:03which is called in every call.
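Since the slide code isn't visible in the transcript, here is a common C++ form matching the description — a merge step with a temporary vector and four loops, and a recursive merge sort around it:

```cpp
#include <vector>
using namespace std;

// Merge step: combines two sorted halves arr[low..mid] and arr[mid+1..high]
// via a temporary vector -> O(n) time and O(n) extra space.
void merge(vector<int>& arr, int low, int mid, int high) {
    vector<int> temp;
    int i = low, j = mid + 1;
    while (i <= mid && j <= high) {             // loop 1: pick the smaller element
        if (arr[i] <= arr[j]) temp.push_back(arr[i++]);
        else                  temp.push_back(arr[j++]);
    }
    while (i <= mid)  temp.push_back(arr[i++]); // loop 2: leftovers of left half
    while (j <= high) temp.push_back(arr[j++]); // loop 3: leftovers of right half
    for (int k = 0; k < (int)temp.size(); k++)  // loop 4: copy back
        arr[low + k] = temp[k];
}

// Merge sort: split in half, sort both halves recursively, merge on the way back.
void mergeSort(vector<int>& arr, int low, int high) {
    if (low >= high) return;         // base case: 0 or 1 element
    int mid = low + (high - low) / 2;
    mergeSort(arr, low, mid);        // sort left half
    mergeSort(arr, mid + 1, high);   // sort right half
    merge(arr, low, mid, high);      // O(n) merge at this level
}
```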
01:06:05First of all,
01:06:07let's focus on time complexity.
01:06:09Time complexity is total number of calls
01:06:13multiplied by
01:06:15work done in each call.
01:06:19This is the value of time complexity.
01:06:21Now, what is the work done in each call?
01:06:23Let's see that first.
01:06:25This is our
01:06:27constant work.
01:06:29These two calls are also
01:06:31constant work.
01:06:33But we don't know how much time
01:06:35this Merge function takes.
01:06:37First of all,
01:06:39let's find out time and space complexity
01:06:41of Merge function.
01:06:43If we know about Merge function,
01:06:45then we will know about every call.
01:06:47Because Merge function is used in
01:06:49every Merge Sort call.
01:06:51First of all, let's find out
01:06:53the time and space complexity
01:06:55of the merge step.
01:06:57In merge sort,
01:06:59this is our merge step.
01:07:01This is not a recursive function.
01:07:03This is a normal function.
01:07:05In this, we have a vector.
01:07:07Let's talk about time complexity.
01:07:09This is a constant work.
01:07:11This is a while loop.
01:07:13This is a while loop.
01:07:15This is a constant work.
01:07:17This is a for loop.
01:07:19This is a constant work.
01:07:21This is a loop.
01:07:23This is a third loop.
01:07:25This is a fourth loop.
01:07:27If we add all the time complexities,
01:07:29then we will get the overall time complexity.
01:07:31What does this merge step
01:07:33do?
01:07:35Logically, it takes two arrays.
01:07:37Let's suppose one array is 1 and 5
01:07:39and second array is 3 and 6.
01:07:41In this way, it will take array 1 and 2.
01:07:43Its work is to merge both of them.
01:07:45I hope you already know this theory.
01:07:47It merges both of them
01:07:49in a sorted manner.
01:07:51First, it will take 1, then 3,
01:07:53then 5, then 6.
01:07:55In this way, it merges them in a sorted manner.
01:07:57The first while loop
01:07:59in Merge Sort
01:08:01runs until one of the two
01:08:03arrays is completely traversed.
01:08:05The remaining 2 and 3 loops
01:08:07run when any remaining array is left.
01:08:09For example, 1, 3 and 5
01:08:11are all copied here.
01:08:13Only 6 is left.
01:08:15For this, a separate loop will run for 6.
01:08:17What does this merge step do?
01:08:19It merges all three loops
01:08:21and works to traverse
01:08:23both the arrays in merge step.
01:08:25I know that in merge step,
01:08:27if the size of this array is half
01:08:29and the size of this array is half,
01:08:31then the size of both the arrays is n in worst case.
01:08:33Overall, if the value of n was 4,
01:08:35then in merge step,
01:08:37one array will be of 4x2 size
01:08:39and the other one will also be of 4x2 size.
01:08:41Both of them will be merged and
01:08:43the array of 4 size will be formed again.
01:08:45You will understand these things only
01:08:47if the logic of merge sort is already clear.
01:08:49So, all three loops together run
01:08:51big O of n times.
01:08:53The last loop,
01:08:55because we are copying values
01:08:57in our final array,
01:08:59the size of temporary is equal to n
01:09:01and the size of final array is also equal to n.
01:09:03The fourth step
01:09:05also runs big O of n times.
01:09:07So, the time complexity of merge
01:09:09is big O of n plus n
01:09:11which is equal to big O of 2n.
01:09:13Since we ignore the constants,
01:09:15the overall merge step time complexity
01:09:17is equal to big O of n.
01:09:19That is why the time complexity of merge
01:09:21step, not merge sort,
01:09:23merge step is big O of n.
01:09:25Now, let's look at space complexity.
01:09:27For space complexity,
01:09:29we have to look at auxiliary values.
01:09:31One is input and the other is auxiliary values.
01:09:33What is input?
01:09:35Input is the array.
01:09:37I don't have to count the array.
01:09:39We have also made a vector
01:09:41and we are pushing back the values
01:09:43in this vector.
01:09:45So, can we say that
01:09:47we have to store n elements.
01:09:49To store n elements,
01:09:51we take temporary memory of n size
01:09:53in merge step.
01:09:55Merge sort takes two arrays
01:09:57and to merge these two arrays,
01:09:59it takes a temporary array.
01:10:01This temporary array is
01:10:03our auxiliary space
01:10:05which we count.
01:10:07If the size of temporary array is equal to n,
01:10:09the space complexity also becomes
01:10:11equal to big O of n.
01:10:13So, the time and space complexity
01:10:15of merge step is equal to big O of n.
01:10:17Now, let's calculate the time and space complexity
01:10:19of merge sort.
01:10:21We have already calculated the merge step.
01:10:23Big O of n is time complexity
01:10:25and big O of n is space complexity.
01:10:27How does merge sort work?
01:10:29Let's take an example of an array of 4 sizes.
01:10:31Merge sort divides an array of 4 sizes
01:10:33into two parts.
01:10:35It divides into two calls,
01:10:37each of size two.
01:10:39Each array of size two
01:10:41will again divide
01:10:43into two calls of size one.
01:10:45While backtracking,
01:10:47the merge step will be performed
01:10:49at every level,
01:10:51on each call,
01:10:53as the recursion returns.
01:10:55Generally, this is how
01:10:57merge sort works.
01:10:59Our recursive tree will be of this type.
01:11:01First of all,
01:11:03if we call f of 4,
01:11:05then this call will be f of 2,
01:11:07then this call will be f of 1,
01:11:09then this call will be f of 1,
01:11:11then this call will be f of 1,
01:11:13and finally this call will be f of 1.
01:11:15The inner value is our array size.
01:11:17So, can we say that
01:11:19on the first level, 2 to the power of 0 calls were called,
01:11:21on the second level, 2 to the power of 1 calls were called,
01:11:23on the third level, 2 to the power of 2 calls were called,
01:11:25or we can also see that
01:11:27on the first level, if the size of my array was 4,
01:11:29then on the next level,
01:11:31the size of the array was halved in every call.
01:11:33On the next call, the size of the array was halved.
01:11:35So, if instead of n is equal to 4,
01:11:37there was an arbitrary n,
01:11:39then what would we have in that case?
01:11:41Let us minimize this.
01:11:43Let us resize this.
01:11:45If there was an arbitrary n,
01:11:47then in that case, we would have had a call of f of n,
01:11:49f of n would have made a call of f of n by 2
01:11:51and another call of f of n by 2,
01:11:53then each of these would have called f of n by 4,
01:11:55so here f of n by 4 would have been called,
01:11:57similarly f of n by 4 here,
01:11:59and f of n by 4 for the remaining two.
01:12:01So, on the first level,
01:12:03if the size of my array is n,
01:12:05on the next level, n by 2 is happening,
01:12:07on the next level, n by 4 is happening,
01:12:09then for how long will this keep getting halved?
01:12:11Until the value becomes equal to 1.
01:12:13Every time, it is n divided by 2 to the power 0,
01:12:15n divided by 2 to the power 1,
01:12:17n divided by 2 to the power 2,
01:12:19similarly, n divided by 2 to the power k.
01:12:21So, if I want to find out
01:12:23the total number of levels,
01:12:25I can say that if I have x levels
01:12:27in this recursive tree,
01:12:29that is, total x levels,
01:12:31then what will be the value of x?
01:12:33Basically, calling it x instead of k, at the last level
01:12:35n divided by 2 to the power x
01:12:37is equal to 1.
01:12:39This gives me n is equal to 2 to the power x.
01:12:41On both sides, if we take log,
01:12:43log n is equal to x.
01:12:45So, can I say that I have total log n levels?
01:12:47So, one understanding we got
01:12:49is that
01:12:51the total number of levels
01:12:53in the recursive tree
01:12:55is equal to log n.
01:12:57Now, if we want to find the time complexity,
01:12:59in this way,
01:13:01if we see our recursive tree,
01:13:03then what is the formula to find the time complexity?
01:13:05Time complexity is equal to
01:13:07total calls
01:13:09multiplied by work done
01:13:11in each call.
01:13:15Now, in my recursive tree,
01:13:17if I have called fn,
01:13:19then how much work will be there in one call?
01:13:21This is my constant work, this is my constant work,
01:13:23but how much work will be there in merge step?
01:13:25It will be equal to big of n.
01:13:27So, in one call, I have big of n work.
01:13:29If I have called second call,
01:13:31then in this fn by 2 call,
01:13:33I will have n by 2 work,
01:13:35and in fn by 2 call, I will have n by 2 work.
01:13:37Similarly, the work done on these two calls
01:13:39will also be equal to big of n.
01:13:41Similarly, in next level,
01:13:43in this call, I will have n by 4 work,
01:13:45in this call, I will have n by 4, in this call, I will have n by 4,
01:13:47in this call, I will have n by 4,
01:13:49and in all four calls, I will have n by 4 into 4,
01:13:51which is equal to big of n.
01:13:53So, in my recursive tree,
01:13:55if big O of n work is happening on every level,
01:13:57and I have total log n levels,
01:13:59then my total work is n repeated
01:14:01log n times.
01:14:03So, my total time complexity
01:14:05will be big of n
01:14:07multiplied by log n,
01:14:09and this is the time complexity
01:14:11of the merge sort algorithm.
01:14:13So, in this way, the time complexity of merge sort
01:14:15is equal to n log n.
01:14:17So, this is our time complexity.
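The same conclusion in recurrence form, with c·n the merge work per call:

```latex
T(n) = 2\,T\!\left(\tfrac{n}{2}\right) + c\,n
\quad\Longrightarrow\quad
T(n) = O(n \log n),
```

since O(n) work is done on each of the log n levels.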
01:14:19How will we find out space complexity?
01:14:21We know that for our space complexity,
01:14:23we have to find the total depth
01:14:25of our recursion tree.
01:14:27The depth of our recursion tree is equal to log n.
01:14:29Now, within each merge sort call itself,
01:14:31only constant space is used,
01:14:33and no extra space
01:14:35is occupied in it.
01:14:37But in the merge step,
01:14:39space is occupied.
01:14:41The space complexity of merge sort
01:14:43is equal to big of n,
01:14:45because we are using our temporary vector.
01:14:47Now, about this temporary vector:
01:14:49when our merge sort algorithm
01:14:51takes space in the stack,
01:14:53one call will go for n,
01:14:55then the next call will go for n by 2,
01:14:57then the next for n by 4,
01:14:59then the next for n by 8.
01:15:01So, we will occupy log n levels
01:15:03in the recursive call stack.
01:15:05But, in every call,
01:15:07we are calling a separate merge step
01:15:09and making a temporary array of n size.
01:15:11So, this space is occupied by n.
01:15:13But, it is not like
01:15:15this n-size vector stays in the memory,
01:15:17we make a new vector at the next level,
01:15:19then another new vector at the level after that,
01:15:21and all this space keeps accumulating.
01:15:23It is not like that.
01:15:25As soon as we make this vector here
01:15:27and as soon as our function ends,
01:15:29it gets deleted from the memory.
01:15:31So, in this case, we will say
01:15:33that the space complexity is log n plus n.
01:15:35Why plus n?
01:15:37Because this n space is not like
01:15:39that when this stack will run,
01:15:41every level will occupy n space
01:15:43and will not release it.
01:15:45Every level is occupying n space
01:15:47but leaves it and occupies the next level.
01:15:49So, at one time,
01:15:51big O of n is occupying the space.
01:15:53So, the overall space complexity
01:15:55is big O of log n plus n
01:15:57because of recursive stack
01:15:59and merge step of n.
01:16:01But, because we ignore
01:16:03smaller terms in big O,
01:16:05and log n is the smaller term between log n and n,
01:16:07that is why the space complexity
01:16:09comes out to be big O of n
01:16:11for the merge sort algorithm.
01:16:13So, this is our space complexity
01:16:15and this is our time complexity
01:16:17for overall merge sort algorithm.
01:16:19If we have understood this code
01:16:21and this calculation,
01:16:23it means that we have understood
01:16:25merge sort algorithm very well
01:16:27because it is generally difficult
01:16:29to understand merge sort,
01:16:31especially if we are studying
01:16:33time complexity or these
01:16:35programming concepts for the first time.
01:16:37But, if you have successfully
01:16:39understood it, you should give
01:16:41a pat on your back because
01:16:43you have covered a very good
01:16:45milestone. Now, the last topic
01:16:47which we are going to cover is
01:16:49regarding the practical usage
01:16:51of time complexity.
01:16:53Along with questions, we have
01:16:55some constraints.
01:16:57For example, if I have an array
01:16:59named nums,
01:17:01I have a size constraint
01:17:03that the maximum length of nums
01:17:05is equal to 10 to the power 5.
01:17:07What is the meaning of writing
01:17:09these constraints?
01:17:11Generally, if we go to any coding platform
01:17:13and write any code in it,
01:17:15then on that coding platform,
01:17:17in one second,
01:17:19for any program,
01:17:21about 10 to the power 8 operations
01:17:23can be run.
01:17:25It means, in one second,
01:17:27on that platform, approximately
01:17:2910 to the power 8 operations run; it can be a little more
01:17:31or a little less,
01:17:33but on average, 10 to the power 8
01:17:35operations take one second
01:17:37to execute. And for every question,
01:17:39the execution limit we have
01:17:41is of one second.
01:17:43So, your solution should take at most
01:17:4510 to the power 8 operations, i.e.
01:17:47one second, at max.
01:17:49If our algorithm or code
01:17:51is taking more time than 10 to the power 8 operations,
01:17:53then we have a common error
01:17:55which is called TLE, time limit exceeded.
01:17:57Generally, with TLE our solution is not accepted.
01:17:59If TLE comes, then we have to
01:18:01optimize our algorithm to be faster.
01:18:03So, what is the logic of 10 to the power 8 operations?
01:18:05Basically,
01:18:07if we are given a given input size,
01:18:09like in this question, I have been given
01:18:11that my nums.length
01:18:13i.e. my n is
01:18:15less than or equal to 10 to the power 5.
01:18:17So, to solve
01:18:19this entire question,
01:18:21can I write an algorithm of n square?
01:18:23Can I apply a nested loop?
01:18:25Let's examine that.
01:18:27If I write a code of
01:18:29big of n square for this given constraint,
01:18:31then will TLE come or not?
01:18:33How do we analyze that?
01:18:35Basically, this n,
01:18:37I will replace it with the maximum value of n.
01:18:39Maximum value of n will be
01:18:4110 to the power 5. And if I want to take
01:18:43its square, then its square will be
01:18:4510 to the power 10.
01:18:4710 to the power 10
01:18:49is greater than 10 to the power 8.
01:18:51And I have a limit that
01:18:5310 to the power 8 operations should not be more.
01:18:55But if my code is of n square,
01:18:57then 10 to the power 8 operations will be more.
01:18:59i.e. TLE will come here
01:19:01and this code will not be submitted.
01:19:03So, what I have to do now?
01:19:05I have to optimize this algorithm.
01:19:07While optimizing, I will see
01:19:09whether my code of n log n will be submitted.
01:19:11Because what is the value
01:19:13of n log n?
01:19:15If n log n is not there, then I will try big of n.
01:19:17If big of n is not there, then I will try big of log n.
01:19:19If that is also not there, then I will try constant.
01:19:21Will the code of n log n be submitted?
01:19:23Let's see for that.
01:19:25For n log n, if we find the value,
01:19:27then if we put 10 to the power 5 in n log n,
01:19:29it will be 10 to the power 5
01:19:31multiplied by log of 10 to the power 5.
01:19:33This is
01:19:3510 to the power 5 into
01:19:375 into log of 10.
01:19:39And the value of log of 10
01:19:41with base 2 is roughly 3.3,
01:19:43since log base 10 of 2 is about 0.3.
01:19:45So we get 10 to the power 5 multiplied by 5
01:19:47multiplied by 3.3, which is around 1.7 into 10 to the power 6.
01:19:49This value is much less than
01:19:5110 to the power 8.
01:19:53I am performing fewer operations
01:19:55than my allowed limit.
01:19:57This means that the code of n log n
01:19:59will be submitted; TLE will not come.
01:20:01So, my final algorithm
01:20:03should be of n log n or
01:20:05it should be better than that.
01:20:07It will have log n, n and big of 1.
01:20:09But it cannot be bigger than that.
01:20:11If n square does not work,
01:20:13then n cube will not work, and nothing bigger than that will either.
01:20:15So n log n is the maximum time complexity
01:20:17that will be accepted in this particular question.
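As an illustration only (the actual question isn't named in the lecture), here is how the two complexity classes might look for a simple duplicate check on nums with n up to 10^5:

```cpp
#include <vector>
#include <algorithm>
using namespace std;

// Nested-loop check: ~n^2 = 10^10 operations at n = 1e5 -> TLE.
bool hasDuplicateSlow(const vector<int>& nums) {
    for (int i = 0; i < (int)nums.size(); i++)
        for (int j = i + 1; j < (int)nums.size(); j++)
            if (nums[i] == nums[j]) return true;
    return false;
}

// Sort-based check: O(n log n) ~ 1.7 * 10^6 operations -> accepted.
// (Takes a copy by value so the caller's array is left unsorted.)
bool hasDuplicateFast(vector<int> nums) {
    sort(nums.begin(), nums.end());
    for (int i = 1; i < (int)nums.size(); i++)
        if (nums[i] == nums[i - 1]) return true;
    return false;
}
```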
01:20:19In fact,
01:20:21it has also been given here that
01:20:23sorting will solve this question.
01:20:25Another optimization has been given that
01:20:27can you solve it without sorting?
01:20:29So, we have to write our algorithm
01:20:31by optimizing it a little more.
01:20:33Basically, the constraints with every question
01:20:35help us decide
01:20:37which algorithm can solve this question
01:20:39and which algorithm cannot.
01:20:41Generally, the constraint values
01:20:43have some particular values
01:20:45which are given.
01:20:47If we are given n greater than
01:20:4910 to the power 8,
01:20:51then for this constraint,
01:20:53either the code of big of log n is submitted
01:20:55or the code of big of 1 is submitted.
01:20:57If I write
01:20:59that I have to do a linear search
01:21:01and I have to write a linear time complexity
01:21:03algorithm for this constraint,
01:21:05is it possible?
01:21:07Here, n is greater than 10 to the power 8.
01:21:09It can also be 10 to the power 9.
01:21:11If I write big of 10 to the power 9
01:21:13instead of n,
01:21:15then 10 to the power 9 will be greater than 10 to the power 8.
01:21:17It means that my code will not be submitted
01:21:19but TLE will be given.
01:21:21Therefore, big of n is not allowed
01:21:23but big of log n and big of 1
01:21:25are allowed.
01:21:27For this constraint, we have this
01:21:29expected time complexity.
01:21:31When we have less than or equal to 10 to the power 8,
01:21:33the solution of big of n works.
01:21:35The solution of big of n
01:21:37works means that this solution
01:21:39will be accepted.
01:21:41But the solution worse than this
01:21:43will not be accepted.
01:21:45If you write n square,
01:21:47then we will get TLE in that solution.
01:21:49For n less than or equal to 10 to the power 6,
01:21:51the solution of n log n will be accepted.
01:21:53Generally, if we feel that
01:21:55the solution of n log n will be accepted,
01:21:57then we should think
01:21:59where n log n is seen.
01:22:01n log n is seen in sorting questions.
01:22:03So, it is possible that
01:22:05in this constraint,
01:22:07sorting is being applied.
01:22:09In this way, our mind should start working
01:22:11considering the time complexity.
01:22:13It is not necessary that sorting is being done.
01:22:15But it can be done because sorting is used in many questions.
01:22:17If n is less than or equal to 10 to the power 4,
01:22:19then n square will be accepted.
01:22:21If n is less than or equal to 500,
01:22:23then n cube will be accepted.
01:22:25If n is less than or equal to 25,
01:22:27then 2 to the power n will be accepted.
01:22:292 to the power n means
01:22:31that the value of n is already very small.
01:22:33So, it means that
01:22:35the time complexity of the algorithm
01:22:37is allowed to be quite bad.
01:22:39Hence, we can get a hint
01:22:41that there is a brute force approach
01:22:43of recursion in this question.
01:22:45So, this is also a big hint.
01:22:47Here, we got a hint
01:22:49that sorting is used in this.
01:22:51So, we will think of sorting.
01:22:53Here, we will think of recursion.
01:22:55Similarly, in the case of less than or equal to 12,
01:22:57the time complexity of Big O of n factorial
01:22:59is used in majority of cases.
01:23:01So, here also we will think of brute force recursion.
01:23:03So, by looking at the constraints,
01:23:05we can estimate
01:23:07the time complexity
01:23:09to which we can go.
01:23:11We can find out the limit
01:23:13that in the worst case,
01:23:15my algorithm will perform this much.
01:23:17From that limit, we can work toward the best approach.
01:23:19Because in the worst case,
01:23:21we get an idea that
01:23:23if only n log n will pass, then by attempting n square
01:23:25I am wasting a lot of time;
01:23:27I have to think of n log n directly.
01:23:29So, this is what the practical usage of time complexity looks like.
01:23:31So, I hope that till now,
01:23:33we have cleared a lot of logic
01:23:35of time complexity and space complexity.
01:23:37Today, we have covered a lot of things.
01:23:39Today, we have covered
01:23:41what is time complexity
01:23:43and space complexity exactly.
01:23:45After that, we have discussed
01:23:47common time complexities
01:23:49in which Big O of 1 comes,
01:23:51log n comes,
01:23:53n log n comes,
01:23:55n comes, n square comes,
01:23:57n cube comes, 2 to the power n comes,
01:23:59n factorial comes.
01:24:01Apart from this, there are many other time complexities.
01:24:03But generally, this is what we see
01:24:05in programming, in DSA, most of the times.
01:24:07Apart from that, we have seen
01:24:09a lot of graphs, how we can judge
01:24:11which time complexity is better
01:24:13and which is worse.
01:24:15We have also understood
01:24:17how to calculate time and space complexity
01:24:19of recursion.
01:24:21Along with that, we solved a lot of questions
01:24:23practically.
01:24:25If this algorithm comes, how will we find out time complexity of it?
01:24:27If another algorithm comes, how will we find out time complexity of it?
01:24:29And in fact,
01:24:31time and space complexity is such a chapter
01:24:33which we are not doing only today.
01:24:35In the future, in all the chapters of DSA,
01:24:37in all of the chapters, whichever algorithm we write,
01:24:39we are going to find out time complexity of all those algorithms.
01:24:41We are going to calculate space complexity
01:24:43of all the algorithms.
01:24:45So, it is not something theoretical
01:24:47which we have just read once and can now set aside.
01:24:49It is something which we are going to practice
01:24:51in every single class in the future.
01:24:53So, there is no need to worry
01:24:55that today we have read so many concepts
01:24:57fully packed. How will we remember them?
01:24:59These concepts of time and space complexity
01:25:01you will remember with practice.
01:25:03And we are going to practice that
01:25:05with every class of DSA.
01:25:07We have solved a lot of questions.
01:25:09We have understood practical usage
01:25:11of time and space complexity.
01:25:13How does it exactly work?
01:25:15Where do we apply it when we are solving problems
01:25:17and how do we apply it practically?
01:25:19So, this was all about time
01:25:21and space complexity and I hope that you enjoyed
01:25:23this lecture a lot and you learnt
01:25:25something new from this lecture.
01:25:27If we have successfully completed this lecture
01:25:29and we have learnt something new from this lecture,
01:25:31then you can let me know about it in the comments
01:25:33or on Twitter. You can tag me on Twitter
01:25:35and you can let me know how did you like this lecture.
01:25:37That's all for today. See you in the next video.
01:25:39Till then keep learning and keep exploring.