Jump Game II (Leetcode 45)

Difficulty: Medium | Link: Day 5: May Leetcode Challenge

Problem Description

Given an array of non-negative integers nums, you are initially positioned at the first index of the array. Each element in the array represents your maximum jump length at that position.

Your goal is to reach the last index in the minimum number of jumps. You can assume that you can always reach the last index.

Examples

Input: nums = [2,3,1,1,4]
Output: 2
Explanation: The minimum number of jumps to reach the last index is 2. Jump 1 step from index 0 to 1, then 3 steps to the last index.
Input: nums = [2,3,0,1,4]
Output: 2

Constraints:

  • 1 <= nums.length <= 1000
  • 0 <= nums[i] <= 10^5

Solution

With a few examples, you can observe that the problem breaks down into recursive subproblems, much like the Frog Jump problem. This reveals a pattern: if we compute the minimum number of jumps for every index over all of its possible jump choices, then once we reach the last index we have the global solution.

Dynamic Programming

Since we are able to identify the recursive subproblems, we can build a jumps array from right to left such that jumps[i] stores the minimum number of jumps needed to reach the last index from index i. Then jumps[0] holds the final answer: the minimum number of jumps needed to traverse to the end index. But if you think about the complexity, it would take at least O(N) space for the jumps array and O(N^2) time, because from every index we would have to consider every jump length in the range [1…nums[i]] and keep the minimum for the ith index in the jumps array. We could move forward with this approach, but there is another, more efficient way.
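For reference, the right-to-left DP described above could be sketched as follows (a minimal sketch; `jump_dp` is a hypothetical helper name, not part of the Leetcode submission):

```python
def jump_dp(nums):
    """O(N^2)-time, O(N)-space DP: jumps[i] = min jumps from index i to the end."""
    n = len(nums)
    INF = float("inf")
    jumps = [INF] * n
    jumps[n - 1] = 0  # already standing on the last index
    for i in range(n - 2, -1, -1):
        # try every jump length from 1 to nums[i]
        for step in range(1, nums[i] + 1):
            if i + step < n and jumps[i + step] != INF:
                jumps[i] = min(jumps[i], jumps[i + step] + 1)
    return jumps[0]

print(jump_dp([2, 3, 1, 1, 4]))  # 2
```

The inner loop over all jump lengths is exactly what makes this O(N^2), which is why the greedy approach below is preferable.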

If you are unfamiliar with the dynamic programming approach, check GeeksForGeeks article.

Greedy Approach

What do we want to find out? The minimum number of jumps to reach the last index. To use the minimum number of jumps, we need to maximise the distance covered by each jump. Whenever we want to maximise/minimise a quantity in order to reach a global solution, the problem is a good candidate for a greedy approach. We can divide the array into windows of indices that are all reachable with the same minimum number of jumps. This results in a sequence of sliding windows; we iterate over each window and find the farthest index reachable from it.

After completing each window, we compute the indexes for the next one. The start (left) index is the index just past the preceding window, and the end (right) index is the farthest index that was reachable from the previous window.

Each time we move to a new window, we also increment the number of jumps, which becomes our answer. You can check out more examples on GeeksForGeeks for building intuition.
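To see the windows concretely, here is a small trace script (a hypothetical helper, `jump_windows`, not part of the submission) that applies the sliding-window idea to the first example:

```python
def jump_windows(nums):
    """Print each window [left, right] and return the jump count."""
    left = right = jumps = 0
    while right < len(nums) - 1:
        # farthest index reachable from anywhere in the current window
        farthest = max(i + nums[i] for i in range(left, right + 1))
        jumps += 1
        print(f"jump {jumps}: window [{left}, {right}] -> farthest index {farthest}")
        left, right = right + 1, farthest
    return jumps

jump_windows([2, 3, 1, 1, 4])
# jump 1: window [0, 0] -> farthest index 2
# jump 2: window [1, 2] -> farthest index 4
```

For [2,3,1,1,4] the second window already reaches index 4, so two jumps suffice, matching the expected output.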

Code

Swift

class Solution {
    func jump(_ nums: [Int]) -> Int {
        var left = 0, right = 0, jumps = 0
        while right < nums.count - 1 {
            var farthest = 0
            // farthest index reachable from the current window
            for i in left...right {
                farthest = max(farthest, i + nums[i])
            }
            jumps += 1
            left = right
            right = farthest
        }
        return jumps
    }
}

Python

from typing import List

class Solution:
    def jump(self, nums: List[int]) -> int:
        n = len(nums)
        steps = 0
        farthest = 0
        currentJumpEnd = 0
        for i in range(n - 1):
            farthest = max(farthest, i + nums[i])
            if i == currentJumpEnd:
                steps += 1
                currentJumpEnd = farthest  # Jump as far as possible
        return steps

Java

class Solution {
    public int jump(int[] nums) {
        int jumps = 0, end = 0, maxFar = nums[0];
        for (int index = 0; index + 1 < nums.length; ++index) {
            maxFar = Math.max(index + nums[index], maxFar);
            if (index == end) {
                ++jumps;
                end = maxFar;
            }
        }
        return jumps;
    }
}

Complexity Analysis

The greedy approach is effectively a breadth-first search over the array: every element is visited once, for a total of N operations. We only use a few variables to track the window, the total jumps, and the farthest reachable index, which is independent of the input size.

Time = O(N) // Greedy approach
       O(N^2) // Dynamic programming

Space = O(1) // Greedy approach
        O(N) // Dynamic programming

Bonus Tip

We can see above that dynamic programming does not always lead to the most efficient solution; here we have focused on the greedy approach for its simplicity and efficiency. Whenever you come across a problem where you need to minimise/maximise a quantity, dynamic programming can be a starting point, but a greedy approach might be the optimal one.
