You are given a 0-indexed integer array nums of length n, and two positive integers k and dist.
The cost of an array is the value of its first element. For example, the cost of [1,2,3] is 1, while the cost of [3,4,1] is 3.
You need to divide nums into exactly k disjoint contiguous subarrays, such that the difference between the starting index of the second subarray and the starting index of the k-th subarray is at most dist.
In other words, if you divide nums into subarrays nums[0..(i₁-1)], nums[i₁..(i₂-1)], ..., nums[iₖ₋₁..(n-1)], then iₖ₋₁ - i₁ ≤ dist.
Return the minimum possible sum of the costs of these subarrays.
Example: If nums = [1,3,2,6,4,2], k = 3, and dist = 3, one valid division is [1] + [3,2] + [6,4,2] with costs 1 + 3 + 6 = 10. The minimum, however, is achieved by [1,3] + [2,6,4] + [2] with costs 1 + 2 + 2 = 5 (here i₁ = 2 and i₂ = 5, so i₂ - i₁ = 3 ≤ dist).
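The example above can be checked exhaustively. The sketch below (not part of the original statement; the function name is my own) enumerates every choice of k-1 split points and keeps the cheapest division that satisfies the dist constraint:

```python
from itertools import combinations

def min_cost_bruteforce(nums, k, dist):
    # Subarray 0 always starts at index 0; the remaining k-1 subarrays
    # start at split points i1 < i2 < ... < i_{k-1}. The cost of a
    # division is nums[0] plus the value at each split point.
    n = len(nums)
    best = float('inf')
    for splits in combinations(range(1, n), k - 1):
        if splits[-1] - splits[0] <= dist:  # i_{k-1} - i1 <= dist
            best = min(best, nums[0] + sum(nums[i] for i in splits))
    return best

print(min_cost_bruteforce([1, 3, 2, 6, 4, 2], 3, 3))  # -> 5
```

This is exponential in k and only practical for tiny inputs, but it confirms the minimum of 5 for the example.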
Constraints
- 1 ≤ n ≤ 10⁵
- 1 ≤ nums[i] ≤ 10⁹
- 2 ≤ k ≤ n
- 1 ≤ dist ≤ n - 2
- The first subarray always starts at index 0
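One way to approach the problem: the constraint iₖ₋₁ - i₁ ≤ dist means all k-1 split points lie in a window of length dist + 1 anchored at i₁, so for each candidate i₁ we must take nums[i₁] plus the k-2 smallest values in nums[i₁+1 .. i₁+dist]. The sketch below (my own illustration, not a reference solution; it re-sorts the window each step rather than maintaining the optimal two-multiset structure, so it runs in O(n · dist · log dist)):

```python
def min_cost_sliding(nums, k, dist):
    # Fix i1, the start of the second subarray. The remaining k-2
    # split points must fall in (i1, i1 + dist] and each must be a
    # valid start index (at most n - 1); greedily pick the k-2
    # smallest values there.
    n = len(nums)
    best = float('inf')
    for i1 in range(1, n - k + 2):
        window = sorted(nums[i1 + 1 : min(i1 + dist, n - 1) + 1])
        if len(window) >= k - 2:
            best = min(best, nums[0] + nums[i1] + sum(window[: k - 2]))
    return best

print(min_cost_sliding([1, 3, 2, 6, 4, 2], 3, 3))  # -> 5
```

Replacing the per-step sort with two balanced multisets (one holding the k-2 smallest window elements, one holding the rest, rebalanced as the window slides) brings the complexity down to O(n log dist).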