
One Pager Cheat Sheet

  • Dynamic Programming (DP) with Memoization is a technique for solving problems with overlapping sub-problems, reducing an exponential time solution to a polynomial time solution at the cost of additional memory to store intermediate results.
  • Fibonacci Numbers are defined and implemented with a recursive design pattern, wherein each number is the sum of the two preceding numbers in the sequence.
  • By examining the recursion tree for computing fibonacci(5), we can deduce an exponential O(2^n) complexity, with potential for speedups by using stored intermediate results.
  • We can reduce the time complexity of our Fibonacci computation from O(2^n) to O(n) using a simple memoization technique with only two extra memory slots (see the Fibonacci sketch after this list).
  • Dynamic Programming (DP) is a powerful algorithmic technique for efficiently solving problems with optimal substructure and overlapping sub-problems through memoization, reducing the complexity from exponential (O(2^n)) to linear (O(n)).
  • Finding the path with the highest reward through an m * n grid using Backtracking requires enumerating all possible paths and selecting the one with the maximum reward, resulting in an exponential time complexity of O(2^(m+n)).
  • By combining Dynamic Programming with memoization of accumulated rewards stored in the reward matrix, we can find the optimal path from start to goal that collects the maximum reward.
  • The pseudo-code finds the maximum reward when moving up or right on an m x n grid, from (0, 0) to (m-1, n-1) (see the grid DP sketch after this list).
  • The optimal path to a maximum reward can be found by filling an additional matrix d which stores the predecessor for each cell (r, c) when finding the maximum path.
  • We can use the direction matrix to backtrack from the goal cell to the start cell and pop the elements off a stack to get the final path.
  • Using Dynamic Programming and Memoization, we can drastically reduce the time complexity from O(2^(m+n)) to O(m * n), at the cost of O(m * n) additional space.
  • Memoization trades additional space complexity for a drastic reduction in time complexity, yielding a significantly faster algorithm.
  • The problem parameters given allow us to use dynamic programming and memoization to solve the weighted interval scheduling problem in order to maximize the total weight of the non-overlapping intervals.
  • The maximum weight achievable up to interval j can be computed recursively by either including interval j (adding its weight to the optimum over its latest compatible predecessor) or excluding it (taking the optimum over intervals j-1 and lower); a scheduling sketch follows after this list.
  • By using Memoization, we can drastically reduce time complexity and optimize the code efficiency of Dynamic Programming for Scheduling.
  • We can retrieve the interval set using the predecessor array p and the Boolean array X.
  • The time complexity of the Memoized version of Weighted Interval Scheduling is O(n), assuming the intervals are already sorted according to their finish times.
  • The Longest Increasing Subsequence (LIS) problem solved with dynamic programming and memoization has a time complexity of O(n^2), since for each of the n positions we run an O(n) inner loop over earlier positions (see the LIS sketch after this list).
  • The space complexity of solving the Longest Increasing Subsequence (LIS) problem using dynamic programming with memoization is O(n).
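
A minimal Python sketch of the two-slot Fibonacci memoization described above, assuming the 0-indexed convention fibonacci(0) = 0 and fibonacci(1) = 1:

```python
def fibonacci(n):
    """Compute the n-th Fibonacci number in O(n) time,
    storing only the two most recent intermediate results."""
    if n < 2:
        return n
    prev, curr = 0, 1          # fibonacci(0), fibonacci(1)
    for _ in range(2, n + 1):
        prev, curr = curr, prev + curr
    return curr

print(fibonacci(5))   # 5
print(fibonacci(10))  # 55
```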
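A sketch of the grid DP, assuming the rewards come as a list of lists and that "up or right" means moving from (0, 0) toward (m-1, n-1) by increasing row or column index; the best and pred matrices here are illustrative stand-ins for the lesson's reward and direction matrices:

```python
def max_reward_path(reward):
    """Fill a DP table of best accumulated rewards for an m x n grid,
    then recover the path by backtracking through a predecessor matrix."""
    m, n = len(reward), len(reward[0])
    best = [[0] * n for _ in range(m)]
    pred = [[None] * n for _ in range(m)]   # predecessor of each cell

    for r in range(m):
        for c in range(n):
            best[r][c] = reward[r][c]
            came_from = None
            if r > 0 or c > 0:
                candidates = []
                if r > 0:
                    candidates.append((best[r - 1][c], (r - 1, c)))
                if c > 0:
                    candidates.append((best[r][c - 1], (r, c - 1)))
                value, came_from = max(candidates)
                best[r][c] += value
            pred[r][c] = came_from

    # Backtrack from the goal cell to the start cell, then reverse.
    path, cell = [], (m - 1, n - 1)
    while cell is not None:
        path.append(cell)
        cell = pred[cell[0]][cell[1]]
    path.reverse()
    return best[m - 1][n - 1], path

grid = [[1, 2, 3],
        [4, 5, 6],
        [7, 8, 9]]
print(max_reward_path(grid))  # (29, [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)])
```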
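A sketch of the memoized weighted interval scheduling recursion, assuming intervals arrive as (start, finish, weight) tuples already sorted by finish time; the names p, memo, and X mirror the predecessor and Boolean arrays mentioned above but are otherwise illustrative. Note that building p with binary search adds an O(n log n) preprocessing step on top of the O(n) memoized recursion:

```python
from bisect import bisect_right

def weighted_interval_scheduling(intervals):
    """Maximize total weight of non-overlapping intervals.
    intervals: list of (start, finish, weight), sorted by finish time."""
    n = len(intervals)
    finishes = [f for _, f, _ in intervals]
    # p[j]: index of the rightmost interval finishing no later than
    # interval j's start, or -1 if no such interval exists.
    p = [bisect_right(finishes, intervals[j][0]) - 1 for j in range(n)]

    memo = {}

    def opt(j):
        """Best total weight using only intervals 0..j."""
        if j < 0:
            return 0
        if j not in memo:
            include = intervals[j][2] + opt(p[j])   # take interval j
            exclude = opt(j - 1)                    # skip interval j
            memo[j] = max(include, exclude)
        return memo[j]

    best = opt(n - 1)

    # Trace back to mark the selected intervals (Boolean array X).
    X = [False] * n
    j = n - 1
    while j >= 0:
        if intervals[j][2] + opt(p[j]) >= opt(j - 1):
            X[j] = True
            j = p[j]
        else:
            j -= 1
    return best, [i for i in range(n) if X[i]]

jobs = [(0, 3, 5), (1, 4, 1), (3, 5, 8), (4, 7, 4)]
print(weighted_interval_scheduling(jobs))  # (13, [0, 2])
```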
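A sketch of the O(n^2)-time, O(n)-space LIS dynamic program, assuming a plain list of integers as input:

```python
def longest_increasing_subsequence(nums):
    """lis[i] = length of the longest increasing subsequence ending at i.
    Two nested loops give O(n^2) time; the memo table uses O(n) space."""
    if not nums:
        return 0
    lis = [1] * len(nums)
    for i in range(1, len(nums)):
        for j in range(i):
            if nums[j] < nums[i]:
                lis[i] = max(lis[i], lis[j] + 1)
    return max(lis)

print(longest_increasing_subsequence([10, 9, 2, 5, 3, 7, 101, 18]))  # 4
```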