## Haskell: How to memoize this algorithm?

memoization without recursion

I wrote this solution to the coin change problem on HackerRank:

```haskell
import Data.List (sort)

makeChange :: Int -> [Int] -> Int
makeChange n ys = go n (sort ys)
  where
    go _ [] = 0
    go n (x:xs)
      | x == n    = 1
      | x > n     = 0
      | otherwise = makeChange n xs + makeChange (n - x) (x:xs)
```

However, it times out on some of the larger test cases. I saw this article on implementing memoization using `let` bindings, but it mostly went over my head and I'm not sure how I would implement that here. Any help?
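For context, the `let`-binding technique such articles describe usually amounts to routing the recursion through a lazily built list of results. A minimal sketch of the pattern on Fibonacci (an illustration, not the article's exact code):

```haskell
-- Memoization via a let-bound lazy list: recursive calls read earlier
-- results out of `table` instead of recomputing them.
fibMemo :: Int -> Integer
fibMemo n =
  let table = map f [0 ..]   -- table !! k == f k; within one top-level
                             -- call, each entry is computed at most once
      f 0 = 0
      f 1 = 1
      f k = table !! (k - 1) + table !! (k - 2)
  in table !! n
```

Without the `table` indirection, the naive `f k = f (k-1) + f (k-2)` takes exponential time; here the shared list makes each call linear in `n`.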

I rewrote it and got a substantial performance improvement, but I'm still timing out on the HackerRank exercise:

```haskell
makeChange' :: Int -> [Int] -> Int
makeChange' =
  let go _ [] = 0
      go n (x:xs)
        | x == n    = 1
        | x > n     = 0
        | otherwise = makeChange' n xs + makeChange' (n - x) (x:xs)
  in go

f (x:y:xs) = makeChange' x ys
  where ys = sort xs

main = interact $ show . f . map read . words
```

I moved the pre-sort into an intermediate function `f`, which I am also using to handle the input from HackerRank. They give you a string with the change target, the length of the change array, and the array of change units; I use `f` to drop the length from the input.
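To make that input handling concrete, assuming the format described (target, then a length field, then the coins, e.g. `"4 3 1 2 3"`), the parsing step can be sketched as a standalone helper (`parseInput` is a hypothetical name, not from the question):

```haskell
import Data.List (sort)

-- Hypothetical helper: split the input into the target and the sorted
-- coins, discarding the length field that HackerRank includes.
parseInput :: String -> (Int, [Int])
parseInput s = case map read (words s) of
  (target : _len : coins) -> (target, sort coins)
  _                       -> error "malformed input"
```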


I think this is best solved with an explicit 2D array. In effect, we give the result of each function call a location in this array, which lets us evaluate each call at most once. There's a tiny bit more boilerplate, because we need to check whether we'd index outside the array:

```haskell
import Data.Array
import Data.List (sort)

makeChange :: Int -> [Int] -> Int
makeChange n ys = arr ! (n, 1)
  where
    len = length xs
    xs  = sort ys
    arr = array ((1, 1), (n, len))
            [ ((i, j), go i j x) | i <- [1 .. n], (j, x) <- zip [1 ..] xs ]
    go n ix x
      | x == n    = 1
      | x > n     = 0
      | otherwise = (if ix + 1 <= len then arr ! (n, ix + 1) else 0)
                  + (if n - x > 0 then arr ! (n - x, ix) else 0)
```
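As a quick sanity check of the array version, here it is packaged as a self-contained module (same logic, with the imports it needs). For example, there are four ways to make 4 from `[1,2,3]`: 1+1+1+1, 1+1+2, 1+3, and 2+2.

```haskell
import Data.Array
import Data.List (sort)

-- Self-contained copy of the array-memoized solution above, for testing.
makeChange :: Int -> [Int] -> Int
makeChange n ys = arr ! (n, 1)
  where
    len = length xs
    xs  = sort ys
    -- arr ! (i, j) counts the ways to make i using coins from index j onward
    arr = array ((1, 1), (n, len))
            [ ((i, j), go i j x) | i <- [1 .. n], (j, x) <- zip [1 ..] xs ]
    go n ix x
      | x == n    = 1
      | x > n     = 0
      | otherwise = (if ix + 1 <= len then arr ! (n, ix + 1) else 0)
                  + (if n - x > 0 then arr ! (n - x, ix) else 0)
```

Loading this in GHCi, `makeChange 4 [1,2,3]` evaluates to 4 and `makeChange 3 [1,2,3]` to 3.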

**Data.Function.Memoize,** This includes a class for memoizable argument types and a Template Haskell An instance Memoizable T for some type T means that that memoize method It isn't very hard, and there aren't any fixed rules of how to "memoize" - you just need to design a good algorithm. Posts like this one that talk about general "techniques for memoization" in Haskell tend to push you into premature low-level optimizations due to the influence of the imperative technique.

I'll try to demonstrate how behzad.nouri's code works, in an effort to understand it myself:
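Since behzad.nouri's code isn't reproduced above, here is a sketch of the shape being traced, reconstructed from the walkthrough (the names `solve` and `addCoin` are my assumption):

```haskell
-- Reconstructed sketch: after folding in the coins, the k-th element of
-- the list counts the ways to make k, and `!! n` reads off the answer.
solve :: Int -> [Int] -> Int
solve n xs = foldr addCoin (1 : replicate n 0) xs !! n
  where
    -- For each coin x: positions below x are copied unchanged, and each
    -- position at x or above adds the count from x positions earlier.
    addCoin x a = let b = take x a ++ zipWith (+) b (drop x a) in b
```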

Keeping in mind Haskell's laziness, we observe that `!! n` means our final list will be evaluated up to its `(n+1)`th element. Our main block for each coin is:

```haskell
let b = (take x a) ++ zipWith (+) b (drop x a) in b
```

There's a little trick here that works because `b` is a list. Note that a similar kind of self-reference wouldn't work if `b` were an integer. Let's say our only coin was 2:

```
> let b = (take 2 [1,0,0,0,0]) ++ zipWith (+) b (drop 2 [1,0,0,0,0]) in b
=> [1,0,1,0,1]
```

(Which means one way to make zero, one way to make two, and one way to make four.) What happens is that `b` gets built up through its own self-reference, until the `zipWith` has consumed its entire second argument and the lengths match:

```
> zipWith (+) [] (drop 2 [1,0,0,0,0])
=> []       -- but we prepended [1,0], so the next `b` is [1,0]
> zipWith (+) [1,0] (drop 2 [1,0,0,0,0])
=> [1,0]    -- but we prepended [1,0], so the next `b` is [1,0,1,0]
> zipWith (+) [1,0,1,0] (drop 2 [1,0,0,0,0])
=> [1,0,1]  -- but we prepended [1,0], so the result matching the length is [1,0,1,0,1]
```
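As a side note (not part of the answer), this self-reference is exactly a knot-tied fixed point, so the same `b` can be written with `Data.Function.fix`:

```haskell
import Data.Function (fix)

-- The recursive `let b = ... in b` is the fixed point of one update step:
b :: [Int]
b = fix (\bs -> take 2 [1,0,0,0,0] ++ zipWith (+) bs (drop 2 [1,0,0,0,0]))
-- b evaluates to [1,0,1,0,1]
```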

Now let's use this knowledge to follow through `solve 4 [1,2,3]`:

```
let b = (take 3 [1,0,0,0,0]) ++ zipWith (+) b (drop 3 [1,0,0,0,0]) in b

take 3 [1,0,0,0,0]        -- n+1 elements are evaluated from the infinite list
=> [1,0,0] ++ [0,0]       -- drop 3 from [1,0,0,0,0]
(+) []      => []         -- prepend [1,0,0]
=> [1,0,0] ++ [0,0]
(+) [1,0,0] => [1,0]      -- prepend [1,0,0]
=> [1,0,0,1,0]
```

That's one way to make zero and one way to make three. Next:

```
let b = (take 2 [1,0,0,1,0]) ++ zipWith (+) b (drop 2 [1,0,0,1,0]) in b

take 2 [1,0,0,1,0]
=> [1,0] ++ [0,1,0]
(+) []        => []       -- prepend [1,0]
=> [1,0] ++ [0,1,0]
(+) [1,0]     => [1,1]    -- prepend [1,0]
=> [1,0,1,1] ++ [0,1,0]
(+) [1,0,1,1] => [1,1,1]  -- prepend [1,0]
=> [1,0,1,1,1]
```

That's one way to make 0, one way to make 2, one way to make 3, and one way to make 4. Next:

```
let b = (take 1 [1,0,1,1,1]) ++ zipWith (+) b (drop 1 [1,0,1,1,1]) in b
```

This is the one with the most iterations, since `b` is built up from just one element:

```
take 1 [1,0,1,1,1]
=> [1] ++ [0,1,1,1]
(+) []        => []         -- prepend [1]
=> [1] ++ [0,1,1,1]
(+) [1]       => [1]        -- prepend [1]
=> [1,1] ++ [0,1,1,1]
(+) [1,1]     => [1,2]      -- prepend [1]
=> [1,1,2] ++ [0,1,1,1]
(+) [1,1,2]   => [1,2,3]    -- prepend [1]
=> [1,1,2,3] ++ [0,1,1,1]
(+) [1,1,2,3] => [1,2,3,4]  -- prepend [1]
=> [1,1,2,3,4]
```

That's 1 way to make 0, 1 way to make 1, 2 ways to make 2, 3 ways to make 3, and 4 ways to make 4.


##### Comments

- Brilliant. And it seems significantly faster than the array version in Probie's answer.
- Wow, looks like an amazing solution. It's way beyond my skill level to understand at this point, but I'm trying to work through it.