optimization

Notes: Gradient Descent, Newton-Raphson, Lagrange Multipliers

TL;DR: A quick “non-mathematical” introduction to the most basic forms of the gradient descent and Newton-Raphson methods for solving optimization problems involving functions of more than one variable. We also look at the Lagrange Multiplier method for solving optimization problems subject to constraints (and at what the resulting system of nonlinear equations looks like, e.g. what we could apply Newton-Raphson to).

Introduction

Optimization problems are everywhere in engineering and science. If you can model your problem so that you can write down a function to be minimized or maximized (the objective function) whose optimum gives the solution you want, you can often obtain a numerical solution even in cases where there is no analytical one (most real cases).
Read more
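
As a rough illustration of the kind of iterative update these notes cover, here is a minimal gradient descent sketch in Python for a function of two variables. The objective, its gradient, the step size, and the iteration count are all assumptions chosen for the example, not details taken from the post.

```python
# Minimal gradient descent sketch for f(x, y) = (x - 1)^2 + 2*(y + 3)^2.
# The objective, step size, and stopping rule are illustrative choices.

def grad_f(x, y):
    # Analytical gradient of the example objective.
    return 2.0 * (x - 1.0), 4.0 * (y + 3.0)

def gradient_descent(x, y, step=0.1, iters=200):
    # Repeatedly step opposite the gradient for a fixed iteration budget.
    for _ in range(iters):
        gx, gy = grad_f(x, y)
        x -= step * gx
        y -= step * gy
    return x, y

if __name__ == "__main__":
    print(gradient_descent(0.0, 0.0))  # converges toward the minimum at (1, -3)
```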

Memoization in the Wild

Overview

Memoization (or memoisation) is a method used to optimize programs. Usually, at least in my experience, it’s one of the first topics introduced when dynamic programming algorithms are being discussed. With a quick Google search you can find the Wikipedia article or a trillion other blogs about it - most will show the canonical example - the “hello world” of the topic - that is, using memoization to optimize a recursive implementation of a function that generates the n-th Fibonacci number (or sometimes a function computing factorials).
Read more
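
Since the excerpt names the recursive Fibonacci function as the canonical example, here is a minimal memoization sketch in Python; the function names and the use of `functools.lru_cache` are illustrative choices, not necessarily what the post itself uses.

```python
from functools import lru_cache

def fib_naive(n):
    # Plain recursion: recomputes the same subproblems exponentially many times.
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    # Same recursion, but each result is cached after the first computation,
    # so every subproblem is solved exactly once.
    return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)

if __name__ == "__main__":
    print(fib_memo(90))  # fast; the naive version would be impractically slow here
```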