Learn how to generalize the idea of a tangent plane into a linear approximation of scalar-valued multivariable functions.
Background
- The gradient
What we're building to
Local linearization generalizes the idea of tangent planes to any multivariable function. Here, I will just talk about the case of scalar-valued multivariable functions.
The idea is to approximate a function near one of its inputs with a simpler function that has the same value at that input, as well as the same partial derivative values.
Written with vectors, here's what the approximation function looks like:

$$L_f(\vec{x}) = f(\vec{x}_0) + \nabla f(\vec{x}_0) \cdot (\vec{x} - \vec{x}_0)$$

This is called the local linearization of $f$ near $\vec{x}_0$.
Tangent planes as approximations
In the previous article, I talked about finding the tangent plane to a two-variable function's graph.
The formula for the tangent plane ended up looking like this:

$$T(x, y) = f(x_0, y_0) + f_x(x_0, y_0)(x - x_0) + f_y(x_0, y_0)(y - y_0)$$

This function $T(x, y)$ is designed to satisfy two properties:
- It has the same value as $f$ at the point $(x_0, y_0)$.
- It has the same partial derivatives as $f$ at the point $(x_0, y_0)$.
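To make those two properties concrete, here is a minimal numerical sketch. The function $f(x, y) = x^2 + y^2$ and the point $(1, 2)$ are my own choices for illustration, not from the article:

```python
# Tangent plane to f at (x0, y0):
#   T(x, y) = f(x0, y0) + f_x(x0, y0)*(x - x0) + f_y(x0, y0)*(y - y0)

def f(x, y):
    return x**2 + y**2  # an arbitrary example surface

x0, y0 = 1.0, 2.0
fx, fy = 2 * x0, 2 * y0   # partial derivatives of f, worked out by hand

def T(x, y):
    return f(x0, y0) + fx * (x - x0) + fy * (y - y0)

# Same value at the point of tangency:
print(f(x0, y0), T(x0, y0))          # both 5.0

# Same partial derivative in x (checked with a small finite difference):
h = 1e-6
fx_num = (f(x0 + h, y0) - f(x0, y0)) / h
Tx_num = (T(x0 + h, y0) - T(x0, y0)) / h
print(abs(fx_num - Tx_num) < 1e-4)   # True
```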
As always in multivariable calculus, it is healthy to contemplate a new concept without relying on graphical intuition. That's not to say you shouldn't try to think visually; maybe instead think purely about the input space, or about the relevant transformation, rather than the graph.
Fundamentally, a local linearization approximates one function near a point based on the information you can get from its derivative(s) at that point.
In the case of functions with a two-variable input and a scalar (i.e. non-vector) output, this can be visualized as a tangent plane. However, with higher dimensions we don't have this visual luxury, so we are left to think about it just as an approximation.
In real-world applications of multivariable calculus, you almost never care about an actual plane in space. Instead, you might have some complicated function, like, oh, I don't know, air resistance on a parachute as a function of speed and orientation. Dealing with the actual function may be tricky or computationally expensive, so it's helpful to approximate it with something simpler, like a linear function.
What do I mean by "linear function"?
Consider a function with a multidimensional input:

$$f(x_1, x_2, \dots, x_n)$$

This function is called linear if, in its definition, all the coordinates are just multiplied by constants, with nothing else happening to them. For example, it might look like this:

$$f(x_1, x_2, \dots, x_n) = a_1 x_1 + a_2 x_2 + \cdots + a_n x_n$$

where $a_1, \dots, a_n$ are constants. The full story of linearity goes deeper (hence the existence of the field "Linear algebra"), but for now, this conception will do. Typically, instead of writing out all the variables like this, you would treat the input as a vector:

$$\vec{x} = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}$$

And you would define the function using a dot product with a constant vector $\vec{a}$:

$$f(\vec{x}) = \vec{a} \cdot \vec{x}$$

For the purposes of this article, and more generally when you talk about local linearization, you are allowed to add in a constant to this expression:

$$f(\vec{x}) = \vec{a} \cdot \vec{x} + c$$
If you wanted to be pedantic, this is no longer a linear function. It's what's called an "affine" function. But most people would say "whatever, it's basically linear".
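In code, such an affine ("basically linear") function is just a dot product plus a constant. A minimal sketch, where the particular coefficients are arbitrary choices of mine, just as in the text:

```python
# An affine function: f(x) = a . x + c

a = [2.0, -1.0, 3.0]   # arbitrary constant coefficients
c = 5.0                # arbitrary added constant

def f(x):
    # Dot product of the constant vector a with the input vector x, plus c.
    return sum(ai * xi for ai, xi in zip(a, x)) + c

print(f([1.0, 0.0, 0.0]))  # 2.0 * 1 + 5.0 = 7.0
```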
Local linearization
Now, suppose your function $f$ takes in a multidimensional input $\vec{x}$ and outputs a scalar.

The idea of a local linearization is to approximate this function near some particular input value, $\vec{x}_0$, with a function of the form

$$L_f(\vec{x}) = f(\vec{x}_0) + \nabla f(\vec{x}_0) \cdot (\vec{x} - \vec{x}_0)$$

- Notice, by plugging in $\vec{x} = \vec{x}_0$, you can see that both functions $f$ and $L_f$ will have the same value at the input $\vec{x}_0$.
- The vector dotted against the variable $\vec{x} - \vec{x}_0$ is the gradient of $f$ at the specified input, $\nabla f(\vec{x}_0)$. This ensures that both functions $f$ and $L_f$ will have the same gradient at the specified input. In other words, all their partial derivative information will be the same.
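Here is what that formula looks like as code, sketched with a finite-difference gradient so it works for any differentiable scalar-valued function. The example function at the bottom is my own, chosen only for illustration:

```python
import math

def grad(f, x0, h=1e-6):
    """Estimate the gradient of f at x0 with central differences."""
    g = []
    for i in range(len(x0)):
        xp = list(x0); xp[i] += h
        xm = list(x0); xm[i] -= h
        g.append((f(xp) - f(xm)) / (2 * h))
    return g

def local_linearization(f, x0):
    """Return L(x) = f(x0) + grad f(x0) . (x - x0)."""
    f0 = f(x0)
    g = grad(f, x0)
    def L(x):
        return f0 + sum(gi * (xi - x0i) for gi, xi, x0i in zip(g, x, x0))
    return L

# Example: an arbitrary nonlinear function of two variables.
f = lambda x: math.sin(x[0]) * math.exp(x[1])

L = local_linearization(f, [1.0, 0.0])
print(f([1.02, 0.01]), L([1.02, 0.01]))  # nearly equal close to the point
```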
I think the best way to understand this formula is to basically derive it for yourself in the context of a specific function.
Example 1: Finding a local linearization
Problem: Have yourself a three-variable function $f(x, y, z)$, along with a specific input point $(x_0, y_0, z_0)$.

Find a linear function $L_f(x, y, z)$ whose value and partial derivatives all match those of $f$ at that point.

Step 1: Evaluate $f$ at the point $(x_0, y_0, z_0)$.

Step 2: Use this to start writing your function. To guarantee that $L_f$ equals $f(x_0, y_0, z_0)$ when evaluated at the input $(x_0, y_0, z_0)$, you might start writing it like this:

$$L_f(x, y, z) = f(x_0, y_0, z_0) + a(x - x_0) + b(y - y_0) + c(z - z_0)$$

Here, $a$, $b$, and $c$ are all arbitrary constants. To make sure that the remainder of the expression is $0$ at $(x_0, y_0, z_0)$ while keeping things linear, we only add constant multiples of the terms $(x - x_0)$, $(y - y_0)$, and $(z - z_0)$, since these will all be $0$ at that input.
The partial derivatives of $L_f$ are then simply the constants $a$, $b$, and $c$, so we must set these to match the corresponding partial derivatives of $f$.
Step 3: Compute each partial derivative of $f$.
Now we evaluate each of these at the point $(x_0, y_0, z_0)$. Luckily, our computations are made easier by the particular input values chosen.
Step 4: Replace the constants $a$, $b$, and $c$ with these partial derivative values.
Now notice what this looks like if you write it with vector notation: it is just a specific form of the general formula

$$L_f(\vec{x}) = f(\vec{x}_0) + \nabla f(\vec{x}_0) \cdot (\vec{x} - \vec{x}_0)$$

shown above.
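Since the specific function from this example isn't reproduced here, the four steps can be sketched with a stand-in function of my own choosing, purely for illustration. The constants `a`, `b`, and `c` play the same role as the arbitrary constants in the text:

```python
# Assumed stand-in example: f(x, y, z) = x*y + z**2, linearized at (1, 2, 3).
def f(x, y, z):
    return x * y + z**2

x0, y0, z0 = 1.0, 2.0, 3.0

# Step 1: evaluate f at the point.
f0 = f(x0, y0, z0)   # 11.0

# Step 3: partial derivatives of f, worked out by hand, at the point.
a = y0        # df/dx = y
b = x0        # df/dy = x
c = 2 * z0    # df/dz = 2z

# Steps 2 and 4: constant multiples of (x - x0), (y - y0), (z - z0) are all
# zero at the input, so L keeps the value f0 while matching the partials.
def L(x, y, z):
    return f0 + a * (x - x0) + b * (y - y0) + c * (z - z0)

print(L(x0, y0, z0))                        # 11.0, same as f there
print(f(1.1, 2.1, 2.9), L(1.1, 2.1, 2.9))   # close together near the point
```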
Example 2: Using local linearization for estimation
What follows is by no means a practical application, but working through it will help give a feel for what local linearization is doing.
Problem: Suppose you are on a desert island without a calculator, and you need to estimate the value of an expression built out of nested square roots.
Solution:
We can view this problem as evaluating a certain three-variable function, $f(x, y, z)$, at a particular point.

I don't know about you, but I'm not sure how to evaluate square roots by hand. If only this function were linear! Then working it out by hand would only involve adding and multiplying numbers. What we could do is find the local linearization at a nearby point where evaluating $f$ is easy, then evaluate that linearization at the point we actually care about.

The point we care about is very close to a much simpler point $(x_0, y_0, z_0)$, where each square root is easy to evaluate.
To build the local linearization, we need the value of $f$ and all partial derivatives of $f$ at $(x_0, y_0, z_0)$.

The first of these is the value $f(x_0, y_0, z_0)$ itself, which works out cleanly.
Looks like someone chose a few convenient input values, eh?
On to the partial derivatives (heavy sigh). Since the square roots are abundant, let's write out for ourselves the derivative of $\sqrt{x}$:

$$\frac{d}{dx}\sqrt{x} = \frac{1}{2\sqrt{x}}$$
Okay, here we go. The simplest partial derivative is the one for the outermost variable. Since the next variable sits inside a square root, its partial derivative picks up a factor from the chain rule. Nestled even deeper, that tricky innermost variable requires chaining through every layer of the expression.
Next, evaluate each of these square roots at the point $(x_0, y_0, z_0)$.
Plugging these values into our expressions for the partial derivatives, we have
Unraveling the formula for local linearization, we get

$$L_f(x, y, z) = f(x_0, y_0, z_0) + f_x(x_0, y_0, z_0)(x - x_0) + f_y(x_0, y_0, z_0)(y - y_0) + f_z(x_0, y_0, z_0)(z - z_0)$$
Finally, after all this work, we can plug in the actual point we care about.
Calculating this by hand still isn't easy, but at least it's doable. When you work it out, the final answer lands very close to what a calculator gives for the original expression. So our approximation is pretty good!
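The exact expression from this example isn't reproduced here, so here's the same trick on an assumed stand-in with nested square roots (my own choice, for illustration), linearized at a nearby point where every root comes out cleanly:

```python
import math

# Assumed stand-in: f(x, y, z) = sqrt(x + sqrt(y + sqrt(z))),
# estimated near the simple point (2, 3, 1), where
# sqrt(1) = 1, sqrt(3 + 1) = 2, and sqrt(2 + 2) = 2.
def f(x, y, z):
    return math.sqrt(x + math.sqrt(y + math.sqrt(z)))

x0, y0, z0 = 2.0, 3.0, 1.0
f0 = f(x0, y0, z0)   # 2.0

# Partial derivatives at (2, 3, 1), using d/du sqrt(u) = 1 / (2 sqrt(u)):
fx = 1 / (2 * 2)          # 1/4  -- one chain-rule layer
fy = fx * 1 / (2 * 2)     # 1/16 -- two layers deep
fz = fy * 1 / (2 * 1)     # 1/32 -- three layers deep

def L(x, y, z):
    return f0 + fx * (x - x0) + fy * (y - y0) + fz * (z - z0)

estimate = L(2.01, 2.99, 1.02)   # only adding and multiplying, doable by hand
truth = f(2.01, 2.99, 1.02)
print(estimate, truth)           # agree to several decimal places
```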
Why do we care?
Although it is not common to find yourself estimating square roots on a desert island (at least where I'm from), what is common in the contexts of math and engineering is wrangling with complicated but differentiable functions. The phrase "just linearize it" is tossed around so much that not knowing what it means could be awkward.
Remember, a local linearization approximates one function near a point based on the information you can get from its derivative(s) at that point. Even though you can use a computer to evaluate functions, that's not always enough:
- You might need to evaluate it many thousands of times per second, and working it out in full takes too long.
- Maybe you don't even have the function explicitly written out, and you just have a few measurements near a point which you wish to extrapolate.
- Sometimes what you care about is the inverse function, which can be hard or even impossible to find for the function as a whole, whereas inverting linear functions is relatively straightforward.
Summary
Local linearization generalizes the idea of tangent planes to any multivariable function.
The idea is to approximate a function near one of its inputs with a simpler function that has the same value at that input, as well as the same partial derivative values.
Written with vectors, here's what the approximation function looks like:

$$L_f(\vec{x}) = f(\vec{x}_0) + \nabla f(\vec{x}_0) \cdot (\vec{x} - \vec{x}_0)$$

This is called the local linearization of $f$ near $\vec{x}_0$.