Local linearization (article) | Khan Academy (2024)

Learn how to generalize the idea of a tangent plane into a linear approximation of scalar-valued multivariable functions.

Background

  • The gradient

What we're building to

  • Local linearization generalizes the idea of tangent planes to any multivariable function. Here, I will just talk about the case of scalar-valued multivariable functions.

  • The idea is to approximate a function near one of its inputs with a simpler function that has the same value at that input, as well as the same partial derivative values.

  • Written with vectors, here's what the approximation function looks like:

    $$L_f(\mathbf{x}) = \underbrace{f(\mathbf{x}_0)}_{\text{Constant}} + \underbrace{\nabla f(\mathbf{x}_0)}_{\text{Constant vector}} \cdot \underbrace{(\mathbf{x} - \mathbf{x}_0)}_{\mathbf{x}\text{ is the variable}}$$

  • This is called the local linearization of $f$ near $\mathbf{x}_0$.

Tangent planes as approximations

In the previous article, I talked about finding the tangent plane to a two-variable function's graph.

Tangent plane, perspective 1

The formula for the tangent plane ended up looking like this.

$$T(x, y) = f(x_0, y_0) + f_x(x_0, y_0)(x - x_0) + f_y(x_0, y_0)(y - y_0)$$

This function $T(x, y)$ often goes by a different name: the "local linearization" of $f$ at the point $(x_0, y_0)$. You can think about this as the simplest function satisfying two properties:

  1. It has the same value as $f$ at the point $(x_0, y_0)$.

  2. It has the same partial derivatives as $f$ at the point $(x_0, y_0)$.

As always in multivariable calculus, it is healthy to contemplate a new concept without relying on graphical intuition. That's not to say you shouldn't try to think visually; maybe just think about the input space itself, or about the relevant transformation, rather than about the graph.

Fundamentally, a local linearization approximates one function near a point based on the information you can get from its derivative(s) at that point.

In the case of functions with a two-variable input and a scalar (i.e. non-vector) output, this can be visualized as a tangent plane. However, with higher dimensions we don't have this visual luxury, so we are left to think about it just as an approximation.

In real-world applications of multivariable calculus, you almost never care about an actual plane in space. Instead, you might have some complicated function, like, oh, I don't know, air resistance on a parachute as a function of speed and orientation. Dealing with the actual function may be tricky or computationally expensive, so it's helpful to approximate it with something simpler, like a linear function.

What do I mean by "Linear function"?

Consider a function with a multidimensional input.

$$f(x_1, x_2, \dots, x_n)$$

This function is called linear if in its definition, all the coordinates are just multiplied by constants, with nothing else happening to them. For example, it might look like this:

$$f(x_1, x_2, \dots, x_n) = 2x_1 + 3x_2 + \cdots + 5x_n$$

The full story of linearity goes deeper (hence the existence of the field "Linear algebra"), but for now, this conception will do. Typically, instead of writing out all the variables like this, you would treat the input as a vector:

$$\mathbf{x} = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}$$

And you would define the function using a dot product:

$$f(\mathbf{x}) = \begin{bmatrix} 2 \\ 3 \\ \vdots \\ 5 \end{bmatrix} \cdot \mathbf{x}$$

For the purposes of this article, and more generally when you talk about local linearization, you are allowed to add in a constant to this expression:

$$f(\mathbf{x}) = \underbrace{c}_{\text{Some constant}} + \underbrace{\mathbf{v}}_{\text{Some vector}} \cdot \mathbf{x}$$

If you wanted to be pedantic, this is no longer a linear function. It's what's called an "affine" function. But most people would say "whatever, it's basically linear".
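
To make this concrete in code, here is a minimal sketch of such an affine function written as a constant plus a dot product, using NumPy. The particular constant and coefficient vector are made-up values for illustration, not anything from the article.

```python
import numpy as np

# A made-up affine function f(x) = c + v . x
c = 7.0                        # some constant
v = np.array([2.0, 3.0, 5.0])  # some vector of coefficients

def f(x):
    """Affine: a constant plus a dot product with the input vector."""
    return c + np.dot(v, x)

print(f(np.array([1.0, 0.0, -1.0])))  # 7 + (2*1 + 3*0 + 5*(-1)) = 4.0
```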

Local linearization

Now, suppose your function $f(\mathbf{x})$ does not have the luxury of being linear. (The bolded "$\mathbf{x}$" still represents a multidimensional vector.) It might be defined by some crazy expression way more wild than a dot product.

The idea of a local linearization is to approximate this function near some particular input value, $\mathbf{x}_0$, with a function that is linear. Specifically, here's what that new function looks like:

$$L_f(\mathbf{x}) = \underbrace{f(\mathbf{x}_0)}_{\text{Constant}} + \underbrace{\nabla f(\mathbf{x}_0)}_{\text{Constant vector}} \cdot \underbrace{(\mathbf{x} - \mathbf{x}_0)}_{\mathbf{x}\text{ is the variable}}$$

  • Notice, by plugging in $\mathbf{x} = \mathbf{x}_0$, you can see that both functions $f$ and $L_f$ will have the same value at the input $\mathbf{x}_0$.

  • The vector dotted against the variable $\mathbf{x}$ is the gradient of $f$ at the specified input, $\nabla f(\mathbf{x}_0)$. This ensures that both functions $f$ and $L_f$ will have the same gradient at the specified input. In other words, all their partial derivative information will be the same. (A quick numerical sketch of this formula follows right after this list.)
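
Before working through an example by hand, here is a minimal numerical sketch of this formula. The helper `local_linearization`, the example function, and the base point are all made-up for illustration, and the gradient is estimated with finite differences rather than computed exactly.

```python
import numpy as np

def local_linearization(f, x0, h=1e-6):
    """Return L_f(x) = f(x0) + grad_f(x0) . (x - x0),
    estimating the gradient of f at x0 with central differences."""
    x0 = np.asarray(x0, dtype=float)
    grad = np.zeros_like(x0)
    for i in range(len(x0)):
        step = np.zeros_like(x0)
        step[i] = h
        grad[i] = (f(x0 + step) - f(x0 - step)) / (2 * h)
    f0 = f(x0)
    return lambda x: f0 + np.dot(grad, np.asarray(x, dtype=float) - x0)

# Arbitrary example: f(x, y) = sin(x) * y^2, linearized near (0.5, 1.0).
f = lambda v: np.sin(v[0]) * v[1] ** 2
L = local_linearization(f, [0.5, 1.0])

print(f([0.51, 1.02]))  # the function itself
print(L([0.51, 1.02]))  # the linear approximation -- should be close
```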

I think the best way to understand this formula is to basically derive it for yourself in the context of a specific function.

Example 1: Finding a local linearization.

Problem: Have yourself a function:

$$f(x, y, z) = ze^{x^2 - y^3}$$

Find a linear function $L_f(x, y, z)$ such that the value of $L_f$ and all its partial derivatives match those of $f$ at the following point:

$$(x_0, y_0, z_0) = (8, 4, 3)$$

Step 1: Evaluate $f$ at the chosen point.

$$f(8, 4, 3) = 3e^{8^2 - 4^3} = 3e^{64 - 64} = 3e^0 = 3$$

Step 2: Use this to start writing your function. Which of the following functions will be guaranteed to equal $f$ at the input $(x, y, z) = (8, 4, 3)$?

Choose 1 answer:

  • $L_f(x, y, z) = 3 + 8ax + 4by + 3cz$

  • $L_f(x, y, z) = 3 + a(x - 8) + b(y - 4) + c(z - 3)$

For both of these, a, b and c are all arbitrary constants.

You might start writing the desired function $L_f$ like this:

$$L_f(x, y, z) = 3 + \big\langle \text{something equal to } 0 \text{ when } (x, y, z) = (8, 4, 3) \big\rangle$$

To make sure that the rest of the expression is $0$ at $(x, y, z) = (8, 4, 3)$ while keeping things linear, we only add constant multiples of the terms $(x - 8)$, $(y - 4)$, and $(z - 3)$, since these will all be $0$ at that input.

$$L_f(x, y, z) = 3 + a(x - 8) + b(y - 4) + c(z - 3)$$

The partial derivatives of $L_f$, as you have written it so far, are precisely these constants $a$, $b$, and $c$. So to force our function to have the same partial derivative information as $f$ at the point $(8, 4, 3)$, we just need to set these constants equal to the corresponding partial derivatives of $f$ at this point.

Step 3: Compute each partial derivative of $f(x, y, z) = ze^{x^2 - y^3}$.

$$f_x(x, y, z) = \frac{\partial}{\partial x}\left(ze^{x^2 - y^3}\right) = ze^{x^2 - y^3}(2x) = 2xze^{x^2 - y^3}$$

$$f_y(x, y, z) = \frac{\partial}{\partial y}\left(ze^{x^2 - y^3}\right) = ze^{x^2 - y^3}(-3y^2) = -3y^2ze^{x^2 - y^3}$$

$$f_z(x, y, z) = \frac{\partial}{\partial z}\left(ze^{x^2 - y^3}\right) = e^{x^2 - y^3}$$

Now we evaluate each of these at (8,4,3).

Luckily, our computations are made easier by the fact that $e^{8^2 - 4^3} = e^{64 - 64} = e^0 = 1$.

$$f_x(8, 4, 3) = 2(8)(3)e^{8^2 - 4^3} = 48$$

$$f_y(8, 4, 3) = -3(4^2)(3)e^{8^2 - 4^3} = -144$$

$$f_z(8, 4, 3) = e^{8^2 - 4^3} = 1$$
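
If you'd like to double-check these partial derivatives with a computer, here is an optional sketch using SymPy, a symbolic algebra library (an addition of mine, not something the article relies on), to differentiate $f$ and evaluate the results at $(8, 4, 3)$.

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
f = z * sp.exp(x**2 - y**3)

# Gradient of f, evaluated at the point (8, 4, 3).
grad = [sp.diff(f, var) for var in (x, y, z)]
print([g.subs({x: 8, y: 4, z: 3}) for g in grad])  # expected: [48, -144, 1]
```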

Step 4: Replacing the constants $a$, $b$, and $c$ in the expression for $L_f$ with these partial derivative values, what do you get?

$$L_f(x, y, z) = 3 + 48(x - 8) - 144(y - 4) + 1(z - 3)$$

Now notice what this looks like if you write it with vector notation.

$$\begin{aligned}
L_f\!\left(\begin{bmatrix} x \\ y \\ z \end{bmatrix}\right)
&= 3 + \begin{bmatrix} 48 \\ -144 \\ 1 \end{bmatrix} \cdot \left( \begin{bmatrix} x \\ y \\ z \end{bmatrix} - \begin{bmatrix} 8 \\ 4 \\ 3 \end{bmatrix} \right) \\
&= f\!\left(\begin{bmatrix} 8 \\ 4 \\ 3 \end{bmatrix}\right) + \begin{bmatrix} f_x(8, 4, 3) \\ f_y(8, 4, 3) \\ f_z(8, 4, 3) \end{bmatrix} \cdot \left( \begin{bmatrix} x \\ y \\ z \end{bmatrix} - \begin{bmatrix} 8 \\ 4 \\ 3 \end{bmatrix} \right) \\
&= f\!\left(\begin{bmatrix} 8 \\ 4 \\ 3 \end{bmatrix}\right) + \nabla f\!\left(\begin{bmatrix} 8 \\ 4 \\ 3 \end{bmatrix}\right) \cdot \left( \begin{bmatrix} x \\ y \\ z \end{bmatrix} - \begin{bmatrix} 8 \\ 4 \\ 3 \end{bmatrix} \right)
\end{aligned}$$

It is just a specific form of the general formula shown above.

$$L_f(\mathbf{x}) = \underbrace{f(\mathbf{x}_0)}_{\text{Constant}} + \underbrace{\nabla f(\mathbf{x}_0)}_{\text{Constant vector}} \cdot \underbrace{(\mathbf{x} - \mathbf{x}_0)}_{\mathbf{x}\text{ is the variable}}$$
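
As a quick sanity check on this result, here is a short sketch comparing $f$ with its local linearization at an input near $(8, 4, 3)$; the test point is an arbitrary choice of mine.

```python
import numpy as np

def f(x, y, z):
    return z * np.exp(x**2 - y**3)

def L_f(x, y, z):
    # Local linearization of f at (8, 4, 3), as computed above.
    return 3 + 48 * (x - 8) - 144 * (y - 4) + 1 * (z - 3)

# An arbitrary test point close to (8, 4, 3):
print(f(8.001, 4.0005, 3.002))    # roughly 2.9781
print(L_f(8.001, 4.0005, 3.002))  # 2.9780 -- close to the value above
```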

Example 2: Using local linearization for estimation

What follows is by no means a practical application, but working through it will help give a feel for what local linearization is doing.

Problem: Suppose you are on a desert island without a calculator, and you need to estimate $\sqrt{2.01 + \sqrt{0.99 + \sqrt{9.01}}}$. How would you do it?

Solution:

We can view this problem as evaluating a certain three-variable function at the point (2.01,0.99,9.01), namely

$$f(x, y, z) = \sqrt{x + \sqrt{y + \sqrt{z}}}$$

I don't know about you, but I'm not sure how to evaluate square roots by hand. If only this function were linear! Then working it out by hand would only involve adding and multiplying numbers. What we could do is find the local linearization at a nearby point where evaluating $f$ is easier. Then we can get close to the right answer by evaluating the linearization at the point $(2.01, 0.99, 9.01)$.

The point we care about is very close to the much simpler point (2,1,9), so we find the local linearization of f near that point. As before, we must find

  • f(2,1,9)

  • All partial derivatives of f at (2,1,9)

The first of these is

$$f(2, 1, 9) = \sqrt{2 + \sqrt{1 + \sqrt{9}}} = \sqrt{2 + \sqrt{1 + 3}} = \sqrt{2 + \sqrt{4}} = \sqrt{2 + 2} = \sqrt{4} = 2$$

Looks like someone chose a few convenient input values, eh?

On to the partial derivatives (heavy sigh). Since the square roots are abundant, let's write out for ourselves the derivative of $\sqrt{x}$.

$$\frac{d}{dx}\sqrt{x} = \frac{d}{dx}x^{1/2} = \frac{1}{2}x^{-1/2} = \frac{1}{2\sqrt{x}}$$

Okay, here we go. The simplest partial derivative is $f_x$:

$$f_x = \frac{\partial}{\partial x}\sqrt{x + \sqrt{y + \sqrt{z}}} = \frac{1}{2\sqrt{x + \sqrt{y + \sqrt{z}}}}$$

Since $y$ is nestled in there, $f_y$ requires some chain rule action:

$$f_y = \frac{\partial}{\partial y}\sqrt{x + \sqrt{y + \sqrt{z}}} = \frac{1}{2\sqrt{x + \sqrt{y + \sqrt{z}}}} \cdot \frac{1}{2\sqrt{y + \sqrt{z}}}$$

Nestled even deeper, that tricky $z$ will require two iterations of the chain rule:

$$f_z = \frac{\partial}{\partial z}\sqrt{x + \sqrt{y + \sqrt{z}}} = \frac{1}{2\sqrt{x + \sqrt{y + \sqrt{z}}}} \cdot \frac{1}{2\sqrt{y + \sqrt{z}}} \cdot \frac{1}{2\sqrt{z}}$$

Next, evaluate each one of these at (2,1,9). This might seem like a lot, but they are all made up of the same three basic components:

$$\frac{1}{2\sqrt{x + \sqrt{y + \sqrt{z}}}} = \frac{1}{2\sqrt{2 + \sqrt{1 + \sqrt{9}}}} = \frac{1}{2\sqrt{2 + 2}} = \frac{1}{4}$$

$$\frac{1}{2\sqrt{y + \sqrt{z}}} = \frac{1}{2\sqrt{1 + \sqrt{9}}} = \frac{1}{2\sqrt{4}} = \frac{1}{4}$$

$$\frac{1}{2\sqrt{z}} = \frac{1}{2\sqrt{9}} = \frac{1}{6}$$

Plugging these values into our expressions for the partial derivatives, we have

$$f_x(2, 1, 9) = \frac{1}{4}$$

$$f_y(2, 1, 9) = \frac{1}{4} \cdot \frac{1}{4} = \frac{1}{16}$$

$$f_z(2, 1, 9) = \frac{1}{4} \cdot \frac{1}{4} \cdot \frac{1}{6} = \frac{1}{96}$$
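
If you'd like to double-check this chain-rule bookkeeping, here is an optional sketch using SymPy (again, an addition of mine rather than part of the original walkthrough) that differentiates $f$ symbolically and evaluates at $(2, 1, 9)$.

```python
import sympy as sp

x, y, z = sp.symbols('x y z', positive=True)
f = sp.sqrt(x + sp.sqrt(y + sp.sqrt(z)))

# Partial derivatives of f, evaluated at (2, 1, 9).
point = {x: 2, y: 1, z: 9}
print([sp.diff(f, var).subs(point) for var in (x, y, z)])  # expected: [1/4, 1/16, 1/96]
```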

Unraveling the formula for local linearization, we get

$$\begin{aligned}
L_f(\mathbf{x}) &= f(\mathbf{x}_0) + \nabla f(\mathbf{x}_0) \cdot (\mathbf{x} - \mathbf{x}_0) \\
&= f(\mathbf{x}_0) + f_x(\mathbf{x}_0)(x - x_0) + f_y(\mathbf{x}_0)(y - y_0) + f_z(\mathbf{x}_0)(z - z_0) \\
&= 2 + \frac{1}{4}(x - 2) + \frac{1}{16}(y - 1) + \frac{1}{96}(z - 9)
\end{aligned}$$

Finally, after all this work, we can plug in $(x, y, z) = (2.01, 0.99, 9.01)$ to compute our approximation:

$$2 + \frac{1}{4}(2.01 - 2) + \frac{1}{16}(0.99 - 1) + \frac{1}{96}(9.01 - 9) = 2 + \frac{0.01}{4} - \frac{0.01}{16} + \frac{0.01}{96}$$

Calculating this by hand still isn't easy, but at least it's doable. When you work it out, the final answer is

2.001979

Had we just used a calculator, we would have found

$$\sqrt{2.01 + \sqrt{0.99 + \sqrt{9.01}}} \approx 2.001978$$

So our approximation is pretty good!
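
And if you do happen to have Python rather than a calculator on your desert island, here is a short sketch reproducing both the linear estimate and the direct computation from this example.

```python
import math

def f(x, y, z):
    return math.sqrt(x + math.sqrt(y + math.sqrt(z)))

def L_f(x, y, z):
    # Local linearization of f at (2, 1, 9), as derived above.
    return 2 + (x - 2) / 4 + (y - 1) / 16 + (z - 9) / 96

print(L_f(2.01, 0.99, 9.01))  # 2.0019791..., the by-hand estimate
print(f(2.01, 0.99, 9.01))    # 2.0019779..., the exact value
```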

Why do we care?

Although it is not common to find yourself estimating square roots on a desert island (at least where I'm from), what is common in the contexts of math and engineering is wrangling with complicated but differentiable functions. The phrase "just linearize it" is tossed around so much that not knowing what it means could be awkward.

Remember, a local linearization approximates one function near a point based on the information you can get from its derivative(s) at that point. Even though you can use a computer to evaluate functions, that's not always enough.

  • You might need to evaluate it many thousands of times per second, and working it out in full takes too long.

  • Maybe you don't even have the function explicitly written out, and you just have a few measurements near a point which you wish to extrapolate.

  • Sometimes what you care about is the inverse function, which can be hard or even impossible to find for the function as a whole, whereas inverting linear functions is relatively straightforward.

Summary

  • Local linearization generalizes the idea of tangent planes to any multivariable function.

  • The idea is to approximate a function near one of its inputs with a simpler function that has the same value at that input, as well as the same partial derivative values.

  • Written with vectors, here's what the approximation function looks like:

    $$L_f(\mathbf{x}) = \underbrace{f(\mathbf{x}_0)}_{\text{Constant}} + \underbrace{\nabla f(\mathbf{x}_0)}_{\text{Constant vector}} \cdot \underbrace{(\mathbf{x} - \mathbf{x}_0)}_{\mathbf{x}\text{ is the variable}}$$

  • This is called the local linearization of $f$ near $\mathbf{x}_0$.
