General Linear Regression

This is a work in progress. It is meant to capture the mathematical derivation behind general linear regression. It is math-heavy.

Introduction

Assume you have some data set where you have $$N$$ independent values $$x_k$$ and dependent values $$y_k$$. You also have some reasonable scientific model that relates the dependent variable to the independent variable. If that model can be written as a general linear fit, that means you can represent the fit function $$\hat{y}(x)$$ as:

$$ \begin{align*} \hat{y}(x)&=\sum_{m=0}^{M-1}a_m\phi_m(x) \end{align*} $$

where $$\phi_m(x)$$ is the $$m$$th basis function in your model and $$a_m$$ is its constant coefficient. For instance, if you end up with a model:

$$ \begin{align*} \hat{y}(x)&=3e^{-2x}+5 \end{align*} $$

then you could map these to the summation with $$M=2$$ basis functions total and:

$$ \begin{align*} a_0 &= 3 & \phi_0(x) &= e^{-2x} \\ a_1 &= 5 & \phi_1(x) &= x^0 \end{align*} $$

Note for the second term that each $$\phi_m(x)$$ must be a function of $$x$$; constants are thus the coefficients on an implied $$x^0$$.

The goal, once we have established a scientifically valid model, is to determine the "best" set of coefficients for that model. We are going to define the "best" set of coefficients as the values of $$a_m$$ that minimize the sum of the squares of the estimate residuals, $$S_r$$, for that particular model. Recall that:

$$ \begin{align*} S_r&=\sum_k\left(y_k-\hat{y}_k\right)^2=\sum_k\left(\hat{y}_k-y_k\right)^2 \end{align*} $$
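To make the notation concrete, here is a minimal Python sketch (an illustration only, not something the derivation depends on) that evaluates a fit built from basis functions and coefficients and computes $$S_r$$; the data arrays and the names <code>x</code>, <code>y</code>, <code>phi</code>, and <code>a</code> are hypothetical.

<syntaxhighlight lang="python">
import numpy as np

# Hypothetical data set; any paired x_k, y_k values would work here
x = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
y = np.array([8.1, 6.0, 5.4, 5.2, 5.1])

# Coefficients and basis functions for the example model yhat(x) = 3*exp(-2x) + 5
a = [3.0, 5.0]
phi = [lambda x: np.exp(-2 * x), lambda x: x**0]

# General linear fit: yhat(x) = sum over m of a_m * phi_m(x)
yhat = sum(a_m * phi_m(x) for a_m, phi_m in zip(a, phi))

# Sum of the squares of the estimate residuals
S_r = np.sum((yhat - y)**2)
print(S_r)
</syntaxhighlight>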

Finding the coefficients for the "constant" model

The simplest model you might come up with is a constant, $$\hat{y}(x)=a_0x^0$$. This means that the $$S_r$$ value, using the second version above, will be:

$$\begin{align*} S_r&=\sum_k\left(\hat{y}_k-y_k\right)^2=\sum_k\left(a_0-y_k\right)^2 \end{align*}$$

Keep in mind that the only variable right now is $$a_0$$; all the $$x_k$$ and $$y_k$$ values are fixed independent or dependent values from your data set. The only parameter you can adjust is $$a_0$$. This means that to minimize the $$S_r$$ value, you need to solve:

$$ \begin{align*} \frac{dS_r}{da_0}&=0 \end{align*}$$

Here goes!

$$ \begin{align*} \frac{dS_r}{da_0}=\frac{d}{da_0}\left(\sum_k\left(a_0-y_k\right)^2 \right)&=0 \end{align*}$$

The derivative of a sum is the same as the sum of derivatives, so put the derivative operator inside:

$$ \begin{align*}\sum_k\frac{d}{da_0}\left(a_0-y_k\right)^2&=0 \end{align*}$$

Use the power rule to get that $$d(u^2)=2u~du$$ and note that $$u=(a_0-y_k)$$ so $$\frac{du}{da_0}=1$$ here:

$$ \begin{align*} \sum_k2\left(a_0-y_k\right)&=0\end{align*}$$

Since we are setting the left side to 0, the 2 is irrelevant. Also, the summand can be split into two parts...

$$ \begin{align*} \sum_k\left(a_0\right)-\sum_k\left(y_k\right)&=0 \end{align*}$$

...and then the parts can be separated.

$$ \begin{align*} \sum_k\left(a_0\right)&=\sum_k\left(y_k\right) \end{align*}$$

Recognize that $$a_0$$ is a constant; since you are adding that constant to itself for each of the $$N$$ data points, you can replace the summation with:

$$ \begin{align*} Na_0&=\sum_k\left(y_k\right)\end{align*}$$

Dividing by $$N$$ reveals the answer:

$$ \begin{align*} a_0&=\frac{1}{N}\sum_k\left(y_k\right)=\bar{y} \end{align*}$$

The best constant with which to model a data set is its own average! Admittedly, this will lead to an $$r^2$$ value of 0, which is not great, but it is as good as you can get with a model containing nothing more than a constant.
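As a quick numerical check (a sketch with made-up $$y$$ values, not part of the original derivation), the following assumes NumPy and confirms that $$S_r$$ evaluated at the mean is no larger than at nearby candidate constants:

<syntaxhighlight lang="python">
import numpy as np

# Hypothetical dependent values; the constant model ignores the x values entirely
y = np.array([2.0, 3.5, 1.0, 4.5, 3.0])

def S_r(a0):
    """Sum of squared residuals for the constant model yhat = a0."""
    return np.sum((a0 - y)**2)

a0_best = np.mean(y)

# S_r at the mean should be no larger than at nearby candidate values
print(S_r(a0_best), S_r(a0_best - 0.1), S_r(a0_best + 0.1))
</syntaxhighlight>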

Finding the coefficients for a "straight line" model

So, that was relatively painless (except for the typesetting). Next up, let's look at a slightly more complex model: a straight line. That is to say, $$\hat{y}(x)=a_0x^1+a_1x^0$$. Yes, the indexing is a little unfortunate but that's the way it goes. This means that the $$S_r$$ value, using the second version above, will be:

$$\begin{align*} S_r&=\sum_k\left(\hat{y}_k-y_k\right)^2=\sum_k\left(a_0x_k+a_1-y_k\right)^2 \end{align*}$$

There are now two variables: $$a_0$$ and $$a_1$$. This means that to minimize the $$S_r$$ value, you need to solve:

$$ \begin{align*} \frac{\partial S_r}{\partial a_0}&=0\\ \frac{\partial S_r}{\partial a_1}&=0 \end{align*}$$

where the $$\partial$$ symbol indicates a partial derivative. A partial derivative simply means that you are looking at how something changes with respect to changes in only one of its variables - all the other variables are assumed constant. For example, the volume of a cylinder can be given by $$V=\pi r^2h$$ where $$r$$ is the radius of the base and $$h$$ is the height. Using partial derivatives, you can calculate how the volume changes either as a function of changing the radius of the base or as a function of changing the height:

$$ \begin{align*} \frac{\partial V}{\partial r}&=2\pi r h & \frac{\partial V}{\partial h}&=\pi r^2 \end{align*}$$

On the left, the $$h$$ is taken as a constant; on the right, the $$r$$ is taken as a constant.
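If you want to check these symbolically, a short SymPy sketch (SymPy is an assumption here; the page does not rely on it) reproduces both partial derivatives:

<syntaxhighlight lang="python">
import sympy as sp

r, h = sp.symbols('r h', positive=True)
V = sp.pi * r**2 * h        # volume of a cylinder

print(sp.diff(V, r))        # 2*pi*h*r  (h held constant)
print(sp.diff(V, h))        # pi*r**2   (r held constant)
</syntaxhighlight>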

Here goes round two! First, let's look at $$a_0$$:

$$ \begin{align*} \frac{\partial S_r}{\partial a_0}=\frac{\partial}{\partial a_0}\left(\sum_k\left(a_0x_k+a_1-y_k\right)^2 \right)&=0 \end{align*}$$

The derivative of a sum is the same as the sum of derivatives, so put the derivative operator inside:

$$ \begin{align*}\sum_k\frac{\partial}{\partial a_0}\left(a_0x_k+a_1-y_k\right)^2&=0 \end{align*}$$

Use the power rule to get that $$d(u^2)=2u~du$$ and note that $$u=(a_0x_k+a_1-y_k)$$ so $$\frac{du}{da_0}=x_k$$ here (this is different from what it was above):

$$ \begin{align*} \sum_k2\left(a_0x_k+a_1-y_k\right)x_k=\sum_k2\left(a_0x_{k}^{2}+a_1x_k-y_kx_k\right)&=0\end{align*}$$

Since we are setting the left side to 0, the 2 is irrelevant. Also, the summand can be split into three parts...

$$ \begin{align*} \sum_k\left(a_0x_{k}^{2}\right)+\sum_k\left(a_1x_{k}\right)-\sum_k\left(y_kx_k\right)&=0 \end{align*}$$

...and then the parts can be separated.

$$ \begin{align*} \sum_k\left(a_0x_{k}^{2}\right)+\sum_k\left(a_1x_{k}\right)&=\sum_k\left(y_kx_k\right) \end{align*}$$

None of these terms is simple enough to do anything with other than to recognize that $$a_0$$ and $$a_1$$ are not functions of $$k$$ and can thus be brought out of the summations:

$$ \begin{align} a_0\sum_k\left(x_{k}^{2}\right)+a_1\sum_k\left(x_{k}\right)&=\sum_k\left(y_kx_k\right) \end{align}$$

Now let's look at $$a_1$$:

$$ \begin{align*} \frac{\partial S_r}{\partial a_1}=\frac{\partial}{\partial a_1}\left(\sum_k\left(a_0x_k+a_1-y_k\right)^2 \right)&=0 \end{align*}$$

The derivative of a sum is the same as the sum of derivatives, so put the derivative operator inside:

$$ \begin{align*}\sum_k\frac{\partial}{\partial a_1}\left(a_0x_k+a_1-y_k\right)^2&=0 \end{align*}$$

Use the power rule to get that $$d(u^2)=2u~du$$ and note that $$u=(a_0x_k+a_1-y_k)$$ so $$\frac{du}{da_1}=1$$ here:

$$ \begin{align*} \sum_k2\left(a_0x_k+a_1-y_k\right)(1)=\sum_k2\left(a_0x_{k}+a_1-y_k\right)&=0\end{align*}$$

Since we are setting the left side to 0, the 2 is irrelevant. Also, the summand can be split into three parts...

$$ \begin{align*} \sum_k\left(a_0x_{k}\right)+\sum_k\left(a_1\right)-\sum_k\left(y_k\right)&=0 \end{align*}$$

...and then the parts can be separated.

$$ \begin{align*} \sum_k\left(a_0x_{k}\right)+\sum_k\left(a_1\right)&=\sum_k\left(y_k\right) \end{align*}$$

While the second term is actually simple enough to do something with (it is just adding up $$a_1$$ $$N$$ times and thus could be replaced with $$Na_1$$), we are simply going to recognize that $$a_0$$ and $$a_1$$ are not functions of $$k$$ and can thus be brought out of the summations:

$$ \begin{align} a_0\sum_k\left(x_{k}\right)+a_1\sum_k\left(1\right)&=\sum_k\left(y_k\right) \end{align}$$
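Before moving on, here is a short SymPy check (the data values are made up purely for illustration): building $$S_r$$ symbolically, setting both partial derivatives to zero, and solving the resulting pair of equations yields the best-fit slope $$a_0$$ and intercept $$a_1$$ for that data, exactly as the two equations above require.

<syntaxhighlight lang="python">
import sympy as sp

# Hypothetical data set for the straight-line model
x = [0.0, 1.0, 2.0, 3.0, 4.0]
y = [1.1, 2.9, 5.2, 6.8, 9.1]

a0, a1 = sp.symbols('a0 a1')

# S_r as a symbolic function of the two coefficients
S_r = sum((a0 * xk + a1 - yk)**2 for xk, yk in zip(x, y))

# Setting both partial derivatives to zero and solving gives the best-fit line
solution = sp.solve([sp.diff(S_r, a0), sp.diff(S_r, a1)], [a0, a1])
print(solution)   # {a0: slope, a1: intercept}
</syntaxhighlight>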

If we wanted to be explicit about it, we could note that $$\phi_1(x)$$ here is $$x^0$$ and write:

$$ \begin{align} \tag{2e} a_0\sum_k\left(x^{1}_{k}x^{0}_{k}\right)+a_1\sum_k\left(x^{0}_{k}x^{0}_{k}\right)&=\sum_k\left(y_kx^{0}_{k}\right) \end{align}$$

In fact, we could do the same with equation (1) above and write it as:

$$ \begin{align} \tag{1e} a_0\sum_k\left(x^{1}_{k}x^{1}_{k}\right)+a_1\sum_k\left(x^{0}_{k}x^{1}_{k}\right)&=\sum_k\left(y_kx^{1}_{k}\right) \end{align}$$

Equations (1e) and (2e) give two equations with two unknowns; putting them in matrix form yields:

$$ \begin{align} \begin{bmatrix} \sum_k\left(x^{1}_{k}x^{1}_{k}\right) & \sum_k\left(x^{0}_{k}x^{1}_{k}\right) \\ \sum_k\left(x^{1}_{k}x^{0}_{k}\right) & \sum_k\left(x^{0}_{k}x^{0}_{k}\right) \end{bmatrix} \begin{bmatrix} a_0 \\ a_1 \end{bmatrix}&= \begin{bmatrix} \sum_k\left(y_kx^{1}_{k}\right) \\ \sum_k\left(y_kx^{0}_{k}\right) \end{bmatrix} \end{align}$$
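A minimal NumPy sketch (assuming a small made-up data set) assembles this matrix system from the sums in equations (1e) and (2e), solves it, and compares the result against NumPy's own degree-1 polynomial fit:

<syntaxhighlight lang="python">
import numpy as np

# Hypothetical data set
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])

# Assemble the 2x2 system from the sums in equations (1e) and (2e)
A = np.array([[np.sum(x**1 * x**1), np.sum(x**0 * x**1)],
              [np.sum(x**1 * x**0), np.sum(x**0 * x**0)]])
b = np.array([np.sum(y * x**1), np.sum(y * x**0)])

coeffs = np.linalg.solve(A, b)   # [a0, a1] = [slope, intercept]
print(coeffs)
print(np.polyfit(x, y, 1))       # should agree with coeffs
</syntaxhighlight>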