Explicit orthogonal regression
Estimate the parameters of a scalar non-linear model from experimental data.
$$ y = f(\bm{\beta}, x) = \frac{\beta_1 x^2 + x (1-x)}{\beta_1 x^2 + 2 x (1-x) + \beta_2 (1-x)^2} $$
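A quick way to build intuition for this model: for $\beta_1 = \beta_2 = 1$ the numerator becomes $x^2 + x(1-x) = x$ and the denominator becomes $x^2 + 2x(1-x) + (1-x)^2 = (x + (1-x))^2 = 1$, so the model reduces to the identity $y = x$. A minimal sketch checking this (the function simply mirrors the formula above):

```python
import numpy as np

def f(beta, x):
    """y = (b1*x^2 + x*(1-x)) / (b1*x^2 + 2*x*(1-x) + b2*(1-x)^2)"""
    b1, b2 = beta
    return (b1*x**2 + x*(1-x))/(b1*x**2 + 2*x*(1-x) + b2*(1-x)**2)

# With beta1 = beta2 = 1, the model collapses to y = x
x = np.linspace(0.05, 0.95, 10)
print(np.allclose(f((1.0, 1.0), x), x))  # True
```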
In [1]:
import matplotlib.pyplot as plt
import numpy as np
from odrpack import odr
First, we define the experimental data and the model function.
In [2]:
x = np.array([0.100, 0.300, 0.400, 0.500, 0.600, 0.700, 0.800])
y = np.array([0.059, 0.243, 0.364, 0.486, 0.583, 0.721, 0.824])
In [3]:
def f(beta: np.ndarray, x: np.ndarray) -> np.ndarray:
    b1, b2 = beta
    return (b1*x**2 + x*(1-x))/(b1*x**2 + 2*x*(1-x) + b2*(1-x)**2)
Then, we define an initial guess for the model parameters beta
and, optionally, the corresponding bounds.
In [4]:
beta0 = np.array([1., 1.])
lower = np.array([0., 0.])
upper = np.array([2., 2.])
Lastly, we define the weights for x and y based on a suitable rationale, such as the estimated uncertainty of each variable.
In [5]:
sigma_x = 0.01
sigma_y = 0.05
wd = 1/sigma_x**2
we = 1/sigma_y**2
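Here, the same uncertainty is assumed for every observation, so scalar weights suffice. If each point had its own uncertainty, the weights could instead be arrays with one entry per data point (the underlying ODRPACK routine supports observation-wise weights; that this wrapper accepts them in exactly this form is an assumption worth checking against the odrpack documentation). A sketch with hypothetical, illustrative uncertainties:

```python
import numpy as np

# Hypothetical per-point uncertainties (illustrative values only)
sigma_x = np.full(7, 0.01)
sigma_y = np.array([0.05, 0.05, 0.04, 0.04, 0.04, 0.05, 0.05])

# Observation-wise weights: one weight per data point
wd = 1/sigma_x**2
we = 1/sigma_y**2
print(wd.shape, we.shape)  # (7,) (7,)
```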
We can now launch the regression! If you want to see a brief computation report, set iprint=1001.
In [6]:
sol = odr(f, beta0, y, x, lower=lower, upper=upper, we=we, wd=wd)
The result is packed in an OdrResult
dataclass. Let's check the solution convergence and the estimated model parameters.
In [7]:
sol.stopreason
Out[7]:
'Sum of squares convergence.'
In [8]:
sol.beta
Out[8]:
array([1.4291868 , 1.67473433])
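Before plotting, a quick sanity check (a sketch that only needs numpy, using the fitted values printed above): evaluate the model at the experimental x and inspect the residuals in y. Keep in mind that ODR minimizes orthogonal distances, so these vertical residuals are just a rough indicator of fit quality.

```python
import numpy as np

def f(beta, x):
    b1, b2 = beta
    return (b1*x**2 + x*(1-x))/(b1*x**2 + 2*x*(1-x) + b2*(1-x)**2)

x = np.array([0.100, 0.300, 0.400, 0.500, 0.600, 0.700, 0.800])
y = np.array([0.059, 0.243, 0.364, 0.486, 0.583, 0.721, 0.824])
beta = np.array([1.4291868, 1.67473433])  # fitted parameters from Out[8]

# Vertical residuals (ODR itself minimizes orthogonal distances)
resid = y - f(beta, x)
print(np.max(np.abs(resid)) < 0.02)  # True: all residuals are small
```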
All fine! Let's plot the solution.
In [9]:
_, ax = plt.subplots()
# Plot experimental data
ax.plot(x, y, 'o')
# Plot fitted model
xm = np.linspace(0., 1., 100)
ym = f(sol.beta, xm)
ax.plot(xm, ym)
ax.set_xlabel('x')
ax.set_ylabel('y')
Out[9]:
Text(0, 0.5, 'y')