
Gerald's Case: DFOP - nonsense CIs #27

Open · zhenglei-gao opened this issue Mar 5, 2014 · 6 comments

Comments

@zhenglei-gao (Owner)

  • Most statistical problems are regular in a neighborhood of the solution.
  • Problem of nondifferentiable boundaries.
  • Our program does not eliminate the need for judgment, testing and patience.
  • There is no guaranteed strategy that will resolve every difficulty. -- Gill et al., 1981, p. 285

Gill PE, Murray W, Wright MH (1981) Practical Optimization. Academic Press, London.

@zhenglei-gao (Owner, Author)

The asymptotic theory behind the formula for 's.e.' breaks down with parameters at boundaries. It assumes that you are minimizing the negative log-likelihood, that the optimum is in the interior of the region, and that the log-likelihood is sufficiently close to parabolic that the distribution of the maximum likelihood estimates (MLEs) has a density adequately approximated by a second-order Taylor series expansion about the MLEs. In this case, transforming the parameters will not solve the problem: if the maximum is at a boundary and you send the boundary to Inf with a transformation, then a second-order Taylor series expansion of the log-likelihood about the MLEs will be locally flat in some direction(s), so the Hessian cannot be inverted.
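To see this failure concretely for DFOP, here is a minimal R sketch (illustrative only, not KineticEval code; all names are made up for the example). The data are generated from a single first-order decline, so the biphasic DFOP model C(t) = C0*(g*exp(-k1*t) + (1-g)*exp(-k2*t)) is over-parameterised and the fit degenerates:

```r
## Minimal sketch: SFO data fitted with DFOP, so the extra parameters
## are unidentifiable and the Hessian is numerically singular.
set.seed(1)
t <- c(0, 3, 7, 14, 28, 56, 100)
y <- 100 * exp(-0.1 * t) + rnorm(length(t), sd = 1)  # truly single-phase

dfop_ssq <- function(p) {
  C0 <- p[1]; k1 <- p[2]; k2 <- p[3]; g <- p[4]
  pred <- C0 * (g * exp(-k1 * t) + (1 - g) * exp(-k2 * t))
  sum((y - pred)^2)
}

fit <- optim(c(C0 = 100, k1 = 0.1, k2 = 0.01, g = 0.5), dfop_ssq,
             method = "L-BFGS-B", hessian = TRUE,
             lower = c(1, 1e-4, 1e-4, 0), upper = c(200, 1, 1, 1))

fit$par                    # degenerate: g runs toward a boundary or k1 ~ k2
eigen(fit$hessian)$values  # near-zero eigenvalue(s); inverting this Hessian
                           # for the usual 's.e.' formula gives nonsense CIs
```

The later sketches in this thread reuse t, y, dfop_ssq and fit from here.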

These days, the experts typically approach problems like this using Monte Carlo, often in the form of Markov Chain Monte Carlo (MCMC). One example of an analysis of this type of problem appears in section 2.4 of Pinheiro and Bates (2000) Mixed-Effects Models in S and S-PLUS (Springer).
https://stat.ethz.ch/pipermail/r-help/2008-June/165928.html
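As a hedged illustration of that route, here is a hand-rolled random-walk Metropolis over the box constraints, reusing the illustrative t, dfop_ssq and fit from the sketch above (in R, the FME package offers a polished version of this). Credible intervals come straight from the sampled chain instead of a Hessian:

```r
## Random-walk Metropolis on a Gaussian likelihood built from dfop_ssq().
sigma2 <- dfop_ssq(fit$par) / (length(t) - 4)     # residual variance estimate
loglik <- function(p) -dfop_ssq(p) / (2 * sigma2)

n_iter <- 5000
chain  <- matrix(NA_real_, n_iter, 4, dimnames = list(NULL, names(fit$par)))
p_cur  <- fit$par
ll_cur <- loglik(p_cur)
step   <- c(1, 0.01, 0.01, 0.05)                  # proposal sd per parameter

for (i in seq_len(n_iter)) {
  p_prop <- p_cur + rnorm(4, sd = step)
  # flat prior on the feasible box: reject proposals outside it
  if (all(p_prop[2:3] > 0) && p_prop[4] >= 0 && p_prop[4] <= 1) {
    ll_prop <- loglik(p_prop)
    if (log(runif(1)) < ll_prop - ll_cur) { p_cur <- p_prop; ll_cur <- ll_prop }
  }
  chain[i, ] <- p_cur
}

## interval estimates from sample quantiles, after discarding burn-in
apply(chain[-(1:1000), ], 2, quantile, probs = c(0.025, 0.975))
```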

@zhenglei-gao (Owner, Author)

There is really no way to get around this problem apart from having a good initial guess.
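Reading that pragmatically, one cheap safeguard is a multi-start fit (again reusing the illustrative dfop_ssq from above): run the optimiser from several starting values and keep the best local optimum.

```r
## Multi-start: several starting values, keep the lowest sum of squares.
starts <- list(c(C0 = 100, k1 = 0.1,  k2 = 0.01,  g = 0.2),
               c(C0 = 100, k1 = 0.5,  k2 = 0.05,  g = 0.8),
               c(C0 = 100, k1 = 0.05, k2 = 0.005, g = 0.5))
fits <- lapply(starts, optim, fn = dfop_ssq, method = "L-BFGS-B",
               lower = c(1, 1e-4, 1e-4, 0), upper = c(200, 1, 1, 1))
best <- fits[[which.min(vapply(fits, `[[`, numeric(1), "value"))]]
best$par
```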

@zhenglei-gao (Owner, Author)

http://cowles.econ.yale.edu/P/cp/p09b/p0988.pdf

Andrews DWK (1999) Estimation when a parameter is on a boundary. Econometrica 67: 1341-1383.

@zhenglei-gao (Owner, Author)

Standard errors can be incorrect:

  • Parameter distributions may be heavy-tailed, whereas standard statistical theory assumes a finite second moment. Zaliapin et al. (2005) describe how heavy-tailed distributions behave very differently in small and large samples.
  • In addition, strong non-linearity in the likelihood function near its maximum can account for part of the discrepancy. Kagan and Schoenberg (2001) show the problems that arise when most of the parameters need to be positive and the likelihood value is close to its maximum.

@zhenglei-gao (Owner, Author)

https://groups.google.com/forum/#!topic/comp.soft-sys.matlab/7luxU61mjVk

Or, if your likelihood was Gaussian, and you were just solving a nonlinear least squares problem, then it would be straightforward to compute the Jacobian (matrix of first derivatives) by finite differencing and then use the approximation H = J'*J. I would recommend against trying to compute the second derivatives directly by second-order finite differencing; in my experience it just doesn't work well in practice.
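A hedged R sketch of that H = J'*J route, reusing the illustrative t, y, dfop_ssq and fit from the first comment:

```r
## Forward-difference Jacobian of the model predictions, then H ~ J'J.
dfop_pred <- function(p)
  p["C0"] * (p["g"] * exp(-p["k1"] * t) + (1 - p["g"]) * exp(-p["k2"] * t))

jacobian <- function(p, h = 1e-6) {
  J  <- matrix(NA_real_, length(t), length(p), dimnames = list(NULL, names(p)))
  f0 <- dfop_pred(p)
  for (j in seq_along(p)) {
    pj <- p
    pj[j] <- pj[j] + h * max(1, abs(pj[j]))
    J[, j] <- (dfop_pred(pj) - f0) / (pj[j] - p[j])
  }
  J
}

J  <- jacobian(fit$par)
H  <- crossprod(J)                        # J'J, the Gauss-Newton Hessian
s2 <- dfop_ssq(fit$par) / (length(t) - length(fit$par))
sqrt(diag(s2 * solve(H)))                 # blows up (or errors outright)
                                          # when the fit is degenerate
```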

Another approach that might be more practical would be to sample from
your likelihood (or more generally posterior) distribution using
Markov Chain Monte Carlo methods. If the likelihood can be computed
relatively quickly, then this can be a very effective
technique.

A third approach that you might consider is building a quadratic
metamodel of the likelihood near the optimal parameter values.
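As a rough sketch of that metamodel idea (illustrative, for two of the four parameters, k1 and g, with the others held at their estimates from the first sketch):

```r
## Sample the objective around the optimum and fit a quadratic surface;
## its second-order coefficients give a smoothed local curvature estimate.
n   <- 200
k1s <- fit$par["k1"] + rnorm(n, sd = 0.005)
gs  <- pmin(pmax(fit$par["g"] + rnorm(n, sd = 0.02), 0), 1)
ssq <- mapply(function(k1, g)
  dfop_ssq(c(fit$par["C0"], k1, fit$par["k2"], g)), k1s, gs)

meta <- lm(ssq ~ poly(k1s, gs, degree = 2, raw = TRUE))
coef(meta)   # the quadratic terms estimate the Hessian in (k1, g)
```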

