Description
We should look in more detail at this approach to improving the Laplace approximation: https://academic.oup.com/jrsssb/advance-article/doi/10.1093/jrsssb/qkaf082/8425622.
Paul and I went to Botond Szabo's ISBA BNP webinar presentation (2026-02-04).
I am probably not seeing things fully clearly, but I think that in the nested-approximation setting, when we are sampling latents, we could add the skewing via simulation for each draw we take (each draw is of course conditional on a value of the parameters), requiring only a few evaluations of the log posterior per draw.
My general sense is that our current approach may have more error in the latents than in the parameters, and the latents are often of most interest, so we might hope this could help somewhat (perhaps a lot). No idea how effective this would be relative to what INLA does.
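To make the per-draw idea concrete, here is a hypothetical sketch, not the paper's actual algorithm: `skewed_laplace_draw`, the finite-difference stencil, and the tilt magnitude are all my own illustrative choices. It tilts each conditional Gaussian draw toward the heavier tail using four extra log-posterior evaluations along the draw direction:

```python
import numpy as np

def skewed_laplace_draw(logpost, mode, chol_cov, rng, h=1e-3):
    """One draw from a heuristically skew-adjusted Laplace approximation.

    Illustrative sketch only (not the JRSSB algorithm): draw from
    N(mode, Sigma), estimate the third directional derivative of the
    log posterior along the draw direction by finite differences, and
    shift the draw toward the heavier tail by an Edgeworth-style amount.
    `chol_cov` is a Cholesky factor of the Laplace covariance Sigma.
    """
    u = rng.standard_normal(mode.size)
    step = chol_cov @ u                     # plain Gaussian displacement
    r = np.linalg.norm(u)
    direction = step / max(r, 1e-12)        # covariance-scaled unit direction
    f = lambda t: logpost(mode + t * direction)
    # central-difference stencil for the third derivative along `direction`:
    # f'''(0) ~= [f(2h) - 2 f(h) + 2 f(-h) - f(-2h)] / (2 h^3)
    third = (f(2 * h) - 2 * f(h) + 2 * f(-h) - f(-2 * h)) / (2 * h**3)
    # heuristic tilt magnitude: grows with the skewness estimate and with
    # how far out the draw landed; this constant is my own choice
    tilt = third * r**2 / 6.0
    return mode + step + tilt * direction
```

On a 1D skewed target such as a Gamma(3, 1) kernel (mode 2, true mean 3), the adjusted draws average noticeably above the mode, whereas plain Laplace draws average exactly at it; the correction direction is right even if the magnitude is heuristic.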
Some vague questions:
- I didn't follow the connection between the first half of Szabo's talk (the third-derivative approach; Annals of Statistics paper) and the second half (the simulation-based skewing; JRSSB paper) -- i.e., is there any advantage to taking the approach in the first half?
- How does their work relate to the skew-normal approximation that INLA uses for the latent marginals?
- Can one use their work to get a better estimate of the marginal likelihood for a given parameter value than Laplace gives us?
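For reference on the last question, the baseline to beat is the standard Laplace estimate of the conditional marginal likelihood; a minimal sketch (function and argument names are mine, not from the paper):

```python
import numpy as np

def laplace_log_ml(log_joint, z_mode, neg_hessian):
    """Standard Laplace estimate of log p(y | theta).

    `log_joint(z)` is log p(y, z | theta), `z_mode` its maximizer in z,
    and `neg_hessian` the negative Hessian of log_joint at the mode.
    log p(y | theta) ~= log p(y, z* | theta) + (d/2) log(2 pi)
                        - (1/2) log det(neg_hessian)
    """
    d = z_mode.size
    _, logdet = np.linalg.slogdet(neg_hessian)
    return log_joint(z_mode) + 0.5 * d * np.log(2 * np.pi) - 0.5 * logdet
```

On a Gaussian log joint this is exact; any skew-adjusted improvement would show up as a smaller error on non-Gaussian targets.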