Encoding prior knowledge in ensemble refinement

The Journal of Chemical Physics (2024)

Abstract
The proper balancing of information from experiment and theory is a long-standing problem in the analysis of noisy and incomplete data. Viewed as a Pareto optimization problem, improved agreement with the experimental data comes at the expense of growing inconsistencies with the theoretical reference model. Here, we propose how to set the exchange rate a priori to properly balance this trade-off. We focus on gentle ensemble refinement, where the difference between the potential energy surfaces of the reference and refined models is small on a thermal scale. By relating the variance of this energy difference to the Kullback-Leibler divergence between the respective Boltzmann distributions, one can encode prior knowledge about energy uncertainties, i.e., force-field errors, in the exchange rate. The energy uncertainty is defined in the space of observables and depends on their type and number and on the thermodynamic state. We highlight the relation of gentle refinement to free energy perturbation theory. A balanced encoding of prior knowledge increases the quality and transparency of ensemble refinement. Our findings extend to non-Boltzmann distributions, where the uncertainty in energy becomes an uncertainty in information.
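The central relation can be sketched as follows (our reconstruction from standard free energy perturbation arguments; the notation p_0, p, and \Delta U is ours and not quoted from the paper). With a reference ensemble p_0(x) \propto e^{-\beta U_0(x)}, a refined ensemble p(x) \propto e^{-\beta U(x)}, and the energy difference \Delta U = U - U_0, the Kullback-Leibler divergence can be written via free energy perturbation as

D_{\mathrm{KL}}(p \,\|\, p_0) = \beta \Delta F - \beta \langle \Delta U \rangle_{p}, \qquad \beta \Delta F = -\ln \langle e^{-\beta \Delta U} \rangle_{p_0},

and expanding both terms to second order in \beta \Delta U (the gentle regime, where the energy difference is small on the thermal scale) gives

D_{\mathrm{KL}}(p \,\|\, p_0) \approx \tfrac{1}{2}\, \beta^{2}\, \mathrm{Var}_{p_0}(\Delta U).

On this reading, an a priori estimate \sigma_U of the force-field energy error bounds the admissible divergence at roughly (\beta \sigma_U)^2 / 2, which is how prior knowledge about energy uncertainty can be encoded in the exchange rate between agreement with the data and consistency with the reference model.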