Authors: A. K. Rajagopal, S. Teitler, Vijay P. Singh
DOI: 10.1007/978-94-009-3953-0_25
Keywords: Differential entropy; Exponential family; Probability density function; Principle of maximum entropy; Mathematics; Probability distribution; Entropy (information theory); Exponential distribution; Applied mathematics; Sufficient statistic
Abstract: After a brief expository account of the Shannon-Jaynes principle of maximum entropy (POME) for discrete and continuous variables, we give here an account of some recent research work in which: (i) a “histogram” method is used to contrast modes of computation and the role of the histogram in actual practice when dealing with probability distributions; (ii) the idea of the mean logarithmic decrement associated with a distribution is introduced and shown to be related to the concept of differential entropy, and the mean logarithm of the ratio of the density function with respect to an arbitrary exponential density is discussed in the context of hydrological investigations; unlike (i), this quantity, an example of the Kullback-Leibler (KL) information, is always positive and invariant under coordinate transformations; (iii) the constraints entering into POME, as well as into minimum KL information, are identified with the class of sufficient statistics that determine the unknown parameters of the functions occurring in the most commonly used models; (iv) an example where only the first two moments over a semi-infinite domain are given sheds light on the limitations of POME, recently recognized by Wragg and coworkers, and is made relevant to Sonuga's rainfall-runoff relationship; finally, (v) a method of generating distributions, starting from one basic distribution and employing transformations, is given. Taken in conjunction, these lead to the notions of “physical constraints” and “mathematical constraints” in examining parameter estimation.
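Item (iv) is concrete enough to sketch numerically. The following is a minimal illustration, not code from the paper: the function name pome_two_moments, the SciPy-based solver, and the example moment values are our own assumptions; only the existence window m1² ≤ m2 ≤ 2·m1² for a maximum-entropy density on [0, ∞) with prescribed first and second moments, due to Dowson and Wragg, is taken from the literature the abstract refers to.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import least_squares

def pome_two_moments(m1, m2):
    """Fit f(x) = exp(-l0 - l1*x - l2*x**2) on [0, inf) so that
    E[X] = m1 and E[X^2] = m2 (hypothetical helper, not from the paper)."""
    # Dowson-Wragg existence window: a maximum-entropy density with these
    # two moments exists on the semi-infinite domain only when
    # m1**2 <= m2 <= 2*m1**2; outside it, POME simply has no solution.
    if not (m1**2 <= m2 <= 2.0 * m1**2):
        raise ValueError("no POME solution: need m1^2 <= m2 <= 2*m1^2")

    # A finite quadrature cutoff, far into the density's tail, keeps the
    # integrals stable while the solver explores trial (l1, l2).
    hi = m1 + 20.0 * np.sqrt(max(m2 - m1**2, 1e-12))

    def residuals(lams):
        l1, l2 = lams
        g = lambda x: np.exp(-l1 * x - l2 * x * x)
        z0 = quad(g, 0.0, hi)[0]                       # normalizer Z
        z1 = quad(lambda x: x * g(x), 0.0, hi)[0]      # Z * E[X]
        z2 = quad(lambda x: x * x * g(x), 0.0, hi)[0]  # Z * E[X^2]
        return [z1 / z0 - m1, z2 / z0 - m2]

    # Start from the pure-exponential case (l2 ~ 0), which is exact at the
    # boundary m2 = 2*m1**2; keep l2 >= 0 so the tail stays integrable.
    sol = least_squares(residuals, x0=[1.0 / m1, 1e-6],
                        bounds=([-np.inf, 0.0], [np.inf, np.inf]))
    l1, l2 = sol.x
    l0 = np.log(quad(lambda x: np.exp(-l1 * x - l2 * x * x), 0.0, hi)[0])
    return l0, l1, l2

# Interior case (m2 < 2*m1^2): a truncated-Gaussian-shaped density.
print(pome_two_moments(1.0, 1.5))
# The boundary case m2 = 2*m1^2 recovers the exponential (l2 ~ 0), while
# m2 > 2*m1^2 raises: the limitation of POME highlighted in item (iv).
```

The design point the sketch makes is the one the abstract draws from Wragg and coworkers: the POME machinery (exponential-family form with Lagrange multipliers matched to moment constraints) is only as good as the feasibility of the constraints, and on a semi-infinite domain two innocuous-looking moments can already fall outside the feasible set.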