Author: J. Aczél
DOI: 10.1007/978-3-642-11004-7_1
Keywords:
Abstract: 1. Let \(\Gamma_N = \left\{ (p_1, p_2, \ldots, p_N) \,\middle|\, \sum_{k=1}^{N} p_k = 1,\; p_k \geq 0,\; k = 1, 2, \ldots, N \right\}\) be the set of all complete finite discrete probability distributions (e.g. of the probabilities of the different outcomes of an experiment, of the contents of a communication, etc.) with N members (N = 2, 3, …). C. E. Shannon (1948) introduced the “Shannon entropy” (with the understanding \(0 \log_2 0 := 0\)) $${\rm H}_N(p_1, p_2, \ldots, p_N) := -\sum_{k=1}^{N} p_k \log_2 p_k \quad \text{for all } (p_1, p_2, \ldots, p_N) \in \Gamma_N,\; N = 2, 3, \ldots,$$ (1) as a measure of uncertainty (before the experiment was performed, the message received, etc.) or, equivalently, of information (received from the completed experiment, message, etc.). What justifies formula (1) and some further measures of information?
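A worked instance of formula (1), for illustration (not part of the original abstract): on \(\Gamma_2\), the uniform distribution carries the maximal uncertainty of one bit, while a degenerate distribution carries none, using the convention \(0 \log_2 0 := 0\): $${\rm H}_2\left(\tfrac{1}{2}, \tfrac{1}{2}\right) = -\tfrac{1}{2}\log_2 \tfrac{1}{2} - \tfrac{1}{2}\log_2 \tfrac{1}{2} = 1, \qquad {\rm H}_2(1, 0) = -1 \cdot \log_2 1 - 0 \cdot \log_2 0 = 0.$$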