paipa 2 days ago
The denominator isn't the issue. The context-dependent base of the logarithm is, which makes 1 Bel = 10x for some things and 1 Bel = ~3.16x (sqrt(10)) for others. I've never heard of decibels used in probability theory. Did they adopt it with the same baked-in bastardizations? Please tell me +10dB(stdev) = +10dB(variance) isn't a thing.
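(It is, in effect, since variance is the square of the standard deviation: if stdev is treated as a "field" quantity (20*log10) and variance as a "power" quantity (10*log10), the two dB figures come out identical. A minimal Python sketch of the arithmetic; the example ratio is made up:)

    import math

    # A sqrt(10) (~3.16x) ratio in standard deviation is a 10x ratio
    # in variance, because variance = stdev**2.
    stdev_ratio = math.sqrt(10)
    variance_ratio = stdev_ratio ** 2

    # Decibel conventions: 20*log10 for "field" quantities (amplitude, stdev),
    # 10*log10 for "power" quantities (power, variance).
    db_stdev = 20 * math.log10(stdev_ratio)
    db_variance = 10 * math.log10(variance_ratio)

    print(db_stdev, db_variance)  # both print 10.0: the square is baked into the convention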
dj3l4l 2 days ago
The problem stated in the article is that the unitless quantity of 1 Bel is effectively applied only to power ratios. It is of course true that one can absorb the squaring of powers into the base of the logarithm when figuring out what effective scaling of voltages corresponds to 1 Bel of power scaling, but it is ultimately more meaningful to say that a Bel is "only a scaling of power", which is a statement about the units of the two variables inside the logarithm. (And once we know the reference denominator that belongs to the definition, the numerator's units are known too, so the reference denominator is all we need.)

In Bayesian probability theory, there is a quantity known as the "evidence". It is defined as e(D|H) = 10 * log_10(O(D|H)), where O(D|H) is the odds of some data, D, given the hypothesis, H. The odds are the ratio of the probability of the data given that H is true to the probability of the data given that H is false: O(D|H) = P(D|H) / P(D|NOT(H)). Taking the logarithm of the odds lets us add terms instead of multiplying probability ratios when we divide D into subsets, so we can construct systems that reason through additive increases or decreases in evidence as new data "arrives" in some sequence.

The advantage of representing the evidence in dB is that we often deal with changes in odds that are awkward to represent in decimal, such as the difference between 1000:1 (probability ~0.999, or an evidence of 30dB) and 10000:1 (probability ~0.9999, or 40dB).

This use of evidence has been around at least since the 60s. For example, you can find it in Chapter 4 of "Probability Theory: The Logic of Science" by E.T. Jaynes.
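(A minimal Python sketch of this evidence bookkeeping; the function names and the likelihood numbers are made up for illustration:)

    import math

    def evidence_db(p):
        """Evidence e = 10*log10(odds), where odds = p / (1 - p)."""
        return 10 * math.log10(p / (1 - p))

    def prob_from_db(e):
        """Invert: recover a probability from evidence in dB."""
        odds = 10 ** (e / 10)
        return odds / (1 + odds)

    # Sequential updating: each independent data subset D_i contributes an
    # additive term 10*log10( P(D_i|H) / P(D_i|not H) ) to the total evidence.
    e = evidence_db(0.5)  # 1:1 prior odds -> 0 dB
    for p_given_h, p_given_not_h in [(0.9, 0.1), (0.8, 0.4), (0.7, 0.5)]:
        e += 10 * math.log10(p_given_h / p_given_not_h)

    print(e, prob_from_db(e))   # ~14 dB, probability ~0.96

    # 1000:1 odds is 30 dB and 10000:1 is 40 dB: evenly spaced on the dB scale.
    print(evidence_db(0.999))   # ~30 dB
    print(evidence_db(0.9999))  # ~40 dB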
kazinator a day ago
(Funny you should mention that: while writing my grandparent comment above I vaguely ruminated about some kind of example involving standard deviation and variance, since the two are linked by squaring.)

Even if there isn't a +10dB(stddev), logarithmic graphs are a thing in many disciplines; you just refer to the axes as "log <whatever>". They show up any time you are dealing with data that has a wide dynamic range, especially with scale-invariant patterns.

Back in the realm of electronics and signal processing, we commonly apply a logarithm to the frequency domain, for Bode plots and whatnot. I've not heard of a word being assigned to the log f axis; it's just log f.
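(A minimal sketch of that convention, using a made-up first-order low-pass filter: the magnitude axis gets a name, dB, while the frequency axis is just "log f".)

    import numpy as np
    import matplotlib.pyplot as plt

    # First-order RC low-pass with a 1 kHz cutoff (illustrative values).
    f = np.logspace(1, 6, 500)         # 10 Hz to 1 MHz
    fc = 1e3
    H = 1 / (1 + 1j * f / fc)          # transfer function H(jf)

    mag_db = 20 * np.log10(np.abs(H))  # the magnitude gets a unit name: dB

    plt.semilogx(f, mag_db)            # the x axis is just "log f", no unit name
    plt.xlabel("f (Hz), log scale")
    plt.ylabel("|H| (dB)")
    plt.title("Bode magnitude plot")
    plt.grid(True, which="both")
    plt.show()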