LegionMammal978 5 days ago

How my high-school calculus textbook did it was to first define ln(x) so that ln(1) = 0 and d/dx ln(x) = 1/x, then take exp(x) as the inverse function of ln(x), and finally set e = exp(1). It's definitely a bit different from the exp-first formulation, but it does do a good job connecting the natural logarithm to a natural definition. (It's an interesting exercise to show, using only limit identities and algebraic manipulation, that this is equivalent to the usual compound-interest version of e.)
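
A quick numerical sketch of that equivalence, in Python (illustrative only, not from the textbook): with exp defined as the inverse of ln and e = exp(1), the compound-interest limit (1 + 1/n)^n should approach the same number.

    import math

    # (1 + 1/n)**n creeps up toward e = exp(1) as n grows
    for n in (10, 1_000, 100_000, 10_000_000):
        print(n, (1 + 1/n) ** n)
    print("math.e =", math.e)   # 2.718281828...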

jcranmer 5 days ago | parent | next [-]

That's how my textbook did it as well (well, it defined e as ln(e) = 1, but only because it introduced e before exp).

The problem with this approach is that, since we were already introduced to exponents and logarithms in algebra but via different definitions, it always left this unanswered question in my head about how we knew these two definitions were the same, since everyone quickly glossed over that fact.

LegionMammal978 5 days ago | parent | next [-]

I suppose the method would be to derive ln(xy) = ln(x) + ln(y) and the corresponding exp(x + y) = exp(x)exp(y), then see how this lets exp(y ln(x)) coincide with repeated multiplication for integer y. Connecting this to the series definition of exp(x) would also take some work, but my textbook wasn't very big on series definitions in general.
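
A small check of that last step, in Python (illustrative): for a positive integer y, exp(y ln(x)) agrees with multiplying x by itself y times, which is what the addition law exp(a + b) = exp(a)exp(b) guarantees.

    import math

    x, y = 1.7, 5
    repeated = 1.0
    for _ in range(y):       # x multiplied by itself y times
        repeated *= x
    print(math.exp(y * math.log(x)), repeated)   # both ~14.19857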

dawnofdusk 4 days ago | parent | prev | next [-]

>The problem with this approach is that, since we were already introduced to exponents and logarithms in algebra but via different definitions, it always left this unanswered question in my head about how we knew these two definitions were the same, since everyone quickly glossed over that fact.

They just shouldn't be taught in algebra. There, one is thinking about how to extend the definition of exponentiation from the integers to the real numbers. But thinking about continuous extensions of a function on the integers to the real line is really something that should be saved for calculus.

ogogmad 5 days ago | parent | prev [-]

Did you eventually realise that the expression a^b should be understood to "really" mean exp(b * ln(a)), at least in the case that b might not be an integer?

I think even in complex analysis, the above definition a^b := exp(b ln(a)) makes sense, since the function ln() admits a Riemann surface as its natural domain and the usual complex numbers as its codomain.
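
As a concrete instance of a^b := exp(b ln(a)) away from the reals, here is a Python sketch using the principal branch that cmath provides (so it sidesteps the Riemann-surface subtlety):

    import cmath

    a, b = 1j, 1j
    print(cmath.exp(b * cmath.log(a)))   # i^i ~ 0.20788, i.e. e**(-pi/2)
    print(a ** b)                        # Python's ** gives the same principal value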

[EDIT] Addressing your response:

> Calculus glosses over the case when a is negative

The Riemann surface approach mostly rescues this. When "a" is negative, and b is 1/3 (for instance), choose "a" = (r, theta) = (|a|, 3 pi). This gives ln(a) = ln |a| + i (3 pi). Then a^b = exp((ln |a| + i 3 pi) / 3) = exp(ln |a|/3 + i pi) = -|a|^(1/3), as desired.

Notice though that I chose to represent "a" using theta=3pi, instead of let's say 5pi.
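
A Python sketch of that computation, with a = -8 as a stand-in: picking the angle 3 pi instead of the principal pi makes the cube root come out real.

    import cmath, math

    a_abs = 8.0                                   # |a| for a = -8
    log_a = math.log(a_abs) + 3j * math.pi        # ln(a) on the theta = 3*pi sheet
    print(cmath.exp(log_a / 3))                   # ~ -2, the real cube root
    print(cmath.exp((math.log(a_abs) + 1j * math.pi) / 3))   # theta = pi gives ~ 1+1.732j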

LegionMammal978 5 days ago | parent | next [-]

I see what GP's point is: high-school-level calculus generally restricts itself to real numbers, where the logarithm is simply left undefined for nonpositive arguments. After all, complex analysis has much baggage of its own, and you want to have a solid understanding of real limits, derivatives, integrals, etc. before you start looking into limits along paths and other such concepts.

Even then, general logarithms become messy. It's easy to say "just take local segments of the whole surface" in the abstract, but any calculator will have to make some choice of branch cuts. E.g., clearly (−1)^(1/3) = −1 for any sane version of exponentiation on the reals, but many calculators will spit out the equivalent of (−1)^(1/3) = −e^(4πi/3) instead.
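
A quick illustration of those differing branch choices (Python/NumPy behavior as I understand it; other calculators make their own choices):

    import cmath
    import numpy as np

    print((-8) ** (1/3))                  # CPython returns a complex result: ~ 1+1.732j (principal root)
    print(cmath.exp(cmath.log(-8) / 3))   # same principal root via exp(b*ln(a))
    print(np.cbrt(-8.0))                  # -2.0: the real cube root, a deliberate branch choice
    # math.pow(-8, 1/3) simply raises ValueError, since the real log is undefined there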

(Just in general, analytic continuation only makes sense in the abstract realm. If you try doing it numerically to extend a series definition, you'll quickly find out how mind-bogglingly unstable it is. I think there was one paper that showed you need an exponential number of terms and exponentially many bits of accuracy w.r.t. the number of steps. Not even "it's 2025, we can crank it out to a billion bits" can save you from that.)

selimthegrim 5 days ago | parent [-]

I was once a contractor for TI (and we wrote the same subroutines for Casio) so I can actually answer this. See my story here: https://news.ycombinator.com/item?id=6017670

selimthegrim 5 days ago | parent [-]

And of course by Muphry's law I managed to confuse real and principal roots in that answer. -1 is not the principal cube root of -1.

jcranmer 5 days ago | parent | prev [-]

The problem is a^b := exp(b ln(a)) sort of breaks down when a is negative, which is a case that is covered in algebra class but glossed over in calculus.

dawnofdusk 4 days ago | parent | next [-]

It doesn't break down; one just needs the complex logarithm. If you ignore complex numbers, it breaks down in both cases. If you allow complex numbers, it works in both cases.

ogogmad 3 days ago | parent [-]

No, using complex numbers alone DOES NOT work. To really allow complex numbers, you also need Riemann surfaces. The function "ln" has type ln: R -> CC where "R" denotes the Riemann surface corresponding to the natural domain of ln, and "CC" denotes the complex numbers. See here for details: https://en.wikipedia.org/wiki/Complex_logarithm#The_associat...

dawnofdusk an hour ago | parent [-]

You can also allow it to be multi-valued and consider a principal branch when needed, the same as we do when we discuss roots of monomials in algebra. The two situations are identical (as they must be, because logarithms generalize roots).
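
A minimal sketch of the multi-valued view (Python, illustrative): every determination of log(-8) differs by 2*pi*i*k, and exp(log_k(-8)/3) runs through all three cube roots as k varies, with k = 0 being the principal branch.

    import cmath, math

    a = -8
    for k in range(3):
        log_k = cmath.log(a) + 2j * math.pi * k   # k-th branch of the logarithm
        print(k, cmath.exp(log_k / 3))
    # k = 0: ~ 1+1.732j (principal root); k = 1: -2 (the real root); k = 2: ~ 1-1.732j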

ogogmad 5 days ago | parent | prev [-]

I think this approach is the most logically "efficient". You can phrase it as defining ln(x) to be the integral of 1/t from 1 to x. Maybe not the most intuitive, though.

Interestingly, a similar approach gives the shortest proof that exp(x) and ln(x) are computable functions (since integration is a computable functional, thanks to interval arithmetic), and therefore that e = exp(1) is a computable real number.
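
A rough sketch of what such a computation looks like (Python, with made-up names like ln_enclosure; integer arithmetic with directed rounding stands in for proper interval arithmetic): rigorous lower/upper Riemann sums for the integral of 1/t pin down ln(x), and two such enclosures already give 2.718 < e < 2.719.

    SCALE = 10**12   # all bounds are integers scaled by 10**12

    def ln_enclosure(x_num, x_den, n):
        # Returns (lo, hi) with lo/SCALE <= ln(x_num/x_den) <= hi/SCALE for x > 1,
        # from lower/upper Riemann sums of the integral of 1/t over [1, x] with n pieces.
        w_num, w_den = x_num - x_den, x_den * n      # width (x - 1)/n as a fraction
        lo = hi = 0
        for k in range(n):
            t_left = w_den + k * w_num               # numerator of 1 + k*w (denominator w_den)
            t_right = w_den + (k + 1) * w_num        # numerator of 1 + (k+1)*w
            # 1/t is decreasing: right endpoint gives a lower bound, left an upper bound;
            # the scaled contribution w/t is rounded down for lo and up for hi
            lo += (SCALE * w_num) // t_right
            hi += -((-SCALE * w_num) // t_left)
        return lo, hi

    lo1, hi1 = ln_enclosure(2718, 1000, 20000)   # encloses ln(2.718)
    lo2, hi2 = ln_enclosure(2719, 1000, 20000)   # encloses ln(2.719)
    print(hi1 < SCALE < lo2)                     # True: ln(2.718) < 1 < ln(2.719), so 2.718 < e < 2.719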

LegionMammal978 5 days ago | parent [-]

Yeah, the hairiest part is probably the existence and uniqueness of the antiderivative, followed by the existence of an inverse of ln so that exp(1) is defined. In fact, I can't quite recall whether the book defined it as a Riemann integral or an antiderivative, but of course it had a statement of the FTC, which would connect the two. (It was just a high-school textbook, so it tended to gloss over the finer points of existence and uniqueness.)