> I'm saying that the origin of complex numbers is the ability to do arbitrary rotations and scaling through multiplication, and that i being the sqrt of -1 is the emergent property.

Not true historically -- the origin goes back to Cardano solving cubic equations. But that point aside, it seems like you are trying to find something like "the true meaning of complex numbers," basing your judgement on some mix of practical application and what seems most intuitive to you. I think that's fruitless. The essence lies precisely in the equivalence of the various conceptions by means of proof. "i" as a way "to do arbitrary rotations and scaling through multiplication", or as a way to give the solution space of polynomials closure, or as the equivalence of Taylor series, etc -- these are all structurally the same mathematical "i". So "i" is all of these things, and all of these things are useful depending on what you're doing. Again, by what principle do you give priority to some uses over others?
▲ | ActorNightly 3 hours ago | parent [-]

> "Rotations fell out of the structure of complex numbers. They weren't placed there on purpose. If you want to rotate things there are usually better ways."

I mean, the derivation to rotate things with complex numbers is pretty simple to prove. If you convert to Cartesian coordinates, the rotation is a matrix operation, which you have to compute from r and theta. And I'm sure you know that for x and y, the rotation matrix maps to the new vector x', y' via:

x' = cos(theta)*x - sin(theta)*y
y' = sin(theta)*x + cos(theta)*y

However, like you said, say you want to have some representation of rotation using only 2 parameters instead of 4, and simplify the math. You can define (xr, yr) in the same coordinates as the original vector. To compute theta, you would need arctan(yr/xr), which, plugged back into sin and cos in the original rotation matrix, gives you back xr and yr. Assuming unit vectors:

x' = xr*x - yr*y
y' = yr*x + xr*y

The only trick you need is to take care of the negative sign on the upper-right corner term. So you notice that if you just mark the y components as i, and when you see i*i you take that to be -1, everything works out. So overall, all of this is just construction, not emergence.
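The equivalence being described is easy to check numerically; here is a minimal Python sketch (the angle and test point are arbitrary choices, not from the thread), comparing the 2x2 rotation matrix against multiplication by the unit complex number (cos theta, sin theta):

```python
import cmath
import math

theta = 0.7          # arbitrary rotation angle
x, y = 3.0, 4.0      # arbitrary point to rotate

# Rotation via the 2x2 rotation matrix
xp = math.cos(theta) * x - math.sin(theta) * y
yp = math.sin(theta) * x + math.cos(theta) * y

# Same rotation via complex multiplication by (cos theta + i sin theta),
# i.e. the (xr, yr) tuple from the comment above
z = complex(x, y) * complex(math.cos(theta), math.sin(theta))

assert math.isclose(xp, z.real)
assert math.isclose(yp, z.imag)
```

Both paths land on the same rotated point, which is exactly the "mark the y component as i" bookkeeping described above.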
▲ | srean 2 hours ago | parent [-]

Yes, it's simple, and I agree with almost everything except that arctan bit (it loses information, but that's an aside). But all of that is not about the point I was trying to convey.

What I showed was: if you define addition of tuples in a certain, fairly natural way, and then define multiplication on the same tuples in such a way that multiplication and addition follow the distributive law (so that you can do polynomials with them), then your hands are forced to define multiplication in a very specific way, just to ensure distributivity. [To be honest, there is another sneaky way to do it if the rules are changed a bit, by using reflection matrices.]

Rotation is so far nowhere in the picture in our desiderata; we just want the distributive law to apply to the multiplication of tuples. That's it. But once I do that, lo and behold, this multiplication has exactly the same structure as multiplication by rotation matrices (emergence? or, equivalently, recognition of the consequences of our desire).

In other words, these tuples have secretly been the (scaled) (cos theta, sin theta) tuples all along, although when I had invited them to my party I had not put a restriction on them that they have to be related to theta via these trig functions. Or, in other words, the only tuples that have distributive addition and multiplication are the (scaled) (cos theta, sin theta) tuples, but when we were constructing them there was no notion of theta, just the desire to satisfy a few algebraic relations (distributivity of addition and multiplication).
▲ | ActorNightly 2 hours ago | parent [-]

I just don't like this characterization of

> "How shall I define multiplication, so that multiplication so defined is a group by itself and interacts with the addition defined earlier in a distributive way. Just the way addition and multiplication behave for reals."

which eventually becomes

> "Ah! It's just scaled rotation"

and the implication that it's emergent. It's like you have a set of objects, you define operations on those objects that have the properties of rotations baked in (because that is the only way that (0, 1) * (0, 1) = (-1, 0) ever works out in your definition), and then you are surprised that you get something that behaves like rotation. Meanwhile, when you define other "multiplicative"-like operations on tuples, namely the dot and cross product, you don't get rotations.
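The contrast being drawn here can be made concrete: on the same pair of tuples, complex-style multiplication produces the sign flip (a quarter turn applied twice), while the dot product just collapses to a scalar. A minimal sketch:

```python
# Complex-style multiplication: (0,1)*(0,1) = (-1,0), i.e. i*i = -1,
# a 90-degree rotation applied twice
def cmul(p, q):
    (a, b), (c, d) = p, q
    return (a*c - b*d, a*d + b*c)

# Dot product: collapses two tuples to a single scalar -- no rotation,
# no sign flip
def dot(p, q):
    return p[0]*q[0] + p[1]*q[1]

assert cmul((0, 1), (0, 1)) == (-1, 0)  # the rotation-like behavior
assert dot((0, 1), (0, 1)) == 1         # just a squared length
```

Whether the rotation was "baked in" by choosing `cmul` over `dot`, or "emerged" from requiring distributivity, is precisely the disagreement in this thread.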
▲ | srean 2 hours ago | parent [-]

> I just don't like this characterization

That's ok. It's a personal value judgement. However, the fact remains that rotations can "emerge" just from the desire to define addition and multiplication on tuples so as to be able to do polynomials with them -- which is more directly tied to the historical path of discovery: solving polynomial equations, starting with the cubic.
▲ | ActorNightly 39 minutes ago | parent [-]

> historical path of discovery, to solve polynomial equations, starting with cubic.

Even with polynomial equations that have complex roots, the idea of a rotation is baked into solving them. Rotation+scaling with complex numbers is basically an arbitrary translation through the complex plane. So when you are faced with a*x*x + b*x + c = 0, where a, b, and c all lie on the real number line, and you are trying to get to 0, often you can't do it with x on the number line, so you have to start with more dimensions and then rotate+scale so you end up at zero.

It's the same reason negative numbers exist. When you have positive numbers only, and you define addition and subtraction, expressions like 5 - 6 + 10 become impossible to compute left to right (the intermediate 5 - 6 has no positive value), even though all the values involved are positive. But when you introduce the space of negative numbers, even though they don't represent anything in reality, that operation becomes possible.
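A concrete instance of the "leave the number line to reach zero" idea: a quadratic with all-real coefficients whose roots are off the real line, solved with the quadratic formula via Python's `cmath` (the coefficients 1, 2, 5 are an arbitrary illustrative choice):

```python
import cmath

# x^2 + 2x + 5 = 0: real coefficients, but the parabola never touches zero
# on the real line (discriminant is negative)
a, b, c = 1, 2, 5
disc = cmath.sqrt(b*b - 4*a*c)   # sqrt of -16, only exists off the real line
r1 = (-b + disc) / (2*a)
r2 = (-b - disc) / (2*a)

# The roots live in the complex plane...
assert r1 == complex(-1, 2)
assert r2 == complex(-1, -2)
# ...yet substituting them back lands exactly on zero
assert a*r1*r1 + b*r1 + c == 0
assert a*r2*r2 + b*r2 + c == 0
```

Just as 5 - 6 forces a detour through the negatives, b*b - 4*a*c = -16 forces a detour through the extra dimension before the expression can come back to zero.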