andrewla 2 days ago

It's crazy that Linear Algebra is one of the deepest and most interesting areas of mathematics, with applications in almost every field of mathematics itself, plus practical applications in almost every quantitative field that uses math.

But it is SOOO boring to learn the basic mechanics. There's almost no way to sugar coat it either; you have to learn the basics of vectors and scalars and dot products and matrices and Gaussian elimination, all the while bored out of your skull, until you have the tools to really start to approach the interesting areas.

Even the "why does matrix multiplication look that way" is incredibly deep but practically impossible to motivate from other considerations. You just start with "well that's the way it is" and grind away until one day when you're looking at a chain of linear transformations you realize that everything clicks.

This "little book" seems to take a fairly standard approach, defining all the boring stuff and leading to Gaussian elimination. The other approach I've seen is to try to lead into it by talking about multi-linear functions and then deriving the notion of bases and matrices at the end. Or trying to start from an application like rotation or Markov chains.

It's funny because it's just a pedagogical nightmare to get students to care about any of this until one day two years later it all just makes sense.

srean 2 days ago | parent | next [-]

> Even the "why does matrix multiplication look that way" is incredibly deep but practically impossible to motivate from other considerations. You just start with "well that's the way it is" and grind away

In my experience it need not be like that at all.

One can start by defining and demonstrating linear transformations. Perhaps from graphics -- translation, rotation, reflection, etc. Show the students that these follow the definition of a linear transformation: that rotating a sum is the same as summing the rotated vectors.

[One may also mention that all differentiable functions (from vector to vector) are locally linear.]

Then you define adding two linear transformations using vector addition. Next you can define scaling a linear transformation. The point being that the combinations can themselves be expressed as linear transformations. No need to represent the vectors as R^d; geometric arrows and the parallelogram rule suffice.

Finally, one demonstrates composition of linear transformations and the fact that the result itself is a linear transformation.

The beautiful reveal is that this addition and composition of linear transformations behave almost the same as addition and multiplication of real numbers.

The addition associates and commutes. The multiplication associates but doesn't necessarily commute. Most strikingly, the multiplication distributes over the addition. It's almost like the algebra of real numbers!

Now, when you impose a coordinate system or choose a basis, the students can discover the matrix multiplication rule for themselves over a couple of days of playing with it -- look, rather than maintaining this long list of linear transformations, I can store their composition as a single linear transformation in the chosen basis.
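(A minimal numpy sketch of that reveal, with illustrative angles: composing two rotations as functions gives the same vector as multiplying their matrices first.)

    import numpy as np

    def rot(theta):
        # matrix of the linear map "rotate by theta" in the standard basis
        return np.array([[np.cos(theta), -np.sin(theta)],
                         [np.sin(theta),  np.cos(theta)]])

    A, B = rot(0.3), rot(0.5)
    v = np.array([2.0, 1.0])

    # applying B, then A, as functions matches applying
    # the single stored matrix A @ B
    assert np.allclose(A @ (B @ v), (A @ B) @ v)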

andrewla 2 days ago | parent | next [-]

> Perhaps from graphics -- translation, rotation, reflection

Maybe ... but the fact that you included translation in the list of linear operations seems like a big red flag. Translation feels very linear but it is emphatically not [1]. This is not intended to be a personal jab; just that the intuitions of linear algebra are not easy to internalize.

Adding linear transformations is similarly scary territory. You can multiply rotations to your heart's content but adding two rotations gives you a pretty funky object that does not have any obvious intuition in graphics.

[1] I wouldn't jump into projective or affine spaces until you have the linear algebra tools to deal with them in a sane way, so approaching it from that direction strikes me as a bit scary.

srean 2 days ago | parent | next [-]

Mea culpa about translation.

For a moment I was thinking in homogeneous coordinates - that's not the right thing to do in the introductory phase.

Thanks for catching the error and making an important point. I am letting my original comment stand unedited so that your point stands.

About rotations though, one need not let the cat out of the bag and explain what the addition of two rotations is.*

One simply defines addition of two linear operators as the addition of the vectors that each would have individually produced. This can be demonstrated geometrically with arrows, without fixing coordinates.

* In 2D it's a scaled rotation.
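(A quick numpy check of that footnote: the sum of two 2D rotations is the rotation by the mean angle, scaled by 2*cos((a - b)/2). The angles here are arbitrary.)

    import numpy as np

    def rot(theta):
        return np.array([[np.cos(theta), -np.sin(theta)],
                         [np.sin(theta),  np.cos(theta)]])

    a, b = 0.7, 0.2
    scale = 2 * np.cos((a - b) / 2)
    assert np.allclose(rot(a) + rot(b), scale * rot((a + b) / 2))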

dreamcompiler 2 days ago | parent | prev | next [-]

To me the fact that translation in n dimensions is nonlinear but it becomes linear if you embed your system in n+1 dimensions is one of the coolest results from linear algebra, and it's why you need 4x4 matrices to express the full set of transformations possible in 3-space.
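(A minimal numpy sketch of the same trick one dimension down: embed 2D points as (x, y, 1) and translation becomes an ordinary 3x3 matrix.)

    import numpy as np

    T = np.array([[1.0, 0.0, 3.0],    # translate x by 3
                  [0.0, 1.0, 5.0],    # translate y by 5
                  [0.0, 0.0, 1.0]])

    p = np.array([2.0, 1.0, 1.0])     # the 2D point (2, 1) in homogeneous form
    print(T @ p)                      # [5. 6. 1.] -- the point (5, 6)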

ferfumarma 2 days ago | parent | prev [-]

Can you elaborate on your point that translation is not linear? The OP agrees with you, so clearly your point is correct, but I personally just don't understand it. Isn't it true that translation is linear within the coordinate space of your model, even if the final distance traveled within a projected camera view is not?

edit to add: (I think your point relates only to the projection system, and not a pure, unprojected model; I just want to make sure I understand because it seems like an important point)

srean 2 days ago | parent | next [-]

All linear operators map the origin to the origin. But translation applied to the origin shifts it. So translation cannot be linear.

Let's take another approach.

Take a point p that's the sum of vectors a and b, that is

p = a + b.

Now, if translation were a linear transformation, then translating p (say along the x-axis by 1 unit) would be equivalent to applying the same translation to a and b separately and then summing them. But the latter ends up translating by twice the amount. In other words,

p + t ≠ (a + t) + (b + t) = p + 2t.

So translation is not a linear operator on this vector space.
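(The same failure, checked numerically -- a quick numpy sketch with arbitrary a, b, and t:)

    import numpy as np

    a, b, t = np.array([1.0, 0.0]), np.array([0.0, 2.0]), np.array([1.0, 0.0])
    T = lambda v: v + t     # translation by t

    print(T(a + b))         # [2. 2.] -- p + t
    print(T(a) + T(b))      # [3. 2.] -- p + 2t, not the same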

andrewla a day ago | parent | prev [-]

No, with projective geometry or affine geometry you can make translation into a linear operation. But in ordinary Euclidean space translation is not a linear operation.

The most obvious case where it fails is that it doesn't map zero to itself, and you can see the contradiction there:

    T(0 + 0) = T(0) = t
    T(0) + T(0) = t + t = 2 * t

cosmic_cheese 2 days ago | parent | prev | next [-]

If anybody is aware of materials that teach linear algebra via graphics as suggested here, I would be interested to hear about them. As someone who learns best through practical application, maths has been by far one of my greatest weak points, despite having written software for upwards of a decade. It's limiting in some scenarios and pure imposter syndrome fuel.

viewtransform 2 days ago | parent | next [-]

3Blue1Brown [Essence of linear algebra](https://www.youtube.com/playlist?list=PLZHQObOWTQDPD3MizzM2x...)

itchingsphynx 2 days ago | parent [-]

This series by Grant was very useful for reviewing, learning, and seeing the implications. Highly recommended.

te 2 days ago | parent | prev | next [-]

codingthematrix.com

rramadass a day ago | parent | prev | next [-]

https://news.ycombinator.com/item?id=45110857

AadiBugg 2 days ago | parent | prev [-]

[dead]

imtringued 2 days ago | parent | prev [-]

>The beautiful reveal is that this addition and composition of linear transformations behave almost the same as addition and multiplication of real numbers.

This is only beautiful if you already understand monoids, magmas and abelian half-groups (semigroups), and how they form groups. Also, we do not talk of linear transformations; we talk of group homomorphisms.

I don't know about anyone else, but I was taught linear algebra this way in the first semester and it felt like stumbling in a dark room and then having the lights turned on in the last week as if that was going to be payback for all the toe stubbing.

srean 2 days ago | parent [-]

It can be beautiful with less.

All that needs to be demonstrated is that for real numbers + associates and commutes, that * associates and commutes, and, most satisfyingly, that these two operations interact through the distributive property.

Of course, it's more revealing and interesting if one has some exposure to groups and fields.

Do people encounter linear algebra in their coursework before that?

For us it came after coordinate/analytical geometry, where we had encountered the parallelogram law. So while doing LA we had some vague awareness that there's a connection. This connection solidified later.

We also had an alternative curriculum where matrices were taught in 9th grade as a set of rules without any motivation whatsoever. "This is the rule for adding, this one's for multiplication, see you at the test"

tptacek 2 days ago | parent | prev | next [-]

I didn't think any part of linear algebra was boring. I was hooked from the moment I saw Ax=b => x = b/A. Gaussian elimination is a blast, like an actually-productive Sudoku puzzle, and once you have it down you can blaze through the first 2/3rds of an undergrad linear algebra course. I don't consciously try to gain automaticity with math subjects, but matrix-column multiplication I got pretty quickly and now I just have it.
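(In numpy terms, the "b / A" move is np.linalg.solve, which runs the elimination for you -- a minimal sketch:)

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 3.0]])
    b = np.array([3.0, 5.0])

    x = np.linalg.solve(A, b)    # effectively "x = b / A", via LU elimination
    assert np.allclose(A @ x, b)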

I learned from Strang, for what it's worth, which is basically LU, spaces, QR, then spectral.

I am really bad at math, for what it's worth; this is just the one advanced math subject that intuitively clicked for me.

dpflan 2 days ago | parent | next [-]

MIT OCW is an amazing resource -- anyone can learn from Strang, and his material is a goldmine.

He also created a course on using Linear Algebra for machine learning:

> Linear algebra concepts are key for understanding and creating machine learning algorithms, especially as applied to deep learning and neural networks. This course reviews linear algebra with applications to probability and statistics and optimization–and above all a full explanation of deep learning.

- MIT OCW Course: Matrix Methods in Data Analysis, Signal Processing, and Machine Learning (https://ocw.mit.edu/courses/18-065-matrix-methods-in-data-an...)

- The text book website: Linear Algebra and Learning from Data (2019) https://math.mit.edu/~gs/learningfromdata/

- The Classic Linear Algebra Course: https://ocw.mit.edu/courses/18-06-linear-algebra-spring-2010...

tenacious_tuna 2 days ago | parent | prev | next [-]

> I didn't think any part of linear algebra was boring.

My formal linear algebra course was boring as hell, to me. The ~4 lectures my security prof dedicated to explaining just enough to do some RSA were absolutely incredible. I would pay lots of money for a hands-on what-linalg-is-useful-for course with practical examples like that.

tptacek 2 days ago | parent | next [-]

Try this. :)

https://kel.bz/post/lll/

(If you work through the prerequisites and use "understanding this post" as a sort of roadmap of what you actually need to know, this gets you about 2/3rds through undergraduate linear algebra, and you can skim through nullspaces --- all in the service of learning a generally useful tool for attacking cryptosystems).

srean 2 days ago | parent [-]

Thanks for the notes. This is marvellous. I do not work on, or have interest in cryptography algorithms, but this is such an interesting read.

tptacek 2 days ago | parent [-]

Kelby Ludwig is such a talented explainer, it upsets me.

sureglymop 2 days ago | parent | prev [-]

There's a book called ILA (Interactive Linear Algebra) that I found really good: https://textbooks.math.gatech.edu/ila/

andrewla 2 days ago | parent | prev | next [-]

I haven't looked at Strang's approach.

The "x = b / A" is a bit of a gut-punch on first look because my mind immediately tells me all the ways that that does not work. It makes a some sense once I take a second to think about it, and I can see why it would make you want to jump in a little deeper, but matrices being non-commutative makes me cringe at the idea of a division operator which does not very very clearly spell out where it appears in the chain.

Ax = b is all well and good, but AxA^-1 = bA^-1 is not meaningful; the application/composition order is very important.
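(A short numpy illustration of why the side matters -- the inverse has to be applied on the same side for the cancellation to happen:)

    import numpy as np

    A = np.array([[1.0, 1.0],
                  [0.0, 1.0]])
    b = np.array([2.0, 3.0])
    Ainv = np.linalg.inv(A)

    x = Ainv @ b                   # A^-1 applied on the left solves Ax = b
    assert np.allclose(A @ x, b)
    print(b @ Ainv)                # multiplying on the other side gives a different vector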

2 days ago | parent [-]
[deleted]

jdshaffer 2 days ago | parent | prev [-]

Didn't know about Gaussian elimination until today. Really cool, and really useful! Thanks for sharing!

boppo1 18 hours ago | parent | prev | next [-]

>Even the "why does matrix multiplication look that way" is incredibly deep but practically impossible to motivate from other considerations. You just start with "well that's the way it is" and grind away until one day when you're looking at a chain of linear transformations you realize that everything clicks.

Can you expand on your experience with this? I do some graphics programming so I understand that applying matrix transformations works, and I've seen the 3blue1brown 'matrices are spreadsheets' explanation (luv me sum spreadsheets), but the intuition still isn't really there. The 'incredibly deep "why matrix multiplication looks that way"' is totally lost on me.

hiAndrewQuinn 2 days ago | parent | prev | next [-]

Linear algebra was such a tedious thing to learn that I skipped over it to abstract algebra and doubled back once I had some kind of minimally interesting framework to set it against. Normally I think this is a foolish way to do things, but sometimes things are just so dull you have to take the hard route to power through at all.

Sharlin 2 days ago | parent | prev | next [-]

For anyone who’s interested in graphics programming and/or is a visual learner/thinker, there’s an incredibly motivating and rewarding way to learn the basics of linear algebra. (And affine algebra, which tends to be handwaved away, unfortunately. I’m writing a MSc thesis about this and related topics.)

andrewla 2 days ago | parent | next [-]

To a degree I think this is true, but it requires (at least in my experience) that you have an intrinsic grasp of trigonometry for it to make sense. If you have some complex analysis and e^(i*theta) then you can skirt the problem for a bit, but if you're like me and have to break out soh-cah-toa whenever you break down a triangle, then this method ends up being pretty tedious too.

Sharlin 2 days ago | parent [-]

I’m not sure what you mean. Beyond rotation matrices, there’s really only trig involved in graphics if you actively want it.

andrewla 2 days ago | parent [-]

Maybe I was making unwarranted assumptions about the nature of your way to learn linear algebra. The approaches that I've seen invariably have to produce a sample matrix, and rotation is really the best example. The rotation matrix is going to have sines and cosines, and understanding what that means is not trivial; and even now if you asked me to write a rotation matrix I would have to work it out from scratch. Easy enough to do mechanically but I have no intuitions here even now.

Sharlin 2 days ago | parent | next [-]

Rotation matrices are somewhat mysterious to the uninitiated, but so is matrix multiplication until it "clicks". Whether it ever clicks is a function of the quality of the learning resource (I certainly do not recommend trying to learn linalg via 3D graphics by just dabbling without a good graphics-oriented textbook or tutorial – that usually doesn’t end well).

Anyway, I believe that it's perfectly possible to explain rotation matrices so that it "clicks" with high probability, as long as you understand the basic fact that (cos a, sin a) is the point that you get when you rotate the point (1, 0) by angle a counter-clockwise about the origin (that's basically their definition!). Involving triangles at all is fully optional.
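(In code, that definition is the whole construction -- the columns of the rotation matrix are simply the rotated basis vectors. A minimal numpy sketch with an arbitrary angle:)

    import numpy as np

    a = 0.9
    col1 = np.array([np.cos(a), np.sin(a)])    # where (1, 0) lands
    col2 = np.array([-np.sin(a), np.cos(a)])   # where (0, 1) lands
    R = np.column_stack([col1, col2])

    assert np.allclose(R @ np.array([1.0, 0.0]), col1)
    assert np.allclose(R @ np.array([0.0, 1.0]), col2)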

srean 2 days ago | parent | prev [-]

In 2D there's an alternative. One can rotate purely synthetically -- by that I mean with compass and straightedge. This avoids getting into transcendentals.

Of course I am not suggesting building synthetic graphics engines :) but the synthetic approach is sufficient to show that the operation is linear.

greymalik 2 days ago | parent | prev | next [-]

> there’s an incredibly motivating and rewarding way to learn the basics of linear algebra

What is it?

tptacek 2 days ago | parent [-]

They mean graphics programming; learning graphics programming will give you intuitions for a lot of linear algebra stuff.

bmacho 2 days ago | parent | prev | next [-]

There is no such thing as affine algebra: https://en.wikipedia.org/wiki/Affine_algebra

Sharlin 2 days ago | parent [-]

There are affine spaces, and there is an algebra of the elements of affine spaces. That is, rules that describe how the elements can be manipulated. There are affine transforms, affine combinations, affine bases, and so on, all of them analogous to the corresponding concepts in linear algebra.

(The term "algebra" can also refer to a particular type of algebraic structure in math, but that’s not what I meant.)

cassepipe 2 days ago | parent | prev [-]

...

What is this incredibly motivating way? Please do tell.

bmacho 2 days ago | parent | next [-]

Linear algebra has motivations and applications everywhere, since its main defining concepts, 'addition' and 'transformations that keep sums', are everywhere. So a linear algebra course is a huge pile of disjointed facts. It is not the kind of material that can have a single motivation behind it.

But the good news is that if you are only interested in for example geometry, game theory, systems of linear equations, polynomials, statistics, etc, then you can skip 80% of the content of linear algebra books. You don't have to read them, understand them, memorize them. You'll interact with a tiny part of linear algebra anyway, and you don't have to do that upfront.

Sharlin 2 days ago | parent | prev [-]

Well, graphics programming itself. Learning while doing, preferably from some good resource written with graphics in mind. 2D is fine for the basics, 3D is more challenging and potentially confusing but also more rewarding.

joshmarlow 2 days ago | parent | prev | next [-]

The older I get the more convinced I am that "math is not hard; teaching math is hard".

rramadass 2 days ago | parent | next [-]

This is far truer than most people realize.

Because there is so much to teach/learn, "Modern Mathematics" syllabi have devolved into giving students merely an exposure to all possible mathematical tools in an abstract manner, disjointly, with no unifying framework and no motivating examples to explain the need for such mathematics. Most teachers are parrots and have no understanding/insight that they can convey to students, and so the system perpetuates itself in a downward spiral.

The way to properly teach/learn mathematics is to follow V.I. Arnold's advice, i.e. On Teaching Mathematics - https://dsweb.siam.org/The-Magazine/All-Issues/vi-arnold-on-... Ground all teaching in actual physical phenomena (in the sense of existence with a purpose) and then show the invention/derivation of abstract mathematics to explain such phenomena. Everything is "Applied Mathematics"; there is no "Pure Mathematics", which is just another name for "Abstract Mathematics", i.e. generalizing methods of application to different and larger classes of problems.

gsinclair a day ago | parent | prev [-]

As a maths teacher who is interested in (and sufficiently skilled at) programming, I find teaching programming to be very hard, even to interested students.

Teaching maths to interested students is not hard (for me).

jcranmer 2 days ago | parent | prev | next [-]

> Even the "why does matrix multiplication look that way" is incredibly deep but practically impossible to motivate from other considerations.

It's only difficult if you are wedded to a description of matrices and vectors as seas of numbers that you grind your way through without trying to instill a fuller understanding of what those numbers actually mean. The definition makes a lot more sense when you see a matrix as a description of how to convert one set of basis vectors to another set of basis vectors, and for that, you first need to understand how vectors are described in terms of basis vectors.

nh23423fefe 2 days ago | parent [-]

I don't agree with this. Matrices don't convert sets of basis vectors to sets of basis vectors. What would you say about singular matrices, for example?

The natural motivation of matrices is as representing systems of equations.

jcranmer 2 days ago | parent | next [-]

If I write a vector v = [1, 3, 2], what I am actually saying is that v is equal to 1 * e₁ + 3 * e₂ + 2 * e₃ for three vectors I have decided on ahead of time that form an orthonormal basis of the corresponding vector space.

If I write a matrix, say, this:

  [[1  2]
   [3  4]
   [5  6]]
What I am doing is describing a transformation of one vector space into another, by describing how the basis vectors of the first vector space are represented as linear combinations of the basis vectors of the second vector space. Of course, the transformed vectors may not necessarily be a basis of the latter vector space.
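(A quick numpy check of that reading, using the matrix above: its columns are exactly where the two basis vectors of the source space land.)

    import numpy as np

    A = np.array([[1, 2],
                  [3, 4],
                  [5, 6]])
    e1, e2 = np.array([1, 0]), np.array([0, 1])

    assert np.array_equal(A @ e1, A[:, 0])   # image of e1 is the first column
    assert np.array_equal(A @ e2, A[:, 1])   # image of e2 is the second column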

> The natural motivation of matrices is as representing systems of equations.

That is useful for only a very few things about matrices, primarily Gaussian elimination and related topics. Matrix multiplication--which is what the original poster was talking about, after all--is something that doesn't make sense if you're only looking at it as a system of equations; you have to understand a matrix as a linear transformation to have it make sense, and that generally means you have to start talking about vector spaces.

nh23423fefe 2 days ago | parent [-]

"Doesn't make sense" is too strong, though.

If you have a system Ax = y and a system By = z, there exists a system (BA)x = z.

This system BA is naturally seen as the composition of the two systems of equations.

And the multiplication rule expresses the way to construct the new system's coefficients over x constrained by z.

The i-th equation of C has coefficients which are the evaluations of the i-th equation of B over the corresponding coefficients of the equations of A, summing over the repeated index j:

C_ik = B_ij A_jk

concretely

        A11 x1 + A12 x2 = y1
        A21 x1 + A22 x2 = y2

        and

        B11 y1 + B12 y2 = z1
        B21 y1 + B22 y2 = z2

        then

        B11 (A11 x1 + A12 x2) + B12 (A21 x1 + A22 x2) = z1
        B21 (A11 x1 + A12 x2) + B22 (A21 x1 + A22 x2) = z2

        rearrange and collect terms

        (B11 A11 + B12 A21) x1 + (B11 A12 + B12 A22) x2 = z1
        (B21 A11 + B22 A21) x1 + (B21 A12 + B22 A22) x2 = z2
The coefficients express the dot product rule directly.
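(A numpy check with arbitrary coefficients confirms that the collected coefficients are exactly the entries of B @ A:)

    import numpy as np

    A = np.array([[1.0, 2.0], [3.0, 4.0]])
    B = np.array([[5.0, 6.0], [7.0, 8.0]])

    # the coefficients collected by hand above
    C = np.array([[B[0,0]*A[0,0] + B[0,1]*A[1,0], B[0,0]*A[0,1] + B[0,1]*A[1,1]],
                  [B[1,0]*A[0,0] + B[1,1]*A[1,0], B[1,0]*A[0,1] + B[1,1]*A[1,1]]])
    assert np.array_equal(C, B @ A)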

griffzhowl 2 days ago | parent | prev [-]

There's no single best way to understand any of this, but the action of a matrix on the standard basis vectors is a totally reasonable place to start because of its simplicity, and then the action on any vector can be built out of that, because every vector is a linear combination of basis vectors.

nh23423fefe 11 hours ago | parent [-]

I don't agree, because this seems circular. You can't even define a matrix as something that acts on vectors meaningfully until you have some machinery.

Say you start with a set S and then make it a vector space V over a field K. Then by definition, linear combinations (and it's not an algebra, so nonlinear ones aren't even defined) are closed in V.

You can then define spanning sets and linear independence to get bases. From bases you can define coordinate vectors over K^n as isomorphic to V. Then, given some linear function f : V -> W, by definition f(v) = f(v^i * b_i) = v^i * f(b_i).

Only here can you even define a matrix meaningfully, as a tuple of coordinate vectors which are the images of some basis vectors.

Then you need to prove that what was function application of linear functions on vectors is the same as a new operation of multiplication of matrices with coordinate vectors.

And then to prove the multiplication rule (which is inherently coordinate-based) you are going to make the same argument I made in the sibling comment. But I could prove the rule directly by substitution, using only systems of linear equations as the starting point.

griffzhowl 10 hours ago | parent [-]

Where's the circularity?

What you're saying is fine as an abstract presentation, but I was talking about how students might initially come to learn about matrices, so just introducing column vectors as representing points in 2 and 3 dimensional space and how matrices transform them is fine.

Beginning with the field and vector space axioms might be fine for sophisticated students, but I don't think it would make for an optimal learning experience for most students. We also don't teach kids the Peano axioms before they learn to add and multiply.

nh23423fefe 8 hours ago | parent [-]

But the question was about deriving the multiplication rule. I said you could derive it from systems of equations directly and gave a proof.

> and how matrices transform them is fine

this is circular. You are introducing/assuming the multiplication rule right here. You can't then derive it.

dapper_bison17 2 days ago | parent | prev | next [-]

> This "little book" seems to take a fairly standard approach, defining all the boring stuff and leading to Gaussian elimination. The other approach I've seen is to try to lead into it by talking about multi-linear functions and then deriving the notion of bases and matrices at the end. Or trying to start from an application like rotation or Markov chains.

Which books or “non-standard” resources would you recommend then, that do a better job?

andrewla 2 days ago | parent [-]

I have yet to encounter an approach that is not boring. You just have to power through it. This approach seems as good as any.

Once you get to eigenvalues (in my opinion) things start to pick up in terms of seeing that linear spaces are actually interesting.

This approach sort of betrays itself when the very first section about scalars has this line:

> Vectors are often written vertically in column form, which emphasizes their role in matrix multiplication:

This is a big "what?" moment because we don't know why we should care about anything in that sentence. Just call it a convention and later on we can see its utility.

_mu 2 days ago | parent [-]

Maybe we can petition Paul Lockhart to do a book on Linear Algebra, I would definitely buy it.

jameshart 2 days ago | parent | prev | next [-]

What I find amazing is, given how important linear algebra is to actual practical applications, high school math still goes so deep on calculus at the expense of really covering even basic vectors and matrices.

Where vectors do come up it’s usually only Cartesian vectors for mechanics, and only basic addition, scalar multiplication and component decomposition are talked about - even dot products are likely ignored.

bee_rider 2 days ago | parent | next [-]

I think it was a brilliant and evil trick by the linear algebra folks.

Start the path at calculus. Naturally, this will lead to differential equations. Trick the engineers into defining everything in terms of differential equations.

The engineers will get really annoyed, because solving differential equations is impossible.

Then, the mathematicians swoop in with the idea of discretizing everything and using linear algebra to step through it instead. Suddenly they can justify all the million-by-millions matrices they wanted and everybody thinks they are heroes. Engineers will build the giant vector processing machines that they want.

rramadass a day ago | parent [-]

Ha, Ha, True dat :-)

ViscountPenguin 2 days ago | parent | prev | next [-]

That's very strange, where I live linear algebra was a significant portion of the highschool maths curriculum.

The actual presentation was terrible, I'll be lucky if I die before having to invert a matrix by hand again, but it was there.

JadeNB 2 days ago | parent | prev [-]

I think that, to be frank, it's a combination of (1) a curriculum developed before it was clear how ubiquitous linear algebra would become, and (2) the fact that it's a lot easier to come up with a standardized assessment for algorithmic calculus than for linear algebra, precisely because linear algebra is both conceptual and proof-based in a way that has been squeezed out of algorithmic calculus.

(I use algorithmic calculus to describe the high-school subject, and distinguish it from what in American universities is usually called "analysis," where one finally has the chance to make the acquaintance of the conceptual and proof-based aspects squeezed out of algorithmic calculus.)

ViscountPenguin 2 days ago | parent | prev | next [-]

I found the university level presentation of "Vector spaces -> Linear functions -> Matrices are isomorphic to linear functions" much more motivating than the rote mechanics I was taught in highschool, but it's hard to see if I would've had that appreciation without being taught the shitty way first.

jonahx 2 days ago | parent | prev | next [-]

> Even the "why does matrix multiplication look that way" is incredibly deep but practically impossible to motivate from other considerations.

Short, simple answer to that question by Michael Penn: https://www.youtube.com/watch?v=cc1ivDlZ71U

Another interesting treatment by Math the World: https://www.youtube.com/watch?v=1_2WXH4ar5Q&t=4s

There's no impenetrable mystery here. Probably just bad teaching you experienced.

Chinjut 2 days ago | parent | prev | next [-]

Why do you say it's practically impossible to motivate matrix multiplication? The motivation is that this represents composition of linear functions, exactly as you follow up by mentioning.

It's a disservice to anyone to tell them "Well, that's the way it is" instead of telling them from the start "Look, these represent linear functions. And look, this is how they compose".

andrewla 2 days ago | parent | next [-]

Sure, that's a way to approach it. All you have to do is stay interested in "linear functions" long enough to get there. It's totally possible -- I got there, and so did many many many other people (arguably everyone who has applied mathematics to almost any problem has).

But when I was learning linear algebra all I could think was "who cares about linear functions? It's the simplest, dumbest kind of function. In fact, in one dimension it's just multiplication -- that's the only linear function, and the class of scalar linear functions is completely specified by the factor that you multiply by". I stuck to it because that was what the course taught, and they wouldn't teach me multidimensional calculus without making me learn this stuff first, but it was months and years later when I suddenly found that linear functions were everywhere and I somehow magically had the tools and the knowledge to do stuff with them.

bananaflag 2 days ago | parent | next [-]

Yeah, concepts can make a student reject them with passion.

I remember in a differential geometry course, when we reached "curves on surfaces", I thought "what stupidity! what are the odds a curve lies exactly on a surface?"

ndriscoll 2 days ago | parent | prev | next [-]

Linear functions are the ones that we can actually wrap our heads around (maybe), and the big trick we have to understand nonlinear problems is to use calculus to be able to understand them in terms of linear ones again. Problems that can't be made linear tend to be exceptionally difficult, so basically any topic you want to learn is going to be calculus+linear algebra because everything else is too hard.

The real payoff though is after you do a deep dive and convince yourself there's plenty of theory and all of these interesting examples and then you learn about SVD or spectral theorems and that when you look at things correctly, you see they act independently in each dimension by... just multiplication by a single number. Unclear whether to be overwhelmed or underwhelmed by the revelation. Or perhaps a superposition.
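(The reveal in numpy form -- a minimal sketch: rotate into the coordinates the SVD hands you, and the matrix acts by plain per-axis scaling.)

    import numpy as np

    A = np.random.default_rng(0).normal(size=(3, 3))
    U, s, Vt = np.linalg.svd(A)

    # in the bases given by V and U, A scales the i-th direction
    # by the single number s[i]
    assert np.allclose(A, U @ np.diag(s) @ Vt)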

JadeNB 2 days ago | parent | prev [-]

> But when I was learning linear algebra all I could think was "who cares about linear functions? It's the simplest, dumbest kind of function. In fact, in one dimension it's just multiplication -- that's the only linear function, and the class of scalar linear functions is completely specified by the factor that you multiply by".

This seems to make it good motivation for an intellectually curious student—"linear functions are the simplest, dumbest kind of function, and yet they still teach us this new and exotic kind of multiplication." That's not how I learned it (I was the kind of obedient student who was interested in a mathematical definition because I was told that I should be), but I can't imagine that I wouldn't have been intrigued by such a presentation!

cassepipe 2 days ago | parent | prev [-]

Agree. The fact that it's just linear functions is what made it click for me.

BenFranklin100 2 days ago | parent | prev | next [-]

Probability and statistics falls into that category too. It's one of the more boring undergraduate math courses, but is mind-bogglingly useful in the real world.

(Basic probability / combinatorics is actually pretty cool, but both tend to be glossed over.)

2 days ago | parent | prev | next [-]
[deleted]

Sleaker 2 days ago | parent | prev | next [-]

I went through Khan Academy on linear algebra a long time ago because I wanted to learn how to write rendering logic. I was implementing things as I learned them, with immediate feedback for a lot of the material. It was probably the single most useful thing I learned then.

127 2 days ago | parent | prev | next [-]

I had the opposite experience when learning linear algebra as I was also doing 3D computer graphics at the time. It was super interesting and fun. I guess you just have to find an application for it.

hinkley 2 days ago | parent | prev | next [-]

There was a brief time when I understood how linear algebra was used to render 3d models but that's all gone now.

I wonder if these days that would be a better starting point.

srean 2 days ago | parent [-]

That's just the sad truth of the maxim -- use it or lose it. It can be quite disheartening. The good part is that it becomes easier to pick it up next time around. Happened to me so many times.

winwang 2 days ago | parent | prev | next [-]

I hated precalc, but I loved proof-based linear algebra, my first university math course (linalg + multivariate calc).

The simplicity(/beauty) of matrix multiplication still irks me though, in the sense of "wow, seriously? when you work it out, it really looks that simple?"

imtringued 2 days ago | parent | prev | next [-]

Actually, the calculation of matrix multiplication is extremely easy to understand if you show it done with a row and column vector first. Once your students understand dot products, they will recognize matrix multiplication as the mere pairwise calculation of dot products.
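(That is the whole rule in one line of numpy: entry (i, j) of AB is the dot product of row i of A with column j of B.)

    import numpy as np

    A = np.arange(6).reshape(2, 3)
    B = np.arange(12).reshape(3, 4)

    C = np.array([[A[i] @ B[:, j] for j in range(B.shape[1])]
                  for i in range(A.shape[0])])
    assert np.array_equal(C, A @ B)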

msgodel 2 days ago | parent | prev [-]

I don't know, maybe it's because I read the book on the side in highschool when I was supposed to be doing something else but I really loved Linear Algebra. Once I understood what it was I used matrix operations for everything. Vector spaces are such a powerful abstraction.