quietbritishjim 21 hours ago
> Big disadvantages of matlab:

I will add to that:

* It does not support true 1D arrays; you have to artificially choose them to be row or column vectors.

Ironically, the snippet in the article shows that MATLAB has forced them into this awkward mindset: as soon as they get a 1D vector, they feel the need to artificially make it into a 2D column. (BTW, (Y @ X)[:, np.newaxis] would be more idiomatic for that than Y @ X.reshape(3, 1), though I acknowledge it's not exactly compact.) They cleverly chose column concatenation as the last operation, hardly the most common matrix operation, to make it seem like it's very natural to want to choose between row and column vectors.

In my experience, writing matrix maths in NumPy is much easier thanks to not having to make this arbitrary distinction. "Is this 1D array a row or a column?" is just one less thing to worry about in NumPy. And I learned MATLAB first, so I don't think I'm saying that just because it's what I'm used to.
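To make the two spellings concrete: the snippet below uses hypothetical stand-ins for Y and X (the article's actual values aren't shown here) and checks that reshaping X into a column up front and promoting the 1D matvec result afterwards give the same 2D column.

```python
import numpy as np

Y = np.arange(6.0).reshape(2, 3)   # hypothetical 2x3 matrix
X = np.array([1.0, 2.0, 3.0])      # a true 1D array of length 3

a = Y @ X.reshape(3, 1)            # reshape X into an explicit column first
b = (Y @ X)[:, np.newaxis]         # matvec yields a 1D result; promote it after

print(a.shape, b.shape)            # (2, 1) (2, 1)
print(np.array_equal(a, b))        # True
```

The second form keeps X as a plain 1D array for the multiplication and only adds the axis at the end, which is why it reads as more idiomatic NumPy.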
D-Machine 20 hours ago | parent
> * it does not support true 1d arrays; you have to artificially choose them to be row or column vectors.

I despise Matlab, but I don't think this is a valid criticism at all. It simply isn't possible to do serious math with vectors that are ambiguously column vs. row, and this is in fact a constant annoyance with NumPy, one you have to resolve by checking the docs and/or running test lines in a REPL or debugger. The fact that you have developed arcane invocations like [:, np.newaxis] and routine .reshape calls is, I think, a clear indication that the NumPy approach is basically bad in this domain.

You do actually need to make a decision about how to handle 0- or 1-dimensional vectors, and I do not think that NumPy (or PyTorch, or TensorFlow, or any Python lib I've encountered) is particularly consistent about this, unless you ingrain certain habits: always calling e.g. .ravel or .flatten, or reaching for [:, :, None] arcana, followed by subsequent .reshape calls to avoid these issues. As much as I hated Matlab, this shaping issue was not one I ran into as immediately as I did with NumPy and Python tensor libs.

EDIT: This is also a constant issue working with scikit-learn, and if you regularly read through the source there, you see why. And, frankly, if you have gone through proper math texts, they are all extremely clear about column vs. row vectors and notation: each makes it clear whether column or row vectors are the default, and uses superscript transpose accordingly. It's not that you can't figure it out from context; it's that having to figure it out and check seriously damages fluent reading and wastes a huge amount of time and mental resources. Terrible shaping documentation and inconsistency is a major sore point for almost all popular Python tensor and array libraries.