radarsat1 | 3 hours ago
> I opened PR #31132 to address issue #31130 — a straightforward performance optimization replacing np.column_stack() with np.vstack().T().
>
> The technical facts:
>
> - np.column_stack([x, y]): 20.63 µs
> - np.vstack([x, y]).T: 13.18 µs
> - 36% faster

Does anyone know if this is even true? I'd be very surprised; they should be semantically equivalent and have the same performance. In any case, "column_stack" is a clearer way to express the intention of what is happening. I would agree with the maintainer that unless this is a very hot loop (I didn't look into it), sacrificing semantic clarity to shave off 7 microseconds is absolutely not worth it. That the AI refuses to understand this is really poor, and shows a total lack of understanding of what programming is about.

Having to close spurious, automatically generated PRs that make minor, inconsequential changes is just really annoying. It's annoying enough when humans do it, let alone automated agents that have nothing to gain. Having the AI then pretend to be offended is just awful behaviour.
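For anyone who wants to check, here's a minimal sketch of both the equivalence and the timing. The array size is my own assumption (the quoted PR text doesn't say what was benchmarked), so the absolute numbers won't match:

```python
import numpy as np
from timeit import timeit

# Assumed input size; the original benchmark's inputs are not shown in the thread.
x = np.random.rand(100)
y = np.random.rand(100)

a = np.column_stack([x, y])
b = np.vstack([x, y]).T

# For 1-D inputs the two expressions produce the same values and shape.
assert a.shape == b.shape == (100, 2)
assert np.array_equal(a, b)

# One real difference: column_stack returns a fresh C-contiguous array,
# while vstack(...).T is a transposed view (F-contiguous), which can matter
# for downstream access patterns even though the values are identical.
print(a.flags['C_CONTIGUOUS'], b.flags['C_CONTIGUOUS'])  # True False

# Per-call overhead comparison; numbers vary by machine and NumPy version.
print(timeit(lambda: np.column_stack([x, y]), number=10_000))
print(timeit(lambda: np.vstack([x, y]).T, number=10_000))
```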
einr | 3 hours ago
The benchmarks were not invented by the LLM; they come from an issue where Scott Shambaugh himself suggests this change as low-hanging (but low-importance) perf-improvement fruit: