kingstnap 6 days ago

> this is just a side-effect of another zig possible feature: removal of unnecessary arguments? I’m not a zigxpert, so I can’t answer.

No, it's not. The way structural equivalence is keyed in Zig was changed (mid-2024) so that a struct's identity is keyed on two things:

Source location + captured constants

The issue for that was raised here.

https://github.com/ziglang/zig/issues/18816
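
A minimal sketch of what that keying means in practice (field names here are just for illustration):

    const std = @import("std");

    // The struct declaration inside Pair is keyed on its source
    // location plus the captured constant T.
    fn Pair(comptime T: type) type {
        return struct { first: T, second: T };
    }

    const A = Pair(u32); // same source location, same captured constant
    const B = Pair(u32); // -> A == B
    const C = Pair(u64); // different captured constant -> a new type

    // Textually identical declarations at different source locations
    // are still distinct types.
    const D = struct { first: u32, second: u32 };
    const E = struct { first: u32, second: u32 };

    test "source location + captured constants" {
        try std.testing.expect(A == B);
        try std.testing.expect(A != C);
        try std.testing.expect(D != E);
    }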

I don't actually think that's sufficient. It makes more sense, imo, to also capture some implicit context, specifically inline context (so an inlined function or an unrolled inline for loop will create a unique struct). I opened an issue about that here.

https://github.com/ziglang/zig/issues/24931

But it got closed "to make incremental compilation simpler to implement."

I consider it a kinda janky dismissal. I assume some actual reasoning about this is on Zulip / in meeting notes / in a git commit somewhere, but I don't know; I'm not a lawyer doing discovery. I get not wanting to design yourself into a corner, but I don't think you need to be so afraid of sometimes needing to do a full recompile because some changes have viral behaviour.
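
To make the complaint concrete, here's a sketch of the behaviour as I read the current rules: a struct declared inside an unrolled loop has one source location and captures nothing that varies per iteration, so every iteration yields the same type, where the proposal would give each unrolled iteration a unique struct.

    const std = @import("std");

    test "inline context is not part of the key today" {
        comptime {
            var types: [3]type = undefined;
            inline for (0..3) |i| {
                // One source location, no captured constants that
                // vary between iterations -> the same type each time.
                const S = struct { x: u8 };
                types[i] = S;
            }
            std.debug.assert(types[0] == types[1]);
            std.debug.assert(types[1] == types[2]);
        }
    }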

Anyway, something interesting in your post is that as a side effect, you get to view compilation order in the name.

jamiejquinn 6 days ago | parent

Oh, did I get frustrated seeing "to make incremental compilation simpler to implement" in the latest 0.15.1 release notes...

I have no doubt incremental compilation will make many large projects easier to compile, and perhaps it really is the future of all compiled languages, but you're right, it's being wielded as a janky dismissal of many good ideas.

kingstnap 6 days ago | parent

Maybe, maybe not.

Working with / testing systems that have huge input spaces is difficult. You have to stamp out lots of edge cases because the boundary of the space is huge.

Working with systems that have both huge input spaces and internal state is exponentially worse. You basically have edge cases to the power of edge cases to deal with.

At the same time, it's not always faster or easier to manage deltas in programming. As an example, consider sorting a 1-million-element list. How much difference is there between starting from a completely scrambled list vs. being told it's mostly sorted, with only 1% of elements out of place?

It's definitely not 99% easier, let me tell you that.
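
If you want to poke at that intuition yourself, here's a rough sketch (not a proper benchmark harness; the size, seed, and the 1% figure just mirror the example above, and it assumes a recent Zig std):

    const std = @import("std");

    pub fn main() !void {
        const alloc = std.heap.page_allocator;
        const n = 1_000_000;

        var prng = std.Random.DefaultPrng.init(0);
        const rand = prng.random();

        // Completely scrambled input.
        const scrambled = try alloc.alloc(u32, n);
        defer alloc.free(scrambled);
        for (scrambled) |*x| x.* = rand.int(u32);

        // Mostly sorted input: ascending, then ~1% displaced.
        const mostly = try alloc.alloc(u32, n);
        defer alloc.free(mostly);
        for (mostly, 0..) |*x, i| x.* = @intCast(i);
        for (0..n / 100) |_| {
            mostly[rand.uintLessThan(usize, n)] = rand.int(u32);
        }

        var timer = try std.time.Timer.start();
        std.mem.sort(u32, scrambled, {}, std.sort.asc(u32));
        const t1 = timer.lap();
        std.mem.sort(u32, mostly, {}, std.sort.asc(u32));
        const t2 = timer.lap();

        std.debug.print("scrambled: {d} ms, mostly sorted: {d} ms\n", .{
            t1 / std.time.ns_per_ms,
            t2 / std.time.ns_per_ms,
        });
    }

How much the mostly sorted case wins (if at all) depends entirely on how adaptive the algorithm is, which is exactly what the charts below get at.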

tialaramex 6 days ago | parent

It's complicated:

https://github.com/Voultapher/sort-research-rs/blob/main/wri...

The left chart there is the proposed (now current) Rust unstable sort, named IPN Sort, and the right chart is the old (at the time current) unstable sort.

That nice black triangle line going straight up and to the right is random input, a typical input for testing sort performance. Compare the essentially flat lines at the bottom in orange and green for the easy ascending and descending inputs (sorting those is either no work or trivial, respectively), and the distinct brown circle line, s95, which is markedly worse than the black line.

s95 is 95% sorted data, plus 5% random data, which is what you'd see if you:

1. sort a container of N Things
2. do work which overall removes about 0.05 x N Things, maybe they're completed
3. add 0.05 x N Things, maybe they're new
4. repeat from step 1

Compared to the usual test workload, this is actually more work: the data is indeed mostly sorted, but the non-sorted parts are far out of position.
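
As a sketch, one iteration of that workload could look like this (the in-place replacement is a simplification of the remove/add steps; the function name is made up):

    const std = @import("std");

    fn workloadStep(items: []u32, rand: std.Random) void {
        // 1. sort the container of N Things
        std.mem.sort(u32, items, {}, std.sort.asc(u32));
        // 2. + 3. ~0.05 x N Things complete and are replaced by new
        // ones, landing at random positions with random keys.
        for (0..items.len / 20) |_| {
            items[rand.uintLessThan(usize, items.len)] = rand.int(u32);
        }
        // 4. the caller repeats from step 1
    }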

The vertical on these charts is mean comparisons per item, that is if we can sort 100 items with total 1000 comparisons, that scores 10 on the vertical axis, the reason to do this is that it's independent of physical realisation. Because the horizontal axis is log size a horizontal line means the sort is O(n) for that class of input, a diagonal means O(n log n) which is what we're hoping for from a sort algorithm.