Retr0id 4 hours ago

One of my "let's try out this vibecoding thing" toy projects was a custom programming language. At the time, I felt like it was my design, which I iterated on through collaborative conversations with Claude.

Then I saw someone's Show HN post for their own vibecoded programming language project, and many of the feature bullet points were the same. Maybe it was partly coincidence (all modern PLs have a fair bit of overlap), but it really gave me pause, and I mostly lost interest in the project after that.

Ucalegon 3 hours ago | parent [-]

That's the thing about a normalization system: it is going to normalize outputs, because it's not built to produce uniqueness, it's built to winnow uniqueness down to a baseline. That is good in some instances, assuming the baseline is correct, but it also closes the aperture of human expression.

Retr0id 2 hours ago | parent [-]

I agree in a "the purpose of a system is what it does" sense, but I'm not sure they're inherently normalization systems.

Ucalegon 2 hours ago | parent [-]

Token selection is based off normalization. Even if you train a model to produce outlier answers, in that process you are still biasing it toward a subset of outliers, which is inherently normalizing.
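(For what it's worth, the final step of token selection is a normalization in the literal sense: the model's raw scores are pushed through a softmax, which squashes them into a probability distribution summing to 1. A minimal sketch, illustrative rather than any particular model's code:)

```python
import math

def softmax(logits):
    # Shift by the max for numerical stability, then normalize
    # so the exponentiated scores sum to 1 -- a probability
    # distribution over candidate tokens.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical raw scores ("logits") for three candidate tokens.
probs = softmax([2.0, 1.0, 0.1])
```

However the scores come out, every candidate is rescaled relative to the others before sampling.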

Retr0id an hour ago | parent [-]

Could you elaborate on "token selection is based off normalization"?

Ucalegon an hour ago | parent [-]

Sure:

https://arxiv.org/pdf/1607.06450

Depending on the model architecture, normalization takes place in multiple places in order to save compute and ensure (some) consistency in output. Training is, by its very nature, also a normalizing function, since you are telling the model which outputs are and are not valid, shaping the weights that define features.
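(For context, the linked paper describes layer normalization: each layer's activations are rescaled to zero mean and unit variance before being passed on. A minimal sketch of the core operation, with the paper's learned gain/bias parameters omitted:)

```python
import math

def layer_norm(x, eps=1e-5):
    # Rescale a vector of activations to zero mean and unit
    # variance; eps guards against division by zero.
    # Real implementations also apply learned gain and bias.
    mean = sum(x) / len(x)
    var = sum((v - mean) ** 2 for v in x) / len(x)
    return [(v - mean) / math.sqrt(var + eps) for v in x]

normed = layer_norm([2.0, 4.0, 6.0, 8.0])
```

Whatever scale the activations arrive at, they leave on the same standardized footing.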