pelario 7 days ago

> Personally, I hate it; I don't like magic or black boxes.

So, no compilers for you either?

(To be fair: I'm not loving the whole vibe coding thing. But I'm trying to approach this wave with an open mind, looking for the good arguments on both sides. This is not one of them.)

pjc50 7 days ago | parent | next [-]

Apart from various C UB fiascos, the compiler is neither a black box nor magic, and most of the worthwhile ones are even deterministic.

viralpraxis 7 days ago | parent [-]

Sorry for the off-topic question, but are there any non-deterministic compilers you can name? I'd been wondering for a while whether they actually exist.

pjc50 7 days ago | parent [-]

Accidentally non-deterministic compilers are fairly easy to produce if you use sort algorithms and containers that aren't "stable". You can then get situations where OS page allocation and things like different filenames change the output. This is why "deterministic builds" weren't simply the default.
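A minimal Python sketch (not from the thread; the symbol names are made up) of how an unordered container leaks non-determinism into output. Iterating a set of strings can yield a different order in different interpreter runs, because CPython seeds string hashing randomly per process (`PYTHONHASHSEED`):

```python
# Hypothetical symbol table for a toy "compiler".
symbols = {"main", "init", "helper", "cleanup"}

# Emitting symbols in set-iteration order is NOT reproducible:
# two runs of the same program can produce different orderings.
unstable_output = "\n".join(symbols)

# Forcing a stable order restores a deterministic build.
stable_output = "\n".join(sorted(symbols))
assert sorted(symbols) == ["cleanup", "helper", "init", "main"]
```

Bugs like this are why reproducible-build efforts spend so much time hunting down unordered containers, timestamps, and absolute paths in toolchains.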

Actual randomness is used in FPGA and ASIC compilers which use simulated annealing for layout. Sometimes the tools let you set the seed.
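To illustrate the deliberate-randomness case, here is a hedged toy sketch of seeded simulated annealing for 1-D cell placement (real FPGA/ASIC placers are vastly more elaborate; the netlist and parameters here are invented):

```python
import math
import random

# Toy netlist: cells 0..3, each net connects a pair of cells.
NETS = [(0, 1), (1, 2), (2, 3), (0, 3)]

def wirelength(order):
    """Total wirelength when cell order[i] sits in slot i of a 1-D row."""
    pos = {cell: slot for slot, cell in enumerate(order)}
    return sum(abs(pos[a] - pos[b]) for a, b in NETS)

def anneal(seed, steps=500, temp=2.0, cooling=0.99):
    """Simulated-annealing placement; a fixed seed makes the run reproducible."""
    rng = random.Random(seed)  # the seed some tools let you set
    order = [0, 1, 2, 3]
    rng.shuffle(order)
    for _ in range(steps):
        # Propose swapping two cells.
        i, j = rng.sample(range(len(order)), 2)
        cand = order[:]
        cand[i], cand[j] = cand[j], cand[i]
        delta = wirelength(cand) - wirelength(order)
        # Accept improvements always; accept regressions with a
        # probability that shrinks as the temperature cools.
        if delta <= 0 or rng.random() < math.exp(-delta / temp):
            order = cand
        temp *= cooling
    return order

# Same seed, same layout; different seeds may legitimately differ.
assert anneal(seed=42) == anneal(seed=42)
```

The point of exposing the seed is exactly this last line: the algorithm is stochastic by design, but pinning the seed recovers run-to-run reproducibility.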

dpoloncsak 7 days ago | parent | prev [-]

I think you're misunderstanding. AI is not a black box, and neither is a compiler. We (as a species) know how they work and what they do.

The 'black boxes' are the theoretical systems non-technical users are building via 'vibe coding'. When the LLM says to spin up an EC2 instance, the user spins one up. Is it configured? Why is it configured that way? Do you really need a VPS instead of a Pi? These are questions that the users building these systems won't have answers to.

drdeca 6 days ago | parent | next [-]

If there are cryptographically secure program obfuscation (in the sense of indistinguishability obfuscation) methods, and someone writes some program, applies the obfuscation method to it, publishes the result, deletes the original version of the program, and then dies, would you say that humanity "knows how the (obfuscated) program works, and what it does"? Assume that the obfuscation method is well understood.

When people do interpretability work on some NN, they often learn something. What is it that they learn, if not something about how the network works?

Of course, we (meaning humanity) understand the architecture of the NNs we make, and we understand the training methods.

Similarly, if we have the output of an indistinguishability obfuscation method applied to a program, we understand what the individual logic gates do, and we understand that the obfuscated program was a result of applying an indistinguishability obfuscation method to some other program (analogous to understanding the training methods).

So, like, yeah, there are definitely senses in which we understand some of "how it works", and some of "what it does", but I wouldn't say of the obfuscated program "We understand how it works and what it does.".

(It is apparently unknown whether there are any secure indistinguishability obfuscation methods, so maybe you believe that there are none, and in that case maybe you could argue that the hypothetical is impossible, and therefore the argument is unconvincing? I don't think that would make sense, though, because the argument still works as a counterfactual even if there are no cryptographically secure indistinguishability obfuscation methods. [EDIT: Apparently it has in the last ~5 years been shown, under relatively standard cryptographic assumptions, that there are indistinguishability obfuscation methods after all.])

mr_toad 7 days ago | parent | prev [-]

> AI is not a black-box

Any worthwhile AI is non-linear, and its output cannot be predicted (if it could, we'd just use the predictor).