sagarm a day ago
LLMs absolutely let you explore ideas and areas you wouldn't have explored otherwise... but does your new design actually _work_? I'm curious whether the "knowledge" you gained was real or hallucinatory. I've been using LLMs this way myself, but I worry I'm contaminating my memory with false information.
WhatIsDukkha a day ago | parent
At some point, this existential doubt about your own work and others' seems pretty weird. Go ahead and figure out ways to interrogate your work by technical means; that's a critical part of the process, LLM or not.
peteforde a day ago | parent
I think that you're confusing what you're doing with what I'm doing. What I'm doing is learning the circuit constructs that I need and then putting them to work in real circuits. There are usually a few breadboard steps in the middle, which you could call reinforcement learning.

To me, the telling thing about your question is the implication that I would spend a week learning how to do something and then not test it out. I know that this reply reads as salty, but I'm really struggling to contain my own "wtf" on this end.

Seriously, people who are so determined to prove that LLMs don't work, despite how easy it is to test for yourself and see that they clearly do, are the ones who are hallucinating.