| ▲ | CGMthrowaway 5 hours ago |
| >INSUFFICIENT DATA FOR MEANINGFUL ANSWER Boy, it sure would be nice if real LLMs were capable of giving an answer like that. |
|
| ▲ | temp0826 2 hours ago | parent | next [-] |
| Living in South America a bit really showed me this. I think it's a cultural thing here but someone will always give you an answer, even if it's wrong, confidently. It was hard for me at first- I am usually the first person to say "I don't know" (often followed by "but let's slow down and find a good solution"). |
| |
| ▲ | jfaulken 2 hours ago | parent | next [-] | | This was similar to my experience running a software team in India (I'm an American) a couple decades ago. I had to learn not to ask yes/no questions because the answer would always be yes. | | |
| ▲ | HiPhish 2 hours ago | parent [-] | | It's a long-standing joke that AI stands for "Actual Indians". |
| |
| ▲ | throwaway132448 41 minutes ago | parent | prev [-] | | Talking about South America as a homogeneous unit is… weird. Even neighbouring countries speaking the same language can be entirely different in this regard. | | |
| ▲ | temp0826 19 minutes ago | parent [-] | | I agree (and I don't normally generalize like this, so I apologize). I've spent most of my time in Peru but noticed it in neighboring countries as well. |
|
|
|
| ▲ | gwerbin 4 hours ago | parent | prev | next [-] |
| They can do it, it's just not "by default", they need to be prompted to do it. So at least the danger is manageable if you know what you're doing and how to prompt around it. |
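A minimal sketch of what "prompting around it" could look like: a system prompt that explicitly licenses the refusal. The helper name and the exact wording are illustrative, not a documented recipe; the point is only that the permission to say "I don't know" has to be put in the context explicitly.

```python
def build_messages(question: str) -> list[dict]:
    """Wrap a question with a system prompt that permits refusal.

    Hypothetical helper: the message list follows the common
    system/user chat format used by most LLM APIs.
    """
    system = (
        "If you are not confident the answer is correct, reply exactly: "
        "INSUFFICIENT DATA FOR MEANINGFUL ANSWER. "
        "Do not guess, and do not dress a guess up in hedging language."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": question},
    ]

msgs = build_messages("What will the S&P 500 close at next Tuesday?")
```

Whether the model actually honors the instruction is probabilistic, which is the point the rest of the thread argues about.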
| |
| ▲ | saghm 23 minutes ago | parent | next [-] | | "Just don't accidentally forget to do the thing that makes it safe" is not a very effective strategy for something that so many vested interests are trying to push into all corners of society. If it's so easy to misuse, then it should only be used in contexts where there are no major consequences for bad output and there's ample opportunity and ability to validate it. | |
| ▲ | Bridged7756 4 hours ago | parent | prev | next [-] | | Not really. They're still non-deterministic language predictors. Believing that a prompt is an effective way to actually control these machines' behavior is really far-fetched. They come like that from the factory, hardcoded to never say no. | | |
| ▲ | LPisGood 3 hours ago | parent | next [-] | | The thing is that they are completely incapable of meta-cognition. Reasoning models don’t show their actual reasoning at all. | | |
| ▲ | DonaldPShimoda 3 hours ago | parent [-] | | Right — they're not reasoning, they're generating text that statistically models reasoning. Anyone who says differently is selling something. | | |
| ▲ | jeremyjh an hour ago | parent [-] | | That is what a base model does. After RL it is a very different thing, and anyone who says they know what it is, is naive or dishonest. These things are grown, not made, and we really do not understand how they work in many important ways. |
|
| |
| ▲ | eloisant 3 hours ago | parent | prev | next [-] | | They're not hardcoded to never say no, but some of the models were trained to be "yes men" because their creators thought it would be a good property to have. GPT-4o for example. | |
| ▲ | wat10000 3 hours ago | parent | prev [-] | | Not believing that a prompt is an effective way to actually control their behavior is obviously incorrect to anyone who's actually used these things. It's not a guaranteed way to control their behavior, but you can more than move the needle. | | |
| ▲ | wwweston 19 minutes ago | parent | next [-] | | The word most relevant to this conversation is “influence.” Influence is possible and users observe it and use it to increase margins of useful outcomes. “Control” is incorrect. | |
| ▲ | fl4regun 3 hours ago | parent | prev [-] | | yeah that distinction is pretty important, and I believe that IS the point that commenter is making: if you cannot control it with guaranteed outcomes, you cannot control it. | | |
| ▲ | gwerbin an hour ago | parent | next [-] | | You can't control it any more than you can control a draw from a deck of cards, but you can absolutely control the deck of cards that you choose to draw from. | |
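The deck-of-cards analogy maps fairly directly onto sampling temperature: you can't control an individual draw, but you can shape the distribution the draw comes from. A minimal sketch, assuming standard softmax sampling over logits (the numbers are made up for illustration):

```python
import math

def softmax(logits: list[float], temperature: float = 1.0) -> list[float]:
    """Convert logits to a probability distribution.

    Lower temperature sharpens the distribution (stacks the deck
    toward the top choice); higher temperature flattens it.
    """
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]          # illustrative scores for three tokens
sharp = softmax(logits, temperature=0.2)  # top token dominates
flat = softmax(logits, temperature=5.0)   # probabilities near-uniform
```

Each draw from `sharp` is still random, but the top token is now overwhelmingly likely; that's the sense in which the deck, not the draw, is under your control.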
| ▲ | wat10000 26 minutes ago | parent | prev [-] | | That's silly. My car is not absolutely guaranteed to turn left when I turn the steering wheel left, but you wouldn't say I can't control my car on that basis. Steering an LLM with a prompt is way less reliable than steering a car with a steering wheel, but there's still control. It's just not absolute. |
|
|
| |
| ▲ | romaniv 3 hours ago | parent | prev [-] | | [dead] |
|
|
| ▲ | cortesoft 3 hours ago | parent | prev | next [-] |
| There are a lot of humans who refuse to give that answer, too |
| |
| ▲ | Tallain 2 hours ago | parent | next [-] | | This continues to be the most tiring response to any criticism of LLM output. It's pretty much guaranteed to show up at this point. I guess with similar enough input tokens, we're guaranteed the same output... | |
| ▲ | Forgeties79 3 hours ago | parent | prev [-] | | I don’t have to spend dozens if not hundreds of dollars a month to talk to most people in my life lol | | |
| ▲ | _diyar 2 hours ago | parent [-] | | Do you have to talk to LLMs? | | |
| ▲ | vhantz 2 hours ago | parent | next [-] | | Another way to say the same thing: "to talk to most people in my life lol I don’t have to spend dozens if not hundreds of dollars a month" | |
| ▲ | Forgeties79 an hour ago | parent | prev [-] | | According to HN, every employer, and general social chatter, apparently yes. |
|
|
|
|
| ▲ | bargainbin 4 hours ago | parent | prev | next [-] |
| You’re absolutely right! I do have insufficient data for a meaningful answer. This is not an *insightful prediction* — it’s *Dunning-Kruger masquerading as qualified intelligence* |
| |
|
| ▲ | ryanjshaw 3 hours ago | parent | prev | next [-] |
| I reckon that’s how we know we’ve hit ASI. |
|
| ▲ | narginal 3 hours ago | parent | prev | next [-] |
| 2061, mark the date |
|
| ▲ | otikik 3 hours ago | parent | prev [-] |
| Just add a skill to Claude |