| ▲ | xrd 9 hours ago |
| I recently replaced a power supply to upgrade a GPU. I bought the power supply on Craigslist, so it had a jumble of cables and no manual. In the past I would have read an article on one of those sites. This time I conversed entirely with Gemini, sending pictures of the cables, the components, and the motherboard. I'll not soon forget when I plugged in a cable incorrectly and sent an image of that cable to Gemini. Gemini said "It is very important that you stop and unplug that cable immediately... Hopefully the power supply's safety precautions kicked in before any permanent damage occurred." I know that Gemini was conversing with me using plagiarized information from all those sites. But it was so much better to do this than to try to synthesize it in my brain by reading a bunch of articles. I don't see a future for tech content, because Gemini isn't paying the authors, and they don't give me an option to direct payments to them either. |
|
| ▲ | Latty 7 hours ago | parent | next [-] |
| It's crazy to me that you'd trust the output of an LLM for that. It's something where if you do it wrong it could cause major damage, and LLMs are literally famous for creating plausible-sounding but wrong output. If you wanted to use an LLM to identify it, sure, you can validate that, and then find the manufacturer instructions and use those. Just following what it says about the cables without any validation that it's correct is just wild to me. These are products with instruction manuals written specifically for this. |
| |
| ▲ | visarga 4 hours ago | parent | next [-] | | > It's crazy to me that you'd trust the output of an LLM for that. It's something where if you do it wrong it could cause major damage, With critical tasks you need to cross-reference multiple AIs: start by running four deep reports, on Claude, ChatGPT, Gemini, and Perplexity, then put all of them into a comparative, critical-analysis round. This reduces variance; the models are different and use different search tools. You can even send them in different directions: one searches blogs, one Reddit, etc. | | |
| ▲ | Latty 3 hours ago | parent [-] | | Or you can ask for a link to the manual. I genuinely can't tell if your post is real advice, or sarcasm intended to highlight the insanity of forcing the square peg of LLMs into every round hole. |
| |
| ▲ | ashleyn 4 hours ago | parent | prev [-] | | I'd probably view LLM advice like the blind spot indicator on my car. Trust when it's lit. Don't trust when it's not lit. |
|
|
| ▲ | PacificSpecific 9 hours ago | parent | prev | next [-] |
| If the hardware changes significantly and those sites don't exist in the future, wouldn't that mean Gemini would degrade in quality because it has nothing to pull from? |
| |
| ▲ | hydrogen7800 9 hours ago | parent | next [-] | | Right, that success story happened only because there was "organic" (for lack of a better term) information from an original source. What happens when all information is nth-generation AI feedback with all links to the original source lost? Edit: A question from AI/LLM ignorance: can the source database for an LLM be one-way, in that it does not contain output from itself or other LLMs? I can imagine a quarantined database used for specific applications that remains curated, but this seems impossible on the open internet. | | |
| ▲ | bigthymer 8 hours ago | parent | next [-] | | > Can the source database for an LLM be one-way, in that it does not contain output from itself, or other LLMs? I think, for public internet data, we can only be reasonably confident for information before the big release of ChatGPT. | |
| ▲ | nsvd2 5 hours ago | parent | prev | next [-] | | Yes, people have likened pre-LLM Internet content to low-background steel. If in the hypothetical future the continual learning problem gets solved, the AI could just learn from the real world instead of publications and retain that data. | |
| ▲ | nprateem 3 hours ago | parent | prev | next [-] | | One reason why Google made that algorithm to watermark AI output | |
| ▲ | black_puppydog 9 hours ago | parent | prev [-] | | That's exactly why text written before the first LLMs has a premium on it these days. So no, all major models suffer from slop in their training data. |
| |
| ▲ | andy81 9 hours ago | parent | prev | next [-] | | We've all tried to ask the LLM about something outside of its training data by now. In that situation, they give the (wrong) answer that sounds the most plausible. | | |
| ▲ | PacificSpecific 9 hours ago | parent | next [-] | | That's definitely been my experience. I work with a lot of weird code bases that have never been public facing and AI has horrible responses for that stuff. As soon as I tried to make a todomvc it started working great but I wonder how much value that really brings to the table. It's great for me though. I can finally make a todomvc tailored to my specific needs. | | |
| ▲ | ctoth 6 hours ago | parent [-] | | I'm not sure what sorts of weird codebases you're working with but I recently saw Claude programming well on a Lambda MOO -- weirder than that? |
| |
| ▲ | visarga 4 hours ago | parent | prev | next [-] | | > In that situation, they give the (wrong) answer that sounds the most plausible. Not if you use web search or a deep report. You should not use LLMs as knowledge bases; they are language models. They learn language, not information, and they are models, not replicas of the training set. | |
| ▲ | NoMoreNicksLeft 8 hours ago | parent | prev [-] | | Once or twice, for me it's deflected rather than answer at all. On the other hand, they've also surfaced information (later independently confirmed by myself) that I had not been able to find for years. I don't know what to make of it. |
| |
| ▲ | visarga 4 hours ago | parent | prev | next [-] | | > because it has nothing to pull from? Chat rooms produce trillions of tokens per day now, interactive tokens, where AI can poke and prod at us, and have its ideas tested in the real world (by us). | |
| ▲ | elictronic 8 hours ago | parent | prev | next [-] | | This then becomes the hardware manufacturer's problem. If their new hardware fails for too many users, it will no longer be purchased. If they externalize their problem solving like so many companies do, they won't be able to gain market share. This creates financial incentives to pay the companies running the new version of search. You're thinking of this as a problem for these companies, when in reality it is a financial incentive. | |
| ▲ | esperent 9 hours ago | parent | prev | next [-] | | Presumably companies will still provide manuals. | | |
| ▲ | SiempreViernes 9 hours ago | parent | next [-] | | It'll be a single sheet of paper with a QR code that redirects to a canned prompt hosted at whichever LLM server paid the most to the manufacturer for their content. | |
| ▲ | PacificSpecific 9 hours ago | parent | prev [-] | | If manuals were adequate, would all that supplementary material exist? Results vary, of course. I have some very wonderful synthesizer manuals. |
| |
| ▲ | roxolotl 8 hours ago | parent | prev [-] | | Yeah, so I've had an issue getting video output after boot on a new AMD R9700 Pro. None of the (albeit free) models from OpenAI/Google/Anthropic have really been helpful. I found the pro drivers myself; they never mentioned them. That's not to say AI is bad. It's great in many cases. It's more that I'm worried about what happens when the repositories of new knowledge get hollowed out. Also, my favorite response was this gem from Sonnet: > TL;DR: Move your monitor cable from the motherboard to the graphics card. |
|
|
| ▲ | nancyminusone 7 hours ago | parent | prev | next [-] |
| It's more than a little concerning that you would put full faith in AI to connect expensive hardware without verifying. I'd at least ask for a citation to the product manual (even though half the time it cites another fucking AI-generated site instead). |
|
| ▲ | cj 8 hours ago | parent | prev | next [-] |
| Same experience here: someone at our company had a bricked MacBook Pro. It was previously MDM-managed with Jamf, and it wouldn't boot up. I asked ChatGPT to give me steps to fix it. The first set of steps didn't work, so we iteratively sent pictures of the screen until the steps eventually did work and the issue was fixed. This saved us from having to call Apple support. |
|
| ▲ | throwaway85825 8 hours ago | parent | prev | next [-] |
| There is no modular PSU cable standard. Mixing cables between PSUs can destroy your hardware. There is no standard even within the same brand. |
|
| ▲ | dehrmann 7 hours ago | parent | prev | next [-] |
| > I'll not soon forget when I plugged in a cable incorrectly I'm surprised this was a problem. Back in the day, there were things like making sure your two very similar AT power connectors had the black wires next to each other, not forcing in a Molex connector upside down, and the same for ribbon cables. These days? The connectors are standardized and keyed, as long as your modular PSU vendor didn't get lazy on their keying. |
| |
| ▲ | vel0city 4 hours ago | parent [-] | | FWIW, things are standardized and keyed on the ATX board side of things. They aren't standardized on the power supply side of a modular power supply. Unless you've absolutely confirmed the pinouts, never swap cables between modular power supplies. Fitment doesn't imply it's actually going to put the right voltage on the right pins. Even within the same manufacturer, pinouts have sometimes differed between models! | | |
| ▲ | delecti 4 hours ago | parent [-] | | Also, some non-standard hardware looks very standard. (At least some) Dell motherboard/PSU connectors infamously are physically compatible (the plug fits the socket) with the ATX standard, but the wiring is sufficiently different that it can damage or be damaged by other hardware. |
|
|
|
| ▲ | BoredPositron 9 hours ago | parent | prev | next [-] |
| I have never seen a review site or tech blog go into detail about how to wire a specific power supply to a specific motherboard. I would also never go to such a site for information I can easily get from the manufacturer through a handbook, but I would also never ask a chatbot. Really odd use case, tbh. |
| |
| ▲ | esseph 8 hours ago | parent [-] | | > Really odd use case tbh. For 99.99999% of people out there, LLMs are the new search. You can gnash teeth and yell and sob, but it is how things are. |
|
|
| ▲ | beej71 8 hours ago | parent | prev | next [-] |
| > But, it was so much better to do this than to try to synthesize that in my brain For some definitions of "better", that is. :( |
|
| ▲ | righthand 9 hours ago | parent | prev | next [-] |
| I see a future just like the SEO issue of today, where the well is poisoned and LLM information is garbage. |
|
| ▲ | 9 hours ago | parent | prev [-] |
| [deleted] |