IshKebab 2 days ago

AI is a competitor. You know how StackOverflow is dead because AI provided an alternative? That's happening in search too.

You might think "but ChatGPT isn't a search engine", and that's true. It can't handle all queries you might use a search engine for, e.g. if you want to find a particular website. But there are many, many queries that it can handle. Here are just a few from my recent history:

* How do I load a shared library and call a function from it with VCS? [Kind of surprising it got the answer to this given how locked down the documentation is.]

* In a PAM config, what do the keywords auth, account, password, session, and also required/sufficient mean?

* What do you call the thing that car roof bars attach to? The thing that goes front to back?

* How do I right-pad a string with spaces using printf? [Quick sketch of the answer below.]

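For that last one, the gist of the answer, sketched here in Java's printf-style formatting with an illustrative field width of 10 and throwaway class/string names, is the "%-Ns" conversion:

    // A minimal sketch: "%-10s" left-justifies the string within a
    // 10-character field, padding the right side with spaces.
    public class PadDemo {
        public static void main(String[] args) {
            System.out.printf("[%-10s]%n", "abc");    // prints [abc       ]
            System.out.printf("[%-10s]%n", "abcdef"); // prints [abcdef    ]
        }
    }
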
These are all things I would have gone to Google for before, but ChatGPT gives a better overall experience now.

Yes, overall, because while it bullshits sometimes, it also cuts to the chase a lot more. And no ads for now! (Btw, someone gave me the hint to set its personality mode to "Robot", and that really helps make it less annoying!)

skinkestek 2 days ago | parent | next [-]

> You know how StackOverflow is dead because AI provided an alternative? That's happening in search too.

Stack Overflow isn’t dead because of AI. It’s dead because they spent years ignoring user feedback and then doubled down by going after respected, unpaid contributors like Monica.

Would they have survived AI? Hard to say. But the truth is, they were already busy burning down their own community long before AI showed up.

When AI arrived I'd already been waiting for years for an alternative that didn’t aggressively shut down real-world questions (sometimes with hundreds of upvotes) just because they didn’t fit some rigid format.

4ggr0 2 days ago | parent | next [-]

> and then doubled down by going after respected, unpaid contributors like Monica.

if like me you didn't know what this was referring to, here's some context: https://judaism.meta.stackexchange.com/questions/5193/stack-...

IshKebab 2 days ago | parent | prev | next [-]

> Stack Overflow isn’t dead because of AI. It’s dead because they spent years ignoring user feedback

It is dead because of both of those things. Everyone hated Stackoverflow's moderation, but kept using it because they didn't have a good alternative until AI.

> When AI arrived I'd already been waiting for years for an alternative that didn’t aggressively shut down real-world questions

Exactly.

goku12 2 days ago | parent [-]

I'm not sure that AI has as much impact on resources like SO as one might imagine. There is one reason why I resort to using AI, and two reasons why I always double check its answers.

The reason I resort to AI is to find alternative solutions quickly. But quite honestly, that's more a problem with SO moderation: people are willing to answer even stale, duplicate (actual or mistaken), or seemingly irrelevant questions with good-quality solutions and alternatives, but I always felt that the moderation dissuaded contributors from doing so.

Meanwhile, the first reason I always double-check AI results is that they hallucinate way too much. They fake completely believable answers far too often. The second reason is that AI often neglects interesting/relevant extra information that humans always recognize as important. This is very evident if you read elaborate SO answers or official documentation like MDN, docs.rs or the archwiki. One particular example of this is the XY problem: people tend to make similar mistaken assumptions, and SO answers are very good at catching those. Recipe-book/cookbook documentation also addresses these situations well. Human-generated content (even static or archived content) seems to anticipate, catch, and address human misconceptions and confusions much better than AI.

dabockster a day ago | parent | prev [-]

> Stack Overflow isn’t dead because of AI. It’s dead because they spent years ignoring user feedback and then doubled down by going after respected, unpaid contributors like Monica.

They also devolved into a work friendly variant of 4Chan's /g/ board. "Work friendly" as in nothing obviously obscene, but the overall tone and hostility towards newcomers is still there (among other things).

bigstrat2003 2 days ago | parent | prev | next [-]

I don't agree that ChatGPT gives an overall better experience than Google, let alone an actually good search engine like Kagi. It's very rare that I need to ask something in plain English because I just don't know what the keywords are, so the one edge the LLM might have is moot. Meanwhile, because it bullshits a lot (not just sometimes, a lot), I can't trust anything it tells me. At least with a search engine I can figure out if a given site is reliable or not; with the LLM I have no idea.

People say all the time that LLMs are so much better for finding information, but to me it's completely at odds with my own user experience.

Wurdan 2 days ago | parent | next [-]

Why not both? You mention Kagi, and I find its Assistant to be a very useful mix of LLM and search engine. Something I asked it recently is whether Gothenburg has any sky-bars that overlook Hisingen to the North, and it correctly gave me one. A search engine could have given me a list of all sky-bars. And by looking at their photos on Google maps, I could probably have found one with the view / perspective I wanted. But Kagi Assistant using Kimi K2 did a decent job of narrowing the options I had to research.

barnabee 2 days ago | parent | prev | next [-]

I’d rather use every LLM that can search the web (including whatever local model I’m currently running on my MacBook) over Google. I also prefer the results from Kagi (which I generally use), DuckDuckGo, and Ecosia.

I still don’t think a company with at least one touch point on such a high percentage of web usage should be allowed to have one of the 2 mobile OSs that control that market, the most popular browser, the most popular search engine, the top video site (that’s also a massive social network), and a huge business placing ads on 3rd party sites.

Any two of these should be cause for concern, but we are well beyond the point that Google’s continued existence as a single entity is hugely problematic.

jve 2 days ago | parent | prev | next [-]

For me, ChatGPT in some instances replaces Google in a very powerful way.

Been researching waterproofing techniques in my area. Asked ChatGPT about products in my region. It gladly mentioned some and provided links to shops. Found out I need to prep the foundation with product X. One shop had only Y available, which from the description felt similar.

Asked about the differences between the products. It provided a summary table that made it crystal clear that one is more of a finishing product while the other is structural and can also be used for finishing. It also provided links to datasheets that confirmed the information.

I could ask about alternative products and it listed some, etc. Great when I need to research an unknown field, and it has links... that is the good part :)

Andrew_nenakhov 2 days ago | parent | prev [-]

ChatGPT, Grok and the like give an overall better experience than Google because they give you the answer, not links to some pages where you might find the answer. So unless I'm explicitly searching for something, like some article, asking Grok is faster and gets you an acceptable answer.

dns_snek 2 days ago | parent [-]

You get an acceptable answer maybe about 60% of the time, assuming most of your questions are really simple. The other 40% of the time it's complete nonsense dressed up as a reasonable answer.

Andrew_nenakhov 2 days ago | parent | next [-]

In my experience I get acceptable answers to more than 95% of the questions I ask. In fact, I rarely use search engines now. (Btw, I jumped off Google almost a decade ago now and have been using DuckDuckGo as my main search engine.)

sfdlkj3jk342a 2 days ago | parent | prev [-]

Have you used Grok or ChatGPT in the last year? I can't remember the last time I got a nonsense response. Do you have a recent example?

tim1994 2 days ago | parent | next [-]

I think the problem is that they cannot communicate that they don't know something and instead make up some BS that sounds somewhat reasonable. Probably due to how they are built. I notice this regularly when asking questions about new web platform features where there is not enough information in the training data.

dns_snek 2 days ago | parent | prev | next [-]

Yes I (try to) use them all the time. I regularly compare ChatGPT, Gemini, and Claude side by side, especially when I sniff something that smells like bullshit. I probably have ~10 chats from the past week with each one. I ask genuine questions expecting a genuine answer, I don't go out of my way to try to "trick" them but often I'll get an answer that doesn't seem quite right and then I dig deeper.

I'm not interested in dissecting specific examples because that's never been productive, but I will say that most people's bullshit detectors are not nearly as sensitive as they think they are, which leads them to accept sloppy, incorrect answers as high-quality factual answers.

Many of them fall into the category of "conventional wisdom that's absolutely wrong". Quick but sloppy answers are okay if you're okay with them; after all, we didn't always have high-quality information at our fingertips.

The only thing that worries me is how really smart people can consume this slop and somehow believe it to be high-quality information, and present it as such to other impressionable people.

Your success will of course vary depending on the topic and difficulty of your questions, but if you "can't remember" the last time you had a BS answer then I feel extremely confident in saying that your BS detector isn't sensitive enough.

lelanthran 2 days ago | parent [-]

> Your success will of course vary depending on the topic and difficulty of your questions, but if you "can't remember" the last time you had a BS answer then I feel extremely confident in saying that your BS detector isn't sensitive enough.

Do you have a few examples? I'm curious because I have a very sensitive BS detector. In fact, just about anyone asking for examples, like the GP, has a sensitive BS detector.

I want to compare the complexity of my questions to the complexity of yours. Here's my most recent one, for which I'm fully capable of judging the answer's level of BS:

    I want to parse markdown into a structure. Leaving aside the actual structure for now, give me an exhaustive list of markdown syntax that I would need to parse.
It gave me a very large list, pointing out CommonMark-specific stuff, etc.

I responded with:

    I am seeing some problems here with the parsing: 1. Newlines are significant in some places but not others. 2. There are some ambiguities (for example, nested lists which may result in more than four spaces at the deepest level can be interpreted as either nested lists or a code block) 3. Autolinks are also ambiguous - how can we know that the tag is an autolink and not HTML which must be passed through? There are more issues. Please expand on how they must be resolved. How do current parsers resolve the issues?

Right. I've shown you mine. Now you show yours.

svieira 2 days ago | parent | prev [-]

Today, I asked Google if there was a constant time string comparison algorithm in the JRE. It told me "no, but you can roll your own". Then I perused the links and found that MessageDigest.isEqual exists.
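
For reference, a minimal sketch of how that gets used (the class name and token values are just placeholders):

    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;

    public class ConstantTimeCompare {
        public static void main(String[] args) {
            byte[] expected = "expected-token".getBytes(StandardCharsets.UTF_8);
            byte[] provided = "provided-token".getBytes(StandardCharsets.UTF_8);
            // In modern JDKs, MessageDigest.isEqual compares the arrays without
            // short-circuiting on the first mismatch, so timing doesn't reveal
            // how many leading bytes match.
            System.out.println(MessageDigest.isEqual(expected, provided)); // false
        }
    }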

ryandrake 2 days ago | parent | prev | next [-]

Is it common to use Internet search like that??? You're typing literal questions into a search box rather than keywords, the name of the site you're looking for, or topics you want to read about. Maybe I'm just too old school, from the time when internet searches were essentially keyword searches, but it would never have occurred to me to type an actual English question as a full sentence into a search box.

If that's how most people use search engines these days, then I guess the transition into "type a prompt" will be smoother than I would have thought.

balder1991 2 days ago | parent | next [-]

I’m quite sure it was common, because Google optimized for that over time; that's why they switched to semantic search instead of actual “contains” matching (remember, they had a few questions and answers at the top way before ChatGPT).

Also if you type a few words on Google, it’ll “autocomplete” with the most common searches. Or you can just go to trends.google.com and explore search trends in real time.

fwipsy 2 days ago | parent | prev | next [-]

I think those are examples of AI prompts, not search queries. Searching sometimes requires effort even for simple questions. For example, if you're trying to find the word for an object, you might need to consider what sort of website might talk about that, how to find that website in a sea of SEO spam, and then read through the article manually to find the specific information you are looking for. Using an AI, you can just ask "what is xyz called" and get a quick answer.

chillfox 2 days ago | parent [-]

Search engines have been good at answering those kinds of questions for the last decade. SEO spam often answers simple questions like that.

fwipsy 2 days ago | parent [-]

You can find the answer this way if the query is simple enough, but in general, if you are asking for something specific or trying to retrieve a piece of information based on keywords it's not usually indexed by, AI will do better. For example, "how are large concrete piers supporting a roadway constructed on a 45 degree slope?" Claude gave me an answer immediately, most Google results for my first two queries weren't specific enough/didn't include all details. I'm sure Google could find the answer, but asking Claude is just easier.

chillfox 21 hours ago | parent [-]

I would not call that example simple, that's a pretty complex question.

chillfox 2 days ago | parent | prev | next [-]

It's been common for the last decade. It's been a great way of finding forum/blog posts where the question is answered, even if phrased slightly differently.

unethical_ban 2 days ago | parent | prev | next [-]

The questions above would be changed up for a Google search. The point is that LLMs can answer those questions pretty accurately now. I'm using LLMs to write technical cheat sheets for Linux sysadmin stuff, and to write a hobby website. I'm using search far less than before.

keiferski 2 days ago | parent | prev | next [-]

I have been using computers since the early 2000s, and I honestly don't remember the last time I searched Google for an answer to a specific question. It's incredibly inefficient compared to even the most basic AI tool.

rascul 2 days ago | parent | prev [-]

Isn't that what Ask Jeeves was for in the 90s?

al_borland 2 days ago | parent | prev | next [-]

Google also has AI and has integrated it into search. It's not Google Search vs ChatGPT. It's Google Search + Gemini vs ChatGPT, where the Google option has a huge advantage of falling into people's already ingrained habits and requires no user education.

CamperBob2 2 days ago | parent | prev | next [-]

StackOverflow is dead because its rules are nonsensical and many of its users are dicks.

It's going to be a real problem going forward, because if AI hadn't killed them something else would have, and now it's questionable whether that "something else" will ever emerge. The need for something like SO is never going to go away as long as new technologies, algorithms, languages and libraries continue to be created.

balder1991 2 days ago | parent | next [-]

Besides, the issue of repetitive beginner questions, which today could be answered with an LLM, was a significant driver of low-quality content, requiring substantial intervention from StackOverflow.

However, your point stands: as new technologies develop, StackOverflow will be the main platform where relevant questions gain visibility through upvotes.

Andrew_nenakhov 2 days ago | parent | next [-]

Obscure problems would get no visibility though — because of their obscurity.

CamperBob2 2 days ago | parent | prev [-]

If it were just a matter of upvotes and downvotes, that would be one thing, but voting to close a question for being a "duplicate," forcibly terminating an emerging discussion because somebody asked something vaguely similar 10 years ago for a completely different platform or language, is just nuts.

Or closing a general question because in the opinion of Someone Important, it runs afoul of some poorly-defined rule regarding product recommendations.

A StackOverflow that wasn't run like a stereotypical HOA would be very useful. The goal should be to complement AI rather than compete with it.

balder1991 2 days ago | parent [-]

You’re right, but I suspect that it became this hostile to beginners because of the constant flood of repetitive questions. It's possible that with LLMs doing this filtering, the community will loosen up on the hostility as more and more of the new questions are things LLMs can't answer.

Chinjut 2 days ago | parent | prev [-]

A scary (if not particularly original) thought: if people become utterly reliant on LLMs and no longer embrace any new language etc. for which there is insufficient LLM training data, new languages etc. will no longer be created.

CamperBob2 2 days ago | parent | next [-]

If languages stop being created, it will be because there won't be a need for them. That's not necessarily a bad thing.

Think of programming languages as you currently think of CPU ISAs. We only need so many of those. And at this point, machine-instruction architecture has diverged so far from traditional ISAs that it no longer gets called that. Instead of x86 and ARM and RISC-V we talk about PTX and SASS and RDNA. Or rather, hardly anyone talks about them, because the interesting stuff happens at a higher level of abstraction.

shadowgovt 2 days ago | parent | prev [-]

Possible, but I think unlikely. New languages already suffer this uphill battle because they don't yet have a community to do Q&A like entrenched languages; their support is the documentation, source code of implementations, and whatever dedicated userbase they have as a seed for future community. People are currently utterly reliant on community-based support like StackOverflow, and new languages continue to be born.

01100011 2 days ago | parent | prev | next [-]

Google is the only serious competition to Nvidia right now. AI is both a threat to their core business and a core strength of their business. They invented transformers and a cheap inference chip. Their models are top-tier. I think google will be fine.

mprovost 2 days ago | parent [-]

While they invented transformers, I'm not convinced that they've figured out a way to monetise them in the same way that they spent decades optimising their search results page into a money printing machine. Kodak invented the digital camera...

unleaded 2 days ago | parent | prev | next [-]

>(Btw, someone gave me the hint to set its personality mode to "Robot", and that really helps make it less annoying!)

Kimi K2's output style is something like a mix of Cynic and Robot as seen here https://help.openai.com/en/articles/11899719-customizing-you... and I absolutely love it. I think more people should give it a try (kimi.com).

grumbel 2 days ago | parent | prev | next [-]

> AI is a competitor.

AI isn't competition for Google, AI is technology. Not only is Google using AI themselves, they are pretty damn near the top of the AI game.

It's also questionable how this is relevant to Google's past crimes. It's completely hypothetical speculation about the future. Could an AI company rise and dethrone classic Google? Yeah. Could Google themselves be the AI company that does it? Probably, especially when they can continue to abuse their monopoly across multiple fields.

There is also the issue that current AI companies are still just bleeding money; none of them have figured out how to make money.

rendaw 2 days ago | parent | prev | next [-]

Is StackOverflow dead now? And because of AI?

It still usually has the standard quality of answers for most questions I google. I google fewer questions because modern languages have better documentation cultures.

tgsovlerkhgsel 2 days ago | parent | next [-]

All my stackoverflow-style queries are now going to whatever AI chatbot is most accessible when I need my answer.

They tend to provide answers that are at least as correct as StackOverflow (i.e. not perfect but good enough to be useful in most cases), generally more specific (the first/only answer is the one I want, I don't have to find the right one first), and the examples are tailored to my use case to the point where even if I know the exact command/syntax, it's often easier to have one of the chatbots "refactor" it for me.

You still want to use them only when you can verify the answer and verifying won't take more time. I recently asked a bot to explain an rsync command line, then found myself verifying the answers against the man page anyway (i.e. I could have just used the man page from the start) - and while the first half of the answer was spot on, the second half contained complete hallucinations about what the arguments meant.

ozgrakkurt 2 days ago | parent | next [-]

I am using free chatgpt and free deepseek mostly.

They are both terrible in terms of correctness compared to duckduckgo->stackoverflow.

As an example, deepseek makes stuff up if I ask what syscall to use for deleting directories. And it really misleads me in a convincing way. If I search, I end up in the man page and can eventually figure it out after 2-3 minutes.

skinkestek 2 days ago | parent | prev [-]

Also with AI, I get an answer instantly—no snark, no misunderstanding my question just to shut it down, and no being bounced around to some obscure corner of Stack Exchange.

IshKebab 2 days ago | parent | prev [-]

Yes: https://blog.pragmaticengineer.com/stack-overflow-is-almost-...

Raw data here if you want an update: https://data.stackexchange.com/stackoverflow/query/1882532/q...

It hasn't got better - down from a peak of 300k/month to under 10k/month.

NooneAtAll3 2 days ago | parent | prev | next [-]

so what you're saying is that we're about to get ad-spammed AI as well...

keithnz 2 days ago | parent | prev | next [-]

I pretty much exclusively use ChatGPT search now. The best thing about it is you can ask all kinds of follow-up questions.

jeffhwang 2 days ago | parent | prev | next [-]

I personally have moved almost all my Stack Overflow usage to LLMs. Just wondering if other folks have done the same…

balder1991 2 days ago | parent | next [-]

The thing is, a lot of the questions users have aren't unique, maybe just asked with a slightly different context, and LLMs are good at adapting answers to other contexts.

But it only works for stuff that is already consolidated. For example, something like a new version of a language will certainly spark new questions that can only be discussed with other programmers.

brookst 2 days ago | parent [-]

> something like a new version of a language will certainly spark new questions that can only be discussed with other programmers.

I'm not sure this is true? Most languages have fairly open development processes, so discussions about the changes are likely indexed in the web search tools LLMs use, if not in the training data itself. And LLMs are very good at extrapolating.

littlecranky67 2 days ago | parent | prev | next [-]

I have moved almost all of my internet search to LLMs (bing chat and perplexity, both work without login with firefox tmp containers).

smohare 2 days ago | parent | prev [-]

[dead]

1oooqooq 2 days ago | parent | prev | next [-]

quit that narrative! stack overflow is dead because it's garbage! try to visit it without being logged in, the entire screen is covered by four halfscreen popups! then the search is useless and requires you to be logged in. when you finally give in, the answer is deleted by overzealous power-tripping users.

it's a miracle it survived that long. and i think its saving grace was that nobody wanted to browse reddit at work, nothing else.

so tired of AI apologists exploiting this isolated case as if it is some proof AI is magic and a solution to anything. it's all so inane and exposes how that side is grasping at straws.

sahila 2 days ago | parent | next [-]

It's not the only example; Chegg's another one.

croemer 2 days ago | parent | prev [-]

And now they're going to enshittify the comment UI

rafark 2 days ago | parent | prev | next [-]

Correct. I’ve been using ai chatbots more and more instead of google search (I still use google quite a lot but considerably less than a year or two ago).

...but ironically that chatbot is Gemini from ai studio, so still the same company but a different product. Google search will look very different in the next 5-10 years compared to the same period a decade ago.

harmmonica 2 days ago | parent | prev | next [-]

Exactly this. Another way of putting it is that LLMs are doing all the clicking, reading, researching and many times even the "creating" for me. And I can watch it source things and when I need to question whether it's hallucinating I get a shortcut because I can see all the steps that went into finding the info it's presenting. And on top of it replacing Google Search it's now creating images, diagrams, drawings and endless other "new work" that Google search could never do for me in the first place.

I swear in the past week alone things that would've taken me weeks to do are taking hours. Some examples: create a map with some callouts on it based on a pre-existing design (I literally would've needed several hours of professional or at least solid amateur design work to do this in the past; took 10 minutes with ChatGPT). Figure out how much a rooftop solar system's output would be compromised based on the shading of a roof at a specific address at different times of the day (a task I literally couldn't have completed on my own). Structural load calculations for a post in a house (another one I couldn't have completed on my own). Note some of these things can't be wrong so of course you can't blindly rely on ChatGPT, but every step of the way I'm actually taking any suspicious-sounding ChatGPT output and (ironically I guess) running keyword searches on Google to make sure I understand what exactly ChatGPT is saying. But we're talking orders of magnitude less time, less searching and less cost to do these things.

Edit: not to say that the judge's ruling in this case is right. Just saying that I have zero doubt that LLM's are an existential threat to Google Search regardless of what Google's numbers said during their past earnings call.

qnleigh 2 days ago | parent [-]

> Structural load calculations for a post in a house

You're relying on ChatGPT for this? How do you check the result? That sounds kind of dangerous...

harmmonica 2 days ago | parent [-]

Not dangerous in this implementation. I knew going in there was likely significant margin for error. I would not rely on ChatGPT if I was endangering myself, my people or anyone else for that matter (though this project is at my place).

That said, the word "relying" is taking it too far. I'm relying on myself to be able to vet what ChatGPT is telling me. And the great thing about ChatGPT and Gemini, at least the way I prompt, is that it gives me the entire path it took to get to the answer. So when it presents a "fact," in this example a load calculation or the relative strength of a wood species, for instance, I take the details of that, look it up on Google and make sure that the info it presented is accurate. If you ask yourself "how's that saving you time?", the answer is that, in the past, I would've had to hire an engineer to get me the answer because I wouldn't even quite be sure how to get the answer. It's like the LLM is a thought partner that fills the gap in my ability to properly think about a problem, and then helps me understand and eventually solve the problem.

ozgrakkurt 2 days ago | parent | next [-]

How you “vet” something technical that you can’t even do yourself is beyond me.

Vetting things is very likely harder than doing the thing correctly.

Especially when the thing you are vetting is designed to look correct more than to actually be correct.

You can picture a physics class where the teacher gives a trick problem/solution and 95% of the class doesn’t realize it until the teacher walks back and explains it.

harmmonica a day ago | parent [-]

Hey, just replied to a sibling comment of yours that sort of addresses your commentary. Just in case you didn't read it because I didn't reply to you directly. One thing that reply didn't cover and I'll add here: I disagree that the LLM is actually designed to look correct more than it's trying to actually be correct. I might have a blind spot, but I don't think that is a logical conclusion about LLM's, but if you have special insight about why that's the case please do share. That does happen, of course, but I don't think that is intentional, part of the explicit design, or even inherent to the design. As I said, open to being educated otherwise.

lazide 2 days ago | parent | prev [-]

Nothing about what you are describing sounds sane or legal in most jurisdictions. You still need a structural engineer. None of the sources you are describing are reliable.

harmmonica a day ago | parent [-]

The sources are reliable. There are prescribed sources for lumber products, fasteners, etc. in residential construction, at least in the US, that are just as accessible to you and me online as they are to structural engineers. Those same sources are what the engineers themselves rely on to do their work, or, more likely, most engineers rely on software that has those sources built in and don't ever reference the primary sources. All the information you need to make concrete, empirical decisions about things like posts in residential construction are available online and don't require an engineering degree to figure out. LLM's are great at taking the uncertain language you input and finding all the sources, and the calculations, for you so you don't have to spend hours digging around on Google to find a "document" you didn't know how to search for, that then has 600 numbers on it that you have to spend more time discerning which number is the right one to use. Or which calculation out of the infinite number out there is the right one for your case. Kind of like a skeleton key or maybe a dictionary that equips you with the language you don't yet know to get to the bottom of something you don't yet fully understand.

Btw, I would not trust an LLM to tell me how to build a suspension bridge. First, I'm unfamiliar with that space. Second, even if I was familiar, the stakes are, as you say, so high that it would be insane to trust something so complex without expert sign off. The post I'm specifically talking about? Near-zero stakes and near-zero risk.

<stepping on the soapbox> I beg folks to always try and pierce the veil of complexity. Some things are complex and require very specialized training and guardrails. But other complexity is fabricated. There are entrenched interests who want you to feel like you can't do certain things. They're not malicious, but they sometimes exist to make or protect money. There are entire industries propped up by trade groups that are there to make it seem like some things are too complex to be done by laypeople, who have lobbied legislators for regulations that keep folks like you from tackling them. And if your knee-jerk reply is that I'm some kind of conspiracy theorist or anarchist all I'm saying is it's a spectrum. Suspension bridge with traffic driving over it --> should double, triple, quadruple check with professional(s); a post in a house supporting the entire house's load (exaggeration for effect) --> get a single professional to sign off; a post in a house that's supporting a single floor joist with minimal live and dead load (my case!) --> use an LLM to help you DIY the "engineering" to get to good enough (good enough = high margin for error); replace a light switch --> DIY YouTube video.

I am the king of long-winded HN posts. Obviously the time I took to write this (look, ma, no LLM!) is asymmetric with what you wrote, but I'm genuinely wondering if any of this makes you think differently. If not, that's cool of course (and great for the engineers and permit issuers!).

lazide a day ago | parent [-]

The issue here is you still don’t know what you don’t know. But you think you do.

The reason you hire a structural engineer is because they do - and they are on the hook if it goes wrong. Which is also why they have to stamp drawings, etc.

Because the next person who owns the house should have some idea who was screwing with the structure of it.

You might be 100% on top of it - in which case that structural engineer should have no problem stamping your calcs eh?

harmmonica a day ago | parent [-]

Ah, nice, thanks so much for actually sticking around to reply. I mean, I get what you're saying, and I know I won't be able to convince you otherwise, but I'll repeat that structural engineering can be complex, but it's not always and a lot of it is prescriptive.

The only other thing I'll add is the ideal vs. the reality. What percent of structural projects done to single-family construction, in particular, do you think is done by engineers? I would guess it's far less than 50%. That's based on my own experience working in the industry, which I know you won't trust (why would you? Random internet guy after all). But for conversation's sake suffice it to say that I believe every time you walk into a house that's several decades old or older you're likely walking into a place that has been manipulated structurally without an engineer's stamp. And the vast majority (99%+ of the time) it's perfectly safe to be in that space.

lazide a day ago | parent [-]

Of course - but if you’ve gone behind 99% of people doing their own electrical, you’ll also understand why I’m saying what I’m saying.

Everyone thinks they are the exception. Occasionally, one of them is even right, eh?

harmmonica a day ago | parent [-]

Think I've done that and I'll raise you an "and yet barely any houses burn down due to their electrical!" I actually jest. But electrical is one of those things that anyone can do, but ought to be done with the utmost care (and in the US many jurisdictions allow DIY electrical if you're doing the work for your own place).

And just to clarify I don't think I'm the exception. I was actually making the opposite argument. Almost anyone can and should attempt to deconstruct complexity because doing things is not always as difficult as it would seem (or as difficult as we've been told).

Appreciate the dialogue, lazide!

lazide a day ago | parent [-]

Eh, I’ve dealt with enough people to say nah - people really should not be doing their own structural engineering, or electrical.

It isn’t due to ‘complexity’ either - rather indifference, laziness, or just plain stupidity.

I’ve seen people almost burn down their places multiple times - and at least one family actually die from an electrical fire. Also, partial building collapses.

The reason you don’t see it more often is because people generally don’t actually try.

scotty79 2 days ago | parent | prev | next [-]

Google does AI.

matthewfcarlson 2 days ago | parent | prev [-]

AI has a huge advantage over search. It gets to the question you want answered rather than adjacent search terms. I honestly trust the congealed LLM slop over the piecemeal SEO optimized AI slop for many questions.

How long is the rear seat room in the 2018 XX Yy car? What is the best hotel to stay at in this city? I’m interested in these things and not interested in these amenities. I have leftovers that I didn’t like much, here’s the recipe, what can I do with it? (it turned it into a lovely soup btw).

These are the types of questions many of us search for and don’t want to wade through a small ocean of text to get the answer to. Many people just stick Reddit on the query for that reason.

blinding-streak a day ago | parent [-]

Have you noticed that Google has AI answers built into search results?