| ▲ | porcoda 4 days ago |
| Not a big surprise once LLMs came along: Stack Overflow developed some pretty unpleasant traits over time - everything from legitimate questions being closed for no good reason (or being labeled a duplicate even though they often weren't) and out-of-date answers that never get updated as tech changes, to a generally toxic and condescending culture amongst the top answerers. For all their flaws, LLMs are so much better. |
|
| ▲ | palata 4 days ago | parent | next [-] |
| Agreed. I personally stopped contributing to StackOverflow before LLMs, because of the toxic moderation. Now with LLMs, I can't remember the last time I visited StackOverflow. |
| |
| ▲ | Alupis 4 days ago | parent | next [-] | | People in this thread are missing another key component in the decline of StackOverflow - the more experienced you become, the less useful it is. The harder the problem, the less engagement it gets. People who spend hours working on your issue are rewarded with a single upvote. Meanwhile, "how do I concat a string" gets dozens or hundreds of upvotes. The incentive/reward structure punishes experienced folks with challenging/novel questions. Pair that with the toxic moderation and trigger-happy close-votes, and you get a zombie community with little new useful content. | | |
| ▲ | parpfish 4 days ago | parent | next [-] | | Also: the bigger the corpus of already-answered questions, the more likely it is that you can just look up an answer instead of asking. Eventually SO becomes a site exclusively for lurkers instead of a platform for active participation. | | |
| ▲ | sdenton4 4 days ago | parent [-] | | On the other hand, another week, another JavaScript framework, amirite? There continues to be new stuff to ask questions about, but Stack Overflow failed to be the default location for new stuff. I guess now there's more discussion directly on GitHub and Discord. | | |
| ▲ | speedgoose 4 days ago | parent [-] | | The JavaScript ecosystem has mostly stabilised. React is 12 years old for example. |
|
| |
| ▲ | bmacho 4 days ago | parent | prev | next [-] | | > People in this thread are missing another key component in the decline of StackOverflow - the more experienced you become, the less useful it is. This is literally not true. The rate at which you learn and encounter new things depends on many things - you, your mood, your energy, etc. - but not on the amount of experience you have. > The harder the problem, the less engagement it gets. People who spend hours working on your issue are rewarded with a single upvote. This is true, but not relevant; I don't think many people care. Some might, but not many. | | |
| ▲ | zdc1 4 days ago | parent | next [-] | | I don't know what your experience has been, but I do feel that at some point you will find yourself on or beyond SO's "knowledge frontier". The questions you land on will be unanswered or have equally confused replies; or you might be the one asking the question instead. I've "paid back" by leaving a high-quality response on unanswered SO questions that I've had to figure out myself, but it felt quite thankless, since even the original poster would disappear, and anyone who found my answer from Google wouldn't be able to give me an upvote either. | | |
| ▲ | YetAnotherNick 4 days ago | parent | next [-] | | No, you don't. Not only are there many examples of detailed Stack Overflow articles written by absolute experts, you also often need an answer for something trivial (which is like half of my ChatGPT usage), e.g. how to export in pgAdmin, or a nondescriptive error in Linux. | | |
| ▲ | palata 3 days ago | parent [-] | | > No, you don't. When someone says "I feel like" and you answer "No, you don't", you're most certainly wrong :-). I do feel the same way as the parent. | | |
| ▲ | YetAnotherNick 3 days ago | parent [-] | | If you read the parent's comment, it's not an "I feel like" comment, even though he used the phrase. I have been in software engineering for a long time, and my queries to Stack Overflow/ChatGPT combined haven't decreased. | | |
| ▲ | palata 2 days ago | parent [-] | | > I don't know what your experience has been, but I do feel Are you being serious here? |
|
|
| |
| ▲ | palata 3 days ago | parent | prev [-] | | > I've "paid back" by leaving a high-quality response on unanswered SO questions that I've had to figure out myself I used to do that, but then the moderation got in the way. So I stopped. |
| |
| ▲ | luckylion 4 days ago | parent | prev [-] | | I think it is true, but not because you have nothing more to learn when you're experienced - rather, there are fewer and fewer people on SO to answer the questions that you encounter as you get more and more experienced. I've answered about 200 questions. I've asked two, and both remain unanswered to this day. One of them had comments from someone who clearly was out of their league but wanted to be helpful. The people who could've answered those questions are not (or were not at that time) on SO. | | |
| ▲ | palata 3 days ago | parent [-] | | The more experienced I got, the subtler my questions and answers became. The few times I asked a question, I would start by saying "it may look similar to this, this, and that question, but it is not", only to see my question get closed as a duplicate by moderators. If the moderators are not competent enough to tell whether your question is a duplicate, and close it as one when in doubt, then that contributes to the toxic atmosphere, maybe? |
|
| |
| ▲ | whiterook6 4 days ago | parent | prev | next [-] | | Here's my brilliant idea: the longer it takes for an answer to be marked correct, or the more answers there are before one is marked correct, the more points that answer deserves. | | |
| ▲ | xp84 4 days ago | parent | next [-] | | The idea of one "accepted answer" there always bugged me. The correct/best answer to many things changes radically over time. For instance, the only sane way to do a lot of things in "JavaScript" in 2009 was to install jQuery and use it. Most of those same things can (and should) be done just as succinctly with native code today, but the accepted answers in practice were rarely updated or changed. I don't even know if you could retroactively re-award it to a newer answer years later. Since the gamification angle was so prominent, that might rob the decade-old author of their points for their then-correctness, so idk if they even allowed it. | |
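Roughly the kind of before/after I mean - just a sketch, with the class name and endpoint invented for illustration:

    // 2009-era accepted answer (jQuery):
    //   $('.item').addClass('active');
    //   $.getJSON('/api/items', data => render(data));

    // The same things with native APIs today:
    document.querySelectorAll('.item').forEach(el => el.classList.add('active'));

    fetch('/api/items')
      .then(res => res.json())
      .then(data => console.log(data));

Both versions do the same job; the jQuery one is just what the old accepted answers froze in place.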
| ▲ | Cordiali 4 days ago | parent [-] | | I noticed a similar thing with Python 3 questions closed as duplicates of Python 2 questions. Why they weren't collated and treated as a living document is beyond me. | |
| ▲ | palata 2 days ago | parent | next [-] | | My feeling is that many times the moderators are not competent enough to decide correctly. They could go with "when in doubt, keep the duplicate", but they chose the opposite. Meaning that instead of happy users and some duplicates, they have no duplicates and no more users. | |
| ▲ | 4 days ago | parent | prev [-] | | [deleted] |
|
| |
| ▲ | frickinLasers 4 days ago | parent | prev [-] | | How about if people with a higher reputation contribute an exponentially higher score when voting? Like, someone with ten top-rated answers has a 1,000-point vote (more nuanced than that, obviously). |
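A naive sketch of that idea (the numbers are invented; this isn't an actual SO mechanism): a voter's weight doubles with each top-rated answer, with a cap so it can't run away.

    // Ten top-rated answers => 2 ** 10 = 1024, roughly the "1,000-point vote" above.
    function voteWeight(topRatedAnswers: number, cap = 1024): number {
      return Math.min(2 ** topRatedAnswers, cap);
    }

    voteWeight(0);  // 1
    voteWeight(10); // 1024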
| |
| ▲ | stirfish 4 days ago | parent | prev | next [-] | | Remember when the R developers would ask and answer their own basic questions about R, essentially building up a beginner tutorial on Stack Overflow? That was a cool time. | |
| ▲ | fragmede 4 days ago | parent [-] | | Human psychology is fascinating. If I say I'm cool, I'm full of myself. If someone else says that I'm cool, that hits different. So is reverse psychology. | | |
| |
| ▲ | ChrisMarshallNY 4 days ago | parent | prev | next [-] | | This has been my experience. My initial (most popular) questions (and I asked almost twice as many questions as I gave answers) were pretty basic, but they got a lot more difficult as time went on, and they almost always went unanswered (I often ended up answering my own question after I figured it out on my own). I was pretty pissed at this, because the things I encountered were the types of things that people who ship encounter; not academic exercises. Tells me that, for all the bluster, a lot of folks on there don't ship. LLMs may sometimes give pretty sloppy answers, but they are almost always ship-relevant. |
| ▲ | nerdponx 4 days ago | parent | prev | next [-] | | That might be true on Stack Overflow, but not on other network sites like Cross Validated, which was killed by the community being split across multiple SE sites and by longtime users quitting in protest over various policies without being replaced. | |
| ▲ | emodendroket 4 days ago | parent [-] | | I think there's a basic problem that the original revenue model for the site just didn't work (I mean, they wouldn't have shut down Stack Overflow Jobs if that actually made them any money) and anything they were able to do to fix that pissed people off. | | |
| ▲ | smcin 4 days ago | parent [-] | | Stack Overflow Jobs was a superb, uncluttered, direct interface to the hiring manager, with accurate details about a position. So when they canned it (but kept their advertising revenue stream plus started "SO for Teams" in 2018), that was a major canary that the whole revenue model wasn't viable, at least for independent developers. | | |
| ▲ | emodendroket 3 days ago | parent [-] | | Well I think part of the problem here is that, by all accounts, developers loved it, but they're not the actual paying customer. | | |
| ▲ | smcin 3 days ago | parent [-] | | If SO wanted to keep experienced developers on their site and contributing content for free, it shouldn't have been unthinkable to find some model to fund SO Jobs. Yahoo is one cautionary tale of what happens when a site pursues more or lower-quality advertising revenue without regard for losing users. "Sunsetting Jobs & Developer Story" 3/2022 https://meta.stackoverflow.com/questions/415293/sunsetting-j... | | |
| ▲ | emodendroket 15 hours ago | parent [-] | | The point of SO Jobs was to fund the rest of the site. You're saying they should have subsidized what was meant to be the revenue driver too? | | |
| ▲ | smcin 10 hours ago | parent [-] | | Don't distort my words. If SO Jobs was one of the key engagement features bringing thousands of experienced developers to SO to contribute free content (and the site was valued at $1.8bn in the acquisition), then any reasonable accounting would find those features were cash-positive. (That seems comparable to arguing that Facebook shouldn't subsidize posting baby photos). But if it was the case that SO mgmt decided (2017-2020) that they didn't care to keep experienced users engaged, and just let the site degenerate into new users posting bigger volumes of duplicates, questions without code, etc., then that would be on them. You don't have to assume their actions were rational; look how badly they mismanaged moderation in that period and how many experienced users that lost them. | | |
| ▲ | emodendroket 7 hours ago | parent [-] | | I think that it is simultaneously the case that 1) SO Jobs had job-seekers who loved it 2) it was not actually a major draw to the site 3) it didn't make money, despite 4) being primarily intended as a monetization mechanism. You are starting from different premises you didn't bother stating and then accusing me of being dishonest for not divining them. |
|
|
|
|
|
|
| |
| ▲ | johnnyanmac 3 days ago | parent | prev | next [-] | | Yeah, I think this is the real answer. I still pop into SO when I'm learning a new language or trip into simple new questions (in my case, how to connect and test a local server). But when you're beyond the weeds, SO is at best an oasis in the desert: half the time a mirage, nice when it does help out, but rare either way. I don't use LLMs either. But the next generation might feel differently, and those trends mean there are no new users coming in. |
| ▲ | tmsbrg 4 days ago | parent | prev | next [-] | | Maybe there's a key idea here for something to replace StackOverflow as a human tech Q&A forum: a system that somehow incentivizes asking and answering these sorts of challenging and novel questions. These are the questions that will not easily be answered using LLMs, as they require more thought and research. |
| ▲ | why-o-why 4 days ago | parent | prev | next [-] | | Wasn't there a "bounty" program where, if a question had a lot of views but no answers, answering it rewarded more internet ego points? | |
| ▲ | viraptor 4 days ago | parent [-] | | Not automatically. You could add a bounty using your own points if the question didn't get an accepted answer in 2 days. Which is kinda cool, but also very biased toward older contributors. I could drop a bounty of thousands of points without thinking about it, but new users couldn't afford the attention they needed. |
| |
| ▲ | cyberdick 4 days ago | parent | prev | next [-] | | There are also GitHub Issues and Discussions now, which help with asking these niche questions directly to the team responsible. I don't ask questions about a library on SO; I just ask on the library's GitHub and I get immediate answers. |
| ▲ | 4 days ago | parent | prev | next [-] | | [deleted] | |
| ▲ | timeon 4 days ago | parent | prev | next [-] | | > the more experienced you become, the less useful it is This is the killer feature of LLMs - you will not become more experienced. |
| ▲ | encom 4 days ago | parent | prev [-] | | >toxic moderation and trigger-happy close-votes >zombie community Like Reddit post 2015. | | |
| ▲ | fabian2k 4 days ago | parent | next [-] | | Stack Overflow moderation is very transparent compared to whatever Reddit considers moderation. For programming my main problem with Reddit is that the quality of posts is very low compared to SO. It's not quite comparable because the more subjective questions are not allowed on SO, but there's a lot of advice on Reddit that I would consider harmful (often in the direction of adding many more libraries than most people should). | |
| ▲ | 4 days ago | parent | prev [-] | | [deleted] |
|
| |
| ▲ | eurekin 4 days ago | parent | prev | next [-] | | Same here. I just didn't want to expend energy racing trigger-happy mods. It was so odd; to this day I remember vividly how they cleaned up their arguments once proven wrong on the closing vote - literally minutes before it would hit the close threshold. |
| ▲ | lysace 4 days ago | parent | prev | next [-] | | Gen 0: expertsexchange.com, later experts-exchange.com (1996) Gen 1: stackoverflow.com (2008) Gen 2: chatgpt.com (2022, sort of) | | | |
| ▲ | why-o-why 4 days ago | parent | prev | next [-] | | And you can't delete your post when you realize how awful it was years later! That anti-information sticks around for ages. Even worse when there are bad answers attached to it, too. | | |
| ▲ | immibis 4 days ago | parent | next [-] | | Fun story: SO officially states comments are ephemeral and can be deleted whenever, so I deleted some of my comments. I was then banned. After my ban expired I asked on the meta site if it was okay to delete comments. I was banned again for asking that. | |
| ▲ | Ajedi32 4 days ago | parent | prev | next [-] | | If you're talking about deleting questions, that's because deleting the question would delete everyone's answers that they potentially worked very hard on and which others might find useful. If you think the answers are bad you can always post your own competing answer. | | |
| ▲ | conorcleary 4 days ago | parent | next [-] | | "A Human commented at ##:##pm"
"An AI Bot commented at..."
"A suspected AI Bot commented at..."
"An unconfirmed Human commented at..." | |
| ▲ | why-o-why 4 days ago | parent | prev [-] | | Ya, but you assume someone worked hard on the answer. There are a lot of times when you get garbage top to bottom. |
| |
| ▲ | what 4 days ago | parent | prev [-] | | You can’t delete anything here either… so make sure you don’t say anything awful. | | |
| ▲ | why-o-why 4 days ago | parent [-] | | Create a new account every few weeks and don't forget to mix up your writin' style to fake out stylometrics. It's all against the rules, but I disagree with HN's terms. Internet points don't mean crapola to me, but I like dropping in here every now and then to chit chat. I should have the right to be anonymous and non-identifiable here and to speak freely. Of course IP addresses *are* tracked here and you can easily be shadowbanned. I don't say anything awful, but I am naturally an asshat and I just can't seem to change my spots. 90% of the time I'm OK, but 10% of the time I'm just a raving tool. |
|
| |
| ▲ | xp84 4 days ago | parent | prev | next [-] | | The dumbest part of SO is how the accepted answer would often be bad, and sometimes someone had posted a better answer after the fact; if the all-powerful moderators had the power to update the accepted answer, they sure never did. Likewise, there were often better insights in comments. Apparently if you have the right mod powers, you can just edit an answer (such as the accepted one) to make it correct, but that always struck me as a bizarre feature, to put words in other people's mouths. I think overall SO took the gamification, and the "internet points" idea, way too far. As a professional, I don't care about Reddit Karma or the SO score or my HN karma. I just want answers that are correct, and a place to discuss anything that's actually interesting. I did value SO once as part of the tedious process of attempting to get some technical problem solved, as it was the best option we had, but I definitely haven't been there since 2023. RIP. | |
| ▲ | halapro 4 days ago | parent [-] | | > took gamification way too far I disagree; I always thought SO did a great job with it. The only part I would have done differently would be to cap the earnable points per answer. @rndusr124 shouldn't have moderation powers just because his one and only 2009 answer got 3589 upvotes. |
| |
| ▲ | baby 4 days ago | parent | prev [-] | | The same is true for Reddit, IMO; it became impossible to post anything to a subreddit way before LLMs. |
|
|
| ▲ | Salgat 4 days ago | parent | prev | next [-] |
| Seemed like for every other question, I received unsolicited advice telling me how I shouldn't be doing it this way, only for me to have to explain why I wanted to do it this way (with silence from them). |
| |
| ▲ | bmacho 4 days ago | parent | next [-] | | This is called the XY problem https://meta.stackexchange.com/a/66378 . You ask for X, I tell you that what you really want is Y, I bully you, and I become more convinced that you and people that ask for X want Y. | |
| ▲ | jbreckmckye 4 days ago | parent | prev | next [-] | | Oh, I love that game! (At least I think it's a game.) You ask how to do X. Member M asks why you want to do X. Because you want to do Y. Well!? Why do you want to do Y?? Because Y is on T and you can't do K, so you need a Z. Well! Well! Why do you even use Z?? Clearly J is the recommended way now! Because Z doesn't work in a FIPS environment. ... Can you help me? ... I just spent 15 minutes explaining X, Y and Z. Do you have any help? ... (crickets) | |
| ▲ | Philip-J-Fry 4 days ago | parent | next [-] | | To be fair, asking why someone wants to do something is often a good question. Especially in places like StackOverflow where the people asking questions are often inexperienced. I see it all the time professionally too. People ask "how do I do X" and I tell them. Then later on I find out that the reason they're asking is because they went down a whole rabbit hole they didn't need to go down. An analogy I like is imagine you're organising a hike up a mountain. There's a gondola that takes you to the top on the other side, but you arrange hikes for people that like hiking. You get a group of tourists and they're all ready to hike. Then before you set off you ask the question "so, what brings you hiking today" and someone from the group says "I want to get to the top of the mountain and see the sights, I hate hiking but it is what it is". And then you say "if you take a 15 minute drive through the mountain there's a gondola on the other side". And the person thanks you and goes on their way because they didn't know there was a gondola. They just assumed hiking was the only way up. You would have been happy hiking them up the mountain but by asking the question you realised that they didn't know there was an easier way up. It just goes back to first principles. The truth is sometimes people decide what the solution looks like and then ask for help implementing that solution. But the solution they chose was often the wrong solution to begin with. | | |
| ▲ | magicalhippo 4 days ago | parent | next [-] | | The well-known XY problem [1]. I spent years on IRC, first getting help and later helping others. I found it very useful myself to ask such questions when someone I didn't know asked a somewhat unusual question. The key is that if you're going to probe for Y, you usually need to be fairly experienced yourself so you can detect the edge cases where the other person has a good reason. One approach I usually ended up going for, when it appeared the other person wasn't a complete newbie, was to first explain that I thought they were trying to solve the wrong problem or otherwise going against the flow, and that there was probably some other approach that's much better. Then I'd follow up with something like "but if you really want to proceed down this track, this is how I'd go about it", along with my suggestion. [1]: https://en.wikipedia.org/wiki/XY_problem |
| ▲ | rendaw 4 days ago | parent | prev | next [-] | | It's great when you're helping people one-on-one, but it's absolutely terrible for a Q&A site where questions and answers are expected to be helpful to other people going forward. I don't think your analogy really helps here; it's not a question. If the question was "How do I get to the top of the mountain" or "How do I get to the top of the mountain without hiking", the answer to both would be "Gondola". |
| ▲ | marcosdumay 4 days ago | parent | prev | next [-] | | > Especially in places like StackOverflow where the people asking questions are often inexperienced. Except that SO has a crystal-clear policy that answers to questions should be helpful for everybody reaching them through search, not only the person asking. And that questions should never be asked twice. So if by chance, after all this dance, the person asking the question actually needs the answer to a different question, you'll just answer it with some completely unrelated information, and that will be the mandatory correct answer for everybody who has the original problem for any reason. | |
| ▲ | abanana 4 days ago | parent [-] | | Yes exactly. The fact that the "XY problem" exists, and that users sometimes ask the wrong question, isn't being argued. The problem is that SO appears to operate at the extreme, taking the default assumption that the asker is always wrong. That toxic level of arrogance (a) pushes users away and (b) ...what you said. | | |
| ▲ | lovehashbrowns 4 days ago | parent [-] | | Which is why LLMs are so much more useful than SO and likely always will be. LLMs do this too, even. Like when I try to write my own queue from scratch and ask an LLM for feedback - I think it's Gemini that often tells me Python's deque is better. Duh! That's not the point. So I've gotten into the habit of prefacing a lot of my prompts with "this is just for practice" or things of that nature. It actually gets annoying, but it's 1,000x more annoying finding a question on SO that is exactly what you want to know, but it's closed and the replies are like "this isn't the correct way to do this" or "what you actually want to do is Y". |
|
| |
| ▲ | IAmGraydon 4 days ago | parent | prev [-] | | >I see it all the time professionally too. People ask "how do I do X" and I tell them. Then later on I find out that the reason they're asking is because they went down a whole rabbit hole they didn't need to go down. Yep. The magic question is "what are you trying to accomplish?". Oftentimes people lacking experience think they know the best way to get the results they're after and aren't aware of the more efficient ways someone with more experience might go about solving their problem. |
| |
| ▲ | phoenixy1 4 days ago | parent | prev | next [-] | | My heuristic is that if your interlocutor asks follow-up questions like that with no indication of why (like “why do you want to do X?” rather than “why do you want to do X? If the answer is Y, then X is a bad approach because Q, you should try Z instead”) then they are never going to give you a helpful answer. | |
| ▲ | stirfish 4 days ago | parent | prev | next [-] | | How do I add a second spout to this can? ... Well, the pump at the gas station doesn't fit in my car, but they sold me a can with a spout that fits in my car. ... It's tedious to fill the can a dozen times when I just want to fill up my gas tank. Can you help me or not? ... I understand, but I already bought the can. I don't need the "perfect" way to fill a gas tank, I just want to go home. | | | |
| ▲ | hsbauauvhabzb 4 days ago | parent | prev [-] | | Tbf the problem there is probably FIPS more than anything else. | | |
| ▲ | immibis 4 days ago | parent [-] | | If someone is paying you to implement a security vulnerability and you've told them and you don't have liability, you just do it. That's how capitalism works. You do whatever people give you money for. | | |
| ▲ | hsbauauvhabzb 3 days ago | parent [-] | | I wasn't referring to vulnerabilities; I was referring to arbitrary, silly security-theatre controls. But I'd hate to deal with you professionally. Gross. |
|
|
| |
| ▲ | econ 4 days ago | parent | prev [-] | | To avoid going insane the mindset should be to produce something useful for future readers. |
|
|
| ▲ | loeg 4 days ago | parent | prev | next [-] |
| Long before LLMs. Setting aside peak-COVID as a weird aberration, question volume has been in decline since 2014 or maybe 2016. |
| |
| ▲ | echelon 4 days ago | parent [-] | | Stack Overflow would still have a vibrant community if it weren't for the toxicity. Imagine a non-toxic Stack Overflow replacement that operated as an LLM + wiki (CC-licensed) with a community to curate it. That seems like the sublime, optimal solution that combines both AI and expertise. Use LLMs to get public-facing answers, and the community can fix things up. No over-moderation for "duplicates" or other SO heavy-handed moderation memes. Someone could ask a question, and an LLM could take a first stab at an answer. The author could correct it or ask further questions, and then the community could fill in when it goes off the rails or can't answer. You would be able to see which questions were too long-tail or difficult for the AI to answer, and humans could jump in to patch things up. This could be gamified with points. This would also serve as fantastic training material for local LLMs. The authors of the site could put in a clause saying that "training is allowed as long as you publish your weights + model". Someone please build this. Edit: Removed "LLMs did not kill Stack Overflow." first sentence as suggested. Perhaps that wasn't entirely accurate, and the rest of the argument stands better on its own legs. | |
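If someone did take a stab at it, a first pass at the data model might look something like this - purely a sketch, every name below is invented and nothing here is an existing API:

    interface Question {
      id: string;
      title: string;
      body: string;
      askedBy: string;
    }

    interface AnswerRevision {
      authorType: 'llm' | 'human';  // first draft comes from the LLM, later edits from humans
      author?: string;              // username, when authorType === 'human'
      body: string;                 // CC-licensed, wiki-style content
      createdAt: Date;
    }

    interface Answer {
      questionId: string;
      revisions: AnswerRevision[];  // full edit history, like a wiki page
      needsHumanReview: boolean;    // surfaced to the community when the LLM is unsure
    }

The interesting part is the revision history: it's what lets the community correct the LLM's first stab, and what would make the result worth training on.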
| ▲ | MPSimmons 4 days ago | parent | next [-] | | The fact that they basically removed the ability to ask 'soft' questions without a definite answer made it very frustrating. There's no definitive answer to a question about best practices, but you can't ask people to share their experiences or recommendations. | |
| ▲ | banana_giraffe 4 days ago | parent [-] | | They actually added some new question categories a while ago [1]. "Troubleshooting / Debugging" is meant for the traditional questions; "Tooling recommendation", "Best practices", and "General advice / Other" are meant for the soft sort of questions. I have no clue what the engagement is on these sorts of categories, though. It feels like a fix for a problem that started years ago, and by this point, I don't really know if there's much hope in bringing back the community they've worked so hard to scare away. It's pretty telling just how much the people who are left hate this new feature. [1] https://meta.stackoverflow.com/questions/435293/opinion-base... | |
| ▲ | MPSimmons 4 days ago | parent [-] | | Oh, that's good that they added them. I stopped being active on the sites a long time ago, so I missed that. |
|
| |
| ▲ | fabian2k 4 days ago | parent | prev | next [-] | | Fixing loads of LLM-generated content is neither easy nor fun. You'll have a very hard time getting people to do that. | | |
| ▲ | echelon 4 days ago | parent [-] | | Hardly.
- A huge number of developers will want to use such a tool. Many of them are already using AI in a "single player" experience mode.
- 80% of the answers will be correct when one-shot for questions of moderate difficulty.
- The long tail of "corrector" / "wiki gardening" / pedantic types will fix the errors. Especially if you gamify it.
Just because someone doesn't like AI doesn't mean the majority share the same opinion. AI products are the fastest-growing products in history. ChatGPT has over a billion MAUs. It's effectively won over all of humanity. I'm not some vibe coder. I've been programming since the '90s, including on extremely critical multi-billion-dollar daily-transaction-volume infra, yet I absolutely love AI. The models have lots of flaws and shortcomings, but they're incredibly useful and growing in capability and scope -- I'll stand up and serve as your counterexample. | |
| ▲ | fabian2k 4 days ago | parent | next [-] | | People answer on SO because it's fun. Why should they spend their time fixing AI answers? It's very tedious, as the kind of mistakes LLMs make can be rather subtle and AI can generate a lot of text very fast. It's a Sisyphean task; I doubt enough people would do it. |
| ▲ | beepbooptheory 4 days ago | parent | prev | next [-] | | I just think you could save a lot of money and energy doing all this but skipping the LLM part? Like, what is supposed to be gained? The moment/act of actual generation of lines of code or ideas, whether human or not, is a much smaller piece of the pie relative to ongoing correction, curation, etc. (like you indicate). Focusing on it and saying it intrinsically must/should come from the LLM misjudges both the intrinsically ephemeral utility of the LLMs and the arguably eternal nature of the wiki at the same time. As sibling says, it turns it into work vs. the healthy sharing of ideas. The whole pitch here just feels like putting gold flakes on your pizza: expensive, and it would not be missed if it wasn't there. Just to say, I'm maybe not as experienced and wise, I guess, but this definitely sounds terrible to me. But whatever floats your boat I guess! |
| ▲ | rob802 3 days ago | parent | prev [-] | | Your points are arguing that the tool would be useful - not that anyone would build it. No one wants to curate what is, essentially, randomly generated text. What an absolute nightmare that would be | | |
| ▲ | fragmede 3 days ago | parent [-] | | > essentially, randomly generated text. You oversimplified and lost too much precision. Try again? |
|
|
| |
| ▲ | zahlman 4 days ago | parent | prev | next [-] | | The community is not "toxic". The community is overwhelmed by newcomers believing that they should be the ones who get to decide how the site works (more charitably: assuming that they should be able to use the site the same way as other sites, which are not actually at all the same and have entirely different goals). I don't know why you put "duplicates" in quotation marks. Closing a duplicate question is doing the OP (and future searchers) a service, by directly associating the question with an existing answer. | |
| ▲ | shrx 4 days ago | parent | prev | next [-] | | > Someone could ask a question, an LLM could take a first stab at an answer. The author could correct it or ask further questions, and then the community could fill in when it goes off the rails or can't answer. Isn't this how Quora is supposed to operate? | | |
| ▲ | Alupis 4 days ago | parent [-] | | Maybe my experience is unique - but Quora seems to be largely filled with adverts-posing-as-answers. | | |
| ▲ | rfmoz 4 days ago | parent [-] | | Quora, sadly, is a good example of enshittification. |
|
| |
| ▲ | 4 days ago | parent | prev | next [-] | | [deleted] | |
| ▲ | eek2121 4 days ago | parent | prev [-] | | Absolutely 100% this. I've used them on and off throughout the years. The community became toxic, so I took my questions to other platforms like Reddit (they became toxic as well) and elsewhere. Mind you, while I'm a relative nobody in terms of open source, I've written everything from emulators and game engines in C++ to enterprise apps in PHP, Java, Ruby, etc. The consistent issues I've encountered are holes in documentation, specifically related to undocumented behavior, and in the few cases I've asked about this on SO, I received either no response and downvotes, or negative responses dismissing my questions and downvotes. Early on I thought it was me. What I found out was that it wasn't. Due to the toxic responses, I wasn't about to contribute back, so I just stopped contributing, and only clicked on an SO result if it popped up on Google, and hit the back button if folks were super negative and didn't answer the question. Later on, most of my answers actually came from GitHub, and 95% of the time, my issues were legitimate ones that would've been mentioned if a decent number of folks used the framework, library, or language in question. I think the tl;dr is this: if you can't provide a positive contribution on ANY social media platform like Stack Overflow, Reddit, GitHub, etc., don't speak. Don't vote. Ignore the question. If you happen to know, help out! Contribute! Write documentation! I've done so on more than one occasion (I even built a website around it and made money in the process due to ignorance elsewhere, until I shut it down due to nearly dying), and in every instance I did so, folks were thankful, and it made me thankful that I was able to help them. (The money wasn't a factor in the website I built; I just wanted to help folks who got stuck in the documentation hole I mentioned.) EDIT: Because I know a bunch of you folks read Ars Technica and certain other sites, I'll help you out: if you find yourself saying that you are being "pedantic", you are the problem, not the solution. Nitpicking doesn't solve problems; it just dilutes the problem and makes it bigger. If you can't help, think three times, and again, don't say anything if your advice isn't helpful. |
|
|
|
| ▲ | gfhifd42 4 days ago | parent | prev | next [-] |
| It doesn't have anything to do with LLMs. It has to do with shifting one's focus from doing good things to making money. Joel did that, and SO failed because of it. Joel promised the answering community he wouldn't sell SO out from under them, but then he did. And so the toxicity at the top trickled down into the community. Those with integrity left, and only toxic, self-centered people remained to destroy what was left in an effort to salvage what little there was for themselves. Mods didn't dupe questions to help the community. They did it to keep their own answers at the top of the rankings. |
| |
| ▲ | oofbey 4 days ago | parent [-] | | How did Joel sell out? Curious as I’m not aware of any monetary changes. I watched Joel several times support completely brain dead policies in the meta discussions which really set the rules and tone. So my respect there is low. | | |
| ▲ | oofbey 4 days ago | parent | next [-] | | They literally sold: https://arstechnica.com/gadgets/2021/06/stack-overflow-sold-... | |
| ▲ | gfhifd42 4 days ago | parent | prev [-] | | He and Jeff made it abundantly clear that their mission was to destroy the "expert sex change" site (Experts Exchange) because that site was immoral for monetizing the benevolence of the community who answered the questions. "Knowledge should be free," they said. "You shouldn't make money off stuff like this," they said. Plenty of links and backstory in my other comments. |
|
|
|
| ▲ | adrum 4 days ago | parent | prev | next [-] |
| I also wonder if GitHub Discussions was a (minor) contributing factor to the decline. I recall using GitHub Discussions more and more when it came to repo-specific issues. The timeline also matches: https://github.blog/changelog/2020-12-08-github-discussions-... https://github.blog/news-insights/product-news/github-discus... |
| |
| ▲ | dom96 4 days ago | parent [-] | | Do we have any stats for the number of GitHub discussions created each month to compare to this? |
|
|
| ▲ | jonahss 4 days ago | parent | prev | next [-] |
| It seemed to me that pre-LLM, Google had stopped surfacing Stack Overflow answers in search results. |
| |
| ▲ | wood_spirit 4 days ago | parent | next [-] | | My memory is that there was a spate of SO-scraping sites that Google would surface above SO and just would not zap. It would have been super trivial to fix, but Google didn't. My pet theory was that Google was getting DoubleClick revenue from the scrapers, so it had incentives to let them scrape and to promote them in search results. | |
| ▲ | xp84 4 days ago | parent [-] | | I remember those too! There were seemingly thousands of them! Reminds me of my most black-hat project - a Wikipedia proxy with two AdSense ads injected into the page. It made me like $20-25 a month for a year or so, but sadly (nah, perfectly fairly) Google got wise to it. | |
| ▲ | 20after4 4 days ago | parent [-] | | I'm actually surprised it was only ~$20 a month. | | |
| ▲ | xp84 a day ago | parent [-] | | Honestly, if I'd gotten in earlier I bet it would have made more. I also made zero attempts to SEO it -- there were no links to it anywhere else on the Internet, so it would have been in the very first tranche of useless duplicative spam sites of that type to be cleaned up. It was up from like 2010-2013 or so. |
|
|
| |
| ▲ | eigenspace 4 days ago | parent | prev [-] | | Because nobody was clicking on them |
|
|
| ▲ | zahlman 4 days ago | parent | prev | next [-] |
| > legitimate questions being closed for no good reason They are closed for good reasons. People just have their own ideas about what the reasons should be. Those reasons make sense according to others' ideas about what they'd like Stack Overflow to be, but they are completely wrong for the site's actual goals and purposes. The close reasons are well documented (https://meta.stackoverflow.com/questions/417476) and well considered, having been exhaustively discussed over many years. > or being labeled a duplicate even though they often weren't I have seen so many people complain about this. It is vanishingly rare that I actually agree with them. In the large majority of cases it is comically obvious to me that the closure was correct. For example, there have been many complaints in the Python tag that were on the level of "why did you close my question as a duplicate of how to do X with a list? I clearly asked how to do it with a tuple!" (for values of X where you do it the same way.) > a generally toxic and condescending culture amongst the top answerers. On the contrary, the top answerers are the ones who will be happy to copy and paste answers to your question and ignore site policy, to the constant vexation of curators like myself trying to keep the site clean and useful (as a searchable resource) for everyone. > For all their flaws, LLMs are so much better. I actually completely agree that people who prefer to ask LLMs should ask LLMs. The experience of directly asking (an LLM) and getting personalized help is explicitly the exact thing that Stack Overflow was created to get away from (i.e., the traditional discussion forum experience, where experts eventually get tired of seeing the same common issues all the time and all the same failures to describe a problem clearly, and where third parties struggle to find a useful answer in the middle of a long discussion). |
| |
| ▲ | abanana 4 days ago | parent | next [-] | | You seem to have filled this thread with a huge number of posts that try to justify SO's actions. Over and over, these justifications are along the lines of "this is our mission", "read our policy", "understand us". Often, doing what your users want leads to success. Stamping authority over your users, and giving out a constant air of "we know better than all of you", drives them away. And when it's continually emphasized publicly (rather than just inside a marketing department) that the "mission" and the "policy" are infinitely more important than what your users are asking for, that's a pretty quick route to failure. When you're completely embedded in a culture, you don't have the ability to see it through the eyes of the majority on the outside. I would suggest that some of your replies here - trying to deny the toxicity and condescension - are clearly showing this. | | |
| ▲ | zahlman 3 days ago | parent [-] | | > Often, doing what your users want leads to success. You misunderstand. People with accounts on Stack Overflow are not "our users". Stack Exchange, Inc. does not pay the moderators, nor high-rep community members (who do the bulk of the work, since it is simply far too much for a handful of moderators) a dime to do any of this. Building that resource was never going to keep the lights on with good will and free user accounts (hence "Stack Overflow for Teams" and of course all the ads). Even the company is against us, because the new owners paid a lot of money for this. That doesn't change what we want to accomplish, or why. > When you're completely embedded in a culture, you don't have the ability to see it through the eyes of the majority on the outside. I am not "embedded in" the culture. I simply understand it and have put a lot of time into its project. I hear the complaints constantly. I just don't care. Because you are trying to say that I shouldn't help make the thing I want to see made. > trying to deny the toxicity and condescension I consider the term "toxicity" more or less meaningless in general, and especially in this context. As for "condescension", who are you to tell me what I should seek to accomplish? |
| |
| ▲ | Flimm 3 days ago | parent | prev [-] | | > "why did you close my question as a duplicate of how to do X with a list? I clearly asked how to do it with a tuple!" (for values of X where you do it the same way.) This is a great example of a question that should not be closed as a duplicate. Lists are not tuples in Python, regardless of how similar potential answers may be. | | |
| ▲ | zahlman 3 days ago | parent [-] | | I'm talking here about cases (which is basically all of them) where the first person to ask was simply needlessly specific. Or where the canonical has the list as an incidental detail and the next person insists that the answers won't work because this code has a tuple, you see, and doesn't see the merit in trying them. If you imagine that the answer should be re-written from scratch to explain that the approach will be the same, you have fundamentally misunderstood the purpose of the site. Abstraction of contextually unimportant details is supposed to be an essential skill for programmers. |
|
|
|
| ▲ | ChrisMarshallNY 4 days ago | parent | prev | next [-] |
| Oh yeah. My favorite feature of LLMs is that the only dumb question is the one I don't ask. I guess someone could train an LLM to be spiteful and nasty, but that would only be for entertainment. |
| |
| ▲ | fragmede 4 days ago | parent [-] | | If you say the wrong thing to Grok, it will go off on you. It's quite entertaining! |
|
|
| ▲ | bborud 4 days ago | parent | prev | next [-] |
| I suppose all sites that have a voting component run the risk of becoming unpleasant. Hacker News, and we who frequent it, ought to have that in mind. |
| |
| ▲ | shermantanktop 4 days ago | parent | next [-] | | dang and the other HN moderators do a heroic job to set the tone, which has second- and third-order effects on behavior. | | | |
| ▲ | cellularmitosis 4 days ago | parent | prev [-] | | I think it has more to do with the fact that when you offer zero salary for moderators, you have to take what you can get, and it ain't good. I don't really see a connection to the voting mechanic. | | |
| ▲ | immibis 4 days ago | parent | next [-] | | It's also disconnected incentives. SO users get their numbers to go up by taking moderation actions, so of course they do that. Also, you literally get banned from reviewing questions if you don't flag enough of them to be closed. These are incentives put in place by the SO company intentionally. It's not like only slimy people get to use moderator tools like on Reddit, since you need a lot of reputation points, which you get by having questions and answers voted up. It's more like (1) you select people who write surface-level-good answers since that's what's upvoted, and they moderate with a similar attitude, (2) once you have access to moderator tools you're forced to conform with (1) or your access is revoked, and (3) the company is completely incompetent and doesn't give a shit about any of this. |
| ▲ | bborud 4 days ago | parent | prev [-] | | Why do you think it makes a difference if they are paid or not? Or more to the point: what are you saying? That people have different standards when paid? That lack of remuneration justifies poor effort? Isn’t that a very transactional view of human interaction? Are we that transactional? Do we want this? We’re talking about how communities can become toxic. How we humans sometimes create an environment that is at odds with our intentions. Or at least what we outwardly claim to be our intentions. I think it is a bit sad when people feel they have to be compensated to not let a community deteriorate. | | |
| ▲ | Chaosvex 4 days ago | parent [-] | | > That people have different standards when paid? That lack of remuneration justifies poor effort? Isn’t that a very transactional view of human interaction? Are we that transactional? The answer to all of these questions is yes, for the most part. Volunteers are much harder to wrangle than employees and it's much easier for drama and disagreements to flare when there are zero consequences other than losing an unpaid position, particularly if anonymity is in the mix. Volunteers can be great but on average they're going to be far harder to manage and far more fickle than employees. | | |
| ▲ | bborud 4 days ago | parent [-] | | Then you have a much darker view of humanity than I have. What you seem to suggest is that because building a community on volunteers is hard it is not worth doing. What makes a community worthwhile is its ability to resolve differences productively. I think that if you replace individual responsibility with transactionality you have neither community nor long term viability or scalability. Then again, we live in times when transactional thinking seems to dominate discourse. | | |
| ▲ | Chaosvex 3 days ago | parent [-] | | It's because I was involved with a large volunteer-based project that was a literal 24/7/365 operation for several years (dozens of volunteers at any given time and tens of thousands of concurrent users) and can speak first hand as to the differences. I didn't say it's not worth doing but it will bring challenges that wouldn't exist with employees. Paying people adds a strong motivator to keep toxic behaviour at bay. Your experiences will heavily depend on the type of project you're running but regardless, you can't hold volunteers, especially online, to the same expectations or standards as employees. The amount of time and effort they can invest will wax and wane and there's nothing you can do about it. Anonymity and lack of repercussions will eventually lead to drama or power struggles when a volunteer steps out of line in a way that they wouldn't in paid employment. There is no fix that'll stop occasional turbulence, it's just the way it is. Not all of your volunteers will be there for the greater good of your community. Again, that is absolutely not to say that it can't be worth the effort but if you go into it eyes open, you'll have a much better time and be able to do a better job at heading off problems. I've seen other people express similar opinions to yours and it wasn't until they experienced being in the driver's seat that they understood how difficult it is. | | |
| ▲ | bborud 3 days ago | parent [-] | | My argument is that it stops being a community when it becomes a business. |
|
|
|
|
|
|
|
| ▲ | nautilus12 4 days ago | parent | prev | next [-] |
| The irony is that the LLMs are trained on Stack Overflow and should inherit a lot of those traits and errors. |
| |
| ▲ | Bratmon 4 days ago | parent [-] | | Yeah, but they don't inherit their rules and attitude. Really, if we could apply some RLHF to the Stack Overflow community, it would be doing a lot better. |
|
|
| ▲ | ivewonyoung 4 days ago | parent | prev | next [-] |
| >For all their flaws, LLMs are so much better But LLMs get their answers from Stack Overflow and similar places used as the source material. As those start getting outdated because of a lack of activity, LLMs won't have the source material to answer questions properly. |
| |
| ▲ | porcoda 4 days ago | parent | next [-] | | I regularly use Claude and friends and ask them to use the web to look at specific GitHub repos or documentation when asking about current versions of things. The "LLMs just get their info from Stack Overflow" trope from the GPT-3 days is long dead - they're pretty good at getting info that is very up to date by using tools to access the web. In some cases I just upload bits and pieces from a library along with my question if it's particularly obscure or something home-grown, and they do quite well with that too. Yes, they do get it wrong sometimes - just like Stack Overflow did too. | |
| ▲ | cyral 4 days ago | parent | next [-] | | The number of docs that have a "Copy as markdown" or "Copy for AI" button has been noticeably increasing, which really helps the LLM with proper context. |
| ▲ | inferiorhuman 4 days ago | parent | prev [-] | | they’re pretty good at getting info that is very up to date by using tools to access the web
Yeah, that's a charitable way to phrase "perform distributed denial of service attacks". Browsing GitHub as a human, with the draconian rate limits that came about as a result of AI bots, is fucking great. | |
| ▲ | immibis 4 days ago | parent [-] | | You know DDoS attacks are illegal, right? If you have proof that OpenAI is DDoSing your site, go sue them for millions of dollars. | | |
| ▲ | inferiorhuman 3 days ago | parent [-] | | Ah, I see you have a JD from OpenAI. I don't run personal sites worth millions of dollars. I do, however, use sites like Sourcehut, DigiKey, GitHub, Mouser, Farnell, etc., etc., that have opted to put everything behind bullshit captchas because of the DDoS (née AI) bots. |
|
|
| |
| ▲ | badthingfactory 4 days ago | parent | prev | next [-] | | StackOverflow answers are outdated. Every time I end up on that site these days, I find myself reading answers from 12 years ago that are no longer relevant. | | |
| ▲ | shawn_w 4 days ago | parent | next [-] | | I see plenty of old answers that are still very relevant. Suppose it depends on what language/tech tags you follow. | |
| ▲ | zahlman 4 days ago | parent | prev [-] | | There have been many times I have seen someone complain on the meta site about answers being old and outdated, and then they give specific examples, and I go check them out and they're actually still perfectly valid. |
| |
| ▲ | terminalshort 4 days ago | parent | prev | next [-] | | Now they can read the documentation and code in the repo directly and answer based on that. | | |
| ▲ | croes 4 days ago | parent [-] | | SO had answers that you couldn't find in the documentation, and answers for cases where you can't just look at the source code. If everything were well documented, SO wouldn't have been as big as it was in the first place. |
| |
| ▲ | nutjob2 4 days ago | parent | prev [-] | | I think the industry is quickly moving to synthetically derived knowledge, or custom/systematic knowledge production from humans. |
|
|
| ▲ | 4 days ago | parent | prev | next [-] |
| [deleted] |
|
| ▲ | jstummbillig 4 days ago | parent | prev | next [-] |
| That depends on what you mean by "came along". If you mean "once everyone got around to the idea that LLMs were going to be good at this thing", then sure, but it was not long ago that the majority of people around here were very skeptical of the idea that LLMs would ever be any good at coding. |
| |
| ▲ | make3 4 days ago | parent | next [-] | | What you're arguing about is the field completely changing over 3 years; that's no time at all for everyone to change their minds. LLMs were not productized in a meaningful way before ChatGPT in 2022 (companies had sufficiently strong LLMs, but RLHF didn't exist to make them "PR-safe"). Then we basically just had to wait for LLM companies to copy Perplexity and add search engines everywhere (RAG already existed, but I guess it was not realistic to RAG the whole internet), and they became useful enough to replace StackOverflow. |
| ▲ | rustystump 4 days ago | parent | prev | next [-] | | I don't think this is true. People were skeptical of AGI / better-than-human coding, which is not the same thing. As a matter of fact, I think searching docs was one of the first major uses of LLMs, before code. |
| ▲ | nutjob2 4 days ago | parent | prev [-] | | That's because there has been rapid improvement by LLMs. Their tendency to bullshit is still an issue, but if one maintains a healthy skepticism and uses a bit of logic it can be managed. The problematic uses are where they are used without any real supervision. Enabling human learning is a natural strength for LLMs and works fine since learning tends to be multifaceted and the information received tends to be put to a test as a part of the process. |
|
|
| ▲ | secondcoming 4 days ago | parent | prev | next [-] |
| How can we be sure that LLMs won't start giving stale answers? |
| |
| ▲ | mark-r 4 days ago | parent | next [-] | | We can't. I don't think the LLMs themselves can recognize when an answer is stale. They could if contradicting data was available, but their very existence suppresses the contradictory data. | | |
| ▲ | zahlman 4 days ago | parent [-] | | LLMs don't experience the world, so they have no reason a priori to know what is or isn't truthful in the training data. (Not to mention the confabulation. Making up API method names is natural when your model of the world is that the method names you've seen are examples and you have no reason to consider them an exhaustive listing.) |
| |
| ▲ | g947o 4 days ago | parent | prev | next [-] | | They will, but model updates and competition help solve the problem. If people find that Claude consistently gives better/more relevant answers than GPT, for example, people will choose the better model. The worst thing with Q&A sites isn't that they don't work. It's that there are no alternatives to Stack Overflow. Some of the most upvoted answers on Stack Overflow prove that it can work well in many cases, but too bad most other times it doesn't. |
| ▲ | Someone1234 4 days ago | parent | prev | next [-] | | They still use the official documentation/examples, public GitHub repos, and your own code, which are all more likely to be evergreen. SO was definitely a massive training advantage before LLMs matured, though. |
| ▲ | Cloudef 4 days ago | parent | prev [-] | | LLMs are just statistics; eventually they kill themselves with a feedback loop by consuming their own farts (literally). |
|
|
| ▲ | yigalirani 4 days ago | parent | prev | next [-] |
| All true, but I still find myself asking questions there after an LLM gave wrong answers and wasted my time. |
|
| ▲ | antisthenes 4 days ago | parent | prev | next [-] |
| Yep, LLMs are perfect for the "quick but annoying to answer 500 times" questions about writing a short script, or configuring something, or using the right combination of command-line parameters. Quicker than searching the entirety of Google results, and with none of the attitude. |
|
| ▲ | p-e-w 4 days ago | parent | prev | next [-] |
| Indeed. StackOverflow was by far the most unpleasant website that I have regularly interacted with. Sometimes, just seeing how users were treated there (even in Q&A threads that I wasn’t involved in at all) disturbed me so much it was actually interfering with my work. I’m so, so glad that I can now just ask an AI to get the same (or better) answers, without having to wade through the barely restrained hate on that site. |
|
| ▲ | vivzkestrel 4 days ago | parent | prev | next [-] |
| Not only Stack Overflow, but also reddit.com/r/aws, reddit.com/r/docker, and reddit.com/r/postgresql - all 3 of them have extremely toxic communities. Ask a question and get downvoted instantly! Noo!! Your job is to actually upvote the question to maximize exposure for the algorithm, unless it is a really, really stupid question that a Google search could fix. |
|
| ▲ | hdgvhicv 4 days ago | parent | prev | next [-] |
| > For all their flaws, LLMs are so much better. For now. They still need to be enshitted. |
| |
| ▲ | esafak 4 days ago | parent | next [-] | | Models are check-pointed. You can save one you like and use it forever. | | |
| ▲ | shermantanktop 4 days ago | parent | next [-] | | You can save an open source + open weights model, which is frozen in time. That’s still very useful for some things but lacks knowledge of current data. So we’ll end up with a choice of low-performing stale models or high-performing enshittified models which know about more current information. | | |
| ▲ | esafak 4 days ago | parent [-] | | Open source models get updated all the time. You'd only be a few months behind. | | |
| ▲ | shermantanktop 4 days ago | parent [-] | | Direct enshittification is intentional and wouldn’t affect open models. Indirect pollution via AI slop in the input and the same content manipulation mechanisms as SEO hacking is still a threat for open models. |
|
| |
| ▲ | stavros 4 days ago | parent | prev | next [-] | | Doesn't help when the ads are a layer above the model. | | |
| ▲ | esafak 4 days ago | parent [-] | | There are open source models you yourself or a trusted third party can run. No ads. |
| |
| ▲ | _alternator_ 4 days ago | parent | prev | next [-] | | Yup. Like Claude 3 Opus. | |
| ▲ | Bratmon 4 days ago | parent | prev [-] | | Really? I thought you could only do that with open source models. Can you teach me how to checkpoint the current version of Claude Code so I can keep it as-is forever? |
| |
| ▲ | fooker 4 days ago | parent | prev [-] | | Yeah just wait for the ads |
|
|
| ▲ | dmezzetti 4 days ago | parent | prev [-] |
| This change was happening well before LLMs. People were tired of being yelled at and treated poorly. A cautionary tale for many of these types of tech platforms, this one included. |