| ▲ | ezekiel68 10 hours ago |
| > Both postgres and redis are used with the out of the box settings Ugh. I know this gives the illusion of fairness, but it's not how any self-respecting software engineer should approach benchmarks. You have hardware. Perhaps you have virtualized hardware. You tune to the hardware. There simply isn't another way, if you want to be taken seriously. Some will say that in a container-orchestrated environment, tuning goes out the window since "you never know" where the orchestrator will schedule the service but this is bogus. If you've got time to write a basic deployment config for the service on the orchestrator, you've also got time to at least size the memory usage configs for PostgreSQL and/or Redis. It's just that simple. This is the kind of thing that is "hard and tedious" for only about five minutes of LLM query or web search time and then you don't need to revisit it again (unless you decide to change the orchestrator deployment config to give the service more/less resources). It doesn't invite controversy to right-size your persistence services, especially if you are going to publish the results. |
|
| ▲ | IanCal 7 hours ago | parent | next [-] |
| I disagree. They found that Postgres, without tuning, was easily fast enough on low-end hardware and would come with the benefit of not deploying another service. Additionally, tuning it isn’t really relevant: if the defaults are fine for a use case, then unless I want to tune it for personal interest, it’s either a poor use of my fun time or a poor use of my clients’ funds. |
| |
| ▲ | perrygeo an hour ago | parent | next [-] | | The default shared_buffers is 128MiB, not even 1% of a typical machine today. A benchmark run with these settings effectively cripples your hardware by ensuring 99% of your available memory is ignored by postgres. It's an invalid benchmark, unless redis is similarly crippled. | |
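For context, the settings the parent is pointing at live in postgresql.conf. A minimal first-pass sketch for a dedicated host with 16 GiB of RAM (the figures are illustrative rules of thumb, not prescribed anywhere in the thread):

```ini
# postgresql.conf - common starting points for a dedicated 16 GiB host
shared_buffers = 4GB            # default is 128MB; ~25% of RAM is the usual rule of thumb
effective_cache_size = 12GB     # planner hint, ~50-75% of RAM (allocates nothing itself)
work_mem = 32MB                 # per sort/hash operation, so keep it modest
maintenance_work_mem = 512MB    # used by VACUUM and CREATE INDEX
```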
| ▲ | lemagedurage 6 hours ago | parent | prev [-] | | "If we don't need performance, we don't need caches" feels like a great broader takeaway here. | | |
| ▲ | indymike 2 hours ago | parent | next [-] | | Sometimes a cache is all about reducing expense, i.e., a free cache query vs. an expensive API query. | |
| ▲ | IanCal 3 hours ago | parent | prev | next [-] | | A cache being fast enough doesn’t mean no caching is relevant - I’m not sure why you’d equate the two. | |
| ▲ | motorest 5 hours ago | parent | prev | next [-] | | > "If we don't need performance, we don't need caches" feels like a great broader takeaway here. I don't think this holds true. Caches are used for reasons other than performance. For example, caches are used in some scenarios for stampede protection to mitigate DoS attacks. Also, the impact of caches on performance is sometimes negative. With distributed caching, each match and put require a network request. Even when those calls don't leave a data center, they do cost far more than just reading a variable from memory. I already had the displeasure of stumbling upon a few scenarios where cache was prescribed in a cargo cult way and without any data backing up the assertion, and when we took a look at traces it was evident that the bottleneck was actually the cache itself. | | |
| ▲ | ralegh 4 hours ago | parent [-] | | DoS is a performance problem: if your server were infinitely fast with infinite storage, it wouldn't be an issue. | | |
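The stampede protection mentioned above doesn't need an external store at all: the core idea is coalescing concurrent misses so only one caller recomputes the value. A minimal sketch in Go (the real golang.org/x/sync/singleflight package does this properly; names here are illustrative):

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// group coalesces concurrent calls for the same key, so an expensive
// recompute runs at most once at a time per key (stampede protection).
type group struct {
	mu    sync.Mutex
	calls map[string]*call
}

type call struct {
	wg  sync.WaitGroup
	val string
}

// Do runs fn once for concurrent callers sharing a key; late callers
// block until the in-flight computation finishes and share its result.
func (g *group) Do(key string, fn func() string) string {
	g.mu.Lock()
	if c, ok := g.calls[key]; ok {
		g.mu.Unlock()
		c.wg.Wait() // someone else is already computing; wait for their result
		return c.val
	}
	c := &call{}
	c.wg.Add(1)
	g.calls[key] = c
	g.mu.Unlock()

	c.val = fn() // only the first caller pays the cost
	c.wg.Done()

	g.mu.Lock()
	delete(g.calls, key)
	g.mu.Unlock()
	return c.val
}

func main() {
	g := &group{calls: make(map[string]*call)}
	results := make([]string, 10)
	var wg sync.WaitGroup
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			results[i] = g.Do("hot-key", func() string {
				time.Sleep(10 * time.Millisecond) // simulate an expensive recompute
				return "value"
			})
		}(i)
	}
	wg.Wait()
	fmt.Println(results[0]) // every caller receives "value"
}
```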
| |
| ▲ | hobs 3 hours ago | parent | prev [-] | | I see people downvoting this. To anyone who disagrees: we have YAGNI for a reason. If someone told me performance was fine and then added caches, I would give them the hairy eyeball, because we already know cache invalidation is a PITA, correctness issues are easy to create, and now you have the performance of two different systems to manage. Amazon actually moved away from caches for some parts of its system because consistent behavior is a feature: what happens if your cache has problems and the interaction between it and your normal path is slow? What if your cache has bugs or edge-case behavior? If you don't need it, you are just doing a bunch of extra work to keep things in sync. |
|
|
|
| ▲ | Timshel 6 hours ago | parent | prev | next [-] |
| > for only about five minutes of LLM query or web search I think I have more trust in the PG defaults than in the output of an LLM, or in copy-pasting some configuration I might not really understand ... |
| |
| ▲ | rollcat 4 hours ago | parent | next [-] | | It's crazy how wildly inaccurate "top-of-the-list" LLMs are for straightforward yet slightly nuanced inquiries. I've asked ChatGPT to summarize Go build constraints, especially in the context of CPU microarchitectures (e.g. mapping "amd64.v2" to GOARCH=amd64 GOAMD64=v2). It repeatedly smashed its head on GORISCV64, claiming all sorts of nonsense such as v1, v2; then G, IMAFD, Zicsr; only arriving at rva20u64 et al under hand-holding. Similar nonsense for GOARM64 and GOWASM. It was all right there in e.g. the docs for [cmd/go]. This is the future of computer engineering. Brace yourselves. | | |
| ▲ | yomismoaqui 4 hours ago | parent | next [-] | | If you are going to ask ChatGPT some specific tidbit it's better to force it to search on the web. Remember, an LLM is a JPG of all the text of the internet. | | |
| ▲ | dgfitz an hour ago | parent [-] | | Wait, what? Isn't that the whole point, to ask it specific tidbits of information? Are we to ask it large, generic pontifications and claim success when we get large, generic pontifications back? The narrative around these things changes weekly. | | |
| ▲ | wredcoll 43 minutes ago | parent [-] | | I mean, like most tools, they work when they work and don't when they fail. Sometimes I can use an LLM to find a specific datum, and sometimes I use Google, and sometimes I use Bing. You might think of it as a cache, worth checking first for speed reasons. The big downside is not that they sometimes fail, it's that they give zero indication when they do. |
|
| |
| ▲ | simonw 3 hours ago | parent | prev | next [-] | | Did you try pasting in the docs for cmd/go and asking again? | | |
| ▲ | Implicated 2 hours ago | parent [-] | | I mean - this is the entire problem right here. Don't ask an LLM - trained on a whole bunch of different versions of things, with different flags, options, and parameters, where plenty of people who have no idea what they're doing have asked and answered StackOverflow questions that are likely out of date or wrong in the first place - how to do things with a tool without providing the docs for the version you're working with. _Especially_ if it's the newest version: regardless of whether the cutoff date was after that version was released, you have no way to know whether it was _included_. (Especially for something related to a programming language with ~2% market share.) The contexts are so big now - feed it the docs. Just copy-paste the whole damn thing in when you prompt it. |
| |
| ▲ | pbronez 3 hours ago | parent | prev [-] | | How was the LLM accessing the docs? I’m not sure what the best pattern is for this. You can put the relevant docs in your prompt, add them to a workspace/project, deploy a docs-focused MCP server, or even fine-tune a model for a specific tool or ecosystem. | | |
| ▲ | Implicated 2 hours ago | parent [-] | | > I’m not sure what the best pattern is for this. > You can put the relevant docs in your prompt I've done a lot of experimenting with these various options for how to get the LLM to reference docs. IMO it's almost always best to include in prompt where appropriate. For a UI lib that I use that's rather new, specifically there's a new version that the LLMs aren't aware of yet, I had the LLM write me a quick python script that just crawls the docs site for the lib and feeds the entire page content back into itself with a prompt describing what it's supposed to do (basically telling it to generate a .md document with the specifics about that thing, whether it's a component or whatever, ie: properties, variants, etc in an extremely brief manner) as well as build an 'index.md' that includes a short paragraph about what the library is and a list of each component/page document that is generated. So in about 60 seconds it spits out a directory full of .md files and I then tell my project-specific LLM (ie: Claude Code or Opencode within the project) to review those files with the intention of updating the CLAUDE.md in the project to instruct that any time we're building UI elements we should refer to the index.md for the library to understand what components are available and when appropriate to use one of them we _must_ review the correlating document first. Works very very very well. Much better than an MCP server specifically built for that same lib. (Huge waste of tokens, LLM doesn't always use it, etc) Well enough that I just copy/paste this directory of docs into my active projects using that library - if I wasn't lazy I'd package it up but too busy building stuff. |
|
| |
| ▲ | Implicated 3 hours ago | parent | prev | next [-] | | > copy pasting some configuration I might not really understand Uh, yea... why would you? Do you do that for configurations you found that weren't from LLMs? I didn't think so. I see takes like this all the time and I'm really just mind-boggled by it. There are more use cases than just "prompt it and use what it gives me". You don't have to be that rigid. They're incredible learning and teaching tools. I'd argue that the single best use case for these things is as a research and learning tool for those who are curious. Quite often I will query Claude about things I don't know and it will tell me things. Then I will dig deeper into those things myself. Then I will query further. Then I will ask it details where I'm curious. I won't blindly follow or trust it, any more than I would a professor or anyone or anything else, for that matter. Just as when querying a human or the internet in general for information, I'll verify. You don't have to trust its code or its configurations. But you can sure learn a lot from them, particularly when you know how to ask the right questions. Which, hold onto your chairs, only takes some experience and language skills. | |
| ▲ | simonw 3 hours ago | parent | prev | next [-] | | So run the LLM in an agent loop: give it a benchmarking tool, let it edit the configuration, and tell it to tweak the settings, measure, and see how much of a performance improvement it can get. That's what you'd do by hand if you were optimizing, so save some time and point Claude Code or Codex CLI or GitHub Copilot at it and see what happens. | |
| ▲ | IgorPartola 2 hours ago | parent | next [-] | | “We will take all the strokes off Jerry's game when we kill him.” - the LLM, probably. Just like Mr Meeseeks, it’s only a matter of time before it realizes that deleting all the data will make the DB lightning fast. | |
| ▲ | lomase 2 hours ago | parent | prev [-] | | How much would that cost? | |
| |
| ▲ | huflungdung 5 hours ago | parent | prev [-] | | [dead] |
|
|
| ▲ | vidarh 7 hours ago | parent | prev | next [-] |
| On one hand I agree with you, but on the other hand defaults matter because I regularly see systems with the default config and no attempt to tune. Benchmarking the defaults and benchmarking a tuned setup will measure very different things, but both of them matter. |
| |
| ▲ | matt-p 6 hours ago | parent | next [-] | | IME very, very few people tune the underlying host. Orgs like Uber or Google do, but outside of those, few people really know what they're doing or care that much. Easier to "increase the EC2 size" or whatever. | |
| ▲ | kijin 4 hours ago | parent | prev [-] | | Defaults have all sorts of assumptions built into them. So if you compare different programs with their respective defaults, you are actually comparing the assumptions that the developers of those programs have in mind. For example, if you keep adding data to a Redis server under default config, it will eat up all of your RAM and suddenly stop working. Postgres won't do the same, because its default buffer size is quite small by modern standards. It will happily accept INSERTs until you run out of disk, albeit more slowly as your index size grows. The two programs behave differently because Redis was conceived as an in-memory database with optional persistence, whereas Postgres puts persistence first. When you use either of them with their default config, you are trusting that the developers' assumptions will match your expectations. If not, you're in for a nasty surprise. | | |
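The contrast the parent describes is visible in the defaults themselves. A sketch of the redis.conf lines that change the runaway-memory behavior (the cap value is illustrative):

```ini
# redis.conf - out of the box, maxmemory is 0 (unlimited) and the policy
# is noeviction, so an ever-growing dataset eventually exhausts RAM
maxmemory 2gb                  # cap the dataset size
maxmemory-policy allkeys-lru   # evict like a cache instead of erroring on writes
```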
| ▲ | vidarh 3 hours ago | parent [-] | | Yes, all of this is fine but none of it address my point: Enough people use the default settings that benchmarking the default settings is very relevant. It often isn't a good thing to rely on the defaults, but it's nevertheless the case that many do. (Yes, it is also relevant to benchmark tuned versions, as I also pointed out, my argument was against the claim that it is somehow unfair not to tune) |
|
|
|
| ▲ | otikik an hour ago | parent | prev | next [-] |
| > Ugh. > if you want to be taken seriously For someone so enthusiastic about giving feedback, you don't seem to have invested much effort into figuring out how to give it effectively. Your tone and demeanor diminish the value of your comment. |
| |
|
| ▲ | wewewedxfgdf 10 hours ago | parent | prev | next [-] |
| Fully agree. Postgres is a power tool usable for many, many use cases - if you want performance, it must be tuned. If you judge Postgres without tuning it, that's not Postgres being slow; that's the developer being naive. |
| |
| ▲ | gopalv 9 hours ago | parent | next [-] | | > If you judge Postgres without tuning it - that's not Postgres being slow, that's the developer being naive. Didn't OP end by picking Postgres anyway? It's the right answer even for a naive developer, perhaps even more so for a naive one. At the end of the post it even says >> Having an interface for your cache so you can easily switch out the underlying store is definitely something I’ll keep doing | |
| ▲ | lelanthran 9 hours ago | parent | prev [-] | | He concluded postgresql to be fast enough, so what's the problem? IOW, he judged it fast enough. |
|
|
| ▲ | high_na_euv 7 hours ago | parent | prev | next [-] |
| Disagree. The majority of software runs on defaults, so it makes sense to compare them this way. |
|
| ▲ | conradfr 3 hours ago | parent | prev | next [-] |
| But why doesn't Postgres tune itself based on the system it's running on, at least the basics derived from available RAM and cores? |
| |
|
| ▲ | 6 hours ago | parent | prev | next [-] |
| [deleted] |
|
| ▲ | oulipo2 6 hours ago | parent | prev [-] |
| Perhaps, but in this case this shows at least that even non-tuned Postgres can be used as a fast cache for many real-world use-cases |