| ▲ | Prometheus 3.0 (prometheus.io) |
| 183 points by dmazin 9 hours ago | 30 comments |
| |
|
| ▲ | the_duke 7 hours ago | parent | next [-] |
I'm curious: are many people here actually still running mainline Prometheus over one of the numerous compatible solutions that are more scalable and have better storage? (Mimir, Victoria, Cortex, OpenObserve, ...) |
| |
| ▲ | robinhoodexe 6 hours ago | parent | next [-] | | We’re running standard Prometheus on Kubernetes (14 on-prem Talos clusters, a total of 191 nodes, 1.1k CPU cores, 4.75 TiB memory and 4k pods). We use Thanos to store metrics in self-hosted S3 (seaweedfs) with 30 days retention, aggressively downsample after 3 days. It works pretty well, tbh. I’m excited about upgrading to version 3, as it does take a lot of resources to keep going, especially on clusters with a lot of pods being spawned all the time. | | |
| ▲ | ChocolateGod 5 hours ago | parent [-] | | > We use Thanos to store metrics in self-hosted S3 (seaweedfs) with 30 days retention, aggressively downsample after 3 days. Any reason to not just use Mimir for this? |
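For reference, the retention/downsampling setup described above maps roughly onto Thanos compactor flags like the sketch below; the bucket name, endpoint and credentials are placeholders, and exact defaults vary by Thanos version.

    # Thanos compactor (sketch): drop raw samples after 3 days,
    # keep downsampled (5m/1h resolution) data for 30 days.
    thanos compact \
      --data-dir=/var/thanos/compact \
      --objstore.config-file=bucket.yml \
      --retention.resolution-raw=3d \
      --retention.resolution-5m=30d \
      --retention.resolution-1h=30d \
      --wait

    # bucket.yml (sketch, for an S3-compatible store such as seaweedfs)
    type: S3
    config:
      bucket: thanos-metrics
      endpoint: seaweedfs.example.internal:8333
      access_key: PLACEHOLDER
      secret_key: PLACEHOLDER
      insecure: true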
| |
| ▲ | aorth 5 hours ago | parent | prev | next [-] | | Using Victoria Metrics here. Very easy to set up and run. I monitor under 100 hosts and resource usage is low, performance is good. One gripe is that they recently stopped publishing tarballs for LTS versions, which caused some grumbling in the community. Fair enough, since they're developing it for free, but it felt like a bait and switch. | |
| ▲ | never_inline 5 hours ago | parent | prev | next [-] | | I'm curious to hear from people on this forum: at what point do you practically hit the limits of Prometheus, where straightforward division (e.g. a separate Prometheus per cluster or environment) no longer works? | |
| ▲ | majewsky 3 hours ago | parent | prev | next [-] | | Regular Prometheus inside clusters for collection and alerting, Thanos for cross-cluster aggregation and long retention. | |
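A minimal sketch of that layout, assuming the usual Thanos sidecar/querier split (hostnames are placeholders; older Thanos releases use --store instead of --endpoint):

    # In each cluster: a sidecar next to Prometheus uploads TSDB blocks
    # to object storage and exposes the local data over gRPC.
    thanos sidecar \
      --tsdb.path=/var/prometheus \
      --prometheus.url=http://localhost:9090 \
      --objstore.config-file=bucket.yml \
      --grpc-address=0.0.0.0:10901

    # Centrally: a querier fans out across the per-cluster sidecars
    # (plus a store gateway in front of the long-retention object storage).
    thanos query \
      --http-address=0.0.0.0:9090 \
      --endpoint=sidecar-cluster-a:10901 \
      --endpoint=sidecar-cluster-b:10901 \
      --endpoint=store-gateway:10901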
| ▲ | raffraffraff 6 hours ago | parent | prev | next [-] | | Nope. Mimir. Before that, Thanos. | |
| ▲ | rad_gruchalski 6 hours ago | parent | prev [-] | | Mimir |
|
|
| ▲ | pentagrama 5 hours ago | parent | prev | next [-] |
I didn't know this tool, and looking at the homepage, it's not clear to me whether it's something like Google Analytics for getting metrics on website traffic, or something more dev-oriented. Can someone explain it to me, please? Maybe I'm a bit lost because I'm not a dev. I'm a designer considering self-hosted Google Analytics alternatives, and this one may be interesting to add to the research (so far I have Matomo, plausible, open panel, umami, open replay, highlight). Thanks |
| |
|
| ▲ | c0balt 9 hours ago | parent | prev | next [-] |
That's good news; the reduced memory usage and OTLP ingestion support in particular look nice. I have experimented with OTLP metrics before but eventually fell back to Prometheus to avoid adding another service to our systems. |
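For context, Prometheus 3.0 can ingest OTLP metrics over HTTP on /api/v1/otlp/v1/metrics once started with --web.enable-otlp-receiver. A sketch of pointing an OpenTelemetry Collector at it (the hostname is a placeholder; the collector's otlphttp exporter appends /v1/metrics to the configured endpoint):

    # Prometheus side:
    prometheus --web.enable-otlp-receiver

    # OpenTelemetry Collector side (collector config, sketch):
    receivers:
      otlp:
        protocols:
          grpc:
          http:
    exporters:
      otlphttp/prometheus:
        endpoint: http://prometheus.example.internal:9090/api/v1/otlp
    service:
      pipelines:
        metrics:
          receivers: [otlp]
          exporters: [otlphttp/prometheus]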
|
| ▲ | never_inline 5 hours ago | parent | prev | next [-] |
> Native Histograms are still experimental and not yet enabled by default, and can be turned on by passing --enable-feature=native-histograms. Some aspects of Native Histograms, like the text format and accessor functions / operators are still under active design. Ah, slightly disappointed :). Looking at the major version bump, I thought it was going to be all about native histograms. |
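For anyone curious what opting in looks like on the instrumentation side, here is a minimal client_golang sketch (assuming a reasonably recent client_golang release); setting NativeHistogramBucketFactor is what switches a histogram to the native, sparse exponential-bucket representation, and the server still needs --enable-feature=native-histograms to store it:

    package main

    import (
        "log"
        "net/http"
        "time"

        "github.com/prometheus/client_golang/prometheus"
        "github.com/prometheus/client_golang/prometheus/promhttp"
    )

    func main() {
        // A bucket factor > 1 opts this histogram into native histograms;
        // the max bucket number and min reset duration bound its memory use.
        reqDuration := prometheus.NewHistogram(prometheus.HistogramOpts{
            Name:                            "http_request_duration_seconds",
            Help:                            "HTTP request latency.",
            NativeHistogramBucketFactor:     1.1,
            NativeHistogramMaxBucketNumber:  100,
            NativeHistogramMinResetDuration: time.Hour,
        })
        prometheus.MustRegister(reqDuration)

        reqDuration.Observe(0.042) // example observation

        // Native histograms are exposed via the protobuf scrape format,
        // which promhttp serves when the scraper negotiates it.
        http.Handle("/metrics", promhttp.Handler())
        log.Fatal(http.ListenAndServe(":8080", nil))
    }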
|
| ▲ | kuon 8 hours ago | parent | prev | next [-] |
Migration to Victoria Metrics has been on my list for nearly a year, but its licensing has always scared me a bit. My main issue is the CPU and memory usage of Prometheus, so maybe this upgrade will fix that. |
| |
| ▲ | smw 8 hours ago | parent | next [-] | | Apache 2 is scary? Maybe I'm missing something. | |
| ▲ | alphabettsy 7 hours ago | parent | prev | next [-] | | Victoria Metrics is well worth the migration. Much better performance and lower resource utilization in my experience. | |
| ▲ | nine_k 8 hours ago | parent | prev | next [-] | | It's a reminder to us all that when we think, "Hey, why sweat over this memory layout or that extra CPU expenditure, it's small and nobody will notice", there will be times when everybody will notice. Maybe notice enough to switch to our competitors' products. | | |
| ▲ | hinkley 7 hours ago | parent [-] | | Developers tend to ignore the constant C in order-of-complexity calculations, but customers don’t. Game developers and HFTs seem to understand this, and very few regular devs I’ve interacted with do. I’ve seen customers say they switched to someone else for speed reasons. And I’ve worked on projects where the engineers claimed it was as fast as we could make it, and they were off by at least a factor of three. We like to think that being off by 10 or 30% doesn’t matter that much, but lots of companies run on thin margins and publicly traded companies’ stock prices reflect EBITDA, so it matters. Particularly in the Cloud era, where it’s much easier to see how sloppy programming leads directly to excess hardware cost (as opposed to already-purchased servers running closer to capacity). | |
| ▲ | therealdrag0 6 hours ago | parent [-] | | Those margins also mean you have to pick your battles. Most software is not as performance-sensitive as video games or HFT. I take an efficient-markets view on this: obviously devs can make stuff faster, and they do where it matters, as can be seen in games and HFT. In other software it’s a discussion with product about trade-offs. |
|
| |
| ▲ | Fizzadar 8 hours ago | parent | prev [-] | | VM was a game changer for us: a 7x reduction in memory and 3x in CPU, plus the scaling flexibility. | |
| ▲ | oulipo 7 hours ago | parent [-] | | Hmmm, but the documentation seems poorly written. Who is the team behind it? | |
| ▲ | hagen1778 5 hours ago | parent | next [-] | | What makes you think that about the docs? Of course, they were written by developers, not tech writers. But anyway, what do you think could be improved? | |
| ▲ | trallnag 5 hours ago | parent | prev [-] | | Specifically the documentation, or VictoriaMetrics overall? The latter started with a small number of Ukrainians.
|
|
|
|
| ▲ | RoxaneFischer1 8 hours ago | parent | prev | next [-] |
| A solid upgrade :))) |
|
| ▲ | dvh 7 hours ago | parent | prev [-] |
I've read the entire page and still don't know what it is. Release notes are a communication tool, and this one fails as such. You are losing random passersby by not saying what your product is in the first sentence of the release notes, especially for an x.0 release. |
| |
| ▲ | moondev 7 hours ago | parent | next [-] | | Before reading the entire page, did you consider clicking the header? This will bring you to the main landing page of the product/project, which more often than not contains a helpful summary of what it is and why it exists. You can also apply this pattern to other unknown things you come across. | |
| ▲ | never_inline 5 hours ago | parent | prev | next [-] | | Sir, for thought leaders and CTOs they have a home page. This is for DevOps minions. | |
| ▲ | arccy 7 hours ago | parent | prev | next [-] | | if you can't tell what it is but still read the whole thing... i think the problem is on you? | |
| ▲ | soupbowl 7 hours ago | parent | prev | next [-] | | https://prometheus.io/ | |
| ▲ | cess11 7 hours ago | parent | prev [-] | | I think most people that read release notes and changelogs want the text to be concise and easy to interpret when they're doing due diligence to decide when to start rolling out upgrades. They know what the software is about and don't care for some sales pitch. |
|