| ▲ | mcsniff 14 hours ago |
| Ugh. This 100% shows how janky and unmaintained their setup is. All the hand-waving and excuses around global supply chains, quotes, etc. It took them a long time to acquire commodity hardware and shove it in a special someone's basement, and they're trying to make it seem like a good thing? F-Droid is often discussed in the GrapheneOS community, and the concerns around centralization and signing are valid. I understand this is a volunteer effort, but it's not a good look. |
|
| ▲ | lrvick 14 hours ago | parent | next [-] |
| As someone who has run many volunteer open source communities and projects for more than two decades, I totally get how big "small" wins like this are. The internet runs on binaries compiled on servers in random basements, and you should be thankful for those basements, because the corpos are never going to actually help fund any of it. |
| |
| ▲ | 13 hours ago | parent | next [-] | | [deleted] | |
| ▲ | pydry 12 hours ago | parent | prev [-] | | It's a shame Mozilla won't step up to fund it. They've spunked way more money on way dumber things. | | |
| ▲ | quantummagic 10 hours ago | parent [-] | | Imagine the good they could do if they didn't pay their CEO 6 million a year. | | |
| ▲ | wongarsu an hour ago | parent [-] | | They'd probably burn it without much to show for, like the rest of their funds |
|
|
|
|
| ▲ | lukan 14 hours ago | parent | prev | next [-] |
| "I understand this is a volunteer effort, but it's not a good look." I would agree that it is not a good look for this society to lament so much about big evil corporations while investing so little in the free alternatives. |
| |
| ▲ | fruitworks 12 hours ago | parent [-] | | You can't just host servers in your own basement! You need to pay out the ass to host servers in some big company's basement! | | |
| ▲ | JuniperMesos 11 hours ago | parent | next [-] | | I don't have a problem with an open source project I use (and I do use F-Droid) hosting a server in a basement. I do have a problem with having the entire project hosted on one server in a basement, because it means that the entire project goes down if that basement gets flooded, or the house burns down, or the power goes out for an extended period of time, etc. Having two servers in two basements not near each other would be good, having five would be better, and honestly paying money to put them in colo facilities to have more reliable power, cooling, etc. would be better still. Computer hardware is very cheap today, and it doesn't cost that much money to get a substantial amount of redundancy without being dependent on any single big company. | | |
| ▲ | nine_k 6 hours ago | parent | next [-] | | This sounds reasonable. But this is a build server, not the entire project infrastructure. I bet the server needs to be quite powerful, with tons of CPU, RAM and SSD/NVMe to allow for fast builds. Memory of all kinds has been getting more and more expensive this year, so the prolonged sourcing is understandable. The trusted contributor, as the text says, is considered more trustworthy than an average colocation company. Maybe they have an adequate "basement", e.g. run their own colo company, or something. It would be great to have a spare server, but likely it's not that simple, including the organization and the trust. A build server would be a very juicy attack target to clandestinely implant spyware. |
| ▲ | schubidubiduba 10 hours ago | parent | prev | next [-] | | What do you think would happen if that server went down? People can't get app updates, or install new ones. That is all. That is not critical. They can then probably whip up a new hosted server to take over within a few days, at most. Big deal. They are not hosting a critical service, and running on donations. They are doing everything right. | | |
| ▲ | tisdadd 7 hours ago | parent [-] | | I concur, and given the number of apps they build, it makes sense to me to spend the money on a good build server, especially since the host is, as mentioned, someone with experience hosting trusted servers as well as an existing contributor. If people do not want to use it, the source code for the apps they supply is still available to build yourself. |
| |
| ▲ | autoexec 6 hours ago | parent | prev [-] | | > Computer hardware is very cheap today As long as you don't need RAM or hard drives. It's getting more expensive all the time too. This isn't the ideal moment to replace a laptop let alone a server. |
| |
| ▲ | 7 hours ago | parent | prev [-] | | [deleted] |
|
|
|
| ▲ | troyvit 6 hours ago | parent | prev | next [-] |
| It's like y'all are so eager to crap on a thing that you don't even read TFA. > this server is physically held by a long time contributor with a proven track record of securely hosting services. So you are assuming it's a rando's basement when they never said anything like that. If their way of doing business is so offensive, either don't use them, disrupt them, or pitch in and help. > I understand this is a volunteer effort, but it's not a good look. What does make a "good look" for a volunteer project? |
| |
| ▲ | wtallis 6 hours ago | parent [-] | | > What does make a "good look" for a volunteer project? It's an open-source project. It should be... open. Not mysterious or secretive about overdue replacements of critical infrastructure. |
|
|
| ▲ | magguzu 12 hours ago | parent | prev | next [-] |
| Graphene is a great product but their incessant mud slinging at any service that isn't theirs is tiresome at best. Some of their points are valid but way too often they're unable to accept that different services aren't always trying to solve the same problem. |
|
| ▲ | xandrius 13 hours ago | parent | prev | next [-] |
| "Nothing is ever good enough" (tm) |
| |
| ▲ | orthecreedence 11 hours ago | parent [-] | | If I were running a volunteer project, I would be dumping thousands a month into top-tier hosting across multiple datacenters around the world with global failover. | | |
| ▲ | amrit3128 7 hours ago | parent [-] | | The _if_ is doing a lot of heavy lifting there. You're free to complain about it, but F-Droid has been running fine for years, and I'd rather have a volunteer manage the servers than some big corporation | | |
| ▲ | wtallis 6 hours ago | parent [-] | | They quite notably haven't been running fine for years: https://news.ycombinator.com/item?id=44884709 Their recent public embarrassment resulting from having such an outdated build server is likely what triggered them to finally start the process of obtaining a replacement for their 12 year old server (that was apparently already 7 years old when they started using it?). | | |
| ▲ | pabs3 5 hours ago | parent [-] | | It's embarrassing that Google binaries don't even use runtime instruction selection. https://wiki.debian.org/InstructionSelection | | |
| ▲ | wtallis 5 hours ago | parent [-] | | Nah, if you actually read into what's available there, it's clear that the compilers have never implemented features to make this broadly usable. You only get runtime instruction selection if you've manually tagged each individual function that uses SIMD to be compiled with function multi-versioning, so that's only really useful for known hot spots that are intended to use autovectorization. If you just want to enable the latest SIMD across the whole program, GCC and clang can't automatically generate fallback versions of every function they end up deciding could use AVX or whatever. The alternative is to make big changes to your build system and packaging to compile N different versions of the executable/library. There's no easy way to just add a compiler flag that means "use AVX-512 and generate SSE2 fallbacks where necessary". The people who want to keep running new third-party binaries on 12+ year old CPUs might want to work with the compiler teams to make it feasible for those third parties to automatically generate the necessary fallback code paths. Otherwise, there will just be more and more instances of companies like Google deciding to start using the hardware features they've been deploying for 15+ years. But you already know all that, since we discussed it four months ago. So why are you pretending like what you're asking for is easy when you know the tools that exist today aren't up to the task? |
|
|
|
|
|
|
| ▲ | gnufx 12 hours ago | parent | prev | next [-] |
| > commodity hardware Apart from the "someone's basement", as objected to in this thread, it also doesn't say they acquired "commodity hardware"; I took it to suggest the opposite, presumably for good reason. |
| |
| ▲ | wtallis 5 hours ago | parent [-] | | > it also doesn't say they acquired "commodity hardware"; I took it to suggest the opposite, presumably for good reason. This seems entirely like wishful thinking. They were using a 12 year old server that was increasingly unfit for the day-to-day task of building Android applications. It doesn't seem like they were in a position to acquire and deploy any exotic hardware (except to the extent that really old hardware can be considered exotic and no longer a commodity). I'd be surprised if the new server is anything other than off the shelf x86 hardware, and if we're lucky then maybe they know how to do something useful with a TPM or other hardware root of trust to secure the OS they're running on this server and protect the keys they're signing builds with. | | |
| ▲ | gnufx an hour ago | parent [-] | | I'm just reading what was written, especially "the specific components we needed", and assuming they're not as incompetent as is being suggested, given they've served me well. Perhaps you haven't been tendering for server hardware recently, even for bog-standard stuff, and seen responses saying they can't currently quote a fixed price. At least, that's the case in my part of the world, in an operation buying a good deal of hardware. We also have systems over ten years old still running. |
|
|
|
| ▲ | viraptor 13 hours ago | parent | prev | next [-] |
| > shove it in a special someone's basement They didn't say what conditions it's held in. You're just adding FUD, please stop. It could be under the bed, it could be in a professional server room of the company run by the mentioned contributor. |
| |
| ▲ | lrvick 13 hours ago | parent [-] | | 100%. Just as an example, I have several racks at home, business fiber, battery backup, and a propane generator as a last resort. Also 4th Amendment protections, so no one gets access without me knowing about it. I host a lot of things at home and trust it more than any DC. | | |
| ▲ | Aurornis 12 hours ago | parent | next [-] | | > Also 4th amendment protections so no one gets access without me knowing about it. If there's ever a need for a warrant for any of the projects, the warrant would likely involve seizure of every computer and data storage device in the home. Without a 3rd party handling billing and resource allocation they can't tell which specific device contains the relevant data, so everything goes. So having something hosted at home comes with downsides, too. Especially if you don't control all of the data that goes into the servers on your property. | |
| ▲ | hypeatei 12 hours ago | parent | prev | next [-] | | Isn't a business line quite expensive to maintain per month along with a hefty upfront cost? For a smaller team with a tight budget, just going somewhere with all of that stuff included is probably cheaper and easier like a colo DC. > Also 4th amendment protections so no one gets access without me knowing about it laughs in FISA | |
| ▲ | kube-system 11 hours ago | parent | prev [-] | | Which one of those things do you think you can't get in a datacenter? | | |
| ▲ | drnick1 9 hours ago | parent [-] | | That's not the point. The point is that a "home" setup can basically replicate or exceed a "professional" setup when done right. | | |
| ▲ | kube-system 9 hours ago | parent [-] | | A home setup might be able to rival or beat an “edge” enterprise network closet. It’s not going to even remotely rival a tier 3/4 data center in any way. The physical security, infrastructure, and connectivity will never come close. E.g. nobody is doing full 2N electrical and environmental in their homelab. And they certainly aren’t building attack resistant perimeter fences and gates around their homes, unless they’re home labbing on a compound in a war torn country. |
|
|
|
|
|
| ▲ | cyberax 13 hours ago | parent | prev [-] |
| I read it a bit differently: you don't need to be a mega-corp with millions of servers to actually make a difference for the better. It really doesn't take much! Also, even 12-year-old hardware is wicked fast. |
| |
| ▲ | Aurornis 13 hours ago | parent [-] | | The issue isn’t the hardware, it’s the fact that it’s hosted somewhere private, in conditions they won't name, under the control of a single member. Typically colo providers are used for this. | | |
| ▲ | unethical_ban 3 hours ago | parent | next [-] | | Is it one person? Is it an organization/professional company with close ties to F-Droid? There are a lot of worst-case assumptions in this thread. | |
| ▲ | cyberax 12 hours ago | parent | prev [-] | | Eh. It's just a different set of trade-offs unless you start doing things super-seriously like Let's Encrypt. With F-Droid, their main strength has always been reproducible builds. Ideally we just need to start hosting a second F-Droid server somewhere else and then compare the results. |
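The verification scheme suggested in that comment (a second independent builder, then compare outputs) reduces to a digest comparison. A hypothetical sketch with stand-in files — F-Droid's actual tooling wraps this idea in the `fdroid verify` command:

```shell
# Stand-ins for the artifacts two independent build servers produced
# from the same source. With a reproducible build they are bit-identical.
printf 'apk-bytes' > build_a.apk
printf 'apk-bytes' > build_b.apk

a=$(sha256sum build_a.apk | cut -d' ' -f1)
b=$(sha256sum build_b.apk | cut -d' ' -f1)

if [ "$a" = "$b" ]; then
    echo "REPRODUCIBLE: digests match"
else
    echo "MISMATCH: builds differ"
fi
```

Any third party can run this comparison without trusting either builder, which is what makes the single-basement concern less severe for a project with reproducible builds.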
|
|