EduardoBautista 5 hours ago

It shouldn't. The issue is that most developers would rather spin up another instance of their server than fix the performance issue in their code, so it has become common belief that computers are really slow at serving content. And we are talking about static content: you will be bottlenecked by bandwidth long before you are ever bottlenecked by your laptop.

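The bandwidth-before-laptop claim is easy to sanity-check with back-of-envelope arithmetic. The link speed and page size below are illustrative assumptions, not figures from the thread:

```python
# Rough ceiling on request rate imposed by bandwidth alone.
# Both inputs are assumptions for illustration.
link_bps = 1_000_000_000     # assume a 1 Gbit/s uplink
page_bytes = 100 * 1024      # assume a ~100 KiB static page

# Requests per second at which the link saturates
# (multiply bytes by 8 to get bits on the wire).
max_rps_bandwidth = link_bps / (page_bytes * 8)
print(f"bandwidth ceiling: ~{max_rps_bandwidth:.0f} req/s")
```

Under those assumptions the link saturates somewhere around 1,200 requests/second, a rate even a modest CPU can comfortably exceed for static responses.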
Nextgrid 4 hours ago

To be fair, computers are slow if you intentionally rent slow, overpriced ones from poor-value vendors like cloud providers. People who started their careers amid this madness may be genuinely unaware of how fast modern hardware has become.

eqvinox 5 hours ago

With a 2025 tech stack, yes. With a 2005 tech stack, no: no containers, no (or only limited) server-side dynamic scripting languages, no microservices or anything like that. Since the content is essentially static, this is actually viable. Search functions might be a bit problematic, but that's a solvable problem. Of course, you pay in engineering skill and resources.

stackskipton 4 hours ago

SRE here. Containers are not causing any performance problems.

grim_io 3 hours ago

Maybe the perception comes from all the Mac and Windows devs having to run a Linux VM to use containers.

eirpoeior 5 hours ago

Is there any feasible way to implement search client-side on a database of this scale? I guess you would need some sort of search-term-to-document-id mapping that gets downloaded to the browser, but maybe there's something more efficient than trying to anticipate in advance everything people might search for? And how would you search for phrases or substrings? I have no idea whether that's doable without a server-side database holding the whole document store.

ffsm8 4 hours ago

Theoretically, just thinking about the problem... you could embrace offline-first and sync to IndexedDB? After that, search becomes simple to query. It obviously comes with its own challenges, depending on your user base (e.g. not a good idea if it's only a temporary login, etc.).

namibj 4 hours ago

There are several implementations of backing an SQLite3 database with lazily loaded, then cached, network storage, including multiple that work over HTTP (IIRC usually with range requests). Those basically just work.

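A minimal sketch of the mechanism such implementations rely on: SQLite's file format is page-addressed, so a client can fetch only the byte ranges it needs rather than the whole file. Here a local bytes object stands in for the remote file, and the `RangeReader` class is hypothetical; a real client would issue HTTP `Range: bytes=...` requests in its place.

```python
import os
import sqlite3
import struct
import tempfile

# Build a small database to stand in for the file hosted on a server.
fd, path = tempfile.mkstemp(suffix=".db")
os.close(fd)
con = sqlite3.connect(path)
con.execute("CREATE TABLE docs (id INTEGER PRIMARY KEY, body TEXT)")
con.execute("INSERT INTO docs VALUES (1, 'hello world')")
con.commit()
con.close()
with open(path, "rb") as f:
    remote = f.read()
os.unlink(path)

class RangeReader:
    """Fetches arbitrary byte ranges, tracking how much was transferred.
    Over HTTP, read() would become a GET with a Range header."""
    def __init__(self, blob):
        self.blob = blob
        self.bytes_fetched = 0

    def read(self, offset, length):
        self.bytes_fetched += length
        return self.blob[offset:offset + length]

reader = RangeReader(remote)

# The first 100 bytes of an SQLite file are its header; the page size is
# stored at offset 16 as a big-endian 16-bit integer. A single small
# range request is enough to learn the page layout and start lazily
# fetching individual pages on demand.
header = reader.read(0, 100)
assert header[:16] == b"SQLite format 3\x00"
(page_size,) = struct.unpack(">H", header[16:18])
print(f"page size {page_size}, fetched {reader.bytes_fetched}"
      f" of {len(remote)} bytes")
```

The point is that queries touching a few pages transfer a few pages, not the whole database, which is why "static file on a CDN plus client-side SQLite" works for search.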
eddythompson80 5 hours ago

No, it won't. This is static content we're talking about. The only thing limiting you is network throughput and maybe disk IO (assuming the content doesn't fit compressed in RAM). Even an around-the-globe round trip costs only a few hundred milliseconds of latency. Some cloud products have distorted an entire generation of developers' understanding of how services can scale.

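As a sketch of how little machinery static content actually needs, even Python's bundled (and not especially fast) HTTP server will do it. The document root and file name here are illustrative:

```python
import http.server
import pathlib
import tempfile
import threading
import urllib.request

# A throwaway document root with one static page.
root = pathlib.Path(tempfile.mkdtemp())
(root / "index.html").write_text("<h1>static content</h1>")

def handler(*args, **kwargs):
    # Serve files from our throwaway root instead of the CWD.
    return http.server.SimpleHTTPRequestHandler(
        *args, directory=str(root), **kwargs)

# Port 0 lets the OS pick a free port.
server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), handler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

body = urllib.request.urlopen(
    f"http://127.0.0.1:{port}/index.html").read().decode()
server.shutdown()
print(body)
```

In production you'd put nginx or a CDN in front, but the work per request, reading bytes off disk (or the page cache) and writing them to a socket, is the same either way.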
array_key_first 2 hours ago

A laptop from 10 years ago should be able to serve that comfortably. Computers are really, really fast. I'm sorry, but thousands of users or tens of thousands of requests a day is nothing.

LunaSea 3 hours ago

A 6-core server or laptop can easily serve 100K requests per second, which works out to 259B requests per month, 576x more than their current load.

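The arithmetic in that comment checks out (assuming a 30-day month):

```python
rps = 100_000
per_month = rps * 86_400 * 30           # seconds per day * days
print(f"{per_month:,} requests/month")  # 259,200,000,000

# Working backwards, "576x more" implies a current load of:
print(f"{per_month // 576:,} requests/month today")  # 450,000,000
```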
computomatic 5 hours ago

I think it's more helpful to discuss this in requests per second. I'd interpret "thousands of people hitting a single endpoint multiple times a day" as something like 10,000 people making ~5 requests per 24 hours. That's roughly 0.5 requests per second.

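Spelling out that estimate (the user and per-user request counts are the comment's own assumptions):

```python
users = 10_000
requests_each = 5
rps = users * requests_each / 86_400   # seconds in 24 hours
print(f"{rps:.2f} req/s")              # 0.58
```

So even if the real traffic were an order of magnitude higher, it would still be single-digit requests per second.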
lanyard-textile 5 hours ago

It all depends, of course, but generally no: a laptop could handle that just fine.

marginalia_nu 4 hours ago

There may be a risk of thermal throttling in such a use case, as laptops are really not designed for sustained loads of any kind. Some deal with it better than others, but few deal with it well. Part of the problem is that consumer-grade NICs tend to offload quite a lot of work onto the CPU that higher-end, server-spec NICs handle themselves, since a laptop isn't really expected to keep up with 10K concurrent TCP connections.