deathanatos 3 days ago
Pagination: do not force me to drink from a paginated coffee stirrer. I do not want 640 B of data in a response, and then have to send another request for the next 640 B. And often, pagination means the calls are serialized, so I'm doing nothing but waiting through round trip after round trip of latency for the next meager 640 B of data (sketched below). Azure, I'm looking at you. Many of their services do this, but Blob storage is something else: I've literally gotten information-free responses there. (I.e., 0 B of actual data. I wish I could say 0 B were used to transfer it.)

When you're designing, think about how big a record/object/item is, and return a reasonable number of them in a page. For programmatic consumers who want to walk the dataset, a 640 KiB response is really not that big, yet I've seen so many responses orders of magnitude smaller, because someone thought "100 items is a good page size, right?" and 100 items was like 4 KiB of data.

> If you have thirty API endpoints, every new version you add introduces thirty new endpoints to maintain. You will rapidly end up with hundreds of APIs that all need testing, debugging, and customer support.

You version the one thing that's changing. As much as I hate the /v2/... form of versioning, nobody reversions all the /v1/... APIs just because one API needed a /v2. /v2 is a ghost town, save for the /v2 APIs.
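To make the round-trip cost concrete, here's a minimal sketch of that serialized paging loop, assuming a hypothetical JSON endpoint that returns "items" plus an opaque "continuation" token (both names invented): with N pages, you pay roughly N round trips of latency no matter how tiny each page is.

    import requests

    def walk_dataset(base_url):
        # Hypothetical paginated endpoint returning
        # {"items": [...], "continuation": "..."}. Each request blocks
        # on the previous response's token, so total wall time is
        # ~pages * RTT -- brutal when a page carries only ~640 B.
        token = None
        while True:
            params = {"continuation": token} if token else {}
            resp = requests.get(base_url, params=params, timeout=30)
            resp.raise_for_status()
            body = resp.json()
            yield from body.get("items", [])  # may legitimately be empty
            token = body.get("continuation")
            if not token:  # no token means we've walked the whole dataset
                break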
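And on the versioning point, a toy illustration (framework-agnostic, all names invented) of what "version the one thing that's changing" looks like in a route table: only the endpoint whose contract broke grows a /v2; the other twenty-nine stay on /v1.

    # Only the endpoint that broke gets a /v2; everything else keeps its
    # /v1 route, so /v2 stays a ghost town except for the changed API.
    routes = {
        "/v1/users":  lambda: {"users": []},           # unchanged, stays v1
        "/v1/orders": lambda: {"orders": []},          # old shape, kept for old clients
        "/v2/orders": lambda: {"orders": [], "next": None},  # the one breaking change
    }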
atoav 3 days ago
Yeah, pagination is a great option, maybe even a good default. But don't make it the only choice: give developers the option to make the tradeoff between number of requests and payload size themselves.
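A minimal sketch of what that tradeoff could look like server-side (names illustrative, not any particular framework's API): the client picks the page size; the server supplies a default and only clamps to protect itself.

    DEFAULT_PAGE_SIZE = 100   # sane default for casual callers
    MAX_PAGE_SIZE = 10_000    # the cap protects the server, not the client

    def resolve_page_size(requested):
        # Caller chooses the request-count vs. payload-size tradeoff;
        # None falls back to the default, anything else is clamped.
        if requested is None:
            return DEFAULT_PAGE_SIZE
        return max(1, min(int(requested), MAX_PAGE_SIZE))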