"You could argue that I’m abusing 414 URI Too Long. I respond that it’s funnier this way. Other options I considered were: 400 Bad Request, the generic client error code, which is correct but boring;
402 Payment Required, and honestly if you want to pay me to make a particular URL with query string work, I’m open to it;
404 Not Found, but it’s too likely to have side effects, and it doesn’t convey the idea that the request was malformed, which is what I’m going for; and
303 See Other with no Location header, which is extremely uncommon these days but legitimate. Or at least it was in RFC 2616 (“The different URI SHOULD be given by the Location field in the response”), but it was reworded in 7231 and 9110 in a way that assumes the presence of a Location header (“… as indicated by a URI in the Location header field”), while 301, 302, 307 and 308 say “the server SHOULD generate a Location header field”. Well, I reckon See Other with no Location header is fair enough. But URI Too Long was funnier."
https://chrismorgan.info/no-query-strings?foo
ollien (6 hours ago):
I don't think it's an abuse; RFC 9110 defines 414 as a response for "refusing to service the request because the target URI is longer than the server is willing to interpret". Since adding a query string only adds characters, this seems fine; as far as I can tell, there's no stipulation that every page a server hosts must tolerate the same length. I'd be curious whether any well-known clients interpret it that way, though, and make caching decisions based on it. As far as I know, they shouldn't. Obviously it's against the spirit of the thing, but I don't think it's wrong per se.
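For concreteness, the behavior being described can be sketched in a few lines. This is hypothetical code using Python's stdlib `http.server`; the page's actual implementation isn't shown anywhere in the thread:

```python
# Hypothetical sketch: serve plain paths normally, but answer 414 URI Too
# Long whenever the request target carries a query string, as the linked
# page does.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlsplit

def status_for(target: str) -> int:
    """414 if the request target has a query string, else 200."""
    return 414 if urlsplit(target).query else 200

class NoQueryStringHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        code = status_for(self.path)
        self.send_response(code)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        if code == 414:
            self.wfile.write(b"Complain to whoever gave you the bad link.\n")
        else:
            self.wfile.write(b"Hello\n")

# To try it: HTTPServer(("", 8000), NoQueryStringHandler).serve_forever()
```

Note that nothing here caches or remembers anything: the 414 is decided per request, which is why a client treating it as a property of the whole origin would be over-interpreting.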
lucketone (5 hours ago):
If the goal is to be misleading but technically correct, it hits the bullseye.
ollien (4 hours ago):
When the goal is "the funniest way", I think that's a hit :)
thayne (4 hours ago):
You could also redirect to the URL with the query string dropped.
1shooner (7 hours ago):
Also from the 414 page:

>Complain to whoever gave you the bad link, and ask them to stop modifying URLs, because it’s bad manners.

It's ironic that an error response that so blatantly violates the robustness principle is throwing shade about bad manners.
btilly (5 hours ago):
Opinions vary on how good an idea the robustness principle is. That is why, for example, the XML standard requires a conforming parser to signal a fatal error on ill-formed XML.

In our modern world, the robustness principle has become an invitation to security bugs and vendor lock-in. Edge cases sneak through one system thanks to its robustness, then trigger unfortunate behavior when they hit a different system. Two systems each try to do something reasonable with an ambiguous case, but do it differently, and software that works on one fails on the other.
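The XML behavior being described is easy to observe (a sketch using Python's stdlib parser):

```python
# A conforming XML processor must signal a fatal error on ill-formed
# input rather than guess at the author's intent, unlike HTML parsers.
import xml.etree.ElementTree as ET

def is_well_formed(document: str) -> bool:
    try:
        ET.fromstring(document)
        return True
    except ET.ParseError:
        return False

# is_well_formed("<p>ok</p>") is True, but
# is_well_formed("<b><i>overlap</b></i>") is False: the parser refuses
# the mis-nested tags instead of repairing them as a browser would.
```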
1shooner (4 hours ago):
I generally agree, but I don't think XML is the best example. Moving HTML away from XML is considered to have been the right move, isn't it? I was pro-XHTML2 at the time, but in retrospect, have we suffered much for not sending webpage validation errors to end users?
btilly (an hour ago):
Once people have gotten used to not having to conform, forcing them to conform is an uphill battle. Doubly so when, as happened with Microsoft and IE, the vendor would like to encourage lock-in. The only time you can reasonably do it is at the start.

That said, we are paying a huge complexity cost for our efforts to accommodate nonconforming pages, and that complexity is widely abused by malicious actors. See, for instance, https://cheatsheetseries.owasp.org/cheatsheets/XSS_Filter_Ev... for ways in which attackers try to bypass security filters. A lot of it is only possible because of this unnecessary complexity.
zaphar (4 hours ago):
But this is robust? It pretty clearly states that you are visiting an unsupported URL, it tells the user what to do about it, and it doesn't crash the browser or the server. In pretty much every dimension, this is highly robust.
wizzwizz4 (7 hours ago):
The robustness principle is itself bad manners, in plenty of contexts. If I deliver packages by throwing them at the customer, I really want a customer to tell me "hey, don't throw packages at me!" before I attempt to lob something fragile and breakable, or something heavy at someone fragile and breakable. Otherwise, how am I supposed to learn that I'm doing anything wrong?