em-bee a day ago

i am not aware of any parser that does that differently, but i would also argue that this is not the job of parsers. after parsing (or before exporting) there should be a data validation step based on whitelists.

so the user can send in unknown fields all they want, the code will only accept the username and firstname strings, and ignore the other ones.

same with fetching data and sending it to the user. i fetch only the fields i want and create the correct datastructures before invoking the marshaling step.

there are no footguns. if you expect your parser to protect you, you are using it wrong. parsers were not designed for that.

input -> parse -> extract the fields we want, which are valid -> create a data-structure with those fields.

data -> get fields i want -> create datastructures with only wanted fields -> write to output format
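the two pipelines above can be sketched in Python. this is a minimal illustration, not code from the thread: the JSON input and the `username`/`firstname` allowlist are taken from the comment, everything else (function names, the sample payload) is made up for the example.

```python
import json

# allowlist of the only fields the code will accept
ALLOWED_FIELDS = {"username", "firstname"}

def extract_valid_fields(raw: str) -> dict:
    """input -> parse -> extract the fields we want -> build a clean structure.

    Unknown fields in the input are simply ignored; only allowlisted
    string fields make it into the resulting data structure.
    """
    parsed = json.loads(raw)
    return {
        key: value
        for key, value in parsed.items()
        if key in ALLOWED_FIELDS and isinstance(value, str)
    }

payload = '{"username": "alice", "firstname": "Alice", "role": "admin"}'
clean = extract_valid_fields(payload)
# the unknown "role" field is dropped before anything downstream sees it
print(json.dumps(clean))  # write only wanted fields to the output format
```

the same allowlist idea applies in reverse on output: build the structure from only the fields you want, then marshal that, never the raw data.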

nottorp 11 hours ago | parent | next [-]

> this is not the job of parsers

The part of the article that I read before getting annoyed at the clickbaity title is basically "if you trust external data here's how you can blame that design decision on the parser".

securesaml a day ago | parent | prev | next [-]

This is correct. In the blog post they say:

> Other examples exist, but most follow the same pattern: the component that does security checks and the component that performs the actions differ in their view of the input data.

This would be solved, as you described, by ensuring that the downstream layer uses only content that was verified in the security-check layer.

If they are using a microservice then: Security check API -> return verified data (i.e. re-serialize the verified JSON or XML into byte form, NOT the original input) -> Processing layer i.e. userCreate API uses verified data.
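a rough sketch of that flow, assuming JSON for concreteness. the function names (`security_check`, `user_create`) and the allowlist are hypothetical stand-ins for the two layers described above; the key point is that the security-check layer re-serializes the verified data and the processing layer never touches the original bytes.

```python
import json

# fields the security-check layer is willing to vouch for
ALLOWED_FIELDS = {"username", "firstname"}

def security_check(raw_input: bytes) -> bytes:
    """Security check API: verify, then re-serialize ONLY the verified fields.

    Returning json.dumps(...) of the verified structure means the
    attacker-controlled input bytes are never forwarded downstream,
    so both layers cannot disagree about what the input contains.
    """
    parsed = json.loads(raw_input)
    verified = {
        key: value
        for key, value in parsed.items()
        if key in ALLOWED_FIELDS and isinstance(value, str)
    }
    return json.dumps(verified).encode()

def user_create(verified_bytes: bytes) -> dict:
    """Processing layer: parses only the re-serialized, verified data."""
    return json.loads(verified_bytes)
```

with this shape, a parser differential between the two layers has nothing to bite on: the second parse sees bytes the first layer produced, not the original input.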

This is the method we used in fixing the ruby-saml example.

See: https://bsky.app/profile/filippo.abyssdomain.expert/post/3le...
