fidotron a day ago

Having only recently encountered this, does anyone have any insight as to why it takes 2GB to handle a 100MB file?

This looks highly reminiscent (though not exactly the same, pedants) of why people used to get excited about using SAX instead of DOM for XML parsing.

itamarst 21 hours ago | parent | next [-]

I talk about this more explicitly in the PyCon talk (https://pythonspeed.com/pycon2025/slides/ - video soon) though that's not specifically about Pydantic, but basically:

1. Inefficient parser implementation. It's just... very easy to allocate way too much memory if you don't think about large-scale documents, and very difficult to measure. Common problem with many (but not all) JSON parsers.

2. CPython in-memory representation is large compared to compiled languages. So e.g. a 4-digit integer is 5-6 bytes in JSON, 8 in Rust if you use i64, and 25ish in CPython. An empty dictionary is 64 bytes.
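
If you want to check those numbers yourself, sys.getsizeof shows the per-object overhead (these are from a 64-bit CPython 3.x build; exact values shift a little between versions):

    import sys

    print(sys.getsizeof(1234))  # 28 bytes for a small int
    print(sys.getsizeof({}))    # 64 bytes for an empty dict
    print(sys.getsizeof([]))    # 56 bytes for an empty list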

cozzyd 20 hours ago | parent [-]

Funny to see Awkward Array in this context! (And... do people really store giant datasets in JSON?!?)

chao- 17 hours ago | parent | next [-]

Often it's the legacy of an engineer (or team) who "did what they had to do" to meet a deadline and then, even if they wanted to migrate to something better post-launch, weren't given the time to go back and do so.

At least JSON or CSV is better than the ad hoc homegrown formats you found at medium-sized companies that came out of the 90's and 00's.

ljm 13 hours ago | parent | prev | next [-]

Some people even use AI-generated JSON as a semantic layer over their SQL.

jfb 18 hours ago | parent | prev [-]

My sweet summer child

CJefferson 12 hours ago | parent | prev | next [-]

To take 2GB to parse a 100MB file, we only need a 20x increase over the file size.

Let's imagine the file is mostly full of single-digit numbers with no spaces (so lists like 2,4,1,0,9,3...).

Each number then takes about 2 bytes in the file (a digit plus a comma), so at 20x we get to spend about 40 bytes storing each number in memory.

Make a minimal-sized class to store an integer:

    class JsonInt:
        x = 1
An instance of that class is already 48 bytes.

Usually we store floats from JSON; the size of 1 as a float in Python is 24 bytes.

Now, you can get smaller, but as soon as you introduce any kind of class structure, or defer parsing numbers until they are used (in case you want people to be able to interpret them as ints or floats), you blow through a 20x memory size increase.
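
Rough numbers to back that up, assuming 64-bit CPython (exact sizes shift between versions):

    import sys

    class JsonInt:
        x = 1

    print(sys.getsizeof(JsonInt()))  # ~48 bytes for the bare instance, before counting what it points to
    print(sys.getsizeof(1.0))        # 24 bytes for a float
    print(sys.getsizeof("1"))        # ~50 bytes if you keep the unparsed digit around as a string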

fidotron 10 hours ago | parent [-]

> We need to spend 40 bytes storing a number.

But . . . why? Assuming they aren't BigInts or similar, these are at most 8 bytes of actual data. This overhead is ridiculous.

Using classes should enable you to be much smaller than the JSON representation, not larger. For example, V8 does this with hidden classes: https://v8.dev/docs/hidden-classes

> not parsing numbers until they are used

Doesn't this defeat the point of pydantic? It's supposed to be checking the model is valid as it's loaded, using jiter. If the data is valid it can be loaded into an efficient representation, and if it's not, the errors can be emitted while iterating over it.
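
Something like this one-pass parse-and-validate is what I mean (a minimal pydantic v2 sketch; the Point model is made up for illustration):

    from pydantic import BaseModel

    class Point(BaseModel):
        x: int
        y: int

    # model_validate_json parses the JSON and validates the fields in one pass (pydantic v2).
    p = Point.model_validate_json('{"x": 1, "y": 2}')
    print(p.x, p.y)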

jerf 8 hours ago | parent [-]

"But . . . why?"

This is CPython. This is how it works. It's not particularly related to JSON. That sort of overhead is put on everything. It just hurts the most when the thing you're putting the overhead on is a single integer. It hurts less when you're doing it to, say, a multi-kilobyte string.

Even in your V8 example, that's a JIT optimization, not "how the language works". If you break that optimization, which you can do at any moment with any change in your code base, you're back to similar sizes.

Boxing everything lets you easily implement the dynamic scripting language's way of treating everything as an Object of some sort, but it comes at a price. There's a reason dynamic scripting languages, even after the JIT has come through, are generally substantially slower languages. This isn't the only reason, but it's a significant part of it.
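
A rough way to see the price of boxing in CPython (sizes are approximate and ignore that small ints are interned):

    import sys
    from array import array

    boxed = list(range(100_000))         # a list holds pointers to separately boxed int objects
    packed = array("q", range(100_000))  # one contiguous buffer of 8-byte signed ints

    # ~8 bytes of pointer plus ~28 bytes of int object per element
    print(sys.getsizeof(boxed) + sum(sys.getsizeof(n) for n in boxed))
    # ~8 bytes per element plus a small header
    print(sys.getsizeof(packed))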

fidotron 8 hours ago | parent [-]

> Even in your v8 example, that's a JIT optimization, not "how the language works". You break that optimization, which you can do at any moment with any change in your code base, you're back to similar sizes.

The whole point of the V8 optimization is that it works in the face of prototype chains that merge, etc., as you add new fields dynamically, so if you change your code base it adapts.
