drsopp 7 hours ago

Did some quick calculations, and at this precision, it seems a table lookup might be able to fit in the L1 cache depending on the CPU model.
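A back-of-the-envelope version of that calculation (the 4096-entry table size and 8-byte double entries are my assumptions; L1d sizes vary by CPU model):

```python
# Rough footprint of a hypothetical asin() lookup table.
ENTRIES = 4096          # assumed table size (one entry per segment)
BYTES_PER_ENTRY = 8     # sizeof(double) on common platforms

table_bytes = ENTRIES * BYTES_PER_ENTRY
print(table_bytes // 1024, "KiB")  # prints: 32 KiB

# Typical L1 data caches are 32 KiB on many x86 cores, 48-64 KiB on
# newer ones, so a table this size would occupy most or all of L1d.
```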

Pannoniae 7 hours ago | parent | next [-]

Microbenchmarks. A LUT will win many of them, but you pessimise the rest of the code. So unless a significant (read: 20%+) portion of your runtime goes into the LUT, there isn't much point in bothering. For almost any pure calculation without I/O, it's better to do the arithmetic than the memory access.

jcalvinowens 6 hours ago | parent [-]

Locality within the LUT matters too: if you know you're looking up identical or nearby-enough values to benefit from caching, an LUT can be more of a win. You only pay the cache cost for the portion you actually touch at runtime.

I could imagine some graphics workloads tending to compute asin() repeatedly with nearby input values. But I'd guess the locality isn't fine-grained enough to matter: only eight double-precision floats fit in a 64-byte cache line.

groundzeros2015 6 hours ago | parent | prev | next [-]

I don’t want to fill up L1 for sin.

jcalvinowens 6 hours ago | parent | prev [-]

Surely the loss in precision of a 32KB LUT for double precision asin() would be unacceptable?

Jyaif 6 hours ago | parent [-]

By interpolating between values you can get excellent results with LUTs much smaller than 32KB. Whether it will be faster than the computation from the OP, I don't know.
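A minimal sketch of the interpolation idea (the table size of 1024 entries, i.e. 8 KB of doubles, and the clamping scheme are my assumptions, not from the article):

```python
import math

N = 1024                # hypothetical table size: 1024 segments, ~8 KB of doubles
STEP = 2.0 / N
# Precompute asin at evenly spaced knots on [-1, 1].
table = [math.asin(-1.0 + i * STEP) for i in range(N + 1)]

def asin_lut(x: float) -> float:
    """Approximate asin(x) by linear interpolation between adjacent table entries."""
    t = (x + 1.0) / STEP            # fractional index into the table
    i = min(int(t), N - 1)          # clamp so i+1 stays in range at x == 1.0
    frac = t - i
    return table[i] + frac * (table[i + 1] - table[i])

# Accuracy check away from the endpoints, where asin's slope is tame:
err = max(abs(asin_lut(x) - math.asin(x))
          for x in (k / 500.0 for k in range(-450, 451)))
print(err)  # well under 1e-4 on [-0.9, 0.9]
```

The error grows sharply near ±1 (asin's derivative diverges there), so a real implementation would likely need a denser table or a different scheme at the endpoints.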

drsopp 5 hours ago | parent | next [-]

I experimented a bit with the code: various tables with different data types. There is enough noise from the Monte Carlo that it doesn't make a difference if you use smaller data types than double or float. Even dropping interpolation worked fine, and got the speed on par with the best in the article, but not faster.

jcalvinowens 5 hours ago | parent [-]

Does your benchmark use sequential or randomly ordered inputs? That would make a substantial difference with an LUT, I would think. But I'm guessing. Maybe 32K is so small it doesn't matter (if almost all of the LUT sits in the cache and is never displaced).

> if you use smaller data types than double or float. Even dropping interpolation worked fine,

That's kinda tautological, isn't it? Of course reduced precision is acceptable where reduced precision is acceptable... I guess I'm assuming double precision was used for a good reason; it often isn't :)

drsopp 4 hours ago | parent [-]

I didn't inspect the rest of the code, but I guess the table is fetched from L2 on every call? I think the L1 data cache is flooded by all the other stuff going on.

About dropping the interpolation: yes, you are right of course. I was thinking about the speed: there was no noticeable speed improvement from dropping interpolation. The asin calls are only a small fraction of everything.

jcalvinowens 6 hours ago | parent | prev [-]

I'm very skeptical you wouldn't get perceptible visual artifacts if you rounded the trig functions to 4096 linear approximations. But I'd be happy to be proven wrong :)
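A quick numerical check of that intuition (my own sketch: a piecewise-linear asin with 4096 uniform segments over [-1, 1], evaluated against math.asin):

```python
import math

SEGMENTS = 4096
STEP = 2.0 / SEGMENTS
knots = [math.asin(-1.0 + i * STEP) for i in range(SEGMENTS + 1)]

def asin_pl(x: float) -> float:
    """Piecewise-linear asin over 4096 uniform segments on [-1, 1]."""
    t = (x + 1.0) / STEP
    i = min(int(t), SEGMENTS - 1)   # clamp so i+1 stays in range at x == 1.0
    return knots[i] + (t - i) * (knots[i + 1] - knots[i])

def max_err(lo: float, hi: float, samples: int = 20000) -> float:
    """Worst observed absolute error over [lo, hi] on a dense sample grid."""
    return max(abs(asin_pl(x) - math.asin(x))
               for x in (lo + (hi - lo) * k / samples for k in range(samples + 1)))

mid = max_err(-0.9, 0.9)    # tiny: asin is smooth and nearly linear here
edge = max_err(0.999, 1.0)  # much larger: asin's slope diverges at +-1
print(mid, edge)
```

The error is negligible over most of the domain but concentrates in the last few segments near ±1, where the infinite derivative makes linear segments a poor fit, which is where artifacts would be most likely to show up.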