simonw 8 hours ago

It looks like there are two maintained implementations of this at the moment - one in C# https://github.com/j-brooke/FracturedJson/wiki/.NET-Library and another in TypeScript/JavaScript https://github.com/j-brooke/FracturedJsonJs. They each have their own test suite.

There's an older pure Python version but it's no longer maintained - the author of that recently replaced it with a Python library wrapping the C# code.

This looks to me like the perfect opportunity for a language-independent conformance suite - a set of tests defined as data files that can be shared across multiple implementations.

This would not only guarantee that the existing C# and TypeScript implementations behaved exactly the same way, but would also make it much easier to build and then maintain more implementations across other languages.

Interestingly the now-deprecated Python library does actually use a data-driven test suite in the kind of shape I'm describing: https://github.com/masaccio/compact-json/tree/main/tests/dat...
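
Roughly, the runner for a shared suite like that could be as small as this - the directory layout, the input/options/expected schema and format_json() are all hypothetical here, just to show the shape:

    # Hypothetical conformance runner: each case file holds "input",
    # "options" and "expected"; format_json() is the implementation under test.
    import json
    from pathlib import Path

    def format_json(text, options):
        raise NotImplementedError  # wire up the C#, TS or Python formatter here

    def run_suite(data_dir):
        failures = []
        for case_file in sorted(Path(data_dir).glob("*.json")):
            case = json.loads(case_file.read_text())
            if format_json(case["input"], case.get("options", {})) != case["expected"]:
                failures.append(case_file.name)
        return failures

The same case files could then be consumed by equally tiny runners in C# and TypeScript, which is what would keep the implementations in lockstep.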

That new Python library is https://pypi.org/project/fractured-json/ but it's a wrapper around the C# library and says "You must install a valid .NET runtime" - that makes it mostly a non-starter as a dependency for other Python projects because it breaks the ability to "pip install" them without a significant extra step.

klysm 24 minutes ago | parent | next [-]

This is also basically a pure function, which makes it super simple to write a harness.
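
Right - since it's essentially (text, options) -> text, a cross-implementation check can be a single assertion over a shared corpus (format_a and format_b standing in for however you invoke each implementation):

    def check_equivalence(cases, format_a, format_b):
        # cases: iterable of (text, options); format_a/format_b: the two implementations
        for text, options in cases:
            assert format_a(text, options) == format_b(text, options), (text, options)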

odyssey7 7 hours ago | parent | prev | next [-]

This is a good idea, though I don’t think it would guarantee program equivalence beyond the test cases.

simonw 7 hours ago | parent | next [-]

Depends on how comprehensive the test suite is.

And OK it's not equivalent to a formal proof, but passing 1,000+ tests that cover every aspect of the specification is pretty close from a practical perspective, especially for a visual formatting tool.

boxed 7 hours ago | parent [-]

With mutation testing you can guarantee that all the behavior in the code is tested.
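
For anyone unfamiliar: a mutation tester (mutmut is a Python example) makes small edits to the code under test and requires that each edit makes at least one test fail. A toy illustration with made-up formatter logic:

    # Made-up formatter helper, plus the boundary test that kills its mutants.
    def should_inline(line_length, max_inline_length):
        return line_length <= max_inline_length

    # A mutation tool would generate mutants like "<" or ">=" in place of "<="
    # and report them as surviving unless some test tells them apart:
    def test_inline_boundary():
        assert should_inline(80, 80)        # kills the "<" mutant
        assert not should_inline(81, 80)    # kills the ">=" mutant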

odyssey7 6 hours ago | parent | next [-]

UC Berkeley: “Top-level functional equivalence requires that, for any possible set of inputs x, the two pieces of code produce the same output. … testing, or input-output (I/O) equivalence, is the default correctness metric used by the community. … It is infeasible to guarantee full top-level functional equivalence (i.e., equivalence for any value of x) with testing since this would require testing on a number of inputs so large as to be practically infinite.”

https://www2.eecs.berkeley.edu/Pubs/TechRpts/2025/EECS-2025-...

esrauch 6 hours ago | parent [-]

In practice, mutation fuzz testers can see (whitebox) where the branches are in the underlying code; with a differential fuzz test under that approach, they can generally generate test cases that exercise all of the branches.

So I think in some computer-science-theory sense, for arbitrary functions, it's not possible, but for the actual shape of behavior in question from this library I think it's realistic that a decent corpus of 'real' examples plus differential fuzzing would give you more confidence than anyone has in nearly any program's correctness here on real Earth.
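
Coverage-guided tools like Atheris or libFuzzer are the whitebox version of this; even a plain property-based sketch (Hypothesis here, with stand-in formatter functions) already gets you the differential part:

    import json
    from hypothesis import given, strategies as st

    def format_a(doc, options):
        return json.dumps(json.loads(doc), indent=2)  # stand-in: call implementation A

    def format_b(doc, options):
        return json.dumps(json.loads(doc), indent=2)  # stand-in: call implementation B

    # Arbitrary JSON documents: scalars, plus nested lists and objects.
    json_values = st.recursive(
        st.none() | st.booleans() | st.integers() | st.text(),
        lambda kids: st.lists(kids) | st.dictionaries(st.text(), kids),
    )

    @given(json_values, st.integers(min_value=20, max_value=120))
    def test_implementations_agree(value, width):
        doc = json.dumps(value)
        options = {"max_inline_length": width}  # hypothetical option name
        assert format_a(doc, options) == format_b(doc, options)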

odyssey7 4 hours ago | parent [-]

Yes, there are different levels of sureness being described.

When I hear guarantee, it makes me think of correctness proofs.

Confidence is more of a practical notion for how much you trust the system for a given use case. Testing can definitely provide confidence in this scenario.

wizzwizz4 5 hours ago | parent | prev [-]

You can guarantee that all the cases in the code are tested. That doesn't necessarily mean that all the behaviour is tested. If two implementations use very different approaches, which happen to have different behaviour on the Mersenne primes (for deep mathematical reasons), but one of them special-cases byte values using a lookup table generated from the other, you wouldn't expect mutation testing to catch the discrepancy. Each implementation is still the local optimum as far as passing tests is concerned, and the mutation test harness wouldn't know that "disable the small integer cache" is the kind of mutation that shouldn't affect whether tests pass.

There are only 8 32-bit Mersenne primes, 4 of which are byte-valued. Fuzzing might catch the bug, if it happened to hit one of the four other 32-bit Mersenne primes (which, in many fuzzers, is more likely than a uniform distribution would suggest), but I'm sure you can imagine situations where it wouldn't.
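
A toy version of the scenario, in case it helps (everything below is made up): suppose the correct behaviour special-cases Mersenne primes, and the buggy implementation masks the difference for byte values behind a table copied from the reference:

    # Toy model: impl_a is the reference; impl_b gets the Mersenne primes wrong
    # but hides it for byte values behind a table generated from impl_a.
    MERSENNE_PRIMES_32BIT = {3, 7, 31, 127, 8191, 131071, 524287, 2147483647}

    def impl_a(n):
        return 0 if n in MERSENNE_PRIMES_32BIT else n % 10

    def impl_b_slow(n):
        return n % 10  # the bug: no Mersenne special case

    _BYTE_TABLE = [impl_a(n) for n in range(256)]  # copied from impl_a

    def impl_b(n):
        return _BYTE_TABLE[n] if n < 256 else impl_b_slow(n)

    # Byte-valued tests plus a few large composite values cover every branch of
    # impl_b and kill its mutants, yet impl_b(8191) != impl_a(8191) goes unseen.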

JonChesterfield 2 hours ago | parent [-]

I think if you hit full path coverage in each of them independently, run all the cases through both, and check they're consistent, you're still done.

Or branch coverage for the lesser version; the idea is still to generate interesting cases based on each implementation, not based solely on one of them.

wizzwizz4 9 minutes ago | parent [-]

If the buggy implementation relies indirectly on the assumption that 2^n - 1 is composite, by performing a calculation that's only valid for composite values on a prime value, there won't be a separate path for the failing case. If the Mersenne numbers don't affect flow control in a special way in either implementation, there's no reason for the path coverage heuristic to produce a case that distinguishes the implementations.

rafabulsing 7 hours ago | parent | prev [-]

Well yeah, but then any discrepancies that are found can be discussed (to decide which of the behaviors is the expected one) and then added as a test for all existing and future implementations.

EmilStenstrom 4 hours ago | parent | prev [-]

Data-driven test suites are really good for building trust in a library. Both the html5lib-tests suite and my recent xss-bench are examples of this!