measurablefunc | 6 days ago
That still won't work b/c there is no backtracking. The point is that there is no way to encode backtracking/choice points like in Prolog w/ a Markov chain. The argument you have presented is not extensionally equivalent to the Prolog solver. It is almost equivalent but it's missing choice points for starting at a valid solution & backtracking to an incomplete board to generate a new one. The typical argument for absorbing states doesn't work b/c sudoku is not a typical deterministic puzzle.
Ukv | 6 days ago | parent
> That still won't work b/c there is no backtracking.

It's essentially just a lookup table mapping from input board to the set of valid output boards - there's no real way for it not to work (obviously not practical, though). If board A has valid solutions B, C, D, then the transition matrix cell mapping {A} to {B, C, D} is 1.0, and all other entries in that row are 0.0.

> The point is that there is no way to encode backtracking/choice points

You can if you want, by keeping the same variables as a regular sudoku solver as part of the Markov chain's state and transitioning instruction-by-instruction rather than mapping directly to the solution - it's just that there's no particular need to when you've precomputed the solution.
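A minimal sketch of that second approach (the helper names candidates/step/run and the encoding of the board as an 81-character string with '.' for blanks are my own, not from the thread): the chain's state is the complete solver state - board, current cell index, and the stack of choice points - and step is the deterministic transition function applied one instruction at a time.

  def candidates(board, i):
      # Digits legal at cell i (0-80); board is an 81-char string, '.' = blank.
      r, c = divmod(i, 9)
      used = set(board[r * 9:(r + 1) * 9])            # row
      used |= set(board[c::9])                        # column
      br, bc = 3 * (r // 3), 3 * (c // 3)             # 3x3 box
      used |= {board[(br + dr) * 9 + bc + dc]
               for dr in range(3) for dc in range(3)}
      return [d for d in "123456789" if d not in used]

  def step(state):
      # One Markov transition: (board, cell index, choice-point stack) -> next state.
      board, i, stack = state
      if i == 81:
          return state                                # solved: absorbing state
      if board[i] != ".":
          return (board, i + 1, stack)                # given clue, move on
      cands = candidates(board, i)
      if cands:
          # Choice point: try the first digit, remember the untried alternatives.
          filled = board[:i] + cands[0] + board[i + 1:]
          return (filled, i + 1, stack + [(board, i, cands[1:])])
      # Dead end: pop choice points until one still has an untried digit.
      stack = list(stack)
      while stack:
          old_board, j, rest = stack.pop()
          if rest:
              filled = old_board[:j] + rest[0] + old_board[j + 1:]
              return (filled, j + 1, stack + [(old_board, j, rest[1:])])
      return (board, 81, [])                          # unsolvable: absorb as-is

  def run(puzzle):
      # Iterate the transition function until the chain hits an absorbing state.
      state = (puzzle, 0, [])
      while state[1] != 81:
          state = step(state)
      return state[0]

Each call to step depends only on the current state, which is exactly the Markov property; the Prolog-style choice points are just data inside that state rather than something the chain has to reach outside of.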