Thank you for giving me an excellent morning of reading. 🙂

I’m currently processing upper left to bottom right, depth-first. Diagonals are part of the constraints, but like you said, it almost doesn’t matter from the standpoint of the 2×2 pattern repeat.

And that high likelihood of a pattern is a problem.

I remember being introduced to linear programming in my linear algebra class, and I did a smattering of non-linear modeling during my chemistry research days, but that was 4-5 years ago. The terminology and concepts were familiar; the mechanics are long gone from my memory. 🙂

Anyway, Wikipedia, the Sudoku paper, and the MATLAB code were very enlightening. In thinking about applying LP to this problem, I’m not sure whether it makes more sense to have three decision variables (x, y, z) representing color, richness, and thickness, each binary, or to have a single variable x with values 0-3 for color present, richness present, thickness present, or otherwise. Three decision variables simplify the declaration of constraints, I think, but then what would my objective function be? 🙂 I’ll definitely be playing with this once I get home and get R installed.
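One standard way out of the objective-function dilemma (borrowed from the Sudoku-style formulation, not prescribed by either comment) is a binary indicator variable per (cell, value) pair, with a constant objective, since this is really a feasibility problem. Here is a rough Python sketch (grid size, color count, and function names are mine) that just builds and counts the variables and constraint rows for the color attribute:

```python
from itertools import product

def build_bilp(rows, cols, colors, diagonal=True):
    # One binary indicator per (cell, color): x[r, c, k] = 1 iff cell
    # (r, c) receives color k.  For a pure feasibility problem the
    # objective can be a constant; only the constraints matter.
    variables = list(product(range(rows), range(cols), range(colors)))

    constraints = []
    # Each cell receives exactly one color: sum over k of x[r,c,k] == 1.
    for r, c in product(range(rows), range(cols)):
        constraints.append(("exactly_one", [(r, c, k) for k in range(colors)]))
    # Adjacent cells never share a color: x[r,c,k] + x[r2,c2,k] <= 1.
    steps = [(0, 1), (1, 0)] + ([(1, 1), (1, -1)] if diagonal else [])
    for r, c in product(range(rows), range(cols)):
        for dr, dc in steps:
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < rows and 0 <= c2 < cols:
                for k in range(colors):
                    constraints.append(("at_most_one", [(r, c, k), (r2, c2, k)]))
    return variables, constraints

variables, constraints = build_bilp(3, 3, 4)
print(len(variables), len(constraints))  # 36 89
```

Richness and thickness would get their own indicator families with the same shape, which is essentially the three-variables-each-binary option, just expanded per value.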

Handling the degradation of the constraints based on my actual given input would be pretty graceful with the LP approach. If they’re asking for a 4-deep blanket, then automatically swap out the “no duplicate richnesses and thicknesses” constraints for the “no adjacent richnesses and thicknesses” constraints, etc. Lots of logic, but nothing actually difficult there, and having the constraints defined distinctly (rather than tangled in the logic of the brute-force algorithm) makes this all very clean and maintainable (theoretically; I haven’t done it yet).
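The swap-out logic could look something like this rough Python sketch (the constraint names and the threshold parameter are made up for illustration, not from the post):

```python
def pick_constraints(depth, levels=5):
    # levels: how many distinct richnesses/thicknesses the input offers
    # (an assumed parameter).  A blanket shallower than that cannot hold
    # one of each, so the global "no duplicates" rule degrades to the
    # local "no adjacent duplicates" rule.
    constraints = ["no_adjacent_colors"]
    if depth >= levels:
        constraints += ["no_duplicate_richness", "no_duplicate_thickness"]
    else:
        constraints += ["no_adjacent_richness", "no_adjacent_thickness"]
    return constraints
```

Because each named constraint maps to its own family of linear rows, the selection stays a simple lookup rather than logic threaded through a search routine.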

A question for you, though: is the recommendation to move from thinking in terms of graph coloring to (B)ILP based solely on the addition of new constraints to improve the solution, or would you have tackled the problem with LP from the get-go? (In other words, why the paradigm shift? Or is it even a paradigm shift to you?)

Michael alerted me to this. I find this post wonderfully interesting. Here are some observations of a mathematician / graph theorist.

Existence of a solution. I’m not sure whether you count diagonally adjacent squares as adjacent. Either way, if I have a 2×2 block satisfying the constraints, then I can enlarge my grid, if necessary, to even-by-even dimensions, paste this block horizontally and vertically starting in the upper left hand corner to cover the quilt, and then shrink the quilt back to the appropriate size; the result is a quilt meeting the constraints. Your constraints are highly local and you are working on a grid which has a repetitive structure. A pattern-type solution is not only a good way to show existence, but may actually show up with high probability.
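The tiling argument is easy to check mechanically. A Python sketch (function names and the sample block are illustrative): cell (r, c) copies block[r % 2][c % 2], and two distinct cells within one step of each other can never agree in both coordinate parities, so a 2×2 block of four distinct colors survives tiling even with diagonal adjacency.

```python
def tile_quilt(block, rows, cols):
    # Paste a 2x2 block horizontally and vertically over a rows x cols quilt.
    return [[block[r % 2][c % 2] for c in range(cols)] for r in range(rows)]

def satisfies_adjacency(quilt):
    # No two adjacent cells (including diagonals) share a color.
    rows, cols = len(quilt), len(quilt[0])
    for r in range(rows):
        for c in range(cols):
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    if (dr, dc) == (0, 0):
                        continue
                    r2, c2 = r + dr, c + dc
                    if 0 <= r2 < rows and 0 <= c2 < cols and quilt[r][c] == quilt[r2][c2]:
                        return False
    return True

# Four distinct colors in the 2x2 block: the tiled quilt passes.
print(satisfies_adjacency(tile_quilt([["A", "B"], ["C", "D"]], 5, 7)))  # True
```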

It is quite possible that reasonably good upper and lower bounds for the number of solutions can be obtained by imagining that we are to fill in the quilt from upper left corner to lower right corner as if we were reading. By doing this the next square to be filled in is only constrained by the adjacent squares to the left and above this square. This is at most half the constraints we would need to worry about if we had to fill in a square surrounded on all sides by squares we have already determined. Since solutions fail to exist when even a single square can’t be filled in, we want to make sure we never have to fill in a square surrounded by already determined squares.

Now when you did a brute-force depth-first search, if you ordered your squares in a similar upper-left-to-lower-right manner, then because of the relatively low number of constraints imposed on the next square to be determined, your probability of successfully finding a solution is fairly good. This ordering of the squares, the small number of constraints, the local nature of the constraints, and the sparseness and regularity of a grid graph explain the ease with which your program identifies solutions.
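That reading-order, earlier-neighbors-only search can be sketched as a small backtracking solver (Python; the grid sizes and integer color alphabet are illustrative, not from the post):

```python
def solve(rows, cols, colors, diagonal=True):
    # Fill squares in reading order; each candidate color is checked
    # only against already-placed neighbors (left, above, and the two
    # upper diagonals), at most half of a full neighborhood.
    grid = [[None] * cols for _ in range(rows)]
    earlier = [(0, -1), (-1, 0)] + ([(-1, -1), (-1, 1)] if diagonal else [])

    def ok(r, c, color):
        for dr, dc in earlier:
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < rows and 0 <= c2 < cols and grid[r2][c2] == color:
                return False
        return True

    def fill(idx):
        if idx == rows * cols:
            return True
        r, c = divmod(idx, cols)
        for color in range(colors):
            if ok(r, c, color):
                grid[r][c] = color
                if fill(idx + 1):
                    return True
                grid[r][c] = None  # backtrack
        return False

    return grid if fill(0) else None
```

With diagonal adjacency a 2×2 patch is fully mutually adjacent, so three colors fail there while four succeed, matching the intuition that failures surface early in this ordering.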

You start your post talking about designing a random looking quilt. Since I already indicated in my existence condition that patterns might appear with high probability, if this is undesirable, you want to add constraints or a function to optimize to fight this. You might want each color appearing approximately the same number of times. So either you calculate the average number of times a color might appear and allow it to appear say 50% more than this, or your optimization function could be the product of the number of times each color is used. Of course now we are making the problem a little harder adding these global considerations and we might care to question whether we still want to use depth first brute force.
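Both anti-pattern suggestions reduce to small formulas. A Python sketch (the 50% slack value and function names are illustrative choices, not fixed by the comment):

```python
from math import ceil, prod

def color_cap(cells, num_colors, slack=0.5):
    # Upper bound on how often any one color may appear: the average
    # count plus 50% slack (slack is a tunable guess).
    return ceil(cells / num_colors * (1 + slack))

def balance_score(quilt):
    # Product of per-color counts; for a fixed number of cells this is
    # largest when the colors are used equally often.
    counts = {}
    for row in quilt:
        for color in row:
            counts[color] = counts.get(color, 0) + 1
    return prod(counts.values())

print(color_cap(63, 6))                 # 16
print(balance_score([[0, 1], [1, 0]]))  # 4
print(balance_score([[0, 0], [0, 1]]))  # 3
```

The cap version stays a linear constraint and fits an ILP directly; the product objective is non-linear, which is one reason to prefer the cap if you want to remain in (integer) linear programming.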

You end your post asking how to feed this to a graph coloring algorithm. But now I’ve opened the optimization can of worms. So what you really want to do is formulate this as an integer linear programming problem. Your constraints are linear in that the number of times a color of yarn is used in each neighborhood cannot exceed 1. I don’t know how much linear programming you’ve done before; after you’ve read the Wikipedia articles, read this paper by a group I worked with on how to write Sudoku as an LP: http://langvillea.people.cofc.edu/sudoku5.pdf . In fact, Sudoku gives a good framework for thinking about your problem.

Thanks for a nice problem,

Jim