My knitting colleague E. made the (arguably) goofy decision to refresh her crocheting skills by taking on a granny squares blanket.
It’s a great idea for using up a ton of scrap yarn.
It’s not a great idea if you enjoyed the level of sanity you had when you started.
She quickly ran into the classic self-randomizing problem: given 20 different colored yarns of different weights, how do you put 3 different ones in each square while trying to keep the colors as random as possible? Sounds easy enough, but after 15 or so squares, it gets tricky. If you’re aiming for randomization, the last thing you want is a big diagonal of purple in your blanket when you’re done.
So E. appealed to me and asked me to write her a “script” to randomize her colors. I was on board, looking forward to some Python/Django fun, before I realized that what I’d been handed was a graph coloring problem with some fun constraints. (Turns out it was easy, but fun to think through.)
Before I get into the technical bits, go make a blanket or two. Then go find some esoteric method to contact me (or comment here) and let me know what you think, especially if you run into an issue.
Given a set of yarn colors, each with one of three richnesses (light, medium, and dark) and one of three yarn weights/thicknesses (light, medium, heavy), generate a blanket x squares wide by y squares tall using n colors per square.
The optimal solution will have one of each richness per square and one of each yarn weight per square. In addition, no two adjacent yarns in different squares will be the same color, richness, or thickness. (That is, the yarn at depth 2 of square one should be different in color, richness, and thickness from the yarn at depth 2 of square two.)
If there aren’t enough different richnesses or thicknesses provided for one per square (e.g. a square that is four colors deep must have a duplicate richness and thickness), the generated blanket should try to ensure that no two adjacent yarns have the same thickness.
If not enough colors are provided for a single square (e.g. three colors for a four-deep square), the blanket should create a random pattern and apply it repetitively to the blanket. (In other words, “screw that mess.”)
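To make the spec above concrete, here’s a minimal Python sketch of what “valid” means for a single square and for a pair of adjacent squares. (The `Yarn` type and the function names are my own invention for illustration; the original script’s internals aren’t shown here.)

```python
from collections import namedtuple

# Hypothetical representation: each yarn has a color name plus
# one of three richnesses and one of three thicknesses.
Yarn = namedtuple("Yarn", ["color", "richness", "thickness"])

def square_ok(square):
    """Optimal square: no richness and no thickness repeats."""
    return (len({y.richness for y in square}) == len(square)
            and len({y.thickness for y in square}) == len(square))

def adjacent_ok(a, b):
    """Same-depth yarns in adjacent squares must differ in
    color, richness, and thickness."""
    return all(x.color != y.color
               and x.richness != y.richness
               and x.thickness != y.thickness
               for x, y in zip(a, b))
```

A square paired with itself fails `adjacent_ok`, of course, which is exactly the “big diagonal of purple” situation the constraints are there to prevent.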
I found, surprisingly, that given the other slowness inherent in a web app (network latency, page load times, etc.) and the small sizes I’d be dealing with (10×10 squares and 3-4 colors deep are common), a brute-force color search worked well enough, averaging less than 35 ms for up to 8 colors per square and less than 20 ms for 4 or fewer colors. For each color in each square, I start from the set of all colors, reduce it by eliminating colors that violate the above constraints, and pick randomly from what’s left. If there aren’t any viable colors, I pick a random color that isn’t already in the square.
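The eliminate-then-pick loop described above might look roughly like this. (A sketch only; the data representation and function names are my assumptions, not the actual script’s.)

```python
import random

def pick_yarn(all_yarns, square_so_far, neighbor_yarns_at_depth):
    """Brute-force candidate filtering: start from every yarn,
    drop the ones that violate a constraint, pick at random."""
    used_rich = {s["richness"] for s in square_so_far}
    used_thick = {s["thickness"] for s in square_so_far}
    candidates = [
        y for y in all_yarns
        if y not in square_so_far                # no repeats in a square
        and y["richness"] not in used_rich       # one of each richness
        and y["thickness"] not in used_thick     # one of each thickness
        and all(y["color"] != n["color"]         # adjacency rule, same depth
                and y["richness"] != n["richness"]
                and y["thickness"] != n["thickness"]
                for n in neighbor_yarns_at_depth)
    ]
    if candidates:
        return random.choice(candidates)
    # No viable colors: any yarn not already in the square.
    return random.choice([y for y in all_yarns if y not in square_so_far])
```

This is run once per depth per square, which is why the timings stay in the tens of milliseconds for blanket-sized grids.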
It’s not that these aren’t “will do’s”. They just haven’t been done yet.
First, the fallback for missing viable colors should be more graceful: if there are no optimal colors, I should try to find one with a different richness; if there are none of those, one with a different thickness; and only then fall back to a random color that isn’t already in the square.
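That cascade of fallbacks could be structured as an ordered list of filters, tried strictest-first. (A sketch under assumed names; the individual filter functions are left abstract here.)

```python
import random

def pick_with_fallback(all_yarns, square_so_far, neighbors, passes):
    """Try each constraint filter in order, strictest first,
    and return a random pick from the first non-empty pool."""
    for keep in passes:
        viable = [y for y in all_yarns if keep(y, square_so_far, neighbors)]
        if viable:
            return random.choice(viable)
    # Last resort: any yarn not already in the square.
    return random.choice([y for y in all_yarns if y not in square_so_far])
```

`passes` would hold the full constraint set, then a different-richness-only filter, then a different-thickness-only filter, so relaxation happens in a single ordered place rather than scattered through the search logic.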
Second is the presentation. Some tiny CSS tweaks to make errors more noticeable, highlight the current field, and so on. I also toyed with making the color entry tabular, but stuck with the current, more expanded form for accessibility. I still want to find a good way to make that work.
Lastly, I’ve never learned graph theory well, and this project could be an awesome way to rectify that. The target users of this probably won’t give a damn, but it’d be fun to pigeonhole this into an established graph coloring problem and work up a faster algorithm.
5 thoughts on “Chewing on Granny Squares”
Michael alerted me to this. I find this post wonderfully interesting. Here are some observations of a mathematician / graph theorist.
Existence of a solution. I’m not sure whether you count diagonally adjacent squares as adjacent. Either way, if I have a 2×2 block satisfying the constraints, then I can (possibly after enlarging my grid to even-by-even dimensions) start in the upper-left corner and paste this block horizontally and vertically to cover the quilt; the result is a quilt meeting the constraints, after possibly shrinking the quilt back to the appropriate size. Your constraints are highly local and you are working on a grid, which has a repetitive structure. A pattern-type solution is not only a good way to show existence, but may actually show up with high probability.
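The paste-the-2×2-block construction is easy to sketch in code. (A hypothetical helper, not from the post; square contents are whatever your representation is.)

```python
def tile(block, width, height):
    """Tile a 2x2 block that already satisfies the constraints
    across a width x height quilt by repeating it."""
    return [[block[r % 2][c % 2] for c in range(width)]
            for r in range(height)]
```

Because every constraint in the problem is between a square and its immediate neighbors, any adjacency inside the tiled quilt is an adjacency that already occurs inside (or on the wrap-around of) the 2×2 block, which is the heart of the existence argument.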
It is quite possible that reasonably good upper and lower bounds for the number of solutions can be obtained by imagining that we are to fill in the quilt from upper left corner to lower right corner as if we were reading. By doing this the next square to be filled in is only constrained by the adjacent squares to the left and above this square. This is at most half the constraints we would need to worry about if we had to fill in a square surrounded on all sides by squares we have already determined. Since solutions fail to exist when even a single square can’t be filled in, we want to make sure we never have to fill in a square surrounded by already determined squares.
Now when you did a brute force depth first search, if you ordered your squares in a similar upper left to lower right manner, then because of the relatively low number of constraints imposed on the next square to be determined, your probability of successfully finding a solution is fairly good. This ordering of the squares, the small number of constraints, the local nature of the constraints, and the sparseness and regularity of a grid graph, explain the ease with which your program identifies solutions.
You start your post talking about designing a random looking quilt. Since I already indicated in my existence condition that patterns might appear with high probability, if this is undesirable, you want to add constraints or a function to optimize to fight this. You might want each color appearing approximately the same number of times. So either you calculate the average number of times a color might appear and allow it to appear say 50% more than this, or your optimization function could be the product of the number of times each color is used. Of course now we are making the problem a little harder adding these global considerations and we might care to question whether we still want to use depth first brute force.
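The suggested objective (the product of per-color usage counts) does push toward balance: for a fixed total number of uses, a product of counts is largest when the counts are as even as possible. A tiny sketch of that score:

```python
from collections import Counter
from math import prod

def balance_score(quilt_colors):
    """Product of per-color usage counts; higher means the
    colors are spread more evenly across the quilt."""
    counts = Counter(quilt_colors)
    return prod(counts.values())
```

For example, two colors used 2 and 2 times score 4, while 3 and 1 score only 3, so an optimizer maximizing this score prefers the even split.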
You end your post asking how to feed this to a graph coloring algorithm. But now I’ve opened the optimization can of worms. So what you really want to do is formulate this as an integer linear programming problem. Your constraints are linear in that the number of times a color of yarn is used in each neighborhood cannot exceed 1. I don’t know how much linear programming you’ve done before; after you’ve read the Wikipedia articles, read this paper by a group I worked with on how to write Sudoku as an LP: http://langvillea.people.cofc.edu/sudoku5.pdf . In fact, Sudoku gives a good framework for thinking about your problem.
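As a rough illustration of that encoding, one binary variable per (square, color) pair gives linear constraints of exactly the kind described. (This is my simplified sketch, not the paper’s formulation; it ignores the per-depth structure of a real square.)

```python
def build_ilp(num_squares, num_colors, depth, adjacent_pairs):
    """Assemble dense constraint rows for a binary ILP where
    x[s*num_colors + c] = 1 iff color c appears in square s.
    Returns (eq_rows, eq_rhs, le_rows, le_rhs), ready to hand
    to any MILP solver."""
    n = num_squares * num_colors
    var = lambda s, c: s * num_colors + c

    # Equality constraints: each square uses exactly `depth` colors.
    eq_rows, eq_rhs = [], []
    for s in range(num_squares):
        row = [0] * n
        for c in range(num_colors):
            row[var(s, c)] = 1
        eq_rows.append(row)
        eq_rhs.append(depth)

    # Inequality constraints: a color appears at most once in
    # any pair of adjacent squares.
    le_rows, le_rhs = [], []
    for s, t in adjacent_pairs:
        for c in range(num_colors):
            row = [0] * n
            row[var(s, c)] = 1
            row[var(t, c)] = 1
            le_rows.append(row)
            le_rhs.append(1)
    return eq_rows, eq_rhs, le_rows, le_rhs
```

Richness and thickness constraints would add analogous rows over the same variables, which is what makes the LP framing attractive: every rule becomes one more block of rows rather than more branching logic.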
Thanks for a nice problem,
Thank you for giving me an excellent morning of reading. 🙂
I’m currently processing upper left to bottom right, depth-first. Diagonals are part of the constraints, but like you said, it almost doesn’t matter from the standpoint of the 2×2 pattern repeat.
And that high likelihood of a pattern is a problem.
I remember being introduced to linear programming in my linear algebra class and I did a smattering of non-linear modeling during my chemistry research days, but that was 4-5 years ago. The terminology and concepts were familiar, the mechanics long gone from my memory. 🙂
Anyway, Wikipedia, the Sudoku paper, and the MATLAB code were very enlightening. In thinking about applying LP to this problem, I’m not sure whether it makes more sense to have three binary decision variables (x, y, z) representing color, richness, and thickness, or a single variable x with values 0-3 for color present, richness present, thickness present, or otherwise. Three decision variables simplify the declaration of constraints, I think, but then what would my objective function be? 🙂 I’ll definitely be playing with this once I get home and get R installed.
Handling the degradation of the constraints based on the actual given input would be pretty graceful with the LP approach: if they’re asking for a 4-deep blanket, automatically swap out the “no duplicate richnesses and thicknesses” constraints for the “no adjacent richnesses and thicknesses” ones, etc. Lots of logic, but nothing actually difficult there, and having the constraints defined distinctly (rather than tangled in the logic of the brute-force algorithm) makes this all very clean and maintainable (theoretically; I haven’t done it yet).
Questions for you, though: Is the recommendation to move from thinking in terms of graph coloring to (B)ILP based solely on the addition of new constraints to improve the solution, or would you have tackled the problem with LP from the get-go? (In other words, why the paradigm shift? Or is it even a paradigm shift to you?)
One more thing: LP problems can be solved in polynomial time, whereas searching all the permutations obviously cannot.