Large tablebases
Re: Large tablebases
It seems that these Supermicro products actually have independent nodes. To inter-link the memory at high speed, one would need the IBQF versions ($500-1000 more) that support InfiniBand. However, for tablebases, as I stated above, I'm not sure that one really needs anything faster than (say) a Gigabit connection. On the plus(?) side, the independence of the nodes would make it easier to run "ponder on" games between them (I don't think numactl would quite suffice here, as one would need to worry about which program was running on the CPU that the OS was using for kernel operations, in particular hard-drive reading/caching).
Re: Large tablebases
BB+ wrote:
I might point out that, at least for pawnless positions, you can do DTM with little more effort than DTC, by always having the maximal number of pieces around and indicating captured pieces by (say) making them have the same square as the friendly/opposing king (and interpreting this in the retrograde code in the right manner, of course). Then you just solve all the subgames simultaneously with the main game. For instance, KQKR would have a position with Ke1Qd1Ke8Re8 corresponding to the Ke1Qd1Ke8 subgame. This Ke1Qd1Ke8Re8 configuration would have the Re8 ignored in retro-generation, while any White un-move would have to consider un-capturing it (e.g. wQd1-a1, with the bRe8 appearing at bRd1).

I think another way to do DTM is, for each position, to evaluate all captures (by probing subtables) and set the value of the position to that of the "best" capture. Then during iteration x < y, allow "win-in-y" to be overridden by "win-in-x" (based on a "loss-in-(x-1)" for the other side to move), and essentially treat "loss-in-y" as "at least a draw". During iteration y, check whether "loss-in-y" positions are really lost-in-y by looking at the non-capture moves.
(I'm using the "grandfather" method, not the "out-counting" method.)
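The capture-seeding idea can be sketched on an abstract game graph. This is a toy of my own, not anyone's actual generator: it uses plain value iteration rather than the grandfather (or out-counting) scheme proper, ignores stalemate, and the positions, moves, and seed values are all invented for illustration.

```python
# Toy DTM solve over an abstract game graph.  moves[p] lists the
# successor positions of p (the other side is then to move); `seed`
# holds values already obtained by probing capture subtables.
# Values are from the mover's perspective: ('W', n) win in n,
# ('L', n) mated in n, ('D',) draw.

def key(v):
    """Preference order for the side to move: fast win > draw > slow loss."""
    if v[0] == 'W':
        return (2, -v[1])
    if v[0] == 'D':
        return (1, 0)
    return (0, v[1])                       # 'L': prefer the longer loss

def solve(moves, seed):
    val = dict(seed)
    for pos, succ in moves.items():
        if pos not in val and not succ:
            val[pos] = ('L', 0)            # no moves: treated as mate
    changed = True
    while changed:                         # sweep until a fixpoint
        changed = False
        for pos, succ in moves.items():
            if not succ or pos in seed:    # seeded values stay fixed here
                continue
            cands = []
            for s in succ:
                v = val.get(s, ('D',))     # unresolved: at least a draw
                if v[0] == 'L':
                    cands.append(('W', v[1] + 1))
                elif v[0] == 'W':
                    cands.append(('L', v[1] + 1))
                else:
                    cands.append(('D',))
            best = max(cands, key=key)
            if val.get(pos) != best:
                val[pos] = best
                changed = True
    return val

# Invented example: 'c' is a mate, 'e' is a capture position whose value
# ('L', 2) came from a (pretend) subtable probe.
moves = {'a': ['b'], 'b': ['c', 'd'], 'c': [], 'd': ['a', 'e'], 'e': []}
seed = {'e': ('L', 2)}
val = solve(moves, seed)
```

A real generator would of course work over packed indices rather than a dictionary, and would iterate by depth so that "win-in-y" is only ever overridden by a shallower "win-in-x", as described above.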
Interesting, because this should allow me to adapt my (in-RAM) DTZ-generator to do DTM.
One complication is tablebases with DTM so large that 1 byte per position is not sufficient. However, that should be solvable by flushing the table to disk and collapsing all depth values. Capture positions will then have to be re-seeded by a new round of probing (assuming there are captures to positions won or lost with high enough DTM), but that seems acceptable.
Re: Large tablebases
BB+ wrote:
[Bourzutschky says in 2006 regarding krnnkbbn: "This ending could be generated with 64 GB of RAM in a few months on a fast single CPU machine and about 5 terabytes of storage. Any takers?" -- I presume he is taking the bishops to be of opposite colours, as else it would be more like twice as much memory by my accounting. ...]

Unless I misunderstood, don't you have that backwards? The memory needed to do the opposite-coloured bishops should be about twice as much as the memory needed to do the positions with same-coloured bishops (which can be done in 2 independent slices, dark and light). For N=32, there are N*N ways to place two black bishops with one on a dark square and one on a light square, but only N*(N-1)/2 ways to place them both on dark squares (and the same for placing them both on light squares). These three sets of positions are independent of each other and could be computed in separate slices: there is no move that crosses from one set into another.
Edit:
462 * 62 * 61 * (60*59/2) * (32*32) +
462 * 62 * 61 * (60*59/2) * (32*31/2) +
462 * 62 * 61 * (60*59/2) * (32*31/2)
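For concreteness, the three slice counts in the edit work out as follows (a quick check of my own; `base` just collects the non-bishop factors):

```python
# The three slices above: 462 king pairs, 62*61 for the two lone
# pieces, 60*59/2 for the like pair, and the two bishops last.
base = 462 * 62 * 61 * (60 * 59 // 2)
opposite = base * 32 * 32        # one bishop dark, one light
dark = base * (32 * 31 // 2)     # both bishops on dark squares
light = base * (32 * 31 // 2)    # both bishops on light squares
```

The opposite-coloured slice is 1024/496, i.e. about 2.06 times, the size of either same-coloured slice, which is the "about twice as much memory" claimed above.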
Re: Large tablebases
"Twice as much memory as doing all bishop pairs" is what I think I meant.I presume he is taking the bishops to be of opposite colours, as else it would be more like twice as much memory by my accounting. ..