Code:
function negamax(node, depth, α, β, color)
    alphaOrig := α

    // Transposition Table Lookup; node is the lookup key for ttEntry
    ttEntry := TranspositionTableLookup( node )
    if ttEntry is valid and ttEntry.depth ≥ depth
        if ttEntry.Flag = EXACT
            return ttEntry.Value
        else if ttEntry.Flag = LOWERBOUND
            α := max( α, ttEntry.Value )
        else if ttEntry.Flag = UPPERBOUND
            β := min( β, ttEntry.Value )
        endif
        if α ≥ β
            return ttEntry.Value
        endif
The major problem is that we can't assume the entry coming back from the TT represents the actual position we are probing. The slot could have been overwritten, even when the stored depth passes the check. I was blindly using the returned bounds. Later in my code I do verify that the move stored in the TT is legal before using it as the first move in move ordering, but by then the damage had already been done: alpha/beta had been modified before calling QSearch, for example, or even earlier in the code itself, where a score is returned for EXACT entries and on a cutoff.
The pseudocode's 'if ttEntry is valid' has a hidden meaning that can't be resolved by simply comparing hash keys.
So, now that I know the alpha/beta bounds from the hash can't be used unless we know the stored move is valid for the current position, it's back to the drawing board to move some code around.
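Roughly, what I have in mind is a probe guarded like the sketch below, in C. Every name here is a placeholder: TTEntry, Position, tt_probe and is_pseudo_legal stand in for whatever the engine actually provides, and the entry stores the full 64-bit key so an overwrite or an index collision can be caught before the bounds are trusted.

Code:
#include <stdint.h>
#include <stdbool.h>
#include <stddef.h>

typedef struct Position Position;            /* engine-specific */

typedef enum { TT_EXACT, TT_LOWERBOUND, TT_UPPERBOUND } TTFlag;

typedef struct {
    uint64_t key;    /* full hash key of the position that was stored */
    int16_t  depth;
    int16_t  value;
    TTFlag   flag;
    uint16_t move;   /* best move found when the entry was written */
} TTEntry;

/* Assumed helpers, not from the snippet above: */
uint64_t position_hash(const Position *pos);
bool     is_pseudo_legal(const Position *pos, uint16_t move);
TTEntry *tt_probe(uint64_t key);

/* Trust the entry only if the full key matches (catches overwrites and
   index collisions) and the stored move fits the current position.   */
static bool tt_usable(const Position *pos, const TTEntry *e)
{
    return e != NULL
        && e->key == position_hash(pos)
        && is_pseudo_legal(pos, e->move);
}

/* Returns 1 with *score set when the entry alone decides the node;
   otherwise only tightens alpha/beta, and only after verification.   */
static int probe_bounds(const Position *pos, int depth,
                        int *alpha, int *beta, int *score)
{
    const TTEntry *e = tt_probe(position_hash(pos));
    if (!tt_usable(pos, e) || e->depth < depth)
        return 0;                            /* nothing usable */
    if (e->flag == TT_EXACT) { *score = e->value; return 1; }
    if (e->flag == TT_LOWERBOUND && e->value > *alpha) *alpha = e->value;
    if (e->flag == TT_UPPERBOUND && e->value < *beta)  *beta  = e->value;
    if (*alpha >= *beta) { *score = e->value; return 1; }
    return 0;
}

In negamax itself this call would replace the lookup block quoted above: if probe_bounds returns 1, return the score; otherwise continue searching with the (possibly tightened) window.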
While on the subject of rearranging code, I have a question. I've seen many code examples with checks for leaf nodes, repetition, the 50-move rule, and other conditions that return before searching deeper. Is there any logic to the order of these checks? Each developer seems to do them in a different order. Take repetition, for example: if we enter a leaf node, does it even count as a repeated position when we aren't going to search any deeper? Or is the ordering done for efficiency (whichever condition is most likely to occur is checked first)?
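To make the question concrete, the shape I keep running into looks roughly like the sketch below (C again, with made-up helper names; not taken from any particular engine). As far as I can tell, the one ordering constraint that isn't just about efficiency is that repetition and the 50-move rule depend on the path to the node, so they have to be checked before any TT cutoff, since a stored score can't know how we got here.

Code:
#include <stdbool.h>

typedef struct Position Position;            /* engine-specific */

/* Assumed helpers, placeholders only: */
bool is_repetition(const Position *pos);     /* repetition along the path */
int  halfmove_clock(const Position *pos);    /* plies since pawn move/capture */
int  qsearch(Position *pos, int alpha, int beta);

enum { DRAW_SCORE = 0 };

int search(Position *pos, int depth, int alpha, int beta)
{
    /* 1. Path-dependent draw checks first: a hash cutoff taken before
          this point could return a score for a position that, on this
          particular path, is actually a repetition or 50-move draw.  */
    if (is_repetition(pos) || halfmove_clock(pos) >= 100)
        return DRAW_SCORE;

    /* 2. The (verified) TT probe would go here. */

    /* 3. Leaf check after the draw checks, so a repeated position at
          the horizon is still scored as a draw instead of searched.  */
    if (depth <= 0)
        return qsearch(pos, alpha, beta);

    /* ... generate moves, recurse with -search(...), etc. */
    return alpha;                            /* placeholder */
}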