bob wrote
Moving to OpenChess is fine.
However, be aware that there are lots of issues with what you suggest. On a normal system, you can't do things like disable interrupts, since that is a privileged instruction. On the old DOS systems you could get away with it, but not today.
As far as consistency goes, there are several things you have to deal with. Each time you run your program, you _must_ get it loaded into the same set of real memory pages, in the same order. Otherwise cache/memory aliasing will be different, which changes the speed, and eliminates the consistency. I don't want to write code to take an executable and load it into memory, set up the virtual memory mapping tables, the protection, deal with the graphics hardware to display stuff, deal with low-level I/O stuff to read and write various devices to access disks, keyboard/screen, network interface, timer interrupt to maintain time, etc.
And even if I did all of that I do not believe it is possible to get consistent results down to searching exactly the same number of nodes in a specific interval of time. And if I can't get it _exact_ then there is no reason to do it since a single node will ultimately change the game somewhere most of the time.
This is a real challenge. And I do mean _real_ challenge.
I know it would be a big task. In the past I wrote an OS in 8080 assembler for a mainframe terminal.
I do not intend to write a new OS and solve all the problems you mention. A good definition of the proper environment for the experiment is the important factor. Why deal with Unix or Windows multitasking?
Limiting the problem to DOS solves many of those tasks for me. DOS does not use the advanced instructions of modern processors, but it will still run; I checked that with my colleagues. I can install pure DOS on the disk and it shall run.
My problem is to check whether the algorithm is deterministic, so I can even limit memory to 640 kB, limit the hash table size and make its range permanent. Let's say a 64 kB hash table, one memory segment of the 8086 family.
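For illustration only, a hash table pinned to one 64 kB segment could look roughly like the sketch below; the entry layout and names are hypothetical, not taken from any particular engine:

```c
#include <stdint.h>

/* Hypothetical layout for a transposition table confined to one 64 kB
   segment of the 8086 family, as proposed above.  Entry format and
   names are illustrative only.                                        */

typedef struct {
    uint32_t key;     /* upper bits of the position's Zobrist hash   */
    int16_t  score;   /* stored evaluation                           */
    uint8_t  depth;   /* draft of the stored search                  */
    uint8_t  flag;    /* exact score, lower bound, or upper bound    */
} HashEntry;          /* 8 bytes per entry                           */

#define SEGMENT_BYTES 65536u
#define TABLE_ENTRIES (SEGMENT_BYTES / sizeof(HashEntry))   /* 8192 entries */

static HashEntry table[TABLE_ENTRIES];

/* The index depends only on the position hash, so with a fixed table
   size the same position always maps to the same slot on every run.  */
static HashEntry *probe(uint64_t zobrist_key)
{
    return &table[zobrist_key % TABLE_ENTRIES];
}
```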
The main work, done with interrupts disabled, will take place entirely in memory. That time will be counted as the time of 'thinking' on the move. Then interrupts will be enabled and time counting stopped until the next entry into the interrupt-disabled section.
That shall solve the time and memory allocation problems.
I think the time accounting for thinking on a move is the main issue.
Isolating the period used for calculating the move, and not for I/O etc., is the task.
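Setting aside the question of actually disabling interrupts, the bookkeeping itself is straightforward: accumulate time only while the engine is inside the search, and stop the clock around I/O. A minimal sketch of that idea, with placeholder names rather than any real engine's API:

```c
#include <time.h>

/* Sketch of the proposed time accounting: only the periods spent inside
   the search are added to the 'thinking' clock; I/O is not counted.    */

static clock_t think_time = 0;     /* accumulated thinking time  */
static clock_t enter_stamp;        /* set on entry to the search */

static void enter_thinking(void) { enter_stamp = clock(); }
static void leave_thinking(void) { think_time += clock() - enter_stamp; }

/* Hypothetical use in the engine's main loop:
 *
 *    enter_thinking();
 *    best_move = search(&position, depth);   -- pure computation, counted
 *    leave_thinking();
 *    read_opponent_move();                   -- I/O, not counted
 */
```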
If it can be solved in a current environment (Unix, Linux, Windows), that would be best. Maybe installing an additional timer, a clock dedicated to that engine, would be enough. It is possible for sure.
Specialized hardware chess devices are presumably built that way.
Rgds Hood
'Our genuine ...' continued.
Smolensk 2010. Murder or accident... Cui bono ?
There are no bug-free programs. There are only programs with undiscovered bugs.
Alleluia.
bob wrote
Re: 'Our genuine ...' continued.
I copied this from the other thread on this topic:
Let's re-set the stage here.
The basic problem can be summed up in two ways.
(1) if a program searches the same position for the same number of nodes, all else being the same as well, it will play the same move with the same score. But if you let it search only one node more the second time, you may get a different move or score. Or you may change something (killer moves, hash table entries, history counters, etc) that doesn't change this move, but changes the next, or the next. Several of us have run "fixed node" tests, where you tell a program to search exactly 1,000,000 nodes before it makes a move. And it will play the exact same moves, with the exact same scores, game after game after game. But if you change that to let it search 1,000,001 nodes, then most of the time, somewhere in the game, it will choose a different move. It may lead to a different result, or it might not. But the game moves will not be the same.
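For reference, a fixed-node limit of the kind described above is usually nothing more than a counter checked at every node; a minimal sketch follows, with illustrative names rather than Crafty's actual code:

```c
/* Sketch of a "fixed node" cutoff: stop after exactly node_limit nodes,
   so repeated runs of the same game visit an identical tree.  Names are
   illustrative, not taken from any real engine.                         */

static unsigned long nodes_searched = 0;
static unsigned long node_limit     = 1000000UL;   /* e.g. 1,000,000 */
static int           abort_search   = 0;

/* Called once per node visited by the search. */
static void count_node(void)
{
    if (++nodes_searched >= node_limit)
        abort_search = 1;   /* unwind and play the best move found so far */
}
```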
(2) programs use time to determine how long to search. If their NPS varies by only 1 in a 1 second search, then as above in (1) you will see a difference in the game. Crafty typically searches 25M nodes per second on my 8-core 3-year-old box, which means roughly 1 node every 40 nanoseconds. Or taken another way, if you measure time and use that to stop the search, and you are off by 40 ns, it will play a different game. That's a tiny amount of time: the time taken to handle an interrupt, or the time required for an extra cache miss due to different memory pages being used the second time around. If you can't get the timer resolution/accuracy down to considerably less than that, then complete determinism is hopeless. And since I don't want to disable interrupts and kill everything else (how can someone enter a move while interrupts are disabled, among other problems), I don't see any possible solution. If I somehow fix that problem, then how do I guarantee my program uses the same physical set of real pages each time I start it and load it into memory? Operating systems don't do that, so I get to write that code as well. And I eventually have millions of lines of code _besides_ my chess engine, to try to produce that consistency. And I will still fail, most likely, as there are still hardware issues to deal with, such as an occasional memory refresh, or a one-cycle bus delay (the hardware is highly asynchronous).
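The time-based equivalent simply swaps the node counter for a clock check, which is exactly where the 40 ns problem bites: the timer being read would need to be accurate to well under one node's worth of time. A rough sketch of the usual pattern (illustrative only, not Crafty's code):

```c
#include <time.h>

/* Sketch of the usual time-based cutoff: poll the clock every few
   thousand nodes and abort when the budget for this move is spent.
   At 25M nodes/second a node lasts about 40 ns, so timer jitter larger
   than that can add or drop a node and change the rest of the game.    */

static clock_t start_time;
static double  time_budget  = 1.0;   /* seconds allocated to this move */
static int     abort_search = 0;

static void check_time(unsigned long nodes_searched)
{
    if ((nodes_searched & 4095UL) != 0)   /* poll only every 4096 nodes */
        return;
    double elapsed = (double)(clock() - start_time) / CLOCKS_PER_SEC;
    if (elapsed >= time_budget)
        abort_search = 1;
}
```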
In short, this is a hunt for fool's gold. If you go back to a 1 MHz 8086, you might have a chance, because instead of 25M nodes per second we would be talking about maybe 1,000 nodes per second (probably optimistic), which means we would only need 1 ms accuracy in the timer, and we could likely get that. I have run on hardware that took Crafty beyond 100M nodes per second, which gets down to 10 ns per node. And it will only get worse as hardware continues to evolve.
Re: 'Our genuine ...' continued.
Thanks, that clears up the situation for me.
Fixed depth should be the better way to test that problem; there would be no fight against the OS. I hope that 1,000,001 is not a magic number.
I suppose the number of nodes to check should have something to do with the nodes added/created by one iteration of the program at a certain depth,
so adding only one node generated the noise because the iteration was cut somewhere. (By an iteration I mean one pass through all the considered moves in the position.)
Smolensk 2010. Murder or accident... Cui bono ?
There are no bug-free programs. There are only programs with undiscovered bugs.
Alleluia.
bob wrote
Re: 'Our genuine ...' continued.
1,000,001 is not magic. Any change will do the trick.
Hood wrote: Thanks, that clears up the situation for me.
Fixed depth should be the better way to test that problem; there would be no fight against the OS. I hope that 1,000,001 is not a magic number.
I suppose the number of nodes to check should have something to do with the nodes added/created by one iteration of the program at a certain depth,
so adding only one node generated the noise because the iteration was cut somewhere. (By an iteration I mean one pass through all the considered moves in the position.)
I can't use fixed depth, as it is unfair. Suppose I take program A and modify it to do more search extensions, and call the new version B. When I play A vs. B, B will likely win, because its tree search space might be much larger due to the extensions. If we were using time, it would not get to search to the same depth, but with fixed depth it will. So for consistency the idea works, but it is worthless for comparing any two programs: how do you compute "equivalent depths" so that the two programs are equal in terms of search space, even if not in terms of depth reached?
Re: 'Our genuine ...' continued.
If the target were to compare several programs it would be unfair, but for establishing how 'deterministic' the algorithm is, that is not a problem. Then playing an engine vs. engine match from a certain position, to check whether the games come out identical, will make sense too.
Thanks for your honest and instructive replies. As I am at the start of my adventure with engine programming, they are very useful to me.
Rgds Hood
Smolensk 2010. Murder or accident... Cui bono ?
There are no bug-free programs. There are only programs with undiscovered bugs.
Alleluia.