Generally speaking, an engine is probably not very good if a trivial change results in a large improvement. There is certainly no reason to think such a change would put it ahead of the rest of the field. But Rybka did achieve this, against Toga, commercial Fruit and all others, for a few years.
For the second point, Rybka 1.0 Beta and commercial Fruit are at about the same Elo level in their 32-bit versions. For the first point, the history of Fruit's development seems to imply that Letouzey made no particular effort to optimise strength in Fruit 2.1 [for one (as he told me himself), he left in various "development-oriented" rather than "performance-oriented" code -- there are other comments about the Fruit history that I made here]. In particular, from the Fruit 2.1 readme:
Although I believe I could keep on increasing strength by adding more and more eval terms, I have little interest in doing so. I would not learn anything in the process, unless I develop new tuning/testing techniques. Ideally I would like to spend more time in alternative software, like my own GUI perhaps (specific to engine testing/matches).
So if you conclude from this that Fruit was "not very good" and thus rather susceptible to improvement, I guess I would agree. As I noted in the post linked above, Thomas Gaksch gained ~125 Elo over Fruit 2.1 while working essentially on a "hobby" basis.
In software engineering, lines of code is considered a poor measure of programmer performance.
The question at hand is not about programmer performance. I should think that "lines of code" would be one measure of originality, which is more relevant here.
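To illustrate the sort of crude metric I mean [the file names here are hypothetical, and any serious comparison would have to work at a semantic level -- or, for Rybka, at the ASM level -- rather than by diffing text], a minimal sketch:

```python
# Crude token-level overlap between two source files (hypothetical names).
# This only illustrates "lines of code" as a similarity/originality metric;
# the actual Rybka/Fruit analysis worked at a semantic/ASM level.
import difflib

def overlap_ratio(path_a, path_b):
    with open(path_a) as fa, open(path_b) as fb:
        tokens_a = fa.read().split()
        tokens_b = fb.read().split()
    # ratio(): 0.0 = disjoint token streams, 1.0 = identical
    return difflib.SequenceMatcher(None, tokens_a, tokens_b).ratio()

if __name__ == "__main__":
    print(overlap_ratio("fruit_eval.cpp", "rybka_eval.cpp"))
```

Of course, such a number by itself settles nothing; it only gives a first-order sense of how much material is shared, which is why I say it would be one measure of originality, not the measure.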
At any rate, I hope you realize that you must subject other ICGA contestants to the same BB+ -style derivation analysis in order to obtain a prior for the judgment concerning Rybka...
I can't agree with the "must" here, as it will depend on the direction the ICGA process takes. If an issue is raised as to whether the Rybka-Fruit overlap could fall under "accepted practices" in the field (and who knows -- Rajlich might get a number of programmers to sign a letter stating this, for all I know), then I agree that such an analysis could be useful.
But outside of something like this, and without a specific complaint against any other contestant, I don't see any pressing reason to undertake such an analysis. I would expect that most CC programmers are already capable of stating whether they think the current information on the Rybka/Fruit overlap is sufficient to render Rybka 1.0 Beta non-original, and I don't think a Fruit/XYZ analysis would shed any additional light on this originality question [whether or not it would help the public understand the issues better is a different matter -- also, it seems to me that having XYZ be Crafty (or Stockfish, though it's never competed in an ICGA event) would suffice].
I think that using HIARCS or Shredder for this purpose would be an excellent idea. Of course, it's not my time...
Doing a complete ASM-based analysis (with no "Osipov code" to follow) would be a major pain. But I suspect that either Uniacke or Meyer-Kahlen would agree to allow some independent ICGA inspector to see their source code if it came to that [whether the "source code" would be the prime item of study, or merely a guide to an ASM-based study (so as to try to replicate the conditions with Rybka), does not seem to me to matter much, but outsiders seem to perceive some distinction between the two]. It would also likely be necessary to redact most of the specifics of such an analysis, which would mean that the "public" would be reliant on the expert opinion of the investigator [this last point is again not crucial to me, though I can't imagine the nitpickers on various fora would fail to jump on it].