AZ also exhibited this behavior; besides, we are testing our approach right now. Please be patient.
Why is the network being trained only 5 or 6 blocks, compared to the 20 blocks of AZ
In the early stage of the project, a smaller network produces results quickly and lets us find and fix problems early; the main goal right now is to test that the whole system is feasible, which lays the groundwork for a full-scale run with a larger network later.
This is effectively a testing run to see whether the system works and which things matter for doing a full run. I expected 10 to 100 people to run the client, not 600.
Even so, the 20 block version is about 13 times more computationally expensive, and would be expected to make SLOWER progress early on. I think it is unwise to attempt such a run before the setup has been proven to work, because you would be in for a very long haul.
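As a rough illustration of where the "13 times" comes from: the compute cost of the residual tower scales roughly with blocks x filters^2. Assuming the small network is 6 blocks of 128 filters and the AZ network is 20 blocks of 256 filters (these sizes are assumptions for this sketch, not a profile), the ratio comes out to about 13x:

```python
# Rough cost comparison of the residual towers (a sketch, not a measurement).
# Assumption: small net = 6 blocks x 128 filters, AZ net = 20 blocks x 256 filters;
# per-block cost scales with filters^2 for the 3x3 convolutions.

def tower_cost(blocks, filters):
    """Relative compute of the residual tower (two filters x filters convs per block)."""
    return blocks * filters ** 2

small = tower_cost(6, 128)
big = tower_cost(20, 256)
print(f"20x256 / 6x128 cost ratio: {big / small:.1f}x")  # ~13.3x
```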
Why does the comparison between two networks often stop after only a dozen or so games
We use SPRT (a sequential probability ratio test) to decide whether a newly trained network is better. A new network is promoted only if SPRT is 95% confident that it has at least a 55% win rate (roughly 35 Elo) over the previous best network; since the test is sequential, the match stops as soon as it reaches a conclusion either way.
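A minimal sketch of the kind of SPRT gate described above, assuming the 0 vs 35 Elo hypotheses and the 5% error rates implied by the 95%/55% figures (the bound constants and Elo-to-win-rate conversion here are illustrative, not the exact server code):

```python
# Minimal SPRT sketch (parameters assumed; not the exact server implementation).
# H0: the new net is equal to the old one (0 Elo); H1: it is ~35 Elo (55%) stronger.
from math import log

def elo_to_winrate(elo):
    return 1.0 / (1.0 + 10.0 ** (-elo / 400.0))

def sprt_status(wins, losses, elo0=0.0, elo1=35.0, alpha=0.05, beta=0.05):
    """Return 'accept' (promote the new net), 'reject', or 'continue'."""
    p0, p1 = elo_to_winrate(elo0), elo_to_winrate(elo1)
    # Log-likelihood ratio of the observed wins/losses under H1 vs H0 (no draws in Go with 7.5 komi).
    llr = wins * log(p1 / p0) + losses * log((1.0 - p1) / (1.0 - p0))
    lower, upper = log(beta / (1.0 - alpha)), log((1.0 - beta) / alpha)
    if llr >= upper:
        return "accept"
    if llr <= lower:
        return "reject"
    return "continue"

# A lopsided result crosses a bound early, so the match can stop well before 400 games:
print(sprt_status(wins=35, losses=2))  # 'accept'
```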
Why do the games generated during self-play contain quite a few bad moves
Self-play games use only 1000 MCTS playouts per move, and noise is added to randomize the moves so that training has something to learn from. If you load Leela Zero in Sabaki, you will probably find it is actually not that weak.
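The noise in question is AlphaGo-Zero-style Dirichlet noise mixed into the root priors. A minimal sketch, assuming the AZ paper's parameters (epsilon = 0.25, alpha = 0.03), which may not match Leela Zero's exact settings:

```python
# Sketch of AlphaGo-Zero-style root noise (parameters assumed: eps=0.25, alpha=0.03).
# Mixing Dirichlet noise into the root priors makes self-play non-deterministic,
# so training sees a variety of positions instead of one fixed game.
import numpy as np

def add_root_noise(priors, epsilon=0.25, alpha=0.03):
    """Return priors mixed with Dirichlet noise; priors is a 1-D array summing to 1."""
    noise = np.random.dirichlet([alpha] * len(priors))
    return (1.0 - epsilon) * priors + epsilon * noise

priors = np.full(362, 1.0 / 362)   # uniform over 361 board points plus pass
noisy = add_root_noise(priors)
print(noisy.sum())                 # still ~1.0, but a few moves get a large boost
```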
For self-play, why use 1000 playouts instead of the 1600 playouts of AZ
Nobody knows. The Zero paper doesn't mention how they arrived at this number, and I know of no sound way to estimate the optimum. I chose 1000 based on some observations:
a) For the MCTS to feed search probabilities back to the learning, it must be able to achieve a reasonable amount of look-ahead on at least a few variations. In the beginning, when the network is untrained, the move probabilities are not very extreme, which means the first ~360 simulations will be spent expanding every answer at the root. So if we want to look a few moves deep on at least some candidates, we probably need 2 to 3 x 360 playouts (the arithmetic is sketched below).
b) One person on computer-go, who ran a similar experiment on 7x7, reported that near the end of the learning, he observed increased performance from increasing the number from 1000 to 2000. So maybe this is worthwhile to try when the learning speed starts to decrease or flatten out. But it almost certainly isn't needed early on.
c) Obviously, the cost of generating each game grows linearly with this setting, so the speed of acquiring data drops accordingly.
So, the current number is a best guess based on these observations. To be sure what the best value is, one would have to rerun this experiment several times.
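As referenced in point (a), the back-of-the-envelope arithmetic behind the 2 to 3 x 360 figure (purely illustrative):

```python
# Rough playout-budget arithmetic for point (a) above (illustrative only).
root_children = 19 * 19          # roughly every board point gets expanded at the root
visits_per_child = (2, 3)        # we want promising children revisited a few times

for v in visits_per_child:
    print(f"{v} visits per child -> about {v * root_children} playouts")
# ~720 to ~1080 playouts, hence the choice of 1000.
```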
This is expected. Due to the randomness of self-play games, once Black chooses to pass at the beginning, there is a big chance that White will pass too (the 7.5 komi gives White the win). See issue #198 for a detailed explanation.
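For intuition, a toy scoring check, assuming area scoring with 7.5 komi: if both players pass immediately, the board is empty, both areas are zero, and White wins by the komi alone.

```python
# Toy illustration (assumed area scoring with komi 7.5): if both players pass
# immediately, nothing is on the board, both areas are 0, and White wins by komi.
def score_empty_board(komi=7.5):
    black_area, white_area = 0, 0            # nothing was played
    margin = black_area - (white_area + komi)
    return f"W+{-margin}" if margin < 0 else f"B+{margin}"

print(score_empty_board())                   # W+7.5
```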