When the authors tested prospective configuration in artificial neural networks, they found that the networks learned in a much more human-like way: more robustly and with less training than models trained with backprop.