Authors: Matthew Botvinick, Misha Denil, Nando de Freitas, Yutian Chen, Matthew W. Hoffman
DOI:
Keywords:
Abstract: We learn recurrent neural network optimizers trained on simple synthetic functions by gradient descent. We show that these learned optimizers exhibit a remarkable degree of transfer in that they can be used to efficiently optimize a broad range of derivative-free black-box functions, including Gaussian process bandits, control objectives, global optimization benchmarks and hyper-parameter tuning tasks. Up to the training horizon, the learned optimizers learn to trade off exploration and exploitation, and compare favourably with heavily engineered Bayesian optimization packages for hyper-parameter tuning.
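To make the setup in the abstract concrete, below is a minimal sketch (not the authors' code) of an RNN optimizer: an LSTM that, given its previous query point and the observed black-box function value, proposes the next query, and whose parameters are trained by ordinary gradient descent on simple synthetic objectives. The class and parameter names (`RNNOptimizer`, `hidden_size`, the choice of random quadratics as synthetic training functions, the horizon, and the sum-of-observed-values loss) are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of an RNN black-box optimizer, assuming PyTorch.
# At each step the LSTM consumes (previous query x, observed value f(x))
# and emits the next query point; at test time only function values are needed.
import torch
import torch.nn as nn


class RNNOptimizer(nn.Module):
    """LSTM that maps (previous query, observed value) -> next query point."""

    def __init__(self, dim: int, hidden_size: int = 32):
        super().__init__()
        self.cell = nn.LSTMCell(dim + 1, hidden_size)
        self.head = nn.Linear(hidden_size, dim)
        self.dim = dim
        self.hidden_size = hidden_size

    def forward(self, f, horizon: int = 20):
        """Roll out `horizon` queries against a black-box function f."""
        h = torch.zeros(1, self.hidden_size)
        c = torch.zeros(1, self.hidden_size)
        x = torch.zeros(1, self.dim)          # initial query point
        values = []
        for _ in range(horizon):
            y = f(x).reshape(1, 1)            # observe the function value at x
            h, c = self.cell(torch.cat([x, y], dim=-1), (h, c))
            x = self.head(h)                  # propose the next query point
            values.append(y.squeeze())
        return torch.stack(values)


# Illustrative training loop: random quadratics stand in for the simple
# synthetic training functions mentioned in the abstract (an assumption,
# not the paper's actual training distribution).
def random_quadratic(dim: int):
    center = torch.randn(dim)
    return lambda x: ((x - center) ** 2).sum(dim=-1)


opt_net = RNNOptimizer(dim=2)
meta_opt = torch.optim.Adam(opt_net.parameters(), lr=1e-3)
for step in range(200):
    f = random_quadratic(2)
    loss = opt_net(f).sum()   # sum of observed values along the query trajectory
    meta_opt.zero_grad()
    loss.backward()           # gradient descent on the optimizer's own parameters
    meta_opt.step()
```

Once trained this way, the network can be applied to a new objective simply by calling `opt_net(new_f)`, using only function evaluations of `new_f`, which is the sense in which the learned optimizer is derivative-free at deployment time.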