Faster and more diverse de novo molecular optimization with double-loop reinforcement learning using augmented SMILES

10/22/2022
by Esben Jannik Bjerrum, et al.

Molecular generation via deep learning models in combination with reinforcement learning is a powerful way of generating proposed molecules with desirable properties. By defining a multi-objective scoring function, it is possible to generate thousands of ideas for molecules that score well, which makes the approach interesting for drug discovery or materials science purposes. However, if the scoring function is expensive in terms of resources such as time or computation, the high number of function evaluations needed for feedback in the reinforcement learning loop becomes a bottleneck. Here we propose to use double-loop reinforcement learning with simplified molecular-input line-entry system (SMILES) augmentation to use scoring calculations more efficiently and arrive at well-scoring molecules faster. By adding an inner loop in which the generated SMILES strings are augmented to alternative non-canonical SMILES and used for additional rounds of reinforcement learning, we can effectively reuse the scoring calculations that are done at the molecular level. This approach speeds up the learning process with respect to scoring-function calls and also offers moderate protection against mode collapse. We find that between 5 and 10 augmentation repeats appear safe for most scoring functions; the repeats additionally increase the diversity of the generated compounds and make the sampling runs of chemical space more reproducible.
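The key idea is that many SMILES strings encode the same molecule, so a score computed once per molecule can be reused for every augmented string in the inner loop. Below is a minimal sketch of such SMILES augmentation using RDKit's `doRandom` option; the helper name `augment_smiles` and the commented agent-update loop are illustrative assumptions, not the authors' code.

```python
from rdkit import Chem

def augment_smiles(smiles: str, n_variants: int = 5, max_tries: int = 100) -> list:
    """Enumerate alternative (non-canonical) SMILES for one molecule.

    Every variant encodes the same molecule, so one expensive scoring
    call can be shared across all variants in the inner RL loop.
    """
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return []
    variants = set()
    tries = 0
    # Small molecules may have few distinct SMILES, so cap the attempts.
    while len(variants) < n_variants and tries < max_tries:
        # doRandom=True starts the atom traversal at a random atom,
        # producing a random valid (generally non-canonical) SMILES.
        variants.add(Chem.MolToSmiles(mol, canonical=False, doRandom=True))
        tries += 1
    return sorted(variants)

# Hypothetical double-loop usage, reusing one scoring call per molecule:
# score = expensive_scoring_function(smiles)      # outer loop: score once
# for aug in augment_smiles(smiles, n_variants=10):
#     agent.update(aug, score)                    # inner loop: no new scoring
```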
