Backprop is used to train deep neural networks with remarkable success. A deep neural network, on the other hand, can be seen as a specific kind of computer function that receives inputs and produces outputs. Suppose, though, that one wanted to train functions written in a programming language like Python or Haskell to solve a specific problem. Since these objects are discrete, there is obviously no way to apply backprop directly to generate them. My question is: is there any algorithm that succeeds reasonably well at training or generating such functions?
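To make the question concrete, the only approach I can see working naively is exhaustive enumeration: search through programs in a tiny expression grammar until one fits a set of input-output examples. Below is a toy sketch I wrote purely for illustration (the grammar, function names, and depth bound are all made up, not any standard algorithm):

```python
# Toy illustration only: "synthesize" a small arithmetic function from
# input-output examples by brute-force enumeration over a tiny, made-up
# expression grammar (variable x, constants 0-2, +, *).
import itertools

def exprs(depth):
    """Yield (source_string, callable) for every expression of depth <= depth."""
    yield ("x", lambda x: x)
    for c in (0, 1, 2):
        yield (str(c), lambda x, c=c: c)
    if depth == 0:
        return
    subs = list(exprs(depth - 1))
    for (ls, lf), (rs, rf) in itertools.product(subs, repeat=2):
        yield (f"({ls} + {rs})", lambda x, lf=lf, rf=rf: lf(x) + rf(x))
        yield (f"({ls} * {rs})", lambda x, lf=lf, rf=rf: lf(x) * rf(x))

def synthesize(examples, max_depth=2):
    """Return the first enumerated program consistent with all the examples."""
    for src, f in exprs(max_depth):
        if all(f(x) == y for x, y in examples):
            return src
    return None

# Target behaviour f(x) = x*x + 1, specified only through examples:
print(synthesize([(1, 2), (2, 5), (3, 10)]))  # prints "(1 + (x * x))"
```

Enumeration like this blows up combinatorially with program depth, so it is obviously not a "training procedure" in any useful sense, which is exactly why I am asking what, if anything, plays the role of gradient descent in this discrete setting.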
The last time I had this question, I learned about program synthesis, but the field seems to lag far behind machine learning, in particular due to the lack of satisfactory training procedures. What is the current state of the art, and where can I find the relevant literature?