Are there neural machine translation methods that, for one input sentence, output multiple alternative sentences in the target language? It is quite possible that a sentence in the source language has multiple meanings, and it is not desirable for the network to discard some of those meanings when no context for disambiguation is provided. How can multiple outputs be accommodated in the encoder-decoder architecture, or is a different architecture required?
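To illustrate the kind of output I mean: standard beam search can already return several candidate translations, though in practice these tend to be minor lexical variants rather than genuinely distinct readings of an ambiguous source. A minimal sketch using the Hugging Face transformers library (the checkpoint name and the example sentence are just illustrative assumptions on my part):

```python
# Sketch: beam search returning several candidate translations
# with Hugging Face transformers and a MarianMT checkpoint.
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-de"  # illustrative en->de checkpoint
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

inputs = tokenizer("The bank was steep.", return_tensors="pt")  # "bank" is ambiguous
outputs = model.generate(
    **inputs,
    num_beams=8,
    num_return_sequences=4,  # return the 4 highest-scoring hypotheses
)
for out in outputs:
    print(tokenizer.decode(out, skip_special_tokens=True))
```

My question is whether there are methods that go beyond this, i.e. that are trained or decoded so that the returned hypotheses cover the distinct *meanings* of the source sentence, not just surface variation of one meaning.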
I am aware of only one work, https://arxiv.org/abs/1805.10844 (and one reference therein), but I am still digesting whether their network outputs multiple sentences or whether it just accommodates variation during the training phase.