It seems to me that Seq2Seq models and Bidirectional RNNs try to do the same thing. Is that true?
Also, when would you recommend one setup over another?
Seq2Seq models and Bidirectional RNNs do not do the same thing, at least not in their classic forms.
Seq2Seq models are used to generate one sequence from another sequence; machine translation from one language to another is the canonical example. In that sense, Seq2Seq is more a family of models (or a task setup) than a specific architecture.
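To make that concrete, here is a minimal PyTorch sketch of a plain encoder-decoder Seq2Seq model (the class and argument names like `Seq2Seq`, `src_vocab`, `tgt_vocab` are just mine for illustration, not from any particular library):

```python
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    """Encoder-decoder: compress the source sequence, then generate the target."""
    def __init__(self, src_vocab, tgt_vocab, emb=128, hidden=256):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, emb)
        self.tgt_emb = nn.Embedding(tgt_vocab, emb)
        self.encoder = nn.GRU(emb, hidden, batch_first=True)
        self.decoder = nn.GRU(emb, hidden, batch_first=True)
        self.out = nn.Linear(hidden, tgt_vocab)

    def forward(self, src_ids, tgt_ids):
        # Encode the source sentence into a final hidden state.
        _, h = self.encoder(self.src_emb(src_ids))
        # Decode the (shifted) target conditioned on that state (teacher forcing).
        dec_out, _ = self.decoder(self.tgt_emb(tgt_ids), h)
        return self.out(dec_out)  # (batch, tgt_len, tgt_vocab) logits

model = Seq2Seq(src_vocab=8000, tgt_vocab=8000)
src = torch.randint(0, 8000, (4, 12))  # 4 source sentences of length 12
tgt = torch.randint(0, 8000, (4, 10))  # 4 shifted target sentences of length 10
logits = model(src, tgt)               # torch.Size([4, 10, 8000])
```

The defining feature is the output: a new sequence, produced token by token by the decoder.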
A Bidirectional RNN, on the other hand, is a neural network architecture, i.e. a building block, and it can be used inside many models, including Seq2Seq ones: the encoder of a Seq2Seq model can be a Bi-RNN, for instance. But Bi-RNNs are also used for tasks that involve no generation at all, such as sentence classification or sentiment analysis. Note that a Bi-RNN needs the whole input sequence up front (it reads it in both directions), which is why it fits the encoder or a classifier but not the decoder, which must generate tokens left to right.
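Here is a sketch of that second use, a Bi-RNN as a sentence classifier (again, names like `BiRNNSentenceClassifier` are my own, hypothetical ones). Making the encoder in the previous sketch bidirectional would be similar: set `bidirectional=True` on the encoder GRU and combine the two directions' final states before handing them to the decoder.

```python
import torch
import torch.nn as nn

class BiRNNSentenceClassifier(nn.Module):
    """Bidirectional GRU used for classification rather than generation."""
    def __init__(self, vocab, n_classes, emb=128, hidden=256):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        self.birnn = nn.GRU(emb, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_classes)  # forward + backward states

    def forward(self, token_ids):
        _, h = self.birnn(self.emb(token_ids))  # h: (2, batch, hidden)
        # Concatenate the final forward and backward hidden states.
        sentence_repr = torch.cat([h[0], h[1]], dim=-1)
        return self.head(sentence_repr)         # (batch, n_classes) logits

clf = BiRNNSentenceClassifier(vocab=8000, n_classes=3)
scores = clf(torch.randint(0, 8000, (4, 12)))   # torch.Size([4, 3])
```

So the answer to "when would you recommend one over the other" is that they are not alternatives: pick Seq2Seq when the output is itself a sequence, and consider a Bi-RNN as a component (encoder or feature extractor) whenever the full input is available at once.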