On the website, the following explanation of the Embedding layer is provided:
The Embedding layer is initialized with random weights and will learn an embedding for all of the words in the training dataset.
It is a flexible layer that can be used in a variety of ways, such as:
- It can be used alone to learn a word embedding that can be saved and used in another model later.
- It can be used as part of a deep learning model where the embedding is learned along with the model itself.
- It can be used to load a pre-trained word embedding model, a type of transfer learning.
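For the third use case, here is how I picture it, as a minimal sketch assuming tensorflow.keras; the pre-trained matrix below is a random stand-in for real vectors such as GloVe:

```python
import numpy as np
from tensorflow.keras.initializers import Constant
from tensorflow.keras.layers import Embedding

# Hypothetical pre-trained matrix: one 8-dimensional vector per
# word in a 1000-word vocabulary (random here, for illustration).
vocab_size, embedding_dim = 1000, 8
pretrained = np.random.rand(vocab_size, embedding_dim)

# Initialize the layer from the pre-trained vectors and freeze it,
# so training does not overwrite the loaded embedding.
embedding = Embedding(
    input_dim=vocab_size,
    output_dim=embedding_dim,
    embeddings_initializer=Constant(pretrained),
    trainable=False,
)
```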
Aren't embeddings model-specific? I mean, to learn a representation of something, we need the model in which that representation was learned! So how can embeddings learned in one model be used in another?