I am getting started with vector embeddings and have a general question about the embedding vectors generated by popular models.
In PCA, there is usually an implicit order of importance among the dimensions: the most informative dimension (the one with the largest eigenvalue) comes first, and the least informative comes last. Is there a similar property in the embedding vectors produced by models like Sentence Transformers, the OpenAI Embeddings API, Google's PaLM models, etc.?
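To make the PCA side of the comparison concrete, here is a small NumPy sketch (the data is made up, with variance deliberately concentrated in the first axes) showing that the principal components come out ordered by eigenvalue, i.e. by explained variance:

```python
import numpy as np

# Hypothetical data: 200 samples in 5 dimensions, with variance
# deliberately concentrated in the first original axes.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5)) * np.array([5.0, 3.0, 1.0, 0.5, 0.1])

# PCA via the covariance eigendecomposition: each eigenvalue is the
# variance captured by the corresponding principal component.
Xc = X - X.mean(axis=0)
cov = np.cov(Xc, rowvar=False)
eigvals = np.linalg.eigvalsh(cov)[::-1]  # sorted in descending order

print(eigvals)  # monotonically non-increasing: first dimension is most informative
```

Embedding-model outputs, by contrast, give no such per-dimension ordering guarantee unless the documentation explicitly states one.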