For example, we suspect that for some retrieval-augmented LLMs, the generated content may lack creativity; recent work has explored the inability of retrieval-augmented methods to enhance the reasoning of LLMs. Now imagine a scenario in which we prompt ChatGPT to perform an information extraction task, then continue the same session and ask it to answer a question for us (the task thus shifts from information extraction to open-domain QA). Will the LLM retain the same ability to answer the question?
So a natural question arises: now that prompting has become a paradigm for aligning LLMs with human needs, how can we eliminate the impact of a previous prompt and restore the LLM to its original behavior?
But perhaps a more economical strategy is simply to open a new session and give the LLM a prompt tailored to open-domain QA.
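To make the two strategies concrete, here is a minimal sketch assuming the OpenAI Python client (openai>=1.0); the model name, system prompts, and example texts are hypothetical placeholders, not a prescription. The point is only that in the first strategy the earlier information extraction prompt remains in the context window, while in the second it does not.

```python
# Contrasting "continue the session" vs. "open a new session",
# assuming the OpenAI Python client (openai>=1.0). Model name and
# prompts below are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
MODEL = "gpt-3.5-turbo"  # any chat model would do here


def chat(messages):
    """Send a message list and return the assistant's reply text."""
    response = client.chat.completions.create(model=MODEL, messages=messages)
    return response.choices[0].message.content


# Strategy 1: continue the same session. The information extraction
# prompt stays in the context and may bias the later QA answer.
history = [
    {"role": "system",
     "content": "Extract (subject, relation, object) triples from the user's text."},
    {"role": "user",
     "content": "Marie Curie won the Nobel Prize in Physics in 1903."},
]
history.append({"role": "assistant", "content": chat(history)})
# The task now switches to open-domain QA within the same session.
history.append({"role": "user", "content": "Who discovered radium?"})
answer_same_session = chat(history)

# Strategy 2: open a fresh session with a QA-oriented prompt, so no
# residue from the extraction task remains in the context.
fresh = [
    {"role": "system", "content": "Answer the user's question concisely."},
    {"role": "user", "content": "Who discovered radium?"},
]
answer_new_session = chat(fresh)
```

Comparing `answer_same_session` and `answer_new_session` over many such task switches would be one simple way to probe how much a lingering prompt actually distorts the model's behavior on the new task.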