I'm currently wrestling with an issue using the OpenAI npm package for API calls. My setup sends prompts, user history, and questions along with parameters like temperature, top_p, and frequency_penalty. Strangely, the API's responses don't hold conversational context nearly as well as the same prompts do in the OpenAI console.
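For reference, here's a minimal sketch of how I'm assembling the request (model name, prompts, and the `buildMessages`/`ask` helper names are just placeholders for my actual code). The Chat Completions API is stateless, so I pass the full prior conversation on every call:

```javascript
// Sketch of carrying conversation context with the openai npm package
// (v4-style API). Names and values here are illustrative placeholders.

function buildMessages(systemPrompt, history, question) {
  // The API is stateless: every request must include the whole conversation.
  return [
    { role: "system", content: systemPrompt },
    ...history, // prior { role: "user" | "assistant", content } turns
    { role: "user", content: question },
  ];
}

async function ask(question, history = []) {
  // Lazy-load the SDK; the client reads OPENAI_API_KEY from the environment.
  const { default: OpenAI } = await import("openai");
  const openai = new OpenAI();
  const completion = await openai.chat.completions.create({
    model: "gpt-4o-mini", // placeholder model name
    messages: buildMessages("You are a helpful assistant.", history, question),
    temperature: 0.7,
    top_p: 1,
    frequency_penalty: 0,
  });
  return completion.choices[0].message.content;
}

module.exports = { buildMessages, ask };
```

After each reply I append both the user turn and the assistant turn to `history` before the next call, so the context window keeps growing until I eventually have to truncate older turns.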
Has anyone else encountered this and found solutions? How do you ensure coherent conversation context and top-notch responses? Any tips to share?
Eager to hear your experiences and insights!