I'm currently wrestling with an issue in the OpenAI npm package for API calls. My setup sends a prompt, the user's history, and the new question, with parameters like temperature, top_p, and frequency_penalty. Strangely, the API's responses lack the context-holding finesse I get when I run the same prompts in the OpenAI console.
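
For reference, here is a stripped-down sketch of roughly what my calls look like (v4 of the openai npm package; the model name and parameter values are placeholders, not my exact settings):

```typescript
import OpenAI from "openai";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

// One request: the prior turns plus the new question, with the
// sampling parameters mentioned above.
async function ask(history: ChatMessage[], question: string): Promise<string> {
  const completion = await openai.chat.completions.create({
    model: "gpt-3.5-turbo", // placeholder model name
    temperature: 0.7, // illustrative values, not my real settings
    top_p: 1,
    frequency_penalty: 0,
    messages: [...history, { role: "user", content: question }],
  });
  return completion.choices[0].message.content ?? "";
}
```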

Has anyone else encountered this and found solutions? How do you ensure coherent conversation context and top-notch responses? Any tips to share?

Eager to hear your experiences and insights!

  • This could be as simple as maintaining a conversation id between sessions, and separating conversations as appropriate. You may get better practical answers if you use Stack Overflow and share some of your API call and data model logic. – Neil Slater Aug 25 '23 at 12:23
  • The ChatGPT UI is probably using the previous prompts (maybe not all of them, but at least the previous one) as additional inputs when you send a new prompt. You can easily do the same with the API because you can send multiple messages in one request (see the sketch after these comments). Anyway, this question is off-topic here because it is about a product/tool, not an AI theory question, although one could also argue that there's some theory to be clarified here. Ask it on Data Science SE or Stack Overflow. – nbro Aug 27 '23 at 01:55
  • @nbro let me kindly suggest that you exercise some caution when suggesting alternative SE sites; the question would certainly be (way) off-topic on Stack Overflow, which is not among the general suggestions in the closing reason you seem to have chosen here (however indicative and non-exhaustive those suggestions may be). – desertnaut Aug 28 '23 at 00:47
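
Following up on nbro's point about sending the previous messages along with each new prompt, here is a minimal sketch of that pattern with the openai npm package (v4; the model name and system prompt are illustrative assumptions):

```typescript
import OpenAI from "openai";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

// The API is stateless: every request must carry whatever context the
// model should "remember", so the whole conversation lives in an array.
const history: ChatMessage[] = [
  { role: "system", content: "You are a helpful assistant." }, // assumed prompt
];

async function ask(question: string): Promise<string> {
  history.push({ role: "user", content: question });

  const completion = await openai.chat.completions.create({
    model: "gpt-3.5-turbo", // placeholder model name
    messages: history,
  });

  const reply = completion.choices[0].message.content ?? "";
  // Record the assistant's turn so the next request includes it too.
  history.push({ role: "assistant", content: reply });
  return reply;
}

// Usage: the second question can refer back to the first turn.
// await ask("My name is Ada.");
// await ask("What is my name?");
```

In practice you would also trim or summarize older turns so the accumulated history stays within the model's context window.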

0 Answers