So I have created a chatbot using GPT-3.5 Turbo. I have a vector database that holds vector embeddings of brands, ratings, commission percentages, outlets, tags, etc. Here's how the system is designed:

  1. The user asks a question.
  2. The question is converted to a vector embedding using the GPT-3 Ada embedding model.
  3. The query vector is searched against the Milvus vector database using the L2 distance metric.
  4. The fetched content is used to build a custom knowledge context.
  5. This custom context is added to the prompt for GPT-3.5.
  6. We send the assembled prompt to GPT-3.5 Turbo and answer the user.
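The steps above can be sketched as follows. This is a toy stand-in, not the real system: the embedding function and the document store are mocked (the real pipeline calls the OpenAI embedding API and Milvus), but the L2 nearest-neighbour step works the same way.

```python
import numpy as np

# Mocked document store: text chunks mapped to (fake) embedding vectors.
# In the real system these vectors come from the Ada model and live in Milvus.
docs = {
    "Pizza Palace: pizza brand, 4.2 rating, 12% commission": np.array([1.0, 0.1]),
    "Burger Barn: burger brand, 4.5 rating, 8% commission":  np.array([0.1, 1.0]),
}

def embed(text):
    # Hypothetical embedder standing in for the Ada embedding model (step 2).
    return np.array([1.0, 0.0]) if "pizza" in text.lower() else np.array([0.0, 1.0])

def l2_search(query_vec, k=1):
    # Step 3: L2 (Euclidean) distance search, as Milvus does with metric_type="L2".
    scored = sorted(docs.items(), key=lambda kv: np.linalg.norm(kv[1] - query_vec))
    return [doc for doc, _ in scored[:k]]

def build_prompt(question):
    context = "\n".join(l2_search(embed(question)))          # steps 2-4
    return f"Context:\n{context}\n\nQuestion: {question}"    # step 5
    # Step 6 would send this prompt to GPT-3.5 Turbo.
```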

The problem: the system works well, but it is built entirely on similarity search. Users can ask for brands selling pizza, burgers, etc., and anything keyword-based is fine. But when users ask for the highest commission, the highest rating, or the highest-rated pizza brand, the results are essentially random, because they are driven by embedding similarity rather than logical reasoning.
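The underlying issue is that "highest commission pizza brand" is an argmax over structured fields, not a nearest-neighbour problem. With the attributes stored as plain metadata (brand names and values below are made up for illustration), the correct answer is a filter plus a sort, which no distance metric can express:

```python
# Hypothetical structured metadata mirroring what is embedded in the vectors.
brands = [
    {"name": "Pizza Palace", "tags": ["pizza"],  "commission": 12, "rating": 4.2},
    {"name": "Slice City",   "tags": ["pizza"],  "commission": 18, "rating": 3.9},
    {"name": "Burger Barn",  "tags": ["burger"], "commission": 8,  "rating": 4.5},
]

# "Highest-commission pizza brand" = filter by tag, then take the max
# over the commission field. Similarity search cannot do this ordering.
best = max((b for b in brands if "pizza" in b["tags"]),
           key=lambda b: b["commission"])
```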

I want to find a way to handle this: when a user asks a logic-based question that needs reasoning, we somehow determine the correct brands from the database/vector database and then send those brands to GPT-3.5 in the prompt.
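One common pattern for this is query routing: detect whether the question is a superlative/aggregation question, and if so answer it from a structured query over the metadata instead of the vector index, then feed that result into the GPT-3.5 prompt as usual. A minimal sketch, with all names and the regex-based detector being my own assumptions (a production router could instead ask GPT-3.5 itself, e.g. via function calling, to emit the structured query):

```python
import re

# Hypothetical metadata store mirroring the fields kept alongside the vectors.
brands = [
    {"name": "Pizza Palace", "tags": ["pizza"],  "commission": 12, "rating": 4.2},
    {"name": "Slice City",   "tags": ["pizza"],  "commission": 18, "rating": 3.9},
    {"name": "Burger Barn",  "tags": ["burger"], "commission": 8,  "rating": 4.5},
]

# Crude superlative detector; in practice an LLM-based intent classifier
# would be more robust than a keyword regex.
SUPERLATIVE = re.compile(r"\b(highest|lowest|best|top|most)\b", re.I)

def vector_answer(question):
    # Placeholder for the existing similarity-search path.
    return "similarity-search result"

def structured_answer(question):
    # Naive field/filter extraction from the question text.
    q = question.lower()
    field = "commission" if "commission" in q else "rating"
    candidates = [b for b in brands
                  if "pizza" not in q or "pizza" in b["tags"]]
    best = max(candidates, key=lambda b: b[field])
    return f"{best['name']} ({field}: {best[field]})"

def route(question):
    # Logic-based questions take the structured path; everything else
    # stays on the existing vector-similarity path.
    if SUPERLATIVE.search(question):
        return structured_answer(question)
    return vector_answer(question)
```

The string returned by `structured_answer` would then go into the GPT-3.5 prompt as context, so the model only has to phrase the answer, not compute it.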

I'm open to ideas but don't want to use a rule-based bot.

TIA.

  • Current state-of-the-art LLMs do not do logical reasoning, so no luck so far. If you can solve this problem you may win a Turing Award. – lpounng May 16 '23 at 09:14
