Questions tagged [prompt]

14 questions
24
votes
1 answer

Meaning of roles in the API of GPT-4/ChatGPT (system/user/assistant)

In the API of GPT-4 and ChatGPT, the prompt for a chat conversation is a list of messages, each marked with one of three roles: system, user, or assistant. I understand what information this represents - but what does the model do with that…
Volker Siegel
  • 589
  • 1
  • 4
  • 17
4
votes
1 answer

Who invented DAN?

DAN was a prompt that went through many, many iterations during the initial months of ChatGPT’s release to the public. DAN is an acronym that stood for “Do Anything Now”, and it was a prompt specifically designed to circumvent the guidelines OpenAI…
hmltn
  • 103
  • 9
3
votes
2 answers

How to Formulate a Reliable ChatGPT Prompt for Sentiment Analysis of a Text, and show that it is reliable?

I have a dataset which consists of roughly 400,000 sentences, and I want to give each sentence to ChatGPT so it classifies each one as positive or negative. My question is: where can I find a reliable / trusted prompt to do that, and provide evidence…
cnmesr
  • 131
  • 3
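For classification tasks like this one, a common suggestion is to constrain the model to a fixed label set so the outputs can be parsed mechanically and audited against a hand-labeled subsample. A sketch of such a template (the wording is illustrative, not a vetted prompt):

```python
def sentiment_prompt(sentence: str) -> str:
    # Illustrative template: force a one-word label so thousands of
    # outputs can be parsed automatically and spot-checked for accuracy.
    return (
        "Classify the sentiment of the sentence below. Answer with exactly "
        "one word: positive or negative.\n\n"
        f"Sentence: {sentence}\n"
        "Sentiment:"
    )

print(sentiment_prompt("I loved this film."))
```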
1
vote
2 answers

Does Negative Prompting Exist?

All the prompt engineering techniques I've seen seem to focus on telling the model what to do, e.g. few-shot prompting. Is there any value in giving the model examples of what not to do? Can you link me to any papers/techniques on the…
codeananda
  • 111
  • 2
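One commonly suggested form of "negative prompting" for text models is to pair a desired few-shot demonstration with an explicitly labeled counter-example. A sketch, with illustrative wording:

```python
# Sketch: a few-shot prompt that pairs a positive demonstration with an
# explicitly labeled counter-example (the "negative" demonstration).
prompt = "\n".join([
    "Rewrite the sentence in plain English.",
    "",
    "Good rewrite: 'Utilize the apparatus.' -> 'Use the tool.'",
    "Bad rewrite (do NOT do this): 'Utilize the apparatus.' -> 'Utilize the device.'",
    "",
    "Sentence: 'Commence the procedure.'",
    "Rewrite:",
])
print(prompt)
```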
1
vote
0 answers

Using Stable Diffusion, how do I generate an image of a fictional location from a movie or TV show?

So today is Michael J. Fox's birthday, and I thought I could pay homage to him on my Facebook timeline by posting an image of him and Christopher Lloyd from the 2015 version of Hill Valley as depicted in the movie "Back to the Future 2" that I hand…
1
vote
1 answer

What research-backed findings are there for prompting LLMs / GPT-4 to give specific information or actionable plans?

I have learned a bit recently about prompt strategies. For example, there was a paper showing that simply adding “Let’s think step by step” can increase answer quality by roughly 40%. I have also come to appreciate that models like GPT-4 sometimes…
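The zero-shot chain-of-thought trigger the excerpt refers to is simply appended verbatim to the question; a minimal sketch:

```python
# The zero-shot chain-of-thought trigger is appended to the user prompt.
COT_TRIGGER = "Let's think step by step."

def with_cot(question: str) -> str:
    return f"{question}\n\n{COT_TRIGGER}"

print(with_cot("If I have 3 apples and buy 2 more, how many do I have?"))
```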
1
vote
1 answer

Teaching an LLM about daily updated machine-readable information so it can answer questions

I’m quite new in this field, and despite having spent a good amount of time learning the ins and outs of frameworks like LangChain, and browsing around the internet quite a bit, I still don’t know what approach to follow for the use case…
newlog
  • 121
  • 2
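The pattern usually suggested for this use case is retrieval augmentation: re-index the daily data, retrieve relevant snippets at question time, and prepend them to the prompt. A toy sketch where a keyword match stands in for a vector-similarity search (all names and data are illustrative):

```python
# Toy retrieval-augmentation sketch: daily records are indexed, matched
# against the question, and stuffed into the prompt as context.
daily_records = {
    "exchange_rate": "EUR/USD closed at 1.09 today.",
    "weather": "Light rain expected tomorrow.",
}

def retrieve(question: str) -> list[str]:
    # Naive keyword match standing in for a vector-similarity search.
    q = question.lower()
    return [text for key, text in daily_records.items() if key.split("_")[0] in q]

def build_prompt(question: str) -> str:
    context = "\n".join(retrieve(question))
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"

print(build_prompt("What was the exchange rate today?"))
```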
1
vote
1 answer

If we prompt a large language model on a task, will its ability on other tasks be affected? How can we recover it?

For example, I suspect that for some retrieval-augmented LLMs, the generated content may lack creativity. Recent work has explored the inability of retrieval-augmented methods to enhance the reasoning of LLMs. Now imagine a scenario where we…
0
votes
0 answers

Improving Contextual Consistency and Quality in OpenAI API Responses

I'm currently wrestling with an issue using the OpenAI npm package for API calls. My setup involves prompts, user history, and questions with parameters like temperature, top_p, and frequency_penalty. Strangely, responses from the API lack the…
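The sampling parameters the excerpt names can be pinned down to make replies more repeatable between calls. A sketch in Python (a dict standing in for the JS request body the question describes; values are illustrative, and lower temperature/top_p generally reduce run-to-run variation):

```python
# Illustrative request parameters; lower temperature and top_p typically
# make responses more consistent across repeated calls.
params = {
    "model": "gpt-4",
    "temperature": 0.2,
    "top_p": 0.9,
    "frequency_penalty": 0.5,
}
print(params)
```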
0
votes
1 answer

Fine-tuning or Prompt Engineering or both?

We have a dataset of legal cases, academic papers, etc., which we will load into a vector database. We want to develop an agent that will allow a user to enter a specific legal issue; the agent will then search for all related documents (or portions of…
GEM
  • 1
  • 1
0
votes
0 answers

Word Embeddings but for Logical reasoning in custom knowledge GPT-3.5 bot

So I have created a chatbot using GPT-3.5 Turbo. I have a vector database that holds vector embeddings of brands, ratings, commission percentages, outlets, tags, etc. Here's how the system is designed: the user asks a question. The question is…
0
votes
1 answer

How to guide the interaction between two ChatGPT conversational agents?

I wrote code for a conversation between two ChatGPT instances (which I call agent 1 and agent 2) using poe.com and its reverse-engineered API: import poe agent_1 = {"token":""} print("Agent 1: ON") agent_2 = {"token":""} print("Agent 2: ON") client_1…
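Since the code in the excerpt is truncated, here is a generic sketch of the alternating two-agent loop it describes; `ask` is a hypothetical stand-in for whatever call the reverse-engineered client actually exposes:

```python
def ask(agent_name: str, message: str) -> str:
    # Placeholder: a real implementation would call the agent's API here.
    return f"{agent_name} replying to: {message}"

# Each agent's reply becomes the next agent's input, alternating turns.
message = "Hello, let's discuss prompt engineering."
transcript = []
for turn in range(4):
    speaker = "agent_1" if turn % 2 == 0 else "agent_2"
    message = ask(speaker, message)
    transcript.append((speaker, message))

for speaker, text in transcript:
    print(f"{speaker}: {text}")
```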
0
votes
0 answers

When would you use prompting vs. fine-tuning?

I would like to hear your thoughts on when it is appropriate to use prompting vs. fine-tuning. Does one make more sense for specific tasks than the other? Kindly elaborate.
0
votes
1 answer

Can I add an output example to a prompt for a language model?

I want to fine-tune GPT-2 to extract relevant data from a given text. So for (a trivial) example, given the text "the car was manufactured in X, can reach Y km/h, and has Z horsepower", my desired output would be manufacturer: X, max speed: Y,…
Hadar Sharvit
  • 371
  • 1
  • 12
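What this question describes is essentially an in-context (one-shot) example: showing the model one input/output pair before the new input. A sketch using the question's own fields (the second text is an invented input for illustration):

```python
# One-shot extraction prompt: the first Text/Output pair is the example
# from the question; the second Text is a new input to extract from.
example_text = "the car was manufactured in X, can reach Y km/h, and has Z horsepower"
prompt = (
    "Extract structured fields from the text.\n\n"
    f"Text: {example_text}\n"
    "Output: manufacturer: X, max speed: Y, horsepower: Z\n\n"
    "Text: the bike was built in A and tops out at B km/h\n"
    "Output:"
)
print(prompt)
```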