When using Chat Completions through the OpenAI API, I run into difficulties getting the model to maintain a memory of the conversation. I use a Python script to run a series of questions about a series of deals. I want to start a fresh conversation for each new deal, but make sure that the same conversation is maintained across all questions referencing the same deal.
Currently, I receive output like this when referencing an acquisition deal that the API has already responded to:
“As an AI, I don’t have the ability to recall previous interactions. However, …
- Company A:…”
I apologize for this, I am new to Python. I was under the impression that I could use `previous_context` and/or `conversation_context` to maintain this type of conversation. Here is my code:
```python
import time

import openai
import pandas as pd
from docx import Document
from tqdm import tqdm


def ask_gpt(prompt, client, previous_context="", max_retries=3, retry_delay=5, initialize=False):
    retries = 0
    while retries < max_retries:
        messages = []
        if initialize:
            messages.append({
                "role": "system",
                "content": ()  # system prompt omitted
            })
        # Include previous context
        if previous_context:
            messages.append({
                "role": "user",
                "content": previous_context
            })
        # Add the main user query
        messages.append({
            "role": "user",
            "content": prompt
        })
        print(messages)  # Print messages for diagnosis
        try:
            response = client.chat.completions.create(
                model="gpt-4-0613",
                messages=messages,
                max_tokens=3200,
                n=1,
                stop=None,
                top_p=0.0,
                temperature=0.0,
                frequency_penalty=1,
                presence_penalty=1
            )
            answer = response.choices[0].message.content  # Corrected path
            return answer
        # Handle any exception that may occur
        except openai.RateLimitError as e:
            print("Rate limit exceeded. Please try again later.")
            print(e)  # Print the error for diagnosis
            time.sleep(retry_delay)  # Sleep before retrying
            retry_delay *= 2  # Exponential backoff
            retries += 1
        except Exception as e:
            print(f"An unexpected error occurred: {e}")
            retries += 1
            if retries >= max_retries:
                raise
            time.sleep(retry_delay)
            retry_delay *= 2  # double the delay for the next retry


def generate_questions(df, word_file):
    doc = Document(word_file)
    prompts = [p.text for p in doc.paragraphs if p.text.startswith("Prompt")]


def process_xlsx(input_file, output_file, word_file, client):
    df = pd.read_excel(input_file, sheet_name='FullSample', engine='openpyxl')
    generate_questions(df, word_file)
    prompt_cols = [col for col in df.columns if col.startswith("Prompt")]
    # Iterate over each row in the DataFrame
    for idx, row in tqdm(df.iterrows(), total=len(df)):
        # Reset context at the beginning of each row/new deal
        conversation_context = ""
        initialize = True
        # Iterate over each prompt column for the current row
        for col in prompt_cols:
            prompt = row[col]
            # Fetch the response from the GPT model with acquirer and target lists as needed
            response = ask_gpt(prompt, client, initialize=initialize)
            conversation_context += f"Q: {prompt}\nA: {response}\n"
            # Store the response in the DataFrame
            df.at[idx, f'Response_for_{col}'] = response
            # Reset initialize only once per row
            initialize = False
            time.sleep(1)  # Throttle requests
    df.drop(columns=prompt_cols, inplace=True)
    df.to_excel(output_file, index=False, engine='openpyxl')
```
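For reference, this is the stateless pattern I understood from the documentation: Chat Completions has no memory of its own, so each call must resend all earlier turns in the `messages` list. This is only a minimal sketch of that pattern; the names `build_messages` and `history` are my own, and the actual API call is elided.

```python
# Sketch of carrying conversation context manually across stateless calls.
# (Helper names are mine, not from the OpenAI SDK.)

def build_messages(history, prompt, system_prompt=None):
    """Assemble the full message list for one Chat Completions call."""
    messages = []
    if system_prompt:
        messages.append({"role": "system", "content": system_prompt})
    # Replay every earlier turn, in order, so the model "remembers" them
    for question, answer in history:
        messages.append({"role": "user", "content": question})
        messages.append({"role": "assistant", "content": answer})
    # Finally, the new question for this call
    messages.append({"role": "user", "content": prompt})
    return messages


history = []  # reset this at the start of each new deal
msgs = build_messages(history, "Who acquired Company A?", "You are a deal analyst.")
# answer = client.chat.completions.create(model=..., messages=msgs)...
# history.append(("Who acquired Company A?", answer))  # next call sees this turn
```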