Automation Action: ChatGPT
Send a prompt to OpenAI ChatGPT and assign the response to a variable. You can send single one-off prompts, or prompts that are part of a conversation: assign a Conversation Id when previous prompts/responses should be included for context. The ThinkAutomation ChatGPT action enables you to automate requests to ChatGPT and then use the response further in your Automation.
Before you can use this action you must create an account with OpenAI. Go to OpenAI and click the Get Started link to create an account.
On your account page select API Keys and generate a new secret key. Make a note of this key as it is only displayed once. This is your OpenAI API Key.
Specify your OpenAI API Key. You can specify a different OpenAI API Key on each ChatGPT action. If you will only be using a single OpenAI Account then you can enter your API key in the ThinkAutomation Server Settings - Integrations - ChatGPT section. This key will be used by default if you do not specify one on the ChatGPT action itself.
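Under the hood the action calls the OpenAI Chat Completions API with your API key. As a rough sketch (the payload shape follows the public OpenAI API; the exact request ThinkAutomation sends is an internal detail), the key travels in an Authorization header and the prompt becomes a message list:

```python
import json

def build_chat_request(api_key, prompt, system_message=None,
                       model="gpt-3.5-turbo"):
    """Build the HTTP headers and JSON body for a Chat Completions call."""
    messages = []
    if system_message:
        # Optional System Message, e.g. 'You are a helpful assistant'
        messages.append({"role": "system", "content": system_message})
    messages.append({"role": "user", "content": prompt})
    headers = {
        "Authorization": f"Bearer {api_key}",  # your OpenAI API Key
        "Content-Type": "application/json",
    }
    body = json.dumps({"model": model, "messages": messages})
    return headers, body

headers, body = build_chat_request(
    "sk-...", "Hello", system_message="You are a helpful assistant")
```

This is why the key is required before the action can run: without a valid Authorization header the API rejects the request.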
Specify the Operation. This can be:
- Ask ChatGPT To Respond To A Prompt
- Add Context To A Conversation
- Clear Conversation Context
Ask ChatGPT To Respond To A Prompt
The System Message is optional. This can help to set the behavior of the assistant. For example: 'You are a helpful assistant'.
The Prompt is the text you want a response to. For example:
What category is the email below? Is it sales, marketing, support or spam? Respond with just sales, marketing, support or spam.

Subject: %Msg_Subject%
%Msg_Digest%

Response: sales
Extract the name and mailing address from this email:

Dear Kelly,
It was great to talk to you at the seminar. I thought Jane's talk was quite good.
Thank you for the book. Here's my address 2111 Ash Lane, Crestview CA 92002
Best,
Maya

Response:
Name: Maya
Mailing Address: 2111 Ash Lane, Crestview CA 92002
I am flying from Manchester (UK) to Orlando. What are the airport codes? Respond with just the codes separated by comma.

Response: MAN,MCO
Prompts you send to ChatGPT have a limit of approximately 1000 words.
Tip: When using ChatGPT to analyze incoming emails, you can use the %Msg_Digest% built-in field instead of %Msg_Body%. The %Msg_Digest% contains the last reply text only with all blank lines and extra whitespace removed. It is also trimmed to the first 750 characters. This is usually enough to categorize the text and will save your usage count.
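The digest behaviour described above can be approximated as follows. This is only a sketch of the whitespace-stripping and trimming, not ThinkAutomation's actual implementation (in particular, isolating the last reply from a quoted email thread is not reproduced here):

```python
def make_digest(body, limit=750):
    """Approximate %Msg_Digest%: remove blank lines and extra
    whitespace, then trim to the first `limit` characters."""
    # Collapse runs of whitespace within each line
    lines = [" ".join(line.split()) for line in body.splitlines()]
    # Drop blank lines and join the remainder
    compact = " ".join(line for line in lines if line)
    return compact[:limit]

digest = make_digest("Hi,\n\n\n  please   reset my password.\n")
```

Because the digest is capped at 750 characters, it keeps the prompt well under the token limit while still carrying enough text to categorize.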
The Model entry allows you to select the OpenAI Model to use. You can select from:
- The available OpenAI models (such as gpt-3.5-turbo)
- Your own fine-tuned model name
See the OpenAI documentation for details about the different models. GPT-3.5-turbo is the default and works for most scenarios; it is also the least expensive.
Specify the variable to receive the response from the Assign Response To list.
You can also optionally assign the number of tokens used for the prompt/response. Select the variable to receive the tokens used from the Assign Used Token Count To list. OpenAI charges are based on tokens used. For example, the current pricing for gpt-3.5-turbo is $0.002 per 1000 tokens.
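If you capture the used token count in a variable, the charge for a request is simple arithmetic. A small helper, using the gpt-3.5-turbo rate quoted above (check OpenAI's pricing page for current rates):

```python
def usage_cost_usd(tokens_used, usd_per_1000_tokens=0.002):
    """Estimate the charge for a request: tokens used times the
    per-1000-token rate ($0.002/1000 for gpt-3.5-turbo as quoted)."""
    return tokens_used * usd_per_1000_tokens / 1000

cost = usage_cost_usd(1500)  # a 1500-token prompt+response
```

Summing this per execution gives a running cost estimate for an Automation.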
You can optionally specify a Conversation Id. This is useful if multiple ChatGPT requests will be made within the same Solution and you want to include previous prompts/responses for context, or if you want to add your own context prior to asking ChatGPT for a response.
The Conversation Id can be any text. For example, setting it to %Msg_FromEmail% will link any requests for the same incoming email address.
The Max Conversation Lines entry controls the maximum number of previous prompts/response pairs that are included with each request. For example, if the Max Conversation Lines is set to 10 then the last (most recent) 10 prompt/response pairs will be sent prior to the current prompt. As the conversation grows, the oldest items will be removed to prevent the total prompt text going over the ChatGPT token limit.
Conversations are shared by all Automations within a Solution and conversation lines older than 48 hours are removed.
Suppose you send 'What is the capital city of France?' in one prompt and receive a response. If you then send another separate prompt of 'What is the population?' with the same conversation id then you will receive a correct response about the population of Paris because ChatGPT already knows the context. This would work across multiple Automation executions for up to 48 hours, as long as the conversation id is the same.
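In terms of the underlying chat format, a conversation is just the previous prompt/response pairs replayed ahead of the new prompt. The sketch below assumes the OpenAI chat message roles and mirrors the Max Conversation Lines trimming; how ThinkAutomation stores the history internally is not documented here:

```python
def build_messages(history, prompt, max_lines=10, system_message=None):
    """Assemble the message list for a conversation request: optional
    system message, the last `max_lines` prompt/response pairs, then
    the new prompt."""
    messages = []
    if system_message:
        messages.append({"role": "system", "content": system_message})
    # Only the most recent pairs are kept, so the oldest items drop off
    for user_text, assistant_text in history[-max_lines:]:
        messages.append({"role": "user", "content": user_text})
        messages.append({"role": "assistant", "content": assistant_text})
    messages.append({"role": "user", "content": prompt})
    return messages

history = [("What is the capital city of France?", "Paris.")]
msgs = build_messages(history, "What is the population?")
```

Because the earlier exchange travels with the new prompt, the model can resolve "the population" to Paris without it being restated.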
Add Context To A Conversation
You can add context to a conversation. Context is used to help ChatGPT give the correct answer to a question. The context can be Static Text, or you can search articles based on the incoming message from the Embedded Knowledge Store and send the most relevant articles to provide context.
You could also look up context any other way (via your own database or a web lookup). For example: if the customer provides an email address at the start of the chat, you could look up customer and accounting/order information and add this to the context in case the customer asks about outstanding orders.
The same context won't be added to a conversation if the conversation already has it. So you can add standard context (for example, general information about your business) along with searched-for context within your Automation prior to asking ChatGPT for a response.
You can add multiple ChatGPT - Add Context To A Conversation actions in your Automation prior to the ChatGPT - Ask ChatGPT To Respond To A Prompt action.
For example: Suppose you have a company chat bot on your website using the Web Chat message source. A user asks 'what is the current price for widgets?'. You first add some general context about your business, you then do a knowledge base search with the Search Text set to the incoming question. You add the most relevant articles relating to widgets to the conversation as context. ChatGPT will then be able to answer the user's question from the context you provided.
The context itself does not appear in the chat or get saved anywhere - it simply gets added to the prompt sent to ChatGPT to help ChatGPT answer the user's question. The benefit of this is that you can use the standard ChatGPT models without training - and you can always provide up-to-date information by keeping your local knowledge base updated or looking up context from a database. This is a much faster way of creating a working bot, and a much more cost-effective solution than training your own model or using third-party hosted services.
Note: ChatGPT has a limit of 4096 tokens per request. Typically a token corresponds to about 4 characters of text. This includes the response from ChatGPT itself. ThinkAutomation will only include the most recently added context with the ChatGPT request, to ensure the token limit is not exceeded. Therefore, if you add too much context, some of it may not be included.
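The "most recently added context wins" behaviour can be sketched with the 4-characters-per-token heuristic mentioned above. This is an illustration of the idea, not ThinkAutomation's exact algorithm; the 500-token reserve for the reply is an assumed figure:

```python
def select_context(blocks, prompt, limit=4096, reply_reserve=500):
    """Keep only the most recently added context blocks that fit
    within the token budget, estimating ~4 characters per token."""
    budget = limit - reply_reserve - len(prompt) // 4
    kept = []
    for block in reversed(blocks):  # newest context first
        cost = len(block) // 4
        if cost > budget:
            break  # older context no longer fits and is dropped
        kept.append(block)
        budget -= cost
    kept.reverse()  # restore original order
    return kept

# Three large context blocks; only the newest fits the budget
kept = select_context(["a" * 8000, "b" * 8000, "c" * 8000], "q")
```

The practical consequence is the one stated in the note: add your most important context last, or keep each block small, so nothing essential is dropped.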
Adding Tabular Context
You can add tabular context to a conversation. A user can then ask questions relating to the data.
For example, you could lookup invoices for a customer (based on the email address provided) from a database and return the data in CSV format:
invoice_number,invoice_date,product,amount_due
INV-2023351,2023-01-05,Plain Widgets,1500.00
INV-2023387,2023-01-10,Orange Niblets,2500.00
INV-2023421,2023-01-15,Flat Widgets,1800.00
INV-2023479,2023-01-20,Flat Widgets,3500.00
INV-2023521,2023-01-25,Round Niblets,1200.00
You would assign the CSV data to a %variable% and then add Static Context:
Given the following list of invoices in CSV format for the user, answer questions about this data. The 'amount_due' column gives the outstanding balance for the invoice in dollars. %CSVData%
The chat user could then ask questions such as:
- What is the total amount due?
- Can you show me a list of my invoices?
When adding context as tabular data, you need to precede the data with a clear instruction describing what the data is. You may need to experiment with the prompt text to ensure ChatGPT responds correctly.
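One way to experiment is to compute the expected answer locally before relying on ChatGPT's reply. The sketch below parses a two-row sample of the CSV above with Python's standard csv module and totals the 'amount_due' column, giving a known-correct figure to compare a response against:

```python
import csv
import io

# Two sample rows from the invoice data shown above
csv_data = """invoice_number,invoice_date,product,amount_due
INV-2023351,2023-01-05,Plain Widgets,1500.00
INV-2023387,2023-01-10,Orange Niblets,2500.00"""

# Parse the CSV and total the outstanding balances - the figure a
# correct answer to "What is the total amount due?" should match
rows = list(csv.DictReader(io.StringIO(csv_data)))
total_due = sum(float(row["amount_due"]) for row in rows)
```

If ChatGPT's answer disagrees with the locally computed total, the instruction text preceding the data usually needs rewording.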
Clear Conversation Context
This operation will clear any Context added to a conversation. Specify the Conversation Id.
ChatGPT Rate Limits
Your OpenAI account will set a rate limit for the maximum requests per minute. The OpenAI API Key - Rate Limit Retries setting determines how many times ThinkAutomation will retry the request if a rate limit error is returned. It will automatically increase the wait time for each retry. The default wait period is 30 seconds. If the request still fails after the retries then an error will be raised.
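The retry behaviour looks roughly like the following. The doubling growth factor is illustrative (the documentation only says the wait increases each retry), and `RateLimitError` here is a stand-in for an HTTP 429 response:

```python
import time

class RateLimitError(Exception):
    """Stand-in for a rate limit (HTTP 429) error from the API."""

def call_with_retries(send_request, retries=3, base_wait=30.0,
                      sleep=time.sleep):
    """Retry `send_request` on rate-limit errors, waiting longer each
    time; re-raise once the configured retries are exhausted."""
    wait = base_wait
    for attempt in range(retries + 1):
        try:
            return send_request()
        except RateLimitError:
            if attempt == retries:
                raise  # retries exhausted: surface the error
            sleep(wait)
            wait *= 2  # increase the wait for each retry (illustrative)

# Simulate a request that is rate-limited twice, then succeeds
waits = []
attempts = {"n": 0}
def flaky_request():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RateLimitError()
    return "ok"

result = call_with_retries(flaky_request, sleep=waits.append)
```

Injecting `sleep` keeps the sketch testable; in real use the default `time.sleep` applies the 30-second base wait.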
ChatGPT has many uses. Other than being a regular chat bot that has knowledge of many subjects, you can use it to:
- Create a Chat Bot using the Web Chat Message Source type that can answer specific questions about your business by adding context using the Embedded Knowledge Store.
- Provide automated responses to incoming support emails, utilizing the Embedded Knowledge Store.
- Parse unstructured text and extract key information.
- Summarize text.
- Classify emails.
- Translate text.
- Correct grammar/spelling.
- Convert natural language into code (SQL, PowerShell etc).
and much more. See: Examples - OpenAI