I’ve created an agent with Vertex AI Agent Builder to answer queries about investment products. The agent uses a tool that retrieves data from an API; the data includes:
- Asset class
- One-year, three-year, and five-year returns
- Liquidity options
- Fees
The agent performs well with direct questions about specific products. However, it struggles with queries that require logic and calculations, such as “Which fund has the highest five-year return?”
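For context, the records the tool returns look roughly like this (field names and numbers are illustrative placeholders, not my actual schema), and the comparison the agent fails at is trivial in code:

```python
# Illustrative shape of the tool's API response (placeholder fields/values).
funds = [
    {"name": "Fund A", "asset_class": "Equity", "return_5y": 0.082},
    {"name": "Fund B", "asset_class": "Bond",   "return_5y": 0.035},
    {"name": "Fund C", "asset_class": "Equity", "return_5y": 0.091},
]

# "Which fund has the highest five-year return?" is a one-liner:
best = max(funds, key=lambda f: f["return_5y"])
print(best["name"])  # → Fund C
```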
To address this, I’ve tried the following:
- Implemented a chain-of-thought prompt (https://arxiv.org/pdf/2201.11903) in the Instructions section to encourage the agent to break the problem down step by step. This was unsuccessful. My understanding, and the paper’s claim, is that chain-of-thought reasoning should handle exactly this kind of problem, so I’d like to understand why it doesn’t work here.

Here is the prompt I used:
```
If the user asks a question that needs calculation to derive the answer, then break down the problem step by step and solve as per the below instructions.

Data Extraction:
Identify all relevant funds and their associated values for the requested field.
Create a clear list or table of these funds and values.

Data Validation:
Ensure all values are of the same type.
Check for any missing or anomalous data points.

Sorting:
Arrange the values in ascending or descending order, depending on whether you're looking for the minimum or maximum.

Identification:
For maximum: Select the fund(s) corresponding to the highest value.
For minimum: Select the fund(s) corresponding to the lowest value.

Multiple Occurrences:
If multiple funds share the maximum or minimum value, list all of them.

Result Presentation:
State the extreme value (maximum or minimum).
Name the fund(s) associated with this value.
Provide context by mentioning how this value compares to the average or median, if relevant.

Verification:
Double-check the result by comparing it to the original data set.
Ensure no errors were made in the sorting or selection process.

Explanation:
Briefly explain the process used to arrive at the answer, highlighting key steps.
```
- Provided a step-by-step example of how to solve a similar problem, which did improve the agent’s performance.
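For what it’s worth, the steps the prompt asks the model to perform are deterministic and easy to do in code. A hypothetical tool function I’m considering registering with the agent (function and field names are my own placeholders, not a Vertex AI Agent Builder API):

```python
from typing import Any


def extreme_funds(records: list[dict[str, Any]], field: str,
                  mode: str = "max") -> dict[str, Any]:
    """Return the fund(s) with the highest or lowest value for `field`,
    mirroring the validation and tie-handling steps from my prompt."""
    # Data validation: keep only records with a numeric value for the field.
    valid = [r for r in records if isinstance(r.get(field), (int, float))]
    if not valid:
        return {"value": None, "funds": []}
    # Identification: pick the extreme value.
    pick = max if mode == "max" else min
    value = pick(r[field] for r in valid)
    # Multiple occurrences: list every fund sharing that value.
    names = [r["name"] for r in valid if r[field] == value]
    return {"value": value, "funds": names}


# Example: two funds tie for the highest five-year return.
result = extreme_funds(
    [{"name": "Fund A", "return_5y": 0.05},
     {"name": "Fund B", "return_5y": 0.09},
     {"name": "Fund C", "return_5y": 0.09}],
    "return_5y",
)
print(result)  # → {'value': 0.09, 'funds': ['Fund B', 'Fund C']}
```

If something like this were exposed as a tool, the model would only need to choose the tool and the field rather than do arithmetic itself, which is part of what I’m asking about below.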
Questions:
- Are there any best practices or recommended approaches for improving an agent’s ability to handle queries involving calculations and comparisons?
- How can I effectively implement chain-of-thought reasoning in my agent without relying solely on specific examples?
- Are there any built-in functions or tools in Vertex AI Agent Builder that can assist with mathematical operations or data comparison tasks?
- Am I right in inferring that few-shot prompting with examples works here, but chain-of-thought prompting alone doesn’t?
Any insights or suggestions would be greatly appreciated. Thank you!