Thanks heaps for the detailed reply.
Task: text classification
Model = Vertex AI large language model (supervised fine-tuning), via the Google Cloud console: Vertex AI -> Language -> Tuning.
Training data = JSONL file, format: input_text, output_text (per the required schema)
Evaluation data = JSON file, format: input_text, output_text (per the required schema)
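For reference, this is how I produce the training JSONL (a minimal sketch; the rows and labels are made-up placeholders, and the file name is arbitrary):

```python
import json

# Made-up example rows in the tuning schema I used (input_text / output_text)
rows = [
    {"input_text": "Classify: The battery drains within an hour.", "output_text": "hardware_issue"},
    {"input_text": "Classify: I was charged twice this month.", "output_text": "billing_issue"},
]

# One JSON object per line, as the tuning schema expects
with open("train.jsonl", "w") as f:
    for row in rows:
        f.write(json.dumps(row) + "\n")
```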
Scenario 1: When I tune through the Google Cloud console, I provide both training and evaluation data, but I am unable to trace where the evaluation results are stored after training succeeds (I checked all the files in the dedicated bucket).
Scenario 2: When I tune through Google Colab Python code, I do not provide an evaluation file during training. But after training, when I separately try to create an evaluation via Model Registry -> Create Evaluation, the tool asks for an eval file with ("prompt", "ground_truth") fields. [I get the same issue via Python code as well.]
Now my question is twofold:
1 - For scenario 1: the Google Cloud console gave the schema (input_text, output_text), so I followed it as-is; I am just unable to locate the evaluation results.
2 - For scenario 2: if I use the evaluation tool's format ("prompt", "ground_truth"), whereas I trained the model on (input_text, output_text), will it give me a reliable evaluation?
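To try the evaluation tool, I currently just rename the keys of my held-out file so the same examples fit its schema (a minimal sketch; the file names and the sample row are placeholders I made up):

```python
import json

# Make a tiny held-out file in my tuning schema (placeholder row)
with open("eval_input_output.jsonl", "w") as f:
    f.write(json.dumps({"input_text": "Classify: App crashes on login.",
                        "output_text": "software_issue"}) + "\n")

# Rename the keys to the evaluation tool's schema (prompt / ground_truth)
with open("eval_input_output.jsonl") as src, open("eval_prompt_gt.jsonl", "w") as dst:
    for line in src:
        row = json.loads(line)
        dst.write(json.dumps({"prompt": row["input_text"],
                              "ground_truth": row["output_text"]}) + "\n")
```

My worry is whether this key renaming alone makes the evaluation comparable to how the model was trained.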
I am so confused about how to present my results, whereas my models work excellently when I deploy and test them. I am a research student, so I have to report F1, accuracy, precision, recall, and the confusion matrix. Please guide.
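In the meantime I compute these metrics offline with scikit-learn, by sending each input_text to the deployed endpoint and comparing predictions to my labels (a sketch; the hard-coded lists below are placeholders standing in for real endpoint calls, and the label names are made up):

```python
from sklearn.metrics import (accuracy_score, confusion_matrix,
                             precision_recall_fscore_support)

# y_true: gold labels from my eval file; y_pred: the model's outputs
# (placeholder values here instead of real endpoint responses)
y_true = ["hardware_issue", "billing_issue", "hardware_issue", "billing_issue"]
y_pred = ["hardware_issue", "billing_issue", "billing_issue", "billing_issue"]

accuracy = accuracy_score(y_true, y_pred)
# Macro-averaged precision/recall/F1 across the two classes
precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="macro", zero_division=0)
# Rows = true class, columns = predicted class, in the given label order
cm = confusion_matrix(y_true, y_pred, labels=["billing_issue", "hardware_issue"])

print(f"accuracy={accuracy:.2f} precision={precision:.2f} "
      f"recall={recall:.2f} f1={f1:.2f}")
print(cm)
```

I would just like to know whether this is an acceptable substitute for the built-in evaluation, or whether the console evaluation reports these same metrics somewhere I have not found.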