I have a custom extractor processor in Document AI with a labeled dataset of 100 training and 25 test documents. After fine-tuning the model and looking at the full evaluation report, I get a result I can't wrap my head around: both the ground truth and the prediction point at the exact same value on the document, yet it is still counted as a false positive/false negative. Can someone explain what is happening? When I try the fine-tuned processor on that same document, it finds exactly what I'm looking for.
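
For context, this is roughly how I'm calling the fine-tuned processor version (the project, location, processor, and version IDs below are placeholders, not my real values):

```python
from google.cloud import documentai

# Placeholder IDs - replace with your own project/location/processor/version.
PROJECT_ID = "my-project"
LOCATION = "us"
PROCESSOR_ID = "my-processor-id"
VERSION_ID = "my-fine-tuned-version-id"

client = documentai.DocumentProcessorServiceClient()

# Full resource name of the fine-tuned processor version.
name = client.processor_version_path(PROJECT_ID, LOCATION, PROCESSOR_ID, VERSION_ID)

# Send one of the test documents through the processor.
with open("sample.pdf", "rb") as f:
    raw_document = documentai.RawDocument(content=f.read(), mime_type="application/pdf")

result = client.process_document(
    request=documentai.ProcessRequest(name=name, raw_document=raw_document)
)

# Print the extracted entities with their confidence scores.
for entity in result.document.entities:
    print(entity.type_, entity.mention_text, entity.confidence)
```

When I run this, the entity in question is extracted with the value I expect, which is why the false positive/false negative in the evaluation report confuses me.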

