This is not urgent, as I think the disparate formats I have, AppSheet's limit of ten training images, and the lack of identifiable anchor words make trained OCR unsuitable for this app.
It looks like AppSheet OCR Training preferentially recognises inverted text.
AppSheet OCR Training displayed the image, and others, with the correct orientation. However, in every case where an upside-down copy of the target text was present, the inverted text appears to have been identified in preference to the upright copy.
Based on the anchor word warning message, I understood that if no anchor words were identified, the default was to match from the top left.
I have searched generally for information on Tesseract training and inverted text but found nothing helpful.
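In case it helps anyone reproduce this outside AppSheet, here is a rough sketch of how I would check whether plain Tesseract shows the same preference for the inverted copy. It assumes Tesseract and pytesseract are installed, and the image filename is just a hypothetical placeholder; it is only a diagnostic, not anything AppSheet-specific.

```python
from PIL import Image
import pytesseract

# Hypothetical test image containing both an upright and an upside-down
# copy of the target text.
image = Image.open("receipt_with_inverted_copy.png")

# Orientation and script detection: reports the rotation Tesseract thinks
# is needed to make the page upright, with a confidence value.
print(pytesseract.image_to_osd(image))

# Compare what Tesseract extracts from the image as-is versus rotated 180°.
print("--- as-is ---")
print(pytesseract.image_to_string(image))
print("--- rotated 180° ---")
print(pytesseract.image_to_string(image.rotate(180)))
```

If plain Tesseract also favours the rotated text here, that would suggest the behaviour comes from the underlying OCR engine rather than from AppSheet's training step.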
Is this expected behaviour?
In the end I went for untrained OCR and didn't worry about the training aspects.