Increasing OCR accuracy with tesseract? (Python)

I'm trying to build a simple script to auto-farm in a game.

Basically, in-game I train until I reach a certain percentage of a stat called Body Fatigue that's always on the screen. What I would like to happen is to detect when it reaches 60% and store that number in a variable so I can work with it. I tried using tesseract's image_to_string function on a captured snippet, but it returns the wrong text because of the game's weird font. I'll put the picture here as well, and any info would be appreciated! (Screenshot: percentage example.)
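For context, once the OCR string is reliable, this is roughly how I plan to use the number. It's only a sketch: fatigue_from_text is a made-up helper name and the hard-coded 60 is just the cut-off mentioned above.

import re

def fatigue_from_text(text):
    # Pull the first run of digits out of the OCR string, e.g. "60%" -> 60.
    # Returns None if Tesseract produced nothing digit-like.
    match = re.search(r"\d+", text)
    return int(match.group()) if match else None

# Hand-typed string standing in for the real image_to_string output:
fatigue = fatigue_from_text("Body Fatigue: 60%")
if fatigue is not None and fatigue >= 60:
    print("Body Fatigue reached", fatigue, "- stop training")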

I tried rescaling the picture x2 and it seems to improve things, but it still makes critical mistakes that could break the script, e.g. reading a 5 as a $ or a 6 as a 0. The following code is supposed to find a saved image snippet called BodyFatigue.png in the region defined by the reg tuple. It returns the coordinates (left, top, width, height) into reg2 and then saves a .PNG of the reg2 region.

I was supposed to extract the percentage number and use it in some further code, but I'm having difficulties with tesseract's text recognition; I think it's because of the weird font.

import pyautogui as pag
import pytesseract as pyt
from PIL import Image

# Point pytesseract at the local Tesseract install.
pyt.pytesseract.tesseract_cmd = r"C:\Program Files\Tesseract-OCR\tesseract.exe"

# Screen region (left, top, width, height) that contains the stat bar.
reg = 0, 197, 841, 146

# Find the saved "Body Fatigue" snippet inside that region and grab a
# screenshot of just that box.
reg2 = pag.locateOnScreen('resources/BodyFatigue.png', region=reg, confidence=0.4, grayscale=True)
bdftg = pag.screenshot('resources/BFNew.png', region=reg2)

# Upscale x2 before OCR (Image.LANCZOS is the same filter as the old
# Image.ANTIALIAS, which was removed in Pillow 10).
new_size = tuple(2 * x for x in bdftg.size)
bdftg = bdftg.resize(new_size, Image.LANCZOS)
output = pyt.image_to_string(bdftg)

print(output)
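From what I've read, one direction that might help with the 5/$ and 6/0 mix-ups is to binarise the crop and tell Tesseract to expect a single line made up only of digits and a percent sign. This is just a sketch of what I mean, not something I've verified: the 150 threshold, the x3 upscale, the --psm 7 mode and the character whitelist are all guesses that would need tuning for this font, whitelist support can vary between Tesseract versions, and depending on the game's colours the crop may need inverting (e.g. with ImageOps.invert) so the text ends up dark on a light background.

from PIL import Image
import pytesseract as pyt

img = Image.open('resources/BFNew.png').convert('L')              # grayscale
img = img.resize((img.width * 3, img.height * 3), Image.LANCZOS)  # upscale the small UI text
img = img.point(lambda p: 255 if p > 150 else 0)                  # hard threshold to pure black/white

# --psm 7 treats the crop as one single text line;
# the whitelist stops Tesseract from ever outputting '$' for a 5.
config = "--psm 7 -c tessedit_char_whitelist=0123456789%"
output = pyt.image_to_string(img, config=config)
print(output)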

