Run AI model inference on offline client devices

I am working with client devices that are not connected to the internet or to any network servers. These devices are supposed to run inference on a PyTorch model. What is the best way to set up these client machines so that the user cannot access the PyTorch model?

Note: the user must have access to the client device, but we want to hide the model so it cannot be used outside that particular device.
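For context, one common approach to this is to ship the model (e.g. a serialized TorchScript file) in an encrypted form and only decrypt it in memory at inference time, using a key derived from something device-specific (a serial number, TPM secret, etc.). The sketch below illustrates the wrapping/unwrapping step with standard-library primitives only; the XOR-with-hash keystream, the `derive_key`/`wrap_model` names, and the `device_secret` value are all illustrative stand-ins, and a real deployment should use a proper cipher such as AES-GCM from the `cryptography` package:

```python
import hashlib

def derive_key(device_secret: bytes, length: int) -> bytes:
    """Derive a keystream of `length` bytes from a device-bound secret.

    Stand-in for a real KDF + authenticated cipher (e.g. HKDF + AES-GCM);
    a hash-derived XOR keystream is NOT secure for production use.
    """
    stream = b""
    counter = 0
    while len(stream) < length:
        stream += hashlib.sha256(
            device_secret + counter.to_bytes(8, "big")
        ).digest()
        counter += 1
    return stream[:length]

def wrap_model(model_bytes: bytes, device_secret: bytes) -> bytes:
    """XOR the serialized model with the device-bound keystream."""
    key = derive_key(device_secret, len(model_bytes))
    return bytes(a ^ b for a, b in zip(model_bytes, key))

# XOR is its own inverse, so unwrapping is the same operation.
unwrap_model = wrap_model
```

At runtime the client would read the wrapped file, unwrap it in memory, and load it with `torch.jit.load(io.BytesIO(unwrapped))` (which accepts a file-like object), so the plaintext model never touches disk. Note this only raises the bar: a determined user with full device access can still dump the model from process memory.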

Sorry if anything I wrote is unclear.

Thanks



Sources

This question is adapted from Stack Overflow and is licensed under CC BY-SA 3.0.
