ANSI-encoded text file in blob storage has corrupted characters when read in a logic app

I have an issue with text files that are saved to blob storage and then iterated through using a logic app. The files come from a very old (but unfortunately very important) system. Access to it is highly restricted, so I have zero control over how the files are created.

They are uploaded to our blob storage without any file extension and with ANSI encoding. If I view the file contents through the Azure portal, I can see that non-ASCII characters (in this case åäö) are corrupted: [corrupted chars][1]

I assume this is because Azure assumes UTF-8 encoding? The problem occurs when I use a logic app to iterate through the blobs, get each file's contents, and place them in a string variable. As far as I understand it, Azure logic apps should automatically convert the encoding to UTF-8, but this doesn't seem to happen: the characters åäö in the string variable are still garbled.
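That suspicion can be checked locally: decoding Windows-1252 bytes ("ANSI" on most Western Windows systems — the exact code page is an assumption here) as UTF-8 produces exactly this kind of mojibake:

```python
# åäö encoded with the Windows-1252 ("ANSI") code page
raw = "åäö".encode("cp1252")  # b'\xe5\xe4\xf6'

# None of these bytes form a valid UTF-8 sequence, so each one is
# replaced with U+FFFD -- the same "corrupted characters" the
# Azure portal renders.
garbled = raw.decode("utf-8", errors="replace")
print(garbled)  # '���'
```

So any component in the chain that decodes the raw bytes as UTF-8 (the portal viewer, or the logic app's "get blob content" step) would show the same corruption.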

The data from the string variable is used as input for an Azure function that needs to see the åäö characters properly. Manually converting the files to UTF-8 before uploading them solves the issue, but this is not practical, since this data flow is supposed to be automated.
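For reference, the conversion the automated flow would need is just a byte-level transcode. A minimal sketch, again assuming the source files really are Windows-1252 ("ANSI" is ambiguous, so the code page is an assumption); the surrounding blob-trigger wiring is left out:

```python
def ansi_to_utf8(data: bytes) -> bytes:
    """Re-encode Windows-1252 ("ANSI") bytes as UTF-8.

    Something like this could run in a small function triggered by
    the blob upload, rewriting the file before the logic app reads it.
    """
    return data.decode("cp1252").encode("utf-8")

# Round trip: cp1252 bytes in, clean UTF-8 out
converted = ansi_to_utf8("åäö".encode("cp1252"))
print(converted.decode("utf-8"))  # åäö
```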

The file contents are extracted like so: [file contents to variable][2]

Enabling "Infer content type" makes no difference, and neither does renaming the files with a proper .txt extension.

[1]: https://i.stack.imgur.com/5B2ru.png
[2]: https://i.stack.imgur.com/zb6yj.png



Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow
