Is it possible to get container OS logs from Google Cloud Run?
I'm using Google Cloud Run. I run a container with a simple Flask + gunicorn app that starts a heavy computation. Sometimes it fails with:

```
Application exec likely failed
terminated: Application failed to start: not available
```
I'm 100% confident it's not related to Google Cloud Run timeouts or Flask + gunicorn timeouts. I've added gunicorn hooks: `worker_exit`, `worker_abort`, `worker_int`, `on_exit`. None of these hooks is invoked.
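For reference, a minimal `gunicorn.conf.py` sketch of those hooks (hook names and signatures follow gunicorn's documented server-hooks API; the log messages themselves are illustrative):

```python
# gunicorn.conf.py - the lifecycle hooks mentioned above, each logging
# a marker line so their invocation (or absence) shows up in the logs.

def worker_int(worker):
    # Called when a worker receives SIGINT or SIGQUIT.
    worker.log.warning("worker_int: worker %s interrupted", worker.pid)

def worker_abort(worker):
    # Called when a worker receives SIGABRT (e.g. on a worker timeout).
    worker.log.warning("worker_abort: worker %s aborted", worker.pid)

def worker_exit(server, worker):
    # Called just after a worker has exited, in the worker process.
    server.log.warning("worker_exit: worker %s exited", worker.pid)

def on_exit(server):
    # Called just before the master process exits.
    server.log.warning("on_exit: master shutting down")
```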
Exactly the same operation works fine locally; I can reproduce the failure only on Cloud Run. It seems like something on Cloud Run crashes and kills my Python process outright. Is there any way to debug it?

Maybe I can stream `tail -f /var/log/{messages,kernel,dmesg,syslog}` somehow in parallel with the application logs? The idea is to understand what kills the app.
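As a sketch of that idea (unverified on Cloud Run, and the listed log files may well not exist inside the container), one could run `tail -F` in daemon threads at startup and forward everything to stdout, which Cloud Run's log collector captures:

```python
# log_tail.py - hypothetical sketch: forward OS log files to stdout so
# they appear alongside the application's Cloud Run logs.
import subprocess
import sys
import threading

LOG_FILES = ["/var/log/messages", "/var/log/syslog", "/var/log/kern.log"]

def _pump(path: str) -> None:
    # -F follows the file across rotation; -n 0 skips existing content.
    proc = subprocess.Popen(
        ["tail", "-F", "-n", "0", path],
        stdout=subprocess.PIPE,
        stderr=subprocess.DEVNULL,
        text=True,
    )
    for line in proc.stdout:
        sys.stdout.write(f"[{path}] {line}")
        sys.stdout.flush()

def start_log_tails() -> None:
    # Call once at app startup, e.g. from the Flask module.
    for path in LOG_FILES:
        threading.Thread(target=_pump, args=(path,), daemon=True).start()
```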
UPD: I've managed to get a bit more log output:

```
Default
[INFO] Handling signal: term
Caught SIGTERM signal.Caught SIGTERM signal.
```

What is the right way to find out what (and why) sends SIGTERM to my Python process?
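A signal handler alone cannot identify the sender (Python's `signal` module does not expose the `siginfo` data that would carry the sending PID), but a hedged sketch like the following at least dumps every thread's stack when SIGTERM arrives, showing what the process was doing at the moment it was told to stop:

```python
# sigterm_debug.py - sketch: on SIGTERM, log a traceback for every
# thread, then restore the default handler and re-raise the signal.
import faulthandler
import signal
import sys

def _on_sigterm(signum, frame):
    print("Caught SIGTERM; dumping all thread stacks:", flush=True)
    faulthandler.dump_traceback(file=sys.stdout, all_threads=True)
    signal.signal(signal.SIGTERM, signal.SIG_DFL)
    signal.raise_signal(signal.SIGTERM)  # proceed with normal termination

signal.signal(signal.SIGTERM, _on_sigterm)
```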
Solution 1:[1]
I would suggest setting up Cloud Logging with your Cloud Run instance. You can easily do so by following the documentation below, which shows how to attach Cloud Logging to the Python root logger. This will give you more control over the logs that appear for your Cloud Run application.

Setting Up Cloud Logging for Python

Setting up Cloud Logging should also allow Cloud Run to automatically pick up any logs written under the /var/log directory, as well as syslog (/dev/log).
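For concreteness, a minimal sketch of that attachment using the `google-cloud-logging` client library (the log message is illustrative):

```python
# app.py - sketch: attach Cloud Logging to the Python root logger
# (requires the google-cloud-logging package).
import logging

import google.cloud.logging

# Instantiate the client and attach its handler to the root logger,
# so standard logging calls are shipped to Cloud Logging.
client = google.cloud.logging.Client()
client.setup_logging()

logging.info("Cloud Logging is wired up.")
```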
Hope this helps! Let me know if you need further assistance.
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
| Solution | Source |
|---|---|
| Solution 1 | Jack Wotherspoon |
