How to avoid making a network request to Google's authorization server before making an API call with BigQuery credentials in Python?

Hi, I am trying to automate a script that writes to BigQuery on a schedule, using Streamlit and deploying it on Heroku with a worker running. However, I keep getting this in the log:

2022-04-25T16:00:01.312881+00:00 app[worker.1]: 2022-04-25 23:00:01.312 WARNING google.auth.compute_engine._metadata: Compute Engine Metadata server unavailable on attempt 1 of 3. Reason: [Errno 111] Connection refused

2022-04-25T16:00:02.312860+00:00 app[worker.1]: 2022-04-25 23:00:02.312 WARNING google.auth.compute_engine._metadata: Compute Engine Metadata server unavailable on attempt 2 of 3. Reason: [Errno 111] Connection refused

2022-04-25T16:00:03.312893+00:00 app[worker.1]: 2022-04-25 23:00:03.312 WARNING google.auth.compute_engine._metadata: Compute Engine Metadata server unavailable on attempt 3 of 3. Reason: [Errno 111] Connection refused


 2022-04-25 23:00:03.312 WARNING google.auth._default: Authentication failed using Compute Engine authentication due to unavailable metadata server.
2022-04-25T16:00:03.313922+00:00 app[worker.1]: Please visit this URL to authorize this application: https://accounts.google.com/o/oauth2/auth?xxxxxxxxxxx

When I try clicking the URL, the site can't be reached ("localhost refused to connect"), and after I click through a few prompts it becomes clear this can only be done locally.

The connection is made through this:

# Create API client for BigQuery from a service-account key
import json

from google.cloud import bigquery
from google.oauth2 import service_account

service_account_info = json.load(open('data/creds/creds.json'))
credentials = service_account.Credentials.from_service_account_info(service_account_info)
client = bigquery.Client(credentials=credentials)
projectId = "xxxxxx"
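Note that `pandas_gbq` authenticates on its own and does not reuse the credentials given to `bigquery.Client`; when it finds none, it falls back to the metadata-server/OAuth flow seen in the log. A minimal sketch of binding the same service-account credentials to `pandas_gbq` globally via its `context` object (the key path and project ID are placeholders from the question):

```python
import json

from google.oauth2 import service_account
import pandas_gbq

# Load the same service-account key used for the BigQuery client
# (path copied from the question; adjust to your layout).
service_account_info = json.load(open('data/creds/creds.json'))
credentials = service_account.Credentials.from_service_account_info(
    service_account_info
)

# Bind credentials and project once so every pandas_gbq call reuses
# them instead of probing the Compute Engine metadata server.
pandas_gbq.context.credentials = credentials
pandas_gbq.context.project = "xxxxxx"
```

With the context set, subsequent `pandas_gbq.to_gbq(...)` calls should not trigger a request to Google's authorization server.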

This is the code that pushes a DataFrame to a table in BigQuery:

import pandas
import pandas_gbq
import schedule
from time import sleep

df = pandas.DataFrame(
    {
        "my_string": ["a", "b", "c"],
        "my_int64": [1, 2, 3],
        "my_float64": [4.0, 5.0, 6.0],
        "my_bool1": [True, False, True],
        "my_bool2": [False, True, False],
        "my_dates": pandas.date_range("now", periods=3),
    }
)

# Write the DataFrame to BigQuery
def writeToBQ():
    pandas_gbq.to_gbq(
        df, 'x.sandbox.x', project_id=projectId, if_exists='append',
    )

# Run the write every day at 23:00
schedule.every().day.at('23:00').do(writeToBQ)

while True:
    schedule.run_pending()
    sleep(1)
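Alternatively, `pandas_gbq.to_gbq` accepts a `credentials` argument directly, so the scheduled job can reuse the service-account credentials per call instead of setting them globally. A sketch, assuming the `df`, `projectId`, and `credentials` objects defined above (the table ID placeholder is from the question):

```python
def writeToBQ():
    pandas_gbq.to_gbq(
        df,
        'x.sandbox.x',
        project_id=projectId,
        if_exists='append',
        credentials=credentials,  # reuse the service-account creds
    )
```

Either approach keeps authentication entirely within the service-account key, so no interactive OAuth URL should appear in the worker log.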

I've run an instance of a Heroku worker to run a schedule that writes the df. However, I think the problem is either with how pandas_gbq interacts with BigQuery or with the connection made through Google's authorization server. Is there a way for the scheduler to write to BigQuery from Heroku's worker without making a network request to Google's authorization server? If I run this locally, it works.



Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow
