Snowflake error in RStudio which limits the number of rows to fetch - Error: nanodbc/nanodbc.cpp:2695: HY000

I have an issue where I roughly know the cause, but not the solution.

When I fetch data from an external Snowflake DB, the amount of data I can fetch in one query is limited. For instance, I want to fetch more than 1 million rows.

But the limit seems to be 1883 rows per fetch. I can fetch all rows in the Snowflake web UI and download them as .csv, but not via RStudio. If I do not set the limit in the query to 1883 (the maximum, for some reason), I can't fetch the data at all.

How do I bypass the size limit?

Query:

select (cast(mod_date as date)+(type-1)) as Date
, item_nr as Itemnr
, quantity as Forecast
, store_nr as Store
from MASKED.DB
where country = 'MASKED'
and store_nr in (MASKED, MASKED)
and mod_date > '2022-04-25'
and type < 4
order by Date
limit 1883;

Error:

Error in result_fetch(res@ptr, n) : 
  nanodbc/nanodbc.cpp:2695: HY000: [Snowflake][Snowflake] (25) 
      Result download worker error: Worker error: [Snowflake][Snowflake] (4) 
      REST request for URL https://MASKED.blob.core.windows.net/results
      /MASKED%2Fmain%2Fdata_0_0_0?sv=2020-08-04&spr=https&se=2022-04-25T19%3A32%3A22Z&sr=b&sp=r&sig=MASKED&rsce=gzip 
      failed: CURLerror (curl_easy_perform() failed) - code=7 msg='Couldn't connect to server' osCode=10013 osMsg='Unknown error'.
Warning message:
In dbClearResult(rs) : Result already cleared
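
A clue to the 1883-row threshold: Snowflake returns small result sets inline with the query response, but stages larger ones as chunks in cloud storage (here Azure Blob Storage), which the ODBC driver then downloads directly. curl error 7 ("Couldn't connect to server") therefore points at a firewall or proxy blocking that storage host, not at Snowflake itself. A quick reachability check from a shell (the hostname is a placeholder; substitute the `blob.core.windows.net` host from your own error message):

```shell
# Check whether the result-staging host is reachable from this machine.
# STAGE_HOST is a placeholder; use the host shown in your error's failing URL.
STAGE_HOST="MASKED.blob.core.windows.net"
STATUS=$(curl --silent --max-time 10 --output /dev/null \
          "https://${STAGE_HOST}/" && echo reachable || echo blocked)
echo "stage host is ${STATUS}"
```

If this reports `blocked`, the ODBC driver's result-download workers will fail the same way, which matches the error above.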


Solution 1:

The solution for me was to set a proxy rule for Snowflake without manipulating environment variables.

In your proxy program (or whatever proxy solution you use), set the rule like this:

Applications: Any (any app/program can use it)
Target hosts: *.your_link_name.snowflakecomputing.com
Target ports: Any 
Action: Direct (not via HTTPS or SOCKET)
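
If you can change environment variables after all, the equivalent route is a proxy exception: tell the driver's HTTP stack to bypass the proxy for both the Snowflake account host and the cloud-storage host that result chunks are downloaded from (the `blob.core.windows.net` URL in the error above). A minimal sketch; the proxy address is a hypothetical placeholder:

```shell
# Hypothetical proxy address; substitute your organization's real proxy.
export HTTPS_PROXY="http://proxy.example.com:8080"
# Bypass the proxy for Snowflake itself and for the Azure Blob host that
# serves staged result chunks (seen in the error's failing URL).
export NO_PROXY=".snowflakecomputing.com,.blob.core.windows.net"
```

These need to be set before RStudio starts (or set from R with `Sys.setenv()` before connecting).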

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution Source
Solution 1 geometricfreedom