Applying a time series forecasting model at scale on categorised data [pyspark]

My dataset looks like this:

+-------+--------+----------+
|     ID|     Val|      Date|
+-------+--------+----------+
|Ax3838J|81119.73|2021-07-01|
|Ax3838J|81289.62|2021-07-02|
|Ax3838J|81385.62|2021-07-03|
|Ax3838J|81385.62|2021-07-04|
|Ax3838J|81385.62|2021-07-05|
|Bz3838J|81249.76|2021-07-02|
|Bz3838J|81324.28|2021-07-03|
|Bz3838J|81329.28|2021-07-04|
|Bz3838J|81329.28|2021-07-05|
|Bz3838J|81329.28|2021-07-06|
+-------+--------+----------+

In reality, there are 2.7 million IDs and 56 million rows in total. I am using Azure Databricks (PySpark) and trying to apply fbprophet to a sampled dataset of 10,000 rows, and it is already taking 5+ hours.

I am considering NeuralProphet and StatsForecast instead, but I am not sure how to fit a separate forecasting model for each individual ID.
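In case it is relevant to the question: as far as I understand, StatsForecast handles the per-ID split itself by fitting one model per series, keyed by a `unique_id` column. A minimal sketch of reshaping the data into the layout it expects (the DataFrame literal uses the sample values above; the fit call is shown commented out, assuming the statsforecast package):

```python
import pandas as pd

# A few rows shaped like the sample table above (illustrative values)
long_df = pd.DataFrame({
    'ID': ['Ax3838J', 'Ax3838J', 'Bz3838J'],
    'Date': ['2021-07-01', '2021-07-02', '2021-07-02'],
    'Val': [81119.73, 81289.62, 81249.76],
})

# StatsForecast expects long-format data with unique_id / ds / y columns;
# each distinct unique_id gets its own fitted model.
long_df = long_df.rename(columns={'ID': 'unique_id', 'Date': 'ds', 'Val': 'y'})
long_df['ds'] = pd.to_datetime(long_df['ds'])

# Then (assuming the statsforecast package is installed):
# from statsforecast import StatsForecast
# from statsforecast.models import AutoARIMA
# sf = StatsForecast(models=[AutoARIMA()], freq='D', n_jobs=-1)
# forecasts = sf.forecast(df=long_df, h=30)
```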

Any suggestions?

NB: for fbprophet, Val becomes 'y' and Date becomes 'ds', respectively.
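Since each group arrives in the pandas UDF as a plain pandas DataFrame, that rename can be sketched like this (the DataFrame literal is illustrative, using the sample values above):

```python
import pandas as pd

# A couple of rows shaped like the sample table above
history_pd = pd.DataFrame({
    'ID': ['Ax3838J', 'Ax3838J'],
    'Val': [81119.73, 81289.62],
    'Date': ['2021-07-01', '2021-07-02'],
})

# Prophet requires the target column to be named 'y' and the date column 'ds'
history_pd = history_pd.rename(columns={'Val': 'y', 'Date': 'ds'})
history_pd['ds'] = pd.to_datetime(history_pd['ds'])
```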

Here is what I tried with fbprophet:

import pandas as pd
from fbprophet import Prophet


def forecast_balance(history_pd: pd.DataFrame) -> pd.DataFrame:

    # grab the ID for this group (iloc avoids relying on a 0-based index label)
    anonym_cis = history_pd['ID'].iloc[0]
    
    # instantiate the model, configure the parameters
    model = Prophet(
        interval_width=0.95,
        growth='linear',
        daily_seasonality=True,
        weekly_seasonality=True,
        yearly_seasonality=False,
        seasonality_mode='multiplicative'
    )

    # fit the model
    model.fit(history_pd)

    # configure predictions
    future_pd = model.make_future_dataframe(
        periods=30,
        freq='d',
        include_history=False
    )

    # make predictions
    results_pd = model.predict(future_pd)
    results_pd.loc[:, 'ID'] = anonym_cis

    # . . .


    # return predictions
    return results_pd[['ds', 'ID', 'yhat', 'yhat_upper', 'yhat_lower']]

from pyspark.sql.types import (
    StructType, StructField, DateType, StringType, FloatType
)

# Field names and types must match the columns returned by forecast_balance:
# the second column is 'ID', a string like 'Ax3838J', not an integer
result_schema = StructType([
    StructField('ds', DateType()),
    StructField('ID', StringType()),
    StructField('yhat', FloatType()),
    StructField('yhat_upper', FloatType()),
    StructField('yhat_lower', FloatType())
])

import pyspark.sql.functions as F

# df is assumed to already have Val renamed to 'y' and Date to 'ds'
historic_data = df.filter(F.col('ds') < '2022-02-20')

group_results = (
    historic_data
    .groupBy('ID')
    .applyInPandas(forecast_balance, schema=result_schema)
)
 
   


Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow
