How to make a decorator that batches the inputs of several calls to a function into a single batched input, then returns each result to its respective caller?

The idea is that my TensorFlow model takes oddly varying time when executed with different batch sizes. Within several ranges, the time is insensitive to the batch size.

(plot: time to run predict with different batch sizes)

So if I serve the model with data batched together from several requests, it may improve throughput. How can I do that?

I've tried to use asyncio to solve the problem, like this:

from uuid import uuid4
import asyncio

data_queue = []
task_queue = []
result_dict = {}

def create_task(data):
    task_id = uuid4()          # renamed to avoid shadowing the builtin `id`
    data_queue.append(data)
    task_queue.append(task_id)
    return task_id

async def get_result(task_id):
    while True:
        try:
            return result_dict.pop(task_id)
        except KeyError:
            # sleep() is a coroutine: it must be awaited and given a delay
            await asyncio.sleep(0.01)

def process_data():
    pass

What should I do next to finish this?



Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow
