I get schedule values from a .env file, and sometimes the parameters in the .env file change. Is it possible to change the schedule values of an already running Celery beat task?
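Celery beat builds its schedule once at startup, so values read from .env will not be picked up by an already running beat process; it has to be restarted, or a dynamic scheduler such as django-celery-beat's DatabaseScheduler used instead. A minimal sketch of the env-driven setup, where SCHEDULE_SECONDS and tasks.my_task are illustrative names, not from the question:

    import os
    from celery import Celery

    app = Celery("proj", broker="amqp://localhost")

    # Read once at startup; a changed .env requires restarting beat.
    app.conf.beat_schedule = {
        "env-driven-task": {
            "task": "tasks.my_task",
            "schedule": float(os.environ.get("SCHEDULE_SECONDS", "60")),
        },
    }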
I'm using Django, Celery, and RabbitMQ for simple tasks on Ubuntu, but Celery gives no response. I can't figure out why the task is pending in the browser, while
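Worth knowing when debugging this: PENDING is also what Celery reports for any unknown task id, e.g. when no result backend is configured or the worker never picked the task up. A quick way to inspect a task's state, assuming an app object and a stored task id:

    from celery.result import AsyncResult

    res = AsyncResult(task_id, app=app)  # task_id saved from the dispatch call
    print(res.state)   # PENDING / STARTED / SUCCESS / FAILURE
    print(res.result)  # return value, or the raised exception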
I have an application that runs 2 tasks every minute. Both of them have almost the same code, except for the API endpoint they communicate with (see below). Whe
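When two tasks differ only in the URL they call, a common refactor is a single parameterized task; a sketch, with the endpoint URLs as placeholders:

    import requests
    from celery import shared_task

    @shared_task
    def poll_endpoint(url):
        # shared body for both jobs; only the endpoint differs
        return requests.get(url, timeout=10).json()

    # scheduled or invoked with each endpoint:
    # poll_endpoint.delay("https://api.example.com/a")
    # poll_endpoint.delay("https://api.example.com/b")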
I actually had issues printing mine in the command prompt because I was using the wrong command, but I found a link to a project, which I forked: Project (if on Mac)
my task.py file:

    from celery import shared_task
    from random import randint

    @shared_task
    def create_nb():
        a = randint(1, 100)
        b = randint(1, 100)
        return {"
Killing all Celery processes involves grepping the output of ps and running kill on each PID. Grepping ps, however, also shows the grep process's own info o
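A common shell workaround is the bracket trick, which keeps grep from matching its own process entry; pkill is the shorter alternative:

    # '[c]elery' matches 'celery', but grep's own command line contains
    # '[c]elery', so it is not listed
    ps aux | grep '[c]elery' | awk '{print $2}' | xargs -r kill

    # or match on the full command line:
    pkill -f 'celery worker'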
Working on getting Celery set up with MongoDB as the result_backend. Following the configuration guidelines set out in the official docs, my celeryconfig.py is set
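For reference, a minimal celeryconfig.py of that shape; the broker URL, database, and collection names below are placeholders:

    # celeryconfig.py
    broker_url = "amqp://localhost"
    result_backend = "mongodb://localhost:27017"
    mongodb_backend_settings = {
        "database": "celery_results",
        "taskmeta_collection": "taskmeta",
    }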
I have a Django app that uses Celery to run tasks. Sometimes, I have a "hard shutdown" and a bunch of models aren't cleaned up. I created a task called clean_up
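One way to make such a clean_up task sweep up stragglers regularly is a beat entry; the dotted path and interval here are assumptions:

    from celery.schedules import crontab

    app.conf.beat_schedule = {
        "clean-up-after-crashes": {
            "task": "myapp.tasks.clean_up",      # assumed task path
            "schedule": crontab(minute="*/15"),  # every 15 minutes
        },
    }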
Here is the workflow I'm trying to achieve with Celery:

* download a big file
* split it in chunks (to files)
* process each chunk independently
* notifications
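This fan-out-then-notify shape maps naturally onto a chain that ends in a chord; a sketch with every task body stubbed out as an assumption:

    from celery import chain, chord, shared_task

    @shared_task
    def download(url):
        # fetch to disk, return the local path (stub)
        return "/tmp/bigfile"

    @shared_task
    def split(path):
        # write chunk files, return their paths (stub)
        return ["/tmp/chunk0", "/tmp/chunk1"]

    @shared_task
    def process(chunk_path):
        ...  # independent per-chunk work

    @shared_task
    def notify(results):
        ...  # runs once every process() call has finished

    @shared_task
    def fan_out(chunk_paths):
        # chord: process all chunks in parallel, then call notify with the results
        return chord(process.s(p) for p in chunk_paths)(notify.s())

    pipeline = chain(download.s("https://example.com/big"), split.s(), fan_out.s())
    # pipeline.delay() kicks the whole thing off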
I have FastAPI API code that is executed using uvicorn. Now I want to add a queue system, and I think Celery and Flower can be great tools for me since my API
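The usual split is one Celery app for the workers plus FastAPI endpoints that only enqueue; Flower then points at the same Celery app for monitoring. A minimal sketch, with the broker URL and task body assumed:

    from celery import Celery
    from fastapi import FastAPI

    celery_app = Celery("worker", broker="amqp://localhost", backend="rpc://")
    api = FastAPI()

    @celery_app.task
    def heavy_job(n: int) -> int:
        return n * 2  # placeholder workload

    @api.post("/jobs")
    def submit(n: int):
        result = heavy_job.delay(n)    # returns immediately
        return {"task_id": result.id}

    @api.get("/jobs/{task_id}")
    def status(task_id: str):
        return {"state": celery_app.AsyncResult(task_id).state}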
I'm trying to run a simple periodic task every 10 seconds using Flask and Celery with the following code in my controllers.py:

    @celery.task()
    def print_hello(wo
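For the 10-second interval itself, beat needs a schedule entry and a beat process running alongside the worker; a sketch, where the dotted task path and the app.celery module path are placeholders:

    celery.conf.beat_schedule = {
        "print-hello-every-10s": {
            "task": "controllers.print_hello",  # assumed dotted path
            "schedule": 10.0,                   # seconds
        },
    }
    # run both processes:
    #   celery -A app.celery worker
    #   celery -A app.celery beat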
Directory structure: Here is my cw_manage_integration/psa_integration/api_service/sync_config/__init__.py:

    from celery import Celery
    from kombu import Queue
    from
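Those imports usually lead into an explicit queue declaration; a sketch of the typical pattern, with all names assumed:

    from celery import Celery
    from kombu import Queue

    app = Celery("sync_config", broker="amqp://localhost")
    app.conf.task_queues = (
        Queue("default"),
        Queue("sync"),  # placeholder queue names
    )
    app.conf.task_default_queue = "default"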
I am using feedparser (feedparser==6.0.2) to parse some RSS resources in Python 3.10. When I use feedparser to get the response on CentOS 7.x, the feedparser
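For context, the basic call shape (the feed URL is a placeholder):

    import feedparser

    d = feedparser.parse("https://example.com/rss")
    print(d.bozo)  # truthy if the feed was malformed
    for entry in d.entries:
        print(entry.title)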
Whenever I run celery -A reminders worker -l INFO --detach, I get the following error:

    zsh: command not found: celery

My assumption is that the bug lies in my p
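"command not found" usually means the shell's PATH doesn't include the environment where Celery is installed; running it as a module sidesteps PATH entirely:

    # is celery installed in the interpreter you think it is?
    python -m celery --version

    # run the worker via the module path instead of the console script
    python -m celery -A reminders worker -l INFO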
When running the following simple Celery task, I always get 'client unexpectedly closed TCP connection' warnings in the RabbitMQ log output:

    from celery import
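The snippet is cut off; an app-and-task of the minimal kind the question describes would look roughly like this, with the broker URL assumed:

    from celery import Celery

    app = Celery("tasks", broker="amqp://localhost")

    @app.task
    def add(x, y):
        return x + y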
I deployed the latest Airflow on a CentOS 7.5 VM and updated sql_alchemy_conn and result_backend to Postgres databases on a PostgreSQL instance, and designated m
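For reference, the two settings live in different sections of airflow.cfg (Airflow 1.x layout); credentials and hostnames below are placeholders:

    [core]
    executor = CeleryExecutor
    sql_alchemy_conn = postgresql+psycopg2://user:pass@dbhost:5432/airflow

    [celery]
    result_backend = db+postgresql://user:pass@dbhost:5432/airflow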
I am using Celery, RabbitMQ, FastAPI, and Docker to develop an application. This is my docker-compose.yml:

    services:
      rabbitmq:
        container_name: rabbitmq
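For reference, a compose sketch of how such a stack is often wired together; every image, command, module path, and port below is an assumption:

    services:
      rabbitmq:
        container_name: rabbitmq
        image: rabbitmq:3-management
        ports: ["5672:5672", "15672:15672"]
      api:
        build: .
        command: uvicorn main:app --host 0.0.0.0 --port 8000
        ports: ["8000:8000"]
        depends_on: [rabbitmq]
      worker:
        build: .
        command: celery -A main.celery_app worker -l INFO
        depends_on: [rabbitmq]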
I want to pull data from a crypto exchange API. For that I would run my code in GKE. The API is limited to 20 requests per second. But if I run my program
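Celery's per-task rate_limit is the obvious knob here, with one caveat: it is enforced per worker instance, not cluster-wide, so several replicas in GKE would each get their own 20/s budget. A sketch, with names assumed:

    from celery import Celery

    app = Celery("exchange", broker="amqp://localhost")

    @app.task(rate_limit="20/s")  # per worker instance, not global
    def pull_ticker(pair):
        ...  # call the exchange API here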
With Django 1.8 I used django-celery to run asynchronous tasks, and I was able to debug them in my IDE (either PyCharm or Eclipse+PyDev) just by launching "python c
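On current Celery the equivalent trick for in-IDE debugging is eager mode, which runs tasks synchronously in the calling process so breakpoints fire; a sketch of the Django settings, assuming the Celery app was created with namespace="CELERY":

    # settings.py (development only)
    CELERY_TASK_ALWAYS_EAGER = True      # run tasks inline, no worker needed
    CELERY_TASK_EAGER_PROPAGATES = True  # re-raise task exceptions locally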
time="2017-10-27T07:39:20Z" level=error msg="Can't add file /var/app/current/app/content_classifier/forest.pickle to tar: io: read/write on closed pipe" time="