egradman
2010-11-11 19:09:38 UTC
My tasks import some SQLAlchemy code that creates an engine. I
believe that the engine is created prior to the worker processes
forking, and that the connection pool shared among processes is
causing some disconnection issues. Is there some way of doing per-
worker initialization, so each worker can instantiate its own engine?
I'm not using SQLAlchemy for any Celery functionality (queueing,
tombstones); I'm only using it inside the tasks. I'm using the
Postgres DB-API.
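
Something like the sketch below is the kind of per-worker initialization I have in mind, assuming Celery exposes a worker_process_init signal that fires inside each forked worker; the connection URL and handler name are just illustrative.

```python
# Minimal sketch: build a fresh engine (and pool) inside each forked
# worker so connections are never shared across processes.
from celery.signals import worker_process_init
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

engine = None              # created per worker process, not at import time
Session = sessionmaker()   # bound once the per-worker engine exists

@worker_process_init.connect
def init_worker(**kwargs):
    global engine
    # Illustrative DSN; replace with the real Postgres connection URL.
    engine = create_engine("postgresql://user:pass@localhost/mydb")
    Session.configure(bind=engine)
```

Tasks would then open sessions via Session() and be guaranteed to hit the engine created in their own process rather than one inherited from the parent.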