



How to run a celery worker on AWS Elastic Beanstalk? (2)

Versions:

  • Django 1.9.8
  • celery 3.1.23
  • django-celery 3.1.17
  • Python 2.7

I am trying to run my celery worker on AWS Elastic Beanstalk, using Amazon SQS as the celery broker.

Here is my settings.py:

INSTALLED_APPS += ('djcelery',)

import djcelery
djcelery.setup_loader()

BROKER_URL = "sqs://%s:%s@" % (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY.replace('/', '%2F'))
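The `replace('/', '%2F')` call matters because AWS secret keys can contain `/`, which would otherwise break URL parsing of the broker string. A minimal sketch of what that line produces, using made-up credentials (both key values below are hypothetical, for illustration only):

```python
# Hypothetical AWS credentials, for illustration only
AWS_ACCESS_KEY_ID = "AKIAEXAMPLEKEY"
AWS_SECRET_ACCESS_KEY = "abc/def+ghi"  # note the '/', which must be escaped

# Percent-encode '/' so the secret survives URL parsing
BROKER_URL = "sqs://%s:%s@" % (AWS_ACCESS_KEY_ID,
                               AWS_SECRET_ACCESS_KEY.replace('/', '%2F'))
# BROKER_URL == "sqs://AKIAEXAMPLEKEY:abc%2Fdef+ghi@"
```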

When I run the line below in a terminal, the worker starts on my local machine. I have also created some tasks and they execute correctly. How can I do the same on AWS EB?

python manage.py celery worker --loglevel=INFO

I found this question on StackOverflow. It says I should add a celery config file to the .ebextensions folder that runs the script after deployment, but it does not work. I would appreciate any help. After installing supervisor I did not do anything with it; maybe that is what I am missing. Here is the script:

files:
  "/opt/elasticbeanstalk/hooks/appdeploy/post/run_supervised_celeryd.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/usr/bin/env bash

      # Get django environment variables
      celeryenv=`cat /opt/python/current/env | tr '\n' ',' | sed 's/export //g' | sed 's/$PATH/%(ENV_PATH)s/g' | sed 's/$PYTHONPATH//g' | sed 's/$LD_LIBRARY_PATH//g'`
      celeryenv=${celeryenv%?}

      # Create celery configuration script
      celeryconf="[program:celeryd]
      command=/opt/python/run/venv/bin/celery worker --loglevel=INFO

      directory=/opt/python/current/app
      user=nobody
      numprocs=1
      stdout_logfile=/var/log/celery-worker.log
      stderr_logfile=/var/log/celery-worker.log
      autostart=true
      autorestart=true
      startsecs=10

      ; Need to wait for currently executing tasks to finish at shutdown.
      ; Increase this if you have very long running tasks.
      stopwaitsecs = 600

      ; When resorting to send SIGKILL to the program to terminate it
      ; send SIGKILL to its whole process group instead,
      ; taking care of its children as well.
      killasgroup=true

      ; if rabbitmq is supervised, set its priority higher
      ; so it starts first
      ; priority=998

      environment=$celeryenv"

      # Create the celery supervisord conf script
      echo "$celeryconf" | tee /opt/python/etc/celery.conf

      # Add configuration script to supervisord conf (if not there already)
      if ! grep -Fxq "[include]" /opt/python/etc/supervisord.conf
      then
          echo "[include]" | tee -a /opt/python/etc/supervisord.conf
          echo "files: celery.conf" | tee -a /opt/python/etc/supervisord.conf
      fi

      # Reread the supervisord config
      supervisorctl -c /opt/python/etc/supervisord.conf reread

      # Update supervisord in cache without restarting all services
      supervisorctl -c /opt/python/etc/supervisord.conf update

      # Start/Restart celeryd through supervisord
      supervisorctl -c /opt/python/etc/supervisord.conf restart celeryd
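The "if not there already" part of the script is an idempotent append: `grep -Fxq` exits non-zero when the exact line is missing, so the `[include]` section is only written once no matter how many times the deploy hook runs. A standalone sketch of that pattern against a throwaway temp file (the file content here is illustrative, not the real supervisord.conf):

```shell
#!/usr/bin/env bash
# Recreate the append-if-missing guard from the deploy hook on a temp file
conf=$(mktemp)
echo "[supervisord]" > "$conf"

append_include() {
    # Only append the [include] section if the exact line is not present yet
    if ! grep -Fxq "[include]" "$conf"; then
        echo "[include]" >> "$conf"
        echo "files: celery.conf" >> "$conf"
    fi
}

append_include
append_include   # second call is a no-op, so redeploys stay safe
grep -c "^\[include\]$" "$conf"   # prints 1
rm -f "$conf"
```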

EB logs: it looks like it works, but it still does not run my tasks.

-------------------------------------
/opt/python/log/supervisord.log
-------------------------------------
2016-08-02 10:45:27,713 CRIT Supervisor running as root (no user in config file)
2016-08-02 10:45:27,733 INFO RPC interface 'supervisor' initialized
2016-08-02 10:45:27,733 CRIT Server 'unix_http_server' running without any HTTP authentication checking
2016-08-02 10:45:27,733 INFO supervisord started with pid 2726
2016-08-02 10:45:28,735 INFO spawned: 'httpd' with pid 2812
2016-08-02 10:45:29,737 INFO success: httpd entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2016-08-02 10:47:14,684 INFO stopped: httpd (exit status 0)
2016-08-02 10:47:15,689 INFO spawned: 'httpd' with pid 4092
2016-08-02 10:47:16,727 INFO success: httpd entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2016-08-02 10:47:23,701 INFO spawned: 'celeryd' with pid 4208
2016-08-02 10:47:23,854 INFO stopped: celeryd (terminated by SIGTERM)
2016-08-02 10:47:24,858 INFO spawned: 'celeryd' with pid 4214
2016-08-02 10:47:35,067 INFO success: celeryd entered RUNNING state, process has stayed up for > than 10 seconds (startsecs)
2016-08-02 10:52:36,240 INFO stopped: httpd (exit status 0)
2016-08-02 10:52:37,245 INFO spawned: 'httpd' with pid 4460
2016-08-02 10:52:38,278 INFO success: httpd entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2016-08-02 10:52:45,677 INFO stopped: celeryd (exit status 0)
2016-08-02 10:52:46,682 INFO spawned: 'celeryd' with pid 4514
2016-08-02 10:52:46,860 INFO stopped: celeryd (terminated by SIGTERM)
2016-08-02 10:52:47,865 INFO spawned: 'celeryd' with pid 4521
2016-08-02 10:52:58,054 INFO success: celeryd entered RUNNING state, process has stayed up for > than 10 seconds (startsecs)
2016-08-02 10:55:03,135 INFO stopped: httpd (exit status 0)
2016-08-02 10:55:04,139 INFO spawned: 'httpd' with pid 4745
2016-08-02 10:55:05,173 INFO success: httpd entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2016-08-02 10:55:13,143 INFO stopped: celeryd (exit status 0)
2016-08-02 10:55:14,147 INFO spawned: 'celeryd' with pid 4857
2016-08-02 10:55:14,316 INFO stopped: celeryd (terminated by SIGTERM)
2016-08-02 10:55:15,321 INFO spawned: 'celeryd' with pid 4863
2016-08-02 10:55:25,518 INFO success: celeryd entered RUNNING state, process has stayed up for > than 10 seconds (startsecs)


You can use supervisor to run celery. That will run celery as a daemonized process.

[program:tornado-8002]
directory: name of the directory where the django project lies
command: command to run celery  ; e.g. python manage.py celery
stderr_logfile = /var/log/supervisord/tornado-stderr.log
stdout_logfile = /var/log/supervisord/tornado-stdout.log
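Filled in, such a supervisor program block might look like the following; the paths and the program name are placeholders, not values from the answer above:

```ini
[program:celeryd]
; Placeholder paths -- adjust to where your Django project actually lives
directory=/opt/python/current/app
command=python manage.py celery worker --loglevel=INFO
stderr_logfile=/var/log/supervisord/celeryd-stderr.log
stdout_logfile=/var/log/supervisord/celeryd-stdout.log
autostart=true
autorestart=true
```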


I forgot to add an answer after I solved this. Here is how I fixed it: I created a new file, "99-celery.config", in my .ebextensions folder, added the code below, and it works perfectly. (Don't forget to change the project name on line number 16; mine is molocate_eb.)

files:
  "/opt/elasticbeanstalk/hooks/appdeploy/post/run_supervised_celeryd.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/usr/bin/env bash

      # Get django environment variables
      celeryenv=`cat /opt/python/current/env | tr '\n' ',' | sed 's/export //g' | sed 's/$PATH/%(ENV_PATH)s/g' | sed 's/$PYTHONPATH//g' | sed 's/$LD_LIBRARY_PATH//g'`
      celeryenv=${celeryenv%?}

      # Create celery configuration script
      celeryconf="[program:celeryd]
      ; Set full path to celery program if using virtualenv
      command=/opt/python/current/app/molocate_eb/manage.py celery worker --loglevel=INFO

      directory=/opt/python/current/app
      user=nobody
      numprocs=1
      stdout_logfile=/var/log/celery-worker.log
      stderr_logfile=/var/log/celery-worker.log
      autostart=true
      autorestart=true
      startsecs=10

      ; Need to wait for currently executing tasks to finish at shutdown.
      ; Increase this if you have very long running tasks.
      stopwaitsecs = 600

      ; When resorting to send SIGKILL to the program to terminate it
      ; send SIGKILL to its whole process group instead,
      ; taking care of its children as well.
      killasgroup=true

      ; if rabbitmq is supervised, set its priority higher
      ; so it starts first
      priority=998

      environment=$celeryenv"

      # Create the celery supervisord conf script
      echo "$celeryconf" | tee /opt/python/etc/celery.conf

      # Add configuration script to supervisord conf (if not there already)
      if ! grep -Fxq "[include]" /opt/python/etc/supervisord.conf
      then
          echo "[include]" | tee -a /opt/python/etc/supervisord.conf
          echo "files: celery.conf" | tee -a /opt/python/etc/supervisord.conf
      fi

      # Reread the supervisord config
      supervisorctl -c /opt/python/etc/supervisord.conf reread

      # Update supervisord in cache without restarting all services
      supervisorctl -c /opt/python/etc/supervisord.conf update

      # Start/Restart celeryd through supervisord
      supervisorctl -c /opt/python/etc/supervisord.conf restart celeryd
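The `celeryenv` lines at the top of the hook flatten /opt/python/current/env (a file of `export KEY=value` lines) into the comma-separated `KEY=value` string that supervisor's `environment=` setting expects. A small sketch of the same transformation in Python, using a made-up env-file content (the variable names below are hypothetical):

```python
# Sample content of an EB env file (hypothetical values)
env_file = "export DJANGO_SETTINGS_MODULE=settings\nexport SECRET=abc\n"

# Mimic: tr '\n' ',' | sed 's/export //g', then ${celeryenv%?} to drop
# the trailing comma left behind by the file's final newline
celeryenv = env_file.replace('\n', ',').replace('export ', '')
celeryenv = celeryenv[:-1]
# celeryenv == "DJANGO_SETTINGS_MODULE=settings,SECRET=abc"
```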

Edit: In case of a supervisor error on AWS, just make sure that:

  • You are using Python 2, not Python 3, since supervisor does not work on Python 3.
  • You did not forget to add supervisor to your requirements.txt.
  • If it still errors (it happened to me once), just 'Rebuild Environment' and it will probably work.