Commit e9dc653: Spelling

ask committed Aug 7, 2011
1 parent 60da4a3
Showing 11 changed files with 62 additions and 29 deletions.
3 changes: 2 additions & 1 deletion FAQ
@@ -461,7 +461,8 @@ Tasks
How can I reuse the same connection when applying tasks?
--------------------------------------------------------

-**Answer**: See :ref:`executing-connections`.
+**Answer**: Yes! See the :setting:`BROKER_POOL_LIMIT` setting.
+This setting will be enabled by default in 3.0.
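
As a sketch, opting in on 2.3 is a single setting in your configuration
module (the limit value here is illustrative):

.. code-block:: python

    # celeryconfig.py
    BROKER_POOL_LIMIT = 10  # max broker connections kept open for reuse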

.. _faq-execute-task-by-name:

6 changes: 3 additions & 3 deletions celery/app/base.py
@@ -51,7 +51,7 @@ def pyimplementation():


class LamportClock(object):
-"""Lamports logical clock.
+"""Lamport's logical clock.
From Wikipedia:
@@ -80,7 +80,7 @@ class LamportClock(object):
When sending a message use :meth:`forward` to increment the clock,
when receiving a message use :meth:`adjust` to sync with
-the timestamp of the incoming message.
+the time stamp of the incoming message.
"""
#: The clocks current value.
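
For context, the docstring corrected above describes Lamport's algorithm;
a minimal sketch of the forward/adjust protocol (an illustration, not
Celery's actual implementation) could look like:

.. code-block:: python

    import threading

    class LamportClock(object):
        def __init__(self):
            self.value = 0
            self.mutex = threading.Lock()

        def forward(self):
            # On send: tick the clock and stamp the outgoing message.
            with self.mutex:
                self.value += 1
                return self.value

        def adjust(self, other):
            # On receive: jump past the incoming time stamp so that
            # causally related events stay ordered.
            with self.mutex:
                self.value = max(self.value, other) + 1
                return self.value
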
@@ -382,7 +382,7 @@ def amqp(self):

@cached_property
def backend(self):
-"""Storing/retreiving task state. See
+"""Storing/retrieving task state. See
:class:`~celery.backend.base.BaseBackend`."""
return self._get_backend()

6 changes: 3 additions & 3 deletions celery/app/task/__init__.py
@@ -43,7 +43,7 @@ def get(self, key, default=None):


class TaskType(type):
-"""Metaclass for tasks.
+"""Meta class for tasks.
Automatically registers the task in the task registry, except
if the `abstract` attribute is set.
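
A minimal sketch of the registration pattern this docstring describes
(the ``registry`` dict is illustrative, not Celery's actual internals):

.. code-block:: python

    registry = {}  # hypothetical task registry

    class TaskType(type):
        def __new__(cls, name, bases, attrs):
            task_cls = super(TaskType, cls).__new__(cls, name, bases, attrs)
            if not attrs.get("abstract"):
                registry[name] = task_cls  # auto-register concrete tasks
            return task_cls
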
@@ -216,7 +216,7 @@ class BaseTask(object):
#: worker crashes mid execution (which may be acceptable for some
#: applications).
#:
-#: The application default can be overriden with the
+#: The application default can be overridden with the
#: :setting:`CELERY_ACKS_LATE` setting.
acks_late = False
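
As a sketch, the per-task override described above might look like this
(task body illustrative):

.. code-block:: python

    from celery.task import task

    @task(acks_late=True)    # ack after execution: a worker crash
    def process_item(item):  # causes redelivery, not a lost message
        pass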

@@ -374,7 +374,7 @@ def apply_async(self, args=None, kwargs=None, countdown=None,
:keyword exchange: The named exchange to send the task to.
Defaults to the :attr:`exchange` attribute.
-:keyword exchange_type: The exchange type to initalize the exchange
+:keyword exchange_type: The exchange type to initialize the exchange
if not already declared. Defaults to the
:attr:`exchange_type` attribute.
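
For illustration, a call exercising these keywords, assuming a task
``mytask`` (exchange name and type are hypothetical):

.. code-block:: python

    result = mytask.apply_async(args=[8, 8],
                                exchange="media",
                                exchange_type="direct")
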
18 changes: 9 additions & 9 deletions celery/worker/consumer.py
@@ -18,7 +18,7 @@
consumer (+ QoS), and the broadcast remote control command consumer.
Also if events are enabled it configures the event dispatcher and starts
-up the hartbeat thread.
+up the heartbeat thread.
* Finally it can consume messages. :meth:`~Consumer.consume_messages`
is simply an infinite loop waiting for events on the AMQP channels.
@@ -60,7 +60,7 @@
* Notice that when the connection is lost all internal queues are cleared
because we can no longer ack the messages reserved in memory.
-Hoever, this is not dangerous as the broker will resend them
+However, this is not dangerous as the broker will resend them
to another worker when the channel is closed.
* **WARNING**: :meth:`~Consumer.stop` does not close the connection!
@@ -194,7 +194,7 @@ def update(self):

class Consumer(object):
"""Listen for messages received from the broker and
-move them the the ready queue for task processing.
+move them to the ready queue for task processing.
:param ready_queue: See :attr:`ready_queue`.
:param eta_schedule: See :attr:`eta_schedule`.
@@ -226,7 +226,7 @@ class Consumer(object):

#: The thread that sends event heartbeats at regular intervals.
#: The heartbeats are used by monitors to detect that a worker
-#: went offline/disappeared.
+#: went off-line/disappeared.
heart = None

#: The logger instance to use. Defaults to the default Celery logger.
@@ -289,7 +289,7 @@ def __init__(self, ready_queue, eta_schedule, logger,
def start(self):
"""Start the consumer.
-Automatically surivives intermittent connection failure,
+Automatically survives intermittent connection failure,
and will retry establishing the connection and restart
consuming messages.
@@ -348,7 +348,7 @@ def on_task(self, task):
eta = timer2.to_timestamp(task.eta)
except OverflowError, exc:
self.logger.error(
-"Couldn't convert eta %s to timestamp: %r. Task: %r" % (
+"Couldn't convert eta %s to time stamp: %r. Task: %r" % (
task.eta, exc, task.info(safe=True)),
exc_info=sys.exc_info())
task.acknowledge()
@@ -392,7 +392,7 @@ def receive_message(self, body, message):
:param message: The kombu message object.
"""
-# need to guard against errors occuring while acking the message.
+# need to guard against errors occurring while acking the message.
def ack():
try:
message.ack()
@@ -558,7 +558,7 @@ def reset_connection(self):
self.initial_prefetch_count, self.logger)
self.qos.update()

-# receive_message handles incomsing messages.
+# receive_message handles incoming messages.
self.task_consumer.register_callback(self.receive_message)

# Setup the process mailbox.
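
As a sketch of the wiring described here, a kombu consumer that dispatches
every received message to a callback (kombu 1.x-era API assumed; names
illustrative):

.. code-block:: python

    from kombu.connection import BrokerConnection
    from kombu.entity import Queue
    from kombu.messaging import Consumer

    connection = BrokerConnection(hostname="localhost", userid="guest",
                                  password="guest", virtual_host="/")
    channel = connection.channel()

    def receive_message(body, message):
        print("received: %r" % (body, ))
        message.ack()  # acking can fail on connection loss, hence the
                       # guard discussed in the diff above

    consumer = Consumer(channel, [Queue("celery")])
    consumer.register_callback(receive_message)
    consumer.consume()
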
@@ -583,7 +583,7 @@ def restart_heartbeat(self):
"""Restart the heartbeat thread.
This thread sends heartbeat events at intervals so monitors
-can tell if the worker is offline/missing.
+can tell if the worker is off-line/missing.
"""
self.heart = Heart(self.priority_timer, self.event_dispatcher)
20 changes: 17 additions & 3 deletions docs/configuration.rst
@@ -392,7 +392,7 @@ Example configuration
MongoDB backend settings
------------------------

-.. note::
+.. note::

The MongoDB backend requires the :mod:`pymongo` library:
http://github.com/mongodb/mongo-python-driver/tree/master
@@ -535,7 +535,7 @@ BROKER_TRANSPORT
The Kombu transport to use. Default is ``amqplib``.

You can use a custom transport class name, or select one of the
-built-in transports: ``amqplib``, ``pika``, ``redis``, ``beanstalk``,
+built-in transports: ``amqplib``, ``pika``, ``redis``, ``beanstalk``,
``sqlalchemy``, ``django``, ``mongodb``, ``couchdb``.

.. setting:: BROKER_HOST
@@ -587,6 +587,8 @@ by all transports.
BROKER_POOL_LIMIT
~~~~~~~~~~~~~~~~~

+.. versionadded:: 2.3
+
The maximum number of connections that can be open in the connection pool.

A good default value could be 10, or more if you're using eventlet/gevent
@@ -635,6 +637,8 @@ Default is 100 retries.
BROKER_TRANSPORT_OPTIONS
~~~~~~~~~~~~~~~~~~~~~~~~

+.. versionadded:: 2.2
+
A dict of additional options passed to the underlying transport.

See your transport user manual for supported options (if any).
@@ -750,6 +754,8 @@ methods that have been registered with :mod:`kombu.serialization.registry`.
CELERY_TASK_PUBLISH_RETRY
~~~~~~~~~~~~~~~~~~~~~~~~~

+.. versionadded:: 2.2
+
Decides if publishing task messages will be retried in the case
of connection loss or other connection errors.
See also :setting:`CELERY_TASK_PUBLISH_RETRY_POLICY`.
@@ -761,6 +767,8 @@ Disabled by default.
CELERY_TASK_PUBLISH_RETRY_POLICY
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

+.. versionadded:: 2.2
+
Defines the default policy when retrying publishing a task message in
the case of connection loss or other connection errors.
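
For illustration, a policy sketch using the retry keywords kombu
understands (values are examples only):

.. code-block:: python

    CELERY_TASK_PUBLISH_RETRY_POLICY = {
        "max_retries": 3,      # give up after 3 attempts
        "interval_start": 0,   # first retry is immediate
        "interval_step": 0.2,  # add 0.2s to the wait per retry
        "interval_max": 0.2,   # but never wait more than 0.2s
    }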

@@ -1050,6 +1058,8 @@ Send events so the worker can be monitored by tools like `celerymon`.
CELERY_SEND_TASK_SENT_EVENT
~~~~~~~~~~~~~~~~~~~~~~~~~~~

+.. versionadded:: 2.2
+
If enabled, a `task-sent` event will be sent for every task so tasks can be
tracked before they are consumed by a worker.

@@ -1105,8 +1115,10 @@ Logging
CELERYD_HIJACK_ROOT_LOGGER
~~~~~~~~~~~~~~~~~~~~~~~~~~

+.. versionadded:: 2.2
+
By default any previously configured logging options will be reset,
-because the Celery apps "hijacks" the root logger.
+because the Celery programs "hijacks" the root logger.

If you want to customize your own logging then you can disable
this behavior.
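
For illustration, disabling the behavior is a single setting:

.. code-block:: python

    CELERYD_HIJACK_ROOT_LOGGER = False  # keep your own logging setup
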
@@ -1223,6 +1235,8 @@ Default is ``processes``.
CELERYD_AUTOSCALER
~~~~~~~~~~~~~~~~~~

+.. versionadded:: 2.2
+
Name of the autoscaler class to use.

Default is ``"celery.worker.autoscale.Autoscaler"``.
5 changes: 4 additions & 1 deletion docs/includes/introduction.txt
@@ -23,10 +23,11 @@ Celery is used in production systems to process millions of tasks a day.
Celery is written in Python, but the protocol can be implemented in any
language. It can also `operate with other languages using webhooks`_.

-The recommended message broker is `RabbitMQ`_, but limited support for
+The recommended message broker is `RabbitMQ`_, but `limited support`_ for
`Redis`_, `Beanstalk`_, `MongoDB`_, `CouchDB`_ and
databases (using `SQLAlchemy`_ or the `Django ORM`_) is also available.


Celery is easy to integrate with `Django`_, `Pylons`_ and `Flask`_, using
the `django-celery`_, `celery-pylons`_ and `Flask-Celery`_ add-on packages.

@@ -47,6 +48,8 @@ the `django-celery`_, `celery-pylons`_ and `Flask-Celery`_ add-on packages.
.. _`Flask-Celery`: http://github.com/ask/flask-celery/
.. _`operate with other languages using webhooks`:
http://ask.github.com/celery/userguide/remote-tasks.html
+.. _`limited support`:
+http://kombu.readthedocs.org/en/latest/introduction.html#transport-comparison

.. contents::
:local:
2 changes: 1 addition & 1 deletion docs/userguide/concurrency/eventlet.rst
@@ -32,7 +32,7 @@ spawn hundreds, or thousands of green threads. In an informal test with a
feed hub system the Eventlet pool could fetch and process hundreds of feeds
every second, while the multiprocessing pool spent 14 seconds processing 100
feeds. Note that is one of the applications evented I/O is especially good
-at (asynchronous HTTP requests). You may want a a mix of both Eventlet and
+at (asynchronous HTTP requests). You may want a mix of both Eventlet and
multiprocessing workers, and route tasks according to compatibility or
what works best.
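
As a sketch, such routing could be expressed with the
:setting:`CELERY_ROUTES` setting (task and queue names hypothetical):

.. code-block:: python

    CELERY_ROUTES = {
        "feeds.tasks.refresh_feed": {"queue": "io_bound"},  # eventlet pool
        "images.tasks.resize": {"queue": "cpu_bound"},      # multiprocessing
    }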

14 changes: 9 additions & 5 deletions docs/userguide/executing.rst
@@ -197,12 +197,16 @@ to use when sending a task:
Connections and connection timeouts.
====================================
-Currently there is no support for broker connection pools, so
-`apply_async` establishes and closes a new connection every time
-it is called. This is something you need to be aware of when sending
-more than one task at a time.
+.. admonition:: Automatic Pool Support
+
+    In version 2.3 there is now support for automatic connection pools,
+    so you don't have to manually handle connections and publishers
+    to reuse connections.
+
+    See the :setting:`BROKER_POOL_LIMIT` setting.
+    This setting will be enabled by default in version 3.0.
+
-You handle the connection manually by creating a
+You can handle the connection manually by creating a
publisher:
.. code-block:: python
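
    # Sketch only: the original example is collapsed in this view.
    # Assumes the 2.x API where Task.get_publisher() returns a publisher
    # bound to a connection that is reused across calls and must be
    # closed manually afterwards.
    results = []
    publisher = add.get_publisher()
    try:
        for args in [(2, 2), (4, 4), (8, 8)]:
            results.append(add.apply_async(args=args, publisher=publisher))
    finally:
        publisher.close()
        publisher.connection.close()
    print([res.get() for res in results])
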
14 changes: 12 additions & 2 deletions docs/userguide/optimizing.rst
@@ -52,6 +52,16 @@ like adding new worker nodes, or revoking unnecessary tasks.
Worker Settings
===============

+.. _optimizing-connection-pools:
+
+Broker Connection Pools
+-----------------------
+
+You should enable the :setting:`BROKER_POOL_LIMIT` setting,
+as this will drastically improve overall performance.
+
+This setting will be enabled by default in version 3.0.
+
.. _optimizing-prefetch-limit:

Prefetch Limits
@@ -74,15 +84,15 @@ If you have many tasks with a long duration you want
the multiplier value to be 1, which means it will only reserve one
task per worker process at a time.

-However -- If you have many short-running tasks, and throughput/roundtrip
+However -- If you have many short-running tasks, and throughput/round trip
latency[#] is important to you, this number should be large. The worker is
able to process more tasks per second if the messages have already been
prefetched, and is available in memory. You may have to experiment to find
the best value that works for you. Values like 50 or 150 might make sense in
these circumstances. Say 64, or 128.

If you have a combination of long- and short-running tasks, the best option
-is to use two worker nodes that are configured separatly, and route
+is to use two worker nodes that are configured separately, and route
the tasks according to the run-time. (see :ref:`guide-routing`).
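
For illustration, the two regimes described above expressed as settings
(values are examples):

.. code-block:: python

    # Many long-running tasks: reserve only one task per worker process.
    CELERYD_PREFETCH_MULTIPLIER = 1

    # Many short, latency-sensitive tasks: prefetch aggressively.
    #CELERYD_PREFETCH_MULTIPLIER = 128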

.. [*] RabbitMQ and other brokers deliver messages round-robin,
1 change: 1 addition & 0 deletions docs/userguide/tasksets.rst
@@ -164,6 +164,7 @@ It supports the following operations:
Chords
======

+.. versionadded:: 2.3

A chord is a task that only executes after all of the tasks in a taskset has
finished executing.
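
A sketch of the idea (2.3-era API assumed; the import path and the
``tsum`` summing task are illustrative):

.. code-block:: python

    from celery.task import chord
    from tasks import add, tsum

    # tsum() runs only once every add() in the header has finished.
    result = chord(add.subtask((i, i)) for i in xrange(10))(tsum.subtask())
    print(result.get())  # 2 * (0 + 1 + ... + 9) == 90
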
2 changes: 1 addition & 1 deletion docs/userguide/workers.rst
@@ -260,7 +260,7 @@ Example changing the rate limit for the `myapp.mytask` task to accept
>>> rate_limit("myapp.mytask", "200/m")

Example changing the rate limit on a single host by specifying the
-destination hostname::
+destination host name::

>>> rate_limit("myapp.mytask", "200/m",
... destination=["worker1.example.com"])
