- RQ now stores multiple job execution results. This feature is only available on Redis >= 5.0 (it uses Redis Streams). Please refer to the docs for more info. Thanks @selwin!
- Improve performance when enqueueing many jobs at once. Thanks @rggjan!
- Redis server version is now cached in connection object. Thanks @odarbelaeze!
- Properly handle the `at_front` argument when jobs are scheduled. Thanks @gabriels1234!
- Add type hints to RQ's code base. Thanks @lowercase00!
- Fixed a bug where exceptions are logged twice. Thanks @selwin!
- Don't delete `job.worker_name` after job is finished. Thanks @eswolinsky3241!
- `queue.enqueue_many()` now supports `on_success` and `on_failure` arguments. Thanks @y4n9squared!
- You can now pass `enqueue_at_front` to `Dependency()` objects to put dependent jobs at the front when they are enqueued. Thanks @jtfidje!
- Fixed a bug where workers may wrongly acquire scheduler locks. Thanks @milesjwinter!
- Jobs should not be enqueued if any one of its dependencies is canceled. Thanks @selwin!
- Fixed a bug when handling jobs that have been stopped. Thanks @ronlut!
- Fixed a bug in handling Redis connections that don't allow the `SETNAME` command. Thanks @yilmaz-burak!
- This will be the last RQ version that supports Python 3.5.
- Allow jobs to be enqueued even when their dependencies fail via `Dependency(allow_failure=True)`. Thanks @mattchan-tencent, @caffeinatedMike and @selwin!
- When stopped jobs are deleted, they should also be removed from `FailedJobRegistry`. Thanks @selwin!
- `job.requeue()` now supports an `at_front` argument. Thanks @buroa!
- Added SSL support for Sentinel connections. Thanks @nevious!
- `SimpleWorker` now works better on Windows. Thanks @caffeinatedMike!
- Added `on_failure` and `on_success` arguments to the `@job` decorator. Thanks @nepta1998!
- Fixed a bug in dependency handling. Thanks @th3hamm0r!
- Minor fixes and optimizations by @xavfernandez, @olaure, @kusaku.
- BACKWARDS INCOMPATIBLE: synchronous execution of jobs now correctly mimics async job execution. An exception is no longer raised when a job fails; the job status is now correctly set to `FAILED` and failure callbacks are properly called when a job is run synchronously. Thanks @ericman93!
- Fixes a bug that could cause job keys to be left over when `result_ttl=0`. Thanks @selwin!
- Allow the `ssl_cert_reqs` argument to be passed to Redis. Thanks @mgcdanny!
- Better compatibility with Python 3.10. Thanks @rpkak!
- `job.cancel()` now also removes itself from registries. Thanks @joshcoden!
- Pubsub threads are now launched in `daemon` mode. Thanks @mik3y!
- You can now enqueue jobs from CLI. Docs here. Thanks @rpkak!
- Added a new `CanceledJobRegistry` to keep track of canceled jobs. Thanks @selwin!
- Added custom serializer support to various places in RQ. Thanks @joshcoden!
- `cancel_job(job_id, enqueue_dependents=True)` allows you to cancel a job while enqueueing its dependents. Thanks @joshcoden!
- Added `job.get_meta()` to fetch fresh meta values directly from Redis. Thanks @aparcar!
- Fixes a race condition that could cause jobs to be incorrectly added to `FailedJobRegistry`. Thanks @selwin!
- Requeueing a job now clears `job.exc_info`. Thanks @selwin!
- Repo infrastructure improvements by @rpkak.
- Other minor fixes by @cesarferradas and @bbayles.
- Added success and failure callbacks. You can now do `queue.enqueue(foo, on_success=do_this, on_failure=do_that)`. Thanks @selwin!
- Added `queue.enqueue_many()` to enqueue many jobs in one go. Thanks @joshcoden!
- Various improvements to CLI commands. Thanks @rpkak!
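The worker invokes these callbacks around job execution. A minimal pure-Python sketch of that flow (not RQ's worker code — the `job` and `connection` arguments that a real worker would pass are stubbed out as `None` here):

```python
import sys

def run_with_callbacks(func, args=(), on_success=None, on_failure=None):
    """Run func and fire on_success / on_failure the way a worker would.
    Illustrative only: a real RQ worker passes the job and connection
    objects to the callbacks; this sketch passes None placeholders."""
    try:
        result = func(*args)
    except Exception:
        if on_failure is not None:
            # Failure callbacks receive exception info
            on_failure(None, None, *sys.exc_info())
        raise
    if on_success is not None:
        # Success callbacks receive the job's return value
        on_success(None, None, result)
    return result

results = []
run_with_callbacks(lambda x, y: x + y, (2, 3),
                   on_success=lambda job, conn, result: results.append(result))
print(results)  # [5]
```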
- Minor logging improvements. Thanks @clavigne and @natbusa!
- Jobs that fail due to hard shutdowns are now retried. Thanks @selwin!
- `Scheduler` now works with custom serializers. Thanks @alella!
- Added support for click 8.0. Thanks @rpkak!
- Enqueueing static methods are now supported. Thanks @pwws!
- Job exceptions no longer get printed twice. Thanks @petrem!
- You can now declare multiple job dependencies. Thanks @skieffer and @thomasmatecki for laying the groundwork for multi dependency support in RQ.
- Added `RoundRobinWorker` and `RandomWorker` classes to control how jobs are dequeued from multiple queues. Thanks @bielcardona!
- Added a `--serializer` option to the `rq worker` CLI. Thanks @f0cker!
- Added support for running asyncio tasks. Thanks @MyrikLD!
- Added a new `STOPPED` job status so that you can differentiate between failed and manually stopped jobs. Thanks @dralley!
- Fixed a serialization bug when used with the job dependency feature. Thanks @jtfidje!
- `clean_worker_registry()` now works in batches of 1,000 jobs to prevent modifying too many keys at once. Thanks @AxeOfMen and @TheSneak!
- Workers will now wait and try to reconnect in case of Redis connection errors. Thanks @Asrst!
- Added a `job.worker_name` attribute that tells you which worker is executing a job. Thanks @selwin!
- Added `send_stop_job_command()` that tells a worker to stop executing a job. Thanks @selwin!
- Added `JSONSerializer` as an alternative to the default `pickle`-based serializer. Thanks @JackBoreczky!
- Fixes `RQScheduler` running on Redis with `ssl=True`. Thanks @BobReid!
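The default worker always drains queues in priority order; a round-robin strategy instead rotates which queue gets polled first, so no queue is starved. A minimal sketch of the rotation idea (pure Python, not RQ's actual implementation):

```python
from collections import deque

def round_robin_order(queue_names):
    """Generator yielding the polling order, rotated by one each cycle
    so every queue periodically gets first pick."""
    ring = deque(queue_names)
    while True:
        yield list(ring)
        ring.rotate(-1)  # move the front queue to the back

orders = round_robin_order(['high', 'default', 'low'])
print(next(orders))  # ['high', 'default', 'low']
print(next(orders))  # ['default', 'low', 'high']
```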
- Worker now properly releases scheduler lock when run in burst mode. Thanks @selwin!
- Workers now listen to external commands via pubsub. The first two features taking advantage of this infrastructure are `send_shutdown_command()` and `send_kill_horse_command()`. Thanks @selwin!
- Added a `job.last_heartbeat` property that's periodically updated when the job is running. Thanks @theambient!
- Horses are now killed by their parent group. This helps cleanly kill all related processes if a job uses multiprocessing. Thanks @theambient!
- Fixed scheduler usage with Redis connections that use custom parser classes. Thanks @selwin!
- Scheduler now enqueues jobs in batches to prevent lock timeouts. Thanks @nikkonrom!
- Scheduler now follows RQ worker's logging configuration. Thanks @christopher-dG!
- Scheduler now uses the same connection class as the connection it's given. Thanks @pacahon!
- Fixes a bug that puts retried jobs in `FailedJobRegistry`. Thanks @selwin!
- Fixed a deprecated import. Thanks @elmaghallawy!
- Fixes for Redis server version parsing. Thanks @selwin!
- Retries can now be set through the `@job` decorator. Thanks @nerok!
- Log messages below `logging.ERROR` are now sent to stdout. Thanks @selwin!
- Better logger name for `RQScheduler`. Thanks @atainter!
- Better handling of exceptions thrown by horses. Thanks @theambient!
- Failed jobs can now be retried. Thanks @selwin!
- Fixed scheduler on Python > 3.8.0. Thanks @selwin!
- RQ is now aware of which version of Redis server it's running on. Thanks @aparcar!
- RQ now uses `hset()` on redis-py >= 3.5.0. Thanks @aparcar!
- Fix incorrect worker timeout calculation in `SimpleWorker.execute_job()`. Thanks @davidmurray!
- Make horse handling logic more robust. Thanks @wevsty!
- Added `job.get_position()` and `queue.get_job_position()`. Thanks @aparcar!
- Longer TTLs for worker keys to prevent them from expiring inside the worker lifecycle. Thanks @selwin!
- Long job args/kwargs are now truncated during logging. Thanks @JhonnyBn!
- `job.requeue()` now returns the modified job. Thanks @ericatkin!
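Truncating long arguments keeps log lines bounded when jobs are called with huge payloads. A hypothetical helper showing the idea (illustrative only, not the function RQ itself uses):

```python
def truncate_long_string(value, max_length=75):
    """Return repr(value) shortened to max_length characters plus an
    ellipsis marker; illustrative stand-in for RQ's log truncation."""
    text = repr(value)
    if len(text) <= max_length:
        return text
    return text[:max_length] + '...'

print(truncate_long_string('short'))  # 'short'
print(truncate_long_string(list(range(1000)), max_length=20))
```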
- Reverted changes to the `hmset` command which caused workers on Redis server < 4 to crash. Thanks @selwin!
- Merged in more groundwork to enable jobs with multiple dependencies. Thanks @thomasmatecki!
- Default serializer now uses `pickle.HIGHEST_PROTOCOL` for backward compatibility reasons. Thanks @bbayles!
- Avoid deprecation warnings on redis-py >= 3.5.0. Thanks @bbayles!
- Custom serializer is now supported. Thanks @solababs!
- `delay()` now accepts a `job_id` argument. Thanks @grayshirt!
- Fixed a bug that may cause early termination of scheduled or requeued jobs. Thanks @rmartin48!
- When a job is scheduled, always add queue name to a set containing active RQ queue names. Thanks @mdawar!
- Added `--sentry-ca-certs` and `--sentry-debug` parameters to the `rq worker` CLI. Thanks @kichawa!
- Jobs cleaned up by `StartedJobRegistry` are given an exception info. Thanks @selwin!
- Python 2.7 is no longer supported. Thanks @selwin!
- Support for infinite job timeout. Thanks @theY4Kman!
- Added a `__main__` file so you can now do `python -m rq.cli`. Thanks @bbayles!
- Fixes an issue that may cause zombie processes. Thanks @wevsty!
- `job_id` is now passed to the logger during failed jobs. Thanks @smaccona!
- `queue.enqueue_at()` and `queue.enqueue_in()` now support explicit `args` and `kwargs` function invocation. Thanks @selwin!
- `Job.fetch()` now properly handles unpickleable return values. Thanks @selwin!
- `enqueue_at()` and `enqueue_in()` now set job status to `scheduled`. Thanks @coolhacker170597!
- Failed job data is now automatically expired by Redis. Thanks @selwin!
- Fixes `RQScheduler` logging configuration. Thanks @FlorianPerucki!
- This release also contains an alpha version of RQ's builtin job scheduling mechanism. Thanks @selwin!
- Various internal API changes in preparation to support multiple job dependencies. Thanks @thomasmatecki!
- `--verbose` or `--quiet` CLI arguments should override `--logging-level`. Thanks @zyt312074545!
- Fixes a bug in `rq info` where it doesn't show workers for empty queues. Thanks @zyt312074545!
- Fixed `queue.enqueue_dependents()` on custom `Queue` classes. Thanks @van-ess0!
- RQ and Python versions are now stored in job metadata. Thanks @eoranged!
- Added a `failure_ttl` argument to the job decorator. Thanks @pax0r!
- Added `max_jobs` to `Worker.work` and `--max-jobs` to the `rq worker` CLI. Thanks @perobertson!
- Passing `--disable-job-desc-logging` to `rq worker` now does what it's supposed to do. Thanks @janierdavila!
- `StartedJobRegistry` now properly handles jobs with infinite timeout. Thanks @macintoshpie!
- The `rq info` CLI command now cleans up registries when it first runs. Thanks @selwin!
- Replaced the use of `procname` with `setproctitle`. Thanks @j178!
Backward incompatible changes:
- `job.status` has been removed. Use `job.get_status()` and `job.set_status()` instead. Thanks @selwin!
- `FailedQueue` has been replaced with `FailedJobRegistry`:
  - The `get_failed_queue()` function has been removed. Please use `FailedJobRegistry(queue=queue)` instead.
  - `move_to_failed_queue()` has been removed.
  - RQ now provides a mechanism to automatically clean up failed jobs. By default, failed jobs are kept for 1 year.
  - Thanks @selwin!
- RQ's custom job exception handling mechanism has also changed slightly:
  - RQ's default exception handling mechanism (moving jobs to `FailedJobRegistry`) can be disabled with `Worker(disable_default_exception_handler=True)`.
  - Custom exception handlers are no longer executed in reverse order.
  - Thanks @selwin!
- `Worker` names are now randomized. Thanks @selwin!
- The `timeout` argument on `queue.enqueue()` has been deprecated in favor of `job_timeout`. Thanks @selwin!
- Sentry integration has been reworked:
  - RQ now uses the new sentry-sdk in place of the deprecated Raven library.
  - RQ will look for the more explicit `RQ_SENTRY_DSN` environment variable instead of `SENTRY_DSN` before instantiating the Sentry integration.
  - Thanks @selwin!
- Fixed a `Worker.total_working_time` accounting bug. Thanks @selwin!
- Compatibility with Redis 3.0. Thanks @dash-rai!
- Added a `job_timeout` argument to `queue.enqueue()`. This argument will eventually replace the `timeout` argument. Thanks @selwin!
- Added a `job_id` argument to the `BaseDeathPenalty` class. Thanks @loopbio!
- Fixed a bug which caused long-running jobs to time out under `SimpleWorker`. Thanks @selwin!
- You can now override the worker's name from the config file. Thanks @houqp!
- Horses will now return exit code 1 if they don't terminate properly (e.g when Redis connection is lost). Thanks @selwin!
- Added `date_format` and `log_format` arguments to `Worker` and the `rq worker` CLI. Thanks @shikharsg!
- Added support for Python 3.7. Since `async` is a keyword in Python 3.7, `Queue(async=False)` has been changed to `Queue(is_async=False)`. The `async` keyword argument will still work, but raises a `DeprecationWarning`. Thanks @dchevell!
- `Worker` now periodically sends heartbeats and checks whether the child process is still alive while performing long-running jobs. Thanks @Kriechi!
- `Job.create` now accepts `timeout` in string format (e.g. `1h`). Thanks @theodesp!
- `worker.main_work_horse()` should exit with return code `0` even if job execution fails. Thanks @selwin!
- `job.delete(delete_dependents=True)` will delete a job along with its dependents. Thanks @olingerc!
- Other minor fixes and documentation updates.
- The `@job` decorator now accepts `description`, `meta`, `at_front` and `depends_on` kwargs. Thanks @jlucas91 and @nlyubchich!
- Added the capability to fetch workers by queue using `Worker.all(queue=queue)` and `Worker.count(queue=queue)`.
- Improved RQ's default logging configuration. Thanks @samuelcolvin!
- `job.data` and `job.exc_info` are now stored in compressed format in Redis.
- Fixed an issue where `worker.refresh()` may fail when `birth_date` is not set. Thanks @vanife!
- Fixed an issue where `worker.refresh()` may fail when upgrading from previous versions of RQ.
- `Worker` statistics! `Worker` now keeps track of `last_heartbeat`, `successful_job_count`, `failed_job_count` and `total_working_time`. Thanks @selwin!
- `Worker` now sends a heartbeat during suspension checks. Thanks @theodesp!
- Added a `queue.delete()` method to delete `Queue` objects entirely from Redis. Thanks @theodesp!
- More robust exception string decoding. Thanks @stylight!
- Added a `--logging-level` option to command line scripts. Thanks @jiajunhuang!
- Added millisecond precision to job timestamps. Thanks @samuelcolvin!
- Python 2.6 is no longer supported. Thanks @samuelcolvin!
- Fixed an issue where `job.save()` may fail with an unpickleable return value.
- Replace `job.id` with the `Job` instance in local `_job_stack`. Thanks @katichev!
- `job.save()` no longer implicitly calls `job.cleanup()`. Thanks @katichev!
- Properly catch `StopRequested` in `worker.heartbeat()`. Thanks @fate0!
- You can now pass in a timeout in days. Thanks @yaniv-g!
- The core logic of sending a job to `FailedQueue` has been moved to `rq.handlers.move_to_failed_queue`. Thanks @yaniv-g!
- RQ CLI commands now accept a `--path` parameter. Thanks @kirill and @sjtbham!
- Make `job.dependency` slightly more efficient. Thanks @liangsijian!
- `FailedQueue` now returns jobs with the correct class. Thanks @amjith!
- Refactored APIs to allow custom `Connection`, `Job`, `Worker` and `Queue` classes via the CLI. Thanks @jezdez!
- `job.delete()` now properly cleans itself from job registries. Thanks @selwin!
- `Worker` should no longer overwrite `job.meta`. Thanks @WeatherGod!
- `job.save_meta()` can now be used to persist custom job data. Thanks @katichev!
- Added Redis Sentinel support. Thanks @strawposter!
- Made `Worker.find_by_key()` more efficient. Thanks @selwin!
- You can now specify a job `timeout` using strings such as `queue.enqueue(foo, timeout='1m')`. Thanks @luojiebin!
- Better unicode handling. Thanks @myme5261314 and @jaywink!
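String timeouts are shorthand for a number of seconds. A rough sketch of how such a value might be normalized (hypothetical helper illustrating the accepted formats, not necessarily matching RQ's internal parsing):

```python
def timeout_to_seconds(timeout):
    """Normalize an int, or a string like '30s', '1m', '2h' or '1d',
    to a number of seconds. Illustrative helper only."""
    if isinstance(timeout, int):
        return timeout
    units = {'s': 1, 'm': 60, 'h': 3600, 'd': 86400}
    suffix = timeout[-1].lower()
    if suffix in units:
        return int(timeout[:-1]) * units[suffix]
    return int(timeout)  # plain numeric string, e.g. '90'

print(timeout_to_seconds('1m'))  # 60
print(timeout_to_seconds('2h'))  # 7200
```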
- Sentry should default to HTTP transport. Thanks @Atala!
- Improved `HerokuWorker` termination logic. Thanks @samuelcolvin!
- Fixes a bug that prevents fetching jobs from `FailedQueue` (#765). Thanks @jsurloppe!
- Fixes a race condition when enqueueing jobs with dependencies (#742). Thanks @th3hamm0r!
- Skip a test that requires Linux signals on MacOS (#763). Thanks @jezdez!
- `enqueue_job` should use a Redis pipeline when available (#761). Thanks @mtdewulf!
- Better support for Heroku workers (#584, #715)
- Support for connecting using a custom connection class (#741)
- Fix: connection stack in default worker (#479, #641)
- Fix: `fetch_job` now checks that a job requested actually comes from the intended queue (#728, #733)
- Fix: Properly raise exception if a job dependency does not exist (#747)
- Fix: Job status not updated when horse dies unexpectedly (#710)
- Fix: `request_force_stop_sigrtmin` failing for Python 3 (#727)
- Fix: `Job.cancel()` method on failed queue (#707)
- Python 3.5 compatibility improvements (#729)
- Improved signal name lookup (#722)
- Jobs that depend on job with result_ttl == 0 are now properly enqueued.
- `cancel_job` now works properly. Thanks @jlopex!
- Jobs that execute successfully no longer try to remove themselves from the queue. Thanks @amyangfei!
- Worker now properly logs Falsy return values. Thanks @liorsbg!
- `Worker.work()` now accepts a `logging_level` argument. Thanks @jlopex!
- Logging related fixes by @redbaron4 and @butla!
- The `@job` decorator now accepts a `ttl` argument. Thanks @javimb!
- `Worker.__init__` now accepts a `queue_class` keyword argument. Thanks @antoineleclair!
- `Worker` now saves warm shutdown time. You can access this property from `worker.shutdown_requested_date`. Thanks @olingerc!
- Synchronous queues now properly set completed job status as finished. Thanks @ecarreras!
- `Worker` now correctly deletes `current_job_id` after failed job execution. Thanks @olingerc!
- `Job.create()` and `queue.enqueue_call()` now accept a `meta` argument. Thanks @tornstrom!
- Added a `job.started_at` property. Thanks @samuelcolvin!
- Cleaned up the implementation of `job.cancel()` and `job.delete()`. Thanks @glaslos!
- `Worker.execute_job()` now exports `RQ_WORKER_ID` and `RQ_JOB_ID` to OS environment variables. Thanks @mgk!
- `rqinfo` now accepts a `--config` option. Thanks @kfrendrich!
- The `Worker` class now has `request_force_stop()` and `request_stop()` methods that can be overridden by custom worker classes. Thanks @samuelcolvin!
- Other minor fixes by @VicarEscaped, @kampfschlaefer, @ccurvey, @zfz, @antoineleclair, @orangain, @nicksnell, @SkyLothar, @ahxxm and @horida.
- Job results are now logged at `DEBUG` level. Thanks @tbaugis!
- Modified `patch_connection` so the Redis connection can be easily mocked.
- Custom exception handlers are now called if the Redis connection is lost. Thanks @jlopex!
- Jobs can now depend on jobs in a different queue. Thanks @jlopex!
- Add support for the `--exception-handler` command line flag
- Fix compatibility with click >= 5.0
- Fix maximum recursion depth problem for very large queues that contain jobs that all fail
(July 8th, 2015)
- Fix compatibility with raven>=5.4.0
(June 3rd, 2015)
- Better API for instantiating Workers. Thanks @RyanMTB!
- Better support for unicode kwargs. Thanks @nealtodd and @brownstein!
- Workers now automatically clean up job registries every hour
- Jobs in `FailedQueue` now have their statuses set properly
- `enqueue_call()` no longer ignores `ttl`. Thanks @mbodock!
- Improved logging. Thanks @trevorprater!
(April 14th, 2015)
- Support SSL connection to Redis (requires redis-py>=2.10)
- Fix to prevent deep call stacks with large queues
(March 9th, 2015)
- Resolve performance issue when queues contain many jobs
- Restore the ability to specify connection params in config
- Record `birth_date` and `death_date` on Worker
- Add support for SSL URLs in Redis (and a `REDIS_SSL` config option)
- Fix encoding issues with non-ASCII characters in function arguments
- Fix Redis transaction management issue with job dependencies
(Jan 30th, 2015)
- RQ workers can now be paused and resumed using the `rq suspend` and `rq resume` commands. Thanks Jonathan Tushman!
- Jobs that are being performed are now stored in `StartedJobRegistry` for monitoring purposes. This also prevents currently active jobs from being orphaned/lost in the case of hard shutdowns.
- You can now monitor finished jobs by checking `FinishedJobRegistry`. Thanks Nic Cope for helping!
- Jobs with unmet dependencies are now created with `deferred` as their status. You can monitor deferred jobs by checking `DeferredJobRegistry`.
- It is now possible to enqueue a job at the beginning of a queue using `queue.enqueue(func, at_front=True)`. Thanks Travis Johnson!
- Command line scripts have all been refactored to use `click`. Thanks Lyon Zhang!
- Added a new `SimpleWorker` that does not fork when executing jobs. Useful for testing purposes. Thanks Cal Leeming!
- Added `--queue-class` and `--job-class` arguments to the `rqworker` script. Thanks David Bonner!
- Many other minor bug fixes and enhancements.
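`at_front=True` pushes a job onto the opposite end of the queue's underlying Redis list. The semantics can be sketched with a plain `deque` standing in for that list (illustrative only, not RQ's implementation):

```python
from collections import deque

def enqueue(queue, job, at_front=False):
    """Append to the back normally; push to the front when at_front=True,
    mirroring queue.enqueue(func, at_front=True)."""
    if at_front:
        queue.appendleft(job)
    else:
        queue.append(job)

q = deque()
enqueue(q, 'job1')
enqueue(q, 'job2')
enqueue(q, 'urgent', at_front=True)
print(list(q))  # ['urgent', 'job1', 'job2']
```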
(May 21st, 2014)
- Raise a warning when RQ workers are used with Sentry DSNs using asynchronous transports. Thanks Wei, Selwin & Toms!
(May 8th, 2014)
- Fix a bug where `rqworker` broke on Python 2.6. Thanks, Marko!
(May 7th, 2014)
- Properly declare redis dependency.
- Fix a NameError regression that was introduced in 0.4.3.
(May 6th, 2014)
- Make job and queue classes overridable. Thanks, Marko!
- Don't require connection for @job decorator at definition time. Thanks, Sasha!
- Syntactic code cleanup.
(April 28th, 2014)
- Add missing `depends_on` kwarg to the `@job` decorator. Thanks, Sasha!
(April 22nd, 2014)
- Fix bug where RQ 0.4 workers could not unpickle/process jobs from RQ < 0.4.
(April 22nd, 2014)
- Emptying the failed queue from the command line is now as simple as running `rqinfo -X` or `rqinfo --empty-failed-queue`.
- Job data is unpickled lazily. Thanks, Malthe!
- Removed dependency on the `times` library. Thanks, Malthe!
- Job dependencies! Thanks, Selwin.
- Custom worker classes, via the `--worker-class=path.to.MyClass` command line argument. Thanks, Selwin.
- `Queue.all()` and `rqinfo` now report empty queues, too. Thanks, Rob!
- Fixed a performance issue in `Queue.all()` when issued in large Redis DBs. Thanks, Rob!
- Birth and death dates are now stored as proper datetimes, not timestamps.
- Ability to provide a custom job description (instead of using the default function invocation hint). Thanks, İbrahim.
- Fix: temporary key for the compact queue is now randomly generated, which should avoid name clashes for concurrent compact actions.
- Fix: `Queue.empty()` now correctly deletes job hashes from Redis.
(December 17th, 2013)
- Bug fix where the worker crashes on jobs that have their timeout explicitly removed. Thanks for reporting, @algrs.
(December 16th, 2013)
- Bug fix where a worker could time out before the job was done, removing it from any monitor overviews (#288).
(August 23rd, 2013)
- Some more fixes in command line scripts for Python 3
(August 20th, 2013)
- Bug fix in setup.py
(August 20th, 2013)
- Python 3 compatibility (Thanks, Alex!)
- Minor bug fix where Sentry would break when func cannot be imported
(June 17th, 2013)
- `rqworker` and `rqinfo` have a `--url` argument to connect to a Redis URL.
- `rqworker` and `rqinfo` have a `--socket` option to connect to a Redis server through a Unix socket.
- `rqworker` reads `SENTRY_DSN` from the environment, unless specifically provided on the command line.
- `Queue` has a new API that supports paging: `get_jobs(3, 7)`, which will return at most 7 jobs, starting from the 3rd.
(February 26th, 2013)
- Fixed bug where workers would not execute builtin functions properly.
(February 18th, 2013)
- Worker registrations now expire. This should prevent `rqinfo` from reporting about ghosted workers. (Thanks, @yaniv-aknin!)
- `rqworker` will automatically clean up ghosted worker registrations from pre-0.3.6 runs.
- `rqworker` grew a `-q` flag, to be more silent (only warnings/errors are shown)
(February 6th, 2013)
- `ended_at` is now recorded for normally finished jobs, too. (Previously only for failed jobs.)
- Adds support for both `Redis` and `StrictRedis` connection types
- Makes `StrictRedis` the default connection type if none is explicitly provided
(January 23rd, 2013)
- Restore compatibility with Python 2.6.
(January 18th, 2013)
- Fix bug where work was lost due to silently ignored unpickle errors.
- Jobs can now access the current `Job` instance from within. Relevant documentation here.
- Custom properties can be set by modifying the `job.meta` dict. Relevant documentation here.
- `rqworker` now has an optional `--password` flag.
- Removed the `logbook` dependency (in favor of `logging`).
(September 3rd, 2012)
- Fixes broken `rqinfo` command.
- Improve compatibility with Python < 2.7.
(August 30th, 2012)
- `.enqueue()` now takes a `result_ttl` keyword argument that can be used to change the expiration time of results.
- The Queue constructor now takes an optional `async=False` argument to bypass the worker (for testing purposes).
- Jobs now carry status information. To get job status information, like whether a job is queued, finished, or failed, use the property `status`, or one of the new boolean accessor properties `is_queued`, `is_finished` or `is_failed`.
- Job return values are always stored explicitly, even if they have no explicit return value or return `None` (with given TTL of course). This makes it possible to distinguish between a job that explicitly returned `None` and a job that isn't finished yet (see the `status` property).
- Custom exception handlers can now be configured in addition to, or to fully replace, moving failed jobs to the failed queue. Relevant documentation here and here.
- `rqworker` now supports passing in configuration files instead of the many command line options: `rqworker -c settings` will source `settings.py`.
- `rqworker` now supports one-flag setup to enable Sentry as its exception handler: `rqworker --sentry-dsn="http://public:secret@example.com/1"`. Alternatively, you can use a settings file and configure `SENTRY_DSN = 'http://public:secret@example.com/1'` instead.
(August 5th, 2012)
- Reliability improvements
  - Warm shutdown now exits immediately when Ctrl+C is pressed and worker is idle
  - Worker does not leak worker registrations anymore when stopped gracefully
- `.enqueue()` does not consume the `timeout` kwarg anymore. Instead, to pass RQ a timeout value while enqueueing a function, use the explicit invocation instead:

  ```python
  q.enqueue(do_something, args=(1, 2), kwargs={'a': 1}, timeout=30)
  ```

- Added a `@job` decorator, which can be used to do Celery-style delayed invocations:

  ```python
  from redis import StrictRedis
  from rq.decorators import job

  # Connect to Redis
  redis = StrictRedis()

  @job('high', timeout=10, connection=redis)
  def some_work(x, y):
      return x + y
  ```

  Then, in another module, you can call `some_work`:

  ```python
  from foo.bar import some_work

  some_work.delay(2, 3)
  ```
(August 1st, 2012)
- Fix bug where return values that couldn't be pickled crashed the worker
(July 20th, 2012)
- Fix important bug where result data wasn't restored from Redis correctly (affected non-string results only).
(July 18th, 2012)
- `q.enqueue()` accepts instance methods now, too. Objects will be pickled along with the instance method, so beware.
- `q.enqueue()` accepts string specification of functions now, too. Example: `q.enqueue("my.math.lib.fibonacci", 5)`. Useful if the worker and the submitter of work don't share code bases.
- Jobs can be assigned custom attrs and they will be pickled along with the rest of the job's attrs. Can be used when writing RQ extensions.
- Workers can now accept explicit connections, like Queues.
- Various bug fixes.
(May 15, 2012)
- Fix broken PyPI deployment.
(May 14, 2012)
- Thread-safety by using context locals
- Register scripts as console_scripts, for better portability
- Various bugfixes.
(March 28, 2012)
- Initial release.