In [3]:
from IPython.lib import backgroundjobs as bg
import sys
import time
def sleepfunc(interval=2, *a, **kw):
    args = dict(interval=interval,
                args=a,
                kwargs=kw)
    time.sleep(interval)
    return args

def diefunc(interval=2, *a, **kw):
    time.sleep(interval)
    raise Exception("Dead job with interval %s" % interval)

def printfunc(interval=1, reps=5):
    for n in range(reps):
        time.sleep(interval)
        print('In the background...', n)
        sys.stdout.flush()
    print('All done!')
    sys.stdout.flush()
Now we can create a job manager (called simply jobs) and use it to submit new jobs.
Run the cell below; it will show when the jobs start. Wait a few seconds until you see the 'All done!' completion message:
In [4]:
jobs = bg.BackgroundJobManager()
# Start a few jobs; the first one will have ID 0
jobs.new(sleepfunc, 4)
jobs.new(sleepfunc, kw={'reps':2})
jobs.new('printfunc(1,3)')
Out[4]:
You can check the status of your jobs at any time:
In [7]:
jobs.status()
For any completed job, you can get its result easily:
In [ ]:
jobs[0].result
The jobs manager tries to help you with debugging:
In [9]:
# This makes a couple of jobs which will die. Let's keep a reference to
# them for easier traceback reporting later
diejob1 = jobs.new(diefunc, 1)
diejob2 = jobs.new(diefunc, 2)
You can get the traceback of any dead job. Run the cell below repeatedly until the job has died and a traceback is printed (check the job's status):
In [11]:
print("Status of diejob1:", diejob1.status)
diejob1.traceback() # jobs.traceback(4) would also work here, with the job number
This will print all tracebacks for all dead jobs:
In [15]:
jobs.traceback()
The job manager can be flushed of all completed jobs at any time:
In [12]:
jobs.flush()
After that, the status is simply empty:
In [13]:
jobs.status()
Jobs have a .join method that lets you wait on their thread for completion:
In [ ]:
j = jobs.new(sleepfunc, 2)
j.join?
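Under the hood, background jobs are threading.Thread subclasses, so job.join() behaves like the standard Thread.join(). A minimal stdlib sketch of the same wait-for-completion pattern (the names here are illustrative, not part of the backgroundjobs API):

```python
import threading
import time

# Illustrative stand-in: a background job runs a function in a thread
# and stores its outcome; job.join() blocks like Thread.join().
results = {}

def sleepfunc(interval=0.1):
    time.sleep(interval)
    results['value'] = interval  # a real job would expose this as .result

t = threading.Thread(target=sleepfunc, args=(0.1,))
t.start()
t.join()  # blocks until sleepfunc returns, just like j.join()
print(results['value'])
```

After join() returns, the thread has finished, so reading the stored result is safe.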