I'm having some problems launching threads from a list of jobs. They're in a list because they're configuration-specific. I'm wrapping up the tasks so that I can store the results of the work in `self`, but in the non-sleeping version something goes wrong: I get the correct number of threads, but some of the jobs they run are not correct. Here is example code:
```python
import threading, time

class RunParallelTest():
    def __init__(self):
        pass

    def runList(self, functionList):
        threadList = []
        for functionListIndex in range(0, len(functionList)):
            newThread = threading.Thread(target=lambda: self._run_parallel_job(functionList[functionListIndex]))
            newThread.start()
            threadList.append(newThread)
        # uncommenting this sleep delay makes it all work fine:
        # time.sleep(2)
        # wait for all the threads to complete, and report failure if any is still alive
        for thread in threadList:
            thread.join(3600 * 24)  # 1 day, could probably be shorter
            if thread.isAlive() == True:
                raise Exception("thread.isAlive == True")

    def _run_parallel_job(self, function):
        result = function()
        # store result in self here
        # (I promise I'm using locks)

def f(x):
    print "f(%d) run" % x
    return x

if __name__ == '__main__':
    rpt = RunParallelTest()
    rpt.runList([lambda: f(0), lambda: f(1), lambda: f(2), lambda: f(3),
                 lambda: f(4), lambda: f(5), lambda: f(6), lambda: f(7)])
```
When I run it, I see things like this:
```
> python thread_problem.py
f(1) run
f(2) run
f(4) run
f(5) run
f(5) run
f(6) run
f(7) run
>
```
I expected a different ordering of the prints, but I thought I should at least see the numbers 0-7 with no duplicates, and I don't. If I add the time.sleep(2), the problem magically goes away, but I'd really like to know why it doesn't work the way I think it should.
Thanks a bunch!
The problem is that `functionList[functionListIndex]` is evaluated only when the lambda runs (inside the thread). By that time, the value of `functionListIndex` may have changed.
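This late-binding behavior can be seen without threads at all. Here is a minimal standalone sketch (function and variable names are illustrative, not from the original code):

```python
def make_lambdas_late():
    # All three lambdas close over the same variable `i`; each one
    # looks `i` up when it is *called*, not when it is defined,
    # so after the loop finishes they all see the final value.
    funcs = []
    for i in range(3):
        funcs.append(lambda: i)
    return [f() for f in funcs]
```

Calling `make_lambdas_late()` returns `[2, 2, 2]` rather than `[0, 1, 2]`. With threads, the same effect shows up nondeterministically, which is why a `sleep` appears to "fix" it.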
To fix this, you can pass the value as a default parameter to the lambda, so that it is evaluated at definition time:
```python
newThread = threading.Thread(target=lambda func=functionList[functionListIndex]: self._run_parallel_job(func))
```
Default parameter values are evaluated at definition time, so this works.
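A minimal sketch of the default-argument fix in a threaded loop, collecting results under a lock as the asker describes (the names `square`, `run_all`, and `job` are illustrative assumptions, not from the original code):

```python
import threading

def square(x):
    return x * x

def run_all(values):
    results = []
    lock = threading.Lock()

    def job(x):
        r = square(x)
        with lock:           # protect the shared results list
            results.append(r)

    threads = []
    for v in values:
        # `v=v` freezes the current loop value as a default argument;
        # without it, every thread could see the loop's final value.
        t = threading.Thread(target=lambda v=v: job(v))
        t.start()
        threads.append(t)
    for t in threads:
        t.join()
    return sorted(results)   # thread completion order is nondeterministic
```

`run_all(range(4))` returns `[0, 1, 4, 9]`: every value is processed exactly once, with no duplicates.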
Another, more Pythonic solution is to avoid the lambda altogether and use the args parameter:
```python
newThread = threading.Thread(target=self._run_parallel_job, args=(functionList[functionListIndex],))
```
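A sketch of the `args`-based approach in context (the helper names `run_jobs` and `worker` are assumptions for illustration): `args` is evaluated at `Thread` creation time, so no closure over the loop variable is needed at all.

```python
import threading

def run_jobs(function_list):
    # Pre-size the results list so each thread writes its own slot;
    # no lock is needed because the slots are disjoint.
    results = [None] * len(function_list)

    def worker(index, func):
        results[index] = func()

    threads = []
    for i, func in enumerate(function_list):
        # Both `i` and `func` are passed by value via `args`,
        # evaluated right here, at thread-creation time.
        t = threading.Thread(target=worker, args=(i, func))
        t.start()
        threads.append(t)
    for t in threads:
        t.join()
    return results
```

For example, `run_jobs([lambda: 1, lambda: 2, lambda: 3])` returns `[1, 2, 3]`, with each job run exactly once.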