Decorator For Multiprocessing Lock Crashes At Runtime
Solution 1:
EDIT: After 24 hours of attempts and debugging, I've found a solution using decorators with arguments:
from functools import wraps
from multiprocessing import Lock
import time

def loc_dec_parent(*args, **kwargs):
    def lock_dec(func):
        @wraps(func)
        def wrapper(*arg, **kwarg):
            kwargs['lock'].acquire()
            try:
                func(*arg)
            finally:
                kwargs['lock'].release()
        return wrapper
    return lock_dec
and the decorated function is:
@loc_dec_parent(lock=Lock())
def add_no_lock(total):
    for i in range(100):
        time.sleep(0.01)
        total.value += 5
This works for me.
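For completeness, here is a hypothetical usage sketch for the snippet above (the Value setup and the process count are my assumptions, not part of the original post). On a fork-based platform, where the lock created in the decorator call is shared by the children, the final total should be 1000:

from multiprocessing import Process, Value

if __name__ == '__main__':
    total = Value('i', 0)  # shared integer, starts at 0
    procs = [Process(target=add_no_lock, args=(total,)) for _ in range(2)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    print(total.value)  # 2 processes x 100 iterations x 5 = 1000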
Solution 2:
A recent post of yours drew my attention to this one. Your solution is not ideal in that it does not allow arbitrary arguments to be passed to the wrapped function (right now it does not support keyword arguments). Your decorator factory needs only one argument, namely the lock to be used, and it shouldn't care whether that lock is passed as a keyword argument or not. You can also simplify the code by using the lock as a context manager:
from functools import wraps
from multiprocessing import Lock

def loc_dec_parent(lock=Lock()):
    def lock_dec(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            with lock:
                func(*args, **kwargs)
        return wrapper
    return lock_dec
the_lock = Lock()

@loc_dec_parent(the_lock)
def foo(*args, **kwargs):
    print('args:')
    for arg in args:
        print('\t', arg)
    print('kwargs:')
    for k, v in kwargs.items():
        print('\t', k, '->', v)

foo(1, 2, x=3, lock=4)
Prints:
args:
     1
     2
kwargs:
     x -> 3
     lock -> 4
But there is still a conceptual problem with the decorator when it is actually used for multiprocessing under Windows, or on any platform that creates new processes using spawn:
from functools import wraps
from multiprocessing import Lock, Process
import time

def loc_dec_parent(lock=Lock()):
    def lock_dec(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            with lock:
                func(*args, **kwargs)
        return wrapper
    return lock_dec
lock = Lock()

@loc_dec_parent(lock=lock)
def foo():
    for i in range(3):
        time.sleep(1)
        print(i, flush=True)

@loc_dec_parent(lock=lock)
def bar():
    for i in range(3):
        time.sleep(1)
        print(i, flush=True)
if __name__ == '__main__':
    p1 = Process(target=foo)
    p2 = Process(target=bar)
    p1.start()
    p2.start()
    p1.join()
    p2.join()
Prints:
0
0
1
1
2
2
The locking does not work! We should have seen the following if it were working:
0
1
2
0
1
2
This is because, to create each new subprocess, a new Python interpreter is launched in the new process's address space and the source is re-executed from the top before control is passed to the target of the Process instance. This means that in each new process's address space a new, distinct Lock instance is created and the decorators are re-executed.
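As a minimal sketch of this re-execution (my own demo, not part of the original post; the file is assumed to be run as the main script), the module-level print below fires once in the parent and once in each spawned child, which shows that the module-level Lock() call also runs again in every child:

import multiprocessing as mp
import os

# Module-level code: under "spawn" this is re-executed in every child process.
print(f'module body executed in pid {os.getpid()}')
lock = mp.Lock()  # so each process ends up with its own, distinct Lock

def worker():
    print(f'worker running in pid {os.getpid()}')

if __name__ == '__main__':
    mp.set_start_method('spawn')  # force spawn even on Linux for the demo
    p = mp.Process(target=worker)
    p.start()
    p.join()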
The main process should instead create a single Lock instance and pass it to each process as an argument. In this way you can be sure that every process is dealing with the same Lock instance.
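A minimal sketch of that pattern (the worker function and its label argument are my illustration, not from the original post):

from multiprocessing import Lock, Process
import time

def worker(lock, label):
    # The lock arrives as an argument, so every process uses the same object.
    with lock:
        for i in range(3):
            time.sleep(1)
            print(label, i, flush=True)

if __name__ == '__main__':
    lock = Lock()  # created exactly once, in the parent process
    p1 = Process(target=worker, args=(lock, 'foo'))
    p2 = Process(target=worker, args=(lock, 'bar'))
    p1.start()
    p2.start()
    p1.join()
    p2.join()

This works under spawn as well, because multiprocessing knows how to transfer a Lock that is passed through the args of a Process.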
In short, a multiprocessing.Lock is a bad candidate for such a decorator if you wish to support all platforms.
Update
To emulate Java's synchronized methods, you should ensure that there is a single Lock instance that is used by all decorated functions and methods. For this you can use a decorator implemented as a class. Also, don't forget that the wrapper function should return whatever value the wrapped function or method returns.
This must run on a platform that uses fork to create new processes:
from functools import wraps
from multiprocessing import Lock, Process
import time

class Synchronized:
    the_lock = Lock()  # class attribute, shared by all decorated functions

    def __call__(self, func):
        @wraps(func)
        def decorated(*args, **kwargs):
            with self.the_lock:
                return func(*args, **kwargs)
        return decorated
@Synchronized()
def foo():
    for i in range(3):
        time.sleep(1)
        print(i, flush=True)

class MyClass:
    @Synchronized()
    def bar(self):
        for i in range(3):
            time.sleep(1)
            print(i, flush=True)
if __name__ == '__main__':
    p1 = Process(target=foo)
    p2 = Process(target=MyClass().bar)
    p1.start()
    p2.start()
    p1.join()
    p2.join()
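On a fork-based platform this should print 0 1 2 0 1 2: the_lock is created once when the class body executes in the parent, both child processes inherit that same lock, and so whichever process acquires it first finishes its whole loop before the other one starts printing.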