execute() uses a semi-common functional programming tactic where you hand
it a function object rather than the result of calling one. So the
stumbling block here was your use of runcommand('hostname'), which calls
runcommand immediately; instead, pass runcommand itself and give the
'hostname' argument to execute(), which keeps it around to be handed to
runcommand() each time it's called.
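To make that concrete, here is a toy sketch of the pattern (my_execute is a made-up stand-in; the real Fabric 1 execute() also handles roles, parallelism, and env state, but it forwards extra arguments to the task in the same way):

```python
# Toy model of execute(task, *args, hosts=...): the task is passed as a
# *function object*, and the extra arguments are kept and handed to it
# once per host. By contrast, runcommand('hostname') calls the task
# immediately and passes its return value (None) to execute().
def my_execute(task, *args, **kwargs):
    hosts = kwargs.pop('hosts', [])
    results = {}
    for host in hosts:
        # Real Fabric sets env.host_string to `host` before each call.
        results[host] = task(*args, **kwargs)
    return results

def runcommand(cmd):
    # Stand-in for a task that does fabric.api.run(cmd).
    return 'ran: %s' % cmd

results = my_execute(runcommand, 'hostname',
                     hosts=['10.10.0.1', '10.10.0.2', '10.10.0.3'])
```

With real Fabric 1 the equivalent call should be execute(runcommand, 'hostname', hosts=[...]): execute() strips its own keyword arguments (hosts, roles, etc.) and passes everything else through to the task.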
Post by Rob Marshall
Hi Jeff,
Python 2.7.12 (default, Dec 4 2017, 14:50:18)
[GCC 5.4.0 20160609] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> def gethostnames():
...     run('hostname')
...
Post by Jeff Forcier
Post by Rob Marshall
results = execute(parallel(gethostnames),hosts=['10.10.0.1','10.10.0.2','10.10.0.3'])
[10.10.0.1] Executing task 'gethostnames'
[10.10.0.2] Executing task 'gethostnames'
[10.10.0.3] Executing task 'gethostnames'
[10.10.0.3] run: hostname
[10.10.0.1] run: hostname
[10.10.0.2] run: hostname
[10.10.0.2] out: hostname2
[10.10.0.3] out: hostname3
[10.10.0.1] out: hostname1
Python 2.7.12 (default, Dec 4 2017, 14:50:18)
[GCC 5.4.0 20160609] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> def runcommand(cmd):
...     run(cmd)
...
Post by Jeff Forcier
Post by Rob Marshall
results = execute(parallel(runcommand('hostname')),hosts=['10.10.0.1','10.10.0.2','10.10.0.3'])
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 3, in runcommand
  File "/usr/local/lib/python2.7/dist-packages/fabric/network.py", line 684, in host_prompting_wrapper
    host_string = raw_input("No hosts found. Please specify (single)"
KeyboardInterrupt
How do I run a task with execute that requires arguments?
Thanks,
Rob
Post by Jeff Forcier
Hi Rob,
Fabric 1 has its own multiprocessing setup, but it needs to do various
things to the objects getting cloned across subprocess barriers - e.g.
socket cleanup, fabric.state.env contents tweaking, and the like - and I'm
guessing if you are doing your own multiprocessing work, you may be
skipping all of that.
Post by Jeff Forcier
See e.g. some of the stuff done here:
https://github.com/fabric/fabric/blob/d91b86ecc0c91357e7befe3dd5b67efc00682aeb/fabric/tasks.py#L225
- you might need to look higher up in the call stack too but that's the
guts of it re: disconnecting client objects in the cache.
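The connection-dropping step Jeff points at can be sketched with plain multiprocessing (no Fabric here, so the disconnect itself appears only as a comment; get_hostname and run_parallel are made-up names):

```python
import multiprocessing

def get_hostname(host):
    # Stand-in for a Fabric task that runs 'hostname' against `host`.
    return 'hostname-for-' + host

def worker_init():
    # Fabric 1 caches live SSH connections in module-level state, and
    # those sockets must not be shared with forked children. With Fabric
    # installed, each worker would drop the inherited ones first, roughly:
    #     from fabric.network import disconnect_all
    #     disconnect_all()
    pass  # no-op so this sketch stays self-contained

def run_parallel(hosts):
    # The initializer runs once in each worker process, before any task.
    pool = multiprocessing.Pool(processes=len(hosts), initializer=worker_init)
    try:
        return dict(zip(hosts, pool.map(get_hostname, hosts)))
    finally:
        pool.close()
        pool.join()
```

If the per-worker cleanup is skipped, a forked child can end up reusing the parent's already-open SSH socket, which fits the connection-timeout symptom described in the original question.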
Post by Jeff Forcier
That said, it's possible that Fabric 1's design straight up precludes
the approach you're taking - it's not thread safe, it's not
greenlet/coroutine/etc async safe, etc because of the emphasis on global
module state. But as you're using multiprocessing, it depends on exactly
what 'async' means in this context - I'm guessing it's more about the
behavior of launching the subprocesses from the master, in which case it
might work fine.
Post by Jeff Forcier
I don't have a ton of spare time at the moment to get deep into this if
you encounter more problems, but hopefully the above gets you pointed in
the right direction!
Post by Jeff Forcier
Best,
Jeff
Post by Rob Marshall
Hi,
I have a "wrapper" for Fabric 1.13.2 that I use to make running fabric
commands a little easier in some senses. I can set up an object that
has the host_string, etc., all configured when I instantiate the object.
I'm attempting to use it to run multiple processes simultaneously on
multiple hosts. This seems to work fine for some things, but I'm
running into a problem with a particular command that I seem to be
able to run just fine by itself, i.e. not using multiprocessing, but
that gets a connection timeout when run via
multiprocessing.Pool.apply_async(). Is there a known issue with running
Fabric with multiprocessing pools?
Thanks,
Rob
_______________________________________________
Fab-user mailing list
https://lists.nongnu.org/mailman/listinfo/fab-user
--
Jeff Forcier
Unix sysadmin; Python engineer
http://bitprophet.org