Feature idea: Proxy job assignment and result through scheduler #543

Closed
torokati44 opened this issue Jun 23, 2020 · 3 comments

Right now, any client that wants to leverage a build farm needs to have a network connection to all worker daemons it wants to use, not just to the scheduler. (Correct me if I'm wrong.)
This makes it impossible to set up a build farm in a star topology, where each node (worker and/or client) only has a network connection to the scheduler. Such a topology, however, might be useful in some remoting/VPN/tunneling scenarios. See also: #540
My proposal: when a client fails to connect to an assigned worker daemon directly, it would fall back to attempting the connection through the scheduler, which it can obviously reach.
In this case, the scheduler would act as a proxy between the client and the worker, passing the assignment and the result of each job there and back.
This would significantly increase the load on the scheduler machine (especially the network bandwidth used), but would make icecc work better in some more exotic networking scenarios, where not all participants are necessarily in the same subnet.
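
To make it clearer what I mean by proxying (just a conceptual sketch, not how it would actually be implemented): for each assignment the scheduler would essentially do what a per-connection relay like the one below does, only driven by the job protocol itself. The worker hostname is made up, and 10245 is, as far as I know, the daemons' default TCP port.

```sh
# Purely illustrative: accept connections on the scheduler host and relay
# them to the assigned worker's daemon. The proposed feature would do the
# equivalent inside the scheduler, per job, over the connections it
# already has to both sides.
socat TCP-LISTEN:10245,fork,reuseaddr TCP:worker1.example.org:10245
```
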
What do you think, is this something you would accept?
As an alternative, a single-subnet overlay network could be set up on top of the spokes of the star topology with some additional IP tunneling, but it would be nice if that weren't necessary.

torokati44 (Author) commented Jun 23, 2020

On second thought, simply setting up appropriate routing rules on the nodes and enabling IP forwarding on the scheduler should do the trick, since the broadcast messages only need to reach the scheduler, not all the other peers... I will have to verify this.
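
Roughly something like this (a minimal sketch with made-up addresses and subnets; the real rules depend on the actual topology and firewalls):

```sh
# On the scheduler: let it forward packets between the two sides.
sudo sysctl -w net.ipv4.ip_forward=1

# On a node sitting in, say, 192.168.1.0/24: send traffic destined for the
# other node's network (192.168.2.0/24 here) via the scheduler's address on
# the local network (192.168.1.10 here).
sudo ip route add 192.168.2.0/24 via 192.168.1.10

# Depending on the firewall, the scheduler's FORWARD chain may also need to
# accept this traffic.
```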

HenryMiller1 (Collaborator) commented Jun 23, 2020 via email

torokati44 (Author)

Yes, we use ccache too, but sometimes a "central" header has to be edited, and then the whole cache becomes useless.
And several many-core computers are still better than a single many-core computer, especially for compiling C++. :)

Anyway, I performed a little experiment yesterday, using a Raspberry Pi 3B+ as both a "VPN server" and the icecream scheduler, with two other computers connected to it through WireGuard tunnels, one of them over an exceptionally bad WiFi-tethered mobile 4G connection...
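
The tunnels were the usual hub-and-spoke WireGuard arrangement; roughly the sketch below, with placeholder keys and addresses and a made-up 10.20.0.0/24 overlay subnet (plus IP forwarding enabled on the Pi, as above):

```sh
# On the Pi (hub + scheduler): one interface, one peer entry per spoke.
sudo ip link add wg0 type wireguard
sudo ip addr add 10.20.0.1/24 dev wg0
sudo wg set wg0 listen-port 51820 private-key /etc/wireguard/hub.key
sudo wg set wg0 peer <spoke1-public-key> allowed-ips 10.20.0.2/32
sudo wg set wg0 peer <spoke2-public-key> allowed-ips 10.20.0.3/32
sudo ip link set wg0 up

# On a spoke (this one is 10.20.0.2): allowed-ips covers the whole overlay
# subnet, so traffic to the other spoke is also sent to the hub, which then
# forwards it.
sudo ip link add wg0 type wireguard
sudo ip addr add 10.20.0.2/24 dev wg0
sudo wg set wg0 private-key /etc/wireguard/spoke1.key
sudo wg set wg0 peer <hub-public-key> endpoint <hub-public-address>:51820 \
    allowed-ips 10.20.0.0/24 persistent-keepalive 25
sudo ip link set wg0 up
```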

I can describe it in more detail if you are curious, but the main conclusions I drew from it were:

  • It does indeed work without this requested feature, simply by routing (forwarding) the network traffic between the two hosts through the scheduler (in this case the Pi); a quick reachability check like the one sketched after this list is enough to confirm it.
  • The biggest problem was transferring the large-ish environment tarball: this took a long time, and even failed the first few times, but since it only has to happen fairly rarely, I think that's fine.
  • Even with this awful connection causing several stalls in the scheduling, combining the two computers still yielded faster build times than using only one of them - although of course not by nearly as much as if they had been on reasonable connections.
  • If it was at least borderline useful even in this awfully unfortunate scenario, then with the peers on good (wired) internet connections it will likely work fairly well.
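
A quick way to confirm the routing part of such a setup (just a sketch, assuming icecream's default ports of 8765/TCP for the scheduler and 10245/TCP for the daemons, and the placeholder overlay addresses from above):

```sh
# From one spoke: is the scheduler on the hub reachable, and is the other
# spoke's iceccd reachable through the hub?
nc -vz 10.20.0.1 8765
nc -vz 10.20.0.3 10245
```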

So, now I think this feature is not necessary, although I'd still love to see #540 merged, as this approach won't work without it.
