Feature idea: Proxy job assignment and result through scheduler #543
Comments
On second thought, simply setting up some appropriate routing rules on the nodes and enabling IP forwarding on the scheduler should do the trick, since the broadcast messages only need to reach the scheduler, not all the other peers... Will have to verify this...
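A rough sketch of what that routing setup might look like, assuming a tunnel subnet of 10.8.0.0/24, a scheduler tunnel address of 10.8.0.1, and an interface named wg0 (all made-up values, not taken from this thread); it just wraps the `sysctl` and `ip route` commands involved:

```python
# Sketch only: run the first part on the scheduler host and the second on
# each client/worker node. Subnet, gateway address, and interface name are
# illustrative assumptions.
import subprocess

def run(cmd):
    print("+ " + " ".join(cmd))
    subprocess.run(cmd, check=True)

def enable_forwarding_on_scheduler():
    # Let the scheduler forward packets between its tunnel peers.
    run(["sysctl", "-w", "net.ipv4.ip_forward=1"])

def route_peers_via_scheduler(peer_subnet="10.8.0.0/24",
                              scheduler_ip="10.8.0.1",
                              dev="wg0"):
    # Send traffic for the other spokes to the scheduler instead of
    # expecting a direct link to them.
    run(["ip", "route", "add", peer_subnet, "via", scheduler_ip, "dev", dev])

if __name__ == "__main__":
    enable_forwarding_on_scheduler()   # on the scheduler
    # route_peers_via_scheduler()      # on each client/worker node
```

With WireGuard specifically, widening AllowedIPs on each spoke's scheduler peer to cover the whole tunnel subnet achieves the same hub routing without manual `ip route` calls.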
Icecc is designed assuming a fast local network. I've tried it over a VPN and it works, but the speed suffers for everyone, not just those on the VPN. In most cases a VPN is slow enough that you are better off without Icecc. Computers with many cores are cheap these days. Ccache is a good tool as well.
Your idea can be made to work, but by its nature it is slower than what we have for those who don't need it, and those who do generally have a network slow enough that it won't work. IT probably doesn't like VPN users using all the bandwidth icecc does, either.
--
Henry Miller
[email protected]
On Tue, Jun 23, 2020, at 05:38, TÖRÖK Attila wrote:
Right now, any client that wants to leverage a build farm needs to have a network connection to all worker daemons it wants to use, not just to the scheduler. (Correct me if I'm wrong.)
This makes it impossible to make a build farm in a star topology, where each node (worker and/or client) only has a network connection to the scheduler. This, however, might be useful in some remoting/VPN/tunneling scenarios. See also: #540
I propose an idea where a client, upon failing to connect to an assigned worker daemon directly, would fall back to attempting the connection through the scheduler, which it can obviously reach.
In this case, the scheduler would act as a proxy between the client and the worker, passing the assignment and the result of each job there and back.
This would significantly increase the load on the scheduler machine (especially the network bandwidth used), but would make icecc work better in some more exotic networking scenarios, where not all participants are necessarily in the same subnet.
What do you think? Is this something you would accept?
As an alternative, an overlay network of a single subnet on top of the spokes of the star topology could be set up with some kind of additional IP tunneling, but it would be nice if that wasn't necessary.
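To make the quoted proposal more concrete, here is a minimal sketch of what "the scheduler acting as a proxy" could mean at the transport level: a plain TCP relay on the scheduler host that forwards a client's connection to its assigned worker daemon. The worker host name, the ports, and the hard-coded assignment are hypothetical; a real implementation would have to live inside the scheduler's own protocol and pick the daemon per job.

```python
# Hypothetical sketch, not icecream code: relay bytes in both directions
# between a connecting client and one assumed worker daemon.
import asyncio

WORKER_HOST = "worker1.lan"   # assumed name of the assigned worker
WORKER_PORT = 10245           # assumed daemon port
LISTEN_PORT = 10246           # assumed relay port on the scheduler host

async def pipe(reader: asyncio.StreamReader, writer: asyncio.StreamWriter):
    # Copy one direction until EOF, then close our side.
    try:
        while data := await reader.read(65536):
            writer.write(data)
            await writer.drain()
    finally:
        writer.close()

async def handle_client(client_reader, client_writer):
    # A real scheduler already knows which daemon each job was assigned to;
    # here a single worker is hard-coded for illustration.
    worker_reader, worker_writer = await asyncio.open_connection(WORKER_HOST, WORKER_PORT)
    await asyncio.gather(
        pipe(client_reader, worker_writer),  # job data: client -> worker
        pipe(worker_reader, client_writer),  # results:  worker -> client
    )

async def main():
    server = await asyncio.start_server(handle_client, "0.0.0.0", LISTEN_PORT)
    async with server:
        await server.serve_forever()

if __name__ == "__main__":
    asyncio.run(main())
```

Even this crude form shows where the cost goes: every preprocessed source file and every resulting object file crosses the scheduler's link twice, once inbound and once outbound, which is exactly the bandwidth concern raised in the proposal.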
Yes, we use ccache too, but sometimes a "central" header has to be edited, and then the whole cache becomes useless. Anyway, I ran a little experiment yesterday using a Raspberry Pi 3B+ as both a "VPN server" and the icecream scheduler, with two other computers connected to it through WireGuard tunnels, one of them over an exceptionally bad WiFi-tethered mobile 4G connection. I can describe it in more detail if you are curious, but the main conclusion I drew from it is that I no longer think this feature is necessary.