Ad hoc jobs block other jobs from being processed in the queue #14645
Comments
Are all jobs blocked, or only those running in the same inventory as the ad hoc command? If it is only jobs in the same inventory, then this is expected behavior. See awx/awx/main/scheduler/dependency_graph.py, lines 88 to 90 at d8a28b3.
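The blocking rule described above (an ad hoc command holding its inventory and blocking anything else against that inventory) can be sketched roughly as follows. This is a simplified illustration only, not the actual AWX source; the class layout, attribute, and method names here are assumptions.

```python
# Simplified sketch of per-inventory blocking (illustrative only,
# not the real awx/main/scheduler/dependency_graph.py).
class DependencyGraph:
    def __init__(self):
        # Maps inventory_id -> the job currently holding that inventory.
        self.inventory_locks = {}

    def mark_inventory(self, inventory_id, job):
        # Record that `job` is running against this inventory.
        self.inventory_locks[inventory_id] = job

    def is_blocked(self, job):
        # A job is blocked if some *other* job already holds
        # the same inventory.
        holder = self.inventory_locks.get(job.inventory_id)
        return holder is not None and holder is not job
```

Under a scheme like this, an ad hoc command and a job template run that share an inventory serialize, while jobs on different inventories proceed in parallel, which matches the behavior reported in this issue.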
Ad hoc jobs appear to block only when running jobs from the same inventory, yes. I had no idea that was expected. I assumed that a job was a job and they could all run simultaneously (excluding the inventory/project updates). It seems like a fair workaround would be to duplicate the inventory and use one for templates and the other for ad hoc jobs; does that sound about right?
This discussion has come up before, and we would prefer to allow these rules to be fully user-customizable; @chrismeyersfsu was a particular advocate for this. There may be some existing related issues.
I think we could change the way we mark ad hoc commands so that they aren't treated as inventory updates. Instead we could introduce a mark_adhoccommand method. We should still block on inventory updates, though. See awx/awx/main/scheduler/dependency_graph.py, line 141 at d8a28b3.
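The proposal above can be sketched as tracking ad hoc commands separately from inventory updates, so that regular jobs only block on real inventory updates. Again, this is a hypothetical sketch, not the AWX implementation; only the mark_adhoccommand name comes from the comment, and everything else is an assumption.

```python
# Illustrative sketch of the proposed change: keep ad hoc commands
# in their own bookkeeping so job templates no longer block on them.
class DependencyGraph:
    def __init__(self):
        self.inventory_updates = {}  # inventory_id -> running update
        self.adhoc_commands = {}     # inventory_id -> running ad hoc command

    def mark_inventory_update(self, inventory_id, update):
        self.inventory_updates[inventory_id] = update

    def mark_adhoccommand(self, inventory_id, command):
        # Tracked separately, so it does not block ordinary jobs.
        self.adhoc_commands[inventory_id] = command

    def job_is_blocked(self, job):
        # A job still blocks on a real inventory update for its
        # inventory, but not on a concurrent ad hoc command.
        return job.inventory_id in self.inventory_updates
```

The key design point is that job_is_blocked consults only inventory_updates, so an ad hoc command and a job template against the same inventory can run concurrently, while a running inventory update still serializes with both.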
Hey, is this issue still open? I can work on this if needed.
Please confirm the following
[email protected]
instead.)

Bug Summary
When you submit an ad hoc job, all other jobs (other ad hoc commands and job templates) are stuck in a pending state until the running ad hoc job completes.
AWX version
23.4.0
Select the relevant components
Installation method
kubernetes
Modifications
no
Ansible version
No response
Operating system
No response
Web browser
Firefox
Steps to reproduce
Expected results
All jobs in the unified job queue should be processed according to capacity; I expect to see more than one job in a running state.
Actual results
The ad hoc job is in a running state. All other jobs are stuck in pending, waiting for the ad hoc command to finish.
Additional information
I am using a container group to run jobs.
Capacity of the managed nodes has been visually verified to be below 25% CPU and memory usage.
I tried creating a duplicate container group for job templates, but jobs were still stuck in pending, now with the message "waiting for adhoccommand-371 to finish".