- Sep 16, 2021
  - Andrew Kapuscinski authored
  - Charlotte Hausman authored
  - Janet Goldstein authored
    * `on_carta_ready` needs parent workflow request ID
    * `carta envoy` testing documentation augmented
  - Nathan Hertz authored
  - Janet Goldstein authored
  - Charlotte Hausman authored
  - Daniel Lyons authored
  - Daniel Lyons authored
- Sep 15, 2021
  - Charlotte Hausman authored
  - Nathan Hertz authored
  - Andrew Kapuscinski authored
  - Janet Goldstein authored
- Sep 14, 2021
  - Charlotte Hausman authored
  - Janet Goldstein authored
- Sep 13, 2021
  - Charlotte Hausman authored
  - Charlotte Hausman authored
  - Charlotte Hausman authored
- Sep 10, 2021
  - Nathan Hertz authored
- Sep 09, 2021
  - Daniel Lyons authored
  - Andrew Kapuscinski authored
  - Charlotte Hausman authored
  - Charlotte Hausman authored
- Sep 08, 2021
  - Daniel Lyons authored
- Sep 07, 2021
  - Charlotte Hausman authored
  - Charlotte Hausman authored
  - Charlotte Hausman authored
  - Daniel Lyons authored
This merge combines two lines of thought:

1. AMQP clients should either receive messages or send messages, not both.
2. Capability queues should be based on a database-backed queue manager rather than keeping state in memory.

Most of the work on the first idea is in refactoring the Router so that it is no longer a message sender. Many places in the code now instantiate a MessageSender instead, or use both a Router and a MessageSender when they genuinely need both capabilities. The previous implementation appears to have caused messages to arrive out of order, because facilities like `wf_monitor` that only send messages were also trying to receive messages, either not handling them at all or putting them into some kind of buffer that was dropped on the floor when the process ended.

The work on the second idea changes how steps are processed in the capability service and eliminates the capability engine concept. Now, when a PrepareAndRunWorkflow step is reached, the capability is simply moved into the Waiting state and the queue manager is signaled. Whenever the queue manager wakes up, it checks whether any queues have available slots and waiting requests; if so, it uses the available slots to fetch waiting requests and start executing them. When an execution exits the cluster, the queue manager is signaled again, so the process continues until all the jobs are processed. As a stability benefit, the same check also runs on startup.
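To make the two ideas above concrete, here is a minimal Python sketch. Router, MessageSender, the Waiting state, PrepareAndRunWorkflow, and the signal-on-exit behaviour are the concepts the commit message names; every class body, method signature, and the in-process list standing in for the database-backed queue are illustrative assumptions rather than the project's actual code.

```python
"""Illustrative sketch only: names and signatures below are hypothetical
stand-ins for the concepts described in the commit message."""
from dataclasses import dataclass, field
from typing import Callable, Dict, List


class MessageSender:
    """Sends AMQP messages and does nothing else."""

    def send(self, routing_key: str, payload: dict) -> None:
        ...  # publish to the exchange (omitted)


class Router:
    """Receives AMQP messages and dispatches them; it no longer sends."""

    def __init__(self) -> None:
        self._handlers: Dict[str, List[Callable[[dict], None]]] = {}

    def register(self, routing_key: str, handler: Callable[[dict], None]) -> None:
        self._handlers.setdefault(routing_key, []).append(handler)

    def route(self, routing_key: str, payload: dict) -> None:
        for handler in self._handlers.get(routing_key, []):
            handler(payload)


@dataclass
class CapabilityQueue:
    """One capability's queue; the real state would live in the database."""
    name: str
    max_concurrency: int
    running: int = 0
    waiting: List[dict] = field(default_factory=list)

    def available_slots(self) -> int:
        return max(self.max_concurrency - self.running, 0)


class QueueManager:
    """Signaled when a request starts waiting and when an execution exits."""

    def __init__(self, queues: List[CapabilityQueue], execute: Callable[[dict], None]):
        self.queues = queues
        self.execute = execute

    def signal(self) -> None:
        # On every wake-up, fill whatever slots are free with waiting requests.
        for queue in self.queues:
            for _ in range(min(queue.available_slots(), len(queue.waiting))):
                request = queue.waiting.pop(0)
                queue.running += 1
                self.execute(request)

    def on_execution_exited(self, queue: CapabilityQueue) -> None:
        # An execution left the cluster: free its slot and re-check the queues.
        queue.running -= 1
        self.signal()

    def on_startup(self) -> None:
        # Stability benefit: pick up anything left waiting before a restart.
        self.signal()
```

Because the real queue state lives in the database rather than in process memory, running the same slot check at startup lets the service resume requests that were still waiting when the previous process exited.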
- Sep 03, 2021
  - Janet Goldstein authored
- Sep 02, 2021
  - Janet Goldstein authored
  - Charlotte Hausman authored
  - Charlotte Hausman authored
  - Charlotte Hausman authored
  - Charlotte Hausman authored
  - Janet Goldstein authored