- Jan 25, 2022
  - Charlotte Hausman authored
- Jan 14, 2022
  - Andrew Kapuscinski authored
- Dec 21, 2021
  - Nathan Hertz authored
- Dec 09, 2021
  - Charlotte Hausman authored
- Nov 22, 2021
  - Janet Goldstein authored
  - Janet Goldstein authored
- Nov 18, 2021
  - Janet Goldstein authored
- Oct 20, 2021
  - Andrew Kapuscinski authored
  - Andrew Kapuscinski authored
- Oct 12, 2021
- Oct 11, 2021
  - Charlotte Hausman authored
- Sep 30, 2021
  - Charlotte Hausman authored
- Sep 28, 2021
  - Janet Goldstein authored
- Sep 27, 2021
  - Charlotte Hausman authored
  - Charlotte Hausman authored
  - Charlotte Hausman authored
  - Janet Goldstein authored
- Sep 24, 2021
  - Janet Goldstein authored
- Sep 22, 2021
  - Daniel Lyons authored
- Sep 07, 2021
  - Daniel Lyons authored
This is two lines of thought combined into one merge:

1. AMQP clients should either receive messages or send messages.
2. Capability queues are based on a database-backed queue manager rather than keeping state in memory.

Most of the work for the first idea is in refactoring the Router so it is no longer a message sender. Many places in the code now either instantiate a MessageSender instead, or use both a Router and a MessageSender if they truly need both capabilities. The previous implementation appears to have caused messages to arrive out of order, because facilities like `wf_monitor` that only send messages were also trying to receive messages, and were either not handling them at all or putting them into a buffer of some kind to be dropped on the floor when the process ended.

The work for the second idea changes the way steps are processed in the capability service and eliminates the capability-engine concept. Now, when a PrepareAndRunWorkflow step is reached, the capability is simply moved into the Waiting state and the queue manager is signaled. Whenever the queue manager is awakened, it checks whether any queues have both available slots and waiting requests; if so, the number of available slots is used to pull requests and start executing them. When an execution exits the cluster, the queue manager is signaled again, so the process continues until all the jobs are processed. As a stability benefit, we run this check on startup as well.
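The wakeup cycle described above can be sketched roughly as follows. This is an illustrative sketch only, not the actual workspaces implementation: the names `CapabilityQueue`, `QueueManager`, and the in-memory lists standing in for database-backed state are all assumptions made for the example.

```python
# Hypothetical sketch of the queue-manager wakeup cycle. All class and
# attribute names are illustrative; the real system keeps this state in
# a database rather than in memory.
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class CapabilityQueue:
    """A capability queue with a concurrency limit."""
    name: str
    max_concurrent: int
    running: int = 0
    # Request IDs whose PrepareAndRunWorkflow step put them in Waiting:
    waiting: List[str] = field(default_factory=list)


class QueueManager:
    def __init__(self, queues: List[CapabilityQueue],
                 execute: Callable[[str], None]):
        self.queues = queues
        self.execute = execute  # starts a request executing in the cluster

    def signal(self) -> None:
        """Called when a step enters Waiting, when an execution exits,
        and once on startup. Fills free slots with waiting requests."""
        for queue in self.queues:
            free = queue.max_concurrent - queue.running
            while free > 0 and queue.waiting:
                request_id = queue.waiting.pop(0)
                queue.running += 1
                free -= 1
                self.execute(request_id)

    def on_execution_exit(self, queue: CapabilityQueue) -> None:
        """An execution left the cluster: release its slot and re-check."""
        queue.running -= 1
        self.signal()
```

With a queue limited to one concurrent execution and two waiting requests, a startup `signal()` starts only the first request; the second starts when `on_execution_exit` releases the slot, so the process continues until the queue drains.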
- Aug 30, 2021
  - Charlotte Hausman authored
- Aug 19, 2021
- Jul 29, 2021
  - Charlotte Hausman authored
- Jul 19, 2021
  - Charlotte Hausman authored
- Jul 15, 2021
  - Charlotte Hausman authored
  - Charlotte Hausman authored
- Jun 30, 2021
  - Charlotte Hausman authored
- Jun 24, 2021
  - Nathan Hertz authored
- Jun 21, 2021
WS-254, WS-253, WS-251: Catch the ingest-complete event, parse it, and create a request based on the given info
- Jun 10, 2021
  - Charlotte Hausman authored
- Jun 09, 2021
  - Andrew Kapuscinski authored
  - Nathan Hertz authored
  - Janet Goldstein authored
- Jun 03, 2021
  - Charlotte Hausman authored
- Jun 01, 2021
  - Daniel Lyons authored
- May 25, 2021
  - Charlotte Hausman authored
- May 21, 2021
  - Charlotte Hausman authored
- May 19, 2021
  - Nathan Hertz authored
- Local cluster can now run download workflows
- wf_monitor's timeout functionality now works the way we expect it to
- May 17, 2021
  - Nathan Hertz authored