diff --git a/apps/cli/utilities/wksp0/.gitignore b/apps/cli/utilities/wksp0/.gitignore
deleted file mode 100644
index 3c727377a53d74ddb9702615acd9bf34bc069659..0000000000000000000000000000000000000000
--- a/apps/cli/utilities/wksp0/.gitignore
+++ /dev/null
@@ -1,8 +0,0 @@
-.idea
-__pycache__
-*.egg-info
-*.iml
-.eggs
-.DS_Store
-architecture/*.html
-*~
diff --git a/apps/cli/utilities/wksp0/=8.9 b/apps/cli/utilities/wksp0/=8.9
deleted file mode 100644
index 113006dcd6c4e073e7d7edeacfe5cb236215ede4..0000000000000000000000000000000000000000
--- a/apps/cli/utilities/wksp0/=8.9
+++ /dev/null
@@ -1,4 +0,0 @@
-Collecting package metadata (current_repodata.json): ...working... done
-Solving environment: ...working... failed with initial frozen solve. Retrying with flexible solve.
-Collecting package metadata (repodata.json): ...working... done
-Solving environment: ...working... failed with initial frozen solve. Retrying with flexible solve.
diff --git a/apps/cli/utilities/wksp0/README.md b/apps/cli/utilities/wksp0/README.md
deleted file mode 100644
index c5ee93830edd84285713807e8deb0cab176d5031..0000000000000000000000000000000000000000
--- a/apps/cli/utilities/wksp0/README.md
+++ /dev/null
@@ -1,115 +0,0 @@
-# Workspace Prototype 0
-
-This package is a prototype of the workspace system. It is intended to:
-
- - Demonstrate unit, architectural and integration tests
- - Demonstrate an end-to-end flow of information from a capability request through
-   execution on the HTCondor cluster
- - Answer questions about the design
-
-It is not intended to be:
- 
- - Feature complete
- - Easy to use
- - Beautiful 
- - Useful
- 
-In fact, steps have been taken to ensure that this will not be useful in the long-term.
-
-## Building
-
-Make sure you have Conda installed.
-
-1. `conda create -n wksp0 python=3`
-2. `conda activate wksp0`
-3. `python setup.py develop`
-
-## Running Workflows
-
-Make sure you have the software built, and be on the machine `testpost-master`.
-
-1. `run_workflow grep-uniq '{"search":"username"}' /home/casa/capo/nmtest.properties`
-
-The workflow will execute, but the prototype is not yet smart enough to know when the workflow is complete.
-
-### HTCondor notes
-
-#### Running jobs
-
-##### Transferring files
-
-1. `transfer_input_files = ...` does not cause files to be transferred unless `should_transfer_files = YES` or `should_transfer_files = IF_NEEDED` is set.
-2. If `should_transfer_files = [YES|IF_NEEDED]`, your `executable = ...` will also get transferred. If the OS does not match, this will lead to interesting problems.
-
-As a result of this discovery, it seems wise to *always* supply a shell script as your executable (to avoid platform issues).
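-
-For example, a minimal submit description that follows these observations might look like this (file names are hypothetical):
-
-    # wrapper.sh is a shell script, so the transferred executable is platform-neutral
-    executable = wrapper.sh
-    arguments = metadata.json
-
-    # transfer_input_files has no effect unless should_transfer_files is YES or IF_NEEDED
-    should_transfer_files = YES
-    when_to_transfer_output = ON_EXIT
-    transfer_input_files = metadata.json
-
-    output = job.out
-    error = job.err
-    log = condor.log
-    queue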
-
-##### Logging
-
-HTCondor will not write a log file to /tmp. I'm not entirely sure why this is, but for now it seems
-prudent to put your log files on a network filesystem.
-
-#### Running DAGs
-
-One significant good thing here: a file mentioned in `output = ...` or `transfer_output_files = ...` of one job
-can be mentioned in `transfer_input_files = ...` of a subsequent job (a `CHILD` in the DAG), and this appears to work correctly.
-
-##### Logging
-
-`condor_submit_dag` always writes a workflow logfile to the supplied dag filename + `.dagman.log`. This is
-in the same format as the regular HTCondor log files and can be parsed the same way. The entire workflow should
-create the following events: `SUBMIT`, `EXECUTE`, `JOB_TERMINATED`. Between `EXECUTE` and `JOB_TERMINATED`, 
-events will appear in the job log files.
-
-It's totally OK to have all the jobs writing events to the same `condor.log`; this is the way I have it set
-up currently. Each job will produce a sequence of events similar to the workflow's, but appears to include
-an extra `IMAGE_SIZE` event for some reason. Overall, we get a flow of events that looks something like this:
-
-    ┌────────────────┬─────────────────────────────────────────┐
-    │ time           │ -1-2-3-4-5-6-7-8-9-0-1-2-3-4-5-6-7-8--> │
-    ├────────────────┼─────────────────────────────────────────┤ 
-    │ foo.dagman.log │  S E                            JT      │
-    │ condor.log     │       S E IS JT  ... S E IS JT          │
-    └────────────────┴─────────────────────────────────────────┘
-
-I suspect handling this in Python is going to be a little stupid, probably involving two threads 
-sending events to something which is acting as a generator.
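-
-Something like this rough sketch, perhaps (illustrative only; a real implementation would parse the
-HTCondor event format rather than yielding raw lines):
-
-    import queue
-    import threading
-    import time
-
-    def tail(path, sink):
-        """Follow a log file and push each new line into the shared queue."""
-        with open(path) as f:
-            while True:
-                line = f.readline()
-                if line:
-                    sink.put((path, line.rstrip()))
-                else:
-                    time.sleep(1)
-
-    def merged_events(*paths):
-        """Generator yielding (logfile, line) pairs from all the logs as they appear."""
-        sink = queue.Queue()
-        for path in paths:
-            threading.Thread(target=tail, args=(path, sink), daemon=True).start()
-        while True:
-            yield sink.get()
-
-    # for source, line in merged_events("foo.dagman.log", "condor.log"):
-    #     print(source, line)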
-
-## Testing
-
-1. `conda activate wksp0`
-2. `python setup.py test`
-
-## Testing TODOs
-
-- ☑ Unit tests
-- ☑ Architectural tests
-- ☑ Integration tests
-
-### Architectural Tests
-
-The main idea here is to have unit tests that prevent, or at least loudly call
-attention to, changes to the design and architecture of the system. They do not
-have to be perfect; they just have to plausibly prevent new interface methods
-from appearing and violating separation of concerns.
-
-At the moment, we have one in `tests/architectural/test_ifaces.py` which shows
-how to ensure that an interface has exactly two methods on it, with the 
-requisite arguments.
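-
-A simplified version of the idea looks something like this (not the actual test; the interface and
-its methods here are purely illustrative):
-
-    import inspect
-    import unittest
-
-    class FakeWorkflowService:
-        """Stand-in for an interface whose shape we want to pin down."""
-        def execute(self, workflow_name, files): ...
-        def status(self, workflow_id): ...
-
-    class TestInterfaceShape(unittest.TestCase):
-        def test_has_exactly_two_methods_with_expected_arguments(self):
-            methods = {name: fn for name, fn in
-                       inspect.getmembers(FakeWorkflowService, inspect.isfunction)
-                       if not name.startswith("_")}
-            self.assertEqual(sorted(methods), ["execute", "status"])
-            self.assertEqual(list(inspect.signature(methods["execute"]).parameters),
-                             ["self", "workflow_name", "files"])
-            self.assertEqual(list(inspect.signature(methods["status"]).parameters),
-                             ["self", "workflow_id"])
-
-    if __name__ == "__main__":
-        unittest.main()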
-
-## End-to-end flow
-## Design questions
-
-Building the prototype has revealed a few more questions about the design.
-
-1. What is the interface between the capability engine and the rest of the system?
-
-2. How do supplied product locators make it into the capability steps? Or do they?
-
-3. Are living threads needed for capability execution, or can we find an event-driven solution that won't require
-   them?
-
-It seems as though in the prototype we need a thread to execute the capability and another thread to catch events
-so that the capability step can walk through its own state model. This is the kind of tricky thing that would probably
-be helpful for developers to see spelled out in a prototype, but it also seemed likely to become a deep rabbit hole, so
-I sort of dodged the question for the prototype.
-
diff --git a/apps/cli/utilities/wksp0/alloy/Capabilities.als b/apps/cli/utilities/wksp0/alloy/Capabilities.als
deleted file mode 100644
index 348b08af4b4cb2bdb6881a89bc6ba417f78629a4..0000000000000000000000000000000000000000
--- a/apps/cli/utilities/wksp0/alloy/Capabilities.als
+++ /dev/null
@@ -1,4 +0,0 @@
-one sig CapabilityService {}
-sig CapabilityEngine {}
-sig CapabilityStep {}
-sig CapabilityRequest {}
diff --git a/apps/cli/utilities/wksp0/alloy/Workflows.als b/apps/cli/utilities/wksp0/alloy/Workflows.als
deleted file mode 100644
index badda206229f7db9c4efc6cc2a3f099b861a49e4..0000000000000000000000000000000000000000
--- a/apps/cli/utilities/wksp0/alloy/Workflows.als
+++ /dev/null
@@ -1,3 +0,0 @@
-one sig WorkflowService {}
-one sig HTCondor {}
-sig DagmanWorkflow {}
diff --git a/apps/cli/utilities/wksp0/alloy/Workspace.als b/apps/cli/utilities/wksp0/alloy/Workspace.als
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/apps/cli/utilities/wksp0/architecture/Charge.org b/apps/cli/utilities/wksp0/architecture/Charge.org
deleted file mode 100644
index a949a813cfd34eed7e410c6deaa852a61f44b005..0000000000000000000000000000000000000000
--- a/apps/cli/utilities/wksp0/architecture/Charge.org
+++ /dev/null
@@ -1,18 +0,0 @@
-#+TITLE: Response to the Critical Design Review Charge
-#+AUTHOR: Daniel K Lyons <dlyons@nrao.edu>
-#+DATE: 2020-01-23
-
-* Introduction
-
-Quoting from [[https://open-confluence.nrao.edu/display/Arch/Workspaces+Critical+Design+Review][Workspaces Critical Design Review]]:
-
-#+BEGIN_QUOTE
-The panel is charged with assessing the readiness of the SSA Workspace system to begin implementation, in particular:
-
-- Are the L1 requirements traceable to the conceptual requirements (L0)?  Are L2 requirements appropriately derived from the L1s? Are there any significant gaps in the requirements?
-- Does the architecture presented satisfy the requirements?  Is it appropriate for the task? Are architectural choices clearly identified and motivated?
-- Does the implementation team clearly understand the work to be done?  Are the detailed tasks clearly defined and estimated?
-- Is the implementation and integration plan sufficiently detailed and realistic?
-- Is the planned testing, including unit and integration, sufficient? Is the framework for executing those tests already implemented, or planned for implementation on a realistic and suitable time frame?
-#+END_QUOTE
-
diff --git a/apps/cli/utilities/wksp0/architecture/Design-Iterations.org b/apps/cli/utilities/wksp0/architecture/Design-Iterations.org
deleted file mode 100644
index 3e797a3ffc2450a7039e01e4d4cfa07cbcabcb20..0000000000000000000000000000000000000000
--- a/apps/cli/utilities/wksp0/architecture/Design-Iterations.org
+++ /dev/null
@@ -1,1588 +0,0 @@
-#+title: Workspace Architecture: Design Iterations
-#+author: Daniel K Lyons
-#+email: dlyons@nrao.edu
-#+date: 2019-11-26
-#+SETUPFILE: https://fniessen.github.io/org-html-themes/setup/theme-readtheorg.setup
-#+HTML_HEAD_EXTRA: <link rel="stylesheet" type="text/css" href="extra.css" />
-#+OPTIONS: H:5 ':t
-
-* Architectural Drivers
-  :PROPERTIES:
-  :UNNUMBERED: t
-  :END:
-
-For an overview of the architecture, please consult the [[./Overview.org][Workspace Architecture: Overview]] document.
-
-** Primary Requirements
-
-1. Cameo Systems Modeler SRDP Requirements
-2. Cameo Systems Modeler SRDP System Architecture
-
-
-** Architecturally Significant Requirements
-
-| ID          | Requirements                                                       | ASR                                                                                                                         |
-|-------------+--------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------|
-| <<<ASR-1>>> | SRDP-L1-6.1, SRDP-L1-6.2, SRDP-L1-6.11, SRDP-L1-6.12, SRDP-L1-6.13 | Workspaces must provide a user-facing reviewable, cancellable processing facility that includes estimates of resource usage |
-| <<<ASR-2>>> | SRDP-L1-6.5, SRDP-L1-6.7                                           | Workspaces must support system-defined and large project-defined triggered processing                                       |
-| <<<ASR-3>>> | <SSA requirement>                                                  | Selectable list of capabilities should be informed by chosen products and CASA versions                                     |
-| <<<ASR-4>>> | <SSA requirement>                                                  | Products must bear provenance information: how they were made, from what inputs, and with what software versions            |
-
-
-** Quality Attribute Scenarios (Inferred)
-
-| ID         | QA              | Scenario                                                                                                                                                                                             | Associated Use Case |
-|------------+-----------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------|
-| <<<QA-1>>> | Modifiability   | DAs should be able to add or modify capabilities (specifically to commission new CASA and CASA pipeline releases) at runtime without software changes, downtime or interfering with other processing | TBD                 |
-| <<<QA-2>>> | Availability    | Running capabilities must be isolated from changes to the system and continue executing during system restarts                                                                                       | SRDP-L0-11.4        |
-| <<<QA-3>>> | Reproducibility | It should be possible to recreate any derived product                                                                                                                                                |                     |
-| <<<QA-4>>> | Traceability    | Users should be able to see state changes to their running requests                                                                                                                 |                     |
-
-
-*** Quality attribute scenarios in depth
-
-| ID   | QA              | Source | Stimulus                                                   | Env                                    | Response                       | Metric                                                                |
-|------+-----------------+--------+------------------------------------------------------------+----------------------------------------+--------------------------------+-----------------------------------------------------------------------|
-| QA-1 | Modifiability   | DA     | Want to attempt standard calibration with new CASA release | Running normally                       | Changes are made to capability | Zero downtime                                                         |
-| QA-1 | Modifiability   | DA     | Wants to create a new capability                           | Running normally                       | New capability added to system | Zero downtime, zero running requests affected                         |
-| QA-2 | Availability    | DA     | Same as QA-1                                               | Running normally                       | Changes are made to capability | Zero running requests affected                                        |
-| QA-3 | Reproducibility | User   | Wants to recreate a product                                | Running normally with existing product | New product generated          | Zero scientific differences between new and old product               |
-| QA-4 | Traceability    | User   | Wants to see running requests                              | User has active requests               | User can see running requests  | State changes to running requests propagate to user within 5 seconds. |
-
-
-** Constraints
-
-| ID          | Requirement       | Constraint                                                                                             |
-|-------------+-------------------+--------------------------------------------------------------------------------------------------------|
-| <<<CON-1>>> | SRDP-L1-6.15      | Support [[https://htcondor.readthedocs.io/en/latest/][HTCondor]] and [[https://opensciencegrid.org/][Open Science Grid]]                                                                 |
-| <<<CON-2>>> | <SSA requirement> | Provide migration path for existing workflows                                                          |
-| <<<CON-3>>> | <SSA requirement> | Provide a migration path for VLASS manager                                                             |
-| <<<CON-4>>> | <SSA requirement> | Imaging and calibration are performed by CASA, so CASA versions must be tracked and used appropriately |
-| <<<CON-5>>> | SRDP-L1-6.11      | Use the Kayako science helpdesk to facilitate communication between staff and users                    |
-
-** Concerns
-
-| ID          | Concern                                                                                                                             |
-|-------------+-------------------------------------------------------------------------------------------------------------------------------------|
-| <<<CRN-1>>> | Establish an overall initial system architecture. (Iteration 1)                                                                     |
-| <<<CRN-2>>> | As SSA Architect, provide consistent technical direction for software systems developed and maintained by DMS.                      |
-| <<<CRN-3>>> | Leverage SSA team’s knowledge about TODO technology.                                                                                |
-| <<<CRN-4>>> | Allocate development to members of the SSA team, some of whom are remote.                                                           |
-| <<<CRN-5>>> | A majority of domain objects/modules shall be unit tested via automated testing (CI/CD).                                            |
-| <<<CRN-6>>> | Workspace application development will utilize a continuous integration process with clear release versions for delivered software. |
-
-** Requirement Satisfaction
-
-| Requirement    | Iterations   |
-|----------------+--------------|
-| SRDP-L0-11     |              |
-| SRDP-L0-11.2   | [[Iteration 9][9]], [[Iteration 14][14]]        |
-| SRDP-L0-11.3   | [[Iteration 9][9]]            |
-| SRDP-L0-11.4   | [[Iteration 9][9]]            |
-| SRDP-L0-11.5   |              |
-| SRDP-L1-5.3    | [[Iteration 17][17]]           |
-| SRDP-L1-6.1    | [[Iteration 10][10]], [[Iteration 17][17]]       |
-| SRDP-L1-6.2    | [[Iteration 9][9]]            |
-| SRDP-L1-6.3    | [[Iteration 5][5]]            |
-| SRDP-L1-6.4    | [[Iteration 5][5]], [[Iteration 15][15]]        |
-| SRDP-L1-6.5    | [[Iteration 5][5]], [[Iteration 15][15]]        |
-| SRDP-L1-6.6    | [[Iteration 11][11]]           |
-| SRDP-L1-6.6.1  | [[Iteration 5][5]]            |
-| SRDP-L1-6.6.2  | [[Iteration 11][11]]           |
-| SRDP-L1-6.6.3  | [[Iteration 11][11]]           |
-| SRDP-L1-6.7    | [[Iteration 12][12]]           |
-| SRDP-L1-6.7.1  | [[Iteration 12][12]]           |
-| SRDP-L1-6.8    | [[Iteration 2][2]]            |
-| SRDP-L1-6.9    | [[Iteration 13][13]]           |
-| SRDP-L1-6.9.1  | [[Iteration 13][13]]           |
-| SRDP-L1-6.10   | [[Iteration 5][5]], [[Iteration 8][8]], [[Iteration 15][15]]     |
-| SRDP-L1-6.10.1 | [[Iteration 8][8]]            |
-| SRDP-L1-6.11   | CON-5, [[Iteration 9][9]], [[Iteration 10][10]] |
-| SRDP-L1-6.12   | [[Iteration 4][4]]            |
-| SRDP-L1-6.13   | [[Iteration 9][9]]            |
-| SRDP-L1-6.13.1 | [[Iteration 9][9]]            |
-| SRDP-L1-6.13.2 | [[Iteration 6][6]]            |
-| SRDP-L1-6.14   | [[Iteration 6][6]]            |
-| SRDP-L1-6.15   | CON-1, [[Iteration 9][9]]     |
-| SRDP-L1-8      | [[Iteration 7][7]]            |
-| SRDP-L1-8.6    | [[Iteration 8][8]]            |
-| SRDP-L1-8.7    | [[Iteration 7][7]], [[Iteration 8][8]]         |
-| SRDP-L1-8.8    | [[Iteration 7][7]]            |
-| SRDP-L1-8.8.1  | [[Iteration 7][7]]            |
-| SRDP-L1-8.9    | [[Iteration 7][7]], [[Iteration 10][10]]        |
-| SRDP-L1-8.10   | [[Iteration 7][7]]            |
-| SRDP-L1-8.10.1 | [[Iteration 12][12]]           |
-| SRDP-L1-11     | [[Iteration 4][4]]            |
-| SRDP-L1-11.1   | [[Iteration 4][4]]            |
-| SRDP-L1-11.2   | [[Iteration 4][4]]            |
-| SRDP-L1-12     |              |
-| SRDP-L1-12.6   | [[Iteration 4][4]]            |
-| SRDP-L1-13     | [[Iteration 15][15]]           |
-| SRDP-L1-6.16   | [[Iteration 17][17]]           |
-| SRDP-L1-6.16.1 | [[Iteration 17][17]]           |
-| SRDP-L1-6.16.2 | [[Iteration 17][17]]           |
-
-* <<Iteration 1>>Conceptual Architecture
-
-** Review Inputs
-
-- Design Purpose :: Produce a conceptual architecture to support system construction and meet DMS/SRDP verification and validation requirements. 
-
-  Conceptual architecture is the most abstract model: it focuses primarily on structures, highlights relationships between key components (not how they work), and contains no implementation details.
-
-- Primary Functional Requirements :: SRDP-L1-6
-- ASRs :: ASR-1
-- QAs :: N/A
-- Constraints :: CON-1
-- Concerns :: CRN-1
-
-** Establish Iteration Goals and Select Drivers
-
-The goal of this iteration is to address CRN-1, “Establish an overall initial system architecture.”
-
-** Choose System Element(s) to Refine
-
-The element to refine is the entire Workspace system. 
-
-** Choose Design Concept(s) that Satisfy Drivers
-
-- Design Decision :: Structure the workspace system in a client-server architecture
-- Rationale :: There are shared resources and services that large numbers of distributed clients wish to access, and for which we wish to control access or quality of service
-
-** Instantiate Architecture Elements, Allocate Responsibilities, Define Interfaces
- 
-*** Design Decision: Clients
-
-Rationale: Clients initiate interactions with servers by invoking services as needed and waiting on results.
-
-The client of the workspace is the entity which can make requests for processing.
-
-*** Design Decision: Server
-
-Servers provide services to distributed clients.
-
-The workspace server furnishes processing services to clients. It receives the requests, executes them, tracks their progress and provides information about them back to clients.
-
-** Sketch Views and Record Design Decisions
-
-This is a view of the workspace system context.
-
-[[./images/image1.png]]
-
-Based on this view and because the system is clearly about providing access to shared resources, the choice of a client-server architecture is warranted. At the risk of providing highly obvious information, here is a common sketch of client-server architecture from Wikipedia:
-
-[[./images/image2.png]]
-
-** Analyze Current Design, Review Iteration Goal
-
-The decisions made in this iteration satisfy CRN-1. They also address CON-1, because Open Science Grid support is allocated as an external system to the workflow system. ASR-1 is addressed by the workspace system itself, to be elaborated in future iterations.
-
-* <<Iteration 2>>Web Architecture
-
-** Review Inputs
-
-- Design Purpose :: Refine the server component of the workspace system
-- Primary Functional Requirements :: SRDP-L1-6, SRDP-L1-6.8
-- ASRs :: ASR-1
-- QAs :: N/A
-- Constraints :: N/A
-- Concerns :: N/A
-
-** Establish Iteration Goals and Select Drivers
-
-The goal of this iteration is to establish the architecture of the workspace server.
-
-** Choose System Element(s) to Refine
-
-The element to refine is the Workspace server. 
-
-** Choose Design Concept(s) that Satisfy Drivers
-
-- Design Decision :: Structure the workspace system in a web application architecture
-- Rationale :: Web applications provide access to remote users over the internet, do not require users to install software, and are a core competency of the implementation group
-
-** Instantiate Architecture Elements, Allocate Responsibilities, Define Interfaces
- 
-| Component Name    | Role          | Rationale                                                                                                                                                                         |
-|-------------------+---------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| Workspace UI      | UI            | The workspace user interface provides the interactive components to users to make their processing requests.                                                                      |
-| Capability System | REST services | The user interface communicates with REST services to make requests and display their state. Internal applications can also make use of the services without going through the UI |
-| Capability Info   | Data Access   | The service layer looks up and persists its information via the Capability Info                                                                                                   |
-
-** Interfaces
-
-The workspace UI forms an interface which will provide humans with access to the underlying services of the capability system.
-
-The capability system will provide several REST endpoints as follows:
-
-| Endpoint             | Verbs             | Rationale                                                                                                                                                              |
-|----------------------+-------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| /capability          | GET, POST         | Access list of capabilities; create capabilities                                                                                                                       |
-| /capability/{ID}     | GET, POST, DELETE | Obtain descriptions of, modify, or delete capabilities                                                                                                                 |
-| /request             | GET, POST         | Access lists of capability requests, create new requests                                                                                                               |
-| /request/{ID}        | GET, POST         | Access description and state of capability request; modify capability requests                                                                                         |
-| /request/{ID}/params | GET, POST         | Access and modify parameters to request                                                                                                                                |
-| /request/{ID}/submit | POST              | Submit request                                                                                                                                                         |
-| /request/{ID}/cancel | POST              | Cancel request. DELETE on the request resource would suggest that the request ceases to exist; since that is not the case, we have a separate cancellation action here |
-
-HTTP status codes will be used as appropriate. Successful submission of a request will return 202 Accepted, as the processing will not be complete even though the request itself is.
-
-We do not anticipate supporting a wide breadth of content-types. JSON is likely to be sufficient for internal users and for the Workspace UI. This design choice allows for expanding future support for external users by creating XML representations and schemata for them.
-
-REST services are best when they are stateless. As authentication and authorization form a concern here, these REST services will respect the small amount of state needed to provide logged-in users different functionality than unauthenticated users.
-
-Caching may be implementable for these endpoints but it is not required by the design.
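-
-For illustration, a client interaction with these endpoints might look like the following sketch
-(Python with ~requests~; the base URL, payload fields and capability name are hypothetical, not part
-of the design):
-
-#+BEGIN_SRC python
-import requests
-
-BASE = "http://localhost:8080"
-
-# Create a new capability request and supply its parameters
-req = requests.post(f"{BASE}/request", json={"capability": "std_calibration"}).json()
-req_id = req["id"]
-requests.post(f"{BASE}/request/{req_id}/params", json={"product_locator": "uid://..."})
-
-# Submit it; 202 Accepted means the request was taken but processing continues
-response = requests.post(f"{BASE}/request/{req_id}/submit")
-assert response.status_code == 202
-#+END_SRC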
-
-** Sketch Views and Record Design Decisions
-
-This is a view of the components and their interactions
-
-[[./images/image3.png]]
-
-** Analyze Current Design, Review Iteration Goal
-
-This iteration refined the workspace server. Separating the UI from services via a REST interface is common-sense architecture today. It also enables internal clients to access the same resources as external users.
-
-* <<Iteration 3>>Pub-Sub Messaging
-
-** Review Inputs
-
-- Design Purpose :: Refine the server component of the workspace system
-- Primary Functional Requirements :: SRDP-L1-6
-- ASRs :: ASR-1
-- QAs :: QA-2
-- Constraints :: N/A
-- Concerns :: N/A
-
-** Establish Iteration Goals and Select Drivers
-
-The goal of this iteration is to establish the messaging component of the workspace server.
-
-** Choose System Element(s) to Refine
-
-The element to refine is the Workspace server. 
-
-** Choose Design Concept(s) that Satisfy Drivers
-
-- Design Decision :: Use publish-subscribe messaging for events in the workspace server
-- Rationale :: Publish-subscribe maintains a strong decoupling between the components of the system, allowing new facilities to gain information about workspace processing without having to modify the workspace system itself or endanger its availability
-- Design Decision :: Externally-developed component
-- Rationale :: Third-party AMQP system provides significant functionality, is well-understood and already used by the archive and the existing workflows
-
-** Instantiate Architecture Elements, Allocate Responsibilities, Define Interfaces
- 
-| Message                                                 | Rationale                                                                                     |
-|---------------------------------------------------------+-----------------------------------------------------------------------------------------------|
-| Send: processing request start, finish, task processing | UI can update based on these events, resource estimates can be computed based on these events |
-| Receive: new data available                             | Service can begin processing when new data or products become available in the archive        |
-
-*** External component: AMQP
-
-AMQP realizes our requirement for a pub-sub messaging component. The team has extensive experience using AMQP over the last three or four years. The system is very robust and works well.
-
-*** Responsibility: Handling messaging failures
-
-As in any distributed system, there are several failure modes that need to be addressed. In the pub-sub messaging system, we have essentially a Sender, the messaging system, and a Receiver, and an error can occur on any of these. What happens?
-
-1. *Sender fails to send.* This could be due to sender misconfiguration. The message goes nowhere. Presumably the system enters a deadlock state which is straightforward to diagnose.
-2. *Messaging system offline.* The client libraries for AMQP have options for handling this situation. A common and acceptable stop-gap is simply blocking until the system comes back online. This is a function that has been used in anger in the current archive.
-3. *Receiver fails to receive.* This can happen because the receiver is offline, or because the receiver is misconfigured. The AMQP system can be configured with durable and persistent queuing, which ensures that messages are recorded on-disk. Whenever the receiver comes back online, the backlog of messages will be passed to it.
-
-    The same functionality ensures that the AMQP system can be restarted without affecting queue state.
-
-Our experience with the archive and VLASS manager shows that the AMQP system we use is robust and reliable. Any significant issues we have found with it were ultimately our own fault. The durable and persistent queuing system has already proven useful for workflow restarts. We expect we can expand our usage of this technology safely.
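-
-As a concrete sketch of the durable/persistent configuration (assuming a Python AMQP client such as
-pika purely for illustration; exchange and queue names are made up):
-
-#+BEGIN_SRC python
-import pika
-
-connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
-channel = connection.channel()
-
-# A durable queue survives broker restarts; persistent messages survive along with it.
-channel.queue_declare(queue="workspace.events", durable=True)
-channel.basic_publish(
-    exchange="",
-    routing_key="workspace.events",
-    body='{"event": "processing-request-start"}',
-    properties=pika.BasicProperties(delivery_mode=2),  # 2 = persistent
-)
-connection.close()
-#+END_SRC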
-
-** Sketch Views and Record Design Decisions
-
-[[./images/image4.png]]
-
-** Analyze Current Design, Review Iteration Goal
-
-This iteration refined the workspace server by applying a pub-sub messaging pattern.
-
-* <<Iteration 4>>Estimation
-
-** Review Inputs
-
-- Design Purpose :: Define the estimation component of the workspace system
-- Primary Functional Requirements :: SRDP-L1-6.12, SRDP-L1-11, SRDP-L1-11.1, SRDP-L1-11.2, SRDP-L1-12.6
-- ASRs :: ASR-1
-- QAs :: QA-2
-- Constraints :: N/A
-- Concerns :: N/A
-
-** Establish Iteration Goals and Select Drivers
-
-The goal of this iteration is to establish the estimation component of the workspace server.
-
-** Choose System Element(s) to Refine
-
-The element to refine is the Workspace server. 
-
-** Choose Design Concept(s) that Satisfy Drivers
-
-- Design Decision :: Apply the Aggregator pattern
-- Rationale :: Aggregators take multiple messages from an event source and combine them into some kind of aggregate value.
-
-** Instantiate Architecture Elements, Allocate Responsibilities, Define Interfaces
- 
-*** Design Decision: Estimator aggregation
-
-Rationale: Estimator aggregates messages from the workflow: start messages with data volume and processing options, end messages to compute time taken.
-
-*** Design Decision: Estimator interface
-
-Rationale: UI requests estimates based on data volume and processing options; Estimator replies based on information it has aggregated thus far.
-
-| Method      | Arguments | Rationale                                                                    |
-|-------------+-----------+------------------------------------------------------------------------------|
-| estimate    | request   | Provides the UI a way to expose an estimate of how long the request will take |
-| isExpensive | request   | Provides the capability system with a way of knowing whether or not this request requires an additional review per SRDP-L1-12.6. |
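-
-A minimal sketch of this interface (names are Python adaptations of the table above):
-
-#+BEGIN_SRC python
-from abc import ABC, abstractmethod
-
-class Estimator(ABC):
-    @abstractmethod
-    def estimate(self, request):
-        """Return a rough estimate of how long this capability request will take."""
-
-    @abstractmethod
-    def is_expensive(self, request):
-        """Return True if this request requires additional review per SRDP-L1-12.6."""
-#+END_SRC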
-
-*** Metrics
-
-The Estimator's recording must be sufficient to fulfill SRDP-L1-11, SRDP-L1-11.1 and SRDP-L1-11.2.
-
-** Sketch Views and Record Design Decisions
-
-[[./images/image5.png]]
-
-** Analyze Current Design, Review Iteration Goal
-   :PROPERTIES:
-   :CUSTOM_ID: Estimate-Analysis
-   :END:
-
-
-This design fulfills SRDP-L1-6.12 and partially fulfills ASR-1 by providing estimates. The estimator will also be responsible for recording the SRDP-L1-11.1 and SRDP-L1-11.2 metrics and performing the determination about whether the request is large, per SRDP-L1-12.6.
-
-We have interpreted the requirement to provide estimates as suggesting that we provide rough estimates. The design constraint CON-1 of using Open Science Grid will preclude us from having detailed information about the executing environment and its hardware, which trades off against making accurate predictions about performance. We assume the intent of this estimate is to gently nudge users towards cheaper processing choices (e.g. choosing restoration over reprocessing) rather than telling users exactly how long they can wait before phoning the director to complain. This is a risk.
-
-* <<Iteration 5>>Requests and Queues
-** Review Inputs
-
-- Design Purpose :: Refine the capability system to define capability requests
-- Primary Functional Requirements :: SRDP-L1-6.3, SRDP-L1-6.4, SRDP-L1-6.5, SRDP-L1-6.6.1 and SRDP-L1-6.10
-- ASRs :: ASR-2, ASR-1
-- QAs :: N/A
-- Constraints :: CON-1
-- Concerns :: N/A
-
-** Establish Iteration Goals and Select Drivers
-
-The goal of this iteration is to solve triggered processing.
-
-** Choose System Element(s) to Refine
-
-The element to refine is the Capability service. 
-
-** Choose Design Concept(s) that Satisfy Drivers
-
-- Design Decision :: Apply the Command pattern
-- Rationale :: Command allows you to nominalize an action so that the action can be examined or deferred. “It should be possible to configure an object (that invokes a request) with a request.”
-
-** Instantiate Architecture Elements, Allocate Responsibilities, Define Interfaces
- 
-*** Domain Object: Capability Request
-Rationale: Capability requests for triggered processing or system-processing can be created ahead of time with their execution deferred until triggers occur or requisite products become available. Capability requests for users constitute their intent and may encompass many attempts to produce their desired product.
-
-*** Domain Object: Capability Queue
-
-Rationale: Retains requests until they can be executed. Organizes requests in priority order. Allows throttling.
-
-
-*** Responsibility: Hand-off to Workspaces
-
-The archive (or any other subsystem sending a capability request) must furnish the type of the data alongside the data. This is needed to determine which capabilities are available and to ensure that they are well-typed.
-
-** Sketch Views and Record Design Decisions
-
-[[./images/image6.png]]
-
-** Analyze Current Design, Review Iteration Goal
-
-The design fulfills requirements SRDP-L1-6.3, SRDP-L1-6.4, SRDP-L1-6.5, SRDP-L1-6.6.1 and SRDP-L1-6.10. This design adds a command-and-queue pattern. The command represents capability requests. This enables the system to hold pre-created requests for triggered processing, as well as holding requests that exceed the defined threshold for running processes. The requests will dwell in the queue until they can be executed. Additionally, the use of a priority queue ensures that high-priority processing will be performed before normal processing.
-
-*** Discussion on different approaches to hand-off
-
-Hand-off deals with the moment that data is identified by some other system and introduced into the workspace. There are only three ways this can occur:
-
-1. The selected data is all that is provided
-2. The selected data and its type are provided
-3. The selected data and a capability name are provided
-
-The difference between 1 and 2 is that in 1, we impose a service on the archive to discover the type of some data. The workspace system must "phone home" to the archive to discover the type of the data; coupling is increased and an extra call is inserted between the two steps.
-
-The difference between 1 and 3 is that in 3, we impose on ourselves a service to determine what capabilities are available for a given type, from which the archive must choose. This raises coupling again, but in the reverse direction, as the archive or other subsystem must "phone us" and find out what capabilities are on offer.
-
-The least coupled solution appears to be 2, where the archive (or other subsystem) furnishes us with the data and a fact about its type. From there, we can present the capability options and begin collecting arguments.
-
-Capability typing is elaborated in [[Iteration 19]].
-
-* <<Iteration 6>>Authentication, Authorization and Proprietary Data
-** Review Inputs
-
-- Design Purpose :: Refine the workspace system to address authentication, authorization and proprietary data access
-- Primary Functional Requirements :: SRDP-L1-8.8.1, SRDP-L1-6.14, SRDP-L1-6.13.2
-- ASRs :: ASR-1, ASR-2
-- QAs :: N/A
-- Constraints :: N/A
-- Concerns :: N/A
-
-** Establish Iteration Goals and Select Drivers
-
-The goal of this iteration is to address security concerns relating to authentication, authorization, and access to proprietary data.
-
-** Choose System Element(s) to Refine
-The element to refine is the Workspace system. 
-
-** Choose Design Concept(s) that Satisfy Drivers
-
-- Design Decision :: Apply the service pattern
-- Rationale :: A service is a discrete unit of functionality that can be accessed remotely and acted upon and updated independently
-
-** Instantiate Architecture Elements, Allocate Responsibilities, Define Interfaces
- 
-- Service :: Authentication/Authorization Service
-- Rationale :: Provide remote clients with the ability to authenticate and authorize access to products. Also provide the ability to create opaque auth tokens that can be passed around and validated by services without leaking credentials to subsystems.
-
-*** Interfaces
-
-The authentication/authorization service will be a REST API with the following endpoints:
-
-| Endpoint             | Verbs | Rationale                                                                           |
-|----------------------+-------+-------------------------------------------------------------------------------------|
-| /authenticate        | POST  | Authenticate a user                                                                 |
-| /generate-token      | POST  | Produce a user authentication token for downstream processing                       |
-| /verify-token        | GET   | Verify whether a user authentication token is valid for a particular user           |
-| /authorize/{locator} | POST  | Authorize the user implied by the supplied token for access to this product locator |
-
-Authentication endpoints will need to handle CAS authorization against both the ALMA and NRAO CAS domains.
-
-Token-related endpoints will include headers indicating when the token will expire.
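-
-A sketch of how a downstream component might use these endpoints (the base URL, parameter names and
-header are illustrative only):
-
-#+BEGIN_SRC python
-import requests
-
-AUTH = "http://localhost:8080"
-
-# Obtain an opaque token for a user, then verify it before granting access to a product
-token = requests.post(f"{AUTH}/generate-token", json={"user": "someuser"}).json()["token"]
-check = requests.get(f"{AUTH}/verify-token", params={"user": "someuser", "token": token})
-print(check.ok, check.headers.get("Expires"))  # expiry communicated via a header
-#+END_SRC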
-
-** Sketch Views and Record Design Decisions
-
-[[./images/image7.png]]
-
-** Analyze Current Design, Review Iteration Goal
-
-The service is external to the workspace system. It is shared with the archive.
-
-This design adds an authentication/authorization service. This service abstracts the decision-making about whether a user is authorized for certain access, and provides a capability to downstream systems to validate access without requiring the user to present credentials or have the UI capture and leak them to downstream systems. The service is unaware of what is accessing it, so it can be used by multiple UIs. It supports the QA system by allowing users to belong to multiple classes and providing that information.
-
-There is some ambiguity about ownership here. There are discussions about creating an observatory-wide A3 service which may supersede this design. If/when that happens, it will have to be integrated here and either this component will front the wider A3 service, or this component will be replaced by it. I consider this a risk, but as this aspect of capabilities must be addressed now, we must have something fulfilling this role now. 
-
-* <<Iteration 7>>Capability Steps and QA Assignees
-
-** Review Inputs
-
-- Design Purpose :: Develop capability execution and quality assurance
-- Primary Functional Requirements :: SRDP-L1-8, SRDP-L1-8.7, SRDP-L1-8.8, SRDP-L1-8.8.1, SRDP-L1-8.9, SRDP-L1-8.10
-- ASRs :: ASR-1, ASR-2
-- QAs :: N/A
-- Constraints :: N/A
-- Concerns :: N/A
-
-** Establish Iteration Goals and Select Drivers
-
-The capability system defined in iteration 2 needs to be refined to address requirements relating to quality assurance and quality assurance state.
-
-** Choose System Element(s) to Refine
-
-The element to refine is the Capability service. 
-
-** Choose Design Concept(s) that Satisfy Drivers
-
-- Design Decision :: Domain model objects
-- Rationale :: The QA requirements necessitate the introduction of some domain objects. A domain model is a conceptual model of the domain that incorporates both behaviour and data.
-
-** Instantiate Architecture Elements, Allocate Responsibilities, Define Interfaces
-
-*** Domain Object: Assignee
-
-*Purpose*: Allows a capability request to be assigned to a staff member for monitoring.
- 
-*** Domain Object: Capability Engine
-
-*Purpose*: Executes a capability
-
-As capabilities become complex, the responsibility for handling their execution should belong to a single domain object.
-
-*** Domain Object: Capability Sequence
-
-*Purpose*: A sequence of instructions to fulfill a capability.
-
-One view of the capability is as a sequence of instructions; this domain object isolates this responsibility.
-
-*** Domain Object: Capability Step
-
-*Purpose*: A concrete step in the processing of a capability.
-
-An interface for capability steps allows new kinds of capability steps to be defined in the future.
-
-*** Domain Object: Await Product Step
-
-*Purpose*: Await the existence of products.
-
-Since requests may be created and begin executing before their products exist, we need a step to await them.
-
-This simplifies the state model for capabilities and will allow more complex product relationships with capabilities.
-
-*** Domain Object: Await QA
-
-*Purpose*: Await quality assurance
-
-Notifies the appropriate users that this capability is now waiting for QA.
-
-*** Domain Object: Prepare and Execute Workflow Step
-
-*Purpose*: Runs a workflow for its side-effects.
-
-The “real processing” of the capability will happen here, delegating to the workflow service.
-
-*** Domain Object: Await Workflow Step
-
-*Purpose*: Awaits completion of the previously executed workflow.
-
-This is a separate step from the above, to keep steps fairly atomic and free of internal state. Rather than having a single step that must track whether the workflow has been started yet, we split it in two: if the workflow appears to have been executed, we await it here; if it appears not to have been executed, then when processing resumes we will start by executing it.
-
-*** Domain Object: Await Large Allocation Approval
-
-*Purpose*: Await analyst approval before beginning processing.
-
-For large requests (requests requiring a significant amount of resources), analysts must approve the request. This step checks the estimated time of the processing using the estimator; if it exceeds a certain threshold, the step awaits a message from an analyst approving the request.
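-
-A rough sketch of the step abstraction described above (class and method names are illustrative; the
-prototype may differ):
-
-#+BEGIN_SRC python
-from abc import ABC, abstractmethod
-
-class CapabilityStep(ABC):
-    @abstractmethod
-    def execute(self, request):
-        """Perform this step for the given capability request."""
-
-class PrepareAndRunWorkflow(CapabilityStep):
-    def __init__(self, workflow_service, workflow_name):
-        self.workflow_service = workflow_service
-        self.workflow_name = workflow_name
-
-    def execute(self, request):
-        # Delegate the real processing to the workflow service for its side effects
-        self.workflow_service.execute(self.workflow_name, request.files)
-
-class CapabilitySequence:
-    """The capability as an ordered list of steps, run in order by a capability engine."""
-    def __init__(self, steps):
-        self.steps = list(steps)
-
-    def run(self, request):
-        for step in self.steps:
-            step.execute(request)
-#+END_SRC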
-
-** Sketch Views and Record Design Decisions
-
-[[./images/image8.png]]
-
-The following state diagram shows the states that capability requests may be in. 
-
-[[./images/image9.png]]
-
-** Analyze Current Design, Review Iteration Goal
-
-This iteration refined the capability service, introducing domain objects for capability execution (the engine, sequence and steps) and for assigning quality assurance to staff.
-
-The capability queue ensures that, per SRDP-L1-6.10, requests begin executing automatically when the appropriate resources become available, and per SRDP-L1-6.10.1, the queue can be paused.
-
-*** Changes from earlier revisions
-
-**** Splitting Prepare and Run Workflow into Two
-
-In an earlier revision, Prepare And Run Workflow and Await Workflow were a single step. This was revised into separate steps to make capability steps that are more atomic. Preparing and starting a workflow are anticipated to be fairly quick activities, whereas in general their results will require a significant wait. It is anticipated that the capability system will be restarted during these waits. It would be safer if it were more obvious where in the execution of a step we are during a restart; if preparing and executing has not finished yet, it can simply be redone. If it is finished, we can simply wait on the message. There will be no need to jump into some middle position inside a step.
-
-**** On handling QA as a step
-
-The design until now has been highly general. QA was unified with getting the parameters needed for a capability by the AWAIT PARAMETER step. This unification produces a few problems:
-
-1. Since AWAIT PARAMETER is a step, it is part of the state of a capability request, so it must be created before the user can supply the parameters. This creates a problem showing the form before sending a request.
-2. There is some ambiguity about who the parameter should come from, which is pushed onto the UI's plate.
-3. QA state seems like the first thing you'd want to condition further processing on in a capability sequence, so it creates pressure to complicate the capability sequence concept.
-
-A less general approach that fulfills the requirements with fewer strange consequences seems to be the one adopted above, to wit:
-
-1. Remove AWAIT PARAMETER
-2. Record the needed parameter type on the capability, so that it can be obtained statelessly prior to sending a request
-3. Create AWAIT QA as a step
-
-**** What about extra parameters?
-
-So, what happens if we need extra parameters during a capability? At a later date, AWAIT PARAMETER could be reintroduced and probably share implementation with the mechanism on the capability for obtaining the first parameter, if this turns out to be needed. There does not appear to be such a need in the current requirements.
-
-**** What about conditional processing?
-
-It appears that there are low-complexity solutions to runtime-editable capabilities /or/ capabilities with conditional logic, but not both. At this time, the only conditional processing we know we may need to support is QA, and we can push that problem down into the workflows using the QA parameter we receive. The capability sequence as a list of steps that can be edited at runtime currently appears to be more useful than a non-editable capability with full support for conditional processing.
-
-There are several directions we could go from here, but we don't have enough information to choose wisely between them, nor enough time or sufficiently convincing necessity to implement something fully general and powerful.
-
-* <<Iteration 8>>Capability Executables: Data Fetcher, CASA Wrapper, Deliverer
-
-** Review Inputs
-
-- Design Purpose :: Refine the capability system to introduce standard capabilities
-- Primary Functional Requirements :: SRDP-L1-6.10, SRDP-L1-8.6, SRDP-L1-8.10
-- ASRs :: ASR-2
-- QAs :: TBD
-- Constraints :: CON-3
-- Concerns :: TBD
-
-** Establish Iteration Goals and Select Drivers
-
-The goal of this iteration is to establish standard reusable functionalities that will be composed to form the standard capabilities described in the requirements.
-
-** Choose System Element(s) to Refine
-
-The element to refine is the Capability system. 
-
-** Choose Design Concept(s) that Satisfy Drivers
-
-- Design Decision :: Introduce reusable executables
-- Rationale :: Workflows are composed of executable tasks. Many workflows should use the same tasks, parameterized by different input control files.
-
-** Instantiate Architecture Elements, Allocate Responsibilities, Define Interfaces
- 
-*** Executable: Data Fetcher
-
-Rationale: The existing archive provides this component, which exemplifies parameterization by a machine-readable input file.
-
-*** Executable: CASA Wrapper
-
-Rationale: Every call to the CASA pipeline should have the same structure within the workflow. The CASA pipeline already does most of the interesting work here.
-
-*** Executable: Deliverer
-
-Rationale: A single executable will take a high-level description of how the requester wants data delivered, and do it the same way for all workflows.
-
-** Sketch Views and Record Design Decisions
-
-[[./images/image10.png]]
-
-** Analyze Current Design, Review Iteration Goal
-
-This design creates a many-to-many relationship between workflow definitions and reusable tasks that belong to workflows. If the tasks are sufficiently general, they can be reused across many different workflows. This design also defines several of the standard workflows which will be used to implement the standard capabilities. 
-
-* <<Iteration 9>>HTCondor
-** Review Inputs
-
-- Design Purpose :: Refine the workflow system
-- Primary Functional Requirements :: SRDP-L1-6.2, SRDP-L1-6.11, SRDP-L1-6.13, SRDP-L1-6.13.1, SRDP-L1-6.15, SRDP-L0-11.3, SRDP-L0-11.4
-- ASRs :: ASR-1
-- QAs :: QA-1, QA-2
-- Constraints :: CON-1, CON-2
-- Concerns :: N/A
-
-** Establish Iteration Goals and Select Drivers
-
-   The goal of this iteration is to clarify how HTCondor will be used and how system restarts will not affect running processing.
-
-** Choose System Element(s) to Refine
-
-   The element to refine is the workflow system.
-
-** Choose Design Concept(s) that Satisfy Drivers
-
-- Design Decision :: Explicit Interface
-- Rationale :: Protect clients from implementation details by making clients depend only on the interface.
-- Design Decision :: Externally Developed Component
-- Rationale :: Mandated by design constraints
-
-** Instantiate Architecture Elements, Allocate Responsibilities, Define Interfaces
-
-*** Workflow Interface
-
-The workflow interface will allow clients to execute workflows by name, passing along files as parameters. It will additionally allow clients to see what workflows are executing currently and what their state is, and allow cancellation.
-
-*** Externally Developed Component: HTCondor
-
-The workflow implementation will be based on HTCondor. Using HTCondor fulfills a design constraint CON-1 as well as providing a much-needed feature for free: executing workflows are not dependent on any services, so they will not be vulnerable to changes in the running system, fulfilling QA-1 and QA-2.
-
-*** Responsibility: Migrating existing workflows
-
-Design constraint CON-2 must be addressed here.
-
-Clients of the existing workflow system can be migrated to this system fairly smoothly, since both systems amount to sending a small request. If the workflow names are similar, it will be even more straightforward, but this would be a good opportunity to normalize them.
-
-The workflows themselves will require more work to migrate. The plan would be something like this:
-
-1. Ignore ArchiveWorkflowStartupTask and UpdateRequestHandlerTask, as well as any other work that maintains workflow state from inside workflows.
-2. Synthesize a Unix executable from each WorkflowTask and its constituent jobs.
-3. Replace data flow in workflows with DAG parent-child relationships
-
-Per [[Iteration 8]], it is clear that step 2 will result in at least a data fetcher, a CASA wrapper, and a deliverer. It is anticipated that the majority of capabilities will boil down to a small number of workflows, and that higher reuse will compensate for the significant work of migration.
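-
-For step 3, a hypothetical DAG for a fetch/process/deliver style workflow might replace the old
-task-to-task data flow with parent/child relationships like this (file names are made up):
-
-#+BEGIN_EXAMPLE
-JOB fetch   fetch.condor
-JOB casa    casa.condor
-JOB deliver deliver.condor
-PARENT fetch CHILD casa
-PARENT casa  CHILD deliver
-#+END_EXAMPLE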
-
-** Sketch Views and Record Design Decisions
-
-[[./images/image12.png]]
-
-** Analyze Current Design, Review Iteration Goal
-
-The current design hides HTCondor behind a high-level interface. There are system requirements that are met by HTCondor itself, but they are hidden behind this interface. Consequently, if HTCondor is replaced at a later date, the replacement will have the responsibility of fulfilling QA-1 and QA-2.
-
-It is understood how to cancel HTCondor jobs using ~condor_rm~; this feature will be exposed in the external interface and in the workflow system itself to implement SRDP-L1-6.11.
-
-* <<Iteration 10>>Kayako Helpdesk
-** Review Inputs
-
-- Design Purpose :: Refine communication between staff and users
-- Primary Functional Requirements :: SRDP-L1-6.11, SRDP-L1-8.9
-- ASRs :: N/A
-- QAs :: N/A
-- Constraints :: CON-5
-- Concerns :: N/A
-
-** Establish Iteration Goals and Select Drivers
-
-   The goal of this iteration is to develop the communication method between observatory staff and users.
-
-** Choose System Element(s) to Refine
-
-   The element to refine is the capability system.
-
-** Choose Design Concept(s) that Satisfy Drivers
-
-- Design decision :: Explicit Interface
-- Rationale :: Creating an explicit interface for the helpdesk component protects clients from becoming tightly coupled to a particular implementation, such as the Kayako helpdesk system itself
-- Design decision :: Proxy
-- Rationale :: The proxy pattern allows us to communicate with a local object that represents an external system.
-- Design Decision :: Externally developed component
-- Rationale :: Based on CON-5, we must use the Kayako-based science helpdesk
-
-** Instantiate Architecture Elements, Allocate Responsibilities, Define Interfaces
-   :PROPERTIES:
-   :CUSTOM_ID: Helpdesk-API
-   :END:
-
-*** Interface: Helpdesk
-
-| Method        | Arguments          | Rationale                                                           |
-|---------------+--------------------+---------------------------------------------------------------------|
-| delete ticket |                    | Per SRDP-L1-6.11, deleting associated tickets is a requirement      |
-| create ticket | capability request | Inferred because tickets must be created before they can be deleted |
-
-*** Interface: HelpdeskTicket
-
-| Method   | Arguments | Rationale                                                                     |
-|----------+-----------+-------------------------------------------------------------------------------|
-| add note | note      | To facilitate communication with the user, we must be able to send them notes |
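-
-A sketch of these two interfaces (method names adapted to Python; the Kayako proxies described below
-would implement them):
-
-#+BEGIN_SRC python
-from abc import ABC, abstractmethod
-
-class HelpdeskTicket(ABC):
-    @abstractmethod
-    def add_note(self, note):
-        """Send a note to the user associated with this ticket."""
-
-class Helpdesk(ABC):
-    @abstractmethod
-    def create_ticket(self, capability_request):
-        """Create a ticket associated with the given capability request."""
-
-    @abstractmethod
-    def delete_ticket(self, ticket):
-        """Delete the ticket, per SRDP-L1-6.11."""
-#+END_SRC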
-
-*** External component: Kayako
-
-CON-5 specifies that communication between users and staff should be facilitated by the Kayako science helpdesk. The rationale here is that using the Helpdesk to moderate communication with the user avoids the cost of a roll-your-own solution.
-
-*** Design Decision: KayakoHelpdesk implements Helpdesk
-
-This is a proxy to the Kayako science helpdesk.
-
-*** Design Decision: KayakoTicket implements HelpdeskTicket
-
-This is a proxy to a specific ticket in the Kayako system.
-
-** Sketch Views and Record Design Decisions
-
-The block definition for this section shows the relationship between the steps as implementors of an interface, the sequence that is their composition, and the engine which understands the interface.
-
-[[./images/sequence-structure.png]]
-
-Here's an activity view of the processing of the sequence steps:
-
-[[./images/sequence-processing.png]]
-
-This diagram illustrates the use of the helpdesk to mediate communication between staff and users. 
-
-[[./images/image11.png]]
-
-* <<Iteration 11>>Notification Service
-** Review Inputs
-
-- Design Purpose :: To elaborate how notifications are sent
-- Primary Functional Requirements :: SRDP-L1-6.6, SRDP-L1-6.6.2, SRDP-L1-6.6.3
-- ASRs :: N/A
-- QAs :: N/A
-- Constraints :: N/A
-- Concerns :: N/A
-
-** Establish Iteration Goals and Select Drivers
-
-   The goal of this iteration is to describe how notifications will be handled in the workspace system.
-
-** Choose System Element(s) to Refine
-
-   The element to refine is the capability system.
-
-** Choose Design Concept(s) that Satisfy Drivers
-
-- Design Decision :: Explicit Interface
-- Rationale :: Providing interfaces for sending messages will protect notification senders from having to know the details of how messages are delivered
-- Design Decision :: Singleton
-- Rationale :: A global notification instance will make it simple for any notification client to send a message without worrying about dependency resolution
-- Design Decision :: NotificationForwarder
-- Rationale :: Some notifications will have to be sent via certain transports. Those transports can register for notifications and forward them to the user over the transport mechanism they understand.
-
-** Instantiate Architecture Elements, Allocate Responsibilities, Define Interfaces
-
-*** Interface: Notifier
-
-| Method              | Arguments                      | Rationale                                                                                                   |
-|---------------------+--------------------------------+-------------------------------------------------------------------------------------------------------------|
-| sendNotification    | user, message, level, priority | Allow any subsystem to send a notification to a user with a specified message and level (info, warn, error) |
-| registerRecipient   | Recipient                      | Set up Notifier to send notifications to Recipient                                                          |
-| unregisterRecipient | Recipient                      | Parallel to registerRecipient                                                                               |
-
-*** Interface: Recipient
-
-| Method              | Arguments                      | Rationale                |
-|---------------------+--------------------------------+--------------------------|
-| receiveNotification | user, message, level, priority | Mirrors sendNotification |
-
-*** Singleton Object: ArchiveNotifier
-
-*Rationale*: provides the Notifier interface to the rest of the system. By making it a singleton, we can be sure that any subsystem that needs to send notifications can do so.
-
-*** Domain Object: NotificationForwarder
-
-*Rationale*: Rather than sprinkling a distinction between emails and other kinds of notifications throughout the codebase, we can have notification forwarders that capture certain notifications and recast them over other transports, such as email.
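-
-As a rough sketch only, the singleton and a forwarder could look like the following in Python; names beyond those in the tables above are illustrative, and the ~print~ call stands in for a real email transport.
-
-#+BEGIN_SRC python
-# Sketch of the Notifier/Recipient interfaces, the singleton ArchiveNotifier,
-# and an email NotificationForwarder.
-from abc import ABC, abstractmethod
-
-
-class Recipient(ABC):
-    @abstractmethod
-    def receive_notification(self, user, message, level, priority):
-        """Mirrors send_notification."""
-
-
-class ArchiveNotifier:
-    """Singleton providing the Notifier interface to the rest of the system."""
-
-    _instance = None
-
-    def __new__(cls):
-        # Classic singleton: every call to ArchiveNotifier() returns the same object.
-        if cls._instance is None:
-            cls._instance = super().__new__(cls)
-            cls._instance._recipients = []
-        return cls._instance
-
-    def register_recipient(self, recipient: Recipient):
-        self._recipients.append(recipient)
-
-    def unregister_recipient(self, recipient: Recipient):
-        self._recipients.remove(recipient)
-
-    def send_notification(self, user, message, level="info", priority="normal"):
-        for recipient in self._recipients:
-            recipient.receive_notification(user, message, level, priority)
-
-
-class EmailForwarder(Recipient):
-    """A NotificationForwarder that recasts notifications over email."""
-
-    def receive_notification(self, user, message, level, priority):
-        # Stand-in for a real email transport.
-        print(f"EMAIL to {user}: [{level}/{priority}] {message}")
-
-
-ArchiveNotifier().register_recipient(EmailForwarder())
-ArchiveNotifier().send_notification("someone@example.org", "Your calibration is ready")
-#+END_SRC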
-
-** TODO Sketch Views and Record Design Decisions
-
-** Analyze Current Design, Review Iteration Goal
-
-* <<Iteration 12>>Large Projects and Project Settings
-** Review Inputs
-
-- Design Purpose :: Establish large-project specific functionality within the capability system
-- Primary Functional Requirements :: SRDP-L1-6.7, SRDP-L1-6.7.1, SRDP-L1-8.10.1
-- ASRs :: ASR-2, ASR-3
-- QAs :: N/A
-- Constraints :: CON-2
-- Concerns :: N/A
-
-** Establish Iteration Goals and Select Drivers
-
-   The goal of this iteration is to discharge large project responsibilities to a component within the capability system.
-
-** Choose System Element(s) to Refine
-
-   The element to refine is the capability service.
-
-** Choose Design Concept(s) that Satisfy Drivers
-
-- Design Decision :: Domain object: Project Settings
-- Rationale :: Large projects will require their own special settings. Other components in the system will have to be parameterized by these settings
-- Design Decision :: Chain of Responsibility pattern
-- Rationale :: Overrides should be sought in the project settings that pertain to the chosen project, but if there is no project setting for this project, there should be a default settings object that handles requests.
-
-** Instantiate Architecture Elements, Allocate Responsibilities, Define Interfaces
-
-*** Domain Object: Project Settings
-
-*Rationale*: the project settings object will collect settings that pertain to certain projects. 
-
-| Method                  | Arguments | Rationale                                                                  |
-|-------------------------+-----------+----------------------------------------------------------------------------|
-| get capability override |           | Allows standard capabilities to be customized or overridden by custom ones |
-| get QA users            |           | Allows large projects to nominate users to perform QA                      |
-| get custom capabilities |           | Allows large projects to define custom capabilities                        |
-
-*** Domain Object: Default Settings
-
-*Rationale*: the default settings object adheres to the same API as the project settings, but will handle requests on behalf of projects that do not have their own settings. This is likely to be the normal case. 
-
-As each telescope has different data analysts, it is likely that there will need to be a default settings object per telescope.
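-
-A minimal, in-memory sketch of the chain (field names are assumed for illustration): a project's settings answer what they can and defer everything else to the per-telescope defaults.
-
-#+BEGIN_SRC python
-# Chain of Responsibility between per-project settings and per-telescope defaults.
-class DefaultSettings:
-    """End of the chain: answers for projects that have no settings of their own."""
-
-    def __init__(self, telescope, qa_users):
-        self.telescope = telescope
-        self.qa_users = qa_users
-
-    def get_capability_override(self, capability):
-        return None            # standard capabilities are used unmodified
-
-    def get_qa_users(self):
-        return self.qa_users   # the telescope's normal data analysts
-
-    def get_custom_capabilities(self):
-        return []
-
-
-class ProjectSettings:
-    """Per-project settings; any request it cannot answer is passed down the chain."""
-
-    def __init__(self, project, next_handler, overrides=None, qa_users=None,
-                 custom_capabilities=None):
-        self.project = project
-        self.next_handler = next_handler
-        self.overrides = overrides or {}
-        self.project_qa_users = qa_users or []
-        self.custom_capabilities = custom_capabilities or []
-
-    def get_capability_override(self, capability):
-        if capability in self.overrides:
-            return self.overrides[capability]
-        return self.next_handler.get_capability_override(capability)
-
-    def get_qa_users(self):
-        return self.project_qa_users or self.next_handler.get_qa_users()
-
-    def get_custom_capabilities(self):
-        return self.custom_capabilities + self.next_handler.get_custom_capabilities()
-
-
-evla_defaults = DefaultSettings("EVLA", qa_users=["da1", "da2"])
-vlass = ProjectSettings("VLASS", evla_defaults,
-                        overrides={"std_calibration": "vlass_calibration"},
-                        qa_users=["vlass_qa"])
-#+END_SRC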
-
-*** Responsibilities
-
-As requests come in and are handled, if they pertain to large projects, a large project object will have to be consulted for capability overrides. When QA steps are executed, capability requests will have to be assigned to the large project's QA users instead of the defaults. And when a user chooses data to request processing against, if the data belong to a large project, that large project's custom capability options will have to be shown to the user.
-
-** TODO Sketch Views and Record Design Decisions
-
-
-** Analyze Current Design, Review Iteration Goal
-
-A consequence of this design is that data associated with multiple projects will be harder to reason about. If data belong to multiple projects and one of them is a large project, do that project's options apply or not? Similarly, what happens if there are two or more large projects? At the moment, all data are ultimately traced to a single project, so we can ignore this problem.
-
-* <<Iteration 13>>Data Retention and Scheduling
-** Review Inputs
-
-- Design Purpose :: Introduce data retention policy and removal
-- Primary Functional Requirements :: SRDP-L1-6.9, SRDP-L1-6.9.1
-- ASRs :: N/A
-- QAs :: N/A
-- Constraints :: N/A
-- Concerns :: N/A
-
-** Establish Iteration Goals and Select Drivers
-
-   The goal of this iteration is to refine the capability system's data retention system.
-
-** Choose System Element(s) to Refine
-
-   The element to refine is the capability system.
-
-** Choose Design Concept(s) that Satisfy Drivers
-
-- Design Decision :: Domain Object
-- Rationale :: Introduce domain objects for schedules and scheduled tasks
-- Design Decision :: Domain Object
-- Rationale :: Introduce workflows for cleanup tasks
-
-** Instantiate Architecture Elements, Allocate Responsibilities, Define Interfaces
-
-*** Domain Object: Schedule
-
-*Rationale*: Schedule will periodically execute its scheduled tasks (here, workflows) per its schedule.
-
-| Method                  | Arguments       | Rationale                                                        |
-|-------------------------+-----------------+------------------------------------------------------------------|
-| add workflow            | cycle, workflow | Executes ~workflow~ according to schedule ~cycle~                |
-| remove workflow         | workflow        | Removes ~workflow~ from the schedule                             |
-| get executing workflows |                 | Allow viewing of current execution status of scheduled workflows |
-
-By tracking the workflows, we will be able to see when a workflow was last executed and what that execution's state is.
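-
-A small sketch of the Schedule's interface, assuming an in-memory implementation driven by a periodic ~tick~; a real implementation would persist its entries and execution state.
-
-#+BEGIN_SRC python
-# Sketch of the Schedule domain object from the table above.
-from datetime import datetime, timedelta
-
-
-class Schedule:
-    def __init__(self):
-        self._entries = []   # each entry: {"cycle", "workflow", "last_run"}
-
-    def add_workflow(self, cycle: timedelta, workflow):
-        self._entries.append({"cycle": cycle, "workflow": workflow, "last_run": None})
-
-    def remove_workflow(self, workflow):
-        self._entries = [e for e in self._entries if e["workflow"] is not workflow]
-
-    def get_executing_workflows(self):
-        # Expose when each scheduled workflow last ran (a fuller implementation
-        # would also expose the state of that execution).
-        return [(e["workflow"], e["last_run"]) for e in self._entries]
-
-    def tick(self, now=None):
-        """Run any workflow whose cycle has elapsed since its last execution."""
-        now = now or datetime.now()
-        for entry in self._entries:
-            due = entry["last_run"] is None or now - entry["last_run"] >= entry["cycle"]
-            if due:
-                entry["workflow"]()   # e.g. the clean-up warning workflow below
-                entry["last_run"] = now
-
-
-schedule = Schedule()
-schedule.add_workflow(timedelta(days=1), lambda: print("cleanup warning sent"))
-schedule.tick()
-#+END_SRC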
-
-*** Domain Object: Clean-up Warning Workflow
-
-*Rationale*: Send a notification that cleanup will be performed in so many days, per SRDP-L1-6.9.1. Scheduled to run daily.
-
-*** Domain Object: Automatic Cleanup workflow
-
-*Rationale*: Remove data from the temporary storage area. Scheduled to run daily.
-
-** TODO Sketch Views and Record Design Decisions
-** Analyze Current Design, Review Iteration Goal
-
-Normally this kind of thing would be handled by cron. In fact, in the existing workflow system, it is handled by cron. This has the downside that the system is not able to track the workflow executions that are occurring, and cron files must be installed manually when the system is installed.
-
-This system can also subsume the current system that periodically reindexes Solr using cron.
-
-* <<Iteration 14>>CASA Versioning and Capability Matrix
-** Review Inputs
-
-- Design Purpose :: Address CASA version requirements
-- Primary Functional Requirements :: SRDP-L0-11.2
-- ASRs :: ASR-3, ASR-4
-- QAs :: QA-1, QA-3
-- Constraints :: CON-4
-- Concerns :: N/A
-
-** Establish Iteration Goals and Select Drivers
-
-   The goal of this iteration is to ensure that software versions, especially CASA versions, are tracked and that per-version customizations are available and furnished to the software.
-
-** Choose System Element(s) to Refine
-
-   The element to refine is the Capability Info.
-
-** Choose Design Concept(s) that Satisfy Drivers
-
-- Design Decision :: Domain Object
-- Rationale :: Introducing several domain objects that pertain directly to CASA versions and tracking versions through the capability system.
-
-** Instantiate Architecture Elements, Allocate Responsibilities, Define Interfaces
-
-*** Domain Object: Capability Matrix
-
-*Rationale*: the capability matrix defines custom templates per capability and CASA version.
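-
-As an illustration only, a lookup on the matrix might consult the project settings from [[Iteration 12]] first, then the per-CASA-version entry, then a capability default; all names in this sketch are assumptions.
-
-#+BEGIN_SRC python
-# Illustrative lookup order: project override, then (capability, CASA version)
-# entry, then the capability's default template.
-class CapabilityMatrix:
-    def __init__(self, templates, default_templates):
-        self._templates = templates          # {(capability, casa_version): template}
-        self._defaults = default_templates   # {capability: template}
-
-    def lookup(self, capability, casa_version, project_settings=None):
-        if project_settings is not None:
-            override = project_settings.get_capability_override(capability)
-            if override is not None:
-                return override
-        return self._templates.get((capability, casa_version),
-                                   self._defaults[capability])
-#+END_SRC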
-
-*** Responsibility: Provenance Information
-
-Engaging the capability matrix to discharge responsibility for per-CASA functionality raises the question of how the CASA version and associated differences will be addressed in terms of provenance information.
-
-All templates that are rendered become part of the files on disk at the end of a capability run, so whatever customizations have occurred will be reflected in those files at the end of the capability execution. It then becomes the ingestion system's responsibility to receive these files and ingest them where appropriate. The links in the provenance chain are provided on disk; establishing provenance in the archive database is an archive concern.
-
-** Sketch Views and Record Design Decisions
-
-Here is a view of how templates are looked up, taking into account the [[Iteration 12][Iteration 12]] introduction of project settings.
-
-[[./images/look-up-templates.png]]
-
-** Analyze Current Design, Review Iteration Goal
-
-This design introduces a significant new object containing a fair amount of complexity. It is hidden behind the Capability Info interface so as not to introduce undue complexity into the rest of the system. It is believed that this will provide the level of per-CASA customization that is needed for this project. 
-
-* <<Iteration 15>>Future Product Locators
-** Review Inputs
-
-- Design Purpose :: Work out how future products and standard processing requests will work
-- Primary Functional Requirements :: SRDP-L1-6.4, SRDP-L1-6.5, SRDP-L1-6.10, SRDP-L1-13
-- ASRs :: ASR-2
-- QAs :: N/A
-- Constraints :: N/A
-- Concerns :: N/A
-
-** Establish Iteration Goals and Select Drivers
-
-   The goal of this iteration is to explain how future product locators work, how they integrate with product locators, and how the expectation of processing will be recorded.
-
-** Choose System Element(s) to Refine
-
-   The element to refine is the capability system.
-
-** Choose Design Concept(s) that Satisfy Drivers
-
-- Design Decision :: Domain Objects
-- Rationale :: Domain objects for future products will enable us to reason about products that are not yet in the system as well as the results of processing that has not yet begun.
-
-** Instantiate Architecture Elements, Allocate Responsibilities, Define Interfaces
-
-The following objects are prototyped in the file [[../docs/FutureProductLocators.hs]], from which I include a snippet here:
-
-
-#+BEGIN_SRC haskell
-data ProductType = Execblock | Calibration | Image
-  deriving (Show, Eq)
-
-data FutureProductType = FutureExecblock ProposalId SessionNumber
-                       | FutureProduct ProductType Product
-                       deriving (Show, Eq)
-
-data Product = CurrentArchiveProduct ProductLocator
-             | FutureArchiveProduct FutureProductType
-             | Product `And` Product
-             deriving (Show, Eq)
-
-isReady (CurrentArchiveProduct _) = True
-isReady (FutureArchiveProduct _)  = False
-isReady (p1 `And` p2) = isReady p1 && isReady p2
-
-resolve p@(CurrentArchiveProduct _) = p
-resolve (FutureArchiveProduct fp)   = undefined -- look up the future product
-resolve (p1 `And` p2)               = resolve p1 `And` resolve p2
-#+END_SRC
-
-*** Domain Object: Product
-
-*Rationale*: The Product is the set that contains both current and future products and protects the rest of the system from having to understand much about the distinction between them.
-
-| Method  | Rationale                                                         |
-|---------+-------------------------------------------------------------------|
-| isReady | True if the product exists and is ready for use                   |
-| resolve | If the product isReady, converts it to a product that can be used |
-
-*** Domain Object: CurrentArchiveProduct extends Product
-
-*Rationale*: The simplest implementor of Product, CurrentArchiveProducts are products that currently exist in the archive and are identified by a science product locator.
-
-| Method         | Rationale                                               |
-|----------------+---------------------------------------------------------|
-| isReady        | True                                                    |
-| resolve        | self                                                    |
-| productLocator | Access the product locator that this product represents |
-
-*** Domain Object: FutureProduct extends Product
-
-*Rationale*: FutureProducts represent processing that is pending, either because the process is still going on or because the necessary products do not yet exist.
-
-| Method  | Rationale                                                                      |
-|---------+--------------------------------------------------------------------------------|
-| isReady | True if the pending processing has completed                                   |
-| resolve | CurrentArchiveProduct representing the result of the processing, if it isReady |
-
-*** Domain Object: FutureArchiveProduct extends FutureProduct
-
-*Rationale*: There are two kinds of future product. ~FutureArchiveProduct~ represents the base case of the recursion, a hard reference to an expected observation which does not yet exist in the archive. These represent a specific kind of anticipated archive product, whatever the most basic kind is for a given instrument. They are ready when the appropriate archive products are ingested and they resolve to CurrentArchiveProducts.
-
-*** Domain Object: FutureCapabilityResult extends FutureProduct
-
-*Rationale*: The second kind of future product represents the inductive case. FutureCapabilityResult are actually references to the output of capability requests. They are blocked until their capability requests complete, whereupon they resolve to CurrentArchiveProducts.
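-
-The same hierarchy rendered as a Python sketch, mirroring the tables above; the ~archive~ and ~capability_request~ collaborators are placeholders for queries that are not designed here.
-
-#+BEGIN_SRC python
-# Illustrative Python rendering of the Product hierarchy.
-class Product:
-    """Common interface: is_ready and resolve, per the tables above."""
-
-    def is_ready(self) -> bool:
-        raise NotImplementedError
-
-    def resolve(self) -> "Product":
-        raise NotImplementedError
-
-
-class CurrentArchiveProduct(Product):
-    def __init__(self, product_locator):
-        self.product_locator = product_locator
-
-    def is_ready(self):
-        return True
-
-    def resolve(self):
-        return self
-
-
-class FutureArchiveProduct(Product):
-    """Base case: an expected observation that is not yet in the archive."""
-
-    def __init__(self, proposal_id, session_number, archive):
-        self.proposal_id, self.session_number = proposal_id, session_number
-        self.archive = archive     # placeholder for an archive query interface
-
-    def is_ready(self):
-        return self.archive.has_products(self.proposal_id, self.session_number)
-
-    def resolve(self):
-        locator = self.archive.locate(self.proposal_id, self.session_number)
-        return CurrentArchiveProduct(locator)
-
-
-class FutureCapabilityResult(Product):
-    """Inductive case: a reference to the output of another capability request."""
-
-    def __init__(self, capability_request):
-        self.capability_request = capability_request
-
-    def is_ready(self):
-        return self.capability_request.is_complete()
-
-    def resolve(self):
-        return CurrentArchiveProduct(self.capability_request.result_locator())
-#+END_SRC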
-
-** Sketch Views and Record Design Decisions
-
-[[./images/future-products.png]]
-
-** Analyze Current Design, Review Iteration Goal
-
-This was prototyped earlier in the process but deserved to be documented as an iteration.
-
-The key idea here is that products which exist and provisional products which do not yet exist must both be suitable subjects for a capability request. This motivates the capability step that waits for products which are not yet realized. The ability to base a request on an earlier request will be needed for standard imaging, so FutureCapabilityResult gives us the possibility of unlimited capability composition.
-
-* TODO <<Iteration 16>>Prepare and Run Workflow
-** Review Inputs
-
-- Design Purpose :: Explain how Prepare and Run Workflow works
-- Primary Functional Requirements :: N/A
-- ASRs :: N/A
-- QAs :: N/A
-- Constraints :: N/A
-- Concerns :: N/A
-
-** Establish Iteration Goals and Select Drivers
-
-   The goal of this iteration is to develop the detailed design for how workflows are executed by capabilities.
-
-** Choose System Element(s) to Refine
-
-   The element to refine is the Prepare and Run Workflow capability step.
-
-** Choose Design Concept(s) that Satisfy Drivers
-
-- Design Decision :: 
-- Rationale :: 
-
-** Instantiate Architecture Elements, Allocate Responsibilities, Define Interfaces
-** Sketch Views and Record Design Decisions
-** Analyze Current Design, Review Iteration Goal
-
-* <<Iteration 17>>Request, Version and Execution
-** Review Inputs
-
-- Design Purpose :: Enable refinement of requests and resubmission of failed executions
-- Primary Functional Requirements :: SRDP-L1-6.16, SRDP-L1-6.16.1, SRDP-L1-6.16.2, SRDP-L1-5.3, SRDP-L1-6.1
-- ASRs :: N/A
-- QAs :: N/A
-- Constraints :: N/A
-- Concerns :: N/A
-
-** Establish Iteration Goals and Select Drivers
-
-   The goal of this iteration is to make edit-resubmit possible for capability requests, as well as re-execution for failed executions.
-
-** Choose System Element(s) to Refine
-
-   The element to refine is the capability service.
-
-** Choose Design Concept(s) that Satisfy Drivers
-
-- Design Decision :: Domain object
-- Rationale :: Introduce request version and execution objects to track edits
-- Design Decision :: Façade Pattern
-- Rationale :: Hide the complexity of the interplay between version and execution behind the request itself
-
-** Instantiate Architecture Elements, Allocate Responsibilities, Define Interfaces
-
-*** Domain Object: Request Version
-
-*Rationale*: The request version holds the parameter selections from the user and a list of executions. The parameter selections are only editable when there is no active execution. Editing the parameter selections creates a new request version. The UI is informed of the previous version's parameters, which serve in place of the defaults.
-
-*** Domain Object: Capability Execution
-
-*Rationale*: Each round through the capability sequence is recorded in a capability execution. The capability execution holds the execution state. Only if there is no active execution on a request can a new execution be started.
-
-*** State Model: Requests and Executions
-
-Requests have the following states:
-
-- Created / Ready to Submit
-- Submitted / Awaiting execution
-- Executing
-- Complete
-
-Every request has at least one version. Only one version may be active at a time. When a version is created, an execution is also created and dispatched. A version is read-only while it has an active execution. Therefore, versions only have a locked/unlocked state.
-
-Additional information about the request state is obtained from the execution, which exists in one of these states:
-
-- Created / Ready to Execute
-- Executing Step
-- Complete
-
-*** Façade: Request Status
-
-A human-friendly /request status/ is created by showing the name of the current version's currently-running execution's current step, if there is one, or else the state of the request. Thus, a request status might appear to transition from Ready to Awaiting Product to Executing Workflow to Awaiting QA to Complete, without having all of these as discrete states of the request.
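-
-A compact sketch of the façade, with illustrative class and attribute names:
-
-#+BEGIN_SRC python
-# Request status façade: show the active execution's current step when there is
-# one, otherwise fall back to the request's own state.
-class Execution:
-    """One round through the capability step sequence."""
-
-    def __init__(self, steps):
-        self.steps = steps
-        self.current_index = 0            # advanced as steps complete
-
-    @property
-    def complete(self):
-        return self.current_index >= len(self.steps)
-
-    @property
-    def current_step(self):
-        return None if self.complete else self.steps[self.current_index]
-
-
-class Version:
-    """Parameter selections plus the executions run against them."""
-
-    def __init__(self, parameters):
-        self.parameters = parameters
-        self.executions = []
-
-    @property
-    def active_execution(self):
-        running = [e for e in self.executions if not e.complete]
-        return running[0] if running else None
-
-    @property
-    def locked(self):
-        return self.active_execution is not None   # read-only while executing
-
-
-class Request:
-    def __init__(self):
-        self.state = "Created"
-        self.versions = []
-
-    @property
-    def status(self):
-        """The active execution's current step if there is one, else the request state."""
-        if self.versions:
-            execution = self.versions[-1].active_execution
-            if execution is not None and execution.current_step is not None:
-                return execution.current_step
-        return self.state
-#+END_SRC
-
-So a request whose active execution is sitting at an Awaiting QA step reports that as its status, without the request itself needing an Awaiting QA state.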
-
-** Sketch Views and Record Design Decisions
-
-The relationships between requests, versions and executions are shown here:
-[[./images/request-execution.png]]
-
-The states of a request are:
-[[./images/request-states.png]]
-
-The states of an execution are:
-[[./images/execution-states.png]]
-** Analyze Current Design, Review Iteration Goal
-
-We discovered something gross about the state model early in the design process: waiting for products, user input, and QA led to a very complex state model. Additionally, not every request requires QA or waiting for products, so there were a large number of "bypassing" transitions in the state model. We sought a simplifying design.
-
-With capability steps, the capability state is made largely a pointer to the current step. This simplifies the state model considerably. The remaining complexity comes from the request/execution split and the desire to show the user something analogous to request state, but which takes into account re-executions and revisions. Request status handles this.
-
-*** VLASS Ramifications
-
-This design is similar to the design of the VLASS manager, in which there are versions and "jobs" (instead of executions). However, we have introduced constraints that VLASS manager lacks:
-
-- You may only create a new execution if the previous execution is complete
-- You may not edit the parameters without creating a new version
-
-These changes will reduce some of the potential for confusion that exists in the manager today.
-
-*** Discussion of Alternate Designs
-
-**** Philosophy
-
-There is a conflict here between a Platonic and an Aristotelian view of what the referent of a capability request is. In the Platonic view, out in concept-space somewhere exists “the product I want,” and if it takes twelve tries with different parameters to locate it in reality, fine. In the Aristotelian view, the referent is “whatever came out of me running this capability.”
-
-The Platonic view makes a lot of sense for VLASS, where they know they have X tiles and there should be one perfect image of each tile. It also makes sense for things like standard calibrations and standard images, where you could say “this is *the* calibration for this observation.” It makes less sense for users, who will have asked for a capability execution and received "for free" a version and an execution they did not ask for.
-
-We have decided that it is more important to equate capability requests with “canonical products” to simplify reasoning about VLASS and standard calibrations. The additional argument is that all of the work associated with getting a user a certain scientific product will be grouped together, no matter how many tries it took.
-
-**** Technical Approaches
-
-Knowing we must support versioning or history of some kind produces two candidate designs: one in which requests have versions that have executions, and another in which requests have executions, but may have new or old requests "hanging off of them," representing a linked-list of newer or older requests.
-
-In general, processing linked list representations in a database is painful and best avoided whenever not strictly necessary, and an argument could be made that having versions collated together may make future changes easier, although there is no requirement for such a thing at this time.
-
-* TODO <<Iteration 18>>Quality Assurance
-** Review Inputs
-
-- Design Purpose :: N/A
-- Primary Functional Requirements :: N/A
-- ASRs :: N/A
-- QAs :: N/A
-- Constraints :: N/A
-- Concerns :: N/A
-
-** Establish Iteration Goals and Select Drivers
-
-   The goal of this iteration is 
-
-** Choose System Element(s) to Refine
-
-   The element to refine is 
-
-** Choose Design Concept(s) that Satisfy Drivers
-
-- Design Decision :: 
-- Rationale :: 
-
-** Instantiate Architecture Elements, Allocate Responsibilities, Define Interfaces
-** Sketch Views and Record Design Decisions
-** Analyze Current Design, Review Iteration Goal
-
-* <<Iteration 19>>Capability Typing and Parameters
-** Review Inputs
-
-- Design Purpose :: Establish a perspective on capabilities as akin to functions
-- Primary Functional Requirements :: N/A
-- ASRs :: N/A
-- QAs :: N/A
-- Constraints :: N/A
-- Concerns :: N/A
-
-** Establish Iteration Goals and Select Drivers
-
-   The goal of this iteration is to clarify the function-like role that capabilities play in the system by defining their type system.
-
-** Choose System Element(s) to Refine
-
-   The element to refine is the capability.
-
-** Choose Design Concept(s) that Satisfy Drivers
-
-- Design Decision :: Domain Object
-- Rationale :: Objects representing product types will form the set of types that capabilities can have.
-- Design Decision :: Explicit Interface
-- Rationale :: By treating capabilities as function-like, we gain a composition operator we need for SRDP-L1-13.
-
-** Instantiate Architecture Elements, Allocate Responsibilities, Define Interfaces
-
-*** Product Types
-
-The archive has an explicit set of types for its products. This set is enumerated in the software and in the ~science_product_types~ table, but currently consists of:
-
-- Execution Block
-- Calibration
-- Catalog
-- Image
-
-"Execution Block" is a misnomer, since for ALMA observations it will be an OUS or an ASDM. This is a technical issue that can be straightened out during implementation.
-
-*** Capability extends Function
-
-The decision here is to treat capabilities as being akin to functions. Depending on how they are implemented, there may be an explicit interface to fulfill, such as ~Function~ for Java, or perhaps by implementing ~__call__~ in Python. On the other hand, capabilities are highly asynchronous, so the analogy may not be meaningful enough at the level of implementation details to warrant it.
-
-*** Capability Types
-
-We can regard each capability as having a single, simple type from one product type to another:
-
- - Standard Calibration ∷ Execution Block → Calibration
- - Standard Imaging ∷ Calibration → Image 
-
-This gives us a simple composition operator of running one capability on the results of another. Requests built on earlier requests behave much like function composition; (Standard Imaging ∘ Standard Calibration) ∷ Execution Block → Image.
-
-At the moment we have no plans to expand this typing or allow multiple parameter types.
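-
-A sketch of this typing and the composition operator, under the assumption that a capability can be modeled simply by its input and output product types:
-
-#+BEGIN_SRC python
-# Conceptual model of capability typing and composition; this is not how
-# execution is implemented, only how the types relate.
-from enum import Enum, auto
-
-
-class ProductType(Enum):
-    EXECUTION_BLOCK = auto()
-    CALIBRATION = auto()
-    CATALOG = auto()
-    IMAGE = auto()
-
-
-class Capability:
-    def __init__(self, name, input_type: ProductType, output_type: ProductType):
-        self.name, self.input_type, self.output_type = name, input_type, output_type
-
-    def compose(self, other: "Capability") -> "Capability":
-        """self ∘ other: run ~other~ first, then ~self~ on its result."""
-        if other.output_type is not self.input_type:
-            raise TypeError(f"{self.name} cannot consume {other.output_type.name}")
-        return Capability(f"{self.name} ∘ {other.name}", other.input_type, self.output_type)
-
-
-standard_calibration = Capability("Standard Calibration",
-                                  ProductType.EXECUTION_BLOCK, ProductType.CALIBRATION)
-standard_imaging = Capability("Standard Imaging",
-                              ProductType.CALIBRATION, ProductType.IMAGE)
-
-# Standard Imaging ∘ Standard Calibration ∷ Execution Block → Image
-cal_and_image = standard_imaging.compose(standard_calibration)
-#+END_SRC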
-
-** TODO Sketch Views and Record Design Decisions
-
-I'm unsure what use it would be to make a diagram of this.
-
-** Analyze Current Design, Review Iteration Goal
-
-This iteration clarified how capabilities resemble functions and, as a result, how they are typed.
-
-
-* TODO <<Iteration 20>>(DRAFT) Automatic Capability Request Creation
-** Review Inputs
-
-- Design Purpose :: Handle automatic creation of capability requests for certain projects
-- Primary Functional Requirements :: N/A
-- ASRs :: N/A
-- QAs :: N/A
-- Constraints :: N/A
-- Concerns :: N/A
-
-** Establish Iteration Goals and Select Drivers
-
-   The goal of this iteration is 
-
-** Choose System Element(s) to Refine
-
-   The element to refine is the project settings defined in [[Iteration 12]].
-
-** Choose Design Concept(s) that Satisfy Drivers
-
-- Design Decision :: 
-- Rationale :: 
-
-** Instantiate Architecture Elements, Allocate Responsibilities, Define Interfaces
-** Sketch Views and Record Design Decisions
-** Analyze Current Design, Review Iteration Goal
-
-The motivating problems here are:
-
- - Triggering CIPL (standard calibration, and eventually, standard imaging)
- - Triggering post-processing on the stress test (TCAL0003)
- - Special-casing of VLASS in amygdala so that CIPL does not run
- - Addressing VLASS's assortment of products and the generation of the requisite capability requests to make them
-
-So there are several issues here:
-
-1. The stress test 
-
-    One way this could be handled would be to utterly redefine its calibration capability to run its own reduction script instead. This ignores the problem that the result type will be a PDF and not a calibration in any real sense, but that probably isn't a real problem anyway.
-
-    Another approach would be to have a type-based dispatch table, something like:
-
-    | if you see <product type>... | generate capability request <type> |
-    |------------------------------+------------------------------------|
-    | execblock                    | calibration                        |
-    | calibration                  | image                              |
-
-    This seems like a nice solution, but we have an issue with...
-2. Showing both standard calibrations and images
-
-    The complication is showing them before the observation is complete. In the interim, while we wait for TTA tools rewrites, do we expect they'll want to see the calibration /and/ image as soon as we realize an observation has completed? If so, it kind of rules out a type-based dispatch lookup table, which would otherwise be a convenient solution to the problem. Then we have VLASS...
-3. VLASS pre-creates products (usually, to some arbitrary depth)
-
-    VLASS processing tends to have an N:M relationship between inputs and outputs. This works fine within the capability request framework (well, mostly fine: depending on N input products is covered above, but producing M outputs would be more of an ingestion responsibility than a capability one), but it would make it hard to generate the right stuff at observation time, because you are just getting a tiny piece of a tile, and you probably don't want to do a bunch of conditional thinking about whether you're making larger images or not.
-
-The fundamental problem here is that we don't have a well-defined entry point to the system as a whole. What is it like to expect an observation to happen, or a calibration to arrive? When we learn that an observation for your project is inbound, how should the system react to that information? Does the workspace system have a responsibility to generate capability requests on behalf of projects, or not?
-
-
-* TODO <<Iteration 21>>(DRAFT) Capability Step Sequence Persistence
-
-** Review Inputs
-
-- Design Purpose :: Decide how to persist capability step sequences
-- Primary Functional Requirements :: N/A
-- ASRs :: N/A
-- QAs :: N/A
-- Constraints :: N/A
-- Concerns :: N/A
-
-** Establish Iteration Goals and Select Drivers
-
-   The goal of this iteration is to determine whether the step sequence should be persisted as distinct records or as a string conforming to a simple grammar.
-
-** Choose System Element(s) to Refine
-
-   The element to refine is the Capability Step Sequence.
-
-** Choose Design Concept(s) that Satisfy Drivers
-
-- Design Decision :: Create simple grammar
-- Rationale :: Enable us to reason formally about the step sequence's complexity and to write a simple parser for it
-
-** Instantiate Architecture Elements, Allocate Responsibilities, Define Interfaces
-
-*** Step Sequence Grammar
-
-#+BEGIN_EXAMPLE
-
-uuid               = ;; defined by RFC4122
-telescope          = "alma" | "evla" | "vlba"     ;; may be extended in the future
-product-locator    = "uid://", telescope, "/", product-type, "/", uuid
-
-proposal-id        = ;; defined by proposal system
-session-id         = ;; defined by proposal system
-
-future-product     = "current(", product-locator, ")"
-                   | "capability-result(", capability-name, ",", future-product, ")"
-                   | "anticipated(", proposal-id, ",", session-id, ")"
-
-capability-name    = string                        ;; yet-to-be-refined
-workflow-name      = string                        ;; yet-to-be-refined
-
-step-sequence      = step, { newline, step }
-
-step               = await-product | await-qa | prepare-execute-wf | await-wf | await-lp
-
-await-product      = "AWAIT PRODUCT ", product-locator
-await-qa           = "AWAIT QA"
-prepare-execute-wf = "PREPARE AND EXECUTE ", workflow-name
-await-wf           = "AWAIT WORKFLOW"
-await-lp           = "AWAIT LARGE PROJECT APPROVAL"
-
-#+END_EXAMPLE
-
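-For illustration, parsing a persisted sequence back into step objects is then only a few lines of Python; the product locator string below is a made-up placeholder.
-
-#+BEGIN_SRC python
-# Minimal sketch of parsing a persisted step sequence, one step per line,
-# following the grammar above.
-def parse_step(line: str):
-    line = line.strip()
-    if line.startswith("PREPARE AND EXECUTE "):
-        return ("prepare-execute-wf", line[len("PREPARE AND EXECUTE "):])
-    if line.startswith("AWAIT PRODUCT "):
-        return ("await-product", line[len("AWAIT PRODUCT "):])
-    if line == "AWAIT QA":
-        return ("await-qa", None)
-    if line == "AWAIT WORKFLOW":
-        return ("await-wf", None)
-    if line == "AWAIT LARGE PROJECT APPROVAL":
-        return ("await-lp", None)
-    raise ValueError(f"unrecognized step: {line!r}")
-
-
-def parse_step_sequence(text: str):
-    return [parse_step(line) for line in text.splitlines() if line.strip()]
-
-
-sequence = parse_step_sequence(
-    "AWAIT PRODUCT uid://evla/execblock/<uuid>\n"
-    "PREPARE AND EXECUTE std_calibration\n"
-    "AWAIT QA")
-#+END_SRC
-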
-** Sketch Views and Record Design Decisions
-** Analyze Current Design, Review Iteration Goal
-
-
-* Appendix A: Roles
-  :PROPERTIES:
-  :UNNUMBERED: t
-  :END:
-
-** Data access
-
-User roles from least to most access:
-
- - Identified archive user :: Anybody can download public data. All that is required is that they provide an email address as a form of identification and as a place for a data-available notification to go.
-
- - Authenticated archive user :: Authenticated users can do everything identified users can do, plus request more expensive forms of processing.
-
-- Principal investigator, co-investigator :: Users who are PIs or Co-Is for VLA projects can access their own proprietary data. Principal investigators on ALMA projects may also access their proprietary data.
-
-- User with designated access to proprietary data :: ALMA users who were delegated access to an OUS may see that OUS and request processing on it.
-
-- Archive operators :: Archive operators may make any processing request against any data regardless of its proprietary status.
- 
-** Capability request handling
-
-- SRDP user :: SRDP users may see their own requests and their own request state.
-
-- Designated analysts for large project data :: Users who were nominated by a large project to perform QA for it may see all processing requests for that project and may perform QA on it.
-
-- Data analyst (DA) :: Data analysts may perform QA on standard calibration and standard imaging for all observations.
-
-- Astronomer on duty (AOD) :: AODs supervise DAs and may override their access or change their decisions.
-
-   There is no plan in this design to explicitly acknowledge the AOD role. It is expected that, in the current scheme of things, the DA will assign requests to the AOD for final approval. We anticipate that leaving the QA workflow general will make it more flexible to future change.
-
-- Archive operators :: Archive operators have unrestricted access to the workspace requests and their states.
-
-** TODO Diagrams here
-
-* Appendix B: Requirements
-  :PROPERTIES:
-  :UNNUMBERED: t
-  :END:
-** <<<SRDP-L1-5.3>>>
-If the user is not satisfied with the product (for whatever reason), they shall have the ability to return to their request or helpdesk ticket through a provided link, modify as necessary and resubmit. A simple mechanism shall be provided to request more assistance through a linked helpdesk ticket mechanism. 
-** <<<SRDP-L1-6.1>>>
-When manual intervention for recalibration is required, the process shall be executed by the operations staff. The staff member shall work with the user to identify and resolve the issue and then resubmits the job for the user. At this point the process will re-enter the standard workflow.
-
-** <<<SRDP-L1-6.2>>>
-The archive interface shall provide status information for the user on each job, links to completed jobs, as well as the weblog for the job.
-
-** <<<SRDP-L1-6.3>>>
-Batch submission of jobs shall be throttled to prevent overwhelming processing resources.
-
-** <<<SRDP-L1-6.4>>>
-The standard imaging process shall automatically be triggered for observations supported by SRDP once the standard calibration has passed quality assurance.
-
-** <<<SRDP-L1-6.5>>>
-When the single epoch calibration and imaging for all configurations are complete, the data from all configurations shall be imaged jointly.
-
-** <<<SRDP-L1-6.6>>>
-The Time Critical flag shall persist throughout the lifecycle of the project and be made available to the data processing subsystems.
-
-** <<<SRDP-L1-6.6.1>>>
-Processing of time critical proposals shall begin as soon as data is available.
-
-** <<<SRDP-L1-6.6.2>>>
-The workflow manager shall notify the PI immediately when calibration or imaging products are available, with specific notice that the products have not been quality assured.
-
-** <<<SRDP-L1-6.6.3>>>
-In cases of reduction failure, a high priority notification to operations shall be made so that appropriate manual mitigation can be done. Note that this may occur outside of normal business hours.
-
-** <<<SRDP-L1-6.7>>>
-Large Project processing shall allow use of custom or modified pipelines to process the data and the project team shall be directly involved in the quality assurance process.
-
-** <<<SRDP-L1-6.7.1>>>
-The SRDP system shall allow use of NRAO computing resources for the processing of the large project data provided that required computing resources does not exceed the available resources (including prior commitments).
-
-** <<<SRDP-L1-6.8>>>
-Once a job is created on archived data, the archive interface shall provide the user an option to modify the input parameters and review the job prior to submission to the processing queue.
-
-** <<<SRDP-L1-6.9>>>
-Results from reprocessing archive data are temporary and the automated system shall have the ability to automatically enforce the data retention policy.
-
-** <<<SRDP-L1-6.9.1>>>
-Warnings shall be issued to the user 10 and three days prior to data removal.
-
-** <<<SRDP-L1-6.10>>>
-The workflow system shall automatically start the execution of standard calibration jobs.
-
-** <<<SRDP-L1-6.10.1>>>
-It shall be possible for a user to inhibit the automatic creation of calibration jobs.  For instance after a move, prior to new antenna positions being available.
-
-** <<<SRDP-L1-6.11>>>
-The user shall be able to cancel jobs and remove all associated helpdesk tickets.
-
-** <<<SRDP-L1-6.12>>>
-The user shall be provided an estimate of the total latency in product creation.
-
-** <<<SRDP-L1-6.13>>>
-The workspace system shall provide interfaces to allow review and control of the activities in the workspace.
-
-** <<<SRDP-L1-6.13.1>>>
-An interface that allows users to interact with their active and historical processing requests shall be provided.
-
-** <<<SRDP-L1-6.13.2>>>
-An interface providing internal overview and control of all existing workspace activities and their state for use by internal operational staff.
-
-** <<<SRDP-L1-6.14>>>
-The system shall authenticate the user and verify authorization prior to creation of a workspace request.
-
-** <<<SRDP-L1-6.15>>>
-The workspace system shall support the optional submission of jobs to open science grid through the high throughput condor system.
-
-** <<<SRDP-L1-8>>>
-Every product shall be assessed for quality, and those products for which the initial calibration are not judged to be of science quality should be identified for further intervention.
-
-** <<<SRDP-L1-8.6>>>
-Workspaces shall permit some categories of processing to be designated as requiring QA.
-
-** <<<SRDP-L1-8.7>>>
-Processing requests that require QA shall have to undergo a human inspection prior to being delivered to the requester or ingested into the archive.
-
-** <<<SRDP-L1-8.8>>>
-There will be a QA interface that will show requests requiring QA and allow designated users to pass/fail requests.
-
-** <<<SRDP-L1-8.8.1>>>
-The QA interface will allow permitted users to revise the parameters of a request and submit new processing.
-
-** <<<SRDP-L1-8.8.2>>> 
-Only the final QA-passed results will be delivered to the requesting user or ingested into the system.
-
-** <<<SRDP-L1-8.9>>>
-The QA interface will facilitate communication between the user performing QA and the user who submitted the processing request.
-
-** <<<SRDP-L1-8.10>>>
-Ops staff will be designated for performing QA on standard calibration and imaging processes, and will be able to reassign to other ops staff.
-
-** <<<SRDP-L1-8.10.1>>>
-Large projects shall be able to designate their own users to perform QA on their processes.
-
-** <<<SRDP-L0-11>>>
-The system shall support a robust and reliable process for the testing, validation, and delivery of capabilities.
-
-** <<<SRDP-L0-11.2>>>
-SRDP workflows shall be executable with candidate versions of the software. The products generated by this software shall not be exposed as SRDP products in the standard data discovery interfaces.
-
-** <<<SRDP-L0-11.3>>>
-It shall be possible to execute portions of the SRDP workflows to optimize testing.
-
-** <<<SRDP-L0-11.4>>>
-It shall be possible to modify the system without losing the current execution state, or in such a way that the state information can be recaptured.
-
-** <<<SRDP-L0-11.5>>>
-The execution environment may need to be modified, for example using a non-standard destination directory to accumulate outputs from a regression testing run.
-
-** <<<SRDP-L1-11>>>
-
-Metrics
-
-** <<<SRDP-L1-11.1>>>
-
-The latency between the completion of the observation and the delivery of products shall be measured.
-
-** <<<SRDP-L1-11.2>>>
-Categories for failure shall be identified and metrics derived in order to allow the Observatory to address common failure modes.
-
-** <<<SRDP-L1-12>>>
-
-Product Specification
-
-** <<<SRDP-L1-12.6>>>
-
-If the requested product is large (either in number of data sets to be processed, or implied processing time), the request shall be flagged for manual review by the SRDP operations staff.
-** <<<SRDP-L1-13>>>
-The restore use case can be used to prepare data for further processing (such as the PI driven imaging use case).
-** <<<SRDP-L1-6.16>>>
-A request is not complete until the user is satisfied with the result of the processing. 
-** <<<SRDP-L1-6.16.1>>>
-Multiple revisions of the parameters are permitted and must be kept with the request.
-** <<<SRDP-L1-6.16.2>>>
-If a job fails for some transient reason, it should be possible to re-execute it without losing information about the failed execution.
-* TODO COMMENT Additional TODOs
-** TODO 
diff --git a/apps/cli/utilities/wksp0/architecture/Futures.org b/apps/cli/utilities/wksp0/architecture/Futures.org
deleted file mode 100644
index ffaa94725a8902c0d93cc7d7c527b89ece4d2931..0000000000000000000000000000000000000000
--- a/apps/cli/utilities/wksp0/architecture/Futures.org
+++ /dev/null
@@ -1,88 +0,0 @@
-#+TITLE: Workspace Architecture: Future Directions
-#+AUTHOR: Daniel K Lyons and the SSA Team
-#+DATE: 2019-12-09
-#+SETUPFILE: https://fniessen.github.io/org-html-themes/setup/theme-readtheorg.setup
-#+HTML_HEAD_EXTRA: <link rel="stylesheet" type="text/css" href="extra.css" />
-#+OPTIONS: H:5 ':t
-
-* Introduction
-
-This document is intended to keep track of several things that don't quite fit with the other architecture documentation:
-
-- Things that sound like requirements, but weren't
-- Requirements that were missed between architecting and preparing for the CDR
-- Hard, wall-like objects that we appear to be rushing towards with this architecture
-
-* Missed Requirements
-
-At this time, there are no requirements that are known to have been missed.
-
-* Areas Needing Additional Design
-
-** Reingestion
-
-In the course of creating the work breakdown, it has become clear that there is a need for a deeper understanding of reingestion. There is a technical problem here, in that the product locator system prevents us from safely implementing reingestion as a simple delete followed by an ingestion. There are several different semantics that need to be understood here; perhaps an earlier ingestion may have been corrupted or incomplete somehow, or perhaps we are really ingesting an improved version of something but want to retain the old one for some reason. The requirements here need to be sussed out in more depth before reingestion can really be designed and implemented, and for this reason this area remains a bit vague.
-
-** Build and deployment
-
-As the SSA group is in the midst of reviewing our core processes in advance of implementing workspaces, we see a need to treat the build system and deployment system for the workspace as component-level work deserving of the same kind of attention as the code itself. Testing has been a major concern for our planning since early in the process, but the emphasis on build and deployment is new. A certain amount of research is expected to be needed to figure out the right process here. The late-breaking development here is simply raising the importance of these issues, with the expectation that more details will be coming soon.
-
-** Parameter Validation
-
-At the moment there is no parameter validation phase in the system. The architectural assumption here is that the UI will do its own validation and prevent users from doing nonsensical things. The UI is likely to generate its own parameter validation service, but since it isn't a first-class architectural entity at the moment, it won't be available for these systems to utilize. It seems likely that we will want to promote this to a first-class entity so that it can be used by the capability service itself to validate requests as they come in, even from other archive and workspace systems.
-
-A future feature that might motivate more design here would be parameters whose values are influenced by the chosen products themselves. Some CASA parameters have sensible values that vary with different data files, for instance. There's nothing in the current design to locate these values or validate them, and that is something we will probably be asked to revisit in the future.
-
-* Non-Requirements
-
-** Capability typing
-
-I find it useful to think of the type signatures of workflows and capabilities as if they were ordinary programming language artifacts. In this regime, capabilities are clearly functions from products to products and workflows are clearly procedures. In terms of Java, you could see workflows and capabilities as having types like:
-
-#+BEGIN_SRC java
-void workflow();
-Product capability(Product input);
-#+END_SRC
-
-This leaves some work for the future:
-
-- How do we handle checking the types of capability inputs? 
-
-  You can't image a calibration table or generate a calibration table from an image, for instance.
-- How do we verify the type of the object input and other inputs?
-- How do we handle multiple products, such as for downloads? There's only one product slot.
-- How do we handle capabilities that need more than one product, each with different semantics? 
-
-  For instance, calibrate this raw data with this calibration table?
-
-** CASA version-specific UI
-
-It's true that different configuration for different CASA versions is possible within one capability. However, in principle there is nothing in the system to modify the UI depending on the CASA version.
-
-In practice, the UI parameter components are so completely independent that you could put conditional logic in them based on the CASA version that is chosen, as long as the receiving side is able to handle it. So if you only put a key/value into the parameters when the CASA version is X, you'd better only use that key inside the override template for CASA version X. This may not be super fun to debug, though, so it may be better to pretend you cannot do this.
-
-** Self-healing
-
-This is not currently in-scope. Ingestion is a workflow, because it does not begin with a product. Reingestion, however, can be a capability because it begins with a product and ends with a product (the same product). So self-healing by reingestion could enter the picture there.
-
-** Custom triggering
-
-It was realized fairly late in the design process that we are assuming a fairly straightforward replacement of some hard-coded rules in the existing archive rules engine (amygdala) with some other hard-coded rules in the same location to address the workspace system instead. Handling this properly with some flexibility would be a good idea, but there did not seem to really be a motivating requirement.
-
-** Auto-follow-ons
-
-What if I pick raw data and want an image? In the current design, I have to set up the calibration or restore of that raw data, then I can send a follow-on request for an image from that calibrated MS. There is no way to do both of these in a single go. 
-
-I anticipate that implementing this feature on the current design should not be super hard. But it isn't in-scope at this time.
-
-** Pre-emption
-
-In this design, time-critical projects are flagged as such, and a time-critical standard calibration will always be chosen to run before a non-time-critical standard calibration. This is because the capability queues are priority queues.
-
-There is no pre-emption in this design. This means that the arrival of a time-critical calibration will not cause a running non-time-critical calibration to be stopped or cancelled. The time-critical calibration still has to wait for whatever processing is currently running to finish. It should be the next thing executed though—unless there are more than N time-critical calibrations ahead of it in the queue, where N is the concurrency limit for this capability.
-
-** Cross-queue priorities
-
-This design does not address priorities across different capabilities. There is no way, for instance, to specify that standard calibrations should be run preferentially over AUDI requests. There simply isn't anything above the capability queues to make decisions like this; each queue will happily launch up to its concurrency limit of workflows.
-
-However, because our wrapping for HTCondor is very flexible, we can probably fake this effect with HTCondor even though it isn't surfaced in the architecture. In the definition of the workflow templates for HTCondor, we can add labels and conditions which HTCondor can be configured to use to create the effect of cross-queue priorities.
diff --git a/apps/cli/utilities/wksp0/architecture/Introduction.org b/apps/cli/utilities/wksp0/architecture/Introduction.org
deleted file mode 100644
index 677b8da41e0444ca17b244ca0b92cb24fbecf4e3..0000000000000000000000000000000000000000
--- a/apps/cli/utilities/wksp0/architecture/Introduction.org
+++ /dev/null
@@ -1,23 +0,0 @@
-#+TITLE: Workspaces: Introduction for the CDR Panel
-#+AUTHOR: Daniel K Lyons
-#+DATE: 2020-01-30
-#+SETUPFILE: https://fniessen.github.io/org-html-themes/setup/theme-readtheorg.setup
-#+HTML_HEAD_EXTRA: <link rel="stylesheet" type="text/css" href="extra.css" />
-#+OPTIONS: H:5 ':t
-
-* Intro to the Panel
-
-Welcome to the panel for the critical design review of the SRDP Workspace System. The following documentation should be useful to you for fulfilling your charge:
-
-1. L0->L1 requirement traceability is visible in Cameo, exported into [[./L0-L1-mapping.pdf][this PDF document]].
-2. L1->L2 requirement traceability is also visible in Cameo, exported into [[./L1-L2-mapping.pdf][this PDF document]].
-3. The architecture is documented in the [[./Overview.org][Overview]] document
-4. L2 requirements are expected to take the form of JIRA tasks derived from the tasks listed in the [[https://open-confluence.nrao.edu/display/WSCDR/DRAFT%3A+Workspaces+System+Implementation+Planning][Workspaces System Implementation Planning]] document.
-5. Known gaps in the requirements are documented in the [[./Futures.org][Future Directions]] document.
-6. The relationship between the requirements and the architecture is presented in the [[file:Overview.org::*Requirement%20Satisfaction][Requirement Satisfaction]] section of the Overview.
-7. The relationship between the requirements and the implementation plan is presented in the [[https://open-confluence.nrao.edu/display/WSCDR/DRAFT%3A+Workspaces+System+Implementation+Planning][Requirements Gap Analysis]] section of the implementation planning document.
-8. The architect asserts that this architecture is the simplest that accounts for the requirements provided and inferred.
-9. The architectural decisions and their rationale are documented in the [[./Design-Iterations.org][Design Iterations]] document.
-10. The implementation team as a whole understands the work to be done. Each developer understands their role in the system.
-11. [[./Overview.org::*Testing Plan][The testing plan]] is part of the architecture overview. 
-
diff --git a/apps/cli/utilities/wksp0/architecture/L0-L1-mapping.pdf b/apps/cli/utilities/wksp0/architecture/L0-L1-mapping.pdf
deleted file mode 100644
index 4341284fcd9bee34b388028979e5937f7ec31cee..0000000000000000000000000000000000000000
Binary files a/apps/cli/utilities/wksp0/architecture/L0-L1-mapping.pdf and /dev/null differ
diff --git a/apps/cli/utilities/wksp0/architecture/L1-L2-mapping.pdf b/apps/cli/utilities/wksp0/architecture/L1-L2-mapping.pdf
deleted file mode 100644
index ec8f051f475eb64415f9b74c48b6bb945c698081..0000000000000000000000000000000000000000
Binary files a/apps/cli/utilities/wksp0/architecture/L1-L2-mapping.pdf and /dev/null differ
diff --git a/apps/cli/utilities/wksp0/architecture/Overdesigned-Capabilities.org b/apps/cli/utilities/wksp0/architecture/Overdesigned-Capabilities.org
deleted file mode 100644
index 18c1cd933e7facaf9c41815dea4f100a42861282..0000000000000000000000000000000000000000
--- a/apps/cli/utilities/wksp0/architecture/Overdesigned-Capabilities.org
+++ /dev/null
@@ -1,17 +0,0 @@
-#+TITLE: Workspace Architecture: Overdesigned
-#+AUTHOR: Daniel K Lyons
-#+DATE: 2019-12-09
-#+SETUPFILE: https://fniessen.github.io/org-html-themes/setup/theme-readtheorg.setup
-#+HTML_HEAD_EXTRA: <link rel="stylesheet" type="text/css" href="extra.css" />
-
-* Introduction
-
-This file is a eulogy for an overdesigned alternate system for managing capabilities.
-
-** Capabilities
-
-A capability forms an arrow. Capabilities are well-typed and composable. The type system is based on the input and output product types.
-
-The capability sequence is the implementation of the capability. It is an arrow, composed of constituent arrows like require-parameter, require-product, run-workflow. Workflow executions have input and output types as well.
-
-The capability arrow is compiled to a state machine.
diff --git a/apps/cli/utilities/wksp0/architecture/Overview.org b/apps/cli/utilities/wksp0/architecture/Overview.org
deleted file mode 100644
index abbc50caa847acb8c7ae543cc41e19c54df6ed59..0000000000000000000000000000000000000000
--- a/apps/cli/utilities/wksp0/architecture/Overview.org
+++ /dev/null
@@ -1,550 +0,0 @@
-#+TITLE: Workspace Architecture: Overview
-#+AUTHOR: Daniel K Lyons and the SSA Team
-#+DATE: 2019-12-09
-#+SETUPFILE: https://fniessen.github.io/org-html-themes/setup/theme-readtheorg.setup
-#+HTML_HEAD_EXTRA: <link rel="stylesheet" type="text/css" href="extra.css" />
-#+OPTIONS: H:5 ':t
-
-* Introduction
-
-A key ingredient in the initiative to deliver science-ready data products is a mechanism to produce those products. The workspace system provides a bulk processing facility for this purpose. The key ideas of this facility are:
-
-- Processing is quality-assured
-- Processing is estimable, visible and cancellable
-- Processing utilizes large clusters for high throughput, such as the local cluster or the public Open Science Grid
-- Processing may be set up prior to the availability of input products, and will kick off when they arrive
-- Processing options are edited by users and the provenance is tracked
-
-The architecture presented here was developed using Attribute-Driven Design. The design iterations are available in the [[./Design-Iterations.org][Design Iterations document]].
-
-The overall architecture here is that of a web application atop two services: a capability service and a lower-level workflow service that it uses. 
-
-[[./images/overview.png]]
-
-The workflow service provides lower level access to non-local computing in various clusters. Workflows are somewhat generic; they have their own inner structure of tasks and task dependencies, but they don't explicitly know anything about science products or the other high-level concerns of scientists and NRAO staff. The capability service is higher-level, dealing with products explicitly, handling quality assurance, and handling parameters. 
-
-These two major services form packages, each built from smaller services:
-
-[[./images/packages.png]]
-
-These services each have a simple role: the *notification service* sends email notifications to users, the *help desk service* facilitates creating, locating, closing and linking to science help desk tickets, the *scheduling service* allows us to perform actions routinely on a schedule, and the *estimation service* retains and publishes timing metrics for capabilities.
-
-Alongside these services are several external systems: the archive (meaning the system which manages the NRAO archive), the authentication/authorization (A3) service, the science help desk, the messaging subsystem, HTCondor and the OpenScienceGrid. Processing is realized by making requests for capabilities.
-
-[[./images/externals.png]]
-
-Within the system are several shared components that act as services internally but which are not exposed: the messaging subsystem (which is shared with the archive), the notification system, and the scheduling system. Additionally, there is the estimation system, which passively collects metrics data for analytics but also exposes an API for retrieving estimates of how long work will take.
-
-** Aside about user interfaces
-
-I have left the workspace UI as a mostly blank box. Early on, we decided to leave the workspace UI underspecified for the sake of agility. We have interpreted requirements that explicitly mention user interaction as instead requiring functionality in the service layers which the UI can use to implement the requirements. For this reason, almost no actual work is allocated to the workspace UI. Instead the workspace UI is only mediating access to the user.
-
-It is worth noting that the UI will necessarily break down into several sections based on their primary purpose and intended audience, as described by this diagram.
-
-[[./images/ui-components.png]]
-
-The editor interface will expose the create-edit-delete operations for capabilities and workflows, but otherwise goes unmentioned in the rest of the document.
-
-It can be assumed that the workspace UI will eventually decompose into some code running in the browser backed by some code running on a web server. The nature of this breakdown is left unspecified for now, but is likely (leveraging the strengths of the SSA team) to be based on Angular 2.0 and Python. The web developers will iterate directly with stakeholders to build a useful UI using the components designed in this architecture.
-
-* Capability Requests and Their Submission
-
-Let us turn now to the story of a user submitting a capability request.
-
-** By finding data
-
-The story begins with our user selecting some data of interest to her in the archive search interface.
-
-[[./images/choose-data-for-processing.png]]
-
-She then requests processing on the item she selected. She is prompted to log in. After successfully authenticating, she is prompted for some additional settings for this processing request. She then submits the request and waits for it to complete. This is represented in this diagram:
-
-[[./images/archive-request.png]]
-
-*** Authentication and Authorization
-
-There are two reasons a capability request may require authorization. One is proprietary data; the other is restricted capabilities.
-
-Most capabilities are available for anyone to invoke, but some are restricted to the group that performs QA. These are called restricted capabilities. System capabilities like standard calibration (formerly CIPL) and eventually standard imaging will be restricted, because their post-QA step includes ingestion. VLASS capabilities will also likely be restricted because of the complexity of keeping track of what has been done.
-
-The more familiar restriction has to do with proprietary data. This is why, when a request comes in, the user making the request and the data they want to operate on must be forwarded to the authorization service to be checked for access. During the proprietary period, only the observer can access the data.
-
-Authentication is the process of confirming the /identity/ of a user. Authorization is the process of confirming that a particular user has a certain right—in our case, access to proprietary data or restricted capabilities. Resource allocation is tied to the identity of a user. There is a longstanding plan to produce a shared "Authentication/Authorization/Allocation" or "A3" service. For the time being, the workspace will have to encompass an A3 service of its own, which in time will become a proxy to the real service, once it exists.
-
-** By the result of another request
-
-There is another way a user could make a request, starting from an earlier capability request. Suppose we have another user looking at some pending processing requests in the workspace itself. The user sees a request for a calibration. He wants to use that calibration to make an image, even though the calibration process isn't complete. The user chooses the request and requests processing on it. He is prompted for some additional parameters for this reprocessing request. He then submits the request and waits for it to complete.
-
-The fact that the user could view the processing request implies that he had access to the results of it, so no additional authentication was required.
-
-[[./images/capability-request-request.png]]
-
-In both cases the result was the same, but the starting state was different.
-
-** By internal systems
-
-Standard calibration (and eventually standard imaging) will work by sending capability requests to the system automatically as data is ingested into the archive. The archive has a rules engine component called amygdala, which notices new product ingestions and reacts by dispatching the CIPL ("CASA Integrated PipeLine") workflow to do automatic calibration. This diagram illustrates:
-
-[[./images/amygdala-request.png]]
-
-The architecture presented so far reveals some missing functionality here in the form of an update to this rules engine to make it more flexible. Several use-cases are identified, but as this was a late discovery, the plan for now must be to proceed with a small change to the rules engine to dispatch capabilities rather than workflows. We expect to revisit this later.
-
-#+BEGIN_COMMENT
-The use cases here are:
-
- - handling VLASS
- - handling the weekly stress test
-#+END_COMMENT
-
-** By VLASS
-
-VLASS (the Very Large Array Sky Survey) is also a client of the capability service. VLASS processing will be implemented as a suite of extra capabilities, which in turn may or may not rely on extra VLASS workflows. VLASS does quite a bit of custom processing, which I discuss later in this document.
-
-* Capability Structure
-
-Capability requests have several relationships with other objects, which are shown in this diagram:
-
-[[./images/capability-requests.png]]
-
-Communication between analysts (or large-project-nominated users) and end-users is mediated by the help desk system. Tickets can be created by people on either side, but are always associated with a particular request.
-
-Every request that is ready to be executed will have at least one version and one execution. The purpose of the version is to hold onto different parameter choices. The purpose of the execution is to track a particular attempt to produce that version's result. Only one execution under a given request can be executing at a time.
-
-* Request Processing
-
-Once a capability request is submitted, an initial version is created and an initial execution record is created under that version. The execution record is then placed in the execution pool. The execution pool receives events from the archive about product availability, from the workspace UI about quality assurance and large allocation status, and from the workflow system. The pool routes these events to the appropriate executions, causing them to change state. Once the request reaches a Prepare-And-Run-Workflow step, it is placed in the queue for the relevant capability, where it awaits being selected for execution. Once the Prepare-And-Run-Workflow step is performed, the capability execution is returned to the pool for the Await-Workflow step, where it awaits a "workflow complete" message.
-
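-To make the pool's routing role concrete, here is a minimal sketch of how the pool might deliver events to awaiting executions. The class, method and event shapes are my own illustrative assumptions, not the actual interfaces.
-
-#+BEGIN_SRC python
-# Illustrative sketch only: names and event shapes are assumptions,
-# not the real workspace interfaces.
-class ExecutionPool:
-    def __init__(self):
-        self._awaiting = {}   # execution_id -> execution in an AWAIT state
-
-    def add(self, execution):
-        self._awaiting[execution.id] = execution
-
-    def on_event(self, event):
-        """Route an archive/QA/workflow event to the executions awaiting it."""
-        for execution in list(self._awaiting.values()):
-            if execution.is_waiting_for(event):
-                execution.handle(event)          # may advance to the next step
-                if not execution.is_awaiting():
-                    del self._awaiting[execution.id]
-#+END_SRC
-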
-The queue is a priority queue, and executes requests in priority order. The requirements stipulate that triggered observations, target-of-opportunity or director's discretionary time count as high priority. The priority only matters inside a given queue: high priority requests of a certain capability will come before low priority requests of the same capability. If there are multiple requests with the same priority level, the one submitted first will be executed first. There is nothing explicit in this design about priority /across/ queues, for instance to make standard calibration take priority over optimized imaging. But, it would be possible to leverage HTCondor to achieve cross-queue priorities by modifying the workflow's templates (and possibly HTCondor's configuration).
-
-Queues can be paused to facilitate upgrades of CASA or instrument reconfiguration. Queues may also optionally have a concurrency limit. This prevents built-up requests from flooding the cluster after resuming from a pause.
-
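-As an illustration of the ordering and throttling behavior described above, here is a minimal sketch of such a queue; the class name and attributes are assumptions made for the example.
-
-#+BEGIN_SRC python
-import heapq
-import itertools
-
-# Sketch of priority ordering, pausing, and the concurrency limit.
-class CapabilityQueue:
-    def __init__(self, concurrency_limit=None):
-        self._heap = []
-        self._counter = itertools.count()   # ties broken by submission order
-        self.concurrency_limit = concurrency_limit
-        self.paused = False
-        self.running = 0
-
-    def submit(self, execution, priority):
-        # Lower numbers run first; high-priority requests use a lower number.
-        heapq.heappush(self._heap, (priority, next(self._counter), execution))
-
-    def next_execution(self):
-        """Return the next execution to run, honoring pause and concurrency."""
-        if self.paused or not self._heap:
-            return None
-        if self.concurrency_limit is not None and self.running >= self.concurrency_limit:
-            return None
-        _, _, execution = heapq.heappop(self._heap)
-        self.running += 1
-        return execution
-#+END_SRC
-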
-Once the execution is selected, the capability info is consulted to acquire some information about this capability, namely the step sequence which is copied into the execution (to prevent strange behavior if the definition is changed while executions are in-flight). A capability engine walks the steps of the capability sequence, executing each one in turn. The step sequence will contain some steps for waiting for products, some for waiting for user input parameters, and some for executing workflows. This shows the sequence of events:
-
-[[./images/request-submission.png]]
-
-The entities in play here are shown in this diagram:
-
-[[./images/capability-execution-bdd.png]]
-
-Each of these entities has a job:
-
-- Capability Request :: Represents the request itself, holds all the versions and knows what the final outcome was.
-- Request Version :: Represents a particular "take," the options chosen for it, and holds all the attempts to produce a result from those options.
-- Capability Execution :: Represents an attempt to execute the capability with this set of options, and knows what its execution state is.
-- Capability Execution Pool :: Holds all executions in an AWAIT state
-- Capability Queue :: Holds all the executions for a certain capability and runs them in priority order.
-- Capability Engine :: Does the actual execution of a capability by evaluating capability steps. Concurrency is managed by the queue, which holds a number of capability engines equal to the concurrency limit.
-- Capability Step :: Does one piece of a capability, such as launching a workflow or waiting for products or quality assurance (details below).
-- Capability Sequence :: The list of capability steps that implement a capability.
-
-There are five kinds of capability step:
-
-- Await product :: broadcasts a need for a certain product and then waits for a signal from the archive or the capability system that it is available
-- Prepare and run workflow :: does some work to set up and begin executing a workflow
-- Await workflow :: waits for a signal that it is complete
-- Await QA :: sends message that QA is needed, waits for QA status change message
-- Await large allocation approval :: checks the estimated time of the request; if it's too large, waits for a signal that allocation approval is granted
-
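-Here is a minimal sketch of an engine evaluating a step sequence made of the kinds listed above. In the real design the AWAIT steps return the execution to the pool rather than blocking; this sketch blocks inline for brevity, and the step kinds and collaborator names are illustrative assumptions.
-
-#+BEGIN_SRC python
-# Minimal sketch of an engine walking a step sequence; names are assumptions.
-def run_capability(execution, workflow_service, messaging):
-    for step in execution.remaining_steps():
-        if step.kind == "AWAIT_PRODUCT":
-            messaging.wait_for("product-available", step.product_locator)
-        elif step.kind == "PREPARE_AND_RUN_WORKFLOW":
-            files = execution.render_templates(step.workflow_name)
-            workflow_service.run(step.workflow_name, execution.parameters, files)
-        elif step.kind == "AWAIT_WORKFLOW":
-            messaging.wait_for("workflow-complete", execution.id)
-        elif step.kind == "AWAIT_QA":
-            messaging.send("qa-needed", execution.id)
-            messaging.wait_for("qa-status-changed", execution.id)
-        elif step.kind == "AWAIT_LARGE_ALLOCATION_APPROVAL":
-            if execution.estimated_time() > execution.large_allocation_threshold():
-                messaging.wait_for("allocation-approved", execution.id)
-        execution.record_step_complete(step)
-#+END_SRC
-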
-*** Capability execution interruption
-
-There is potential for drama here if the power goes out during capability execution. While the capability info stores our state so that we can resume execution, we must consider what happens if a step was partially executed when the power went out. What happens if we repeat a step, for instance?
-
-- Await product :: Check for the product; it's either available or not, so there is no harm repeating this step.
-- Await workflow :: Check to see if the workflow is actually complete; if it is not, resume waiting. Again, no harm.
-- Await QA :: Check for the QA status change; if it hasn't arrived yet, resume waiting. Still no harm.
-- Prepare and run workflow :: The dangerous one. This does some calculation and then executes a workflow. If the calculation was interrupted, redoing it is harmless. If the workflow execution was started but not recorded, there is a chance that two workflows will be executing.
-
-As far as I can tell, this design cannot lose work; it can only perform extra processing. We'll have to think about how such duplicate processing could be detected in those cases where the capability service is restarted abruptly.
-
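-One possible mitigation for the dangerous case, sketched below, is to record the intent to launch a workflow before actually launching it, so that a duplicate can at least be detected after a restart. The names here are assumptions made for illustration, not the actual design.
-
-#+BEGIN_SRC python
-# Sketch only: record intent before launching so a restart can detect it.
-def prepare_and_run(step, execution, capability_info, workflow_service):
-    existing = capability_info.workflow_launch_for(execution.id, step.index)
-    if existing is not None:
-        # A launch was already recorded for this step before the restart;
-        # check on it rather than starting a second workflow.
-        return workflow_service.status(existing.workflow_id)
-
-    files = execution.render_templates(step.workflow_name)
-    capability_info.record_launch_intent(execution.id, step.index)
-    workflow_id = workflow_service.run(step.workflow_name, execution.parameters, files)
-    capability_info.record_launch(execution.id, step.index, workflow_id)
-    return workflow_service.status(workflow_id)
-#+END_SRC
-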
-*** Preparing a workflow
-
-Preparing a workflow requires a few steps of its own:
-
-[[./images/prepare-execute-workflow.png]]
-
-Here is a view in terms of the interactions with other objects:
-
-[[./images/prepare-execute-workflow-seq.png]]
-
-** Example: Imaging
-
-Let's take a deeper look at an example capability. Let's say we're imaging; we have defined a workflow that fetches data and runs CASA and we have an ingestion workflow. To provide an imaging capability, we will need a calibration product, we will need to run CASA against it, and we will need to perform QA before delivering it. Here is what the corresponding capability step sequence might look like:
-
-#+BEGIN_SRC
-AWAIT PRODUCT cal://alma/...
-PREPARE AND RUN WORKFLOW fetch-and-run-casa
-AWAIT WORKFLOW COMPLETE
-AWAIT QA
-PREPARE AND RUN WORKFLOW ingest
-AWAIT WORKFLOW COMPLETE
-#+END_SRC
-
-The capability engine will process this sequence in order, mostly by sending messages to other systems, as described here (bearing in mind this is an /example/ capability step sequence):
-
-[[./images/generic-sequence.png]]
-
-** Request and Execution States
-
-Most of the time, what a user is interested in is actually the request /status/, which I define to be the state of the request, unless there is a currently executing step associated with one of its versions' executions, in which case it is the name of that step. This simplifies the state model for requests to this:
-
-[[./images/request-states.png]]
-
-And executions to this:
-
-[[./images/execution-states.png]]
-
-The request status is therefore either the request state or, if the request state is "Executing" and the corresponding execution is in "Executing Step," the currently executing step of the associated execution: for instance, fetching data, running CASA, or delivering.
-
-[[./images/execution-status.png]]
-
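-A minimal sketch of that derivation (state and step names are assumed for illustration) might look like this:
-
-#+BEGIN_SRC python
-# Sketch of deriving the user-visible request status described above.
-def request_status(request):
-    if request.state != "Executing":
-        return request.state
-    execution = request.current_execution()
-    if execution is not None and execution.state == "Executing Step":
-        return execution.current_step_name()   # e.g. "Fetching data", "Running CASA"
-    return request.state
-#+END_SRC
-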
-* Request Cleanup
-
-There is a daily cleanup task executed by the scheduler which handles the requirements here.
-
-[[./images/daily-cleanup.png]]
-
-The structure of this is pretty simple:
-
-[[./images/daily-cleanup-bdd.png]]
-
-* Workflows
-
-Workflows are the unit of processing used by the capability service. A workflow will encompass several steps, like fetching data, running CASA and performing delivery. These steps are not limited to sequential order; they can actually form a graph. This is to enable advanced concurrency setups like map/reduce or scatter/gather processing where many concurrent jobs perform the same task on a different increment of data, as needed by (for instance) VLASS. These details are hidden behind the abstraction; clients of the workflow have no idea how their workflow is executed or where.
-
-The workflow system itself is ignorant of things like products, versions and provenance, and there are no aggregate collections of workflow executions underneath workflow requests or versions; running a workflow is more like running a program.
-
-Workflows accept an input parameter and files. Workflows are transformed into jobs appropriate for HTCondor and managed by HTCondor DAGman. The sequence of steps looks like this:
-
-[[./images/workflow-execution-act.png]]
-
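-As an illustration of that transformation, here is a sketch of rendering a simple linear workflow into a DAGMan input file. Real workflows may be arbitrary graphs and will use the templating machinery; the structure and file names here are assumed purely for the example.
-
-#+BEGIN_SRC python
-# Illustrative sketch of producing a DAGMan input file from an ordered
-# list of (job_name, submit_file) pairs; the inputs are assumptions.
-def render_dag(workflow_name, steps):
-    lines = [f"JOB {name} {submit_file}" for name, submit_file in steps]
-    for (parent, _), (child, _) in zip(steps, steps[1:]):
-        lines.append(f"PARENT {parent} CHILD {child}")
-    return "\n".join(lines) + "\n"
-
-# render_dag("imaging", [("fetch", "fetch.condor"),
-#                        ("casa", "casa.condor"),
-#                        ("deliver", "deliver.condor")])
-# yields a DAG with fetch -> casa -> deliver.
-#+END_SRC
-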
-Simultaneously, there is a process monitoring the HTCondor logs and generating events as the workflow process evolves. Both of these are illustrated by this sequence diagram:
-
-[[./images/workflow-execution.png]]
-
-In this manner, the message that a workflow is complete is sent back to the capability system, where a capability step is waiting for it, as well as to the estimation service.
-
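-A naive sketch of that monitoring process might look like the following; the event-code matching and the messaging call are illustrative assumptions.
-
-#+BEGIN_SRC python
-import time
-
-# Sketch only: poll the DAGMan log and publish a message when the workflow
-# terminates ("005" is the job-terminated event code in HTCondor logs).
-def monitor_dagman_log(log_path, messaging, workflow_id, poll_seconds=5):
-    lines_seen = 0
-    while True:
-        with open(log_path) as log:
-            lines = log.read().splitlines()
-        for line in lines[lines_seen:]:
-            if line.startswith("005"):
-                messaging.send("workflow-complete", workflow_id)
-                return
-        lines_seen = len(lines)
-        time.sleep(poll_seconds)
-#+END_SRC
-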
-** Why are we tightly-coupled to HTCondor?
-
-The workflow system is hidden behind its own service. There is no direct dependency on the implementation of the workflows in the capability service; as far as it is concerned, a workflow is run by sending a name, a parameter and some files to a service and asking it to go.
-
-Inside the workflow service, we use HTCondor's DAGman as the implementation of workflows. We have a concrete requirement to support the Open Science Grid, which implies we must support HTCondor. Absent credible alternatives, I consider this sufficient flexibility.
-
-* Estimation, Notification, Help Desk
-
-** Estimation Service
-
-When capabilities are executed, messages are sent via the messaging subsystem. There is an estimation service listening for these messages with the intent of correlating the request parameters with the elapsed time between request and completion. The service will correlate these data and provide an API for obtaining a guesstimate for how long particular requests might take if they were submitted.
-
-The service API here will be a single endpoint, to which a capability and parameter can be given, returning an estimate of how long it will take to execute.
-
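-Assuming a Python/Pyramid implementation, a sketch of that single endpoint might look like this; the route, parameter names, and the estimation lookup are assumptions made for illustration.
-
-#+BEGIN_SRC python
-from wsgiref.simple_server import make_server
-from pyramid.config import Configurator
-
-def lookup_estimate(capability, parameters):
-    """Placeholder for correlating past executions; here it just returns an hour."""
-    return 3600
-
-def estimate(request):
-    # Return an estimate of elapsed time for the given capability and parameters.
-    capability = request.params['capability']
-    parameters = request.params.get('parameters', '{}')
-    return {'capability': capability,
-            'estimated_seconds': lookup_estimate(capability, parameters)}
-
-if __name__ == '__main__':
-    config = Configurator()
-    config.add_route('estimate', '/estimate')
-    config.add_view(estimate, route_name='estimate', renderer='json')
-    make_server('0.0.0.0', 8080, config.make_wsgi_app()).serve_forever()
-#+END_SRC
-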
-** Notification Service
-
-There are many situations in this system where a notification may need to be sent, either to the submitting user or to people responsible for doing quality assurance. The notification system will provide a high-level API for sending these sorts of notifications.
-
-Both the capability service and the workflow service utilize templating to generate input files. The same templating system will be used here, so that notifications can be selected and initiated with some set of parameters and proper template rendering will occur.
-
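-As an illustration, assuming the pystache implementation of Mustache, rendering a notification might look like the following; the template text, parameter values, and the send_email helper are made up for the example.
-
-#+BEGIN_SRC python
-import pystache  # one of several Python Mustache implementations
-
-# Sketch only: template and parameters are illustrative assumptions.
-TEMPLATE = """Dear {{user_name}},
-
-Your {{capability}} request ({{request_id}}) has completed.
-The products will be retained until {{expiration_date}}.
-"""
-
-def notify_completion(send_email, user, request_id, expiration_date):
-    body = pystache.render(TEMPLATE, {
-        "user_name": user["name"],
-        "capability": "Optimized Imaging",
-        "request_id": request_id,
-        "expiration_date": expiration_date,
-    })
-    send_email(to=user["email"],
-               subject="Your processing request is complete",
-               body=body)
-#+END_SRC
-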
-** Help Desk Service
-
-Scientists needing help request it using the Kayako science help desk, which mediates the communication between staff and users. The exact functionality of Kayako is unknown to our team, but what matters for our purposes is that tickets can be opened and closed and linked to. The Help Desk Service abstracts the exact nature of the science help desk from us, protecting us from change, but giving us access to two verbs, open and close, and giving us a way to track tickets via links.
-
-Capability requests can be hooked up to one or more help desk tickets. Users and staff will both be able to use a UI in the workspace system to initiate a conversation with the other side, which will take place in the science help desk.
-
-* Persistence with Capability Info and Workflow Info
-
-Architecturally, how capabilities and workflows and whatnot are persisted is not especially significant. What matters in the architecture is knowing that it happens and which components are responsible for it. As shown in earlier sections, there are Capability Info and Workflow Info elements in the design which handle the lookup and persistence of capabilities and workflows respectively. Capability Info has collaborators, Project Settings and Capability Matrix, which handle some details specially.
-
-Without getting deeply involved in the design, we can probably assume that these blocks will be backed by a relational database for the workspaces:
-
-[[./images/database.png]]
-
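-Assuming SQLAlchemy on PostgreSQL (per the technology decisions later in this document), a sketch of how the request/version/execution relationships might be persisted follows; the table and column names are assumptions, not the actual schema.
-
-#+BEGIN_SRC python
-from sqlalchemy import Column, Integer, String, ForeignKey
-from sqlalchemy.orm import relationship
-from sqlalchemy.ext.declarative import declarative_base
-
-Base = declarative_base()
-
-# Sketch only: an illustrative mapping of the request/version/execution entities.
-class CapabilityRequest(Base):
-    __tablename__ = 'capability_requests'
-    id = Column(Integer, primary_key=True)
-    capability_name = Column(String)
-    state = Column(String)
-    versions = relationship('RequestVersion', back_populates='request')
-
-class RequestVersion(Base):
-    __tablename__ = 'request_versions'
-    id = Column(Integer, primary_key=True)
-    request_id = Column(Integer, ForeignKey('capability_requests.id'))
-    parameters = Column(String)          # e.g. JSON-encoded options
-    request = relationship('CapabilityRequest', back_populates='versions')
-    executions = relationship('CapabilityExecution', back_populates='version')
-
-class CapabilityExecution(Base):
-    __tablename__ = 'capability_executions'
-    id = Column(Integer, primary_key=True)
-    version_id = Column(Integer, ForeignKey('request_versions.id'))
-    state = Column(String)
-    version = relationship('RequestVersion', back_populates='executions')
-#+END_SRC
-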
-* Integration with Existing Systems
-
-This is the design, but how to get from where we are to it is a question that warrants some exploration.
-
-The bulk of the capability system is all-new, along with the UI. These portions can be built directly without affecting the existing systems. Moving the existing workflows into this regime, however, is not straightforward, nor is deprecating the VLASS manager.
-
-** Archive
-
-There are two points of integration with the archive: ingestion and making capability requests.
-
-At product ingestion time, messages are sent over the pub/sub messaging system. This system is shared between the archive and the workspace system. The workspace receives these ingestion events and looks for active requests that are awaiting the newly ingested product; if found, they are notified and move on to the next step in their step sequence.
-
-As discussed in [[By internal systems][the section on internal systems]], the archive also has a rules engine for dispatching workflows when certain events occur, like ingestion of certain products. This system will have to be updated to send capability requests instead, and may deserve some improvements above and beyond that.
-
-The second point of integration is in the archive's UI for sending requests. This is going to change so that it obtains the available capabilities from the workspace system and forwards the user and the requested data to it.
-
-** Existing workflows
-
-The existing workflows serve many of the same needs as the new ones do, just worse. The plan for migration and integration is this:
-
-1. Refactor several existing workflow jobs into standalone executables
-
-   This mainly pertains to delivery and running CASA. The data fetcher and ingestion are already basically standalone executables.
-
-2. Refactor and migrate some workflow tasks into standalone executables
-
-   There are several current workflow tasks that will probably need to be refactored into standalone utilities. Ingestion, for instance, has a preparatory step that probably needs to be converted to this.
-
-3. Ensure that internal workflows are mapped to the new workflow service
-
-   There are several clients of the existing workflow system, especially VLASS (discussed below) and the archive system. The workflow service will be internally-accessible to support these clients, but they will need to be updated to access the new service.
-
-Apart from these areas, the bulk of the code in the existing workflows is either boilerplate or OODT-related cruft (scaffolding or replacement components). This code goes away, completely replaced by the new workflow service.
-
-** VLASS
-
-The VLASS software system has several major components:
-
-- VLASS Manager :: A UI for handling VLASS processing
-- VLASS Workflows :: A suite of workflows, used both explicitly by the VLASS manager and in a triggered fashion by the archive
-- Scripts :: A poorly-defined suite of scripts that do various tasks manually for VLASS
-
-Ultimately, it should be the case that:
-
-- VLASS Manager is mostly replaced by the workspace UI
-- VLASS workflows are entirely replaced by the capability system and its large-project support
-- Reliance on one-off scripts is dramatically reduced
-
-I argue that the end-game scenario is achievable with the design we have now:
-
-1. Workspace UI will support allowing large projects to define their own QA personnel.
-2. Large projects can also define their own capabilities
-3. QA system is sufficient for large projects including VLASS
-4. Request-version-execution regime maps nicely from VLASS Manager's product-version-execution system
-5. Capability request composition maps nicely from VLASS Manager's product dependencies
-
-There are some features of the VLASS Manager that do not map onto features of the capability system, which will need to be handled somehow:
-
-- Survey / tile completion tracking ("87% of T17t01 is imaged", "99% of Epoch 1.2 is imaged")
-- Generating requests for various products of various minitiles and their components
-
-For this reason I do not think it is possible to completely replace the VLASS Manager with the workspace UI.
-
-*** Paths forward
-
-The way forward is to worry about VLASS post-hoc. As long as the archive continues to generate the messages which the VLASS workflows and VLASS manager expect, it would be safe to bring the entire capability system online without touching the VLASS systems. Assuming the workspace system is in-place, the migration path would then be:
-
-1. Migrate VLASS workflows into VLASS project-specific capabilities
-2. Migrate data from VLASS database into capability info
-3. Remove jobs/executions/QA tabs from VLASS manager. Alternatively, make it display the same data by retrieving it from the capability service
-4. Refactor VLASS scripts for generating products to generate capability requests instead
-
-As VLASS is effectively a client of the workspace system in the new regime, doing this is probably the right approach. We could interleave development here with development on the capability system, at some increased risk but with an accelerated timeline. The safety of this would be a function of how completely the workspace system is built when the integration is attempted.
-
-Obviously this will leave the VLASS Manager as something of a husk of its former glory. More work will be needed here, but I think solving all of VLASS's problems is probably not in scope for the workspace system.
-
-* On Errors
-
-As with any large system, there are a lot of ways for things to go wrong. The following are addressed by this architecture.
-
-** Hardware failures
-
-A key benefit of tight coupling to HTCondor is that hardware failures of running processes do not cause the work to be lost completely. HTCondor will reschedule the work onto another machine. So the most obvious kinds of hardware error are handled by HTCondor itself.
-
-Of the external mechanisms we use, our critical dependencies are on the HTCondor cluster, the database and the messaging systems. HTCondor's own management systems can fail, in which case our workflows won't be schedulable. This will manifest in our software as capability execution failures, which can be retried later. The database system going offline would be a significant disaster for almost all of our systems, but the database is routinely backed up and has its own disaster recovery mechanisms. The messaging system has gone offline before, which causes dependent systems to block until it comes back online. This can cause availability issues but tends not to lose data.
-
-** New resources
-
-What happens when new HPC systems are brought online? As long as the scientific computing group (SCG) provides access to new resources via HTCondor, the workspace system will be able to utilize them. HTCondor has several features here which are likely to make it a safe bet in the long run. For compatibility with other HPC software, HTCondor provides "glide ins"—a way of automatically setting up a minimal HTCondor environment on a single machine. This makes it very easy to support HTCondor on top of other software and hardware. Separately, for granular scheduling of work, HTCondor provides a powerful pattern matching system, based on their "classified ad" system. We anticipate using this to automatically push workflow executions into clusters that are local to the data they will be operating on.
-
-** Networking
-
-What effect will workspaces have on network utilization? The workspace system itself doesn't significantly affect network usage: the same kind of processing we do now is what will be done under workspaces, with a similar workload, so workspaces by itself generates little additional network traffic over and above the current workflow system.
-
-We anticipate a significant effect from moving some processing into the Open Science Grid, whereupon processing will necessarily be nonlocal to the data. The SCG is working on a partnership with the Center for High Throughput Computing (CHTC), the authors of HTCondor, to figure out exactly how we'll need to address this. One approach would be creating data caches at Open Science Grid sites to reduce the amount of long-distance data transfer. There is probably no avoiding a significant increase in bandwidth utilization from propagating large datasets from here to OSG sites; the best we can probably hope for is to come up with a way to do it intelligently rather than wastefully, but again this will probably fall mostly on the shoulders of the SCG.
-
-** Workflow failures
-
-When individual workflow steps fail, HTCondor eventually cancels the workflow. Whatever work has completed so far isn't lost but is marked as having completed; the workflow can be manually rescheduled and proceed. This requires manual intervention by a human but suggests that workflows are more readily resumed than in the existing workflow system.
-
-A number of interesting related system failures could hide behind a workflow failure: NGAS or Lustre issues, trouble with CASA versions, etc. In any event, HTCondor leaves copious logs and the messages from tasks will be available in the working directory to examine for post-mortem analysis.
-
-** Capability execution failures
-
-Capability steps mostly cannot fail, owing to their simplicity. The only step that can fail is the workflow step, which is addressed above. The capability execution will then enter an Error state; either someone fixes the capability and restarts it (causing it to return to the Executing Step state) or the whole execution is failed.
-
-** Capability request failures
-
-Capability requests cannot fail, although they can be abandoned. If an execution fails, a new execution can be created; if the failure was due to some transient cause (a software misconfiguration or missing resources or something) then it will be remedied by an additional execution. If the failure was due to bad parameters, a new request version can be created with fixed parameters, possibly with input from the help desk via the ticket mechanism.
-
-** Monitoring and alerting
-
-As SSA systems grow larger and more complex, detecting faults and responding in a timely way is becoming a larger and larger issue. The capability system is only going to increase this, and the distributed nature of the processing is going to create new opportunities for information necessary for debugging to be mislaid.
-
-For this reason, we anticipate bringing online a new monitoring system. This system will not be specific to the workspace, but is expected to be shared with all of the SSA software. The monitoring system will provide a simple API for publishing statistics and logs from any component in the workspace system (or any other system). The information so collected can be visualized with an open-source client such as Grafana. Pro-active monitoring and alerting can also be done. We will be conducting the research on which system to use shortly, and this document will be updated when concrete decisions are made.
-
-* Testing Plan
-
-The high-level approach here is to follow the plan outlined in the book /Growing Object-Oriented Software/. We will build a "walking skeleton" consisting of all the necessary interfaces as stubs. First light will be a simple capability that does nothing or nearly nothing, to exercise all the pathways. Integration tests and unit tests will follow, with the meat of the implementation of a module following the unit tests for that module.
-
-Our general approach will be Test-Driven Development, in which the system is modeled and unit tests are designed and implemented for each object in the system, along with integration and regression testing. 
-
-Integration testing will involve establishing and exercising expectations for interactions within and among the components. For example, the various services interface with the capability service as well as the messaging subsystem and the workflow service, which itself interacts with the messaging subsystem. We plan to use mock objects to represent the services so that the behavior of each service can be exercised without the need to instantiate and call methods on the actual objects, which could be time-consuming, difficult, and in some cases not possible. In similar fashion, every foreseeable scenario in the workspace's interaction with such entities as the archive, the science helpdesk, and others will be modeled and tested using mocks.
-
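-As an illustration of the mocking approach, here is a sketch of a test that exercises a deliberately tiny, made-up engine against a mocked workflow service using the built-in unittest library; the real engine and its interfaces will differ.
-
-#+BEGIN_SRC python
-import unittest
-from unittest import mock
-
-# The tiny engine exists only so the example is self-contained.
-class TinyEngine:
-    def __init__(self, workflow_service):
-        self.workflow_service = workflow_service
-
-    def execute_prepare_and_run(self, workflow_name, parameters, files):
-        return self.workflow_service.run(workflow_name, parameters, files)
-
-class TinyEngineTest(unittest.TestCase):
-    def test_prepare_and_run_delegates_to_workflow_service(self):
-        workflow_service = mock.Mock()
-        workflow_service.run.return_value = "workflow-42"
-
-        engine = TinyEngine(workflow_service)
-        result = engine.execute_prepare_and_run("fetch-and-run-casa", {"nthreads": 4}, [])
-
-        workflow_service.run.assert_called_once_with("fetch-and-run-casa", {"nthreads": 4}, [])
-        self.assertEqual(result, "workflow-42")
-
-if __name__ == "__main__":
-    unittest.main()
-#+END_SRC
-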
-Regression testing is necessary to ensure that defects that have been addressed don't turn up again later. This can happen as a result of code changes not being committed to the source code repository, or of merging Tuesday's branch into the repo without pulling Monday's work down first.
-
-The eventual goal is automated building and testing of the codebase, such that at regular intervals the system is automatically rebuilt to incorporate any code changes, then every single test is exercised, with immediate reporting of any failures.
-
-* Technology Decisions
-
-There are few technology choice surprises in this architecture. Most of the technologies we'll be using have been proven already in the archive project over many years and with VLASS.
-
-The database backend will be PostgreSQL. The database abstraction layers will be either SQLAlchemy for Python or MyBatis for Java.
-
-The services will be written in Java or Python, depending on which is convenient or furnishes a necessary library. If Java, JAX-RS will be used. If Python, Pyramid will be used.
-
-The user interfaces will be written with AngularJS and a Python/Pyramid backend.
-
-Templates will, as much as possible, use Mustache, which is a language-independent system for doing templating.
-
-One open question is how to handle metrics and proactive monitoring. There is some interest in using InfluxDB in the electronics division; another suggestion is Prometheus. As this would be a new system altogether, there is not much precedent for it in the observatory to follow. Details will be added to this document as they are found.
-
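-Purely as an illustration of what publishing such metrics could look like, and assuming Prometheus were the system chosen (no decision has been made), a sketch might be:
-
-#+BEGIN_SRC python
-from prometheus_client import Counter, Histogram, start_http_server
-
-# Illustrative only; metric names are made up for the example.
-WORKFLOW_FAILURES = Counter(
-    'workspace_workflow_failures_total',
-    'Number of workflow executions that ended in failure')
-REQUEST_LATENCY = Histogram(
-    'workspace_request_latency_seconds',
-    'Elapsed time from capability request submission to completion')
-
-def record_completion(elapsed_seconds, failed):
-    if failed:
-        WORKFLOW_FAILURES.inc()
-    REQUEST_LATENCY.observe(elapsed_seconds)
-
-if __name__ == '__main__':
-    start_http_server(8000)   # expose /metrics for scraping
-#+END_SRC
-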
-Tests will be written using JUnit and Mockito for Java and the built-in unittest library for Python, which also furnishes mock objects.
-
-* Requirement Satisfaction
-
-This section is intended to assist the CDR panel members with understanding how each requirement is satisfied. The details of how certain decisions were made as they pertain to a particular requirement can be found by consulting [[file:./Design-Iterations.org::*Requirement%20Satisfaction][the requirement satisfaction section of the design iterations document]]. The following table includes each requirement, its text, and the corresponding items from the design that address the requirement.
-
-| Requirement    | Text                                                                                                                                                                                                                                                                                                                  | Components                                               |
-|----------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------|
-| SRDP-L1-5.3    | If the user is not satisfied with the product (for whatever reason), they shall have the ability to return to their request or helpdesk ticket through a provided link, modify as necessary and resubmit. A simple mechanism shall be provided to request more assistance through a linked helpdesk ticket mechanism. | Request Version, Capability Execution                    |
-| SRDP-L1-6.1    | When manual intervention for recalibration is required, the process shall be executed by the operations staff. The staff member shall work with the user to identify and resolve the issue and then resubmits the job for the user. At this point the process will re-enter the standard workflow.                    | Help Desk Service, Request Version, Capability Execution |
-| SRDP-L1-6.2    | The archive interface shall provide status information for the user on each job, links to completed jobs, as well as the weblog for the job.                                                                                                                                                                          | Workflow Service, Workspace UI                           |
-| SRDP-L1-6.3    | Batch submission of jobs shall be throttled to prevent overwhelming processing resources.                                                                                                                                                                                                                             | Capability Queue                                         |
-| SRDP-L1-6.4    | The standard imaging process shall automatically be triggered for observations supported by SRDP once the standard calibration has passed quality assurance.                                                                                                                                                          | Future Product, Capability Queue, Await QA               |
-| SRDP-L1-6.5    | When the single epoch calibration and imaging for all configurations are complete, the data from all configurations shall be imaged jointly.                                                                                                                                                                          | Future Product, Capability Queue                         |
-| SRDP-L1-6.6    | The Time Critical flag shall persist throughout the lifecycle of the project and be made available to the data processing subsystems.                                                                                                                                                                                 | Capability Queue                                         |
-| SRDP-L1-6.6.1  | Processing of time critical proposals shall begin as soon as data is available.                                                                                                                                                                                                                                       | Capability Queue, Future Product                         |
-| SRDP-L1-6.6.2  | The workflow manager shall notify the PI immediately when calibration or imaging products are available, with specific notice that the products have not been quality assured.                                                                                                                                        | Notification Service                                     |
-| SRDP-L1-6.6.3  | In cases of reduction failure, a high priority notification to operations shall be made so that appropriate manual mitigation can be done. Note that this may occur outside of normal business hours.                                                                                                                 | Notification Service                                     |
-| SRDP-L1-6.7    | Large Project processing shall allow use of custom or modified pipelines to process the data and the project team shall be directly involved in the quality assurance process.                                                                                                                                        | Project Settings                                         |
-| SRDP-L1-6.7.1  | The SRDP system shall allow use of NRAO computing resources for the processing of the large project data provided that required computing resources does not exceed the available resources (including prior commitments).                                                                                            | Project Settings                                         |
-| SRDP-L1-6.8    | Once a job is created on archived data, the archive interface shall provide the user an option to modify the input parameters and review the job prior to submission to the processing queue.                                                                                                                         | Capability Request, Workspace UI                         |
-| SRDP-L1-6.9    | Results from reprocessing archive data are temporary and the automated system shall have the ability to automatically enforce the data retention policy.                                                                                                                                                              | Scheduling Service, Cleanup                              |
-| SRDP-L1-6.9.1  | Warnings shall be issued to the user 10 and three days prior to data removal.                                                                                                                                                                                                                                         | Scheduling Service, Cleanup Warning                      |
-| SRDP-L1-6.10   | The workflow system shall automatically start the execution of standard calibration jobs.                                                                                                                                                                                                                             | Capability Request, Future Product, Standard Calibration |
-| SRDP-L1-6.10.1 | It shall be possible for a user to inhibit the automatic creation of calibration jobs.  For instance after a move, prior to new antenna positions being available.                                                                                                                                                    | Capability Queue                                         |
-| SRDP-L1-6.11   | The user shall be able to cancel jobs and remove all associated helpdesk tickets.                                                                                                                                                                                                                                     | Helpdesk Service, Capability Service, Workspace UI       |
-| SRDP-L1-6.12   | The user shall be provided an estimate of the total latency in product creation.                                                                                                                                                                                                                                      | Estimation Service                                       |
-| SRDP-L1-6.13   | The workspace system shall provide interfaces to allow review and control of the activities in the workspace.                                                                                                                                                                                                         | Workspace UI                                             |
-| SRDP-L1-6.13.1 | An interface that allows users to interact with their active and historical processing requests shall be provided.                                                                                                                                                                                                    | Workspace UI                                             |
-| SRDP-L1-6.13.2 | An interface providing internal overview and control of all existing workspace activities and their state for use by internal operational staff.                                                                                                                                                                      | Analyst UI                                               |
-| SRDP-L1-6.14   | The system shall authenticate the user and verify authorization prior to creation of a workspace request.                                                                                                                                                                                                             | A3 Service, Capability Service                           |
-| SRDP-L1-6.15   | The workspace system shall support the optional submission of jobs to open science grid through the high throughput condor system.                                                                                                                                                                                    | Workflow Service                                         |
-| SRDP-L1-8      | Every product shall be assessed for quality, and those products for which the initial calibration are not judged to be of science quality should be identified for further intervention.                                                                                                                              | Analyst UI, Await QA                                     |
-| SRDP-L1-8.6    | Workspaces shall permit some categories of processing to be designated as requiring QA.                                                                                                                                                                                                                               | Capability Sequence, Await QA                            |
-| SRDP-L1-8.7    | Processing requests that require QA shall have to undergo a human inspection prior to being delivered to the requester or ingested into the archive.                                                                                                                                                                  | Workspace UI, Capability Sequence, Await QA              |
-| SRDP-L1-8.8    | There will be a QA interface that will show requests requiring QA and allow designated users to pass/fail requests.                                                                                                                                                                                                   | Analyst UI                                               |
-| SRDP-L1-8.8.1  | The QA interface will allow permitted users to revise the parameters of a request and submit new processing.                                                                                                                                                                                                          | Analyst UI, Workspace UI                                 |
-| SRDP-L1-8.8.2  | Only the final QA-passed results will be delivered to the requesting user or ingested into the system.                                                                                                                                                                                                                | Capability Sequence, Await QA                            |
-| SRDP-L1-8.9    | The QA interface will facilitate communication between the user performing QA and the user who submitted the processing request.                                                                                                                                                                                      | Workspace UI, Helpdesk Service                           |
-| SRDP-L1-8.10   | Ops staff will be designated for performing QA on standard calibration and imaging processes, and will be able to reassign to other ops staff.                                                                                                                                                                        | Project Settings, Assignee, Analyst UI                   |
-| SRDP-L1-8.10.1 | Large projects shall be able to designate their own users to perform QA on their processes.                                                                                                                                                                                                                           | Project Settings, Analyst UI                             |
-| SRDP-L0-11     | The system shall support a robust and reliable process for the testing, validation, and delivery of capabilities.                                                                                                                                                                                                     | Testing Plan                                             |
-| SRDP-L0-11.2   | SRDP workflows shall be executable with candidate versions of the software. The products generated by this software shall not be exposed as SRDP products in the standard data discovery interfaces.                                                                                                                  | Capability Matrix                                        |
-| SRDP-L0-11.3   | It shall be possible to execute portions of the SRDP workflows to optimize testing.                                                                                                                                                                                                                                   | Testing Plan                                             |
-| SRDP-L0-11.4   | It shall be possible to modify the system without losing the current execution state, or in such a way that the state information can be recaptured.                                                                                                                                                                  | Workflow Service                                         |
-| SRDP-L0-11.5   | The execution environment may need to be modified, for example using a non-standard destination directory to accumulate outputs from a regression testing run.                                                                                                                                                        | Workflow Service                                         |
-| SRDP-L1-11     | Metrics                                                                                                                                                                                                                                                                                                               | Metrics Service, Estimation Service                      |
-| SRDP-L1-11.1   | The latency between the completion of the observation and the delivery of products shall be measured.                                                                                                                                                                                                                 | Metrics Service                                          |
-| SRDP-L1-11.2   | Categories for failure shall be identified and metrics derived in order to allow the Observatory to address common failure modes.                                                                                                                                                                                     | Metrics Service, Monitoring Plan                         |
-| SRDP-L1-12.6   | If the requested product is large (either in number of data sets to be processed, or implied processing time), the request shall be flagged for manual review by the SRDP operations staff.                                                                                                                           | Estimation Service                                       |
-| SRDP-L1-13     | The restore use case can be used to prepare data for further processing (such as the PI driven imaging use case).                                                                                                                                                                                                     | Future Products                                          |
-| SRDP-L1-6.16   | A request is not complete until the user is satisfied with the result of the processing.                                                                                                                                                                                                                              | Capability Request                                       |
-| SRDP-L1-6.16.1 | Multiple revisions of the parameters are permitted and must be kept with the request.                                                                                                                                                                                                                                 | Request Version                                          |
-| SRDP-L1-6.16.2 | If a job fails for some transient reason, it should be possible to re-execute it without losing information about the failed execution.                                                                                                                                                                               | Capability Execution                                     |
-
-* Glossary
-
-** Workspace Terms
-
-A *product* is a set of data files of a particular type, with provenance, which could be archived.
-
-A *capability* is a particular workflow setup, intended to accept a certain kind of product and some parameters and produce another product.
-
-A *workflow* is a non-local process composed of steps, whose currently executing step or steps are known.
-
-A *capability request* marries a capability to a product, representing the expectation of a new product, parameterized in a certain concrete way.
-
-A *capability step* is a step in the process of producing a certain product.
-
-The *capability matrix* maps CASA versions to version-specific templates, allowing us to support a variety of CASA versions.
-
-A *capability queue* organizes requests in priority order and makes it possible to control the number of concurrent executions, or pause execution altogether.
-
-The *project settings* holds project-specific information: custom capabilities, capability template overrides, and a list of users who may perform QA for the custom capabilities of this project.
-
-The capability *step sequence* is the sequence of steps for running a capability. There are only a few now, like /await QA/, /prepare and run workflow/, /await workflow/ and /await product/.
-
-A *capability engine* knows how to walk the step sequence and execute it. There are a number of these for each queue, corresponding to the concurrency limit.
-
-The *capability info* holds the information about capabilities and capability requests.
-
-** NRAO Jargon
-
-- VLASS :: [[https://public.nrao.edu/vlass/][Very Large Array Sky Survey]], which is a large project here at the NRAO to map the radio sky with the modern instrument's capabilities
-- CASA :: [[http://casa.nrao.edu][Common Astronomy Software Applications]] is the larger and more modern of the two in-house data reduction packages, for making images from radio data
-- HTCondor :: [[https://research.cs.wisc.edu/htcondor/][Condor]] is software for "high throughput computing," which is to say, a kind of grid computing focused on processing smallish jobs on normal-ish computers, in bulk.
-- SCG :: The Scientific Computing Group here at the NRAO is the group that maintains our clusters and worries about grid computing, high-performance computing, and high-throughput computing.
-- OSG :: The Open Science Grid is a publicly-funded high-throughput cluster for scientific computing
-- CHTC :: The Center for High-Throughput Computing is the research organization at University of Wisconsin that is responsible for maintaining HTCondor and associated software
-- NGAS :: Next-Generation Archive System is the previous generation of petabyte-scale data storage which we currently use as the principal storage backend for the archive
-- CIPL :: "CASA Integrated PipeLine" is the old name for the standard calibration process
-
-** Technical Jargon
-
-- JWT :: JSON Web Token, a standard for transmitting authentication data between web services.
-
-* COMMENT TODOs
-** TODO Make sure requirements satisfaction matrix is propagated to Cameo, create PDF dump from it
-** TODO Parameter validation?
-** TODO Phase-requirement mapping (plus gap analysis)
-** TODO Time criticality - how do we mark things as having high priority? (Manually, until there are requirements otherwise)
-** TODO Add something about Cancellation from DI 2.6 and 9.
-** TODO Discuss scalability
-** TODO VLASS: Show mapping from Product Type -> Product -> Product Version -> Request/Job
-** TODO Metrics, which, how stored and queried, informing staff about stalled jobs
-** TODO Test plan: full environment in CV? automated regression testing? what is in scope for regression testing?
-** TODO What happens to follow-ons whose predecessor is cancelled?
-Probably they get cancelled as well.
-** TODO Are queues dynamic? (YES)
-** TODO Is there a way to pause all queues at once? (CH) should be
-** TODO 
-* COMMENT Buckets
-** More discussion/documentation needed
-*** ALMA (RR)
-Is there a plan to scale the workspace design to include ALMA data? If so, some general considerations are:
-- Does this mean that there are two Archives, one in NM and one in CV? Will the one in NM only have VLASS and the one in CV only have ALMA?
-- Will processing in CV happen on HTCondor or on the NAASC lustre? And if processing is happening on the NAASC lustre, how is the messaging and state system incorporated? Can NAASC users choose where jobs are submitted and how will this work?
-
-*** Section 4 (RR)
-- Capability Request: It states that it can hold multiple versions. For a given request, does there need to be a 1:1 ratio of versions to executions? And can multiple executions on a single data set run concurrently?
-- Request version: How many attempts to produce a result are allowed, and what constitutes a successful result? Is there a way to stop trying to produce a result based on the failure?
-- Capability queue: “all the executions” each having its own version, correct?
-*** Section 4.0.1:
-- Prepare and run workflow: you only have to make sure two workflows on the same execution are not occurring simultaneously
-*** Section 7.3:
-- Helpdesks will be moving away from Kayako. How will workspaces handle this? Does it matter?
-- Workspace UI and workflow should also be hooked up to the helpdesk?
-*** Section 8:
-- "Architecturally, how capabilities and workflows and whatnot are persisted is not especially significant”. This is perhaps untrue because it is important when trying to trace a failure through the system
-
-
-** Addressed in Documentation
-*** 
-** Need Further Elaboration
-*** Scope failure modes (RR)
-One of my concerns is a general underestimation of the scope of failure modes and how to handle them. For example:
-
-- What happens if CASA silently fails?
-- What happens if Pipeline fails and how do we know if it is really Pipeline or a propagated error from CASA?
-  (Knowing the answers to a given failure should then dictate the next step in the workflow. Is it submitted again? Does it go to manual?)
-- What happens if a message is sent but not received?
-- What happens if some, but not all, the products are generated?
-
-There needs to be a way to detect these failure modes, sufficient logging and traceability to find the root cause, and then a course of action to deal with each of them.
-
-There seemed to be little discussion of manual processing in the event of a standard-mode failure. It is understood that users may submit jobs, but what happens when standard mode fails?
-- Does the job get resubmitted or sent to a DA? (It probably depends on the type of failure, see above)
-- If it is sent to a DA, does it still get ingested?
-- What happens if a user (PI or DA) generates better results than the standard mode? Are they re-ingested?
diff --git a/apps/cli/utilities/wksp0/architecture/extra.css b/apps/cli/utilities/wksp0/architecture/extra.css
deleted file mode 100644
index a6ee710b8a35f27a8ea4dd851d5252766a4792f3..0000000000000000000000000000000000000000
--- a/apps/cli/utilities/wksp0/architecture/extra.css
+++ /dev/null
@@ -1,3 +0,0 @@
-h4, h5, h6 {
-    color: inherit;
-}
diff --git a/apps/cli/utilities/wksp0/architecture/images/amygdala-request.png b/apps/cli/utilities/wksp0/architecture/images/amygdala-request.png
deleted file mode 100644
index 223edf12daa13af16a029e5b1078c7a59bf2858a..0000000000000000000000000000000000000000
Binary files a/apps/cli/utilities/wksp0/architecture/images/amygdala-request.png and /dev/null differ
diff --git a/apps/cli/utilities/wksp0/architecture/images/archive-request.png b/apps/cli/utilities/wksp0/architecture/images/archive-request.png
deleted file mode 100644
index 210c30a7ee947b907e9cdc019f460a5d336259de..0000000000000000000000000000000000000000
Binary files a/apps/cli/utilities/wksp0/architecture/images/archive-request.png and /dev/null differ
diff --git a/apps/cli/utilities/wksp0/architecture/images/capability-behaviors.png b/apps/cli/utilities/wksp0/architecture/images/capability-behaviors.png
deleted file mode 100644
index 6b7796315008633ee6afc00c4f716e4c15f9a461..0000000000000000000000000000000000000000
Binary files a/apps/cli/utilities/wksp0/architecture/images/capability-behaviors.png and /dev/null differ
diff --git a/apps/cli/utilities/wksp0/architecture/images/capability-execution-bdd.png b/apps/cli/utilities/wksp0/architecture/images/capability-execution-bdd.png
deleted file mode 100644
index 5eac2337af32a7bf30832316ec41bbe77da08044..0000000000000000000000000000000000000000
Binary files a/apps/cli/utilities/wksp0/architecture/images/capability-execution-bdd.png and /dev/null differ
diff --git a/apps/cli/utilities/wksp0/architecture/images/capability-execution.png b/apps/cli/utilities/wksp0/architecture/images/capability-execution.png
deleted file mode 100644
index 0926dc5f6fd8dca46a5356b886abd28a4cf3cfcf..0000000000000000000000000000000000000000
Binary files a/apps/cli/utilities/wksp0/architecture/images/capability-execution.png and /dev/null differ
diff --git a/apps/cli/utilities/wksp0/architecture/images/capability-lookup.png b/apps/cli/utilities/wksp0/architecture/images/capability-lookup.png
deleted file mode 100644
index a2f44449264efd25483fb4eccc2fa9be18059b82..0000000000000000000000000000000000000000
Binary files a/apps/cli/utilities/wksp0/architecture/images/capability-lookup.png and /dev/null differ
diff --git a/apps/cli/utilities/wksp0/architecture/images/capability-request-request.png b/apps/cli/utilities/wksp0/architecture/images/capability-request-request.png
deleted file mode 100644
index 842b67aac9300c46349445597f555b70ea6507a0..0000000000000000000000000000000000000000
Binary files a/apps/cli/utilities/wksp0/architecture/images/capability-request-request.png and /dev/null differ
diff --git a/apps/cli/utilities/wksp0/architecture/images/capability-requests.png b/apps/cli/utilities/wksp0/architecture/images/capability-requests.png
deleted file mode 100644
index c91fd4e5616e44c61d2edc151a10a7722907d89d..0000000000000000000000000000000000000000
Binary files a/apps/cli/utilities/wksp0/architecture/images/capability-requests.png and /dev/null differ
diff --git a/apps/cli/utilities/wksp0/architecture/images/choose-data-for-processing.png b/apps/cli/utilities/wksp0/architecture/images/choose-data-for-processing.png
deleted file mode 100644
index f8e2b397d70344499cb29100203c5224a0bc8e89..0000000000000000000000000000000000000000
Binary files a/apps/cli/utilities/wksp0/architecture/images/choose-data-for-processing.png and /dev/null differ
diff --git a/apps/cli/utilities/wksp0/architecture/images/daily-cleanup-bdd.png b/apps/cli/utilities/wksp0/architecture/images/daily-cleanup-bdd.png
deleted file mode 100644
index 3f0210e532cb7f9adfa611fbf9bb8a38241be015..0000000000000000000000000000000000000000
Binary files a/apps/cli/utilities/wksp0/architecture/images/daily-cleanup-bdd.png and /dev/null differ
diff --git a/apps/cli/utilities/wksp0/architecture/images/daily-cleanup.png b/apps/cli/utilities/wksp0/architecture/images/daily-cleanup.png
deleted file mode 100644
index c68742d38bc97d5201f9bfa156dac31538b2d5a6..0000000000000000000000000000000000000000
Binary files a/apps/cli/utilities/wksp0/architecture/images/daily-cleanup.png and /dev/null differ
diff --git a/apps/cli/utilities/wksp0/architecture/images/database.png b/apps/cli/utilities/wksp0/architecture/images/database.png
deleted file mode 100644
index f1854da8a5e08770223705f0980e36073631b586..0000000000000000000000000000000000000000
Binary files a/apps/cli/utilities/wksp0/architecture/images/database.png and /dev/null differ
diff --git a/apps/cli/utilities/wksp0/architecture/images/database_out.png b/apps/cli/utilities/wksp0/architecture/images/database_out.png
deleted file mode 100644
index e74fa6c213c8ef099142640367341b0b921e1e13..0000000000000000000000000000000000000000
Binary files a/apps/cli/utilities/wksp0/architecture/images/database_out.png and /dev/null differ
diff --git a/apps/cli/utilities/wksp0/architecture/images/execution-states.png b/apps/cli/utilities/wksp0/architecture/images/execution-states.png
deleted file mode 100644
index e9b1ace9f8c97736a91f3b38b0cab41a1acac9ee..0000000000000000000000000000000000000000
Binary files a/apps/cli/utilities/wksp0/architecture/images/execution-states.png and /dev/null differ
diff --git a/apps/cli/utilities/wksp0/architecture/images/execution-status.png b/apps/cli/utilities/wksp0/architecture/images/execution-status.png
deleted file mode 100644
index f5ffc29281cdbb0fea717f19f04d591bc3208a6e..0000000000000000000000000000000000000000
Binary files a/apps/cli/utilities/wksp0/architecture/images/execution-status.png and /dev/null differ
diff --git a/apps/cli/utilities/wksp0/architecture/images/externals.png b/apps/cli/utilities/wksp0/architecture/images/externals.png
deleted file mode 100644
index 254af28ce47a8ee1e8adb36b858a6a1c704717a9..0000000000000000000000000000000000000000
Binary files a/apps/cli/utilities/wksp0/architecture/images/externals.png and /dev/null differ
diff --git a/apps/cli/utilities/wksp0/architecture/images/future-products.png b/apps/cli/utilities/wksp0/architecture/images/future-products.png
deleted file mode 100644
index 216f9e608ae109c4ef9488df793beb220b19e5d7..0000000000000000000000000000000000000000
Binary files a/apps/cli/utilities/wksp0/architecture/images/future-products.png and /dev/null differ
diff --git a/apps/cli/utilities/wksp0/architecture/images/generic-sequence.png b/apps/cli/utilities/wksp0/architecture/images/generic-sequence.png
deleted file mode 100644
index ce8596ab9ef35db1331f9fa405d4cfbd19387b93..0000000000000000000000000000000000000000
Binary files a/apps/cli/utilities/wksp0/architecture/images/generic-sequence.png and /dev/null differ
diff --git a/apps/cli/utilities/wksp0/architecture/images/image1.png b/apps/cli/utilities/wksp0/architecture/images/image1.png
deleted file mode 100644
index 651f02116fb7c940b3cea5123d8033257ccd6e19..0000000000000000000000000000000000000000
Binary files a/apps/cli/utilities/wksp0/architecture/images/image1.png and /dev/null differ
diff --git a/apps/cli/utilities/wksp0/architecture/images/image10.png b/apps/cli/utilities/wksp0/architecture/images/image10.png
deleted file mode 100644
index 67675ae73d49d7eb7754a22410f2d78843523ede..0000000000000000000000000000000000000000
Binary files a/apps/cli/utilities/wksp0/architecture/images/image10.png and /dev/null differ
diff --git a/apps/cli/utilities/wksp0/architecture/images/image11.png b/apps/cli/utilities/wksp0/architecture/images/image11.png
deleted file mode 100644
index da612f3212c619acb80c157fa14f55e86d46627d..0000000000000000000000000000000000000000
Binary files a/apps/cli/utilities/wksp0/architecture/images/image11.png and /dev/null differ
diff --git a/apps/cli/utilities/wksp0/architecture/images/image12.png b/apps/cli/utilities/wksp0/architecture/images/image12.png
deleted file mode 100644
index 82964b941518a9a17ea08e4a5139810bdf56cc1f..0000000000000000000000000000000000000000
Binary files a/apps/cli/utilities/wksp0/architecture/images/image12.png and /dev/null differ
diff --git a/apps/cli/utilities/wksp0/architecture/images/image2.png b/apps/cli/utilities/wksp0/architecture/images/image2.png
deleted file mode 100644
index 030251477edebd406c829a83a69d0dfb4682a308..0000000000000000000000000000000000000000
Binary files a/apps/cli/utilities/wksp0/architecture/images/image2.png and /dev/null differ
diff --git a/apps/cli/utilities/wksp0/architecture/images/image3.png b/apps/cli/utilities/wksp0/architecture/images/image3.png
deleted file mode 100644
index 5436fd44d29d8d897d9ac718ff41f3ae8e4ce738..0000000000000000000000000000000000000000
Binary files a/apps/cli/utilities/wksp0/architecture/images/image3.png and /dev/null differ
diff --git a/apps/cli/utilities/wksp0/architecture/images/image4.png b/apps/cli/utilities/wksp0/architecture/images/image4.png
deleted file mode 100644
index 62a10017828d15dcc3069549b760ae0bb5471745..0000000000000000000000000000000000000000
Binary files a/apps/cli/utilities/wksp0/architecture/images/image4.png and /dev/null differ
diff --git a/apps/cli/utilities/wksp0/architecture/images/image5.png b/apps/cli/utilities/wksp0/architecture/images/image5.png
deleted file mode 100644
index f81aa0e0065197e165fb04ff13e04677d2a20bdb..0000000000000000000000000000000000000000
Binary files a/apps/cli/utilities/wksp0/architecture/images/image5.png and /dev/null differ
diff --git a/apps/cli/utilities/wksp0/architecture/images/image6.png b/apps/cli/utilities/wksp0/architecture/images/image6.png
deleted file mode 100644
index 2a0baa421e3cb2d04b289e42099cb486d721abd4..0000000000000000000000000000000000000000
Binary files a/apps/cli/utilities/wksp0/architecture/images/image6.png and /dev/null differ
diff --git a/apps/cli/utilities/wksp0/architecture/images/image7.png b/apps/cli/utilities/wksp0/architecture/images/image7.png
deleted file mode 100644
index 11de09ce4d7b8d0e95190c320a67b9f3357f6822..0000000000000000000000000000000000000000
Binary files a/apps/cli/utilities/wksp0/architecture/images/image7.png and /dev/null differ
diff --git a/apps/cli/utilities/wksp0/architecture/images/image8.png b/apps/cli/utilities/wksp0/architecture/images/image8.png
deleted file mode 100644
index 9b5bbc65dc5d263d1495d983998051d9b8fba184..0000000000000000000000000000000000000000
Binary files a/apps/cli/utilities/wksp0/architecture/images/image8.png and /dev/null differ
diff --git a/apps/cli/utilities/wksp0/architecture/images/image9.png b/apps/cli/utilities/wksp0/architecture/images/image9.png
deleted file mode 100644
index 3d4f8e36626d77a3a38e08be8af3e3b8afdfa46c..0000000000000000000000000000000000000000
Binary files a/apps/cli/utilities/wksp0/architecture/images/image9.png and /dev/null differ
diff --git a/apps/cli/utilities/wksp0/architecture/images/internals.png b/apps/cli/utilities/wksp0/architecture/images/internals.png
deleted file mode 100644
index 53ebbd6c840588f3e7b9766acad7e32e65cc5777..0000000000000000000000000000000000000000
Binary files a/apps/cli/utilities/wksp0/architecture/images/internals.png and /dev/null differ
diff --git a/apps/cli/utilities/wksp0/architecture/images/look-up-templates.png b/apps/cli/utilities/wksp0/architecture/images/look-up-templates.png
deleted file mode 100644
index a2fe116cd92d16a44085c2e6981ca0a6cfcad4ba..0000000000000000000000000000000000000000
Binary files a/apps/cli/utilities/wksp0/architecture/images/look-up-templates.png and /dev/null differ
diff --git a/apps/cli/utilities/wksp0/architecture/images/overall.png b/apps/cli/utilities/wksp0/architecture/images/overall.png
deleted file mode 100644
index 28b09c1d6175290e58748067b65f124fd78d13db..0000000000000000000000000000000000000000
Binary files a/apps/cli/utilities/wksp0/architecture/images/overall.png and /dev/null differ
diff --git a/apps/cli/utilities/wksp0/architecture/images/overview.png b/apps/cli/utilities/wksp0/architecture/images/overview.png
deleted file mode 100644
index 9f013f0fa463aa5e73521a0c7fdd8522bb5dd3c3..0000000000000000000000000000000000000000
Binary files a/apps/cli/utilities/wksp0/architecture/images/overview.png and /dev/null differ
diff --git a/apps/cli/utilities/wksp0/architecture/images/packages.png b/apps/cli/utilities/wksp0/architecture/images/packages.png
deleted file mode 100644
index 62cb1c370a182209dbc53bbb7209c6a16c534337..0000000000000000000000000000000000000000
Binary files a/apps/cli/utilities/wksp0/architecture/images/packages.png and /dev/null differ
diff --git a/apps/cli/utilities/wksp0/architecture/images/prepare-execute-workflow-seq.png b/apps/cli/utilities/wksp0/architecture/images/prepare-execute-workflow-seq.png
deleted file mode 100644
index 68cf7c56090882c41f5f88ae6be8ccd46c10bf44..0000000000000000000000000000000000000000
Binary files a/apps/cli/utilities/wksp0/architecture/images/prepare-execute-workflow-seq.png and /dev/null differ
diff --git a/apps/cli/utilities/wksp0/architecture/images/prepare-execute-workflow.png b/apps/cli/utilities/wksp0/architecture/images/prepare-execute-workflow.png
deleted file mode 100644
index fb023f2ce9569c262622dada2d9717c8ddf0552e..0000000000000000000000000000000000000000
Binary files a/apps/cli/utilities/wksp0/architecture/images/prepare-execute-workflow.png and /dev/null differ
diff --git a/apps/cli/utilities/wksp0/architecture/images/pub-sub-messaging.png b/apps/cli/utilities/wksp0/architecture/images/pub-sub-messaging.png
deleted file mode 100644
index 56399426d66769ae38070d3d47fa0c4d4bc5c14e..0000000000000000000000000000000000000000
Binary files a/apps/cli/utilities/wksp0/architecture/images/pub-sub-messaging.png and /dev/null differ
diff --git a/apps/cli/utilities/wksp0/architecture/images/request-execution.png b/apps/cli/utilities/wksp0/architecture/images/request-execution.png
deleted file mode 100644
index 0ccf626d9a7dd6c13b0f4eed04452043000abb17..0000000000000000000000000000000000000000
Binary files a/apps/cli/utilities/wksp0/architecture/images/request-execution.png and /dev/null differ
diff --git a/apps/cli/utilities/wksp0/architecture/images/request-states.png b/apps/cli/utilities/wksp0/architecture/images/request-states.png
deleted file mode 100644
index e1fb6ca02a24aec8a6b0ada0255ae4759f52b49c..0000000000000000000000000000000000000000
Binary files a/apps/cli/utilities/wksp0/architecture/images/request-states.png and /dev/null differ
diff --git a/apps/cli/utilities/wksp0/architecture/images/request-submission.png b/apps/cli/utilities/wksp0/architecture/images/request-submission.png
deleted file mode 100644
index 735d008e3584d7b187612c8c902ceedae8289ec3..0000000000000000000000000000000000000000
Binary files a/apps/cli/utilities/wksp0/architecture/images/request-submission.png and /dev/null differ
diff --git a/apps/cli/utilities/wksp0/architecture/images/sequence-processing.png b/apps/cli/utilities/wksp0/architecture/images/sequence-processing.png
deleted file mode 100644
index 96f7d762948f128013b050eb825b20cd401d73d8..0000000000000000000000000000000000000000
Binary files a/apps/cli/utilities/wksp0/architecture/images/sequence-processing.png and /dev/null differ
diff --git a/apps/cli/utilities/wksp0/architecture/images/sequence-structure.png b/apps/cli/utilities/wksp0/architecture/images/sequence-structure.png
deleted file mode 100644
index 6003d100573bd83b6a5fe6e0dcd0bb81f25379ae..0000000000000000000000000000000000000000
Binary files a/apps/cli/utilities/wksp0/architecture/images/sequence-structure.png and /dev/null differ
diff --git a/apps/cli/utilities/wksp0/architecture/images/step-executions.png b/apps/cli/utilities/wksp0/architecture/images/step-executions.png
deleted file mode 100644
index d33ed93a5060c57f92ec42d76bb9c00e8345e826..0000000000000000000000000000000000000000
Binary files a/apps/cli/utilities/wksp0/architecture/images/step-executions.png and /dev/null differ
diff --git a/apps/cli/utilities/wksp0/architecture/images/ui-components.png b/apps/cli/utilities/wksp0/architecture/images/ui-components.png
deleted file mode 100644
index ff5ca7a27815350e6b9b3b8f27f5d6c011091f5e..0000000000000000000000000000000000000000
Binary files a/apps/cli/utilities/wksp0/architecture/images/ui-components.png and /dev/null differ
diff --git a/apps/cli/utilities/wksp0/architecture/images/workflow-execution-act.png b/apps/cli/utilities/wksp0/architecture/images/workflow-execution-act.png
deleted file mode 100644
index c704a151db6326f7eb1d6ff4fa5c9beccf8900be..0000000000000000000000000000000000000000
Binary files a/apps/cli/utilities/wksp0/architecture/images/workflow-execution-act.png and /dev/null differ
diff --git a/apps/cli/utilities/wksp0/architecture/images/workflow-execution.png b/apps/cli/utilities/wksp0/architecture/images/workflow-execution.png
deleted file mode 100644
index fde00b7424bf1b2d1fc500ca4415b45f5b9518a5..0000000000000000000000000000000000000000
Binary files a/apps/cli/utilities/wksp0/architecture/images/workflow-execution.png and /dev/null differ
diff --git a/apps/cli/utilities/wksp0/capabilities/grep-uniq/sequence.txt b/apps/cli/utilities/wksp0/capabilities/grep-uniq/sequence.txt
deleted file mode 100644
index 5493b6bd63dc55f2f3ecd5544604e30809744613..0000000000000000000000000000000000000000
--- a/apps/cli/utilities/wksp0/capabilities/grep-uniq/sequence.txt
+++ /dev/null
@@ -1,5 +0,0 @@
-AWAIT PRODUCTS
-AWAIT PARAMETER search-parameters
-PREPARE AND RUN WORKFLOW grep-uniq
-AWAIT PARAMETER qa-status
-PREPARE AND RUN WORKFLOW post-qa
diff --git a/apps/cli/utilities/wksp0/docs/Capabilities.als b/apps/cli/utilities/wksp0/docs/Capabilities.als
deleted file mode 100644
index e140d6a8fad2c0368bf2634ccff3d8bf1eca2a04..0000000000000000000000000000000000000000
--- a/apps/cli/utilities/wksp0/docs/Capabilities.als
+++ /dev/null
@@ -1,39 +0,0 @@
-sig User {}
-sig Step {}
-
-sig Capability {
-	submitter: one User,
-	assignee: lone User,
-	sequence: one Sequence,
-	engine: lone Engine
-}
-
-sig Sequence {
-	steps: some Step
-}
-
-sig Engine {
-	capability: lone Capability,
-	currentStep: one Step
-}
-
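-// The facts below rule out malformed instances: a capability's submitter is never
-// also its assignee, an engine belongs to the capability that holds it, an engine
-// only executes steps from its capability's sequence, and no two capabilities
-// share an engine.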
-fact NoSubmitterIsAssignee {
-	no c: Capability | c.submitter = c.assignee
-}
-
-fact EngineBelongsToCapability {
-	all c: Capability | c.engine.capability in c
-}
-
-fact EnginesOnlyExecuteCapabilitySteps {
-	all c: Capability | c.engine.currentStep in c.sequence.steps
-}
-
-fact EveryCapabilityHasAUniqueEngine {
-	no disj c, c': Capability | c.engine = c'.engine
-}
-
-pred show() {
-}
-
-run show for 3 but 3 Step, 3 Capability
diff --git a/apps/cli/utilities/wksp0/docs/FutureProductLocators.hs b/apps/cli/utilities/wksp0/docs/FutureProductLocators.hs
deleted file mode 100644
index e159ceafa24cd5379c9537b1c1e91e1cd6d675e2..0000000000000000000000000000000000000000
--- a/apps/cli/utilities/wksp0/docs/FutureProductLocators.hs
+++ /dev/null
@@ -1,33 +0,0 @@
-module FutureProductLocators where
-
-import Control.Category
-
-type ProductLocator = String
-type ProposalId = String
-type SessionNumber = Integer
-
-data ProductType = Execblock | Calibration | Image
-  deriving (Show, Eq)
-
-data FutureProductType = FutureExecblock ProposalId SessionNumber
-                       | FutureProduct ProductType Product
-                       deriving (Show, Eq)
-
-data Product = CurrentArchiveProduct ProductLocator
-             | FutureArchiveProduct FutureProductType
-             | Product `And` Product
-             deriving (Show, Eq)
-
-isReady (CurrentArchiveProduct _) = True
-isReady (FutureArchiveProduct _)  = False
-isReady (p1 `And` p2) = isReady p1 && isReady p2
-
-resolve p@(CurrentArchiveProduct _) = p
-resolve (FutureArchiveProduct fp)   = undefined -- look up the future product
-resolve (p1 `And` p2)               = resolve p1 `And` resolve p2
-
-data QaType = Always Bool
-            | Human
-            | InvokeScript String
-
-data Task = Executable String | Task `AndThen` Task | Noop
diff --git a/apps/cli/utilities/wksp0/docs/Makefile b/apps/cli/utilities/wksp0/docs/Makefile
deleted file mode 100644
index d4bb2cbb9eddb1bb1b4f366623044af8e4830919..0000000000000000000000000000000000000000
--- a/apps/cli/utilities/wksp0/docs/Makefile
+++ /dev/null
@@ -1,20 +0,0 @@
-# Minimal makefile for Sphinx documentation
-#
-
-# You can set these variables from the command line, and also
-# from the environment for the first two.
-SPHINXOPTS    ?=
-SPHINXBUILD   ?= sphinx-build
-SOURCEDIR     = .
-BUILDDIR      = _build
-
-# Put it first so that "make" without argument is like "make help".
-help:
-	@$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
-
-.PHONY: help Makefile
-
-# Catch-all target: route all unknown targets to Sphinx using the new
-# "make mode" option.  $(O) is meant as a shortcut for $(SPHINXOPTS).
-%: Makefile
-	@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
diff --git a/apps/cli/utilities/wksp0/docs/_build/.keep b/apps/cli/utilities/wksp0/docs/_build/.keep
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/apps/cli/utilities/wksp0/docs/_static/.keep b/apps/cli/utilities/wksp0/docs/_static/.keep
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/apps/cli/utilities/wksp0/docs/_templates/.keep b/apps/cli/utilities/wksp0/docs/_templates/.keep
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/apps/cli/utilities/wksp0/docs/api/modules.rst b/apps/cli/utilities/wksp0/docs/api/modules.rst
deleted file mode 100644
index 7db6d1cb4b5808f73e742a9dfe96a58f4bc8f89c..0000000000000000000000000000000000000000
--- a/apps/cli/utilities/wksp0/docs/api/modules.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-wksp
-====
-
-.. toctree::
-   :maxdepth: 4
-
-   wksp
diff --git a/apps/cli/utilities/wksp0/docs/api/wksp.rst b/apps/cli/utilities/wksp0/docs/api/wksp.rst
deleted file mode 100644
index 32025e8d3bdaedf272df210983b2b27d91e148e2..0000000000000000000000000000000000000000
--- a/apps/cli/utilities/wksp0/docs/api/wksp.rst
+++ /dev/null
@@ -1,54 +0,0 @@
-wksp package
-============
-
-Submodules
-----------
-
-wksp.capability module
-----------------------
-
-.. automodule:: wksp.capability
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-wksp.ifaces module
-------------------
-
-.. automodule:: wksp.ifaces
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-wksp.run\_capability module
----------------------------
-
-.. automodule:: wksp.run_capability
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-wksp.run\_workflow module
--------------------------
-
-.. automodule:: wksp.run_workflow
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-wksp.workflow module
---------------------
-
-.. automodule:: wksp.workflow
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-
-Module contents
----------------
-
-.. automodule:: wksp
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/apps/cli/utilities/wksp0/docs/conf.py b/apps/cli/utilities/wksp0/docs/conf.py
deleted file mode 100644
index 00d32af42cb4debf5f33d669ed5b34d349f8a571..0000000000000000000000000000000000000000
--- a/apps/cli/utilities/wksp0/docs/conf.py
+++ /dev/null
@@ -1,56 +0,0 @@
-# Configuration file for the Sphinx documentation builder.
-#
-# This file only contains a selection of the most common options. For a full
-# list see the documentation:
-# https://www.sphinx-doc.org/en/master/usage/configuration.html
-
-# -- Path setup --------------------------------------------------------------
-
-# If extensions (or modules to document with autodoc) are in another directory,
-# add these directories to sys.path here. If the directory is relative to the
-# documentation root, use os.path.abspath to make it absolute, like shown here.
-#
-# import os
-# import sys
-# sys.path.insert(0, os.path.abspath('.'))
-
-
-# -- Project information -----------------------------------------------------
-
-project = 'Workspace Prototype'
-copyright = '2019, Daniel K Lyons'
-author = 'Daniel K Lyons'
-
-# The full version, including alpha/beta/rc tags
-release = '1.0rc1'
-
-
-# -- General configuration ---------------------------------------------------
-
-# Add any Sphinx extension module names here, as strings. They can be
-# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
-# ones.
-extensions = ['sphinx.ext.autodoc',
-]
-
-# Add any paths that contain templates here, relative to this directory.
-templates_path = ['_templates']
-
-# List of patterns, relative to source directory, that match files and
-# directories to ignore when looking for source files.
-# This pattern also affects html_static_path and html_extra_path.
-exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']
-
-
-# -- Options for HTML output -------------------------------------------------
-
-# The theme to use for HTML and HTML Help pages.  See the documentation for
-# a list of builtin themes.
-#
-html_theme = "sphinx_rtd_theme"
-html_theme_path = ["_themes", ]
-
-# Add any paths that contain custom static files (such as style sheets) here,
-# relative to this directory. They are copied after the builtin static files,
-# so a file named "default.css" will overwrite the builtin "default.css".
-html_static_path = ['_static']
\ No newline at end of file
diff --git a/apps/cli/utilities/wksp0/docs/index.rst b/apps/cli/utilities/wksp0/docs/index.rst
deleted file mode 100644
index 8d5341b3d12e47016e5483a3eabdbeb65afc7e66..0000000000000000000000000000000000000000
--- a/apps/cli/utilities/wksp0/docs/index.rst
+++ /dev/null
@@ -1,20 +0,0 @@
-.. Workspace Prototype documentation master file, created by
-   sphinx-quickstart on Thu Nov 21 09:59:49 2019.
-   You can adapt this file completely to your liking, but it should at least
-   contain the root `toctree` directive.
-
-Welcome to Workspace Prototype's documentation!
-===============================================
-
-.. toctree::
-   :maxdepth: 2
-   :caption: Contents:
-
-
-
-Indices and tables
-==================
-
-* :ref:`genindex`
-* :ref:`modindex`
-* :ref:`search`
diff --git a/apps/cli/utilities/wksp0/docs/make.bat b/apps/cli/utilities/wksp0/docs/make.bat
deleted file mode 100644
index 2119f51099bf37e4fdb6071dce9f451ea44c62dd..0000000000000000000000000000000000000000
--- a/apps/cli/utilities/wksp0/docs/make.bat
+++ /dev/null
@@ -1,35 +0,0 @@
-@ECHO OFF
-
-pushd %~dp0
-
-REM Command file for Sphinx documentation
-
-if "%SPHINXBUILD%" == "" (
-	set SPHINXBUILD=sphinx-build
-)
-set SOURCEDIR=.
-set BUILDDIR=_build
-
-if "%1" == "" goto help
-
-%SPHINXBUILD% >NUL 2>NUL
-if errorlevel 9009 (
-	echo.
-	echo.The 'sphinx-build' command was not found. Make sure you have Sphinx
-	echo.installed, then set the SPHINXBUILD environment variable to point
-	echo.to the full path of the 'sphinx-build' executable. Alternatively you
-	echo.may add the Sphinx directory to PATH.
-	echo.
-	echo.If you don't have Sphinx installed, grab it from
-	echo.http://sphinx-doc.org/
-	exit /b 1
-)
-
-%SPHINXBUILD% -M %1 %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% %O%
-goto end
-
-:help
-%SPHINXBUILD% -M help %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% %O%
-
-:end
-popd
diff --git a/apps/cli/utilities/wksp0/experiments/cat-grep.dag b/apps/cli/utilities/wksp0/experiments/cat-grep.dag
deleted file mode 100644
index 9855da09854375145704fbd5cb999ef527d0dc70..0000000000000000000000000000000000000000
--- a/apps/cli/utilities/wksp0/experiments/cat-grep.dag
+++ /dev/null
@@ -1,3 +0,0 @@
-JOB cat cat.condor
-JOB grep grep.condor
-PARENT cat CHILD grep
diff --git a/apps/cli/utilities/wksp0/experiments/cat.condor b/apps/cli/utilities/wksp0/experiments/cat.condor
deleted file mode 100644
index 8612f84558f72476b5e6997c3c570a684755907d..0000000000000000000000000000000000000000
--- a/apps/cli/utilities/wksp0/experiments/cat.condor
+++ /dev/null
@@ -1,6 +0,0 @@
-executable = /bin/cp
-arguments = /home/casa/capo/nmtest.properties contents.txt
-log = condor.log
-
-queue
-
diff --git a/apps/cli/utilities/wksp0/experiments/grep-uniq.dag b/apps/cli/utilities/wksp0/experiments/grep-uniq.dag
deleted file mode 100644
index 9da28b3a6e38e044be021176e01f339bb5bfd1f4..0000000000000000000000000000000000000000
--- a/apps/cli/utilities/wksp0/experiments/grep-uniq.dag
+++ /dev/null
@@ -1,3 +0,0 @@
-JOB grep grep2.condor
-JOB uniq uniq.condor
-PARENT grep CHILD uniq
diff --git a/apps/cli/utilities/wksp0/experiments/grep.condor b/apps/cli/utilities/wksp0/experiments/grep.condor
deleted file mode 100644
index 06d1c3e9b908de63ad5700a81d1d4bd3f661204c..0000000000000000000000000000000000000000
--- a/apps/cli/utilities/wksp0/experiments/grep.condor
+++ /dev/null
@@ -1,8 +0,0 @@
-executable = /bin/grep
-arguments = "username contents.txt"
-output = usernames.txt
-error = grep.err
-transfer_input_files = contents.txt
-log = condor.log
-
-queue
diff --git a/apps/cli/utilities/wksp0/experiments/grep2.condor b/apps/cli/utilities/wksp0/experiments/grep2.condor
deleted file mode 100644
index bfe057861fad67a3f312570122384204a5685d02..0000000000000000000000000000000000000000
--- a/apps/cli/utilities/wksp0/experiments/grep2.condor
+++ /dev/null
@@ -1,9 +0,0 @@
-executable = /bin/grep
-arguments = "username nmtest.properties"
-should_transfer_files = YES
-transfer_input_files = /home/casa/capo/nmtest.properties
-output = raw-usernames.txt
-error = grep.err
-log = condor.log
-
-queue
diff --git a/apps/cli/utilities/wksp0/experiments/hello.condor b/apps/cli/utilities/wksp0/experiments/hello.condor
deleted file mode 100644
index f5db68694273ecd716c10c7baaf7ba4635a265cb..0000000000000000000000000000000000000000
--- a/apps/cli/utilities/wksp0/experiments/hello.condor
+++ /dev/null
@@ -1,3 +0,0 @@
-executable = hello.sh
-log = condor.log
-queue
diff --git a/apps/cli/utilities/wksp0/experiments/hello.sh b/apps/cli/utilities/wksp0/experiments/hello.sh
deleted file mode 100755
index fa26d35f2bc202f11b594ba845f4c049554ed1cc..0000000000000000000000000000000000000000
--- a/apps/cli/utilities/wksp0/experiments/hello.sh
+++ /dev/null
@@ -1,4 +0,0 @@
-#!/bin/sh
-
-echo "Hello, world!"
-exit 1
diff --git a/apps/cli/utilities/wksp0/experiments/map-reduce/grep-sort.dag b/apps/cli/utilities/wksp0/experiments/map-reduce/grep-sort.dag
deleted file mode 100644
index e336dbeb9e71265072d7bb2b35266852dc46cbde..0000000000000000000000000000000000000000
--- a/apps/cli/utilities/wksp0/experiments/map-reduce/grep-sort.dag
+++ /dev/null
@@ -1,9 +0,0 @@
-JOB username grep.condor
-VARS username search="username" file="/home/casa/capo/nmtest.properties"
-
-JOB password grep.condor
-VARS password search="password" file="/home/casa/capo/nmtest.properties"
-
-JOB sort sort.condor
-
-PARENT username password CHILD sort
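-# Fan-out/fan-in: both grep jobs are instances of the same submit description,
-# parameterized via VARS, and sort runs only after both of them complete.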
diff --git a/apps/cli/utilities/wksp0/experiments/map-reduce/grep.condor b/apps/cli/utilities/wksp0/experiments/map-reduce/grep.condor
deleted file mode 100644
index c4d19a508e97de315851091a247785eff8f838fa..0000000000000000000000000000000000000000
--- a/apps/cli/utilities/wksp0/experiments/map-reduce/grep.condor
+++ /dev/null
@@ -1,6 +0,0 @@
-executable = grep.sh
-arguments = "$(search) $(file) $(search).grep-out"
-should_transfer_files = YES
-transfer_input_files = $(file)
-log = condor.log
-queue
diff --git a/apps/cli/utilities/wksp0/experiments/map-reduce/grep.sh b/apps/cli/utilities/wksp0/experiments/map-reduce/grep.sh
deleted file mode 100755
index 955cf464eda761ef0c1157618153bf97e33732ef..0000000000000000000000000000000000000000
--- a/apps/cli/utilities/wksp0/experiments/map-reduce/grep.sh
+++ /dev/null
@@ -1,4 +0,0 @@
-#!/bin/sh
-
-grep "$1" "$2" > "$3"
-
diff --git a/apps/cli/utilities/wksp0/experiments/map-reduce/sort.condor b/apps/cli/utilities/wksp0/experiments/map-reduce/sort.condor
deleted file mode 100644
index 26c85a7bc4f61d0f966c27835063b002f074b665..0000000000000000000000000000000000000000
--- a/apps/cli/utilities/wksp0/experiments/map-reduce/sort.condor
+++ /dev/null
@@ -1,6 +0,0 @@
-executable = sort.sh
-arguments = "combined.txt"
-should_transfer_files = YES
-transfer_input_files = username.grep-out,password.grep-out
-log = condor.log
-queue
diff --git a/apps/cli/utilities/wksp0/experiments/map-reduce/sort.sh b/apps/cli/utilities/wksp0/experiments/map-reduce/sort.sh
deleted file mode 100755
index 78ff58db64b9a36b3a02822d0a3759466cf78664..0000000000000000000000000000000000000000
--- a/apps/cli/utilities/wksp0/experiments/map-reduce/sort.sh
+++ /dev/null
@@ -1,3 +0,0 @@
-#!/bin/sh
-
-cat *.grep-out | sort > "$1"
diff --git a/apps/cli/utilities/wksp0/experiments/uniq.condor b/apps/cli/utilities/wksp0/experiments/uniq.condor
deleted file mode 100644
index e29055839456b7d2c132e600f326f6738ab83c0a..0000000000000000000000000000000000000000
--- a/apps/cli/utilities/wksp0/experiments/uniq.condor
+++ /dev/null
@@ -1,7 +0,0 @@
-executable = uniq.sh
-arguments = raw-usernames.txt
-output = unique-usernames.txt
-error = uniq.err
-log = condor.log
-
-queue
diff --git a/apps/cli/utilities/wksp0/experiments/uniq.sh b/apps/cli/utilities/wksp0/experiments/uniq.sh
deleted file mode 100755
index 6d62e7f8f3e1e1a37db5ccad687eaf41236954af..0000000000000000000000000000000000000000
--- a/apps/cli/utilities/wksp0/experiments/uniq.sh
+++ /dev/null
@@ -1,3 +0,0 @@
-#!/bin/sh
-
-cut -f2 -d= "$1" | sed 's/[ \t]*\([^ \t]*\).*/\1/g' | sort | uniq
diff --git a/apps/cli/utilities/wksp0/setup.py b/apps/cli/utilities/wksp0/setup.py
deleted file mode 100644
index 4a033aa09b2f4b79b2165954d809f401da8f6a98..0000000000000000000000000000000000000000
--- a/apps/cli/utilities/wksp0/setup.py
+++ /dev/null
@@ -1,26 +0,0 @@
-#!/usr/bin/env python
-
-from setuptools import setup, find_packages
-
-setup(name='wksp0',
-      version='1.0rc1',
-      description='NRAO Archive Workspace Subsystem Prototype 0',
-      author='Daniel K Lyons',
-      author_email='dlyons@nrao.edu',
-      url='https://open-bitbucket.nrao.edu/projects/SSA/repos/wksp0/browse',
-      packages=find_packages(),
-      test_suite='tests',
-      install_requires=[
-            'injector >= 0.17',
-            'htcondor >= 8.9',
-            'pystache >= 0.5'
-      ],
-      extras_require={
-            'dev': ['sphinx >= 2.2', 'sphinx_rtd_theme']
-      },
-      entry_points={
-            'console_scripts': [
-                  'run = wksp.run_capability:main',
-                  'run_workflow = wksp.run_workflow:main'
-            ],
-      })
diff --git a/apps/cli/utilities/wksp0/tests/__init__.py b/apps/cli/utilities/wksp0/tests/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/apps/cli/utilities/wksp0/tests/architectural/__init__.py b/apps/cli/utilities/wksp0/tests/architectural/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/apps/cli/utilities/wksp0/tests/architectural/test_ifaces.py b/apps/cli/utilities/wksp0/tests/architectural/test_ifaces.py
deleted file mode 100644
index 198b4e9bfe4c182b3ad8e08eba4e72c1281635df..0000000000000000000000000000000000000000
--- a/apps/cli/utilities/wksp0/tests/architectural/test_ifaces.py
+++ /dev/null
@@ -1,27 +0,0 @@
-import unittest
-import inspect
-from typing import List
-
-import wksp.ifaces
-
-
-def interface_methods(klass) -> List[str]:
-    """
-    Return the names of methods on the supplied class
-    :param klass:   the class to get methods from
-    :return:        the names of the methods on klass
-    """
-    return [x for (x, _) in inspect.getmembers(klass, predicate=inspect.isfunction)]
-
-
-class TestInterfaces(unittest.TestCase):
-    """
-    Ensure that the interfaces have exactly the methods we think they have, and no more or less.
-    """
-    def test_capability_info(self):
-        methods = interface_methods(wksp.ifaces.CapabilityInfo)
-        self.assertEqual(len(methods), 2, "CapabilityInfo should have two methods")
-        self.assertIn("lookup_capability", methods, "CapabilityInfo should have a method lookup_capability")
-        self.assertIn("lookup_capability_request", methods,
-                      "CapabilityInfo should have a method lookup_capability_request")
-
diff --git a/apps/cli/utilities/wksp0/tests/interface/__init__.py b/apps/cli/utilities/wksp0/tests/interface/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/apps/cli/utilities/wksp0/tests/interface/workflow.py b/apps/cli/utilities/wksp0/tests/interface/workflow.py
deleted file mode 100644
index 4c6fd71a4749f95566461d1839d2a1a6c2acda2f..0000000000000000000000000000000000000000
--- a/apps/cli/utilities/wksp0/tests/interface/workflow.py
+++ /dev/null
@@ -1,57 +0,0 @@
-import pathlib
-import tempfile
-import unittest
-from unittest.mock import patch, MagicMock
-
-from injector import Injector
-
-import wksp.workflow as wf
-
-
-class TestWorkflowService(unittest.TestCase):
-    """
-    Exercise the workflow service end to end, with the condor_submit_dag invocation faked out.
-    """
-    @patch('wksp.workflow.subprocess')
-    def test_workflow_service(self, subprocess):
-        """
-        Tests the hardcoded grep-uniq workflow by faking out the condor_submit_dag execution.
-
-        :param subprocess:  a fake subprocess module
-        :return:  nothing; assertions raise on failure
-        """
-        # for the purposes of this test, we are using
-        # the hardcoded workflow info and the DAGman workflow service
-        def configure(binder):
-            binder.bind(wf.WorkflowInfo,    to=wf.DirectoryWorkflowInfo(pathlib.Path('workflows')))
-            binder.bind(wf.WorkflowService, to=wf.HTCondorWorkflowService)
-
-        # set up the injector
-        injector = Injector(configure)
-
-        # get the service
-        workflow_service = injector.get(wf.WorkflowService)
-
-        # The plan here is to fake out the subprocess.run call and the mkdtemp call so that we
-        # know a-priori what the temp directory produced is going to be. Then we can check that
-        # the arguments to the condor_submit_dag call match exactly what we expect.
-        #
-        # We could go further here and inspect the files in the temp directory as well.
-
-        # make a real temp directory
-        with tempfile.TemporaryDirectory() as temp_dir:
-            with patch('wksp.workflow.mkdtemp', new=MagicMock(return_value=temp_dir)):
-
-                # run a workflow
-                workflow_name = 'grep-uniq'
-                workflow_service.execute(workflow_name,
-                                         {'search': 'username'},
-                                         [pathlib.Path('/home/casa/capo/nmtest.properties')])
-
-                # did subprocess.run get executed or not?
-                subprocess.run.assert_called_with(['condor_submit_dag',
-                                                   f'{temp_dir}/{workflow_name}.dag'],
-                                                  cwd=temp_dir)
-
-                # here's a moment to inspect the files in /tmp, if we choose to
-                pass
diff --git a/apps/cli/utilities/wksp0/tests/unit/__init__.py b/apps/cli/utilities/wksp0/tests/unit/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/apps/cli/utilities/wksp0/tests/unit/capability.py b/apps/cli/utilities/wksp0/tests/unit/capability.py
deleted file mode 100644
index 7b751b9e1025ab7ad67d22cdd853fe6c126e4bec..0000000000000000000000000000000000000000
--- a/apps/cli/utilities/wksp0/tests/unit/capability.py
+++ /dev/null
@@ -1,17 +0,0 @@
-import unittest
-from wksp.capability import *
-
-
-class CapabilityParsingTests(unittest.TestCase):
-    def test_parsing(self):
-        capability = DirectoryCapability(pathlib.Path('../../capabilities/grep-uniq'))
-        self.assertEqual(5, len(capability.sequence))
-
-
-class ConsoleParameterReaderTests(unittest.TestCase):
-    def test_reading_parameters(self):
-        inst = ConsoleParameterReader.obtain_parameter(QaStatus)
-        print(inst)
-        inst = ConsoleParameterReader.obtain_parameter(SearchParameters)
-        print(inst)
-
diff --git a/apps/cli/utilities/wksp0/wksp/__init__.py b/apps/cli/utilities/wksp0/wksp/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/apps/cli/utilities/wksp0/wksp/capability.py b/apps/cli/utilities/wksp0/wksp/capability.py
deleted file mode 100644
index 323bd4d5c468a2072a367f887dfb86e4515a236e..0000000000000000000000000000000000000000
--- a/apps/cli/utilities/wksp0/wksp/capability.py
+++ /dev/null
@@ -1,262 +0,0 @@
-from queue import Queue
-from threading import Semaphore
-
-from injector import ClassAssistedBuilder, inject
-
-from wksp.ifaces import *
-import pathlib
-
-
-class SearchParameters(Parameter):
-    search: str
-
-    @staticmethod
-    def fields() -> Dict[FieldName, FieldLabel]:
-        return {'search': 'Search'}
-
-    def json(self):
-        return {'search': self.search}
-
-    def load(self, json):
-        self.search = json['search']
-
-    def __repr__(self):
-        return f"<SearchParameters search='{self.search}'>"
-
-
-class QaStatus(Parameter):
-    status: bool
-
-    @staticmethod
-    def fields() -> Dict[FieldName, FieldLabel]:
-        return {'qa-pass': 'Passes QA'}
-
-    def json(self):
-        return {'qa-pass': self.status}
-
-    def load(self, json):
-        self.status = json['qa-pass'].strip().lower() in ['yes', 'y', 'true']
-
-    def __repr__(self):
-        return f"<QaStatus {'pass' if self.status else 'fail'}>"
-
-
-ParameterRegistry = {'search-parameters': SearchParameters,
-                     'qa-status': QaStatus}
-
-
-class ConsoleParameterReader:
-    @staticmethod
-    def obtain_parameter(parameter_type: Type[Parameter]) -> Parameter:
-        json = {}
-
-        for field, label in parameter_type.fields().items():
-            json[field] = input(label + '?> ')
-
-        result = parameter_type()
-        result.load(json)
-        return result
-
-
-class DirectoryCapability(Capability):
-    """
-    Implements a capability by reading files off the filesystem (rather than from a database or whatnot).
-    """
-    max_jobs: int
-    sequence: List[CapabilityStep]
-
-    def create_request(self, locators: List[ProductLocator]):
-        return CapabilityRequest(capability=self, locators=locators, files=[], id=None, parameters=[])
-
-    def __init__(self, path: pathlib.Path):
-        self.path = path
-        self.name = path.name
-        self.sequence = self.parse(self.path / 'sequence.txt')
-        self.max_jobs = 2
-
-    def __hash__(self):
-        # a dict is not hashable; hash a tuple of the identifying fields instead
-        return hash(('DirectoryCapability', self.path))
-
-    @staticmethod
-    def parse(path: pathlib.Path):
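-        """
-        Translate a capability's sequence.txt into a list of CapabilityStep objects.
-
-        For example, the grep-uniq capability's sequence.txt::
-
-            AWAIT PRODUCTS
-            AWAIT PARAMETER search-parameters
-            PREPARE AND RUN WORKFLOW grep-uniq
-            AWAIT PARAMETER qa-status
-            PREPARE AND RUN WORKFLOW post-qa
-
-        parses into [AwaitProduct(), AwaitParameter(SearchParameters),
-        PrepareAndRunWorkflow('grep-uniq'), AwaitParameter(QaStatus),
-        PrepareAndRunWorkflow('post-qa')].
-        """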
-        sequence = []
-        with path.open('r') as f:
-            for line in f:
-                if line.startswith('AWAIT PRODUCTS'):
-                    sequence.append(AwaitProduct())
-                elif line.startswith('AWAIT PRODUCT '):
-                    sequence.append(AwaitProduct(line.split('AWAIT PRODUCT ')[1].strip()))
-                elif line.startswith('AWAIT PARAMETER '):
-                    sequence.append(AwaitParameter(ParameterRegistry[line.split('AWAIT PARAMETER ')[1].strip()]))
-                elif line.startswith('PREPARE AND RUN WORKFLOW '):
-                    sequence.append(PrepareAndRunWorkflow(line.split('PREPARE AND RUN WORKFLOW ')[1].strip()))
-
-        return sequence
-
-
-class DirectoryCapabilityInfo(CapabilityInfo):
-    """
-    Finds information about capabilities on the filesystem. Stores requests in memory (in a list).
-    """
-    def __init__(self, path: pathlib.Path):
-        self.path = path
-        self.requests = []
-        self.n_requests = 0
-
-    def lookup_capability(self, capability_name: str) -> Capability:
-        return DirectoryCapability(self.path / capability_name)
-
-    def lookup_capability_request(self, capability_request_id: int) -> CapabilityRequest:
-        # request IDs are 1-based (see save_request); the list is 0-indexed
-        return self.requests[capability_request_id - 1]
-
-    def save_request(self, request: CapabilityRequest) -> int:
-        # 1. Record this request in our list of requests
-        self.requests.append(request)
-        self.n_requests += 1
-
-        # 2. Record the ID on the request itself
-        request.id = self.n_requests
-
-        # return it
-        return request.id
-
-
-class PrototypeCapabilityQueue(CapabilityQueue):
-    """
-    Implements the CapabilityQueue API, backed by a simple thread-safe queue.
-    """
-    items: Queue
-
-    @inject
-    def __init__(self, capability: Capability, runner_builder: ClassAssistedBuilder["PrototypeQueueRunner"]):
-        self.items = Queue()
-        self.runner = runner_builder.build(queue=self.items, max_jobs=capability.max_jobs)
-        self.runner.start()
-
-    def enqueue(self, request: CapabilityRequest):
-        # 1. place this request into some kind of queue
-        self.items.put(request)
-
-
-class PrototypeCapabilityEngine(CapabilityEngine):
-    request: CapabilityRequest
-    responder: CapabilityEngineResponder
-
-    @inject
-    def __init__(self, request: CapabilityRequest, responder: CapabilityEngineResponder):
-        self.request = request
-        self.responder = responder
-
-    def execute(self, request):
-        for step in request.capability.sequence:
-            self._execute_step(step)
-
-    def _execute_step(self, step: CapabilityStep):
-        step.execute_against(self.request, self.responder)
-
-
-class PrototypeCapabilityEngineResponder(CapabilityEngineResponder):
-    workflow_service: WorkflowService
-    product_service: ProductService
-
-    @inject
-    def __init__(self, workflow_service: WorkflowService, product_service: ProductService):
-        self.workflow_service = workflow_service
-        self.product_service = product_service
-
-        self.console = ConsoleParameterReader()
-
-    def prepare_and_run_workflow(self, step: CapabilityStep, name: str, param: Parameter, files: List[Path]):
-        # in here I need to find the WorkflowService
-        return self.workflow_service.execute(name, param.json(), files)
-
-    def await_product(self, step: CapabilityStep, product_locator: ProductLocator):
-        return self.product_service.locate_product(product_locator)
-
-    def await_parameter(self, step: CapabilityStep, parameter_type: Type[Parameter]) -> Parameter:
-        return self.console.obtain_parameter(parameter_type)
-
-
-class PrototypeQueueRunner(QueueRunner, Thread):
-    engines: Dict[CapabilityRequest, CapabilityEngine]
-
-    @inject
-    def __init__(self, queue: Queue, max_jobs: int, engine_builder: ClassAssistedBuilder[PrototypeCapabilityEngine]):
-        super().__init__()
-        self.queue = queue
-        self.semaphore = Semaphore(max_jobs)
-        self.engines = {}
-        self.engine_builder = engine_builder
-
-    def run(self) -> None:
-        while True:
-            # obtain the semaphore
-            self.semaphore.acquire()
-
-            # now get a job from the queue
-            request = self.queue.get()
-
-            # now build an engine and start executing that request
-            self.engines[request.id] = self.engine_builder.build(request=request)
-
-            # execute this capability's sequence of steps
-            self.engines[request.id].execute(request)
-
-            # release the semaphore
-            self.semaphore.release()
-
-    def complete(self, request):
-        """
-        Sent by the engine when it is done executing a capability
-        :return:
-        """
-        del self.engines[request.id]
-        self.semaphore.release()
-
-
-class PrototypeCapabilityService(CapabilityService):
-    queues: Dict[CapabilityName, CapabilityQueue]
-
-    @inject
-    def __init__(self, info: CapabilityInfo, queue_builder: ClassAssistedBuilder[PrototypeCapabilityQueue]):
-        self.queues = {}
-        self.info = info
-        self.queue_builder = queue_builder
-
-    def send_request(self, name: CapabilityName, locators: List[ProductLocator]) -> CapabilityRequest:
-        # 1. Locate the capability
-        capability = self.info.lookup_capability(name)
-
-        # 2. Create a request
-        request = capability.create_request(locators)
-
-        # 3. Persist the request
-        self.info.save_request(request)
-
-        # 4. Return it
-        return request
-
-    def _locate_queue(self, request: CapabilityRequest) -> CapabilityQueue:
-        # 1. Create a queue for this capability, if we don't have one currently
-        if request.capability.name not in self.queues:
-            self.queues[request.capability.name] = self.queue_builder.build(capability=request.capability)
-
-        # 2. Return the queue for this capability
-        return self.queues[request.capability.name]
-
-    def execute(self, request: CapabilityRequest) -> None:
-        # 1. Locate the proper queue for this request
-        queue = self._locate_queue(request)
-
-        # 2. Submit the request to that queue
-        queue.enqueue(request)
-
-
-class HardcodedProductService(ProductService):
-    def __init__(self):
-        self.products = {'nmtest-capo': pathlib.Path('/home/casa/capo/nmtest.properties'),
-                         'readme': pathlib.Path('README.md')}
-
-    def locate_product(self, product_locator: ProductLocator) -> Path:
-        return self.products[product_locator]
-
diff --git a/apps/cli/utilities/wksp0/wksp/delivery.py b/apps/cli/utilities/wksp0/wksp/delivery.py
deleted file mode 100644
index 6ad1e7a6164e154b5b30f324171106bae38d1801..0000000000000000000000000000000000000000
--- a/apps/cli/utilities/wksp0/wksp/delivery.py
+++ /dev/null
@@ -1,504 +0,0 @@
-import abc
-import argparse
-import pathlib
-
-
-# -------------------------------------------------------------------------
-#
-#        D E S T I N A T I O N   S Y S T E M
-#
-# -------------------------------------------------------------------------
-import secrets
-import sys
-from typing import Iterator, List
-
-
-class Destination(abc.ABC):
-    """
-    Destinations are locations that files can be copied into. They might not
-    always be on a local disk; FTP or Globus could also be destinations.
-
-    The destination API is very simple, consisting of just adding files.
-    """
-    @abc.abstractmethod
-    def add_file(self, file: pathlib.Path, relative_path: str):
-        pass
-
-    @abc.abstractmethod
-    def close(self):
-        pass
-
-    def __enter__(self):
-        return self
-
-    def __exit__(self, exc_type, exc_val, exc_tb):
-        # ensure that if we are used as a context manager ('with' statement)
-        # the destination does get properly closed
-        self.close()
-
-
-class DestinationDecorator(Destination):
-    def __init__(self, underlying: Destination):
-        self.underlying = underlying
-
-    def add_file(self, file: pathlib.Path, relative_path: str):
-        self.underlying.add_file(file, relative_path)
-
-    def close(self):
-        self.underlying.close()
-
-
-class TarDecorator(DestinationDecorator):
-    """
-    This decorator creates a local tar archive. Calls to add_file
-    are intercepted and instead the file contents are added to the
-    tar archive. When close() is called, we finalize the tar file
-    and place it in the delivery area.
-    """
-    pass
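-    # A minimal sketch of the behavior described in the docstring above; the
-    # temporary-directory handling and the 'delivery.tar' archive name are
-    # illustrative assumptions, not settled design.
-    def __init__(self, underlying: Destination):
-        import tarfile    # sketch-local imports
-        import tempfile
-        super().__init__(underlying)
-        self._tar_path = pathlib.Path(tempfile.mkdtemp()) / 'delivery.tar'
-        self._tar = tarfile.open(self._tar_path, mode='w')
-
-    def add_file(self, file: pathlib.Path, relative_path: str):
-        # divert the file into the archive instead of the underlying destination
-        self._tar.add(str(file), arcname=relative_path)
-
-    def close(self):
-        # finalize the archive, then hand it to the underlying destination
-        self._tar.close()
-        self.underlying.add_file(self._tar_path, self._tar_path.name)
-        self.underlying.close()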
-
-
-class ChecksumDecorator(DestinationDecorator):
-    """
-    This decorator ensures that an MD5SUM file appears in the underlying
-    destination after all the files are added, and the contents of that
-    file are, as one would expect, the MD5SUMs of the files added to the
-    destination, in the format that ``md5sum -c`` expects.
-    """
-    pass
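-    # A minimal sketch of the behavior described in the docstring above; building
-    # the manifest in a temporary directory is an assumption for illustration.
-    def __init__(self, underlying: Destination):
-        super().__init__(underlying)
-        self._sums = []  # (hex digest, relative path) pairs, in md5sum -c order
-
-    def add_file(self, file: pathlib.Path, relative_path: str):
-        import hashlib  # sketch-local import
-        self._sums.append((hashlib.md5(file.read_bytes()).hexdigest(), relative_path))
-        self.underlying.add_file(file, relative_path)
-
-    def close(self):
-        import tempfile  # sketch-local import
-        manifest = pathlib.Path(tempfile.mkdtemp()) / 'MD5SUM'
-        manifest.write_text(''.join(f'{digest}  {name}\n' for digest, name in self._sums))
-        self.underlying.add_file(manifest, 'MD5SUM')
-        self.underlying.close()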
-
-
-class SubdirectoryDecorator(DestinationDecorator):
-    def __init__(self, underlying: Destination, subdirectory: str):
-        super().__init__(underlying)
-        self.subdirectory = subdirectory
-
-    def add_file(self, file: pathlib.Path, relative_path: str):
-        self.underlying.add_file(file, self.subdirectory + "/" + relative_path)
-
-
-class LocalDestination(Destination):
-    """
-    LocalDestination is for delivering to a local directory on the filesystem.
-    """
-    def __init__(self, path: pathlib.Path):
-        self.path = path
-
-    def add_file(self, file: pathlib.Path, relative_path: str):
-        # minimal sketch (assumed behavior): copy the file to self.path / relative_path
-        target = self.path / relative_path
-        target.parent.mkdir(parents=True, exist_ok=True)
-        target.write_bytes(file.read_bytes())
-
-    def close(self):
-        """
-        Nothing special actually needs to be done for local deliveries
-        when we close the destination.
-        """
-        pass
-
-
-class DestinationBuilder:
-    """
-    To facilitate building a stack of destination and its decorators.
-    """
-    def __init__(self):
-        self._destination = None
-
-    def local(self, path: pathlib.Path):
-        """Add a local destination with the given path"""
-        self._destination = LocalDestination(path)
-        return self
-
-    def tar(self):
-        """Add the tar decorator to the destination"""
-        self._destination = TarDecorator(self._destination)
-        return self
-
-    def build(self):
-        """Create the destination"""
-        return self._destination
-
-# -------------------------------------------------------------------------
-#
-#        P R O D U C T   D E L I V E R Y   S Y S T E M
-#
-# -------------------------------------------------------------------------
-class DeliveryContext:
-    """
-    The delivery context provides access to some environmental functions that
-    are not really the responsibility of any particular component, but which
-    many components need to share information about, such as:
-
-    - Creating and removing temporary files
-    - Creating and retaining tokens
-    """
-    def __init__(self):
-        self._token = None
-
-    def create_tempfile(self, prefix: str, suffix: str) -> pathlib.Path:
-        """
-        Create a temporary file, using the given prefix and suffix.
-
-        :param prefix:  prefix for the tempfile name
-        :param suffix:  suffix for the tempfile name
-        :return:        the path to the temporary file
-        """
-        raise NotImplementedError
-
-    @property
-    def token(self) -> str:
-        """
-        If a delivery only requires one token, just use this property
-        to get it. It will be created once and remain the same throughout
-        the lifetime of this object.
-
-        :return: the current token
-        """
-        if self._token is None:
-            self._token = self.generate_token()
-        return self._token
-
-    def generate_token(self) -> str:
-        """
-        Generates a random token suitable for use in paths and URLs.
-        :return: a random token
-        """
-        return secrets.token_hex(16)
-
-    def __exit__(self, exc_type, exc_val, exc_tb):
-        # possible: remove all the generated tempfiles here
-        pass
-
-
-class SpooledProduct(abc.ABC):
-    """
-    A SpooledProduct is something interesting enough to deliver to an end-user.
-    SpooledProducts might be science products or auxiliary products, but they
-    might just be some temporary thing that came out of some processing, never
-    to be seen again.
-
-    SpooledProducts have a type and a path. More specific SpooledProducts may
-    have other specific metadata unto themselves.
-    """
-    @abc.abstractmethod
-    def deliver_to(self, deliverer: "ProductDeliverer"):
-        """
-        Deliver sets up a double dispatch so we can get the product type into the
-        method itself, and behave differently depending on the type of product
-        being delivered.
-
-        :param deliverer: the deliverer to deliver to
-        """
-        pass
-
-
-# Basic delivery types that others are derived from (are these needed?)
-
-class DirectoryProduct(SpooledProduct):
-    def deliver_to(self, deliverer: "ProductDeliverer"):
-        deliverer.deliver_directory(self)
-
-
-class FileProduct(SpooledProduct):
-    def deliver_to(self, deliverer: "ProductDeliverer"):
-        deliverer.deliver_file(self)
-
-
-class ExecutionBlock(DirectoryProduct):
-    def __init__(self, date):
-        self.date = date
-
-    @property
-    def pipeline_spec(self):
-        "For execution blocks, $PIPELINE_SPEC is observation.$DATE"
-        return f'observation.{self.date}'
-
-    def deliver_to(self, deliverer: "ProductDeliverer"):
-        deliverer.deliver_execution_block(self)
-
-
-# These types are derived from what appears in the piperesults file;
-# there may be duplicates
-
-class ASDM(DirectoryProduct):
-    def deliver_to(self, deliverer: "ProductDeliverer"):
-        deliverer.deliver_asdm(self)
-
-
-class PipeRequest(FileProduct):
-    def deliver_to(self, deliverer: "ProductDeliverer"):
-        deliverer.deliver_piperequest(self)
-
-
-class CalibrationTables(FileProduct):
-    def __init__(self, date: str):
-        self.date = date
-
-    @property
-    def pipeline_spec(self):
-        "For calibrations, $PIPELINE_SPEC is calibration_pipeline.$DATE"
-        return f'calibration_pipeline.{self.date}'
-
-    def deliver_to(self, deliverer: "ProductDeliverer"):
-        deliverer.deliver_calibration_tables(self)
-
-
-class Flags(FileProduct):
-    def deliver_to(self, deliverer: "ProductDeliverer"):
-        deliverer.deliver_flags(self)
-
-
-class ApplyCommands(FileProduct):
-    def deliver_to(self, deliverer: "ProductDeliverer"):
-        deliverer.deliver_apply_commands(self)
-
-
-class Weblog(FileProduct):
-    def deliver_to(self, deliverer: "ProductDeliverer"):
-        deliverer.deliver_weblog(self)
-
-
-class CasaCommandLog(FileProduct):
-    def deliver_to(self, deliverer: "ProductDeliverer"):
-        deliverer.deliver_casa_command_log(self)
-
-
-class RestoreScript(FileProduct):
-    def deliver_to(self, deliverer: "ProductDeliverer"):
-        deliverer.deliver_restore_script(self)
-
-
-class Image(FileProduct):
-    def __init__(self, date):
-        self.date = date
-
-    @property
-    def pipeline_spec(self):
-        "For images, $PIPELINE_SPEC is imaging_pipeline.$DATE"
-        return f'imaging_pipeline.{self.date}'
-
-    def deliver_to(self, deliverer: "ProductDeliverer"):
-        deliverer.deliver_image(self)
-
-
-class OUS(DirectoryProduct):
-    """
-    OUSes contain many sub-products.
-    """
-
-    def __init__(self, id):
-        self.id = id
-        self._products = []
-
-    def add_product(self, product):
-        """
-        Add a product to this OUS.
-
-        :param product:  the product to add
-        """
-        self._products.append(product)
-
-    @property
-    def products(self):
-        return self._products
-
-    def deliver_to(self, deliverer: "ProductDeliverer"):
-        deliverer.deliver_ous(self)
-
-
-class ProductDeliverer:
-    def __init__(self, destination: Destination):
-        self.destination = destination
-
-    def deliver_product(self, product):
-        """
-        Primary user interface for this class: provide a product to be delivered.
-
-        Under the hood, this engages the double dispatch mechanism to
-        effect typed delivery.
-
-        :param product:  the product to deliver
-        """
-        product.deliver_to(self)
-
-    def deliver_asdm(self, asdm: ASDM):
-        """
-        Deliver an ASDM.
-
-        :param asdm: the ASDM to deliver
-        """
-        # FIXME: example implementation; refine
-        print(f'Delivering ASDM {asdm}')
-        self.deliver_directory(asdm)
-
-    def deliver_ous(self, ous: OUS):
-        """
-        Deliver an OUS, which has products inside it.
-
-        :param ous:  the OUS to deliver
-        """
-        # FIXME: example implementation; refine
-        # The trick here is to basically "cd" into the OUS directory and then
-        # proceed as normal. The subdirectory decorator will ensure that the
-        # products have the same structure, but inside the OUS directory
-        # instead of the level above.
-        ous_deliverer = ProductDeliverer(SubdirectoryDecorator(self.destination, ous.id))
-        for product in ous.products:
-            product.deliver_to(ous_deliverer)
-
-    def deliver_directory(self, dir_product: DirectoryProduct):
-        raise NotImplementedError
-
-    def deliver_file(self, file_product: FileProduct):
-        raise NotImplementedError
-
-    def deliver_piperequest(self, ppr: PipeRequest):
-        """
-        An example of how to deliver something specific, which is really just
-        a file under the hood.
-
-        :param ppr:  the PipeRequest (PPR) to deliver
-        """
-        print('Delivering PPR')
-        self.deliver_file(ppr)
-
-    def deliver_image(self, img: Image):
-        # similar to deliver_piperequest
-        raise NotImplementedError
-
-    def deliver_restore_script(self, restore_script: RestoreScript):
-        # similar to deliver_piperequest
-        raise NotImplementedError
-
-    def deliver_casa_command_log(self, log: CasaCommandLog):
-        # similar to deliver_piperequest
-        raise NotImplementedError
-
-    def deliver_calibration_tables(self, caltables: CalibrationTables):
-        # similar to deliver_piperequest
-        raise NotImplementedError
-
-    def deliver_flags(self, flags: Flags):
-        # similar to deliver_piperequest
-        raise NotImplementedError
-
-    def deliver_apply_commands(self, cmds: ApplyCommands):
-        # similar to deliver_piperequest
-        raise NotImplementedError
-
-    def deliver_weblog(self, weblog: Weblog):
-        # similar to deliver_piperequest
-        raise NotImplementedError
-
-
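-# Illustrative sketch of the double dispatch above: handing a Weblog to
-# deliver_product routes it to deliver_weblog without any isinstance checks.
-# A concrete deliverer (one that overrides the stubs) is assumed here.
-def _example_double_dispatch(deliverer: ProductDeliverer) -> None:
-    weblog = Weblog()                   # a FileProduct subtype
-    deliverer.deliver_product(weblog)   # Weblog.deliver_to -> deliver_weblog
-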
-# -------------------------------------------------------------------------
-#
-#        P R O D U C T   F I N D I N G
-#
-# -------------------------------------------------------------------------
-
-class ProductFinder(abc.ABC):
-    @abc.abstractmethod
-    def find_products(self) -> Iterator[SpooledProduct]:
-        pass
-
-
-class PiperesultsProductFinder(ProductFinder):
-    def __init__(self, path: pathlib.Path):
-        self.path = path
-
-    def find_products(self) -> Iterator[SpooledProduct]:
-        raise NotImplementedError
-
-
-class HeuristicProductFinder(ProductFinder):
-    def __init__(self, path: pathlib.Path):
-        self.path = path
-
-    def find_products(self) -> Iterator[SpooledProduct]:
-        raise NotImplementedError
-
-
-# -------------------------------------------------------------------------
-#
-#        C O M M A N D   L I N E   A R G U M E N T S
-#
-# -------------------------------------------------------------------------
-
-class DeliverySettings:
-    def __init__(self, source: pathlib.Path, tar=False, local_destination=None):
-        self.source = source
-        self.tar = tar
-        self.local_destination = local_destination
-
-    def create_destination(self) -> Destination:
-        builder = DestinationBuilder()
-
-        # first handle the local destination argument
-        if self.local_destination:
-            builder.local(self.local_destination)
-        else:
-            builder.local(pathlib.Path("/lustre/aoc/whatever"))
-
-        # then handle the tar argument
-        if self.tar:
-            builder.tar()
-
-        return builder.build()
-
-    @classmethod
-    def parse_commandline(cls, args=None) -> "DeliverySettings":
-        parser = argparse.ArgumentParser()
-        parser.add_argument('-l', '--local-destination', type=pathlib.Path, default=None,
-                            help="Deliver to this local directory instead of the appropriate web root")
-        parser.add_argument('-t', '--tar', action='store_true', default=False, help='Archive the delivered items as a tar file')
-        parser.add_argument('source', type=pathlib.Path, metavar="SOURCE_DIRECTORY",
-                            help="The directory where the products to be delivered are located")
-        ns = parser.parse_args(args)
-        return DeliverySettings(source=ns.source, tar=ns.tar, local_destination=ns.local_destination)
-
-
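-# Illustrative sketch: a hypothetical command line parsed into settings and
-# turned into a destination (here, the tar decorator over a local destination).
-# Both paths are made-up examples.
-def _example_destination_from_args() -> Destination:
-    settings = DeliverySettings.parse_commandline(
-        ['--tar', '--local-destination', '/tmp/example-delivery', '/tmp/example-source'])
-    return settings.create_destination()
-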
-# -------------------------------------------------------------------------
-#
-#        D E L I V E R Y
-#
-# -------------------------------------------------------------------------
-
-class Delivery:
-    def __init__(self):
-        self.context = DeliveryContext()
-
-    def has_piperesults(self, path: pathlib.Path) -> bool:
-        raise NotImplementedError
-
-    def create_product_finder(self, source: pathlib.Path) -> ProductFinder:
-        """
-        Based on the contents of the source/ folder, make the right flavor
-        of product finder
-        :param source: directory to examine
-        :return: a product finder
-        """
-        # if there is a piperesults file, use that finder
-        if self.has_piperesults(source):
-            return PiperesultsProductFinder(source)
-        else:
-            return HeuristicProductFinder(source)
-
-    def deliver(self, settings: DeliverySettings):
-        # make the destination
-        destination = settings.create_destination()
-
-        # find the products
-        finder = self.create_product_finder(settings.source)
-
-        # the ensuing probably needs some kind of reference to the options,
-        # so we know how to create the destination
-
-        # make the delivery system
-        deliverer = ProductDeliverer(destination)
-
-        # loop over the products and deliver them
-        for product in finder.find_products():
-            deliverer.deliver_product(product)
-
-
-def main():
-    """CLI entry point"""
-    settings = DeliverySettings.parse_commandline()
-    Delivery().deliver(settings)
-
-
-if __name__ == '__main__':
-    main()
diff --git a/apps/cli/utilities/wksp0/wksp/ifaces.py b/apps/cli/utilities/wksp0/wksp/ifaces.py
deleted file mode 100644
index 01fa90f8365fd9de41ca1234a19b3b84dc5d1cf7..0000000000000000000000000000000000000000
--- a/apps/cli/utilities/wksp0/wksp/ifaces.py
+++ /dev/null
@@ -1,345 +0,0 @@
-"""
-Interfaces for the Workspace system live in this module.
-
-Python doesn't really have interfaces, but it does have ABCs: abstract base classes.
-The idea here is, pending input from SSA-5944, to at least document the interfaces as
-I understand them, and hopefully this will be enough of a "model" that IntelliJ will
-let me know if I misuse them.
-"""
-from abc import ABC, abstractmethod
-from dataclasses import dataclass
-from pathlib import Path
-from threading import Thread
-from typing import List, Iterator, Iterable, Dict, Type, Optional
-import inspect
-
-ProductLocator = str
-CapabilityName = str
-
-
-@dataclass
-class ScienceProduct(ABC):
-    """
-    A science product from the archive.
-    """
-    product_locator: ProductLocator
-
-
-@dataclass
-class Capability(ABC):
-    """
-    A capability.
-    """
-    name: CapabilityName
-    max_jobs: int
-
-    def create_request(self, locators: List[ProductLocator]):
-        """
-        Create a capability request for this capability
-
-        :param locators:  product locators for the new request
-        :return:          a capability request
-        """
-        raise NotImplementedError(f'{self.__class__.__name__}.{inspect.stack()[0][3]}')
-
-
-class ProductService(ABC):
-    """
-    Locate products and realize them on disk (haha).
-    """
-    @abstractmethod
-    def locate_product(self, product_locator: ProductLocator) -> Path:
-        """
-        Locates a given product and produces a file path to it.
-        :param product_locator:   the locator to this product
-        :return:                  a path to this product
-        """
-        raise NotImplementedError(f'{self.__class__.__name__}.{inspect.stack()[0][3]}')
-
-
-@dataclass
-class CapabilityRequest:
-    """
-    A particular capability request
-    """
-    capability: Capability
-    locators: List[ProductLocator]
-    id: Optional[int]
-    parameters: List["Parameter"]
-    files: List[Path]
-
-    @property
-    def last_parameter(self) -> "Parameter":
-        return self.parameters[-1]
-
-
-class CapabilityInfo(ABC):
-    """
-    Interface to stored capability information.
-    """
-    @abstractmethod
-    def lookup_capability(self, capability_name: str) -> Capability:
-        """
-        Look up the definition of a capability.
-
-        :param capability_name:  the name of the capability to find
-        :return: a capability
-        """
-        raise NotImplementedError(f'{self.__class__.__name__}.{inspect.stack()[0][3]}')
-
-    @abstractmethod
-    def lookup_capability_request(self, capability_request_id: int) -> CapabilityRequest:
-        """
-        Look up a particular request
-        :param capability_request_id:  the request identifier
-        :return: a capability request
-        """
-        raise NotImplementedError(f'{self.__class__.__name__}.{inspect.stack()[0][3]}')
-
-    @abstractmethod
-    def save_request(self, request: CapabilityRequest) -> int:
-        """
-        Save a capability request and return an integer identifier for it.
-
-        :param request:  the request to save
-        :return:         the request identifier
-        """
-        raise NotImplementedError(f'{self.__class__.__name__}.{inspect.stack()[0][3]}')
-
-
-class QueueRunner(Thread, ABC):
-    pass
-
-
-class CapabilityQueue(ABC):
-    """
-    Holds capability requests until they can be executed.
-    """
-    @abstractmethod
-    def enqueue(self, request: CapabilityRequest):
-        raise NotImplementedError(f'{self.__class__.__name__}.{inspect.stack()[0][3]}')
-
-
-class CapabilityService(ABC):
-    """
-    The capability service: clients access this to request capability runs
-    """
-    @abstractmethod
-    def send_request(self, name: CapabilityName, locators: List[ProductLocator]) -> CapabilityRequest:
-        """
-        Start a capability request with the given capability name and product locators.
-
-        :param name:      the capability name to look things up with
-        :param locators:  the products to start the capability with
-        :return:          a new capability request
-        """
-        raise NotImplementedError(f'{self.__class__.__name__}.{inspect.stack()[0][3]}')
-
-    @abstractmethod
-    def execute(self, request: CapabilityRequest) -> None:
-        """
-        Begin executing a capability request
-
-        :param request:  the request to execute
-        :return: None
-        """
-        raise NotImplementedError(f'{self.__class__.__name__}.{inspect.stack()[0][3]}')
-
-
-class ArchiveService(ABC):
-    """
-    Abstracts services that are needed from the archive system.
-    """
-    @abstractmethod
-    def lookup_product(self, locator: ProductLocator) -> ScienceProduct:
-        """
-        Look up a science product by its locator
-        :param locator:  science product locator for this product
-        :return:         science product
-        """
-        raise NotImplementedError(f'{self.__class__.__name__}.{inspect.stack()[0][3]}')
-
-
-FieldName = str
-FieldLabel = str
-
-
-class Parameter(ABC):
-    """
-    Abstracts parameters needed for running capabilities.
-    """
-    @staticmethod
-    def fields() -> Dict[FieldName, FieldLabel]:
-        raise NotImplementedError(f'Parameter.{inspect.stack()[0][3]}')
-
-    def json(self) -> Dict[str, str]:
-        raise NotImplementedError(f'{self.__class__.__name__}.{inspect.stack()[0][3]}')
-
-    def load(self, json: Dict[str, str]) -> None:
-        raise NotImplementedError(f'{self.__class__.__name__}.{inspect.stack()[0][3]}')
-
-
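-# Illustrative sketch (not part of the prototype): one way a concrete Parameter
-# could look. The "band" field name is made up for the example.
-class ExampleCalibrationParameter(Parameter):
-    def __init__(self, band: str = ''):
-        self.band = band
-
-    @staticmethod
-    def fields() -> Dict[FieldName, FieldLabel]:
-        return {'band': 'Receiver band'}
-
-    def json(self) -> Dict[str, str]:
-        return {'band': self.band}
-
-    def load(self, json: Dict[str, str]) -> None:
-        self.band = json['band']
-
-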
-class CapabilityStep(ABC):
-    """
-    A step in a capability sequence
-    """
-    @abstractmethod
-    def execute_against(self, request: CapabilityRequest, responder: "CapabilityEngineResponder"):
-        """
-        Execute this capability step
-        :return: None
-        """
-        raise NotImplementedError(f'{self.__class__.__name__}.{inspect.stack()[0][3]}')
-
-
-class CapabilityEngineResponder(ABC):
-    """
-    Abstracts the callbacks for a capability engine
-    """
-    @abstractmethod
-    def await_parameter(self, step: CapabilityStep, parameter_type: Type[Parameter]) -> Parameter:
-        raise NotImplementedError(f'{self.__class__.__name__}.{inspect.stack()[0][3]}')
-
-    @abstractmethod
-    def await_product(self, step: CapabilityStep, product_locator: ProductLocator):
-        raise NotImplementedError(f'{self.__class__.__name__}.{inspect.stack()[0][3]}')
-
-    @abstractmethod
-    def prepare_and_run_workflow(self, step: CapabilityStep, name: str, param: Parameter, files: List[Path]):
-        raise NotImplementedError(f'{self.__class__.__name__}.{inspect.stack()[0][3]}')
-
-
-class CapabilitySequence(ABC):
-    """
-    Represents the sequence of events in a capability.
-    """
-    pass
-
-
-class CapabilityEngine(ABC):
-    """
-    Executes a capability.
-    """
-    @abstractmethod
-    def execute(self, request):
-        pass
-
-
-class AwaitProduct(CapabilityStep, ABC):
-    """
-    Wait for a product to become available.
-    """
-
-    product: ProductLocator
-
-    def __init__(self, product: Optional[ProductLocator]=None):
-        self.product = product
-
-    def execute_against(self, request: CapabilityRequest, responder: CapabilityEngineResponder):
-        # if we have a product, await it
-        if self.product:
-            request.files.append(responder.await_product(self, self.product))
-
-        # if we do not, await the locators on the request itself
-        else:
-            for locator in request.locators:
-                request.files.append(responder.await_product(self, locator))
-
-
-class AwaitParameter(CapabilityStep, ABC):
-    """
-    Wait for a certain parameter to arrive (probably from the UI).
-    """
-
-    parameter_type: Type[Parameter]
-
-    def __init__(self, parameter_type: Type[Parameter]):
-        self.parameter_type = parameter_type
-
-    def execute_against(self, request: CapabilityRequest, responder: CapabilityEngineResponder):
-        request.parameters.append(responder.await_parameter(self, self.parameter_type))
-
-
-class PrepareAndRunWorkflow(CapabilityStep, ABC):
-    """
-    Render templates and execute a workflow, awaiting its completion.
-    """
-    workflow_name: str
-
-    def __init__(self, workflow_name: str):
-        self.workflow_name = workflow_name
-
-    def execute_against(self, request: CapabilityRequest, responder: CapabilityEngineResponder):
-        responder.prepare_and_run_workflow(self, self.workflow_name, request.last_parameter, request.files)
-
-
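-# Illustrative sketch: a capability sequence is expected to compose the step
-# types above in order; the workflow name and parameter type here are made up.
-EXAMPLE_STEPS: List[CapabilityStep] = [
-    AwaitProduct(),                                 # stage the requested products
-    AwaitParameter(ExampleCalibrationParameter),    # wait for user-supplied input
-    PrepareAndRunWorkflow('calibration_pipeline'),  # render templates and run
-]
-
-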
-class WorkflowEvent(ABC):
-    """
-    Represents an event on the workflow.
-    """
-    pass
-
-
-class WorkflowEventStream(ABC, Iterable[WorkflowEvent]):
-    """
-    Represents an event stream from a workflow execution.
-    """
-    @abstractmethod
-    def __iter__(self) -> Iterator[WorkflowEvent]:
-        """
-        Get the events from the workflow event stream.
-        :return: all the events for this workflow execution
-        """
-        raise NotImplementedError(f'{self.__class__.__name__}.{inspect.stack()[0][3]}')
-
-
-class WorkflowService(ABC):
-    """
-    Executes workflows; should be a freestanding service.
-    """
-    @abstractmethod
-    def execute(self, workflow_name: str, argument: Dict, files: List[Path]) -> WorkflowEventStream:
-        """
-        Execute this workflow against these files.
-
-        :param workflow_name:  name of the workflow to run
-        :param argument:       extra argument (a JSON object)
-        :param files:          some extra files the workflow should consider
-        :return:               a stream of events from this workflow
-        """
-        raise NotImplementedError(f'{self.__class__.__name__}.{inspect.stack()[0][3]}')
-
-
-@dataclass
-class Workflow(ABC):
-    name: str
-    dagman_template: str
-    tasks: List[str]
-
-    @abstractmethod
-    def render_templates(self, argument: Dict, files: List[Path]) -> Dict[str, str]:
-        """
-        Render the templates associated with this workflow
-        :param argument: the workflow argument JSON
-        :param files:    the files to be processed
-        :return:         a list of rendered templates
-        """
-        raise NotImplementedError(f'{self.__class__.__name__}.{inspect.stack()[0][3]}')
-
-
-class WorkflowInfo(ABC):
-    """
-    Holds information about workflows.
-    """
-    @abstractmethod
-    def lookup_workflow_definition(self, name: str) -> Workflow:
-        """
-        Look up the workflow with this name.
-
-        :param name:  Workflow name
-        :return:      Workflow instance
-        """
-        raise NotImplementedError(f'{self.__class__.__name__}.{inspect.stack()[0][3]}')
diff --git a/apps/cli/utilities/wksp0/wksp/run_capability.py b/apps/cli/utilities/wksp0/wksp/run_capability.py
deleted file mode 100644
index 2ea76f3da03c59912bedebb71730845092176063..0000000000000000000000000000000000000000
--- a/apps/cli/utilities/wksp0/wksp/run_capability.py
+++ /dev/null
@@ -1,40 +0,0 @@
-import pathlib
-
-from injector import Injector
-import wksp.workflow as wf
-import wksp.capability as cp
-from wksp.ifaces import WorkflowInfo, WorkflowService, CapabilityService
-import sys
-
-
-def main():
-    if len(sys.argv) < 2:
-        print('Usage: run <capability name> product-locator [product-locator...]')
-        sys.exit(1)
-
-    # for the purposes of this test, we are using
-    # the hardcoded workflow info and the DAGman workflow service
-    def configure(binder):
-        # binder.bind(CapabilityInfo, to=cp.DirectoryCapabilityInfo(pathlib.Path('./capabilities'))
-        # binder.bind(CapabilityEngineResponder, to=cp.ConsoleEngineResponder)
-        binder.bind(WorkflowInfo,                 to=wf.DirectoryWorkflowInfo(pathlib.Path('./workflows')))
-        binder.bind(WorkflowService,              to=wf.HTCondorWorkflowService)
-        binder.bind(cp.CapabilityInfo,            to=cp.DirectoryCapabilityInfo(pathlib.Path('./capabilities')))
-        binder.bind(CapabilityService,            to=cp.PrototypeCapabilityService)
-        binder.bind(cp.ProductService,            to=cp.HardcodedProductService)
-        binder.bind(cp.CapabilityEngine,          to=cp.PrototypeCapabilityEngine)
-        binder.bind(cp.CapabilityEngineResponder, to=cp.PrototypeCapabilityEngineResponder)
-        binder.bind(cp.CapabilityQueue,           to=cp.PrototypeCapabilityQueue)
-
-    # set up the injector
-    injector = Injector(configure)
-
-    # get the service
-    capability_service = injector.get(CapabilityService)
-
-    # 1. initialize the request
-    request = capability_service.send_request(sys.argv[1], sys.argv[2:])
-
-    # 2. execute the request
-    capability_service.execute(request)
-
-    # 3. while the request needs input, provide it
diff --git a/apps/cli/utilities/wksp0/wksp/run_workflow.py b/apps/cli/utilities/wksp0/wksp/run_workflow.py
deleted file mode 100644
index 38b69b44a3eaa22e1356de1f0019b75b5285da22..0000000000000000000000000000000000000000
--- a/apps/cli/utilities/wksp0/wksp/run_workflow.py
+++ /dev/null
@@ -1,28 +0,0 @@
-import pathlib
-
-from injector import Injector
-import wksp.workflow as wf
-from wksp.ifaces import WorkflowInfo, WorkflowService
-import sys
-import json
-
-
-def main():
-    if len(sys.argv) < 3:
-        print('Usage: run_workflow <workflow name> <JSON argument> file1 [file2...]')
-        sys.exit(1)
-
-    # for the purposes of this test, we are using
-    # the hardcoded workflow info and the DAGman workflow service
-    def configure(binder):
-        binder.bind(WorkflowInfo,    to=wf.DirectoryWorkflowInfo(pathlib.Path('./workflows')))
-        binder.bind(WorkflowService, to=wf.HTCondorWorkflowService)
-
-    # set up the injector
-    injector = Injector(configure)
-
-    # get the service
-    workflow_service = injector.get(WorkflowService)
-
-    # execute the test workflow and print each event
-    for event in workflow_service.execute(sys.argv[1], json.loads(sys.argv[2]), [pathlib.Path(x) for x in sys.argv[3:]]):
-        print(event)
diff --git a/apps/cli/utilities/wksp0/wksp/workflow.py b/apps/cli/utilities/wksp0/wksp/workflow.py
deleted file mode 100644
index e271824635b8bc2cc8ed1580663b050937af3d60..0000000000000000000000000000000000000000
--- a/apps/cli/utilities/wksp0/wksp/workflow.py
+++ /dev/null
@@ -1,129 +0,0 @@
-import stat
-import subprocess
-from pathlib import Path
-from typing import List, Dict, Iterator
-from tempfile import mkdtemp
-import pystache
-
-from htcondor.htcondor import JobEventLog
-from injector import inject
-
-
-from wksp.ifaces import WorkflowService, WorkflowEvent, WorkflowEventStream, WorkflowInfo, Workflow
-
-
-class WorkflowDirectory(Workflow):
-    def __init__(self, path: Path):
-        self.path = path
-
-    def render_templates(self, argument: Dict, files: List[Path]) -> Dict[str, str]:
-        def render(text):
-            return pystache.render(text, argument, filename=files[0])
-
-        # render every file in the workflow directory through pystache
-        result = dict((file.name, render(file.read_text())) for file in self.path.glob('*'))
-
-        # that's it
-        return result
-
-
-class DirectoryWorkflowInfo(WorkflowInfo):
-    """
-    Prototype-quality implementation of WorkflowInfo that uses the filesystem instead of a relational database.
-
-    Looks in the supplied path for directories; each directory is the name of a workflow; inside each directory are
-    template files for that workflow.
-    """
-    def __init__(self, workflow_dir: Path):
-        """
-        Constructor for DirectoryWorkflowInfo
-
-        :param workflow_dir:  the base directory holding workflow definitions
-        """
-        self.dir = workflow_dir
-
-    def lookup_workflow_definition(self, name: str) -> Workflow:
-        if (self.dir / name).exists():
-            return WorkflowDirectory(self.dir / name)
-        else:
-            raise KeyError("no such workflow", name)
-
-
-class HTCondorWorkflowService(WorkflowService):
-    """
-    Implements the workflow service by sending commands to HTCondor
-    """
-    @inject
-    def __init__(self, db: WorkflowInfo):
-        self.db = db
-
-    def execute(self, workflow_name: str, argument: Dict, files: List[Path]) -> WorkflowEventStream:
-        # 1. look up the workflow info for this workflow name
-        info = self.db.lookup_workflow_definition(workflow_name)
-
-        # 2. render the templates to files
-        contents = info.render_templates(argument, files)
-
-        # 3. serialize the templated files
-        temp_folder = self._prepare_files_for_condor(contents)
-
-        # 4. execute condor and get the log file
-        log_file = self._execute_prepared(temp_folder)
-
-        # 5. start reading the logs
-        return HTCondorWorkflowEventStream(log_file)
-
-        # probably should remove the temporary files here
-
-    @staticmethod
-    def _prepare_files_for_condor(files: Dict[str, str]) -> Path:
-        """
-        Place the files for Condor into a new temp directory and return that directory.
-
-        :param files:  a dictionary of filename -> content
-        :return:       the path to the temp directory
-        """
-        # 1. create a temporary directory
-        temp_folder = Path(mkdtemp(dir=str(Path.home() / "tmp")))
-
-        # 2. spool each of the temp files to it
-        for name, content in files.items():
-            (temp_folder / name).write_text(content)
-
-        # 3. make any scripts in there executable
-        for file in temp_folder.glob('*.sh'):
-            file.chmod(file.stat().st_mode | stat.S_IEXEC)
-
-        # finished, return folder
-        return temp_folder
-
-    @staticmethod
-    def _execute_prepared(folder: Path) -> Path:
-        """
-        Execute HTCondor using the named folder as the source of the files.
-
-        :param folder:  the path to the folder to execute
-        :return:        the path to the log file
-        """
-        print(f'executing on folder {folder}')
-
-        # some file in here should end in .dag; that file is our dagman input
-        dagman = list(folder.glob('*.dag'))[0]
-
-        # ensure the log file exists
-        logfile = folder / 'condor.log'
-        logfile.touch()
-
-        # submit
-        subprocess.run(['condor_submit_dag', str(dagman)], cwd=str(folder.absolute()))
-
-        # return the logfile
-        return logfile
-
-
-class HTCondorWorkflowEventStream(WorkflowEventStream):
-    def __init__(self, log_path: Path):
-        self.log = log_path
-
-    def __iter__(self) -> Iterator[WorkflowEvent]:
-        return iter(JobEventLog(str(self.log.resolve())))
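-
-
-# Illustrative usage sketch, mirroring the prototype's grep-uniq demo: wire the
-# directory-backed WorkflowInfo into the HTCondor service and print each event.
-# The paths below assume the prototype's working-directory layout.
-def _example_run_grep_uniq() -> None:
-    info = DirectoryWorkflowInfo(Path('./workflows'))
-    service = HTCondorWorkflowService(info)
-    argument = {'search': 'username'}
-    files = [Path('/home/casa/capo/nmtest.properties')]
-    for event in service.execute('grep-uniq', argument, files):
-        print(event)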
diff --git a/apps/cli/utilities/wksp0/workflows/grep-uniq/grep-uniq.dag b/apps/cli/utilities/wksp0/workflows/grep-uniq/grep-uniq.dag
deleted file mode 100644
index 1833691e16d14f45dbc0299019a6ba10c18d5065..0000000000000000000000000000000000000000
--- a/apps/cli/utilities/wksp0/workflows/grep-uniq/grep-uniq.dag
+++ /dev/null
@@ -1,3 +0,0 @@
-JOB grep grep.condor
-JOB uniq uniq.condor
-PARENT grep CHILD uniq
diff --git a/apps/cli/utilities/wksp0/workflows/grep-uniq/grep.condor b/apps/cli/utilities/wksp0/workflows/grep-uniq/grep.condor
deleted file mode 100644
index 59c5bd0432194bb98aba8637e7958df556e1ab90..0000000000000000000000000000000000000000
--- a/apps/cli/utilities/wksp0/workflows/grep-uniq/grep.condor
+++ /dev/null
@@ -1,9 +0,0 @@
-executable = grep.sh
-arguments = "{{search}} {{filename.name}}"
-output = raw-lines.txt
-should_transfer_files = YES
-transfer_input_files = {{filename}}
-error = grep.err
-log = condor.log
-
-queue
diff --git a/apps/cli/utilities/wksp0/workflows/grep-uniq/grep.sh b/apps/cli/utilities/wksp0/workflows/grep-uniq/grep.sh
deleted file mode 100755
index 8209e70fb8ec6f447a1830dd64d54620d7cde13c..0000000000000000000000000000000000000000
--- a/apps/cli/utilities/wksp0/workflows/grep-uniq/grep.sh
+++ /dev/null
@@ -1,3 +0,0 @@
-#!/bin/sh
-
-grep "$@"
diff --git a/apps/cli/utilities/wksp0/workflows/grep-uniq/uniq.condor b/apps/cli/utilities/wksp0/workflows/grep-uniq/uniq.condor
deleted file mode 100644
index 67af48f315f80ad5af6c9ac8b4660f5fc2b12155..0000000000000000000000000000000000000000
--- a/apps/cli/utilities/wksp0/workflows/grep-uniq/uniq.condor
+++ /dev/null
@@ -1,9 +0,0 @@
-executable = uniq.sh
-arguments = raw-lines.txt
-should_transfer_files = IF_NEEDED
-transfer_input_files = raw-lines.txt
-output = unique-lines.txt
-error = uniq.err
-log = condor.log
-
-queue
diff --git a/apps/cli/utilities/wksp0/workflows/grep-uniq/uniq.sh b/apps/cli/utilities/wksp0/workflows/grep-uniq/uniq.sh
deleted file mode 100755
index 6d62e7f8f3e1e1a37db5ccad687eaf41236954af..0000000000000000000000000000000000000000
--- a/apps/cli/utilities/wksp0/workflows/grep-uniq/uniq.sh
+++ /dev/null
@@ -1,3 +0,0 @@
-#!/bin/sh
-
-cut -f2 -d= "$1" | sed 's/[ \t]*\([^ \t]*\).*/\1/g' | sort | uniq