Comments on test deployment (thread moved from XCI Jira page to here)


JP suggested moving my comments here, as the Jira page wasn't the right place.  The comments below should be identical to the ones in Jira, formatting aside.


(3) The comment in section 6.1: "The API MUST store the API version to keep track of attribute submission lineage" was not implemented in the software delivered.

The code records the API version in the "agent" field, along with the XSEDE user.  An example value looks like "ssakai@TERAGRID.ORG via job_attributes v2".
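A minimal sketch of how such an "agent" value could be composed; the helper name and its parameters are hypothetical, but the output format matches the example above:

```python
def make_agent(xsede_user: str, endpoint: str, api_version: int) -> str:
    # Combine the submitting user with the API endpoint name and version,
    # so the stored attribute records its submission lineage.
    return f"{xsede_user} via {endpoint} v{api_version}"

# Reproduces the example value from above
print(make_agent("ssakai@TERAGRID.ORG", "job_attributes", 2))
```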


(1) We are not storing the apikey in a database. We are storing a SHA digest of the apikey.

A natural consequence of this design change is that the user cannot come back later and retrieve their API key, which is pretty bad for the user experience.  Section 6.3.2 of the design document discusses this issue and the reason we want to store apikeys in plaintext.

As it stands now, nobody is going to bother opening a ticket for a lost apikey.  They'll create a variant of their gateway and use that one instead, as there's no incentive to do anything else.  The unintended creation of more inconsistent data is a more likely (and more impactful) loss than whatever damage an attacker could do with apikeys to the gateway attributes endpoint.

Furthermore, if the digest is a single-round call to a digest function, rather than a salted password hashing function, any situation that would require invalidating plaintext apikeys across the board would also require invalidating the sha(which one?) digests across the board.  Using a raw digest primitive to hash a password is insecure, because offline dictionary attacks are cheap and hashes can be precomputed (e.g. rainbow tables).  Proper password hashing functions not only salt the hash, but also use a resource-costly digest, many rounds of a faster digest, or a combination of the two, so that brute-forcing a recovered hash is extremely costly.
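To illustrate the difference, here is a sketch contrasting a single-round digest with a salted, many-round scheme, using PBKDF2 from the standard library as a stand-in for whatever function is actually used; the round count and key names are assumptions:

```python
import hashlib
import hmac
import os

def weak_digest(apikey: str) -> str:
    # Single-round, unsalted digest: identical apikeys produce identical
    # hashes, and each brute-force guess costs only one SHA-256 call.
    return hashlib.sha256(apikey.encode()).hexdigest()

def strong_hash(apikey: str, rounds: int = 600_000) -> tuple[bytes, bytes]:
    # Salted PBKDF2: each guess costs `rounds` digest calls, and the
    # per-key salt defeats precomputed (rainbow) tables.
    salt = os.urandom(16)
    return salt, hashlib.pbkdf2_hmac("sha256", apikey.encode(), salt, rounds)

def verify(apikey: str, salt: bytes, stored: bytes, rounds: int = 600_000) -> bool:
    # Recompute with the stored salt; constant-time comparison avoids
    # leaking match length through timing.
    candidate = hashlib.pbkdf2_hmac("sha256", apikey.encode(), salt, rounds)
    return hmac.compare_digest(candidate, stored)
```

Only the salt and the derived hash need to be stored; the plaintext apikey never is, which is exactly why retrieval-on-demand becomes impossible under this design.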


#2 is not part of the API and we are not responsible for the scripts that have to be written to capture the job info (jobid, submittime, etc) and post them to the API. It seems that this now falls on each SP to write those scripts.

The responsibility for submitting attributes and invoking their URL-fetcher of choice with the correct arguments falls on the gateway operator.  It's also up to the gateway operator to decide how they want to handle exceptions (log, retry, fire-and-forget).  The SP has nothing to do with gateway attribute submission. 
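A gateway-side submission script along these lines might look like the following sketch; the endpoint URL, header name, and payload fields are all assumptions, and the retry/fire-and-forget policy is the operator's choice:

```python
import json
import time
import urllib.error
import urllib.request

def submit_attributes(payload: dict, api_key: str,
                      url: str = "https://example.org/api/job_attributes",
                      retries: int = 3) -> bool:
    # POST job attributes (jobid, submittime, etc.) to the API endpoint.
    body = json.dumps(payload).encode()
    req = urllib.request.Request(url, data=body, headers={
        "Content-Type": "application/json",
        "X-API-KEY": api_key,  # hypothetical header name
    })
    for attempt in range(retries):
        try:
            with urllib.request.urlopen(req, timeout=10) as resp:
                return 200 <= resp.status < 300
        except urllib.error.URLError:
            # Transient errors are expected to be short; back off and retry.
            time.sleep(2 ** attempt)
    # Fire-and-forget fallback: give up rather than block the job workflow.
    return False
```

Whether a failed submission is logged, retried later, or simply dropped is left to the operator, per the point above.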

FWIW, I think the reliability of the infrastructure behind XDCDB makes error recovery in this case a non-issue.  Transient errors, if any, will be short, and the loss of a handful of submissions is undesirable but not intolerable.
