Will the upload URL change in the future?

Hi,

I have a script for uploading things to NOMAD, but every time I use it the upload command seems to have changed. Currently it seems to be

http://nomad-lab.eu/prod/v1/api/v1/uploads/

at some point it was

http://nomad-lab.eu/prod/rae/api/uploads/

but this command now throws errors.

Will the command change in the future? If yes, is there a persistent URL that I can use?

Thank you!

… or maybe some other parts of the syntax have changed as well; currently checking.

Hello Florian,

going from version 0 to version 1 was a major step for us and included major changes to the API. We changed the URL because the underlying API/processing also underwent breaking changes.

The old URL is still there. The old API can still be used to get data that was uploaded before the change. Unfortunately, we had to discontinue the upload of new data and cannot provide any new data through the old API/URL.

I know that this can be annoying, but these major breaking updates should be infrequent. Of course, I cannot make any promises for the far future.

For the upload endpoint in particular, the API has not changed too much. Here are the docs of the new one. It is a POST now, not a PUT, but most of the parameters stayed the same. The subsequent endpoints to change metadata and to publish have changed more significantly.
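
Roughly, the new upload call should look something like this (the token is a placeholder and the file name is just an example; the exact parameters are in the docs):

curl "http://nomad-lab.eu/prod/v1/api/v1/uploads?token=<your upload token>" -X POST -H 'Accept: application/json' -T your_upload.tar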

Hi Markus,

I found the PUT → POST change in the meantime.

Furthermore, the JSON output I’m requesting via

-H 'Accept: application/json'

is broken: the returned JSON is not generally valid JSON (the "" around strings are missing, for example). Probably it’s valid in Java, though.

I guess I’m the only user of this, so it went unnoticed.

Edit: also, setting a name changed from &name to &upload_name.
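
For example, something like this (token shortened, the upload name is just an illustration):

tar cf - nomad_upload | curl "http://nomad-lab.eu/prod/v1/api/v1/uploads?token=b...&upload_name=my_upload" -X POST -H 'Accept: application/json' -T -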

The JSON part is interesting (and quite astonishing as “all” responses are generated by a framework). I would really like to get to the bottom of this. Do you have an example output? What endpoint?

For example, the upload with this command:

tar cf - nomad_upload | curl "http://nomad-lab.eu/prod/v1/api/v1/uploads?token=b..." -X POST  -H 'Accept: application/json' -T - | xargs echo

yields

{upload_id: xL5Xoii5RiuQb6IpTQdHTA, data: {process_running: true, current_process: process_upload, process_status: PENDING, last_status_message: Pending: process_upload, errors: [], warnings: [], complete_time: null, upload_id: xL5Xoii5RiuQb6IpTQdHTA, upload_name: null, upload_create_time: 2022-06-24T13:05:50.953000, main_author: 6f36f641-c23e-476f-863c-3161f0c66af7, coauthors: [], reviewers: [], viewers: [6f36f641-c23e-476f-863c-3161f0c66af7], writers: [6f36f641-c23e-476f-863c-3161f0c66af7], published: false, published_to: [], publish_time: null, with_embargo: false, embargo_length: 0, license: CC BY 4.0, entries: 0}}

i.e. everything without the "" etc. I checked it with https://jsonformatter.curiousconcept.com

Ok, another thing (unrelated): when I download raw data I get a file named raw, without a file extension or anything. Then I have to guess that it’s a zip file to be able to unpack it.

The problem is the “| xargs echo” at the end of one of the commands shown in the GUI. I think we added it to show the progress of the upload, which is otherwise lost in the piping.

The problem is that xargs strips quotes from its input (in this case the curl output). If you omit the | xargs echo, you’ll get valid JSON (but no progress bar).

We should probably revise the command examples we offer. The whole xargs thing is quite “hacky”.
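
If I remember curl’s behavior correctly, redirecting the response to a file also keeps the progress meter visible and leaves the JSON intact, so something like this might be a cleaner alternative (token shortened as above):

tar cf - nomad_upload | curl "http://nomad-lab.eu/prod/v1/api/v1/uploads?token=b..." -X POST -H 'Accept: application/json' -T - -o response.json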

How did you do the download?

I am clicking through our various download options. Downloading multiple files from the raw files tab of an individual entry only suggests the entry id as the file name, not .zip. Here raw.zip would be better. We can change that.

The other downloads (the download button on top of the upload page, downloading entries from the processing step of the upload page, the download button in the overall search) all seem to suggest raw.zip as the download file name. Maybe it is my browser making assumptions based on the content type header.

If you use the API, we are a bit limited. I am not really sure how curl would create a file name.
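
Though, if I remember correctly, curl’s -O -J (--remote-header-name) options will use a file name from a Content-Disposition header, if we send one; the URL below is just a placeholder:

curl -O -J "<raw download URL>"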

Yes, by selecting the files.

I was not aware that one can do the bulk download at the top, thanks for the hint.

Ah yes, I used xargs -0 previously, which keeps the quotes.
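
A quick check shows the difference (plain xargs strips the quotes, -0 does not):

printf '{"a": 1}' | xargs echo      # prints {a: 1}
printf '{"a": 1}' | xargs -0 echo   # prints {"a": 1}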

Thanks for the tip.

The raw problem seems to be happening in Firefox, not Chrome. We switched to a proper library to implement the downloads in an upcoming release (don’t worry, the URLs won’t change ;-)). This will fix the problem.