At the moment, only the datapackage schema is saved on the server. Resources can be added via the Data Package Creator, which is useful to infer column names, types, etc., but they are not saved when pressing "Save to Server".
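For context, this is roughly what "infer column names, types, etc." produces. A minimal sketch with frictionless-py (the file name is a placeholder; the Data Package Creator does the equivalent inference in the browser):

```python
from frictionless import describe

# Placeholder file name; describe() infers a table schema from the data,
# similar to what the Data Package Creator does when a resource is added.
resource = describe("data.csv")

# Only this kind of inferred schema (field names and types) is kept by
# schema-collaboration; the data file itself is not stored on the server.
for field in resource.schema.fields:
    print(field.name, field.type)
```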
This is by design, to avoid significant complexity:

- Dealing with uploading and downloading large data files in a browser
- Handling potentially many (hundreds or thousands of) data files
- Backend: schema-collaboration currently stores the schema in the database; if data files had to be uploaded and kept, they would probably need to go into object storage, which means more moving parts
schema-collaboration might need to be more explicit about what happens to the resources.
If this turns out to be a useful feature, it might need to be added.
There would be two advantages for users:

- Researchers could upload data if the data manager did not have it yet
- After changing a tabular data resource, the Data Package Creator could validate the new schema. Alternatively, validation could be done on the server side using frictionless-py, avoiding having to download/upload the files to the client (see the sketch below).
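As a rough illustration of that server-side option, here is a minimal sketch using frictionless-py (assuming the v5 API); the file names "data.csv" and "schema.json" are placeholders for the uploaded resource and the schema stored by schema-collaboration:

```python
from frictionless import Resource, Schema

# Placeholder paths: "data.csv" is the uploaded tabular resource,
# "schema.json" is the table schema kept by schema-collaboration.
schema = Schema.from_descriptor("schema.json")
resource = Resource(path="data.csv", schema=schema)

# validate() streams the file and checks every row against the schema
report = resource.validate()

print("valid:", report.valid)
if not report.valid:
    # Compact list of (row number, field number, error type) per problem
    print(report.flatten(["rowNumber", "fieldNumber", "type"]))
```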