Ingesting data via s3ingest

Step 1: Organize your data

Follow the spec here: http://docs.neurodata.io/ndstore/sphinx/ingesting.html

Currently only 3D data is supported.
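
For example, a 3D stack organized as one PNG per z-slice might look like the listing below (the filenames here are hypothetical; the spec above defines the exact naming convention):

/data/greg/
    0000.png
    0001.png
    ...
    0063.png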

Step 2: Get data locally

Move your data to the same file system where the ndstore database is running.
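
For example (hypothetical source host and paths; any transfer method works):

rsync -av user@source-host:/path/to/stack/ /data/greg/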

Step 3: Write the ingest config file

Go to the configs directory:

cd /home/neurodata/ndstore/ingest-client/ingest/configs

Look at an example config:

cat ./neurodata-kasthuri11-example.json
  • Lines 1-12 depend on where you're running things (host, protocol, and local paths).
  • Pick the right path processor and tile processor for your data (docs to be provided by APL).
  • If your files don't follow the naming convention in the docs, you may need to adjust ingest/plugins/stack.py (see the sketch after this list).
  • Disable SSL/CSRF (lines 278-285 in settings.py).
  • Copy-paste your AWS credentials into settings_secret.py.
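
If you do need to adjust the path processor, a minimal sketch of a custom class might look like the following. The setup()/process() interface and the index arguments are assumptions modeled on the Zindex* classes; check ingest/plugins/stack.py for the real signatures.

import os

class MyStackPathProcessor:
    def setup(self, parameters):
        # "parameters" comes from the path_processor "params" block of the config
        self.parameters = parameters

    def process(self, x_index, y_index, z_index, t_index=0):
        # Hypothetical scheme: files named slice_0000.png, slice_0001.png, ...
        filename = "slice_{:04d}.{}".format(z_index, self.parameters["filetype"])
        return os.path.join(self.parameters["root_dir"], filename)

Once the class lives in stack.py (or another importable module), point path_processor.class in the JSON at it.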

Project settings:

  • dbhost: debug
  • kv engine: redis
  • kv server: debug

{ "schema": { "name": "neurodata-schema", "validator": "NeurodataValidator" }, "client": { "backend": { "name": "neurodata", "class": "NeurodataBackend", "host": "localhost:8000", "protocol": "http" }, "path_processor": { "class": "ingest.plugins.stack.ZindexStackPathProcessor", "params": { "root_dir": "/data/greg/", "filetype": "png" } }, "tile_processor": { "class": "ingest.plugins.stack.ZindexStackTileProcessor", "params": { "filetype": "png" } } }, "database": { "dataset": "greg16", "project": "greg16", "channel": "image" }, "ingest_job": { "resolution": 0, "extent": { "x": [0, 182], "y": [0, 218], "z": [0, 64], "t": [0, 1] }, "tile_size": { "x": 182, "y": 218, "z": 1, "t": 1 } } }

Step 4: Run the ingest

python aws_interface.py token --data /path/to/data --config /path/to/json --action upload-new

Here token is your project token, --data is the root directory of your image stack, and --config is the path to the JSON above.
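
For example, with the greg16 config above (the token name and config filename here are hypothetical):

python aws_interface.py greg16 --data /data/greg --config ./greg16.json --action upload-new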

Notes to self:

  • Write a create-dataset tutorial.
  • Write a create-ingest-JSON tutorial.
  • Add a placeholder for the ingest tutorial, plus an internal tutorial for Eric and myself.
