image analysis running on the server "lev", sharing the same basic data. presently, the IA port is not exposed, but in the interest of time, i am making "pass-through" access to the IA api available via a WB api call. (this is ultimately the model we want to use in production, so that wildbook can handle security and other issues, but as you will see, i have done a fairly generic hack version of the same concept.) this consists of a small wb api "wrapper" call around the call you actually wish to make against the IA data. perhaps examples will demonstrate this best:

IA call: /api/annot/image/contributor/tag/json/?annot_uuid_list=[{"UUID":"8b595dc0-9c5a-4caf-9703-9f8ff017e824"}] becomes: http://lev.cs.rpi.edu:8080/ggr/ia?passthru=/api/annot/image/contributor/tag/json/&arg=annot_uuid_list%3D[{%22__UUID__%22:%228b595dc0-9c5a-4caf-9703-9f8ff017e824%22}]

IA call: /api/annot/age/months/json/?annot_uuid_list=[{"UUID":"8b595dc0-9c5a-4caf-9703-9f8ff017e824"}] becomes: http://lev.cs.rpi.edu:8080/ggr/ia?passthru=/api/annot/age/months/json/&arg=annot_uuid_list%3D[{%22__UUID__%22:%228b595dc0-9c5a-4caf-9703-9f8ff017e824%22}]

in other words, you pass two parameters, passthru and arg, which are just uri-encoded strings that represent the two sides of the "?" in the original call. (note that arg is optional.) the two examples above are "live", in the sense that you can click them and should get the json results as expected.

this should, technically, get you to any existing IA api call (provided i had jason p turn on all the right ones)... at least the ones that use GET. if you need any POST queries, let me know and i can pass those through as well.
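building the passthru/arg pair by hand gets tedious, so the uri-encoding can be scripted. here is a minimal python sketch, assuming the lev GGR endpoint shown in the examples above; the helper name `make_passthru_url` is made up, not part of the WB api:

```python
from urllib.parse import quote

# endpoint taken from the example URLs above
DOMAIN = "http://lev.cs.rpi.edu:8080/ggr/ia"

def make_passthru_url(ia_path, ia_query=None):
    """Wrap an IA GET call in the WB pass-through call.

    ia_path  -- the part of the IA call before the '?', e.g. '/api/annot/uuid/'
    ia_query -- the part after the '?' (optional)
    """
    url = DOMAIN + "?passthru=" + quote(ia_path, safe="/")
    if ia_query is not None:
        # arg is the uri-encoded right-hand side of the original '?'
        url += "&arg=" + quote(ia_query, safe="")
    return url
```

the result can then be fetched with a plain `requests.get(...)` call, exactly like clicking the "live" links above.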


In [1]:
import requests
import urllib
import GetPropertiesAPI as GP
from collections import OrderedDict
import importlib
import UploadAndDetectIBEIS as UD
importlib.reload(UD)
importlib.reload(GP)
# DOMAIN = 'http://lev.cs.rpi.edu:8080/ggr/ia'
import json

In [68]:
data_dict = {
        'aid_list': [1,2,4],
    }
q_annot_uuid_list = UD.get('api/annot/uuid', data_dict)
q_annot_uuid_list


Out[68]:
[{'__UUID__': '0b69cf48-f7f8-47f8-8d0e-4c66c028ff81'},
 {'__UUID__': '9bb949b5-5802-4d38-98e5-04b9aadc540b'},
 {'__UUID__': '30865271-2dc1-451c-9964-ef5e87a40530'}]

In [47]:
url = "http://pachy.cs.uic.edu:5001/api/engine/query/graph/"
data_dict = {
        'query_annot_uuid_list' : json.dumps([q_annot_uuid_list[0]]),
}
response = requests.request('POST', url, data=data_dict)

In [48]:
jobid_str = response.json()['response']

UD.check_job_status(jobid_str)


Out[48]:
True

In [49]:
data_dict = {
        'jobid' : jobid_str
    }

result = UD.get("api/engine/job/result", data_dict)['json_result']  # ['inference_dict']['cluster_dict']

In [50]:
result.keys()


result['query_annot_uuid_list']


Out[50]:
[{'__UUID__': '0b69cf48-f7f8-47f8-8d0e-4c66c028ff81'}]

In [21]:
UD.upload("/Users/sreejithmenon/Downloads/270px-Elephant_near_ndutu.jpg")


Uploading 270px-Elephant_near_ndutu.jpg
Out[21]:
4

In [22]:
UD.run_detection_task(4)


image_uuid 1448207663269176108075294643474467511

Annot aid_list    = [5]
      bboxes      = [[29, 72, 238, 266]]
      thetas      = [0.0]
      species     = ['elephant_savannah']
      confidences = [0.5779936909675598]
      notes       = ['cnnyolodetect']

Deleting aid_list = [5]
      bboxes      = [None]
      thetas      = [None]
      species     = ['____']
      confidences = [None]
      notes       = [None]

Engine Job ID     = 'jobid-0003'
	 Checking status...
	 Checking status...
Engine Detections = [{'width': 238, 'height': 266, 'theta': 0.0, 'class': 'elephant_savannah', 'ytl': 72, 'confidence': 0.578, 'xtl': 29}]

Annot aid_list    = [5]
      bboxes      = [[29, 72, 238, 266]]
      thetas      = [0.0]
      species     = ['elephant_savannah']
      confidences = [0.0]
      notes       = ['']

Notes on aid_list = [5]
      bboxes      = [[29, 72, 238, 266]]
      thetas      = [0.0]
      species     = ['elephant_savannah']
      confidences = [0.0]
      notes       = ['Flickr_Image']

In [44]:
def handle_name_logic(cluster_dict, aid):
    name = None
    prev_match = False

    # case 1: there is just 1 entry in the orig_name_list - no matches
    if len(cluster_dict['orig_name_list']) == 1:
        name = str(aid)
    else:
        # case 2: There are some matches
        for i in range(len(cluster_dict['orig_name_list'])): 
            if 'NEWNAME' not in cluster_dict['orig_name_list'][i]:
                prev_match = True

        if prev_match:
            # case 2a: at least one match already carries a real name; reuse it
            for orig_name in cluster_dict['orig_name_list']:
                if 'NEWNAME' not in orig_name:
                    name = orig_name
        else:
            # case 2b: none of the matches has previously been assigned a name.
            # This branch only fires when the pipeline is run for the first time on a dataset.
            name = str(aid)
    
    return name
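to make the three cases concrete, here is a condensed, behavior-equivalent copy of `handle_name_logic` run against made-up `cluster_dict`s (the 'NEWNAME_' convention is taken from the Out[91] result below; none of these dicts come from a real IA job):

```python
# self-contained, behavior-equivalent copy of handle_name_logic so the
# synthetic cases below run on their own
def handle_name_logic(cluster_dict, aid):
    orig_names = cluster_dict['orig_name_list']
    if len(orig_names) == 1:
        # case 1: only the query annot in the cluster -- fall back to the aid
        return str(aid)
    if any('NEWNAME' not in n for n in orig_names):
        # case 2a: some match already has a real name; reuse the last one found
        name = None
        for orig_name in orig_names:
            if 'NEWNAME' not in orig_name:
                name = orig_name
        return name
    # case 2b: matches exist, but none has been named yet
    return str(aid)

print(handle_name_logic({'orig_name_list': ['NEWNAME_-2']}, 7))                 # case 1  -> '7'
print(handle_name_logic({'orig_name_list': ['NEWNAME_-2', '3']}, 7))            # case 2a -> '3'
print(handle_name_logic({'orig_name_list': ['NEWNAME_-1', 'NEWNAME_-2']}, 7))   # case 2b -> '7'
```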

In [51]:
handle_name_logic(result['inference_dict']['cluster_dict'],1)


Out[51]:
'1'

In [2]:
data_dict = {
    "name_rowid_list" : [151, 194, 360, 376, 504, 512, 517, 528, 543, 594, 612]
}


UD.delete('api/name', data_dict)

In [11]:
data_dict = {
            "aid_list" : [3306,3307]
    }

UD.delete('api/annot',data_dict)


Out[11]:
[None, None]

In [88]:
import time, re, subprocess  # subprocess is needed for the failure-notification email below
def run_annot_identification(aid, daid_annot_list): # ID'ing task for each annotation
    # step 1: get annot_uuid -- this can be done offline too
    data_dict = {
        'aid_list': [aid],
    }
    annot_uuid_list = UD.get('api/annot/uuid', data_dict)

    # step 2: for the given annot UUID run the detection against all the available annots
    url = "http://pachy.cs.uic.edu:5001/api/engine/query/graph/"
    data_dict = {
        'query_annot_uuid_list' : json.dumps([annot_uuid_list[0]]),
        'database_annot_uuid_list' : json.dumps(daid_annot_list)
    }
    response = requests.request('POST', url, data=data_dict)

    print("Query submitted..!")

    try:
        assert response.json()['status']['success']
    except AssertionError:
        print("RUN_ID_PIPELINE failed for annotation_id %i" %aid)

    jobid_str = response.json()['response']
    print("Job ID: %s" %jobid_str)

    timeout = 30 * 60  # give up after 30 minutes
    elapsed = 0
    while not UD.check_job_status(jobid_str) and elapsed < timeout:
        print("Waiting for job completion..!")
        elapsed += 10
        time.sleep(10)

    try:
        assert UD.check_job_status(jobid_str)
    except AssertionError:
        print("RUN_ID_PIPELINE failed for annotation_id %i" %aid)
        bashCommand = """echo "RUN_ID_PIPELINE failed \n`date`" | mailx -s 'Msg from UploadAndDetectIBEIS' smenon8@uic.edu"""
        # shell=True so the pipe into mailx is interpreted by the shell
        # (Popen(bashCommand.split()) would pass '|' to echo as a literal argument)
        process = subprocess.Popen(bashCommand, shell=True, stdout=subprocess.PIPE)
        output, error = process.communicate()
    
    print("Query complete..!")

    # step 3: Job execution must have successfully completed at this point and now we extract the needed information
    data_dict = {
        'jobid' : jobid_str
    }

    result = UD.get("api/engine/job/result", data_dict)['json_result']['inference_dict']['cluster_dict']

    print(result)
    name = handle_name_logic(result, aid)
    # this particular block ensures the names assigned (integers) are preserved
   
    # make the final assignment
    # for annot_uuid_dict in result['annot_uuid_list']:
        # aid = uuid_aid_map[annot_uuid_dict["__UUID__"]]
    print(name)
    print(type(name))
    data_dict = {
            "aid_list" : [aid],
            "name_list" : [name]
        }

    UD.put("api/annot/name", data_dict)
    
    print("IDing complete for AID %s \n" %aid)
    return 0

In [91]:
run_annot_identification(2,[{'__UUID__': '0b69cf48-f7f8-47f8-8d0e-4c66c028ff81'},
 {'__UUID__': '9bb949b5-5802-4d38-98e5-04b9aadc540b'},
 {'__UUID__': '30865271-2dc1-451c-9964-ef5e87a40530'}])


Query submitted..!
Job ID: jobid-0017
Query complete..!
{'annot_uuid_list': [{'__UUID__': '9bb949b5-5802-4d38-98e5-04b9aadc540b'}], 'orig_name_list': ['NEWNAME_-2'], 'exemplar_flag_list': [True], 'error_flag_list': [[]], 'new_name_list': ['NEWNAME_-2']}
2
<class 'str'>
IDing complete for AID 2 

Out[91]:
0

In [97]:
UD.run_id_pipeline(range(1,6), "zebra_grevys")


Extracting uuid for all annotations specified by GID range
Running ID detection for aid 1
Query submitted..!
Job ID: jobid-0019
Query complete..!
IDing complete for AID 1 

Running ID detection for aid 2
Query submitted..!
Job ID: jobid-0020
Query complete..!
IDing complete for AID 2 

Running ID detection for aid 4
Query submitted..!
Job ID: jobid-0021
Query complete..!
IDing complete for AID 4 


In [99]:
url = "http://pachy.cs.uic.edu:5001/api/annot/exemplar/"
response = requests.request('POST', url)
print(response.json())


{'status': {'code': 200, 'cache': -1, 'success': True, 'message': ''}, 'response': [True, True, False, True, False]}

In [7]:
GP.getImageFeature([1,2,3,4,5,6,7, 3001], "name/text")


Out[7]:
['1', '2', '3', '4', '5', '6', '7', '____']

In [6]:
exemplars_2 = UD.post("api/annot/exemplar", {})

In [8]:
set(exemplars_2) - set(exemplars)


Out[8]:
set()

In [10]:
sum(exemplars)


Out[10]:
50

In [25]:
gidRange = range(1,150+1)

gid_aid_map = {}
for gid in gidRange:
    aid = GP.getAnnotID(int(gid))
    gid_aid_map[gid] = aid  # getAnnotID already returns a list of aids

aid_list = [item for sublist in gid_aid_map.values() for item in sublist if len(sublist) > 0]

In [26]:
name_list = GP.getImageFeature(aid_list, "name/text")
name_dict = {aid_list[i] : name_list[i] for i in range(len(aid_list))}

species_list = GP.getImageFeature(aid_list, "species/text")
species_dict = {aid_list[i] : species_list[i] for i in range(len(aid_list))}

In [27]:
species = 'giraffe_reticulated'

# ignore all the annotations which already have a name associated
qaid_list_species_only = list(filter(lambda aid : species_dict[aid] == species and name_dict[aid] == "____", aid_list))   
daid_list_species_only = list(filter(lambda aid : species_dict[aid] == species, aid_list))

In [28]:
def refresh_exemplars(data_dict={}):
    return UD.post("api/annot/exemplar", data_dict)

def get_database_annots(aid_list):
    print("Method for extracting database annotations running..!")

    db_annot_list = []
    
    # get all exemplars
    data_dict = {
        'aid_list' : aid_list
    }
    exemplars = refresh_exemplars(data_dict)

    if any(exemplars):
        db_annot_list = [aid for aid, flag in zip(aid_list, exemplars) if flag]
    else: # no exemplars; fall back to all annotations
        db_annot_list = aid_list
    

    return db_annot_list

In [29]:
daid_annot_list = get_database_annots(daid_list_species_only)


Method for extracting database annotations running..!

In [30]:
daid_annot_list


Out[30]:
[1,
 2,
 3,
 4,
 5,
 6,
 7,
 9,
 10,
 11,
 12,
 14,
 15,
 16,
 17,
 18,
 19,
 21,
 22,
 23,
 24,
 25,
 28,
 29,
 30,
 32,
 33,
 34,
 36,
 37,
 38,
 42,
 43,
 47,
 49,
 50,
 53,
 55,
 56,
 57,
 67,
 71,
 74,
 95,
 105,
 107,
 108,
 116,
 133,
 140]

In [26]:
data_dict = {
        'jobid' : 'jobid-0001'
    }

result = UD.get("api/engine/job/result", data_dict)# ['json_result']['inference_dict']['cluster_dict']

In [27]:
print(json.dumps(result, indent=4))


{
    "status": "ok",
    "jobid": "jobid-0001",
    "json_result": "<!!! EXCEPTION !!!>\nTraceback (most recent call last):\n  File \"/opt/ibeis/ibeis/ibeis/web/job_engine.py\", line 834, in on_engine_request\n    result = action_func(*args, **kwargs)\n  File \"/opt/ibeis/ibeis/ibeis/web/apis_query.py\", line 544, in query_chips_graph\n    cfgdict=query_config_dict, return_request=True)\n  File \"/opt/ibeis/ibeis/ibeis/web/apis_query.py\", line 697, in query_chips\n    cm_list = qreq_.execute()\n  File \"/opt/ibeis/ibeis/ibeis/algo/hots/query_request.py\", line 1265, in execute\n    save_qcache=None)\n  File \"/opt/ibeis/ibeis/ibeis/algo/hots/match_chips4.py\", line 96, in submit_query_request\n    verbose=verbose)\n  File \"/opt/ibeis/ibeis/ibeis/algo/hots/match_chips4.py\", line 218, in execute_query_and_save_L1\n    for fpath in fpath_iter\n  File \"/opt/ibeis/ibeis/ibeis/algo/hots/chip_match.py\", line 2777, in load_from_fpath\n    state_dict = ut.load_cPkl(fpath, verbose=verbose)\n  File \"/opt/ibeis/utool/utool/util_io.py\", line 339, in load_cPkl\n    data = pickle.load(file_)\nEOFError\n\n[!on_engine_request] [!?] Caught exception\n<type 'exceptions.EOFError'>: \n[!on_engine_request] jobid = 'jobid-0001'\n</!!! EXCEPTION !!!>"
}

In [11]:
daids = UD.get_database_annots(list(range(2852)))


Method for extracting database annotations running..!

In [21]:
data_dict = {
        'aid_list': [2852],
    }
annot_uuid_list = UD.get('api/annot/uuid', data_dict)
url = "http://pachy.cs.uic.edu:5001/api/engine/query/graph/"
data_dict = {
        'query_annot_uuid_list' : json.dumps([annot_uuid_list[0]]),
        'database_annot_uuid_list' : json.dumps(daids)
    }
response = requests.request('POST', url, data=data_dict)

In [23]:
response.json()


Out[23]:
{'response': 'jobid-0002',
 'status': {'cache': -1, 'code': 200, 'message': '', 'success': True}}

In [2]:
DOMAIN = 'http://pachy.cs.uic.edu:5000'

In [3]:
GP.getAnnotID(1)


Out[3]:
[]

In [4]:
data_dict = {
        'gid_list': [1],
    }

In [6]:
image_uuid_list = UD.get('api/image/uuid', data_dict)

In [7]:
image_uuid_list


Out[7]:
[{'__UUID__': '89ef0e83-066b-1e0d-6d4b-53ac52007f21'}]

In [ ]: