python - get_multi vs. query in NDB (i.e. reads versus cache)


So, in my app I have a graph-search problem (see my previous questions). One of the annoying parts of the algorithm is that I have to read the entire NDB database into memory (about 5,500 entities, 1 MB in size according to the datastore statistics). Things work OK with

nodeconns = jumpalt.query().fetch(6000)

but I would prefer it if the cache were checked first. However, doing

nodeconns = ndb.get_multi(jumpalt.query().fetch(keys_only=True))

works offline but generates the following error online: "Exceeded soft private memory limit 172.891 MB".

Speed-wise the normal query is fine, but I'm a bit concerned that every user generating 5,500 datastore reads is going to eat through my quota rather quickly :)

So, my questions are: (1) is such a large memory overhead for get_multi normal? (2) Is it stupid to read in the entire database for each user anyway?
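For reference, one variant I was considering is fetching in key batches, so only one batch of full entities is materialized per `get_multi` call while each call still gets a chance to hit the cache first. The chunking helper below is plain Python; the batch size of 500 and the way it's wired to my `jumpalt` model are just guesses on my part, not something I've verified reduces the peak memory:

```python
def batched(seq, size):
    """Yield successive chunks of `seq`, each of length at most `size`."""
    for i in range(0, len(seq), size):
        yield seq[i:i + size]

# Hypothetical usage on App Engine with the jumpalt model from above:
#
#   keys = jumpalt.query().fetch(keys_only=True)
#   nodeconns = []
#   for chunk in batched(keys, 500):
#       # each get_multi call checks the NDB caches before the datastore
#       nodeconns.extend(ndb.get_multi(chunk))
```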
