Above some tensor size (larger than 4 KB?), it would be a killer feature if we could lazy-load the entire tensor. That would mean that LazyTensor (or whatever class we use for this) keeps an open LMDB transaction, so that the pointers into the data stay alive for as long as possible. If we do this, we will need some mechanism to yoink transactions away from live objects when we run out of transaction slots.
Or maybe it would be better if large tensors get their own memory-mapped file. That would mean they are not transactional like the rest of OOCMap, but I'm not sure that's really a problem.
We have to store NumPy arrays in this.