I've started working with BIGDB and am wondering about the following so I can best design the data structure:
1. How can I determine the size of a BigDB object, so I can examine/understand the limits of the potential data models I might use?
2. With regards to loading/saving BigDB objects, does the entire object need to be sent over the wire each time the object is modified, or just the piece that was modified? For example, if I have a player object that is 200KB, would modifying a single property (e.g. score) require sending all 200KB of data, or just the much smaller score value?
I'm asking because I can see the potential for the data format/structure (e.g. property names) to add considerably to the size of the object. If the entire object is being sent for each read/write (vs. just the single property), then traffic could be spared by using a custom serialization that makes the data as small as possible (though that approach would still require the entire object to be sent for each read/write).
Ideally, it would be awesome to hear that only the pieces modified are sent.
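To make the tradeoff concrete, here's a rough sketch (in Python, purely for illustration; the player fields and formats are made up, and this says nothing about how BigDB actually serializes objects) of how much property names can add compared to a compact positional encoding:

```python
import json
import struct

# Hypothetical player record; the fields are invented for illustration.
player = {"name": "Alice", "score": 1250, "level": 7}

# Verbose form: property names travel with every save (JSON-like).
verbose = json.dumps(player).encode("utf-8")

# Compact form: a fixed binary layout where each field's meaning is implied
# by its position, so no property names are transmitted
# (16-byte name field + two 4-byte ints = 24 bytes).
compact = struct.pack(
    "16s2i", player["name"].encode("utf-8"), player["score"], player["level"]
)

print(len(verbose), len(compact))  # the compact form is considerably smaller
```

Of course, the compact form only pays off if the whole object is sent on every write; if BigDB sends only the modified property, the savings mostly evaporate.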
3. With regards to #2, if only the modified parts of a BigDB object are sent, is it also smart about sending only the appended piece? For example, let's say I have a game log (a potentially long list of entries) and I set a new value using gameLog = gameLog + newEntry. Does the entire gameLog get sent back and forth, or just the new entry?
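The two possibilities can be sketched like this (again in Python as a thought experiment; this is not the actual BigDB wire protocol, just an illustration of why the answer matters for a long log):

```python
import json

# Hypothetical game log; entries and encoding are invented for illustration.
game_log = [f"entry {i}" for i in range(1000)]
new_entry = "player scored 100 points"

# Model A: whole-object write -- the full log crosses the wire on every save.
full_payload = json.dumps(game_log + [new_entry]).encode("utf-8")

# Model B: delta write -- only the appended entry is transmitted.
delta_payload = json.dumps(new_entry).encode("utf-8")

print(len(full_payload), len(delta_payload))
```

If BigDB behaves like Model A, the per-save cost grows with the log's length, which would argue for capping or chunking the log.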
4. I like that there is a .toString() method available, as this is potentially useful for saving backups and/or sending objects to the client for offline play. Is there a method to convert this string back into an object?
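What I'm hoping for is a round trip like the following (sketched in Python with JSON as a stand-in string format; whether BigDB's .toString() output can actually be parsed back into an object is exactly the open question):

```python
import json

# Hypothetical player object; the fields are invented for illustration.
player = {"score": 1250, "inventory": ["sword", "shield"]}

backup = json.dumps(player)    # object -> string (analogous to .toString())
restored = json.loads(backup)  # string -> object (the inverse I'm asking about)

assert restored == player      # lossless round trip
```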