
Discussion and help relating to the PlayerIO database solution, BigDB.

BigDB Object - Determine Object Size & Best Practice

Postby RossD20Studios » May 18th, 2018, 11:15 pm

I've started working with BigDB and am wondering about the following so I can best design my data structure:

1. How can I determine the size of a BigDB object, so I can examine/understand the limits of the potential data models I might use?

2. With regards to loading/saving BigDB objects, does the entire object need to be sent over the network each time the object is modified, or just the piece that was modified? For example, if I have a 200KB player object, would it need to pass 200KB of data just to modify a single property (ex: score), or just the small piece of score data?

I'm asking because I can see the potential for the data format/structure (ex: property names) to add considerably to the size of the object. If the entire object is sent for each read/write (vs. the single property), then traffic could be spared by using a custom serialization that makes the data as small as possible (though that would require the entire object to be sent for each read/write).

Ideally, it would be awesome to hear that only the pieces modified are sent.

3. With regards to #2, if only the modified parts of a BigDB object are sent, is it also smart about sending only the appended piece? For example, say I have a game log (a potentially long list of entries) and I set a new value using gameLog = gameLog + newEntry. Does the entire gameLog get sent back and forth, or just the new entry?

4. I like that there is a .toString() method available (potentially useful for saving backups, or sending objects to the client for offline play). Is there a method to convert this string back into an object?
RossD20Studios
 
Posts: 22
Joined: May 17th, 2017, 12:15 am

Re: BigDB Object - Determine Object Size & Best Practice

Postby Henrik » May 23rd, 2018, 12:10 pm

Hey Ross,

1) It's a pretty straightforward binary format: a little bit of header plus field identifiers and lengths, otherwise the regular binary representation of the data. Int, uint, and float are 4 bytes each; long, double, and datetime are 8 bytes each. Strings are UTF-8 encoded, and byte arrays are stored as-is.
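Based on those per-type sizes, a rough estimator can be sketched. This is a hedged back-of-envelope sketch, not the real format: the exact header and field-identifier overhead isn't stated, so `PER_PROPERTY_OVERHEAD` below is an illustrative assumption, and only a few value types are handled.

```python
# Rough size estimator for a BigDB-style object, based on the sizes
# above: int/uint/float = 4 bytes, long/double/datetime = 8 bytes,
# strings UTF-8 encoded, byte arrays stored as-is.
PER_PROPERTY_OVERHEAD = 3  # assumed: type tag + name-length bytes, illustrative only

def estimate_size(obj):
    """Estimate the serialized payload size of a dict-shaped object, in bytes."""
    total = 0
    for name, value in obj.items():
        # property name is part of the payload, plus assumed per-property overhead
        total += PER_PROPERTY_OVERHEAD + len(name.encode("utf-8"))
        if isinstance(value, bool):
            total += 1
        elif isinstance(value, int):
            total += 4 if -2**31 <= value < 2**31 else 8  # int vs long
        elif isinstance(value, float):
            total += 8  # stored as double
        elif isinstance(value, str):
            total += len(value.encode("utf-8"))
        elif isinstance(value, (bytes, bytearray)):
            total += len(value)
        elif isinstance(value, dict):
            total += estimate_size(value)  # nested object
    return total

player = {"name": "Ross", "score": 1200, "log": "a" * 100}
print(estimate_size(player))  # 129
```

The point is just that property names count toward the size, which is what question 1 was probing.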

2 & 3) When you load a BigDB object, the entire object is sent to the client. When you save an object, only the modified properties are sent back. However, we send each modified property in full; we don't do deltas of the content within a property. So if you implement your log example as a single string property that you append to, the entire log would be sent every time you save.

A better approach is to make the log an array, and add a new string property to it each time you want to append something; then only the new entry is sent on save. However, to modify an object you still have to load the entire object the first time, so if it keeps growing over the lifetime of a user, it's going to consume a lot of bandwidth eventually. The proper solution is to create a new log object every day or week, or whatever is reasonable for your game.
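Henrik's rotation advice (a new log object per day or week) can be sketched as follows. This is a hedged illustration, not the PlayerIO API: the key-naming scheme is made up, and a plain dict stands in for the database so the shape of the idea is visible.

```python
# Sketch of log rotation: one log object per time period, so a client
# never has to load (or re-send) the whole history. Key naming and the
# in-memory "store" are assumptions, not the real PlayerIO API.
from datetime import datetime, timezone

def log_key(user_id, when=None, period="weekly"):
    """Derive the key of the current log bucket for a user."""
    when = when or datetime.now(timezone.utc)
    if period == "daily":
        return f"log_{user_id}_{when:%Y-%m-%d}"
    # weekly: ISO year + zero-padded ISO week number
    iso = when.isocalendar()
    return f"log_{user_id}_{iso[0]}-W{iso[1]:02d}"

def append_entry(store, user_id, entry):
    """Append one entry to the current period's log object."""
    key = log_key(user_id)
    obj = store.setdefault(key, {"entries": []})  # stand-in for load-or-create
    obj["entries"].append(entry)  # on save, only the new property goes over the wire
    return key

store = {}
k = append_entry(store, 42, "defeated the dragon")
print(k, store[k]["entries"])
```

Because each bucket covers a bounded period, the object a client must load before appending stays small regardless of how long the player has been active.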

4) No, sorry, it's pretty much just a debug method.
Henrik
.IO
 
Posts: 1822
Joined: January 4th, 2010, 1:53 pm

Re: BigDB Object - Determine Object Size & Best Practice

Postby RossD20Studios » May 24th, 2018, 1:37 am

Thanks so much for this information and your suggestion for how to best implement the log system. I appreciate your support, Henrik. :D
RossD20Studios
 
Posts: 22
Joined: May 17th, 2017, 12:15 am

Re: BigDB Object - Determine Object Size & Best Practice

Postby Henrik » May 24th, 2018, 4:20 am

If you do some kind of log system like that, remember to add a DateTime property to it, and make an index over that, so that you can easily delete old entries.
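The cleanup Henrik describes — a DateTime property plus an index over it — might look like the following in spirit. This is only an in-memory sketch of the query shape (created < cutoff), not BigDB's actual index API; the object layout and helper names are assumptions.

```python
# Sketch of pruning old log objects: each object carries a "created"
# DateTime property, and an index over it lets you find everything
# older than a cutoff in one range query. The list comprehension below
# stands in for that range query; it is not the real BigDB API.
from datetime import datetime, timedelta, timezone

def prune_old_logs(objects, max_age_days=30, now=None):
    """Delete log objects whose 'created' timestamp is older than the cutoff."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=max_age_days)
    # equivalent of an index range query: created < cutoff
    stale = [key for key, obj in objects.items() if obj["created"] < cutoff]
    for key in stale:
        del objects[key]
    return stale

now = datetime(2018, 5, 24, tzinfo=timezone.utc)
objects = {
    "log_a": {"created": now - timedelta(days=45)},
    "log_b": {"created": now - timedelta(days=5)},
}
print(prune_old_logs(objects, 30, now))  # ['log_a']
```

Without the timestamp property and index, finding stale objects would mean loading everything, which defeats the purpose of the rotation.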
Henrik
.IO
 
Posts: 1822
Joined: January 4th, 2010, 1:53 pm

Re: BigDB Object - Determine Object Size & Best Practice

Postby atilla1 » May 25th, 2018, 11:19 am

It'd be useful if there were an API for loading individual properties of an object, rather than the entire object all at once.

If a client were able to specify individual properties with a flat, nested-collection syntax, that would likely be fairly convenient.
I think it would look better with deconstructed tuples, but that wouldn't play nicely with older versions of C# on the .NET Framework.

1) It's a pretty straight-forward binary format, a little bit of header and field identifiers and lengths, otherwise regular binary format of data. Int, uint, float are 4 bytes each, long, double, datetime are 8 bytes each. Strings are UTF8-encoded, byte-arrays are stored as is.

I'm assuming you're referring to protocol buffer messages specifically. In the documentation for BigDB it states, "BigDB is not a relational SQL database, but is much closer to a NoSQL document database like MongoDB or CouchDB."

In that case, it's completely schema-less and the serialized binary contents are accessed within a key-value store?
atilla1
 
Posts: 4
Joined: September 7th, 2012, 9:48 pm

