
Discussion and help relating to the PlayerIO database solution, BigDB.

Object Contention and Heavy Loads: Should I Be Worried?

Postby phil.peron » January 24th, 2011, 10:32 pm

Hi all,

Let's say I have something in my game (room) that does this:

public void ChangeStuff(Thing thing)
{
    BigDB.Load("MyTable", "Stuff",
        delegate(DatabaseObject result)
        {
            result.Set(thing.Name, GetSomeValue());
            // Save with optimistic locking; on a version conflict,
            // reload and retry by calling ChangeStuff again.
            result.Save(true, delegate() { },
                delegate(PlayerIOError error)
                {
                    if (error.ErrorCode == ErrorCode.StaleVersion)
                        ChangeStuff(thing);
                }
            );
        }
    );
}


Let's assume that this method gets called once when the room is created and continues to cycle for the life of the room. With one room active, it feels pretty benign. With 2 or 3, you get the sense there's going to be some conflict, but the "Stuff" object should stay pretty much up to date. But what happens with 5? Or 10? How about 30 rooms? The "Stuff" object looks like it's going to get into some trouble.

I recall having a conversation with Chris a while ago regarding an issue like this but can't recall the details.

My gut tells me it may be flawed by design. "Stuff" should actually be a table full of "Things". That way the core game code can change the specific "Thing" it's targeting without too much contention going on.

Finally, I'm being lazy. I was originally going to write a test to see how it would react under these conditions but felt posting was a quicker route. ;)

I'd love to hear any feedback. Thanks.
phil.peron
 
Posts: 35
Joined: September 24th, 2010, 7:50 pm

Re: Object Contention and Heavy Loads: Should I Be Worried?

Postby Benjaminsen » January 25th, 2011, 8:59 am

The system has a built-in limitation of 5 active saves per object; after that it will start to throw errors.

Can you describe your use case in a bit more detail? What is the goal of constantly saving an object?
Benjaminsen
.IO
 
Posts: 1444
Joined: January 12th, 2010, 11:54 am
Location: Denmark

Re: Object Contention and Heavy Loads: Should I Be Worried?

Postby phil.peron » January 25th, 2011, 12:57 pm

Ah, that's right. Thanks, Chris.

The use case is that players need to be able to share world state regardless of what room they're in. For example, you might have 3 players in Room A, 5 players in Room B, and 13 players in Room C, all existing in (let's say) a dungeon together.

Everything seems to be working fine sharing the world state object in BigDB, but now I see that this would run into issues when players from more than 5 rooms try to access it to write their state deltas.
phil.peron
 
Posts: 35
Joined: September 24th, 2010, 7:50 pm

Re: Object Contention and Heavy Loads: Should I Be Worried?

Postby Henrik » January 25th, 2011, 1:09 pm

If the number of people that need to share this extra world data is no larger than the room limit, you could perhaps use a service-room for the task?

The basic idea is that all active players join whatever room they're playing in, but also a second roomtype that is a service-room. When a player joins this room, you simply store the name of their current room in BigDB. Whenever you need to exchange data between players, you move them all into the same service-room: look at BigDB to pick the room that most of those players are already connected to, then move everyone else to it. After that you are free to exchange whatever data you want between those players, while they're still connected to their main game room.
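The "pick the room most of those players are already connected to" step is just a frequency count. A minimal sketch in Python (the function name and data shape are hypothetical; the real lookup would go through BigDB, and moving the players happens via the server API):

```python
from collections import Counter

def pick_service_room(current_rooms):
    """Given a mapping of player id -> service-room name (as it would be
    stored in BigDB), return the room most players already occupy, so the
    fewest players have to be moved."""
    counts = Counter(current_rooms.values())
    room, _ = counts.most_common(1)[0]
    return room

players = {"alice": "svc-1", "bob": "svc-2", "carol": "svc-1"}
# svc-1 already holds two of the three players, so only bob needs to move.
```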
Henrik
.IO
 
Posts: 1875
Joined: January 4th, 2010, 1:53 pm

Re: Object Contention and Heavy Loads: Should I Be Worried?

Postby phil.peron » January 25th, 2011, 3:23 pm

Hi Henrik,

I like the idea of leveraging room structures to handle this problem but my philosophy has been to keep the creation and management of things like this on the server side. To my knowledge, there's no way to control rooms via the C# server API which is why I originally started with the design mentioned above.

I think the most straightforward solution is to break out the single state ("Stuff") object into its own dedicated table. Each room only has to manage state updates on the players it controls, and world state reads can be done via an index.

If this works as planned, the one thing that would sweeten our automation process during world creation is allowing tables to be built programmatically. But that's just me being lazy again. ;)
phil.peron
 
Posts: 35
Joined: September 24th, 2010, 7:50 pm

Re: Object Contention and Heavy Loads: Should I Be Worried?

Postby Oliver » January 25th, 2011, 3:50 pm

Hey,

Your questions, answered one by one:

What will happen in your code:
It doesn't really matter how many rooms are active, but rather how much access there is to the given objects. If you're trying to use one object to synchronize everything for everybody, you'll run into problems. But I don't think you'll run into the 5-save limit, since that's a separate thing. Let me explain:

Broadly speaking, there are two major ways of removing race conditions from shared resources: pessimistic locking and optimistic locking. Both work by serializing access to the shared resource so only one writer can edit the thing at a time.

Pessimistic locking is the most basic naive solution you'd use for synchronizing access: Block everybody else from accessing the shared resource while you're accessing it. Usually this is done by grabbing a lock and releasing the lock when you're done with the resource. If somebody else tries to grab the lock while you're in your critical section, they'll be queued up and forced to wait until you release the lock, at which point the next guy in the queue (if any) will be allowed to grab the lock and go about his business etc.

Pessimistic locking is great when there is tons of access to the shared resource and when the time to read/modify the data is very short.
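As a generic illustration of the pessimistic approach (plain Python threads, nothing PlayerIO-specific): the lock serializes the read-modify-write, so no increment is lost even with many concurrent writers.

```python
import threading

balance = 0
lock = threading.Lock()

def deposit(amount):
    # Grab the lock; any other thread calling deposit() blocks here
    # until we release it, so the read-modify-write below is atomic.
    global balance
    with lock:
        balance += amount

threads = [threading.Thread(target=deposit, args=(1,)) for _ in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# balance is exactly 100: no increment was lost to a race.
```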

However, loads of cases aren't like that. For instance, if you had to call a webservice method over HTTP to acquire a lock before you could load a database object, everything would totally fall apart because of the major latency that would introduce.

Optimistic locking was developed as a solution to that problem. Essentially, when you load an object, you also get the version of the object you're changing. And when you save the object, you're telling the backend "Please apply these changes to the object if it's still at version 22." The clever bit is the "if it's still at version 22" part, because when that check fails it allows the client to load the latest version (probably version 23, but it might be further along), reapply the changes locally, and send those changes to the backend along with the message "please apply these changes if we're still on version 23..."

Optimistic locks are clever because they let you optimize for the cases where you don't have high contention for the shared resource, where the time spent in the critical section is very long, or where you have very high read rates with low write rates.

So, getting back to your case. What will happen is that everything will work fine, until some point where everything fails horribly, because almost every write will come back with a stale version: to perform a change you load the latest version, apply your changes, and send them back. But because there is relatively high latency (compared to everything running on the same machine with sub-millisecond memory access), the object will have been changed on the server before your changes arrive. So you'll get a stale version, load the latest version, reapply the changes, send them in, and it will fail again, and so on. Of course, one save will complete once in a while; it'll just be ugly.

The problem underlying all this is that you're trying to synchronize everything using a single object. If you had each room write to its own object, and only *read* the other rooms' objects, you could get much further with your synchronize-everything approach. Of course, this also breaks down once there are so many rooms that loading all that data all the time uses too much bandwidth (although, to be fair, you'd need a ton of rooms for that to happen).
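The "one writer per object, many readers" pattern can be sketched in a few lines of Python, with a plain dict standing in for the BigDB table (all names hypothetical). Each room is the sole writer to its own key, so writes never contend; reads may be slightly stale but never conflict:

```python
# Stand-in for a BigDB table keyed by room id.
world_state = {}

def write_delta(room_id, delta):
    # Only the room that owns room_id ever writes this key,
    # so there is no write-write contention on it.
    world_state[room_id] = delta

def read_world():
    # Any room may read the whole table; reads don't conflict.
    return dict(world_state)

write_delta("room-a", {"players": 3})
write_delta("room-b", {"players": 5})
```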

Where was I? Oh yes.

Getting back to your problem, I'd say the solution to having the 3 players in room A communicate with the 5 players in room B and the 13 players in room C would be to have them all in the same room (3 + 5 + 13 = 21 players). If you need more than 45 players in a room, we can provide that in the Pro or Enterprise plans, but just moving the limit from 45 to, say, 100 will only move the point where you experience the problem.

The better way to solve it would be to divide your world into a grid of zones, and have each room represent a zone. Then whenever people are close to each other (in the same zone), they can communicate. If you really want to make it fancy, have the client automatically join the adjacent zones as the player moves around, so the experience is totally seamless. Bonus for this implementation: you've designed a very scalable solution that isn't limited by the capacity of any single machine.
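The zone math is just integer division. A small Python sketch of the idea (ZONE_SIZE and the eight-neighbour pre-join are my own assumptions for illustration, not anything PlayerIO prescribes):

```python
ZONE_SIZE = 100  # world units per zone edge (an arbitrary choice)

def zone_of(x, y):
    """Map a world position to its zone (room) coordinates."""
    return (x // ZONE_SIZE, y // ZONE_SIZE)

def zones_to_join(x, y):
    """The player's zone plus its eight neighbours, so the client can
    pre-join adjacent rooms for a seamless hand-off at zone borders."""
    zx, zy = zone_of(x, y)
    return {(zx + dx, zy + dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)}
```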

The 5 saves per object limit:
This was misunderstood a bit, I think. The limit is this: each object instance will queue at most 5 saves before it gives you an error telling you that you're doing it wrong.

So, what does that mean? It means that each time you call Save() on a DatabaseObject instance, that instance checks whether there are changes; if there are, it either starts sending those changes to the backend, or queues them to be sent whenever the currently executing save completes. This queue can hold at most 5 saves.

Most likely you'll never run into this limit (saves complete really fast), unless you're changing a property and calling Save() in a loop or something similar.
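A toy model of the queue behaviour described above (the class and the exact error are invented for illustration; only the depth-of-5 rule comes from the post):

```python
class SaveQueue:
    """Toy model of a per-object save queue: at most MAX_PENDING saves
    may be queued; trying to queue a sixth raises an error."""
    MAX_PENDING = 5

    def __init__(self):
        self.pending = []

    def save(self, changes):
        if len(self.pending) >= self.MAX_PENDING:
            raise RuntimeError("too many queued saves on this object")
        self.pending.append(changes)

    def complete_one(self):
        # Called when the backend acknowledges the oldest queued save.
        self.pending.pop(0)

q = SaveQueue()
for i in range(5):
    q.save({"n": i})  # fills the queue to its limit
# q.save({"n": 5}) would now raise until complete_one() frees a slot
```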

Eh, this went on longer than I expected, and in the words of somebody much smarter than me: sorry for the lengthy text, I didn't have time to make it shorter ;-)

Best,
Oliver
Oliver
.IO
 
Posts: 1159
Joined: January 12th, 2010, 8:29 am

Re: Object Contention and Heavy Loads: Should I Be Worried?

Postby phil.peron » January 25th, 2011, 4:21 pm

Wow, thank you so much for the detailed and enlightening response.

I need to digest all this and start some refactoring.

You guys rock.
phil.peron
 
Posts: 35
Joined: September 24th, 2010, 7:50 pm

