Multiplayer

Discussion and help relating to the PlayerIO Multiplayer API.

Server-Client protocol implementation

Postby tzachs » February 9th, 2011, 2:35 pm

Hi, I would like to optimize the messaging protocol between my server and client so that the message sizes are as small as possible.
However, in order to do that, I need to know how Player.IO handles its internal protocol.
For instance, I read in one of the posts here that an Int will not always be sent as 4 bytes, but sometimes as a single byte.
I couldn't find anything that describes exactly how this works.

Is there a reference for the rules Player.IO uses internally when transferring messages, or can somebody write them down?
What I really want to know is, for each data type, which rules determine how many bytes will be sent.

I guess I could use Wireshark to analyze the transport (and I will do that anyway, at least to see whether my changes actually decrease the message sizes), but that would require a lot of tedious trial and error, it would take a long time, and I might end up with the wrong conclusions, so it would really help me a lot (and, I think, other developers too) to know this.

Thanks.
tzachs
 
Posts: 8
Joined: January 19th, 2011, 10:33 pm

Re: Server-Client protocol implementation

Postby Oliver » February 18th, 2011, 3:04 pm

Hi,

The binary protocol used by Player.IO is not something we're going to document and commit to publicly. We like being able to change it from time to time...

The general rule is: smaller values use fewer bytes. So, smaller integers use fewer bytes (as long as they're positive), shorter strings use fewer bytes, etc...
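
For illustration only, here's a generic variable-length integer sketch in C# (not necessarily our actual encoding) that shows why small non-negative values can take fewer bytes:

using System.Collections.Generic;

// Illustrative sketch: 7 bits of the value per byte, high bit = "more bytes follow".
// This is a generic varint scheme, shown only to demonstrate the
// "smaller values, fewer bytes" rule.
static class VarIntSketch
{
    public static byte[] Encode(uint value)
    {
        var bytes = new List<byte>();
        do
        {
            byte b = (byte)(value & 0x7F); // low 7 bits of the value
            value >>= 7;
            if (value != 0)
                b |= 0x80;                 // flag: more bytes follow
            bytes.Add(b);
        } while (value != 0);
        return bytes.ToArray();
    }
}

// Encode(5).Length     == 1
// Encode(300).Length   == 2
// Encode(70000).Length == 3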

- Oliver
Oliver
.IO
 
Posts: 1159
Joined: January 12th, 2010, 8:29 am

Re: Server-Client protocol implementation

Postby tzachs » February 19th, 2011, 1:07 pm

Oh, OK, thanks for the clarification; that answer indeed makes sense.

With that in mind, I would like to make a few suggestions for the future, to make it easier for developers to optimize if they choose to:
1. Add a message.SetByte/GetByte, to control individual bytes.
2. Add a message.Size, to get the message size in bytes (unlike message.Length, which returns the number of parameters).
tzachs
 
Posts: 8
Joined: January 19th, 2011, 10:33 pm

Re: Server-Client protocol implementation

Postby Benjaminsen » February 19th, 2011, 1:23 pm

tzachs wrote:Oh, OK, thanks for the clarification; that answer indeed makes sense.

With that in mind, I would like to make a few suggestions for the future, to make it easier for developers to optimize if they choose to:
1. Add a message.SetByte/GetByte, to control individual bytes.
2. Add a message.Size, to get the message size in bytes (unlike message.Length, which returns the number of parameters).


As Oliver mentions, the protocol is already optimized for size. However, if you wish to squeeze every last bit out of a message, you could opt for a model where you send your own format back and forth, encoded as one large byte array.

This will give you the smallest possible messages; however, it is a lot of work for little gain, as the existing package format was already designed to be as small as possible.
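
As a sketch of that approach in C# (the "move" type name and the field layout here are just examples, not anything from your game):

using System.IO;

// Pack two shorts and a byte into a 5-byte payload.
static class MovePacker
{
    public static byte[] Pack(short x, short y, byte action)
    {
        using (var ms = new MemoryStream())
        using (var w = new BinaryWriter(ms))
        {
            w.Write(x);      // 2 bytes
            w.Write(y);      // 2 bytes
            w.Write(action); // 1 byte
            w.Flush();
            return ms.ToArray();
        }
    }
}

// Server-side usage, sending the payload as the message's single value:
// player.Send("move", MovePacker.Pack(10, 20, 1));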
Benjaminsen
.IO
 
Posts: 1444
Joined: January 12th, 2010, 11:54 am
Location: Denmark

Re: Server-Client protocol implementation

Postby Oliver » February 21st, 2011, 10:13 pm

Exactly,

If you want to control the bytes yourself, you can just have a message with a single byte[] value and manage the contents of that array directly.

We might add a message.Size property, but we'll never add a SetByte/GetByte, since that is handled at the protocol level, and allowing you to change individual bytes of a message could make the format invalid. As mentioned earlier, you can just use a single byte[] in the message if you want complete control.
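
A quick sketch of reading it back out (assuming you pull the payload with GetByteArray(0); the layout has to match whatever you wrote on the sending side):

using System.IO;

// Decode the single byte[] value using the same layout it was packed with.
static class MoveUnpacker
{
    public static void Unpack(byte[] payload)
    {
        using (var ms = new MemoryStream(payload))
        using (var r = new BinaryReader(ms))
        {
            short x = r.ReadInt16();
            short y = r.ReadInt16();
            byte action = r.ReadByte();
            // ...apply x, y and action to your game state
        }
    }
}

// e.g. in your message handler:
// MoveUnpacker.Unpack(message.GetByteArray(0));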

- Oliver
Oliver
.IO
 
Posts: 1159
Joined: January 12th, 2010, 8:29 am

