A clear difference between a collaborative project and a “unilateral” project is how to establish confidence that all the parties involved in the project will deliver their contributions as intended. I don’t think this problem is unique to Open Source Software; in fact it can be found in many “trust-based” transactions. By “trust-based” transactions, I mean exchanges whose completion cannot be guaranteed beyond relying on the other party to do the right thing.

A few decades back, a field of mathematics called Game Theory modelled this situation as “the prisoner’s dilemma”. Imagine that you and your best friend have been arrested by the police for a crime in which both of you took part. You are put into separate interrogation cells and offered the following deal:
“At the moment, neither of you is talking… but we have enough evidence to put both of you in prison for 2 years. However, if you are prepared to incriminate your accomplice, we will cut your sentence to 1 year of community service and he will spend 5 years in jail. But you must hurry, because we are offering the other guy the same deal. If he gives you up, you will spend 5 years inside… you have 5 minutes to decide.” But you know that if both of you talk, the jail term will be 3 years for both of you… so what do you do?
There is no clear-cut answer to the question, hence it is called a dilemma 🙂 In fact, John Nash identified a theoretical equilibrium point at the “guilty-guilty” scenario. This is the only scenario where, given the other person’s choice, you could not have been better off by choosing differently. However, actual lab experiments showed that “real” people don’t tend to choose the Nash equilibrium more often than the other 3 scenarios.
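To make that equilibrium concrete, here is a minimal sketch (my own illustration, not part of the original argument) that encodes the sentences from the story as costs in years and checks which outcomes survive the Nash test: an outcome is an equilibrium if neither prisoner can reduce his own sentence by changing only his own choice. The 1 year of community service is approximated as a cost of 1.

```python
# Payoff matrix from the story, expressed as years of punishment (lower is better).
# "silent" = stay quiet, "talk" = incriminate your accomplice.
COST = {
    ("silent", "silent"): (2, 2),   # nobody talks: 2 years each
    ("silent", "talk"):   (5, 1),   # you stay silent, he talks
    ("talk",   "silent"): (1, 5),   # you talk, he stays silent
    ("talk",   "talk"):   (3, 3),   # both talk: 3 years each
}
CHOICES = ("silent", "talk")

def is_nash(a, b):
    """True if neither player can lower his own cost by unilaterally switching."""
    my_cost, their_cost = COST[(a, b)]
    i_can_improve = any(COST[(alt, b)][0] < my_cost for alt in CHOICES if alt != a)
    they_can_improve = any(COST[(a, alt)][1] < their_cost for alt in CHOICES if alt != b)
    return not (i_can_improve or they_can_improve)

for a in CHOICES:
    for b in CHOICES:
        print(a, b, COST[(a, b)], "<- Nash equilibrium" if is_nash(a, b) else "")
# Only ("talk", "talk") -> (3, 3) passes the check: the "guilty-guilty" scenario.
```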
A variant of the problem is the “iterated prisoner’s dilemma”: the game is played repeatedly, and the impact of each decision on future transactions is taken into account in the current “game”. In my opinion, it is this version of the dilemma that applies to collaborative development projects.
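As a quick illustration of why repetition changes the incentives (again my own sketch, using two textbook strategies rather than anything specific to this post), the simulation below pits “tit-for-tat”, which cooperates first and then mirrors the opponent, against “always defect”. Defecting wins the first round but destroys the cooperation that pays off over the remaining rounds.

```python
# Iterated prisoner's dilemma: rewards per round (higher is better).
# C = cooperate, D = defect.
REWARD = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_moves):
    """Cooperate first, then copy the opponent's previous move."""
    return "C" if not opponent_moves else opponent_moves[-1]

def always_defect(opponent_moves):
    return "D"

def play(strategy_a, strategy_b, rounds=20):
    score_a = score_b = 0
    seen_by_a, seen_by_b = [], []   # each player's record of the opponent's moves
    for _ in range(rounds):
        move_a, move_b = strategy_a(seen_by_a), strategy_b(seen_by_b)
        pay_a, pay_b = REWARD[(move_a, move_b)]
        score_a, score_b = score_a + pay_a, score_b + pay_b
        seen_by_a.append(move_b)
        seen_by_b.append(move_a)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))     # (60, 60): sustained cooperation
print(play(tit_for_tat, always_defect))   # (19, 24): one cheap win, then 19 rounds of (1, 1)
```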
In collaborative projects the 2 players are the contributor and the “benevolent dictator”/project lead. Will the contributor keep to his/her public commitments? Will the project lead accept the contribution and give kudos where they are due?
Ultimately, the attitude of both players will be influenced by the prospect of future collaborations. If you are trying to facilitate this kind of relationship, here are my views on how to make it successful:
- Iterate the game as much as possible: Ensure the collaboration is not based on a one-off interaction. The easiest way to do this is to have a multiple-delivery integration plan rather than a big-bang approach.
- Raise the stakes: The prisoner’s dilemma is based on the fact that the reward for defecting is far larger than that for cooperating. You can alter the situation by raising the stakes for mutual cooperation, making it much more attractive to both parties (i.e. in a situation where “if nobody talks, you are both free”, there is no dilemma). The sketch after this list illustrates the effect.
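Reusing the earlier equilibrium check (again my own sketch, with the community service approximated as a cost of 1), the payoffs below raise the stakes so that “nobody talks” means both go free. Mutual silence then becomes a stable outcome in its own right, and the best one available to either player, so there is no longer any temptation to defect against a cooperating partner.

```python
# Same Nash-equilibrium check as before, but with the stakes raised:
# "if nobody talks, you are both free" (cost 0 instead of 2 years each).
COST = {
    ("silent", "silent"): (0, 0),   # raised stakes: mutual cooperation now costs nothing
    ("silent", "talk"):   (5, 1),
    ("talk",   "silent"): (1, 5),
    ("talk",   "talk"):   (3, 3),
}
CHOICES = ("silent", "talk")

def is_nash(a, b):
    my_cost, their_cost = COST[(a, b)]
    i_can_improve = any(COST[(alt, b)][0] < my_cost for alt in CHOICES if alt != a)
    they_can_improve = any(COST[(a, alt)][1] < their_cost for alt in CHOICES if alt != b)
    return not (i_can_improve or they_can_improve)

print([(a, b) for a in CHOICES for b in CHOICES if is_nash(a, b)])
# [('silent', 'silent'), ('talk', 'talk')]
# Mutual silence is now an equilibrium and the best outcome for both players.
```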
Lastly, make sure common sense prevails. Although game theory is a fascinating subject, its proposed solutions to this dilemma have failed to consistently predict how real human beings (i.e. not only rational but also emotional) actually behave.
I have changed the last line of the post to reflect @puckrin’s comment on Twitter. Note that I agree with the PD statement, but remain unconvinced about our capacity to mathematically model how “emotional” beings react to it.
Great post. I very much believe that the Prisoner’s Dilemma is a great model for open-source development. Just wanted to make two points.
(*) In addition to the contributor/lead game, there is also a game between companies using the open-source code: do they contribute (plead “not guilty”) or freeload (plead “guilty”)? One of the tricks of running an open-source project is to ensure enough players remain in the collaborative game.
(*) Another way to approach the PD is to change the pay-off for the (guilty, not guilty) solution so that the person who pleads “guilty” gets less benefit than if they had pleaded not guilty. This is sometimes called “the prisoner’s revenge”, because the analogy is that if you plead guilty when the other chap pleads not guilty, his friends will do you harm.
This changes the game to have two Nash equilibria, so it’s still a game, but a different one.
How would this be made real in OSS? You could make it so that people couldn’t use new contributions in devices for 2 years unless they were also making considerable contributions. There are lots of different ways you could do this if you think about it for a while.
I realise that this is rather academic since you guys have defined your governance, but it seems like there’s a decent PhD thesis for someone here: practical methods for moving OSS from the prisoner’s dilemma to the prisoner’s revenge ;-).
Very good post. You understand the game very well. I would recommend looking into other related games like “Stag Hunt”.