Wednesday, March 14, 2012

"Jane, you ignorant slut."

"Dan, you pompous ass."
Remember the '70s when phrases like that in an SNL sketch were so outrageous, so over the top, they actually were kinda funny? Today, of course, those old Point/Counterpoint sketches seem more prescient than outrageous, as "serious" news commentators routinely say things like this with the same calm, deliberate, faux-intellectual deadpan Dan Aykroyd and Jane Curtin used back then. And it's not just talk radio where this occurs.

Most major online news outlets and blogs now allow readers to comment on op-ed pieces they publish. If you've ever read any of those commentary streams, you know they can quickly degrade into name calling and personal attacks, especially on those sites that allow commentators to post using anonymous pseudonyms. The grandiose idea that thousands of readers could contribute something constructive and positive to a thoughtful online conversation has been, to date, one of the most miserable failures associated with the Internet since its inception.

Nick Denton, the founder of Gawker Media and owner of a bunch of popular blogs, makes no bones about it. He says,  "It's a promise that has so not happened that people don't even have that ambition anymore. The idea of capturing the intelligence of the readership — that's a joke."

Why has this laudable Internet ambition been such an abject failure, and what, if anything, can be done to fix it? I have spent the past few months reading thousands of comments attached to articles on a wide variety of topics. To be sure, the effect is often depressing and sometimes frightening. But I do not think the reason this hasn't worked is a lack of intelligent people out there. On the contrary, I see many well-reasoned, well-written posts, some making points with which I agree and some with which I disagree. People still remember how to disagree with one another without ripping each other to shreds with ad hominem attacks. The trouble is, the constructive comments are often submerged in a cesspool of vile language, deliberate disinformation, intellectual immaturity, and troll bait.

I have developed a few tricks that help me leaf through the commentary streams quickly, identifying the comments that may have value and skipping the ones that probably should not have been written in the first place. One such trick is based on a quote often attributed to Eleanor Roosevelt,
"Great minds discuss ideas, average minds discuss events, small minds discuss people."
I've found you can quickly scan a post to determine whether the author is going to talk about ideas, events, or people. Posts in the first category are usually interesting, those in the second sometimes interesting, and those in the last almost never. Here are some examples from the commentary stream on the CNN article I linked above.
Ideas
@veggiedude - Siri (and IBM's Watson) has shown how well artificial intelligence can work. The solution is to employ a Siri type moderator to decide whether to post or not post the comments from the users. This way, a machine is in control, and there is no one to get mad at - unless a person 'enjoys' getting mad at a machine.
@maestro406 - Allow a "squelch" option. Everyone has the right to post, but that doesn't mean I have to read every comment from every poster. I should be able to block individual posters on my computer.
Events/Examples
@swohio - And the funny thing is, so many news sites have switched over to making people log in using Facebook in order to leave comments, stating it will help facilitate more mature, reasonable, and engaging discussion/comments. Sad to say, it hasn't at the sites I used to comment on where they've switched over to that system.
@GloriaBTM - The most enjoyable board I've posted on was IMDB around 2005-07, both because of the format, variety of topics, and the way it was moderated. (I'm not sure what it's like now...) We were able to have real conversations about current events, science, religion (and yes, movies), without having to wade through so much muck. Most importantly, posters whose posts were repeatedly deleted for violating T&C had their accounts deleted, fairly quickly. Yes, people could create new accounts, but it did slow the nastiness down. People could also IGNORE certain posters. It was brilliant. You just click ignore and you don't have to see them ever again. I found the format easier to navigate than any other board I've been on. Each thread could collapse (truly collapse, unlike here, where it still takes up space, though the message is blanked). Then you could quickly scroll through and find your conversation and open that thread.
People
@Tzckrl - STOOPID! R u kidding? Palin and Santarum 2012! Anyone who don't think so is a twerp!
@TommiGI - South by Southwest is a joke and Nick Denton is the punchline.
Another, sometimes more reliable, trick is to jot down the aliases of commentators who post useful or interesting comments and then scan for other comments by those same aliases. Of course, this is made more difficult if users are allowed to choose arbitrary, anonymous handles each time they log in. As @swohio indicates above, the recent tendency of other sites to require Facebook logins in order to try to enforce authenticity has had mixed results. While that approach doesn't seem to help moderate the crazies, as you might think it would, it does at least provide a consistent screen name for posters so that you can more easily spot the ones who have had a better track record in the past.

In Trust Me, I'm @HamsterOfDoom I talked about how a framework like the Ethosphere can help develop and maintain trust relationships with pseudonymous identities. By maintaining a numerical ranking, a rep as I call it in the Ethosphere, that reflects the degree to which an alias has contributed constructively to the community in the past, we can more easily filter out the noise while still allowing open participation. The trolls can still troll, but their influence in the group will be limited.
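A rep-based filter of the kind described above can be sketched in a few lines. This is a hypothetical illustration only: the `rep` field, the threshold, and the comment data shapes are invented here, not part of any actual Ethosphere API.

```python
# Sketch of reputation-weighted comment filtering. Each alias carries a
# numeric "rep" score reflecting past constructive contribution; comments
# below the threshold are ranked out of the default view, not censored.

def visible_comments(comments, min_rep=10):
    """Return comments whose authors meet the rep threshold, highest first."""
    ranked = [c for c in comments if c["rep"] >= min_rep]
    return sorted(ranked, key=lambda c: c["rep"], reverse=True)

# Illustrative stream; the rep numbers are made up.
stream = [
    {"alias": "@Tzckrl",     "rep": 1,  "text": "STOOPID! R u kidding?"},
    {"alias": "@maestro406", "rep": 42, "text": "Allow a 'squelch' option."},
    {"alias": "@swohio",     "rep": 27, "text": "Facebook logins haven't helped."},
]

for c in visible_comments(stream):
    print(c["alias"], "->", c["text"])
```

The trolls can still post; their comments simply sink below the visibility threshold until their rep recovers.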

Another possible approach, proposed by Nick Denton in the linked interview, could have a similar impact.
The answer? Denton said his sites are planning to post some stories that allow only a hand-picked, pre-approved group of people to comment on them. That, he said, would make the comment section an extension of the story and allow people [...] to have their say without fear of being piled onto by others.
The fundamental difficulty with this approach is that someone else (the publisher?) anoints an elite few who may participate in the conversation. This runs counter to the original goal of open participation and violates an unwritten Internet law just as much as requiring authenticated logins would. A better solution is to allow the participants themselves to choose who should have proportionately greater influence, in a dynamic and ongoing fashion. In I'm Not Elitist, Just Better Than You, I discuss how and why this is done within the Ethosphere.

Do you think pseudonyms + reputation could be a possible solution to this problem?  Are there other self-policing, self-organizing ways to do this?

Thursday, March 8, 2012

Peer Pubs

The land of academic publishing and peer-reviewed journals is a strange land indeed. The business is dominated by a handful of publishing companies, notably the big four: Elsevier, Springer, John Wiley & Sons, and Informa. And don't think it's just a niche market; those companies are hugely profitable. In the first quarter of 2011, Wiley made over $250 million with a profit margin of 43%! The largest of the four, Elsevier, made over $1.1 billion in 2010, and 36% of that went straight to the shareholders as profit. Wow. How in the world can there be such a giant market for publications like The Journal of Small Ruminant Research ("The official journal of the International Goat Association")? And why is it such a profitable business?
Well, here's how it works. These refereed journals are periodic anthologies of scholarly papers and articles on fairly narrow topics (like goat research). They are written, understandably, by academics and researchers, and they often document painstaking research and, sometimes, groundbreaking results in thousands of different fields of study. To ensure the accuracy, novelty, and relevance of these results, a long-standing peer review process has evolved in which highly reputable members of the same research community review, accept, or reject submissions. The collections are usually planned and organized by an editor, who is also a member of the same research community, like, for example, the community of researchers in the field of parallel and distributed computing.

"Great!" you say. We want these smart, creative researchers to make gobs of money because they are actually adding huge value to our body of knowledge and, eventually, our economy as well. Out of those billions of dollars, how much does the author of one of these articles get paid? Well, typically nothing. They probably are getting a salary from a university or corporate research lab, and publishing research is considered part of their jobs, so they rarely get paid anything by the publishing companies. And besides, they get their names on the papers and enjoy all the fame and glory that comes from publishing a well-prepared, carefully researched treatise on ruminant husbandry or, ahem, byzantine agreement.

So what about the reviewers, whose names do not go up in lights? Yeah, they don't usually get paid either. Again, it's an expected part of their jobs. The editors? Nope. It is such an honor to be asked to edit a series or even a volume of these prestigious publications that members of that same research community (heck, let's just call it a teamspace) do it for free. So basically, all the intellectual heavy lifting required to produce one of these journals is borne by the teamspace itself with little or no monetary compensation expected.

The publishers keep all those $billions for themselves. But surely they provide some useful service, right? The answer is, they used to. In the old days, before the Internet, the publishers handled typesetting, copy editing, printing, distribution, and promotion (marketing). But today, either that stuff is being done by the teamspaces themselves (typesetting, copy editing, promotion) or simply isn't required anymore.  We love the Internet! It makes things cheaper for us consumers and more readily available to all of us. Since the hard work of peer-reviewed publishing is being done free of charge by all those various teamspaces, and since printing and distribution costs have essentially disappeared, I bet you think those journals and the research papers they contain can be had for a song now. Right?

Think again. A year's subscription to the Elsevier journal pictured above, just twelve issues, will cost a library about $1,200. And that's the online cost! The publishers simply flip a switch to give you access to somebody else's hard work, and they charge you $100 per month per journal for that. University libraries must buy hundreds, perhaps thousands, of these subscriptions, and most researchers have little choice in the matter because they are required by their employers to publish in certain journals. The kind of open access we've come to expect on the Internet doesn't exist for these types of works, because the publishers make the authors sign over copyrights, again without compensation. And then, to pour salt on an already infected wound, the publishers spend some of that unearned revenue to lobby Congress to pass vile, anti-competitive, anti-Internet laws with sickeningly misleading titles like the Research Works Act (RWA), the Stop Online Piracy Act (SOPA), and the Protect Intellectual Property Act (PIPA).

To state the obvious, the international research community is composed of a bunch of really smart folks, and they won't allow this situation to persist for too much longer. Already, nearly 8,000 of them have signed a petition, a bill of rights really, called The Cost of Knowledge, pledging to boycott Elsevier's publications. And there appear to be stirrings of a broader revolt against this ridiculous status quo in the blogosphere.

Most researchers agree that the only real value-add provided by the publishers in this scenario is to provide a framework in which peer review can take place. Not that they actually do the reviews; that's done by the peers themselves. But they provide a framework, a societal network where researchers on any particular topic can socialize, collaborate, swap critiques and comments, reach consensus about which papers to publish in a given issue, and recognize and honor those members who have contributed in a positive way to the body of knowledge. It turns out, reputation is big in academia. Does this framework sound familiar?

The land of academic publishing will, almost inevitably, evolve into something like the Ethosphere. There's little doubt about this, in my opinion.  To phrase it another way, an Ethosphere teamspace as I've described it in this blog is nothing more nor less than a topical, peer-reviewed, online publication without the parasitic middlemen.

In the Ethosphere, perhaps knowledge needn't cost so much after all.

Tuesday, March 6, 2012

A Random Story

So randomness and probabilities are inevitable parts of any discussion of distributed consensus. Just how comfortable are we with the fact that our daily choices must be guided, at least in part, by the laws of probability?

When we get behind the wheel of an automobile or strap ourselves into an airplane seat, we are literally trusting our lives to the laws of probability. Since we can't know everything about the pilot of the airplane or the other drivers on the road with us, we wrap ourselves every day in the comfortable knowledge that, with high probability, we will survive to see another day.



But when it comes to computer programs and programmers, we are much more demanding. We expect computers to be deterministic. Period. They perform exactly the same instructions over and over and any deviation from that determinism we call a "bug" or a "fault." We demand that our family photos be kept safe even in the eventuality of multiple, byzantine faults. When you check yourself into a hospital, the chances that an accident or mistake will lead to the loss of your life are quite a bit higher than the chances that an accident or mistake will result in the loss of that kitty cat picture on Facebook. (Small caveat: I totally made that up.)

The fact is, we accept that the universe is stochastic but demand that computers be deterministic. Here's a true story about how I once forced a customer of mine to face that bias and, I hope, start to change his attitude a bit.

While I was at Caringo (they probably still do this), we would periodically conduct training classes for customers of our products, which include CAStor, a scalable, distributed object storage solution. I would typically join the students and instructors for lunch one day to answer any technical questions they had about the products. During one such training class, I learned from the instructors that one of the students was being vocally skeptical about one of the foundational assumptions we had made in the product.

CAStor can store billions, even trillions of distinct data objects, each of which has a universally unique identifier (UUID) associated with it.  Other similar products on the market generate these UUIDs in a deterministic fashion that necessarily involves a central naming authority - an expensive and fragile solution IMHO. CAStor's innovation in this area was to instead use a large truly random number for these UUIDs, removing the requirement for a central authority and significantly simplifying the generation and management overhead.

Using this mechanism, the chances that two distinct objects will be assigned exactly the same UUID are very, very, very (to the 42nd power) small. But it is possible and, of course, it would be a bad thing if it happened, which is exactly the objection being raised by our skeptical student.  I mean, we're talking about computers here, and he expected a deterministic guarantee that such a collision is not possible.
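The "very, very small" claim can be made quantitative with the standard birthday bound. The sketch below assumes 128-bit random identifiers (the post doesn't state CAStor's actual UUID width) and uses `math.expm1` so the result stays accurate even when the probability is astronomically tiny.

```python
# Birthday-bound estimate: the probability that n uniformly random
# `bits`-bit identifiers contain at least one collision is approximately
# 1 - exp(-n*(n-1) / (2 * 2**bits)).

import math

def collision_probability(n, bits=128):
    """Approximate P(at least one collision) among n random bits-bit IDs.
    expm1 computes exp(x) - 1 accurately for tiny x, where 1 - exp(-x)
    would round to zero in floating point."""
    return -math.expm1(-n * (n - 1) / (2.0 * 2**bits))

for n in (10**9, 10**12):  # a billion, then a trillion stored objects
    print(f"n = {n:.0e}: P(collision) ~ {collision_probability(n):.3e}")
```

Even at a trillion objects, the collision probability is on the order of 10^-15, far below the chance of, say, undetected hardware corruption in the same system.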

So I expected a question along these lines during our meet-the-architect lunch. But it didn't come.  At the end of the lunch, after we'd all finished our deli sandwiches, I decided to bring up the topic myself. I went into the kitchen of our corporate offices and retrieved a wine glass that had been sitting on the top shelf collecting dust. Back in the training room, I tapped the glass with my pen to get everyone's attention and, without saying a word, placed the glass into my empty lunch sack, twisted it shut, and put it on the conference table. Then I picked up a heavy metal power strip and, again without a word to the class, pounded the sack with the power strip until the glass was thoroughly shattered.

Everyone moved away from me there in the training room. The instructors, whom I hadn't warned about this, considered calling building security.

After a brief pause for effect, I picked up the lunch sack and began shaking it vigorously. Then I asked, "Who here believes the glass will spontaneously reassemble itself if I keep shaking the bag?"

No one answered. So I upped the ante.

"What if I continue shaking it like this all day?  All year?"  Nobody answered.

"What if I continue shaking the sack for the rest of my life? Who believes the glass shards will accidentally find their way back in the exact same configuration they started in to reform the wine glass?"

Finally, the troublemaker student answered. He timidly said, "Well, it could happen."

"Exactly," I said. "It could happen. And if any of you believe it actually will, I recommend you not buy our product." And I left the room.

I'm happy to report that our "troublemaker" student became a big believer in our product and is still a customer.

Sunday, March 4, 2012

Provably Probable Social Choice

There is a strong theoretical similarity between computer networks and people networks. Here we will discuss a surprising fact that has been proven to be true for both human and computer networks.

Without a coin to flip, there is no safe way for independent entities to reach consensus! 

The previous chapter contained a light treatment of a fairly heavy theoretical topic in computer science, Byzantine Agreement (BA), and an exploration of how randomness is an essential requirement in overcoming certain impossibility results in distributed computing. As it turns out, there are some tantalizingly strong similarities between the theory of distributed agreement and the theory of social choice (SC).

Recall that the BA problem setup involves a number of distributed processes each of which starts out with an initial value for some variable. We might call this initial value the "authentic" or "honest" value of a process, because all properly functioning processes will honestly report this value to all others. The goal of any BA algorithm is to allow the processes to vote for their authentic value and to compute, eventually, a global value in such a way that two straightforward requirements are met:
  1. If all well-behaved processes have the same authentic value, then the global consensus value must be that value.
  2. If the well-behaved processes do not all agree on the same authentic value, they must still agree on some global value; it doesn't matter which one.
To make it more interesting, the problem allows for the possibility of faulty processes that do not honestly report their authentic value choices but rather attempt to game the system to influence the agreed upon result, or to prevent any agreement from being reached. If we place no constraints at all on the types of failures the faulty processes can experience, then we may as well assume the nefarious nodes are consciously trying to thwart our algorithm and that they have complete access to all the information they need to do so.
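The two requirements above can be stated as an executable check. To be clear, this is only a verifier for the outcome of a finished run, not a BA algorithm itself; the function name and input shapes are my own illustration.

```python
# Given the authentic initial values of the well-behaved processes and the
# values those same processes eventually decided, confirm both
# requirements: agreement (everyone decides the same value) and validity
# (a unanimous honest start forces that value as the decision).

def check_byzantine_agreement(initial_values, decided_values):
    # Requirement 2: all well-behaved processes must agree on one value.
    if len(set(decided_values)) != 1:
        return False
    decided = decided_values[0]
    # Requirement 1: a unanimous start must yield that unanimous value.
    if len(set(initial_values)) == 1 and decided != initial_values[0]:
        return False
    return True

assert check_byzantine_agreement([1, 1, 1], [1, 1, 1])      # unanimous, honored
assert not check_byzantine_agreement([1, 1, 1], [0, 0, 0])  # validity violated
assert check_byzantine_agreement([0, 1, 1], [0, 0, 0])      # mixed start: any agreed value is fine
assert not check_byzantine_agreement([0, 1, 1], [0, 1, 1])  # no agreement at all
```

Note that the faulty processes never appear in these checks: the requirements constrain only what the well-behaved processes decide, no matter what the faulty ones do.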

Already we can start to see similarities between BA and SC (voting) problems. We have a number of independent processes (voters), each of which has its own preferred value (candidate) and they must report (vote) in order to agree on (elect) a global winner in a fair manner. Some of the entities may be faulty (dishonest) and instead report (vote) strategically, using information about partial results to unfairly game the system in favor of their authentic choice.
Terminology note: The social choice literature seems to use the terms "strategic voting" and "tactical voting" interchangeably to mean voting for some candidate who is not your authentically preferred one in order to try to influence the election in your favor. Here we will use "tactical voting" because it describes better what's actually going on.
A very interesting question to ask for both BA and SC is this: Is it possible to devise an algorithm in which the non-faulty (honest) processes (voters) can overcome the evil impact of one or a few faulty (dishonest) ones so they cannot unfairly influence the result?  Not surprisingly, many mathematicians have examined this and similar questions and, perhaps surprisingly, the answers have been rather discouraging.

In the area of Byzantine Agreement, it was proven in 1985 that, for any practical scenario (where, e.g., message delivery times are unpredictable), there is no deterministic algorithm that will prevent even a single faulty process from influencing the results of the agreement. All the great work and research to find solutions in this area depend on randomness in some way to solve this important problem.

So what about the Social Choice arena? Around the same time (1984), Michael Dummett published several proofs of a decades-old conjecture, now called the Gibbard-Satterthwaite theorem, which is about voting algorithms used to select one winner from a group of three or more candidates based on voters' preferences. To paraphrase, the theorem states that for any reasonable scenario (where, e.g., there are no blacklisted candidates and the winner is not chosen by a single dictator), there is no deterministic algorithm that will prevent even a single tactical voter from influencing the results of the election. Sound familiar?
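The phenomenon is easy to see in miniature. Here is a minimal example under simple plurality with an alphabetical tie-break (my choice of deterministic rule, purely for illustration): a single voter who misreports their preference swings the election toward a candidate they honestly prefer over the sincere winner.

```python
# Tactical voting under plurality with a deterministic alphabetical
# tie-break. Each ballot is just the voter's reported first choice.

def plurality_winner(ballots):
    tally = {}
    for choice in ballots:
        tally[choice] = tally.get(choice, 0) + 1
    best = max(tally.values())
    # Deterministic tie-break: alphabetically first among the leaders.
    return sorted(c for c, n in tally.items() if n == best)[0]

# True preferences: two voters rank A first, two rank B first, and the
# fifth voter's honest ranking is C > B > A.
honest = ["A", "A", "B", "B", "C"]
print(plurality_winner(honest))    # "A" -- the C-voter's least favorite wins

# The C-voter abandons C and reports B instead.
tactical = ["A", "A", "B", "B", "B"]
print(plurality_winner(tactical))  # "B" -- better, by the C-voter's own ranking
```

One misreported ballot changed the outcome, exactly the kind of manipulation the theorem says no reasonable deterministic rule can rule out.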

There is a better-known, but in some ways less interesting, result in social choice called Arrow's Impossibility Theorem that has a lot in common with the G-S theorem discussed above. Dr. Kenneth Arrow received the Nobel Prize in Economics for his work related to this theorem in 1972. Professor Nancy Lynch received the Knuth Prize in 2007, in part for her seminal work on the impossibility proof for Byzantine Agreement. Yet, as near as I can tell, neither discipline has cited the other in all these years, despite the striking similarities of the problems and results and the huge amount of research activity associated with the two independent fields.

Don't get me wrong. I'm not saying these two canonical problems are identical, or even that there is a common underlying body of theory (though I believe there very well might be). But even the differences in the problem statements are illuminating and may indicate areas for further research in one field or the other. For example, the BA problem statement requires every non-faulty process be able to independently compute and verify the agreed upon value. There is no central authority to tabulate votes in BA, whereas in SC, it is typically assumed the independent voters submit their preferences, which are then tallied in one place by a trusted central authority. But would it be a useful requirement for each voter in a SC scenario to be able to independently verify the results of an election? I believe this could be the basis of a reasonable formal definition of election transparency, a very useful property of real elections.

There are also areas where the typical formulations of SC problems are actually more stringent than BA. Remember, the validity requirement for BA is: if every non-faulty process begins with the exact same initial value, then the algorithm must choose that value. If even one good process has a different value, then a correct BA algorithm is free to choose any value at all, as long as everyone agrees with it in the end. For SC, however, we must agree to elect a candidate based on more flexible requirements. An alternative validity rule might be: if a plurality of non-faulty processes have the same preferred value, the algorithm must choose that value. Or more generally, the algorithm must choose the winning candidate such that voters are the least unhappy about the result. This suggests some interesting extensions to the BA problem, such as Synchronous Byzantine Plurality. I have no idea whether that problem has been studied or results reported in the literature, but reasoning by analogy (always a tricky thing to do) with the Gibbard-Satterthwaite theorem, I would guess that synchronous BA with a plurality rather than unanimity constraint would be impossible in a deterministic way.

Despite all the interesting complexities with these two fields of study, one can definitively say that no completely robust solution to either BA or SC is possible without randomness. Faulty and/or malicious participants can always overwhelm honest participants to influence agreements and elections.  Without a coin to flip, there is no safe way for independent entities to reach consensus!

Saturday, March 3, 2012

Mobs, Teams, and Etho-Activism

With the right process and tools, a group of smart, hard-working people can be wiser and more productive than the sum of its parts.

A group of people can become a mob or a team.


One of the first Slamball teams, the Chicago Mob


A mob is collectively motivated by fear, superficiality, and competitiveness. Mobs are destructive. There are many examples of mobs; the ones in the U.S. Congress are particularly dangerous today.

A team is collectively motivated by productivity, consensus, and respect. Teams are productive. I have made my career building highly functional teams out of smart, opinionated, socially and ideologically diverse individuals.

The difference between a mob and a team is the right process and tools to reward productivity, reason, and consensus and to discourage divisiveness, grandstanding, and indulgence.

The difference is the Ethosphere.

With the 2012 U.S. election year upon us, there will be a great deal of discussion about national politics over the next few months. If these discussions take place, as they typically do, in coffee shops, bars, and in comments on news web sites, nothing whatsoever will come of them. Yes, it'd be great if each of us would use such discussions to formulate and refine our views and then send the result to our representatives and electors. But who has the time for that? And it's only one voter's opinion anyway.

What if, instead, there were an easy way for you and a few of your like-minded friends to have those discussions online, capture the results, and automatically email your collective resolutions to the appropriate representatives? Now it's not just one voter's opinion, but 5 or 15 who agree on something.

Even better, invite some of your non-like-minded friends to the team. Diverse opinions make for more lively discussions and with the Ethosphere machinery leading you toward and rewarding consensus, your team's collective decisions will likely be more well-rounded and, ultimately, hold more weight with their intended audience.

It will be a while before Ethosphere becomes the de facto mechanism for cities, municipalities, or even neighborhood associations to deliberate and make decisions. In the meantime, the same sort of bootstrap strategy described above can be used to get a few people working together and sharing their results with the empowered decision makers.

Suppose you and your neighbors want the neighborhood park cleaned up or new art placed in the lobby of your building. All you need is to start an Ethosphere teamspace and send email invitations to a few of your neighbors. Someone drafts a simple document with the details, costs, etc., of the plan, and you all discuss, comment, and perhaps amend it until everyone agrees, and then the proposal is emailed to the HOA board. Next time an issue or request comes up, the same HOA teamspace can be used, a few more neighbors are invited to join, and pretty soon, most of the official business of the HOA is happening online in the most efficient, and least taxing, way possible. The physical HOA meetings become a mere formality.

The Ethosphere is not about supporting existing political power structures like HOAs and municipal governments. It is about supporting the people who are disenfranchised by those existing institutions. The governance structures that have evolved in the physical world are archaic, time-consuming, and worst of all, exclusionary. The Ethosphere is specifically designed for the Internet as a single set of consensus mechanisms that scales to support online activism at all levels, from neighborhoods to nations.

The real target market isn't the HOA itself, it is the much larger group of folks who are annoyed or angry at their HOA boards but who don't have the time or organizational resources to do anything about it.

It's about activism, not governance, initially anyway. At first, Ethosphere's authority will stop any time it bumps against a lawyer. Existing bylaws and rules will not allow it to be used in any official capacity. But the solution to that is not to perpetuate these broken, non-scalable physical mechanisms. The solution is to support and encourage online activism around these official structures and let nature take its course. If it works as well as I think it will, then somebody will soon write a set of bylaws that will grant Ethosphere teamspaces legal authority as governance mechanisms at some level.

Friday, March 2, 2012

Privacy Through Multiplicity

Internet privacy is a big deal and will become an even bigger deal as web-based software to track clickstreams and collect personal information continues to become more effective. As an Internet user, I don't want web sites to be able to conspire to build and maintain a complete picture of my online activities. I want to be able to protect personal information about myself (e.g., credit card #, home address, business email, demographic info) from the prying eyes of web sites that I may casually visit. It would seem that Ethosphere would provide the ultimate in user privacy, since the privacy directive prevents a web site administrator (or anyone else) from discovering the true identity of a persona. However, this may not be enough.

Suppose I always use the same persona, named @UncleAlbert, within Ethosphere. During the day, @UncleAlbert works as a trusted financial advisor, but he is also interested in hang gliding and, being a single person, occasionally hangs out in a singles-only online club. @UncleAlbert also buys books from amazon.com, rents movies from netflix.com, and he owns a laptop he bought on ebay.com. By simply tracking @UncleAlbert's interests and activities on the Internet, an observer could learn a great deal about the person who "owns" @UncleAlbert in RL. The privacy directive would not allow disclosure of the person's real home address or age or gender, but it would still allow some pretty powerful inferences to be made regarding his or her lifestyle and buying habits.

Within the Ethosphere, this sort of inference-by-clickstream invasion of privacy could easily be thwarted by simply using many different personae. For example, if I create a new persona each time I log on to the network (assuming persona creation is cheap and easy), there would be no way for anyone to connect the behavior of one persona with any of the others, and each persona would therefore be completely anonymous. Unfortunately, this strategy would also make any kind of long term relationship, including business and commerce, impossible. Who would trust or choose to associate with an anonymous persona? Privacy is not the same as anonymity.

It is necessary, it seems, for personae to have associated, recognizable characters or personalities that persist across logins. Nobody would trust financial advice from @UncleAlbert (nor be willing to pay for it) without some kind of credentials or track record indicating he knows what he's talking about. Moreover, if the system maintains that reputation and provides it to clients or even competitors in the financial community, this doesn't really seem to raise any privacy concerns (even though it might be professionally harmful to @UncleAlbert). If a client chooses to record some praise or criticism of @UncleAlbert's work, it seems fair to @UncleAlbert and to other clients and potential clients that this bit of information be made available to them. On the other hand, if the system maintained and provided information about @UncleAlbert's other interests, say hang gliding, to clients or competitors, that does seem to violate reasonable privacy expectations (even though it might not harm @UncleAlbert in any way).

There is a basic tension between maintaining a persona's privacy and its identity. It may be that the best compromise is to compartmentalize one's personality and create several personae, each one representing an independent aspect of the real person. If @UncleAlbert is my financial advisor persona, then perhaps @buzz represents the hang glider enthusiast and @rex is the wild and crazy single guy. Each of these three personae might be individually known, recognized, and perhaps trusted in three separate socio-economic contexts, without compromising the privacy of any of them or their common owner.

Thursday, March 1, 2012

Voting Variants - Harmonic Range Voting

Although Ethosphere could implement many different voting procedures and allow a teamspace to choose from among them, there is one method, a hybrid of IRV and RV, that seems well suited to the online venue. We will call it Harmonic Range Voting (HRV). The word "harmonic" is borrowed from mathematics -- it is the name of the series 1 + 1/2 + 1/3 + 1/4 + ..., whose connection with the algorithm will become apparent. First, let's see how the procedure works.

The HRV ballot closely resembles the IRV ballot. It is a list of candidates in rank order, with the first choice candidate at the top. The list is divided by a dotted line. Any candidates listed below the line are considered to be either not suitable or of unknown merit. Candidates above the line are all deemed suitable, and their position on the list reveals their relative desirability in the opinion of the voter. This sort of ballot ranking is easy to implement as a drag-and-drop interface, and it transitions nicely from single-choice to multi-choice elections, and more generally, handles additional alternate props well. For a single-choice election, a yea vote is equivalent to placing the prop above the line while a nay is like placing it below the line. New alternate props are initially placed below the line, allowing the voter to consider them and, if desired, drag them above the line into their proper ranking.
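As a sketch, the ballot described above could be represented as two ordered lists separated by the dotted line. All names here (the `Ballot` class, the prop identifiers) are illustrative assumptions, not part of any Ethosphere specification:

```python
# Hypothetical sketch of an HRV ballot (names are assumptions, not a spec).
# Candidates above the dotted line are ranked, first choice at the top;
# candidates below the line are deemed unsuitable or of unknown merit.

from dataclasses import dataclass

@dataclass
class Ballot:
    above_line: list  # ranked candidates, best first
    below_line: list  # unsuitable / unknown-merit candidates (unranked)

# Single-choice election: a yea places the prop above the line, a nay below.
yea = Ballot(above_line=["prop-1"], below_line=[])
nay = Ballot(above_line=[], below_line=["prop-1"])

# A new alternate prop starts below the line until the voter drags it up.
b = Ballot(above_line=["prop-1"], below_line=["prop-2"])
b.above_line.append(b.below_line.pop())  # voter ranks prop-2 above the line
```

A drag-and-drop interface would simply move entries between (and reorder within) these two lists.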

Although this is essentially a ranked voting method, like IRV, the method of calculating a winner is more like RV. We assign each first place candidate a score of 100. Second place votes are only 1/2 as potent as first place votes, so they are given a score of 50. Third place votes are 1/3 as potent, so they get a score of 33.333..., and so on -- thus the harmonic series. From a mathematical and theoretical point of view, this is just RV with discrete ranges based on rank. But it does eliminate, or at least reduce, two of the drawbacks of RV mentioned above. First, it is not necessary to choose a subjective merit score for each candidate; you just need to decide which one you like best, which second best, and so on. Second, the paradox of electing a candidate that receives no first place votes is avoided, even in degenerate cases like the one outlined in the previous post. The harmonic weighting ensures no candidate can win without at least a few (more than two) first place votes. Similarly, no candidate can be elected strictly on the basis of third place votes unless there are at least a few first or second place votes for that candidate.
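The scoring rule just described can be sketched in a few lines of Python. The function name and candidate names are hypothetical; the 100-point scale and harmonic weights follow the text:

```python
# Sketch of HRV tallying under the assumptions above: each ballot is the
# list of candidates the voter placed above the line, best first, and a
# rank-k placement contributes 100/k points (100, 50, 33.333..., 25, ...).

from collections import defaultdict

def hrv_tally(ballots):
    """Return the total harmonic-range score for each candidate."""
    scores = defaultdict(float)
    for ranked in ballots:
        for k, candidate in enumerate(ranked, start=1):
            scores[candidate] += 100.0 / k
        # Candidates below the line simply contribute nothing on this ballot.
    return dict(scores)

ballots = [
    ["alice", "bob"],           # alice +100, bob +50
    ["bob", "alice", "carol"],  # bob +100, alice +50, carol +33.33
    ["alice"],                  # alice +100
]
print(hrv_tally(ballots))  # alice: 250.0, bob: 150.0, carol: 33.33...
```

Note that a candidate ranked second on every ballot still accumulates only half the score of a candidate ranked first on every ballot, which is what limits wins built purely on lower-place votes.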

Protection of Minorities

The goal of Ethosphere is to encourage larger, more vibrant teamspaces over smaller, fragmented, stagnant ones. An effective, but undesirable, way to reach consensus is to eject all members who don't agree with the majority, or make them unhappy enough so they leave on their own, perhaps to start smaller, more cohesive teamspaces. Such balkanization of teamspaces works against the overall utility of the Ethosphere and, in the limit, results in single-member teamspaces that are pointless and completely without influence. Therefore, we wish to select voting and consensus mechanisms that do not needlessly alienate the losing supporters of a contentious vote or series of votes. Rather, we want the procedure itself to help lead the team toward a kernel of consensus that maximizes the "happiness" of all the members while still allowing props that have significant majority support to be ratified.

The HRV procedure is one way in which this may be accomplished. By blending IRV, which favors extremist candidates, with RV, which strongly favors centrist props, we get a voting algorithm that admits compromise solutions, but only when they are needed -- such as when the leading choices are strongly polarized and evenly balanced.