Scaling the Tips of a Claimed Information Advantage: A Response to “Tipping the Scales: Balancing Consumer Arbitration Cases”

By Richard P. Ryder, Esq.* 

We reported in SAA 2023-12 (Mar. 23) on a newly released research paper that questions securities arbitration’s fairness and, among other things, recommends that list selection be eliminated. Tipping the Scales: Balancing Consumer Arbitration Cases was released February 23 by the Stanford Institute for Economic Policy Research (“SIEPR”). The core findings? “1) Arbitration is often assumed to be fair and balanced. But brokerage firms often hold an advantage over consumers. 2) Random selection of arbitrators would increase consumer awards by about $60,000 on average. 3) Industry-friendly arbitrators are 50 percent more likely to be chosen from the arbitrator pool than their consumer-friendly counterparts because securities firms are sophisticated at eliminating consumer-friendly arbitrators.” We encouraged readers with views on the report to submit a letter to the Alert’s editor. To our happy surprise, Rick Ryder has stepped in with this insightful “analysis of the analysis.” The words that follow are his.


One wrestles with just how to approach a critique of this statistical work. We first found it as a citation and link in the Securities Arbitration Alert and then read a “Short Brief” on the article in SAA 2023-12. The article, written by Stanford Professor Amit Seru, is short, which is at once its allure and its failing. Ultimately, the conclusory nature of the findings stated, the scarcity of data cited or explained, the tangential relevance of the references, and the vague allusions to recent research will irk the reader. Fortunately, this annoying quality also fortified the suspicion that this compact piece might well be a summary of a wholly separate, fully researched, robustly quantitative and carefully crafted working paper. So we went looking for that more professional work on the SIEPR website.

Tip of the Iceberg

We learned from SIEPR that this short work — the one discovered by SAA — is viewed as a “Policy Brief,” an academic device, evidently, that permits the virtues of abbreviation, both in size and explanation, to expand a scholarly work’s purview, arenas of influence and presence through economies of time and space. This is done by contracting the usual law review type of publication into a capsulized “Brief” and a quick read. The real research — the explanations of data collection, integrity/design decisions, and methodology — will be found in a 70-page paper that we ultimately located in the “Working Paper” section of the National Bureau of Economic Research website: Egan, Matvos & Seru, “Arbitration with Uninformed Consumers” (NBER 2021).

This discovery, of course, immunized the Seru article from the piercing barbs and sarcastic witticisms that I’d fabricated as a retort to the short piece. The research had been done, and no doubt thoroughly and professionally. So, what then? Examining in depth a 70-page article was beyond our objectives. Our concern was less with the Awards data and how it was dissected and more with the logic and conclusions of the Study. So, that’s where we’ll concentrate.

Worth Noting Up Front

First, though, there are a few troublesome statements in the Seru article that beg correction (and bespeak the author’s inexperience with the FINRA arbitration facility).

  • “We use representation by an attorney in this piece as equivalent to the consumer being informed.” The author divides consumers into these two categories — “informed” and “uninformed” — and finds only the latter disserved by FINRA list selection. We feel that, in imagining a large group of “uninformed” consumers — one big enough to justify the radical change he prescribes — Prof. Seru ignores the significant resources commonly available to consumers for finding representation or evaluating one’s panel.[1]
  • Seru writes that list selection at FINRA is “similar” to the “largest consumer arbitration forums (AAA and JAMS).” In truth, list selection at FINRA is vastly more complicated, and the system has been subject to special counsel and academic review repeatedly over the years.[2] At AAA, staff appointment of arbitrators prevails under the Consumer Arbitration Rules;[3] list selection is found only in more commercial-type cases — and staff are far more involved in, rather than consciously excluded from, the selection process.
  • In hypothesizing that competition exists among arbitrators, the author writes that “arbitration firms know they have a better chance of being selected and paid if they exhibit a more pro-industry stance” and that these “firms” will “tend to dominate the pool” and be “more successful.” Arbitrators are individuals, not “arbitration firms,” and “success” in being chosen (and thus “dominating the pool”) is an illusory concept in practice. FINRA arbitrators cannot even know they’ve been nominated until after they’ve been reviewed. In any case, the underlying premise that selection assures payment is not factual. If there is any big “payday” for the arbitrator, it requires reaching the hearing stage.


Eliminate List Selection to Benefit the Uninformed?

The original authors and Prof. Seru conclude that random selection of the arbitrators in FINRA cases would produce substantially larger consumer awards (some $60K per case on average) than the current list selection method does. The authors posit that this marked, provable disparity is caused by an information advantage enjoyed by industry participants (“The information advantage comes because securities firms carefully track the results of arbitration proceedings over time.” Seru, p. 1). They further project that competition among arbitrators for a seat on the arbitration (40%) conditions arbitrators to be industry-friendly. Thus, while all parties are accorded the perceived privilege of choosing one’s own arbitrators under FINRA Rules, ironically, the very process that supposedly refines the random nature of the original computerized selection through party participation cements in a substantial disadvantage to the consumer. I’ve gathered a few thoughts below.

Arbitrators Don’t Really Compete for Cases

The concept that arbitrators compete to be selected is, one has to recognize, just a reformulation of the old “saw” about arbitrators voting for the industry respondent to assure their acceptance the next time they are nominated — the “returning respondent” syndrome. The difference here is that this group of professors claims they have proved it quantitatively. There are logical attractions to the proposition that the industry that controls the forum of choice in the pre-dispute agreement, that participates in virtually every arbitration as a party, and that indirectly subsidizes the forum might have leverage with arbitrators, but, if the proposition is valid, as the trio of professors claim to have proved statistically, it’s not arbitrator compensation that’s the cause. It just doesn’t make sense.

In examining the competition proposition, the question becomes: how little does it take for an arbitrator — usually one with a long business career and an unblemished background — to skew her vote and compromise her integrity? Even with the honoraria increases that have taken place, very few FINRA arbitrators make even low five figures in a year. The arbitrations don’t last long enough (about 6 hearing sessions) and the frequency of service is too small. One of the authors’ misimpressions in this regard is that arbitrators get paid when they are appointed; not true: the bulk of their pay still relates to hearing service, and even then it’s not terribly generous. Indeed, 70% of an arbitrator’s appointments will settle without the arbitrator having the opportunity to soil her record for a fee. A hearing is generally required for that corruption to occur.

But competition among arbitrators for industry approval is what the authors claim to have demonstrated, and they reason that the industry advantage that springs, in part, from that phenomenon will only widen as time proceeds and arbitrators succumb to the rich temptations that await.

There Really Needn’t Be Any Information Disparity

The authors also claim an industry advantage based on greater information. Here, the authors concede that consumers are variously affected by this information advantage, with the “uninformed” consumer suffering most and the consumer represented by a knowledgeable attorney (Prof. Seru cites the PIABA member) least affected. The “information” that disadvantages consumers in FINRA arbitration, we surmise, is information about the arbitrators and how they are likely to vote. Perhaps this only means that industry parties are more willing to spend time and money to evaluate their arbitrators. In this case, the message to some Claimants’ attorneys might validly be that they can add tens of thousands on average to their winning Awards by bettering their methods of selecting the right arbitrators. Now, this is a proposition we can genuinely endorse, as a long-time believer in the adage that performing one’s due diligence on her arbitrator candidates is “nearly always the single most important choice confronting parties in arbitration” (Stipanowich et al. 2010).[4]

More on the Uninformed Customer

A couple of additional thoughts on the NBER Study’s “uninformed” consumer. One naturally anticipates, when there are comparisons of “uninformed” and “informed” groups, that there will be a large majority of “uninformed” individuals and a special few who are deemed “informed.” That does not seem to be the case at all here. The three Professors describe their representative sample as comprising about 5,000 consumer Awards decided between 1998 and 2019. If the Study reviewed just winning Awards, there were 7,000 during this period, and customers were represented by counsel in all but about 500.[5] That means far more “informed” consumers than “uninformed” consumers to consider before making a radical change that will benefit just the “uninformed” group. Professor Seru observes that the amounts in controversy, averaging about $750,000 (with a median of $240,000), provide “substantial incentives for the parties in arbitration.” So, the condition of this smallish group of “uninformed” consumers can be ameliorated, if not equalized, by gaining representation. We’d note further that the amounts involved in the average controversy offer “substantial incentives” that justify the time and cost of closing the information-advantage gap. Why, then — if the “uninformed” consumer’s numbers are small and she commonly has the wherewithal to help herself — do we need a systemic change? Is the lemon worth the squeeze, one asks?

Conclusion: Why Break What Works?

It’s more difficult to endorse the Stanford professor’s conclusion that an advantage gap disfavoring consumers warrants scrapping the system, at least once one recognizes its history. List selection has been in place and regularly enhanced (arguably improved) over the two decades-plus that it has existed. All indications today support a view that parties on both sides value party participation in making the arbitrator appointments. The FINRA list selection mechanism — its Neutral List Selection System (“NLSS”) — is a product crafted with care and party consultation. Over its long evolution, FINRA has consistently endorsed changes that bespeak a forum emphasis on giving the consumer a leg up. Change would be simple enough: a rule change and a curtailment of party participation. The NLSS is already configured to produce random lists for parties to review; there just wouldn’t be any review. But who will seriously call for it?

And why should they? Why tear down a popular system — one built on consensus and sacrifice — for a randomized alternative that rejects peremptory challenges as consumer-harmful and relies entirely upon limited challenges for cause to eliminate real conflicts? Why, especially, introduce such dramatic change when the underlying rationale relies upon benefiting just the “uninformed” consumer? (Prof. Seru writes: “Ironically, we found that policies that are intended to benefit consumers — such as increasing arbitrator compensation or giving parties more choice — would benefit informed consumers but hurt the uninformed.” Seru, p. 3). Shouldn’t diligence in the process — a willingness to prepare and to investigate — be rewarded? Isn’t change that favors the informed consumer in the arbitration process a more just path than the reverse — especially given her far greater prevalence?


*Richard “Rick” Ryder is the Founder and President of Securities Arbitration Commentator, Inc. (SAC), which owns ARBchek, an online facility for searching arbitrators’ Award histories. Mr. Ryder is a member of the SAA’s Editorial Advisory Board.


[1] To name a few: The PIABA “Find An Attorney” function and ListServ; free arbitrator reports from an expert service summarizing an arbitrator’s Award history; NASAA info-sharing and support; SAC’s ARBchek/ARBchek Lite reports, and the availability of the Securities Arbitration Law Clinics.

[2] See the Perino Report (Michael A. Perino, Report to the Securities and Exchange Commission Regarding Arbitrator Conflict Disclosure Requirements in NASD and NYSE Securities Arbitrations (Nov. 4, 2002)); the Ernst & Young audit; the SICA fairness survey (Jill I. Gross & Barbara Black, Perceptions of Fairness in Securities Arbitration: An Empirical Study (2008)); conclusions of special counsel (The Report of the Independent Review of FINRA’s Dispute Resolution Services – Arbitrator Selection Process (Jun. 28, 2022)); the near-elimination of the “non-public” arbitrator from customer panels; and the clear intention, when recruiting arbitrators, to substitute an objective neutrality for industry knowledge and experience.

[3] Used in customer cases.

[4] Conflict Alert: This writer’s company owns ARBchek Online, a service that produces reports of securities arbitrators’ Award histories.

[5] We drew these approximations from surveying the SAC Award Database.