A new academic study has found no direct link between the directory rankings of a group of barristers and the outcomes in legal cases.
Chris Hanretty, a politics lecturer at the University of East Anglia in England, presented the findings of his “Do lawyer rankings matter?” paper at the Socio-Legal Studies Association conference on March 26, 2013.
I didn’t attend the presentation, nor has the full paper been published, so my comments relate only to the slide deck summary on Mr. Hanretty’s blog and some brief discussions on Twitter.
Skepticism towards directories is nothing new, although this is an unusual study.
It focuses on a series of tax appeals between 1994 and 2011, and whether the outcomes of those cases correlate to the equivalent Chambers & Partners barrister rankings.
The study concludes that the ranking of a particular lawyer does not have a bearing on the outcome of a legal case, although “rankings might matter for other things”.
The abstract from the paper states:
“I examine the relationship between rankings of lawyers awarded annually by Chambers and Partners, and appellate outcomes in tax cases. I find that there is no relationship between being ranked at all, or being ranked higher, in a given year, and succeeding in litigation. This is true whether we examine raw frequencies of appellate success according to whether or not the appellate team is higher or lower-ranked, or if instead we carry out a logistic regression model of appellate success which models rankings alongside other covariates which might be thought to influence success.”
It is clear that legal directories struggle to stand up to this sort of technical scrutiny – the study uses statistical techniques like “geometric mean probability”.
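To make the method in the abstract concrete, here is a minimal sketch of a logistic regression of case outcome on a ranking indicator plus a covariate. The data, variable names (`ranked`, `respondent`), and effect sizes are entirely hypothetical, not the study's own; the simulation simply mirrors the paper's headline finding by making outcomes depend on the covariate but not on the ranking.

```python
# Hypothetical illustration only: simulate appeals where the outcome depends
# on a covariate but NOT on whether the barrister is ranked, then fit a
# logistic regression and inspect the coefficients.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
ranked = rng.integers(0, 2, n)       # 1 = barrister appears in the directory (hypothetical)
respondent = rng.integers(0, 2, n)   # example covariate: acting for the respondent

# Outcomes depend on the covariate only, mirroring the "no relationship" result.
log_odds = 0.4 * respondent
p_win = 1 / (1 + np.exp(-log_odds))
won = rng.binomial(1, p_win)

X = np.column_stack([ranked, respondent])
model = LogisticRegression().fit(X, won)
coef_ranked, coef_resp = model.coef_[0]
print(f"ranked coefficient:     {coef_ranked:.3f}")   # near zero
print(f"respondent coefficient: {coef_resp:.3f}")     # clearly positive
```

In a setup like this, the fitted coefficient on `ranked` hovers near zero while the covariate's coefficient recovers its true effect, which is the shape of the null result the abstract describes.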
In their defense, the directories would acknowledge that what they publish is inherently subjective and not intended to meet rigorous academic research standards.
They also might argue that the readers of their products are willing to accept the directories for what they are – a yardstick, but not definitive.
The opacity of directory research is both its strength and its weakness.
Clearly some good lawyers go unnoticed and unranked in the name of preserving the selectivity of the ranking tables.
On this point, the author of the study makes a reference to the “Matthew Effect”, a phenomenon where those already at the top acquire cumulative advantage.
Put another way, an accusation sometimes leveled at the directories is that “getting in is very hard, but once you’re in, you’re in.”
Many of the consultancy enquiries I receive are from lawyers and firms who have struggled to get listed by the directories and want someone to demystify the process and explain how it works and what they can do to get noticed.
There’s always more they can do process-wise, but, at the same time, it can be hard to explain to a lawyer, who clearly has an extremely good practice on any substantive level, why he or she has not made it into one of these publications.
Although it is a considerable achievement to be listed in the likes of Chambers & Partners, and indicates a solid level of market recognition, I also know from personal experience that the lack of a directory ranking does not mean you are a poor lawyer.
Speaking as a buyer of legal services, I have over the last few years hired a number of lawyers for a variety of business and personal reasons.
Some of the lawyers I used were recommended in prominent legal directories, some of them weren’t.
There was no obvious difference in quality between them, and in some cases the non-ranked lawyers were more suitable for my circumstances.
However, the directory research process deliberately allows the publishers a high degree of flexibility in terms of who they rank and how they rank.
Although they do factor in actual outcomes (lawyers working on the most challenging transactions, successes in court), a host of other factors come into play – feedback from clients, reputation in the market, the quality of submission material, and editorial judgment.
Without the flexibility to make such qualitative judgments, the rankings would see sharp swings from one year to the next and lose their continuity.
But that raises a question: should the rankings shift more radically from one year to the next, rather than moving at a glacial pace?
I’m sure the directories would respond that a measure of conservatism and a longer-term perspective are needed to guard against the lawyer who has an exceptional year working on a career-defining case, and then goes quiet for five years.
Let’s also not forget that the market is a large and broad one, and prospective buyers of legal services have numerous ways in which to investigate the credentials of a lawyer – lawyers’ own websites, social media, and a raft of third party directories and surveys. And speaking to people.
Given the difficulty in making judgments about lawyers in the form of a ranking or score, some directories highlight lawyers in less contentious ways.
And in contrast to the research-led directories, there are numerous organizations and titles which take a quantitative approach and measure law firm performance using hard data, whether it’s the M&A league tables, or specialist surveys showing numbers of patents filed by each law firm.
With technology on their side, buyers can consult a number of sources and build up a picture as to whether a lawyer is suitable for them.
I look forward to reading the full research paper when it’s published.
In the meantime, if any legal directory representatives or other commentators have a view on the study, please leave a comment below or email me.
Anon says
Winners vs. losers is surely not what it’s about. Both Abramovich and Berezovsky had fantastic counsel, but only one Russian could win, going on the merits of the case. There may even be the bias that fantastic lawyers might take up more ‘unwinnable’ cases, perform wonderfully, impress all who came within earshot of them, and in fact even win some. This has got to be especially true of appeals, which is what the study looked at. This seems like a crude measure to use, and a pretty dull topic for social sciences research, since it seems to conflate “success rate” with “excellence, in the forms identified by clients”.