Legal Education Digest

Herring, D J; Lynch, C --- "Teaching skills of legal analysis: does the emperor have any clothes?" [2012] LegEdDig 8; (2012) 20(1) Legal Education Digest 26


Teaching skills of legal analysis: does the emperor have any clothes?

D Herring and C Lynch

Law Studies Research Paper Series, University of Pittsburgh, No. 16, 2011, pp 1–36.

The Carnegie Report describes three pillars of legal education – the development of skills of legal analysis; practical lawyering skills; and professional identity. The authors of the Report are largely critical of current approaches to legal education, faulting law schools for failing to adequately prepare students for the practice of law. They find legal education to be especially lacking in the areas of practical skills and professional identity development.

In making their critique, the authors of the Carnegie Report assert that, at the least, law schools currently do one thing very well – use the case-dialogue method to teach legal analysis.

Based on a line of empirical studies, this paper questions the Carnegie Report’s conclusion on this point and reports on a study whose findings are relevant to an assessment of legal educators’ effectiveness in teaching legal analysis.

While the Carnegie Report indicates that the participants in legal education perceive a remarkable intellectual transformation, their perception of learning gains needs to be tested. Such testing will help determine whether the perception is grounded in fact, and if it is, when the learning gains occur, how they are achieved, and how legal educators can increase these gains.

The previous studies in this area indicate that basic skills of legal analysis comprise multiple aspects. This paper examines a particular aspect that each of the prior studies addresses at least to some degree, ‘cross-case/hypothetical reasoning’. One research team led by education psychologist Dorothy Evensen and communications researcher James Stratman provides a useful initial description of this particular aspect of legal analysis: [Law] students need to be able (a) to construct accurate representations of multiple, closely related cases, (b) to detect indeterminacies of interpretation arising between or among them, and (c) to distinguish more from less purpose-relevant questions about their relationships to each other.

The research team describes this particular skill of legal analysis as cross-case reasoning that addresses indeterminacies in text and meaning.

Another research team led by Kevin Ashley focuses on a closely related form of legal analysis – ‘hypothetical reasoning’. Hypotheticals are, in essence, ‘novel cases presenting new dilemmas’. These ‘novel cases’ are used, at least in part, to create ‘conceptual bridges between cases along a continuum’.

So described, cross-case reasoning based on textual indeterminacies and hypothetical reasoning are very similar. These similar forms of reasoning constitute the aspect of legal analysis that lies at the core of the inquiry addressed in this paper – cross-case/hypothetical reasoning.

To determine the feasibility of assessing selected analytical skills, David Bryden conducted a pilot study in the area of legal reasoning skills.

Bryden explained that, for purposes of his study, functional analysis calls on students to identify and draw on the purpose of a legal rule or category, as defined by a set of opinions and/or statutory provisions, in order to complete a lawyering task. Bryden’s hypothesis was that a good legal education would produce students who could at least recognise and articulate the possibility of resolving a relevant legal ambiguity by reference to the purpose of a legal rule or category.

To test this hypothesis, Bryden developed two examinations, each of which consisted of essay questions that presented students with a short set of facts that constituted a legal problem. The questions also provided students with a set of short hypothetical judicial opinions and/or statutory provisions.

Bryden compared two separate groups of students. The first group consisted of third-year law students enrolled in their last semester at three law schools who accepted an invitation to complete one of the two tests. The second group included incoming students at the same three law schools who were invited to participate based on their LSAT scores, which the researchers had selected in order to match, as much as possible, those of the students in the first group. Overall, the results of the study indicated that third-year law students were ‘nearly always more proficient’ in terms of functional analysis than the entering students. Thus, the study indicated that some incremental learning gains in this skill resulted from three years of legal education. However, Bryden noted that even for the third-year students, a clear majority of the essay answers failed to indicate any engagement in functional analysis whatsoever.

Bryden’s findings raise serious questions about the effectiveness of traditional legal education in teaching the skill of functional analysis.

Education researcher James Stratman has provided a wide-ranging critique of Bryden’s study. In the end, Stratman asserted that Bryden’s study is valuable not so much for its results (third-year students perform better than entering students) as for its support of an effort to develop empirical tests that can more effectively examine and assess the various skills of legal analysis.

Another research team that included Stratman and was led by Dorothy Evensen took up the effort to develop empirical tests. While the development of valid test instruments was the primary goal of their study, the Evensen/Stratman team found that the legal reasoning skills of second semester law students were no better than those of first semester law students.

In designing their study, the Evensen/Stratman team identified the study’s practical purpose: ‘to develop a prototype multiple-choice instrument assessing law students’ critical case reading and reasoning skills.’

Based on these testing goals, the researchers designed a test instrument that required students to read three cases that address the same procedural rule. There were significant indeterminacies of interpretation both within each case and across cases. The test-taker was given a specific purpose for reading and thinking about the cases – to prepare an appeal of a decision against the defendant in one of the three cases.

The researchers’ field test of the test instrument involved 161 first-year law students from five law schools who volunteered to participate, 81 of whom took the test in their first semester (fall 2003) and 80 of whom took the test in their second semester (spring 2004). The students who completed the test in the fall were not the same students who completed the test in the spring, but the students were matched across the two groups based on LSAT scores.

Phase 2 of the study involved the development of a second test that the researchers intended to be parallel, or equivalent, to the original test. The researchers recognised that the development of this second test would allow them to address a methodological weakness of phase 1 of the study related to the finding that students’ case reading and reasoning ability did not improve from the first semester to the second semester. The weakness was that phase 1 of the study had not tested the same students twice, once during the first semester and once during the second semester. The development of a second test would allow the researchers to use a superior within subjects design, testing the same students at two different points in time.

The researchers posed two specific research questions in the phase 2 field test that are relevant to the discussion here. First, ‘Do students’ case reading and reasoning skills improve between their first and second years in law school?’ Second, ‘Do students’ case reading and reasoning skills improve between their first and third years in law school?’

For the phase 2 field test, the researchers initially recruited 146 first-year students from the same five law schools included in phase 1. The researchers randomly assigned these students to take either the original version of the test or the new version of the test in the spring 2006 semester. Eighty-three of these students completed the full study protocol, taking the alternate version of the test in the fall semester of their second year (fall 2006). Forty-nine of these students took the original version of the test first and 34 took the new version of the test first. This procedure allowed for a within subjects comparison of reading and reasoning skills, as revealed by test performance, from the second semester of the first year to the first semester of the second year.

The phase 2 field test also involved 63 third-year law students who had participated in phase 1, and thus had completed the original test in their first year, either in the fall or the spring semester. These students completed the second test in their last semester in law school (spring 2006). This allowed for a within subjects comparison of reading and reasoning skills from the first year to the third year.

Consistent with the results from phase 1, the researchers found that students’ case reading and reasoning skills did not improve between their second and third semesters. In addition, the researchers found students’ skills, as measured by the tests developed by the researchers to date, did not improve between their first and third years of law school.

In contrast to Evensen and her colleagues, Kevin Ashley conducted a set of studies that addressed whether an educational intervention beyond traditional law school classroom instruction improved legal reasoning skills.

As to the primary focus of Ashley’s studies, the educational intervention was designed to help students learn to reason with hypotheticals in the context of personal jurisdiction doctrine through an intelligent tutoring system. The system helped students identify, analyse, and reflect on episodes of reasoning with hypotheticals in oral argument transcripts through the construction of simple diagrams.

Ashley’s hypothesis was that law students who used the program to diagram hypothetical reasoning would learn hypothetical reasoning skills better than law students who studied hypotheticals without the program’s diagramming support and feedback.

In the largest of his studies, Ashley required all 85 students in one section of the fall 2007 Legal Process course at one law school to participate.

The students were randomly assigned to one of two study conditions, balanced by LSAT scores. The experimental group received training through a graphical tutoring program that supported argument diagram creation and gave advice. The control group completed a text-based training program that did not provide feedback.

The results revealed no statistically significant differences between the experimental and control groups with respect to performance on the post-test. The results also indicated that neither group benefitted from the study in terms of improvement from pre-test performance.

Thus, the combination of traditional law school classroom case discussion of personal jurisdiction cases and either the diagramming program or the text-based program resulted in no significant educational gains in terms of hypothetical reasoning skill from pre-test to post-test.

The study reported in this paper is the first step in a larger project that seeks to extend the Evensen and Ashley studies by employing multiple-choice pre- and post-test instruments to measure learning gains for first-year law students in the area of cross-case/hypothetical reasoning skills. This pilot study is more focused than the previous studies in terms of course education goals, teaching methodologies, subject matter coverage, and timeframe. Namely, the study is conducted within the context of a single first-year, first-semester course that has an express goal of improving students’ skills of legal analysis, with an emphasis on the skill of cross-case/hypothetical reasoning. The study is centred on a single substantive law unit of the course – personal jurisdiction doctrine. The course completes this unit of study during the first six weeks of the semester. Thus, the study focuses on this discrete period of instruction, with students completing the pre-test in week 1 of their law school education and the post-test in week 7.

The study utilises pre- and post-tests derived from the multiple-choice tests developed by Ashley.

For the purposes of this study, we tested students in the first-year Legal Process (i.e., Civil Procedure I) course who constituted one section of the fall 2009 first-year class. The total section size was 77.

The students in the section were divided into two groups using balanced random assignment by maximum LSAT score. This assignment was designed to ensure that the subsequent comparison would not be affected by any differences in terms of an established measure of incoming competence between the groups. Ultimately, due to absences and attrition, 71 students, 37 in group 1 and 34 in group 2, completed the entire study.
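
The digest does not spell out how the balanced random assignment was carried out. Below is a minimal sketch of one common way to implement assignment balanced on a covariate: students are paired by their highest LSAT score and each pair is split at random between the two groups. The function, field names and pairing approach are illustrative assumptions, not the authors’ actual protocol.

import random

def balanced_assignment(students, seed=2009):
    """Split a roster into two groups balanced on maximum LSAT score.

    `students` is a list of (student_id, max_lsat) pairs. Students are
    sorted by LSAT score, taken in consecutive pairs, and one member of
    each pair is randomly placed in each group, so the two groups end up
    with closely matched LSAT distributions.
    """
    rng = random.Random(seed)
    ordered = sorted(students, key=lambda s: s[1], reverse=True)
    group1, group2 = [], []
    for i in range(0, len(ordered), 2):
        pair = ordered[i:i + 2]
        rng.shuffle(pair)
        group1.append(pair[0])
        if len(pair) > 1:
            group2.append(pair[1])
    return group1, group2

# Example with a hypothetical roster of 77 students
roster = [("student_%d" % i, random.randint(150, 175)) for i in range(77)]
group1, group2 = balanced_assignment(roster)

Pairing on the covariate before randomising keeps the groups matched on LSAT while preserving random allocation within each pair.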

Each group followed the same study process. Students began by taking a pre-test designed to assess their reading and reasoning abilities. They then received six weeks of in-class instruction in the subject area of personal jurisdiction, followed by the post-test. Apart from the tests (as explained in the Results section below), the two groups received the same instructional experience.

For the purposes of this study we constructed two multiple-choice tests, each centred on an oral argument to the United States Supreme Court in a personal jurisdiction case. One test was based on an argument in Kathy Keeton v. Hustler Magazine, [1984] USSC 54; 465 U.S. 770 (1984), while the other was constructed around an argument in Calder v. Jones, [1984] USSC 53; 465 U.S. 783 (1984). Both tests were designed by legal experts and intended to be of equal difficulty.

Students taking the tests first read some short background about the case followed by an extract of the pertinent oral argument. They then answered questions designed to test their reading and reasoning skills. Each test consisted of 10 questions, seven of which were cross-case, indeterminate questions as described above and three of which were single-case, determinate questions.

A sample cross-case, indeterminate question from the Keeton test is shown below (with a hypothetical constituting one of the cases in the cross-case comparison). Here the student is required to assess a hypothetical that was raised by a Justice during the oral argument in light of a legal test proposed by the advocate:

Assume that [the test proposed by Mr Grutman, the petitioner’s attorney] is as follows:

If the state long-arm statute is satisfied and defendant has engaged in purposeful conduct directed at the forum state out of which conduct the cause of action arises, and that conduct satisfies the minimum contacts under which substantial justice and fair play make it reasonable to hail defendant into court there, and the forum state has an interest in providing a forum to the plaintiff, then the forum has personal jurisdiction over the defendant for that cause of action.

Please check ALL of the explanations that are plausible.

Hypothetical: ‘Just to clarify the point, that would be even if the plaintiff was totally unknown in the jurisdiction before the magazine was circulated?’ [i.e., suppose the plaintiff was totally unknown in the state before the magazine was circulated. Would personal jurisdiction over Hustler Magazine lie in that state?]

a) The hypothetical is problematic for Mr Grutman’s proposed test. The decision rule applies by its terms, but arguably the publisher should not be subject to personal jurisdiction in the state under those circumstances.

b) The hypothetical is not problematic for Mr Grutman’s proposed test. The decision rule applies by its terms, and the publisher should be subject to personal jurisdiction in the state under those circumstances.

c) The hypothetical is problematic for Mr Grutman’s proposed test. The decision rule does not apply by its terms, but arguably the publisher should be subject to personal jurisdiction in the state under those circumstances.

d) The hypothetical is problematic for Mr Grutman’s proposed test. The decision rule applies by its terms, but publishers would then be subject to personal jurisdiction even in a state where [plaintiff] suffered no injury.

By contrast, a single-case determinate question from the Calder test is shown below. In this question the student is asked to identify the central legal issue presented in the argument made in Calder v. Jones:

What legal issue concerning Calder and South’s appeal in Shirley Jones v. Iain Calder and John South did the Justices address in the oral argument excerpt? Select the best answer below.

a) In determining whether the courts in a state may exercise personal jurisdiction over a defendant, should the court consider the defendant’s travel to the forum state that was unrelated to the events giving rise to the suit?

b) May courts in a state exercise personal jurisdiction over an out-of-state individual who actively participated in the investigation and production of a nationally distributed magazine story about an in-state plaintiff?

c) Should an employee be entitled to heightened protection from personal jurisdiction when his or her employer is a defendant, concedes jurisdiction in the forum, and has the capacity to pay a damage award?

d) Should a defendant in a libel suit be immune from personal jurisdiction in a distant forum because of First Amendment free speech concerns?

Group 1 began with the Keeton test while Group 2 began with the Calder test. Both groups received the same in-class instruction from the same instructor before taking the post-tests, with Group 1 taking the Calder test as a post-test and Group 2 taking the Keeton test.

Five of the ten multiple choice questions on each exam were single-answer questions of the type shown in the Calder example above. That is, they asked students to select the only correct answer or the best answer. The remaining five questions on each test were multi-answer questions of the type shown in the Keeton example above where students were asked to pick all plausible answers.

When assessing students’ learning gains we considered both their raw learning gain (raw gain) and their normalised learning gain (NLG). Raw gain scores are computed by subtracting a student’s pre-test score from his or her post-test score.
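
The digest defines raw gain but does not reproduce the formula for the normalised learning gain. The sketch below computes both measures, using the common normalised (Hake-style) gain, raw gain divided by the maximum possible improvement from the pre-test, as an assumed definition rather than one taken from the paper.

def raw_gain(pre, post):
    """Raw learning gain: post-test score minus pre-test score."""
    return post - pre

def normalised_gain(pre, post, max_score):
    """Assumed normalised learning gain (Hake-style): the raw gain as a
    fraction of the maximum improvement still available at pre-test."""
    return (post - pre) / (max_score - pre)

# Example with hypothetical scores on a 10-point test
print(raw_gain(4, 6))             # 2
print(normalised_gain(4, 6, 10))  # 2 / 6, approximately 0.33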

While it is clear that some students gained from the experience, it is not clear that the gain was evenly spread across the entire student population. Moreover, as indicated by the NLG scores, while many of the students improved, the population as a whole did not achieve substantial learning gains despite the fact that, given their pre-test scores, they had room for significant improvement.

In terms of both test performance and learning gains, there were no statistically significant differences between the groups.

Overall, there was no significant positive movement in the development of reasoning skills once the students’ post-test performance was examined relative to how much they could potentially improve based on their benchmark pre-test scores. This finding is consistent with prior studies that have found a lack of significant learning gains in terms of law student reading and reasoning skills.

The study found that gender and undergraduate GPA were not significant factors in predicting either performance on the tests or learning gains.

For the single-case, determinate questions, the results indicated that there were no significant raw learning gains or normalised learning gains. For the cross-case indeterminate questions, there were significant raw learning gains under both the basic grading rubric and the even grading rubric. In addition, while there was not a significant normalised learning gain under the basic grading rubric, there was a significant, albeit minor, normalised learning gain under the even grading rubric. These divergent results as to learning gains provide some additional support for Evensen’s and Stratman’s finding that these two question types test different skills.

In summary, the findings of the studies in this area to date indicate that, while legal education does not diminish law students’ skills of reading and reasoning, neither does it significantly enhance these skills.

