
Journal of Law, Information and Science


Tiscornia, Daniela --- "Meta-reasoning in Law: A Computational Model" [1993] JlLawInfoSci 24; (1993) 4(2) Journal of Law, Information and Science 368

Meta-reasoning in Law: A Computational Model

by DANIELA TISCORNIA[*]

Abstract

The legal decision-making process can be reproduced by a model in which the deductive structure that justifies the decision is integrated with other formal aspects of legal reasoning, capable of representing the process of recognising the applicable norms. The concept of applicability is strictly related to the dynamic evolution of normative systems, which creates anomalous situations such as incompatibility (normative conflicts) or legislative gaps (lacunas). The solution of such phenomena requires meta-level criteria that must be formalised and correctly handled.

Key-words: computational model of legal reasoning, nonmonotonic reasoning, analogical reasoning, metaprogramming techniques.

Summary

Introduction. 1. Definition of normative conflicts. 2. Formal structures for conflicts solving: non-monotonic logic. 2.1. Classification of the approaches to non-monotonic reasoning. 2.2. The extension of inference. 2.3. Theory construction. 2.4. Multiple extensions. 3. Theory revision. 4. Dealing with conflicts as a process of metareasoning. 5. The analogical process from a logical point of view. Conclusion.

Introduction

Both legal theory and jurisprudence distinguish, in the analysis of legal reasoning, the phase of interpretation of norms from the phase of their application. The distinction is conceptually clear if we accept a meaning of the norm understood as an output of the interpretation (of normative sentences): this means that such an activity is necessary and preliminary to application.

On a pragmatic level, the two aspects cannot be so clearly differentiated, especially in that part of the process of normative application that consists in "identifying the set of norms applicable to the case at hand". This phase is based on the analysis of the normative system in a logical/syntactical perspective, as compared with interpretation, which mainly involves semantic aspects. Often, however, the two levels come to overlap and intertwine, so that it is difficult to preserve the distinctions found in legal authority. In a descriptive model of the legal decision-making process, set out in a deductive framework, the identification of the applicable normative subsystem is the moment for choosing and verifying the validity of the premises. This approach assumes a restricted meaning of validity, called systematic validity,[1] which lends itself to being expressed in a logic-based computational model. The objective of this article is to analyse the legal decision-making process for the purpose of obtaining a model in which the deductive structure that justifies the decision is integrated with other formal aspects of legal reasoning; and to specify, through such an analysis, the contents of the concept of applicability, by attempting, therefore, to keep the interpretative and evaluative aspects separate from the rational aspect. On the level of computational techniques, several proposals and results will be analysed, looking for solutions that can be adopted in practical applications.

The concept of applicability is strictly related to the dynamic evolution of normative systems: the introduction of new norms into a system of complex logical/normative relations creates anomalous situations that lawyers must analyse and solve in the phase of identifying the norm (or norms) to be applied: in particular, situations of incompatibility (normative conflicts) or legislative gaps (lacunas). The tools offered to the lawyer for solving these phenomena are defined by the legal order itself through metanorms, in other words, norms that have other norms as their object.

The theoretical models of normative systems generally take into account the distinction between levels of knowledge. In particular, within the traditional differentiation[2] between primary norms, which have human behaviour as their subject, and the wider, vaguer category of secondary norms, metanorms are considered to belong to the latter (norms of change, according to Hart's classification).[3] These provide the legal sources on which to base the analysis of the two processes that are specifically analysed here: the normative conflict-solving process and reasoning by analogy.[4]

On the level of formal representation, they have some similar and some complementary aspects; on the cognitive level they may be placed in the category of plausible reasoning, to which experts both in Cognitive Science and in Artificial Intelligence have given great attention, because it constitutes a large part of ordinary reasoning.

1. Definition of normative conflicts

In civil (statutory) law normative systems, the new norms that are added to the system provide for a defined space-time area of applicability. When, in relation to a case, two norms may be applicable as alternatives, we talk about a normative conflict; logically, we define two norms as inconsistent when the fact situations of both may be satisfied by the case at hand and their consequences are inconsistent.

Along with the logical concept of inconsistency, we will broaden the definition to include cases of legal inconsistency (for example, a contract cannot be both void and voidable at the same time) and pragmatic inconsistency (for example, where the norms provide for two contradictory kinds of behaviour: to sleep and to stay awake).[5]

The criteria for conflict solving are, in the Italian legal order, expressed in the "Preleggi" of the Civil Code, in particular art. 15:

"Laws are not abrogated except by later laws in accordance with the express declaration of the legislator, or due to inconsistency between the new and the previous provisions or because the new law regulates the entire matter already regulated by the previous law".

Legal authority classifies them as follows:

- the hierarchical criterion: lex superior legi inferiori derogat;

- the chronological criterion: lex posterior legi anteriori derogat;

- the speciality criterion: lex specialis legi generali derogat;

to which are to be added the criteria that regulate spatial conflicts, more precisely defined as conflicts between legal orders, which are the object of international law and which we do not take into consideration here.

The interrelationships between the criteria are not completely defined but are integrated by two supplementary principles:

- the later general law does not abrogate prior special norms: lex generalis posterior legi speciali non derogat;

- unless the new legislation aims at regulating ex novo the subject matter by eliminating all the pre-existing norms (absolute norms).

Furthermore, it should be noted that the speciality relation, unlike the chronological and hierarchical criteria that are common to groups or categories of norms (for example, the norms of a statute or of a code), may be understood both as a relation between two norms (rule/exception relation) or, more widely, in relation to the content of the norms and to the domain that is regulated (as, for example, fiscal laws are considered special as compared with contractual rules). Solving the conflict means that the weaker norm is to be considered abrogated where the fact situation of the abrogating norm completely covers the sphere of applicability of the abrogated norm; a norm is derogated where there is a partial overlapping between fact situations; in this case the derogation produces a reduction in the sphere of applicability of the derogated norm.

2. Formal Structures for Conflicts-Solving: Non-Monotonic Logic.

It is necessary to explain why, in the Introduction, the process of normative conflict solving was placed in the class of plausible reasoning, that is, reasoning that does not allow a strict verification of formal validity, but only evaluations of degrees of plausibility.

The normalisation of a part of a normative system into a logic model is usually defined as an operation of translating normative, interpreted, sentences into logic formulas that constitute the axioms of the theory; from these, by applying the inference rules of that logic, theorems are inferred (in legal terms: it is verified that a given decision is the legal consequence that follows deductively from the norms chosen as premises). If, therefore, we can think of a logic model of a normative subsystem as a theory of classical logic, the properties of completeness and consistency required by the theory contrast with a legal reality of inconsistency (conflicts between norms) and incompleteness (gaps). Above and beyond the complex problems of legal theory, we shall limit our analysis to the definition of a model, as close as possible to legal reality, which, at the same time, can be expressed in computational terms.

Inconsistency of the normative system, in particular, can be overcome either by selecting a consistent subset of it (the normative system in force[6]), obtained by applying hermeneutic or meta criteria for conflict solving, or by accepting a notion of defeasible logical consequence that derogates from the monotonicity of classical logic.[7] Processing normative conflicts may, therefore, be considered an exemplification of reasoning processes with incomplete knowledge or common sense, analysed through non-monotonic formal structures.

One of the typical cases of non-monotonic reasoning is that in which there are exceptions to general knowledge: a piece of information (either a belief or a rule) that is valid generally and in normal conditions is understood as being assertable (or inferable) in the absence of information about the existence of exceptional conditions and, consequently, it is defeated from the time the abnormal event comes into being or is recognised.

In the law, an exception is made by the legislator when he believes it advisable to limit the range of the general norm in the presence of more specific fact situations. Consequently, the general norm is applicable in the absence of information about the applicability of the exception.

Before analysing some of the approaches to dealing with exceptions, we should distinguish between weak and hard exceptions:[8] that is, between exceptional circumstances that, where they occur, produce the effect of blocking the application of the general norm (weak exceptions), so that nothing can be inferred in relation to the situation at hand, and exceptions (hard exceptions) that produce the verification of the effect complementary to that provided for by the general norm. The corresponding legal distinction between (weak) exceptions to norms and (hard) exceptions to the effects can be found in [Sartor 90].

The exception to the effect generally provides for a negative consequence (is not responsible, is not punishable, is not capable of acting...) that makes an exception to the whole class of norms that provide for that effect; for example:

art. 52 Criminal Code: Whoever has committed the act, having been forced by the necessity of defending his rights or those of others against the actual danger of an unjust violation, is not responsible, provided that the defence is proportionate to the violation.

The exception to the norm that we shall call exclusion of applicability, refers to a specific norm; it may be explicitly expressed by the legislator, for example:

art. 473 Civil Code: Acceptance of an inheritance may only be devolved with the benefit of inventory ... This article does not apply to companies.

In many cases, the above mentioned distinction depends on the interpretation (in a weak or hard sense) of expressions such as "provided that...","except...", etc. For example:

art. 1626 Civil Code: The lease terminates for the interdiction, incapacity or insolvency of the tenant, unless suitable guarantees are given to the landlord for the exact performance of the duties of the tenant.

A hard interpretation of the expression "unless" in our example would attribute the meaning of a hard exception to the sentence introduced by it (in other words, in the case where suitable guarantees are presented, the contract is not terminated); while a weak interpretation would be limited to blocking the application of art. 1626 when guarantees are given.

2.1. Classification of the Approaches to Non-Monotonic Reasoning

Various criteria have been used for classifying the approaches to nonmonotonic reasoning: the traditional distinction between syntactical and semantic approaches and the (almost coinciding) one between credulous and sceptical approaches. Another distinction, based on properties more relevant for the law, refers to the nature of the initial assumptions: a) the precondition is the consistency of the premises and the objective is to extend the derivable consequences (theorems); b) the starting point is the inconsistency of the premises and the aim is to identify which theorems can be inferred from a consistent subset of them.

In any case, both approaches have to consider cases in which multiple (mutually conflicting) consequences can be inferred by default, or more than one consistent subset can be constructed; according to the methods of selection used in these situations, a transversal criterion distinguishes approaches based on exception clauses from approaches where a wider criterion of priority between formulas is fixed.

2.2. The Extension of Inference

We shall cite two of the approaches based on the principle of extension of inferences: default logic and logic programming. Both, it seems to us, permit the relations between general rule and exception to be dealt with, but not the introduction of more general criteria of priority.

At the basis of default logic (DL) there is the assumption that it is possible (and necessary) to draw inferences by using available knowledge, provided that the newly inferred knowledge is consistent with the known facts, in other words, that there is no information contrary to the inferred conclusions. In order to express this idea, a new rule of inference is used that, in the DL formalism, is expressed by the structure:

A : C

------------------------

C

otherwise written as:

A : C / C

which we read as follows: "if A (prerequisite) can be proved and it is consistent to assume C (justification), C can be inferred". Given a default theory T made up of a set of defaults D[9] and a set of facts W (closed formulas of first order logic), an extension is constituted from all the formulas obtained by applying the defaults that can be added to W while maintaining consistency. In general a default theory permits more than one extension: from this derive a concept of strong validity (a formula is in all the extensions) and of weak validity (a formula is in at least one extension). To express the exceptions (hard exceptions), seminormal defaults like the following are required:

A : B ∧ C

----------------

C

For example, art. 1626 can be translated:

non_complying(tenant) : ¬gives_guarantees(tenant) ∧ terminates(contract)

----------------------------------------------------------------

terminates(contract)

In fact, it means blocking the general default when the conditions of exception (B) are proved, but also being able to use the general default when we know nothing about the conditions of exception.

The conditions of exception cannot, therefore, appear (negated) in the prerequisite of the general default (which must be proved to be true), but in the justification. When these conditions are true, the general default cannot be used (in fact, it is no longer consistent to assume their negation) and the specific default is applied, thus avoiding having two defaults and, therefore, two conflicting extensions.

Seminormal defaults have problems at the level of proof theory,[10] due to the fact that they are not semimonotonic;[11] another disadvantage is that, if a more specific default (a new exception) is added, the general default must be rewritten, as all the exceptional facts must be listed in the justification.

Logic programming has proved to be a particularly suitable structure for analysing non-monotonic reasoning, firstly because of the implementation in Prolog, the most popular language based on logic programming, of a form of non-monotonic negation (negation as failure: NAF). Proposals for handling some of these aspects in the context of logic programming can be found in [Kowalski 1989],[12] where it is shown that clauses with a negative conclusion can be dealt with as constraints and, therefore, eliminated through transformation; and in [Kowalski-Sadri 1990],[13] where the exceptions are expressed by using clauses with a classically negated consequent.[14] It is shown how, by assuming the priority of the clauses with negative head over clauses with positive head, it is possible, through transformation, to obtain an equivalent logic program, for example:

terminates(contract) :- non_complying(tenant).

¬terminates(contract) :- gives_guarantees(tenant).

and then:

terminates(contract) :- non_complying(tenant), not(¬terminates(contract)).

and finally:

terminates(contract) :- non_complying(tenant), not(gives_guarantees(tenant)).
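To make the transformation concrete, the following is a minimal, self-contained Prolog sketch of the final program above; the sample fact describing the case is hypothetical and is added only so that the query can be run.

:- dynamic gives_guarantees/1.

% General rule after the transformation: the contract terminates when
% the tenant is non-complying, unless guarantees have been given.
terminates(contract) :-
    non_complying(tenant),
    \+ gives_guarantees(tenant).    % negation as failure (the not(...) above)

% Hypothetical facts of the case at hand.
non_complying(tenant).
% gives_guarantees(tenant).         % uncomment to see the exception block the rule

% ?- terminates(contract).
% succeeds with the facts above; fails once gives_guarantees(tenant) is asserted.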

2.3. Theory Construction

The alternative approach considers the situation of inconsistency as generated not by the incompleteness of the knowledge but by "too much" knowledge (and, as has been said, a normative conflict may be seen as a situation in which two norms or sets of norms are applicable to the same case).

In order to do this it is necessary to change the concept of inference, substituting it with that of provability: instead of extending the derivable theorems, it means reducing the number of derivable formulas to those that can be proved. A formula can be proved if it is possible to build a consistent argument based on the most reliable hypotheses. In practice, the non-monotonic inference can be used to solve the inconsistency, by selecting, within a set of inconsistent norms, a consistent subset on which to base the decision.[15] Argument (or theory) construction is the same as formulating hypotheses (or argumentations) that explain (or justify) a given statement. Such an operation is very close to the process with which a lawyer brings legally grounded arguments for sustaining his case: both the advocate when he wishes to convince the judge to make a decision favourable to his client, and the theorist in arguing in favour of a particular theoretical position, and, in part, even the judge, when he reaches his decision by, in effect, evaluating the plausibility of the opposing arguments of the parties and, therefore, by choosing the one that is better grounded.[16]

In contrast with Default Logic, which introduces a new rule of inference, Poole's approach[17] stays within the sphere of classical first order logic, but divides the formulas into facts (true) and defaults: the defaults are possible hypotheses in theory construction (maximal consistent subsets); a scenario is a consistent set of formulas obtained by adding to the facts all the defaults that can be added while maintaining consistency. Given:

D = defaults (open formulas, standing for all ground instances of their variables)

F = facts (closed formulas)

D + F = scenario (consistent)

A formula is explainable if there is a scenario that implies it (it should be noted that the set of all the logical consequences that can be derived from a scenario corresponds to an Extension in DL, and that Poole's defaults correspond to Reiter's defaults without prerequisites, that is, in the form : a / a).

In order to express the exceptions, constraints are introduced that block the application of a default when there is new information.[18] In practice, they function as the seminormal defaults of DL, where every default becomes a hypothesis (or assumption) and every condition found in the justification is considered abducible. The technique of naming is also used, so that defaults can be referred to through atomic formulas, e.g.:

D(art.1626): non_complying(tenant) → terminates(contract).

Constraints: ¬(gives_guarantees(tenant) ∧ terminates(contract))

(or, interpreting art. 1626 as a weak exception:)

Constraints: gives_guarantees(tenant) → ¬D(art.1626)

The greatest problem within Poole's framework is the impossibility of expressing priorities within the theory, except by using a large number of constraints [Gordon 1989].
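By way of illustration, the following is a minimal propositional Prolog sketch of this style of explanation; the naming of the default and the handling of the constraint are simplifications (a single blocked/1 check rather than general consistency checking), and all identifiers are taken from, or modelled on, the example above.

% Hypothetical facts of the case.
fact(non_complying_tenant).
% fact(gives_guarantees_tenant).

% Named default (hypothesis): applying art. 1626 lets us conclude
% termination when the tenant is non-complying.
default(art_1626, terminates_contract, non_complying_tenant).

% Weak-exception reading: the default can no longer be assumed
% once guarantees have been given.
blocked(art_1626) :- fact(gives_guarantees_tenant).

% A formula is explainable if it is a fact, or if it is the conclusion
% of a default whose premise is explainable and which is not blocked.
explainable(P) :- fact(P).
explainable(P) :-
    default(Name, P, Premise),
    explainable(Premise),
    \+ blocked(Name).

% ?- explainable(terminates_contract).
% succeeds; fails once fact(gives_guarantees_tenant) is added.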

2.4. Multiple Extensions

The problem of multiple extensions (or scenarios), as we have seen, is solved by introducing an (internal) criterion of speciality or, as in the approaches that will be analysed here, by defining an (external) criterion of priority through which to order the formulas of the theories.

Brewka's approach[19] is a generalisation of Poole's defaults:[20] compared with Poole, we go from two levels:

- refutable and possibly inconsistent defaults;

- true and consistent facts;

to a partition of the knowledge within a theory on more than one level, where every level contains first order formulas at different degrees of reliability (from the bottom to the top).

A preferred subtheory is a maximal consistent subset obtained by adding to the lower level formulas all the formulas that can possibly be added while retaining consistency:

Def: given a theory T divided into levels T1...Tn, S = S1 ∪ ... ∪ Sn is a preferred subtheory if and only if, for all k (1 ≤ k ≤ n), S1 ∪ ... ∪ Sk is a maximal consistent subset of T1 ∪ ... ∪ Tk. Starting from this structure, it is possible to define, analogously to the extensions of DL and Poole's scenarios, two concepts of provability:

- a formula is strongly provable if it can be inferred from all the maximal subsets;

- it is weakly provable if it can be inferred from at least one subset.[21]

In this way, a criterion of priority can be expressed on the basis of the reliability of the knowledge, by putting the more specific formulas on a lower (more reliable) level, and by assuming that the facts have a higher priority.[22] This, therefore, allows the speciality relation between formulas to be expressed, and exceptions in the strict sense to be dealt with, placing the formula that contains the exception on a higher level of priority than the general norm. It should be noted that the approach based on preferred subtheories, in contrast with DL, allows exceptions to exceptions to be dealt with, because the knowledge that is inconsistent with new beliefs is not eliminated but only put on a lower level of plausibility, which allows it to be restored where the new knowledge is, in turn, eliminated.

Example of an exception to an exception:

462 Civil Code: All those who were born or conceived at the time the succession was opened are capable of succeeding.

463 Civil Code: Whoever has voluntarily killed or attempted to kill the person whose succession is involved is excluded from the succession as unworthy.

466 Civil Code: Whoever has been found to be unworthy is admitted to the succession when the person whose succession is involved has expressly enabled him with a public deed or with a will.

T1: unworthy(X) ∧ enabled(X) → capable_of_succeeding(X).

T2: unworthy(X) → ¬capable_of_succeeding(X).

T3: born_conceived_time_succ(X) → capable_of_succeeding(X).

where T1, T2, T3 are the levels of the theory, set out in decreasing order of priority.
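The following Prolog sketch gives a rough operational reading of this ordering for the succession example: a rule applies unless a more reliable (lower-level) rule yields the complementary conclusion. It is only an approximation of preferred subtheories, and the case facts are hypothetical.

% Rules tagged with a priority level (1 = most reliable), following T1-T3;
% neg(P) stands for the classical negation of P.
rule(1, capable_of_succeeding(X),      [unworthy(X), enabled(X)]).
rule(2, neg(capable_of_succeeding(X)), [unworthy(X)]).
rule(3, capable_of_succeeding(X),      [born_conceived_time_succ(X)]).

% Hypothetical facts of a case.
fact(born_conceived_time_succ(mary)).
fact(unworthy(mary)).
fact(enabled(mary)).

complement(neg(P), P) :- !.
complement(P, neg(P)).

holds(P) :- fact(P).
holds(P) :-
    rule(L, P, Body),
    holds_all(Body),
    complement(P, Q),
    \+ overridden(L, Q).           % no more reliable rule yields the opposite

overridden(L, Q) :-
    rule(L2, Q, Body), L2 < L, holds_all(Body).

holds_all([]).
holds_all([G | Gs]) :- holds(G), holds_all(Gs).

% ?- holds(capable_of_succeeding(mary)).
% succeeds via the level-1 rule (the exception to the exception), which
% overrides the level-2 exclusion of the unworthy heir.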

In the case of weak exceptions, it is necessary to use constraints as Poole does, by dividing every level of the theory into two sets of formulas, defaults and constraints; the latter are not included in the subtheories, but only make a subtheory inconsistent where both p and the constraint ¬p are found.

In effect, compared with Poole, preferred subtheories enable all the formulas to be dealt with uniformly, without considering any premises irrefutable. The more general approach considers different levels of reliability of the premises; the more specific ones permit the levels to be created by establishing a partial order over the premises.

An approach always based on the construction of argumentation, but within the formal structure of DL can be found in [Prakken 91a, Prakken 91b].

The idea is to express non-monotonicity by means of the rule of inference, thereby avoiding the problems (Modus Tollens, contrapositives) created in Poole by the adoption of the whole of standard logic; the theories are constructed by applying defaults and by verifying step by step which formulas are invalidated, until the maximal consistent subsets are identified and ordered on the basis of a general relation of priority.

In [Eshghi, Kowalski 1989],[23] the construction of theories in the logic programming paradigm is approached in an abductive context, where the choice between the different abductive hypotheses (seen as defaults) is guided by special constraints. The possibility of mapping general logic programs into an abductive structure enables the general rule/exception relationship to be transformed into the formulation of hypotheses: the general rule is in force as long as the facts provided for in the exception are not shown to be true; in other words, the abductive hypotheses remain valid as long as the constraints are not violated.

3. Theory Revision

Another approach to normative conflicts could be within the scope of theory revision. Having been developed within the field of epistemic logic,[24] theory revision deals with problems caused by the addition of new knowledge to a set of presumably consistent beliefs.

One of the better known approaches[25] presumes that a consistent epistemic state (the set of beliefs, or theory) is logically equivalent to a deductively closed set of logical formulas; the changes produced in the belief set by the elimination of knowledge, the addition of new knowledge, or the introduction of knowledge inconsistent with the old must preserve the consistency of the theory. The criteria on which the revision is based are those of rationality (retaining consistency) and economy (the changes and the eliminations of beliefs are minimal).[26]

It means, therefore, eliminating as few beliefs as possible and, on the basis of an ordering criterion, those that have less epistemic relevance (entrenchment). Compared with non-monotonic logic, where the knowledge that cannot be invalidated must be established a priori, theory revision considers all beliefs as potentially defeasible, offering, therefore, a more general framework.

The two approaches, that based on non-monotonic logic and that based on theory revision, have many points in common on a logical and methodological level,[27] but they start off, as we have already mentioned, from different theoretical premises. The choice between them, for the purposes of dealing with aspects of legal reasoning, is guided by the differences emerging from an analysis of legal activities. Where we are considering, as in this case, the problem of selecting the normative system that is applicable to a case, a perspective common to both the work of the judge and that of the lawyer, the most natural reference seems to be the derivation of plausible conclusions through the selection of premises that are consistent within a theory (the set of all the norms); the objective is not to re-establish consistency but to obtain a logically valid (and, therefore, logically grounded) justification for the decision.

From a legal point of view, theory revision is closer to the phenomenon of legislative drafting: a definition of the legal system as the set of all statutory norms, plus all the norms that can be inferred from them,[28] fits well with a model made up of logically closed theories. Furthermore, the normative system in force is the system of norms that have validly been enacted and from which all the norms explicitly or implicitly abrogated have been removed. It becomes obvious how consistency is a requisite of this system, and how a structure becomes necessary that enables consistency to be restored after the addition of new sentences.

4. Dealing with Conflicts as a Process of Metareasoning

As the analysis has pointed out, the exploitation of priority criteria over the premises (or the inferences) constitutes one of the most crucial tasks; in the proposals mentioned above just one ordering criterion is taken into consideration, whether it is that of speciality or a general criterion of priority (whatever its meaning). In law, instead, as we have said, there is more than one criterion for choosing between conflicting norms, and they stand in such relations that they can, in turn, be in conflict.

It means, therefore, defining a structure that manages and controls the relations between the criteria, solves any conflicts between them and, therefore, applies the selected criterion to solving the normative conflict. It is evident that this requires making a distinction between object knowledge and metaknowledge, and a further definition of metaevaluation rules that manage both kinds of knowledge.

Another aspect to consider is that in law we often find metastatements inserted in the object language, such as the expressions that introduce citations and cross references ("based on the definition in art. ...", "with the exception of that which is provided for in art. ...", etc.), relative and absolute presumptions ("it is presumed, unless the contrary has been proved", "it is deemed..."),[29] the exclusion and (as we shall see with regard to analogy) the extension of applicability. This requires the ability to amalgamate object predicates and metapredicates in the knowledge base.

The logic programming paradigm is considered a representation formalism well suited to the legal domain; it also seems a suitable framework for handling the multilevel features of legal knowledge. In fact, even if it does not provide solutions on a general level, it enables us to overcome some of the obstacles found in classical logic and to obtain good results on an application level.

The use of metaprogramming techniques for introducing extensions in general logic programs has considerable advantages, both formal and computational (" This conservative approach to semantics has the advantage that seeming extensions of the logic are not extensions at all, but are defined in terms of metatheoretic relationships between object level theories."[Kowalski 1990] ).[30]

In [Nute 1988] we have a concrete application of these considerations, through the definition of a logic (meta)program (written in Prolog) that reproduces non-monotonic reasoning. The main aspects relate to:

- the elimination of the NAF from the object language, where a predicate neg is introduced that reproduces the classical negation;

- the interpretation of the defeasible rules of the object program as rules of inference, rather than axioms of the theory; this enables us to distinguish the facts (atomic formulas) and the absolute rules, that is, those that cannot be invalidated (absolute knowledge, for example, that contained in the taxonomic structures, definitions, classifications, etc.);

- the definition of an incompatible predicate that enlarges the concept of logical inconsistency to pragmatic situations of incompatibility.

In [Guidotti, Mariani, Sardu, Tiscornia 1992] a similar approach is described, applied to normative conflicts.

Characteristic of the proposal is the partition of the knowledge into theories that correspond to uniform fragments of legislation (a statute, a regulation, or a subset of norms of a Code), and the creation of additional theories that contain specific information, in particular:

- temporal and hierarchical data relating to the individual theories;

- the normatively and pragmatically incompatible concepts, expressed through constraints.
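By way of illustration, this additional information might be represented by Prolog facts such as the following; the theory names, ranks and dates are hypothetical.

% Hierarchical and temporal data attached to each theory
% (a theory groups a uniform fragment of legislation).
theory_rank(constitution,   1).       % 1 = highest hierarchical level
theory_rank(statute_law_52, 2).
theory_rank(regulation_13,  3).

theory_date(statute_law_52, 1975).
theory_date(regulation_13,  1990).

% Normative and pragmatic incompatibilities expressed as constraints.
incompatible(void(Contract),     voidable(Contract)).
incompatible(must_sleep(Person), must_stay_awake(Person)).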

The idea on which the approach is based is to distinguish the solution of exceptions within a single theory from the solution of conflicts between different theories. The former is carried out by using, in an interpretive way, the transformation provided for in [Sadri, Kowalski 89], in such a way as to retain the explicit representation of the exceptions at the object level.

The conflicts generated between norms belonging to different theories are solved through metaevaluation rules that apply the conflict-solving criteria and use the information (hierarchical and temporal data) linked to each theory.

The meta-interpreter carries out the following sequence of steps:

for every Goal the interpreter:

(a) will identify whether a norm exists, whose consequent matches the Goal;

(b) will verify the possible existence of a metapredicate inapplicable(Norm), for eliminating the case of exclusion of applicability;

(c) will verify whether the Goal is present in the theory Constraints, in which case it will extend the search to the second term of the constraints;

or:

(d) will attempt to prove the Goal and its negation for identifying an eventual exception or conflict.

In the event that search (c) or (d) is successful:

(e) will resolve the exceptions within the same theory, by applying, in the case of incompatibility established by the constraints, partial evaluation techniques for identifying the more specific facts of the case;[31]

or:

(f) will apply the hierarchical principle, by choosing the norm with the higher hierarchical level (lex superior legi inferiori derogat);

or:

(g) will recognise any absolute norms,[32] in which case it will be capable of solving the conflict by applying the absolute norm;

or:

(h) will apply the principle of speciality: (lex generalis posterior legi speciali non derogat);

or:

(i) will apply the temporal criterion (lex posterior legi anteriori derogat).
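A much simplified Prolog sketch of such a meta-interpreter is given below. It covers only the choice between two conflicting norms from different theories (steps (a), (b), (d), (f), (h), (i)); intra-theory exceptions, the constraints theory and absolute norms are omitted, and the norm/4 representation, the example norms and all metadata are illustrative assumptions, repeated here so that the fragment is self-contained.

:- dynamic inapplicable/1, more_specific/2.

% norm(Id, Theory, Head, Body): object-level representation of a norm.
% Hypothetical example: a statute provides for termination, a later
% regulation provides for the opposite effect when guarantees are given.
norm(n1, statute_law_52, terminates(contract),      non_complying(tenant)).
norm(n2, regulation_13,  neg(terminates(contract)), gives_guarantees(tenant)).
fact(non_complying(tenant)).
fact(gives_guarantees(tenant)).

% Hierarchical and temporal metadata (hypothetical, as in the sketch above).
theory_rank(statute_law_52, 2).
theory_rank(regulation_13,  3).
theory_date(statute_law_52, 1975).
theory_date(regulation_13,  1990).

solve(true) :- !.
solve((A, B)) :- !, solve(A), solve(B).
solve(Goal) :- fact(Goal).
solve(Goal) :-
    norm(N, T, Goal, Body),
    \+ inapplicable(N),                 % step (b): explicit exclusion
    solve(Body),
    \+ defeated(N, T, Goal).            % steps (d), (f), (h), (i)

% A norm is defeated if a conflicting norm is applicable and preferred.
defeated(N, T, Goal) :-
    opposite(Goal, NegGoal),
    norm(N2, T2, NegGoal, Body2),
    N2 \= N,
    \+ inapplicable(N2),
    solve(Body2),
    preferred(N2, T2, N, T).

opposite(neg(P), P) :- !.
opposite(P, neg(P)).

% Ordered criteria: hierarchy is decisive when the ranks differ, then
% speciality (an explicit more_specific/2 fact), then the temporal criterion.
preferred(_, T2, _, T1) :-
    theory_rank(T2, R2), theory_rank(T1, R1), R2 =\= R1, !, R2 < R1.
preferred(N2, _, N1, _) :- more_specific(N2, N1), !.
preferred(N2, _, N1, _) :- more_specific(N1, N2), !, fail.
preferred(_, T2, _, T1) :-
    theory_date(T2, D2), theory_date(T1, D1), D2 > D1.

% ?- solve(terminates(contract)).
% succeeds: the statute outranks the regulation, so norm n1 is not defeated
% (and solve(neg(terminates(contract))) fails for the same reason).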

5. The Analogical Process from a Logical Point of View.

For reasons of space, it is not possible here to analyse the structure of analogical reasoning in depth;[33] because of the difficulties in logically grounding such a process,[34] it appears more correct to consider that reasoning by analogy falls within the class of plausible reasoning. As with common sense reasoning, the approach to the analysis of plausible reasoning is substantially based on the search for criteria for evaluating plausibility and, therefore, for justifying arguments.[35]

It is obvious that such a criterion should be found mainly in the exploitation of the similarity relation, which, in the legal domain too, constitutes the core of the reasoning process. We need, therefore, a formal framework in which the reasoner uses a general equality assumption that, by providing domain specific instances, introduces additional premises to be used in the inference process.

A popular A.I. approach[36] refers to a bottom-up search for similarity: starting from the representation of the set of properties, the reasoner fires a matching process, where the roughest, computationally inefficient criterion grounds the plausibility of the inference on the number of matching elements: the more elements match, the closer the similarity is, and the more plausible the analogy. This opens up several crucial methodological and computational questions, dealing with the criteria for indexing and comparing instances.

When, instead, similarity is given as a starting point and the equality hypothesis is a priori stated and used for inferring new knowledge (top-down), the traditional quantitative approach can be substituted with a more expressive qualitative approach. Given such a general framework, it is therefore possible to graduate the concept of equality itself, introducing metacriteria for comparing relations, objects, properties...

In a legal perspective, the top-down approach allows the ratio legis concept to be represented, preserving both of its meanings: in the first of them, analogy is based on a pragmatic relationship between legal properties (qualifications); if in the new situation the same relevant properties hold as in a "normative" one, that constitutes the logical and legal ground for asserting the same legal qualification for the new case. A computational interpretation of that meaning can be found in the "projection" of properties, as in [Pollock 1987], or in the "determination theory" of [Davies Russel 1987]:

Determination rules are based on two assumptions:

(*) " x ( P(x) -> Q(x)) Z (P(x) -> Q(x))

(that is, each element having properties P may or may not have property Q)

This allows Q(T) to be inferred from the known properties of S and from the additional statement:

P(T) ∧ P(S) ∧ Q(S)

The second assumption defines a function F which determines the value of G.:

(**) " x , y, (F(x) = F(y)) -> (G(x) = G(y))

(if two situations have a common set of properties we can infer that both have one further common property ).

The implementation in logic programming proceeds in the following phases:

- given any property Q(T,s) (= find the value of that property or conclusion for the Target):

1) find the determination rule, that is the metarule in the form:

P(x,y) determines Q(x,y) (=find relevant facts)

2) find whether the relevant fact holds in Target: P(T,y)

3) find the Source where the fact holds: P(S,y), where S ≠ T

4) find the value of the conclusion in S: Q(S,z)

5) attribute this value to T : Q(T,z)

The resolution process is, therefore, fired by the two basic rules:

P(x,y) determines Q(x,y).

P(T,y) ∧ P(S,y) ∧ S ≠ T ∧ Q(S,z).
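A minimal Prolog sketch of these phases follows; the determination rule and the facts (the stock nationality/language illustration) are hypothetical and serve only to show how the value of the conclusion is carried over from the Source to the Target.

% Determination metarule: the relevant property P determines the
% conclusion Q (phase 1).
determines(nationality_of, language_spoken).

% Known instances: bob is the Source, ann the Target.
holds(nationality_of,  bob, italy).
holds(language_spoken, bob, italian).
holds(nationality_of,  ann, italy).

% Phases 2-5: the relevant fact holds in the Target, a distinct Source
% shares it, and the Source's value of the conclusion is attributed
% to the Target.
analogical(Q, Target, Value) :-
    determines(P, Q),
    holds(P, Target, Y),
    holds(P, Source, Y),
    Source \= Target,
    holds(Q, Source, Value).

% ?- analogical(language_spoken, ann, V).
% V = italian, by analogy with bob.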

A more complex meaning of ratio legis indicates that the exploitation of conceptual elements needs the abstraction, from individual situations (and from legal qualifications provided by the norm), of a wider normative principle, in which it is possible to subsume the new situation. The similarity relation is then justified by the fact that both the given and the new situation pertain to that general class. It is clear that such a process would require a deeper investigation of interpretation and must be supported by the analysis of previous cases.

In the following example, following the RF naming mechanism, we use: # for predicate meta-variables, $ for term meta-variables, angle brackets < > for names of predicates, and quoted names (" ") for names of atoms.

Example: John purchases goods from Bill who, in turn, had purchased the property by means of a contract subject to a terminating condition (if the condition comes about, the contract terminates). If the condition occurs and the contract is terminated, there is no norm, in Italian law, that regulates the legal position of the third party purchaser. According to jurisprudence and case law, the soundest position involves extending to the case at hand the principle of "protection of the third party purchaser a non domino in good faith".

The difference between the case of John and the normative situation lies in the fact that Bill was the actual owner. The analogy is therefore drawn between the "owner on the basis of a title subject to a suspending condition" and the "non owner". It is then appropriate to represent, at a higher level, the ratio as meaning "the relationship between a general normative scope, a factual situation, and a stated norm (if there is one) that should reach that normative scope": ratio(Scope, #Situation, $Norm).

E.g.:

ratio(protection_of_third_party_in_good_faith, <purchaser_a_non_domino>, "art._1153")

ratio(protection_of_third_party_in_good_faith, <purchaser_conditional_owner>, $X).

That allows:

- a similarity relation to be asserted by instantiating the metarule:

similar(#X,#Y,$N) :- ratio(#Z,#X,$N), ratio(#Z,#Y,_).

similar(<purchaser_a_non_domino>, <purchaser_conditional_owner>, "art._1153").

From the above metaknowledge, the meta rule and the formalised representation of art. 1153:

art.1153 CC: Whoever has moveable goods transferred to him by a person who is not their owner, acquires property in them by possession, provided that he is in good faith at the time of their consignment and that there is a good title for the transferring of the property.

acquires_property (X,Goods):-

purchaser_a_non_domino (X),

has_possession(X,Goods),

is_in_good_faith(X),

good_title_acquired(X,Goods).

the new norm for solving the case is created by the meta interpreter:

acquires_property(john,goods):-

purchaser_conditional_owner(john),

has_possession(john,goods),

is_in_good_faith(john),

good_title_acquired(john,goods).
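A minimal Prolog sketch of how a meta-interpreter might exploit this metaknowledge is given below. The ground representation is simplified with respect to the RF naming scheme used above (ordinary terms instead of names), the norm is reduced to a head and a list of conditions of which the first identifies the situation, and all the case facts are hypothetical.

% Ratio metaknowledge: normative scope, situation, stated norm (if any).
ratio(protection_of_third_party_in_good_faith, purchaser_a_non_domino,      art_1153).
ratio(protection_of_third_party_in_good_faith, purchaser_conditional_owner, none).

% Two situations are similar when they share the same ratio; the norm
% stated for the first can then be extended to the second.
similar(S1, S2, N) :- ratio(Scope, S1, N), ratio(Scope, S2, _), S1 \= S2.

% Simplified representation of art. 1153 (the situation is the first condition).
norm(art_1153, acquires_property(X, Goods),
     [situation(X, purchaser_a_non_domino),
      has_possession(X, Goods),
      is_in_good_faith(X),
      good_title_acquired(X, Goods)]).

% Hypothetical facts of John's case.
fact(situation(john, purchaser_conditional_owner)).
fact(has_possession(john, goods)).
fact(is_in_good_faith(john)).
fact(good_title_acquired(john, goods)).

prove(G) :- fact(G).
prove(G) :- norm(_, G, Body), prove_all(Body).
prove(G) :-                                    % analogical extension of a norm
    norm(N, G, [situation(X, S1) | Rest]),
    similar(S1, S2, N),
    fact(situation(X, S2)),
    prove_all(Rest).

prove_all([]).
prove_all([C | Cs]) :- prove(C), prove_all(Cs).

% ?- prove(acquires_property(john, goods)).
% succeeds: art. 1153 is extended by analogy to the conditional owner.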

Moreover, it should be noted that the introduction of the meta-predicate similar(X,Y) allows us to treat in a uniform way simpler situations, for example when a stronger relation of analogy has been explicitly established by the legislator. E.g.: the case of separation without fault:

art. 548: The spouse who has not been blamed for the separation in a final judgement ... has the same succession rights as a non- separated spouse.

In this case, we have a generic reference to a series of provisions, which enables us to define a direct relation of similarity:

similar("spouse_separated_without_blame", "spouse", $X).

and to create new analogous norms; e.g., the case at hand is whether such a spouse is entitled to the family home, on the basis of the reference to art. 548(2):

art. 548(2): The spouse is reserved the right of living in the house used as the family home ... if it was owned by the dead spouse or jointly.

Norm n. 540:

has_right(X,family_home) :- spouse(X).

creates, by analogy, the norm:

has_right(X,family_home) :- spouse_separated_without_blame(X).

The use of metaprogramming techniques for the representation of the analogical process in law has already been proposed in [Hamfelt, Barklund 1989] and [Hamfelt, Barklund 1992]. It seems a good choice for many reasons: a) from the point of view of the legal domain, there is a direct and natural correspondence between metaprogramming and the nature of the metanorms regulating analogy, as well as with the metalogical (deductive) justification of the analogical inference; b) from a logical point of view, the metaprogramming paradigm is coherent with the criterion of validating the inference by the introduction of high level additional premises; c) on a computational level, the definition of metaevaluation rules allows the inference strategies to be changed or updated (e.g. introducing or augmenting determination rules, heuristic rules, etc.).[37] A further consideration is the possibility of coherently handling, through such a multilevel structure, similar and connected legal phenomena (as in the case when analogy is explicitly stated, and with the a contrario argument), defining a legal/logical ordering between them.

Conclusion

As we have mentioned, when working on a "logical model" of a system of norms, the treatment of incompatibility between norms, like the process of reasoning by analogy, can in a certain way be brought down to a verification of the consistency and completeness of a theory. On this basis it seems, therefore, acceptable to apply the methodologies of logical systems to the law. We have especially analysed proposals for solving in metatheoretical terms some of the problems linked to the processing of plausible reasoning, such as analogical and non-monotonic reasoning.

On the legal level, the approach seems acceptable with regard to the naturalness and fidelity of the representation: from the analysis, it is evident that the concepts of applicability, speciality or priority, similarity and defeasibility are metaconcepts and, as a consequence, require representation through metaknowledge. One particular feature is the intertwining, in such a structure, of metaknowledge about the processes and metaknowledge about the contents, which requires deeper investigation into their nature and the relations existing between them. Another valuable aspect on the application level is the utilisation of the technique of modularisation of knowledge and, therefore, of the principles of composition of theories:[38] this respects the partition of knowledge into sets of norms with common characteristics and properties and, operationally, allows general information to be automatically associated with the information relating to the single norms.

In view of extending the explicit representation of law and the declarative use of logic programming techniques, the introduction of plausible inferences in a logic framework should allow us to move from the fixed interpretation of law,[39] inherent in rule- and logic-based formalisms, to a wider structure, able to capture flexible representations of norms and of different sources of law; and to broaden the computational range of reasoning processes, from a deductive justification of the decision towards a support of the process of argument construction.[40]

Bibliography

[Ashley 1990]: Kevin Ashley, Modeling Legal Argument, MIT press, 1990.

[Alchourron, Gardenfors, Makinson 1985]: Carlos E. Alchourron, Peter Gardenfors, David Makinson, On the Logic of Theory Change: Partial Meet Functions for Contraction and Revision, Journal of Symbolic Logic, 50, 1985.

[Alchourron 1981]: Carlos E. Alchourron, Ordinamento normativo e abrogazione, in Preproceedings of the First International Conference on Logica, Informatica, Diritto, Florence, 1981, pp. 3-57. Engl. transl.: Normative Order and Derogation, in Deontic Logic, Computational Linguistics and Legal Information Retrieval, A.A. Martino ed., North-Holland, 1982, pp. 51-64.

[Brewka 1989]: Gerhard Brewka, Preferred Subtheories: An Extended Logical Framework for Default Reasoning, Proc. IJCAI 1989, pp. 1043-1048.

[Brewka 1990]: Gerhard Brewka, Belief Revision in a Framework for Default Reasoning, Proceedings of the Workshop on the Logic of Theory Change, Konstanz, 1989, Springer Verlag, 1990, pp. 217 ff.

[Brewka 1991]: Gerhard Brewka, Nonmonotonic Reasoning: Logical Foundation of Commonsense, Cambridge University Press, Cambridge,1991.

[Brogi Mancarella Pedreschi Turini 1990] : A.Brogi, P.Mancarella, D.Pedreschi, F.Turini, Composition Operators for Logic Theories, in Proc. of Computational Logic, Symposium,(Lloyd ed.), Springer Verlag Brussels, 1990.

[Carcaterra 1974]: Gaetano Carcaterra, Le norme costitutive, Milano, Giuffrè, 1974.

[Costantini, Lanzarone 1989]:Stefania Costantini, Gaetano Lanzarone, Analogical Reasoning in Reflective Prolog, in Preatti del Terzo Convegno Internazionale su Logica, Informatica Diritto, Firenze, 1989, pp. 117- 136.

[Costantini Lanzarone 1991]: Stefania Costantini, Gaetano Lanzarone, Metalevel Representation of Analogical Inference, in Trends in Artificial Intelligence (Ardizzone, Gaglio, Sorbello eds.), Springer Verlag, 1991, pp. 460-464.

[Davies Russel 1987]: T.R. Davies and S.J. Russel, A Logical Approach to Reasoning by Analogy, in Proc. of IJCAI-87, Morgan Kaufmann Publ., Los Altos CA, 1987.

[Doyle 1979]: Jon Doyle, A Truth Maintenance System, Artificial Intelligence, 12, 1979.

[Gardenfors 1988]: Peter Gardenfors, Knowledge in Flux, MIT Press, Cambridge, MA, 1988.

[Gardenfors, Makinson 1988]: Peter Gardenfors, David Makinson, Revision of Knowledge Systems Using Epistemic Entrenchment, in Proceedings of the Second Conference on Theoretical Aspects of Reasoning about Knowledge, Morgan Kaufmann, Los Altos, 1988.

[Gelfond, Lifschitz 1988]: M. Gelfond, V. Lifschitz, The Stable Model Semantics for Logic Programming, in Proc. AAAI 1987, pp. 207-211.

[Gelfond, Lifschitz 1990]: M. Gelfond, V. Lifschitz, Logic Programs with Classical Negation, in Proc. 7th Int. Conf. on Logic Programming, 1990, pp. 579-597.

[Gordon 1989]: Thomas Gordon, Issue Spotting in a System for Searching Interpretation Spaces, in Proc. of the Second Intern. Conf. on Artificial Intelligence and Law, ACM, New York, 1989.

[Guidotti, Mariani, Sardu, Tiscornia 1992]: Paolo Guidotti, Paola Mariani, Giuseppe Sardu, Daniela Tiscornia, Metalevel Reasoning: The Design of a System to Handle Legal Knowledge Bases, in Proceedings of the 7th Italian Conference on Logic Programming (GULP 92), Città Studi Editore, Milano, 1992.

[Hall 1989]: Roger P. Hall, "Computational Approaches to Analogical Reasoning: A Comparative Analysis", in Artificial Intelligence, 39, 1989, pp. 39-120.

[Hamfelt - Barklund 89]: Andreas Hamfelt, Jonas Barklund, Metalevels in Legal Knowledge and their Runnable Representation in Logic, in Preatti del Terzo Convegno Internazionale su Logica, Informatica Diritto, Firenze, 1989, pp. 557- 576.

[Hamfelt Barklund 92] : Andreas Hamfelt, Jonas Barklund, Hierarchical Representation of Legal Knowledge with Metaprogramming in logic, in Proc. of First Compulog-Net Workshop, Imperial College, London, 1992.

[Hart 1961]: H.L. Hart, The Concept of Law, Clarendon, Oxford, 1961.

[Kalinowski 1965]: Georges Kalinowski, Introduction à la logique juridique, Paris, Pichon & Durand-Auzias, 1965. Trad. it.: Introduzione alla logica giuridica, Giuffrè, Milano, 1971.

[Kowalski 1989]: Robert A. Kowalski, The treatment of negation in logic programs for representing legislation, Proceedings of the Second International Conference on Artificial Intelligence and Law, ACM, 1989, pp. 11-15.

[Eshghi Kowalski 1989]: K. Eshghi, Robert A. Kowalski, Abduction Compared with Negation by Failure, Proceedings of the 6th Int. Conf. on Logic Programming, 1989, pp. 234 ff.

[Kowalski Sadri 1990]: Robert A. Kowalski, Fariba Sadri, Logic Programming with Exceptions, Proceedings of the 7th Int. Conf. on Logic Programming, MIT Press, 1990, pp. 598-613.

[Kowalski 1990] Robert A. Kowalski, Problems and Promises of Computational Logic, Compulog, 1990.

[Martino 1981]: Antonio A. Martino, Abrogazione di norme e decidibilità sintattica negli ordinamenti giuridici, in Preproceedings of the First International Conference on Logica, Informatica, Diritto, Florence, 1981, pp. 626 ff. Engl. transl.: Derogations of Norms and Decidability in Legal Orders, in Deontic Logic, Computational Linguistics and Legal Information Retrieval, A.A. Martino ed., North-Holland, 1982, pp. 39-50.

[Nute 1985] : Donald Nute, A nonmonotonic Logic based on Conditional Logic, ACMC Research Report n. 01-007, University of Georgia, 1985.

[Nute 1988]: Donald Nute, Defeasible reasoning: a philosophical analysis in Prolog, in Aspects of Artificial Intelligence, 1988.

[Perelman Olbrechts-Tyteca 1958]: Ch. Perelman, L. Olbrechts-Tyteca, Traité de l'argumentation. La nouvelle rhétorique, Paris, PUF, 1958. Trad. it.: Trattato dell'argomentazione. La nuova retorica, Torino, Einaudi.

[Pollock 1987]: J.L. Pollock, Defeasible Reasoning, in Cognitive Science, vol. 11, 1987, pp. 481-518.

[Poole 1988]: D.L. Poole, A Logical Framework for Default Reasoning, Artificial Intelligence, 36:1 (1988), pp. 27-47.

[Prakken 1991a]: Henry Prakken, A Tool in Modelling Disagreement in Law: Preferring the Most Specific Argument, in Proc. of the Third Int. Conf. on Artificial Intelligence and Law, ACM Press, 1991, pp. 165-174.

[Prakken 1991b]: Henry Prakken, A Formal Theory about Preferring the Most Specific Argument, Report, Vrije Universiteit, Amsterdam, 1991.

[Reiter 1980]: R. Reiter, A Logic for Default Reasoning, Artificial Intelligence, 13, 1980, pp. 81-132.

[Rescher 1964]: Nicholas Rescher, Hypothetical Reasoning, North-Holland, Amsterdam, 1964.

[Routen 1989]: Tom Routen, Hierarchically organized formalizations, in Proc. of the Second Int. Conf. on Artificial Intelligence and Law, ACM Press, New York, 1989, pp. 242 - 250.

[Routen Bench-Capon. 1991] Tom Routen, Trevor Bench-Capon, Hierarchical formalizations, in International Journal Man- Machine Studies, 35, 1991, pp. 69 - 93.

[Sartor 1991]: Giovanni Sartor, The Structure of Norm Conditions and Nonmonotonic Reasoning in Law, in Proc. of the Third Int. Conf. on Artificial Intelligence and Law, ACM Press, 1991, pp. 155-164.

[Searle 1969]: John R. Searle, Speech Acts. An Essay in the Philosophy of Language, London, Oxford University Press, 1969. Trad. it.: Atti linguistici . Saggio di filosofia del linguaggio, Boringhieri,Torino, 1976.

[Skalak Rissland 1992]D.B. Skalak, Edwina Rissland, Arguments and Cases: An Inevitable Intertwining, in Artificial Intelligence and Law, 1, 1992.

[Sergot 1991]: Marek Sergot, The Representation of Law in Computer Programs, in Knowledge Based Systems and Legal Applications, Trevor Bench-Capon (ed.), Academic Press, 1991, pp. 3-68.

[Tiscornia 1993]: Daniela Tiscornia, Un modello computabile del concetto di applicabilità normativa, in Informatica e Diritto, Firenze, 1993, forthcoming.

[Wroblewski 1982]: Jerzy Wroblewski, Tre concetti di validità, in Rivista trimestrale di diritto e procedura civile, 36, 1982, pp. 584-595.

[Wroblewski 1990] : Jerzy Wroblewski, Computers and the Consistency of Law, in Informatica e diritto, n. 2, Le Monnier, Firenze, 1990.

A shortened version of this paper has been presented at the DEXA 93 Conference in Prague.

The expression "systematic validity" was coined by J.Wroblewski in [ Wroblewski 1982] and repeated in [Wroblewski1990, p. 11]: "Consistency referring to a set of rules in a legal dis-course.... in the wide meaning of the term(W-consistency) refers to the lack of conflicts between rules beginning to this set....The basic conception of validity for statutory (civil)law systems is the systemic validity. Roughly speaking, a rule is valid in the system in question if and only if a) is enacted by competent law-making agency and is applicable in defined spatial temporal dimension; b) is not derogated; c) is W-consistent with other valid rules; d) if it is W-consistent it either does not loose the validity on the strength of the conflict of law rules or is interpreted in a way eliminating the inconsistency in question."

The distinction between primary and secondary norms was introduced by [Hart 1961]. An equally important alternative distinction, between constitutive and prescriptive norms, was introduced by [Searle 1969]. Some authors identify this distinction, applied to the legal domain, with that between primary norms (or norms of behaviour, or prescriptive rules) and secondary norms (constitutive norms); see [Carcaterra 1974].

The metanorms for identifying the normative subsystem applicable to the case can, in the Italian system, be inferred (as can a large number of the metanorms mentioned above) from some provisions of the Constitution and from the "General Provisions on the Law" (otherwise called the "Preleggi").

I will examine the conflict-solving problem in greater detail, while analogical reasoning will only be briefly sketched. See [Tiscornia 1993].

The above mentioned definition considers cases of tacit and unnamed abrogation ("all norms that are inconsistent with this law are to be considered abrogated ..."). We shall not, therefore, consider those cases where the legislator expressly states that prior norms are abrogated (express abrogation), which amount to the (at least theoretical) elimination of sentences from the legislative system.

"[It is necessary] to distinguish legal systems as temporal hierarchic sets of norms, from legal orders conceived of as temporal sequences of legal systems".

[Martino 1981]. In traditional logic the formulas that are the logical consequences of a set of premises do not change by augmenting the set of premises. This means that the addition of new knowledge (for example, a new norm or the ascertaining of new facts) cannot invalidate the formulas that have logically been inferred from the original premises. The distinction is to be found in [Brewka 1991].

[Reiter 1980]. The formulas are not considered to be universally quantified but, rather, stand for the set of all possible instances of the variables; this means that asserting the negation of an instance does not imply the falsity of the formula. A solution may be to use normal defaults for expressing the general norm, with neither prerequisites nor justifications different from the consequent, that is, in the form : A / A, while the assertion of its negation will be a formula of the following kind:

C → ¬A (in practice the problem of seminormal defaults is avoided by using only normal defaults).

Normal defaults are semimonotonic, that is, by adding new defaults the inferences drawn from previous defaults do not change; in a legal perspective, this does not allow exceptions to exceptions to be handled, for which it would be necessary that a general default blocked by an exception can be restored once the exception, in turn, has been blocked by a new exception.

[Kowalski 1989], pp. 11-15.

[Kowalski Sadri 1990], pp. 598-613.

The introduction of classical negation is based on [Gelfond, Lifschitz 1990] who suggest a appropriate adaption of the semantics of stable models; in [ Kowalski Sadri 1990] it is shown how it is possible to extent the answer-set semantics of Gelfond and Lifschitz [Gelfond Lifschitz 1990] to dealing with exceptions (e-answer-set).

The approach is based on the views of [Rescher 1964 ] .

The theory of argumentation is well known in legal theory [Perelman, Olbrechts-Tyeca 1958], [Toulmin 1958].

[Poole 1988], pp. 27-47.

Unlike with DL, in Poole we can have the counterpositive, that is, given a default 'if A typically B', we arrive at, 'given non B, typically non A'. This does not happen in DL because the inference rule is directional and, furthermore, the prerequisite must be proved (whereby it cannot be falsified by introducing the negation of the consequent).

[Brewka 1989].

Given D H F, a preferred subtheory therefore corresponds to a scenario that is, to a consistent subset of D H F that contains all the F (facts). So a formula that can be explained by a scenario corresponds to a formula that can be proved in a subtheory, or to a weak provable formula in an extension in DL.

The concept of extension in DL corresponds to all the theorems that can be inferred from S ( TH(S)= E) where S is a preferred subtheory.

It should be noted that the provability a formula depends on the syntactic form of the premises (if the premises are A and B, by adding X A it is possible to maintain B; if the premises are A Z B , the conjunction is to be eliminated by adding XA).

Brewka introduces a further generalization that permits the definition of an arbitrary partial ordering of the premises, giving, therefore, greater elasticity in evaluating the levels, rather than via different a priori levels.

[Kowalski Eshghi 1989],pp. 234 ff. It shows how the abduction can be seen as the generalization of negation by failure, which allows a semantics for the NAF to be formulated.

But the ideas underlying it originate in the legal field: [Alchourron 1981].

Known as AGM from the names of the authors: [ Alchourron, Gardenfors, Makinson 1985].

A different approach, based on the concept of justification (every belief is valid in as far as there is another valid belief that justifies it) is developed in TMS (Truth Maintenance Systems) , see [Doyle 1979].

For a comparison between the two aspects, see [Brewka 1989, pp. 217 ff.; Brewka 1991 pp. 77 ff]; [Gardenfors Makinson 1989]; [Sartor 1991]. In practice, a preferred subtheory of a theory T contracted in respect to p (T - p) is the same as pref.subt. of a theory T' ( T'= T H X p) where X p is the constraint added to T. The constraint guarantees that p cannot be derived, the theory T having to be consistent with X p.

A strong provable formula (contained in all the extensions) is the same as the fact that it is contained in the set deriving from partial meet contraction. If T is a theory in the epistemic sense (= containing all the logical consequences) the results are the same. In fact, even if the extensions are closed deductively, they may not coincide the epistemic states contracted, because the deduction is made after the premises have been contracted.

It nevertheless always consists of a restricted definition of normative system, because it does not take all the possible interpretations of the norms, legal decisions or customary norms into consideration.

[Routen 1989]

(Kowalski 1990], p. 16.

It is obvious that in case of indirect incompability the criterion of the priority of negated antecedent does not hold.

The norms where the legislator has listed all the situations in which the consequence occurs (exaustive enumeration) are absolute. For these the Closed Word Assumption(CWA) applies, that is, the available knowledge is considered to be complete, so that the impossibility of proving the Goal equals the proof of its complementary: [Gelfond 1990]

XP < - not P;

An example of an absolute norm can be found in art. 179 of the Italian Civil Code that lists the personal property of a spouse under the community property regime:

"The following do not constitute the object of community property and are personal assets of the spouse: a) assets which the spouse was owner of before the marriage... b) assets acquired as a gift or through succession after the marriage ... c) assets for strictly personal use... d)...:asset_personal(Asset,Spouse):- (owner_before_marriage(Spouse,Asset); has_acquired_as_gift(Spouse, sset);asset_personal(Asset)).

Xasset_personal(Asset,Spouse):- ot owner before marriage Spouse, Asset); has_acquired_as_gift(Spouse, Asset); asset_personal(Asset)).

[Hall 1989].

More complete and refined approaches, based on the search for the most on-point precedent, can be found in [Ashley 1990] and [Skalak, Rissland 1992].

In common law systems the opposite principle "nemo dat quod non habet" holds.

solve(<a>, $X) :- similar(<a>, <b>), solve(<b>, $X).


5.1. Similarity Assumptions

Given a theory Th, that is, a set of statements concerning both the known source object (S) and the less well-known target (T), a singular goal G representing the problem, and a set An of analogical statements expressing a relation of similarity between T and S, the analogical inference evaluating the goal G must respect the following conditions:

- Th ⊭ G(T) (otherwise there would be no need for analogy);

- Th ∪ An ⊨ G(T) (it is possible to find a solution analogically, and only that solution);

- Th ⊭ ¬G(T) (G(T) is not known to be false, that is, adding it to the theory does not generate inconsistency);

- Th ⊨ G(S) (that is, the use of the source S for analogy is grounded).
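As a toy instantiation of these conditions, the following logic-program fragment (all predicate and case names are purely illustrative and not taken from the article) uses the regulated case of sale as the source S and the unregulated case of barter as the target T:

% Th: the stated norms and facts.
ownership_passes(sale).          % Th |= G(S): the source case is regulated.
                                 % Th |/= G(T): nothing in Th decides barter.
                                 % Th |/= ¬G(T): nothing in Th contradicts it.

% An: the analogical (similarity) assumptions.
similar(barter, sale).
ownership_passes(T) :- similar(T, S), ownership_passes(S).
                                 % Th ∪ An |= G(T).

% ?- ownership_passes(barter).   succeeds only once An is added to Th.

The first clause of An is the similarity statement; the second is the analogical projection rule that carries the conclusion from the source to the target.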


[*] Istituto per la Documentazione Giuridica of the Italian National Research Council (C.N.R.), Via Panciatichi 56/16, 50121 Florence, Italy; e-mail: DANIELA@vm.idg.fi.cnr.it

[1] The expression "systematic validity" was coined by J.Wroblew ski in [ Wroblewski 1982] and repeated in [Wroblewski1990, p. 11]: "Consistency referring to a set of rules in a legal dis course.... in the wide meaning of the term(W-consistency) refers to the lack of conflicts between rules beginning to this set....The basic conception of validity for statutory (civil)law systems is the systemic validity. Roughly speaking, a rule is valid in the system in question if and only if a) is enacted by competent law-making agency and is applicable in defined spatial temporal dimension; b) is not derogated; c) is W-consistent with other valid rules; d) if it is W-consistent it either does not loose the validity on the strength of the conflict of law rules or is interpreted in a way eliminating the inconsistency in question."

[2] The distinction between primary and secondary norms was introduced by [Hart 1961]. An equally important alternative distinction, between constitutive and prescriptive norms, was introduced by [Searle 1969]. Some authors identify this distinction, applied to the legal domain, with that between primary norms (norms of behaviour, or prescriptive rules) and secondary norms (constitutive norms); see [Carcaterra 1974].

[3] The metanorms for identifying the normative subsystem applicable to the case can, in the Italian system, be inferred (as can a large number of the metanorms mentioned above) from some provisions of the Constitution and from the "General Provisions on the Law" (otherwise called the "Preleggi").

[4] I will examine the conflict-solving problem in greater detail, while analogical reasoning will only be briefly sketched. See [Tiscornia 1993].

[5] The above mentioned definition considers cases of tacit and unnamed abrogation ("all norms that are inconsistent with this law are to be considered abrogated ...."). We shall not, therefore, consider those cases where the legislator expressly states that prior norms are abrogated (express abrogation), which amount to the (at least theoretical) elimination of sentences from the legislative system.

[6] "[It is necessary] to distinguish legal systems as temporal hierarchic sets of norms, from legal orders conceived of as temporal sequences of legal systems". [Martino 1981].

[7] In traditional logic the formulas that are the logical consequences of a set of premises do not change by augmenting the set of premises. This means that the addition of new knowledge (for example, a new norm or the ascertaining of new facts) cannot invalidate the formulas that have logically been inferred from the original premises.

[8] The distinction is to be found in [Brewka 1991].

[9] [Reiter 1980]. The formulas are not considered to be universally quantified but, rather, as the set of all possible instances of the variables; this means that asserting the negation of an instance does not imply the falsity of the formula.

[10] A solution may be to express the general norm by a normal default with neither preconditions nor justifications different from the consequent, that is, a default of the form :A/A, while the assertion of its negation will be a formula of the following kind:

C -> ¬A.

(in practice the problem of seminormal defaults is avoided by using only normal defaults).
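For readers unfamiliar with the notation, the two forms of default referred to in this and the following note can be restated as follows (standard default-logic notation, not a quotation from the cited works):

normal default:        α : β / β
semi-normal default:   α : β ∧ γ / γ

so that the general norm becomes the prerequisite-free normal default :A/A, which is blocked whenever C holds, since C -> ¬A then makes the justification A inconsistent.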

[11] Normal defaults are semimonotonic, that is, by adding new defaults the inferences drawn from previous defaults do not change; in a legal perspective, this does not allow exceptions to exceptions to be handled, for which it would be necessary that a general default blocked by an exception can be restored once the exception, in turn, has been blocked by a new exception.

[12] [Kowalski 1989], pp. 11-15.

[13] [Kowalski Sadri 1990], pp. 598-613.

[14] The introduction of classical negation is based on [Gelfond, Lifschitz 1990], who suggest an appropriate adaptation of the semantics of stable models; in [Kowalski, Sadri 1990] it is shown how the answer-set semantics of Gelfond and Lifschitz [Gelfond Lifschitz 1990] can be extended to deal with exceptions (e-answer sets).
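A minimal sketch of the rules-and-exceptions pattern at issue, written as an ordinary logic program in which classical negation is simulated by a separate predicate (all names are illustrative assumptions, not the notation of the cited papers):

% General rule: a contract is valid unless an exception applies.
valid(C) :- contract(C), \+ exception(C).      % \+ is negation as failure.
% Exception: contracts made by a minor are not valid; the contrary
% conclusion is carried by the separate predicate neg_valid/1.
neg_valid(C) :- contract(C), made_by_minor(C).
exception(C) :- made_by_minor(C).
% Illustrative facts.
contract(c1).
contract(c2).
made_by_minor(c2).

% ?- valid(c1).      succeeds: no exception applies.
% ?- valid(c2).      fails, while neg_valid(c2) is derivable:
%                    the exception overrides the general rule.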

[15] The approach is based on the views of [Rescher 1964].

[16] The theory of argumentation is well known in legal theory: [Perelman, Olbrechts-Tyteca 1958], [Toulmin 1958].

[17] [Poole 1988], pp. 27-47.

[18] Unlike with DL, in Poole we can have the contrapositive, that is, given a default 'if A then typically B', we arrive at 'given non B, typically non A'. This does not happen in DL because the inference rule is directional and, furthermore, the prerequisite must be proved (whereby it cannot be falsified by introducing the negation of the consequent).

[19] [Brewka 1989].

[20] Given D ∪ F, a preferred subtheory therefore corresponds to a scenario, that is, to a consistent subset of D ∪ F that contains all the F (facts). So a formula that can be explained by a scenario corresponds to a formula that can be proved in a subtheory, or to a weakly provable formula in an extension in DL.

[21] The concept of extension in DL corresponds to the set of all the theorems that can be inferred from S (Th(S) = E), where S is a preferred subtheory.

It should be noted that the provability of a formula depends on the syntactic form of the premises (if the premises are A and B, by adding ¬A it is possible to retain B; if the premise is A ∧ B, the conjunction has to be eliminated by adding ¬A).

[22] Brewka introduces a further generalization that permits the definition of an arbitrary partial ordering of the premises, thus giving greater flexibility than an evaluation through fixed a priori levels.

[23] [Kowalski, Eshghi 1989], pp. 234 ff. It is shown there how abduction can be seen as a generalization of negation by failure, which allows a semantics for NAF to be formulated.

[24] But the ideas underlying it originate in the legal field: [Alchourron 1981].

[25] Known as AGM from the names of the authors: [ Alchourron, Gardenfors, Makinson 1985].

[26] A different approach, based on the concept of justification (every belief is valid insofar as there is another valid belief that justifies it), is developed in TMS (Truth Maintenance Systems); see [Doyle 1979].

[27] For a comparison between the two aspects, see [Brewka 1989, pp. 217 ff.; Brewka 1991, pp. 77 ff.]; [Gardenfors, Makinson 1989]; [Sartor 1991]. In practice, a preferred subtheory of a theory T contracted with respect to p (T - p) is the same as a preferred subtheory of a theory T' (T' = T ∪ {¬p}), where ¬p is the constraint added to T. The constraint guarantees that p cannot be derived, the theory T having to be consistent with ¬p.

A formula being strongly provable (contained in all the extensions) is the same as its being contained in the set deriving from partial meet contraction. If T is a theory in the epistemic sense (= containing all its logical consequences) the results are the same. In fact, even if the extensions are closed deductively, they may not coincide with the contracted epistemic states, because the deduction is made after the premises have been contracted.

[28] This nevertheless remains a restricted definition of normative system, because it does not take into consideration all the possible interpretations of the norms, legal decisions or customary norms.

[29] [Routen 1989].

[30] [Kowalski 1990], p. 16.

[31] It is obvious that in the case of indirect incompatibility the criterion of the priority of the negated antecedent does not hold.

[32] The norms where the legislator has listed all the situations in which the consequence occurs (exhaustive enumeration) are absolute. For these the Closed World Assumption (CWA) applies, that is, the available knowledge is considered to be complete, so that the impossibility of proving the Goal equals the proof of its complement: [Gelfond 1990]

¬P <- not P.

An example of an absolute norm can be found in art. 179 of the Italian Civil Code that lists the personal property of a spouse under the community property regime:

"The following do not constitute the object of community property and are personal assets of the spouse: a) assets which the spouse was owner of before the marriage... b) assets acquired as a gift or through succession after the marriage ... c) assets for strictly personal use... d)...:

asset_personal(Asset, Spouse) :-
    ( owner_before_marriage(Spouse, Asset)
    ; has_acquired_as_gift(Spouse, Asset)
    ; asset_personal(Asset) ).

¬asset_personal(Asset, Spouse) :-
    not ( owner_before_marriage(Spouse, Asset)
        ; has_acquired_as_gift(Spouse, Asset)
        ; asset_personal(Asset) ).
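To see how the two clauses behave under negation as failure, one may add a few purely hypothetical facts and a stub for the arity-one disjunct (which the fragment above leaves unspecified):

% Illustrative facts, not from the article.
owner_before_marriage(anna, ring).
has_acquired_as_gift(marco, painting).
asset_personal(_) :- fail.      % stub for the items of art. 179 not encoded above.

% ?- asset_personal(ring, anna).    succeeds through the first disjunct;
% ?- asset_personal(car, anna).     fails, so the second clause makes
%                                   ¬asset_personal(car, anna) derivable:
%                                   what cannot be proved to be personal is,
%                                   under the closed-world reading, not personal.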

[33] In procedural terms, reasoning by analogy is an argumentative process for inferring that a certain property/conclusion Q holds of a target situation (T) from the fact that another property (or set of properties) P is shared with a source situation (S) that has the property Q. The set of properties P constitutes the similarity between T and S, while the conclusion Q is projected from S onto T:

Given, therefore, the premises:

P(S) ∧ Q(S)
P(T)
------------------------
we infer: Q(T)

The argumentation cannot be considered a valid deductive inference, because the conclusion does not follow syntactically from the premises (even if the premises are true). A way to provide the analogical process with strict logical foundations may be to divide it into two steps, considering analogy as a combination of ampliative inductive reasoning (generalization) and deductive reasoning:

1) from P(S) and Q(S) we generalize:

∀X (P(X) -> Q(X))

from which, instantiating X with T, we have the second step:

2) P(T) -> Q(T), and applying Modus Ponens:

given P(T) we infer Q(T)

Only the second step is deductive, so the problem remains of justifying the first step, that is, the single-instance inductive generalization.

[34] Only complete induction, that is, induction in which the set of individual premises exhausts the species about which the conclusion is made, is considered a valid logical argument.

[35] "Find a criterion wich, if satisfied by any particular analogical inference, sufficiently establishes the truth of that inference". [Davies, Russel 1987], p.264.

[36] [Hall 1989].

[37] In [Costantini Lanzarone 1989 and 1991] an implementation of the approach in the metalogic programming language Reflective Prolog is described. The language provides additional features with respect to standard Prolog: a naming mechanism, metaevaluation clauses, and an extended resolution including implicit reflection. In Reflective Prolog the search strategy is depth-first, considering base-level clauses first, and metaevaluation clauses only when the former fail. This seems appropriate for reproducing legal reasoning, where the judge first tries to find a solution from the stated norms, and only in the case of normative gaps is he allowed to apply analogy.
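The control strategy described here can be mimicked, very roughly, in ordinary Prolog (this is not Reflective Prolog; the toy norms and predicate names are hypothetical):

% Object-level norms and facts, stored as norm(Conclusion, Conditions) entries.
norm(damages_due(C), fault(C)).
norm(fault(case1), true).
norm(similar_case(case2, case1), true).

% A vanilla interpreter over the stated norms: the base level.
prove(true).
prove((A, B)) :- prove(A), prove(B).
prove(Goal)   :- norm(Goal, Body), prove(Body).

% The analogical metalevel is consulted only when the base level fails.
solve(Goal) :- prove(Goal).
solve(damages_due(C)) :- \+ prove(damages_due(C)),
                         prove(similar_case(C, D)),
                         solve(damages_due(D)).

% ?- solve(damages_due(case1)).   answered directly by the stated norms;
% ?- solve(damages_due(case2)).   answered only through the analogical step.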

[38] [Brogi, Mancarella, Pedreschi, Turini 1990].

[39] [Sergot 1991].

[40] [Perelman 1958].

