
Many programmers find debugging a frustrating and unproductive activity. Declarative debugging promises to alleviate this problem by automating some of the reasoning used in the debugging process. We have implemented a declarative debugger for Mercury. In the process, we found a problem not addressed in the existing literature on declarative debuggers, which considers programs to consist of clauses (conjunctions of literals): how to handle if-then-elses. The problem is that the condition of an if-then-else should be treated as a negated goal if and only if the condition fails.

Negated contexts may cause a declarative debugger to switch from wrong answer analysis to missing answer analysis and vice versa. Since missing answer analysis explores a search space that is subtly different from the space explored for wrong answer analysis, the two kinds of analysis need different kinds of trees to represent the program execution. For the conditions of if-then-elses, the debugger does not know which kind of tree to build until the condition has either succeeded or failed, by which time it is too late.

To solve this problem, we have designed a novel data structure, the annotated event trace, which is flexible enough to support both wrong and missing answer analysis. The advantages of this data structure are that it is quite compact, requiring little more space than an ordinary stored event trace, and that the algorithms to build this data structure and to derive from it the information required for two kinds of diagnosis are all simple as well as efficient.
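The wrong-answer side of the diagnosis described above can be sketched very compactly. This is a generic toy oracle-driven search over an evaluation tree, not the Mercury debugger's actual algorithm or data structure; the `Node` representation and the oracle are invented for illustration.

```python
# Toy wrong-answer declarative diagnosis: descend into an invalid result
# whose children are all valid; that node's clause instance is the bug.

class Node:
    def __init__(self, atom, children=()):
        self.atom = atom                  # the result computed at this call
        self.children = list(children)

def diagnose(node, oracle):
    """Return the node whose clause instance is buggy: its own result is
    wrong according to the oracle, but all children's results are correct.
    Returns None if the result at `node` is valid."""
    if oracle(node.atom):                 # result is valid: nothing here
        return None
    for child in node.children:
        culprit = diagnose(child, oracle)
        if culprit is not None:
            return culprit
    return node                           # wrong result, children right

# Toy run: rev([1,2]) wrongly computed [1,2]; its child rev([2]) is fine,
# so the top-level clause instance is reported as buggy.
tree = Node(("rev", [1, 2], [1, 2]), [Node(("rev", [2], [2]))])
oracle = lambda atom: atom != ("rev", [1, 2], [1, 2])
assert diagnose(tree, oracle) is tree
```

Missing-answer analysis would instead ask the oracle about sets of solutions, which is why the two analyses need different views of the execution, as the text explains.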



One of the key advantages of modern programming languages is that they free the programmer from the burden of explicit memory management. Usually, this means that memory management is delegated to the runtime system through the use of a run-time garbage collector (RTGC). Basically, an RTGC is a dedicated process that runs in parallel with the user program.


Whenever the user program needs to store some data, the RTGC provides the desired memory space. At regular intervals, the RTGC reviews the uses of the allocated memory space, and recovers those memory cells that have become garbage, i.e. cells that are no longer needed by the program.

We study the technique of compile-time garbage collection in the context of Mercury, a pure declarative language. A key element of declarative languages is that they disallow explicit memory updates, which are common operations in most other programming paradigms; they rely instead on term construction and deconstruction to manipulate the program data.
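The run-time collection loop described above can be sketched as a tiny mark-and-sweep pass. The explicit heap-graph representation below is invented purely for illustration; it is not Mercury's runtime or the collector any of these papers describe.

```python
# Toy mark-and-sweep: trace everything reachable from the roots, then
# discard the rest (the garbage).

def collect(heap, roots):
    """heap: dict cell-id -> list of referenced cell-ids; roots: iterable
    of cell-ids. Returns the heap with unreachable cells removed."""
    marked = set()
    stack = list(roots)
    while stack:                          # mark phase
        cell = stack.pop()
        if cell in marked:
            continue
        marked.add(cell)
        stack.extend(heap.get(cell, ()))
    # sweep phase: keep only marked cells
    return {c: refs for c, refs in heap.items() if c in marked}

heap = {1: [2], 2: [], 3: [4], 4: [3]}   # 3 and 4 form an unreachable cycle
assert collect(heap, roots=[1]) == {1: [2], 2: []}
```

Compile-time garbage collection, by contrast, tries to prove at compilation time that a cell is dead so it can be reused without any such run-time tracing.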

This places a high demand on the memory management and makes declarative languages a primary target for compile-time garbage collection. Moreover, the clear mathematical foundations of Mercury, a pure declarative language, make the development of the program analyses that are necessary for CTGC feasible.

In this thesis we look at mode analysis of logic programs. Being based on the mathematical formalism of predicate logic, logic programs have no a priori notion of data flow: a single logic program may run in multiple modes, where each mode describes, or prescribes, a pattern of data flow.

A mode system provides an abstract domain for describing the flow of data in logic programs, and an algorithm for analysing programs to infer the modes of a program or to check the correctness of mode declarations given by the programmer. Such an analysis can provide much useful information to the compiler for optimising the program. In a prescriptive mode system, mode analysis is also an important part of the semantic analysis phase of compilation (much like type analysis) and can inform the programmer of many errors or potential errors in the program at compile time.
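The checking side of such an analysis can be illustrated with a deliberately tiny sketch: track which variables are bound as a conjunction executes left to right, and reject a call whose `in` argument is still free. The predicate table, goal encoding and names below are invented toy data, not Mercury syntax or the thesis's actual domain.

```python
# A very small prescriptive mode checker for one clause body.

MODES = {
    "append": ("in", "in", "out"),   # append(Xs, Ys, Zs): Zs is produced
    "length": ("in", "out"),
}

def check_conjunction(goals, head_inputs):
    """goals: list of (pred, (var, ...)). Returns the set of variables
    bound after running the conjunction left to right; raises ValueError
    on a mode error (an input argument that is not yet bound)."""
    bound = set(head_inputs)
    for pred, args in goals:
        for mode, var in zip(MODES[pred], args):
            if mode == "in" and var not in bound:
                raise ValueError(f"mode error: {var} unbound in call to {pred}")
        bound |= {v for m, v in zip(MODES[pred], args) if m == "out"}
    return bound

# append(As, Bs, Cs), length(Cs, N) is well-moded when As and Bs are inputs:
assert check_conjunction(
    [("append", ("As", "Bs", "Cs")), ("length", ("Cs", "N"))],
    head_inputs={"As", "Bs"},
) == {"As", "Bs", "Cs", "N"}
```

A real analysis works over abstract instantiation states rather than a bound/free bit, and also reorders conjuncts, but the bookkeeping has this shape.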

We therefore believe it is an essential component of any industrial strength logic programming system. Our aim is to develop a strong, prescriptive mode system that is as precise and expressive as possible. We believe this requires a strongly typed and purely declarative language, and so we focus on the language Mercury. The first contribution of our work is to give a detailed description of Mercury's existing mode system, which is based on abstract interpretation.

Although most of this system has been around for several years, this is the first time it has been described in this level of detail. This is also the first time the relationship of the mode system to the formalism of abstract interpretation has been made clear. Following that, we look at ways of extending the mode system to provide further precision and expressiveness, and to overcome some of the limitations of the current system. The first of these extensions is to support a form of constrained parametric polymorphism for modes.

This is analogous to constrained parametric polymorphic type systems such as type classes, and adds a somewhat similar degree of expressiveness to the mode system. Next we look at a method for increasing the precision of the mode analysis by keeping track of aliases between variables. The increased precision we gain from this allows an increase in expressiveness by allowing the use of partially instantiated data structures and more complex uniqueness annotations on modes. The final area we look at is an alternative approach to mode analysis using Boolean constraints.

This allows us to design a mode system that can capture complex mode constraints between variables and more clearly separates the various tasks required for mode analysis. We believe that this constraint-based system provides a good platform for further extension of the Mercury mode system. The work we describe has all been implemented in the Melbourne Mercury compiler, although only constrained parametric polymorphism has so far become part of an official compiler release.

Debuggers for logic programming languages have traditionally had a capability most other debuggers did not: the ability to jump back to a previous state of the program, effectively travelling back in time in the history of the computation.

This report presents a new termination analysis for Mercury that approximates interargument size relationships using convex constraints. These relationships are derived using an analysis based upon abstract interpretation. Although this analysis is more expensive than that of the existing termination analyser, it is able to prove the termination of a larger class of predicates.

This thesis presents the foundations for extending the implementation of the declarative logic programming language Mercury to support the parallel execution of programs.

The new material in this thesis is in three parts. The first part presents an extension to the existing sequential execution model for Mercury that allows programs to be executed in parallel. Programmers can exploit this extension simply by replacing some sequential conjunction operators connecting independent goals with a new parallel conjunction operator. Such changes do not change the declarative semantics of the program, but can improve performance. The second part of the new material presents a model for explicit threading in Mercury, which is useful when the programmer's goal is not efficiency but the explicit representation of concurrent tasks and control of their interactions.
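The parallel-conjunction idea above (run independent goals at once without changing the declarative result) can be mimicked in miniature. The Python functions below merely stand in for independent Mercury goals; this is an analogy, not the thesis's execution model.

```python
# Sequential vs. parallel conjunction of independent goals: because the
# goals share no variables, running them in parallel gives the same
# declarative result as running them in order.
from concurrent.futures import ThreadPoolExecutor

def conj_seq(goals):
    return [g() for g in goals]               # sequential: (g1, g2, ...)

def conj_par(goals):
    with ThreadPoolExecutor() as pool:        # parallel: (g1 & g2 & ...)
        futures = [pool.submit(g) for g in goals]
        return [f.result() for f in futures]

goals = [lambda: sum(range(1000)), lambda: max(range(1000))]
assert conj_par(goals) == conj_seq(goals)     # same result, either way
```

The point of purity here is exactly the one the text makes: with no side effects and no shared bindings, the substitution of `&` for `,` cannot change the program's meaning, only its performance.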

We show how our extended execution model supports the usual operations on threads. The final part of the new material presented in this thesis presents a new technique for obtaining a detailed and accurate picture of the performance of a program. The basis of our technique is associating a complete context with each measurement, rather than approximating the context as conventional profilers do.

In order to make our new profiling system feasible we have had to develop sophisticated techniques for reducing the cost of recording complete contexts; in order to make its output tractable, we also had to develop techniques for dealing with interactions between higher order constructs and recursion. We have also developed a tool for helping programmers and eventually compilers to digest the large volume of data generated by our profiling technique. All the ideas presented in this thesis have been implemented in the Melbourne Mercury compiler.

The aim of this thesis is the design of a type system for an industrial strength logic programming language. The type system we describe has been implemented for the Mercury programming language, in the Melbourne Mercury compiler. We begin by presenting a simple higher order extension of the Mycroft-O'Keefe type system. We then use this type system as a basis for two significant extensions. The first extension is the adoption of a type class system similar to that found in some modern functional languages in the context of higher order logic programming.

We give a set of typing rules which both provide a formal definition of type correctness and define the source-to-source transformation we have used to implement type classes. This transformation is simple and effective, and can be easily shown to preserve Mercury's mode, determinism and uniqueness correctness properties. The second extension is to allow existentially quantified type variables in the types of function symbols and of predicates.
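The source-to-source transformation mentioned above is the standard dictionary-passing rendering of type classes: each class constraint on a predicate becomes an extra argument carrying the instance's methods. The sketch below shows that shape in Python; the names (`eq_int`, `eq_list`, `member`) are illustrative, not taken from the paper.

```python
# Dictionary passing for a toy eq(T) type class.

eq_int = {"eq": lambda a, b: a == b}          # instance eq(int)

def eq_list(d):
    # instance eq(list(T)) <= eq(T): built from the element dictionary d
    return {"eq": lambda xs, ys: len(xs) == len(ys)
            and all(d["eq"](x, y) for x, y in zip(xs, ys))}

def member(d_eq, x, xs):
    """member(X, Xs) with an eq(T) constraint, after the transformation:
    the dictionary d_eq is the extra argument introduced by the compiler."""
    return any(d_eq["eq"](x, y) for y in xs)

assert member(eq_int, 3, [1, 2, 3])
assert member(eq_list(eq_int), [2], [[1], [2]])
assert not member(eq_int, 7, [1, 2, 3])
```

Because the transformation only threads an ordinary extra argument through the program, it is plausible that mode, determinism and uniqueness checking are unaffected, which is the property the paper proves.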

This represents the most significant contribution of this thesis. We then formally define the type system that results from the combination of both type classes and existential quantification of type variables. The two type system extensions are quite orthogonal. As a result, the definition of type correctness in the combined system is a fairly simple combination of the definitions for the individual systems.

However, the mechanisms contributed by the two systems combine synergistically; the resulting type system is extremely expressive. We then show how the type system we have designed allows the programmer to take advantage of many object oriented design techniques. We give an in depth analysis of object oriented design and isolate the mechanisms which are likely to result in reusable and maintainable software systems.

We show that our type system allows the programmer to directly express these kinds of designs and discourages the use of the kinds of object oriented designs which reduce maintainability by introducing unnecessary implementation dependencies. We show that these principles apply in a direct and simple manner to the modelling of component interfaces as supported by modern component frameworks such as CORBA.

Recent logic programming languages, such as Mercury and HAL, require type, mode and determinism declarations for predicates. This information allows the generation of efficient target code and the detection of many errors at compile-time.

Unfortunately, mode checking in such languages is difficult. One of the main reasons is that, for each predicate mode declaration, the compiler is required to decide which parts of the procedure bind which variables, and how conjuncts in the predicate definition should be re-ordered to enforce this behaviour. Current mode checking systems limit the possible modes that may be used because they do not keep track of aliasing information, and have only a limited ability to infer modes, since inference does not perform reordering. In this paper we develop a mode inference system for Mercury based on mapping each predicate to a system of Boolean constraints that describe where its variables can be produced.
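The constraint view described above can be caricatured in a few lines: introduce, for each (goal, variable) pair, a proposition "this goal produces this variable", require every variable to have exactly one producer, and enumerate the solutions instead of committing to one schedule. The clause encoding below is invented for illustration and enumerates candidates directly rather than using a real Boolean solver.

```python
# Toy producer-choice enumeration in the spirit of constraint-based mode
# inference: each solution assigns every variable a unique producing goal.
from itertools import product

def infer_producers(goals, variables):
    """goals: list of (name, set-of-vars-it-could-produce).
    Returns every assignment of one producing goal per variable."""
    candidates = [[g for g, can in goals if v in can] for v in variables]
    return [dict(zip(variables, choice)) for choice in product(*candidates)]

# Two conjuncts could each produce Z; the constraint view keeps both
# schedules open rather than committing to one ordering up front:
sols = infer_producers([("g1", {"Z"}), ("g2", {"Z", "W"})], ["Z", "W"])
assert sols == [{"Z": "g1", "W": "g2"}, {"Z": "g2", "W": "g2"}]
```

Reordering the conjunction then amounts to topologically sorting goals so that each variable's producer runs before its consumers, which is exactly the freedom the paper says the existing non-reordering inference lacks.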

This allows us to handle programs that are not supported by the existing system.

The value of a variable is often given by a field of a heap cell, and frequently the program will pick up the values of several variables from different fields of the same heap cell. By keeping some of these variables out of the stack frame, and accessing them in their original locations on the heap instead, we can reduce the number of loads from and stores to the stack at the cost of introducing a smaller number of loads from the heap.

We present an algorithm that finds the optimal set of variables to access via a heap cell instead of a stack slot, and transforms the code of the program accordingly. The optimization is straightforward to apply to Mercury and to other languages with immutable data structures; its adaptation to languages with destructive assignment would require the compiler to perform mutability analysis.

Available here (83K).

Previous attempts at garbage collection in uncooperative environments have generally used conservative or mostly-conservative approaches. We have implemented this in the Mercury compiler, which generates C code, and present preliminary performance data on the overheads of this technique. We also show how this technique can be extended to handle multithreaded applications.

Preliminary Proceedings. Available here (93K).

The .NET Common Language Runtime (CLR) offers a new opportunity to experiment with multi-language interoperation, and provides a relatively rare chance to explore deep interoperation of a wide range of programming language paradigms. This article describes how Mercury is compiled to the CLR. We describe the problems we have encountered with generating code for the CLR, give some preliminary benchmark results, and suggest some possible improvements to the CLR regarding separate compilation, verifiability, tail calls, and efficiency.

Available here (67K).

Journals etc

Many logic programming implementations compile to C, but they compile to very low-level C, and thus discard many of the advantages of compiling to a high-level language. We describe an alternative approach to compiling logic programs to C, based on continuation passing, that we have used in a new back-end for the Mercury compiler.

The new approach compiles to much higher-level C code, which means the compiler back-end and run-time system can be considerably simpler. We present a formal schema for the transformation, and give benchmark results which show that this approach delivers performance that is more than competitive with the fastest previous implementation, with greater simplicity and better portability and interoperability.
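The continuation-passing idea behind this compilation scheme can be sketched in Python rather than in the generated C: each predicate becomes a function that takes a success continuation and calls it once per solution, so nondeterminism needs no low-level control tricks. This is a toy analogy of the scheme, not actual compiler output, and `split` is an invented name.

```python
# append(Front, Back, Whole) with Whole as input, in success-continuation
# style: call k(front, back) once for every way of splitting `whole`.

def split(whole, k):
    k([], whole)                          # clause 1: append([], B, B)
    if whole:                             # clause 2: append([X|A], B, [X|C])
        split(whole[1:], lambda f, b: k([whole[0]] + f, b))

solutions = []
split([1, 2], lambda f, b: solutions.append((f, b)))
assert solutions == [([], [1, 2]), ([1], [2]), ([1, 2], [])]
```

In the C back-end the continuations are (pointers to) generated C functions, which is what keeps the emitted code high-level while still supporting backtracking.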

The approach we describe can also be used for compiling to other target languages, such as IL (the Microsoft .NET intermediate language). The benchmark data on which the performance evaluation section of this paper is based is available here.

Declarative programs differ from imperative programs in several respects, the main ones being their heavy use of recursion, of various forms of polymorphism, and of higher order. Existing profilers tend not to produce very useful information in the presence of these constructs. We present a new profiling technique we call deep profiling that yields detailed and useful information about programs even in the presence of these constructs, information that is significantly richer than the output of other profilers.

The heart of deep profiling is a source-to-source transformation. We have implemented this transformation and its associated infrastructure in the compiler for Mercury, a purely declarative logic programming language. While our measurements of this implementation show that deep profiling has slightly greater overhead than some other profiling techniques, the wealth of information it provides makes this extra overhead worthwhile.
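The "complete context" idea above can be made concrete with a few lines: attribute each measurement to the whole chain of ancestors rather than to the immediate caller alone. The enter/leave interface and the toy call sequence below are invented for illustration; the real system records far more than counts and does so via a program transformation.

```python
# Counts keyed by the full ancestor chain, not just the current procedure.

counts = {}
stack = []

def enter(proc):
    stack.append(proc)
    ctx = tuple(stack)                    # the complete context of this call
    counts[ctx] = counts.get(ctx, 0) + 1

def leave():
    stack.pop()

# p calls q directly and also via r: a conventional profiler merges the two
# q measurements; keyed by complete context, they remain distinct.
enter("p"); enter("q"); leave(); enter("r"); enter("q"); leave(); leave(); leave()
assert counts[("p", "q")] == 1
assert counts[("p", "r", "q")] == 1
```

The cost problem the thesis addresses is visible even here: naively, contexts grow with call depth and recursion, hence the need for the techniques it describes for keeping recording cheap and the output tractable.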

The deep profiling algorithms themselves are applicable to most other language styles, including imperative, object-oriented, and functional languages. Available here (84K).

Compile-time garbage collection (CTGC) is still a very uncommon feature within compilers. In previous work we have developed a compile-time structure reuse system for Mercury, a logic programming language. This system indicates which data structures can safely be reused at run-time. As preliminary experiments were promising, we have continued this work and now have a working and well-performing, near-to-ship CTGC system built into the Melbourne Mercury Compiler (MMC).

In this paper we present the multiple design decisions leading to this system, we report the results of using CTGC for a set of benchmarks, including a real-world program, and finally we discuss further possible improvements. Benchmarks show substantial memory savings and a noticeable reduction in execution time.

In this paper we present a binding-time analysis for the logic programming language Mercury.

Binding-time analysis is a key analysis needed to perform off-line program specialisation. Our analysis deals with the higher-order aspects of Mercury, and is formulated by means of constraint normalisation. This allows at least part of the analysis to be performed on a modular basis.

Separate compilation of modules is an essential ingredient of a language such as Mercury which supports programming in the large. Hence, to be practical, a live-structure analysis also has to be module based.

This paper develops a modular live-structure analysis and extends it with a modular reuse analysis. It also describes preliminary results obtained with a first prototype of the module based analysis. We present two optimizations for making Mercury programs tail recursive. Both operate by taking computations that occur after a recursive call and moving them before the recursive call, modifying them as necessary. The first optimization moves calls to associative predicates; it is a pure source to source transformation.

The second optimization moves construction unifications; it required extensions to the mode system to record aliases and to the parameter passing convention to allow arguments to be returned in memory. The two optimizations are designed to work together, and can make a large class of programs tail recursive. The raw data on which the evaluation is based is available as a 5 Kb tar file. Available here (57K).

Recursive predicates frequently generate some state which is updated after the recursive call. We present a source to source transformation which can move the state update before the recursive call, thus helping to make the predicate tail recursive, and report on its implementation in the Mercury compiler.
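The associative-call transformation described above can be shown concretely: the computation that follows the recursive call (here `+`) is moved before it as an accumulator, which is valid precisely because the operation is associative. The toy functions below only illustrate the reordering; they are not Mercury, and Python itself does not eliminate tail calls, so the benefit shown is structural rather than measured.

```python
# Before: the addition happens after the recursive call returns, so the
# call is not a tail call and each step needs a stack frame.
def length(xs):
    return 0 if not xs else 1 + length(xs[1:])

# After the transformation: the pending addition is carried forward in an
# accumulator, leaving the recursive call in tail position.
def length_acc(xs, acc=0):
    return acc if not xs else length_acc(xs[1:], acc + 1)

assert length([7, 8, 9]) == length_acc([7, 8, 9]) == 3
```

In a language with last-call optimisation, the second form runs in constant stack space, which is what makes this class of transformation worthwhile for recursive predicates.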

Available here (52K).

By using purity declarations with the foreign language interface, programmers can take advantage of many of the features of a high level programming language while writing imperative code to interface with existing imperative libraries. This paper outlines the purity system in Mercury and how it affects operational semantics, compares this purity system with other approaches to declaring impurity in a pure language, and gives an extended example of how impurity and foreign language interfaces can work together to simplify the chore of writing declarative interfaces to libraries.

In most situations, it would be nicer if the programmer didn't have to worry about the details of memory management. The paper briefly reports on some experiments with a prototype analyser which aims at detecting memory available for reuse. The prototype is based on the live-structure analysis developed by us for logic programs extended with declarations. Yet the major contribution of this paper consists of the development of the principles of a module based analysis which are essential for the analysis of large Mercury programs with code distributed over many modules.

In this paper, we describe a binding-time analysis (BTA) for a statically typed and strongly moded pure logic programming language, in this case Mercury. Binding-time analysis is the key concept in achieving off-line program specialisation: the analysis starts from a description of the program's input available for specialisation, and propagates this information throughout the program, deriving directives for when and how to perform specialisation.

Exploiting the possibilities offered by Mercury's strong type and mode system, we present a completely automatic BTA dealing with partially static binding-times.

The implementation technology of the Mercury debugger. Available here (66K).

Every programming language needs a debugger. Mercury now has three debuggers: a simple procedural debugger similar to the tracing systems of Prolog implementations, a prototype declarative debugger, and a debugger based on the idea of automatic trace analysis.

In this paper, we present the shared infrastructure that underlies the three debuggers, and describe the implementation of the procedural debugger. We give our reasons for each of our main design decisions, and show how several of these decisions are rooted in our experience with the debugging of large programs working with large data structures. Available here (62K).

This paper has been superseded by the LNCS version. Available here (75K).

For efficiency, the Mercury compiler uses type specific representations of terms, and implements polymorphic operations such as unifications via generic code invoked with descriptions of the actual types of the operands.

These descriptions, which consist of automatically generated data and code, are the main components of the Mercury runtime type information (RTTI) system. We have used this system to implement several extensions of the Mercury system, including an escape mechanism from static type checking, generic input and output facilities, a debugger, and automatic memoization, and we are in the process of using it for an accurate, native garbage collector.
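The RTTI idea (one generic routine driven by per-type descriptors) can be sketched with a toy structural-equality walk. The descriptor encoding below is invented for illustration; Mercury's actual type_infos are generated data and code with a quite different representation.

```python
# Generic unification-as-equality directed by a type descriptor: the
# descriptor says what the type is, the one generic routine does the work.

LIST_OF_INT = ("list", (("int",),))       # toy type_info for list(int)

def unify(type_info, x, y):
    """Structural equality of x and y at the type described by type_info."""
    kind = type_info[0]
    if kind == "int":
        return x == y
    if kind == "list":
        (elem_ti,) = type_info[1]         # descriptor of the element type
        return len(x) == len(y) and all(
            unify(elem_ti, a, b) for a, b in zip(x, y))
    raise TypeError(f"no descriptor for {kind}")

assert unify(LIST_OF_INT, [1, 2], [1, 2])
assert not unify(LIST_OF_INT, [1, 2], [1, 3])
```

The same descriptor-driven walk is what makes generic I/O, memoization and accurate garbage collection implementable once, outside the typed program, which is the leverage the paper describes.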

We give detailed information on the implementation and uses of the Mercury RTTI system as well as measurements of the space costs of the system. The raw data on which the evaluation is based is available as a 70 Kb tar file.

Optimization of Mercury programs. Honours report. This paper describes the implementation of several of the high-level optimization passes of the Mercury compiler, including deforestation, type specialization, constraint propagation and structure reuse.

Available here (65K).



The binding preserves the referential transparency of the language, and has several advantages over similar bindings for other strongly typed declarative languages. Our approach simplifies the mapping, makes the implementation of CORBA's interface inheritance straightforward, and makes it trivial for programmers to provide several different implementations of the same interface. It uses existential types to model the operation of asking CORBA for an object that satisfies a given interface but whose representation is unknown.

Available here (82K).


In this paper, we explain how we have extended Mercury's type system to include support for type classes. We give a formal semantics for this extension to our type system, adapting the typing rules used in functional languages to the differing demands of logic programming languages. We show that type classes integrate very nicely with Mercury's mode, uniqueness and determinism systems, and describe how our implementation works.

The raw data on which the evaluation is based is available as a 5.


Available here (46K). Journal of Logic Programming, volume 29, October-December. Elsevier owns the copyright of this paper; it is made available here by their permission.

This paper contains a brief overview of the Mercury language, and a reasonably detailed overview of the implementation technology used in the Mercury compiler.

It describes the abstract machine that the compiler generates code for. Our other papers listed below go into more detail on exactly how the code is generated, and on how the abstract machine instructions are implemented as C or GNU C code. The raw data on which the evaluation is based is available on our benchmarking page.

Available here (78K). A longer version of the paper is available here (76K).

This paper discusses Mercury's determinism system in detail, including the algorithms for switch detection, deep indexing, determinism inference, and determinism checking. Available here (68K).

This paper describes the structure of the Mercury compiler, its calling conventions, and the algorithms it uses in generating code. These algorithms include lazy code generation and a novel way of handling nested disjunctions. Portland, Oregon, December.

This paper discusses the merits of using C, and in particular GNU C, as an intermediate target language for the compilation of logic programs, and describes the approach we have taken in the implementation of Mercury.

Available here (85K). An overview paper. Available here (96K).

The first paper on Mercury. It is superseded by the paper in the Journal of Logic Programming.

This is the first paper on the code generator. Warning: several aspects of the code generator have changed since this paper was written. Some of these are documented in the version in the ILPS 95 proceedings.

Related Papers

This paper covers a case study where Mercury was used as an implementation language.

Programs are assumed to be well and nicely moded, which are two widely used concepts for verification. Many predicates terminate under such weak assumptions. Knowing these predicates is useful even for programs where not all predicates have this property.

A curiosity: three literary awards. We have cross-fertilized our computational linguistics expertise with knowledge based systems, yielding high level methodologies for endowing the internet with intelligent communication capabilities. Our prototype system, LogiMOO, accepts interactions in various languages, translates each to a controlled English based interlingua, and reacts in the language of origin. Underlying this system is a novel logic programming infrastructure for internet programming [J23,M10,M11].


We have also studied models for virtual world and database creation from controlled English [P30], as well as developed and implemented theories for human language guided learning of mathematical software [J24], and for virtual environments for long distance learning [P34,P35,S13]. I have combined this new interest with my own field of expertise in two ways. With Andre Levesque and Manuel Zahariev, I developed efficient software for plant pathology identification from signature oligos [M17].

Our results, which are now used daily, reduced what used to be a six person-month effort at Agriculture and AgriFood Canada to an average of 15 minutes of computing. With Maryam Bavarian, I applied the high level methods I have developed for processing language to the automatic analysis of biological sequences [J26] and to RNA secondary structure design [P52]. From trying to create knowledge bases from human language [M12,M14], the need for more flexible linguistic models became apparent. Among those aiming at accepting the typically imperfect input that results from spontaneous speech, we noted the Property-based paradigm, evolved by Blache from Bes' original 5P formalism, which relies on property satisfaction between categories, allowing us to parse incomplete and even incorrect input in a very modular and adaptable way.

I obtained a position as Chercheur Etranger at Universite de Provence in order to develop a methodology for parsing Property Grammars which invisibly interprets linguistic descriptions as directly executable specifications, and shows all partial analyses even upon failure [P44]. More generally, our work shows for the first time that direct renditions of flexible, constraint based parsing formalisms can be made to run efficiently while preserving a one-to-one correspondence between the conceptual and the representational levels.

Our results generalize into directly executable cognitive models [P45]. Applications to language processing include coordination, with Dulce Aguilar Solis [P47], and long distance dependencies [M16]. With Philippe Blache, I have obtained very encouraging results on extracting noun phrases from arbitrary text (we use text from the newspaper Le Monde) [P54], and with Baohua Gu, I have generalized these results into concept extraction through natural language: specifically, we have adapted my parser to extract concepts as well as targeted syntactic phrases, and to use English rather than French, with encouraging results [56,59].

However, much work remains to be done on the semantic component. Motivated by language processing problems, I developed with Paul Tarau et al. With Henning Christiansen, I combined my earlier work on diagnosis through datalog grammars and abduction [M15] with Constraint Handling Rules (CHR) instead of datalog, and used it for the automatic diagnosis and correction of syntactic errors [J25].

We use abduction in a novel, direct way, without the overhead of alternating abductive steps with resolution steps, as in previous approaches. The system shows a novel flexibility in the interaction between the different paradigms, including all additional built-in predicates and constraint solvers that may be available through CHR, from which it partially borrows its syntax and which is used to implement the integrity constraints associated with assumptions or abducibles. It also seems to provide the most efficient of the known implementations of abduction in logic programming. A new workshop series on Constraint Solving and Language Processing has sprung from this collaboration.

  • Triangle Journal, vol.
  • International Journal on Molecular Ecology Resources.
  • Applied Mathematics Letters.
  • Dahl. Informatica, vol 2.
  • Logic Programming Journal, 38(3).
  • Informatica, 22(4). Invited contribution: extension of a previous version which was selected as one of the best papers at NLDB.
  • Andrews, V. Dahl, and F. Journal of Logic Programming, 26(3).
  • Journal of Software Engineering and Knowledge Engineering, vol.
  • Journal of Logic Programming, 12(1).
  • Dahl. What the study of language can contribute to AI. AI Communications, 6(2).
  • Dahl, F. Popowich, and M. Rochemont. A principled characterization of dislocated phrases: Capturing barriers with Static Discontinuity Grammars. Linguistics and Philosophy, 16(4), August.
  • Dahl, G. Sidebottom, and J. Ueberla. Automatic configuration through constraint-based reasoning. Journal of Expert Systems: Research and Applications, 6(4).
  • Incomplete types for logic databases. Applied Mathematics Letters, 4(3).
  • Describing linguistic knowledge about constraints in user-friendly ways. Journal of Expert Systems: Research and Applications, 3(2).
  • Dahl and F. Parsing and generation with Static Discontinuity Grammars. Applied Mathematics Letters.
  • Discontinuous grammars. Computational Intelligence, 5(4).
  • Gramaticas discontinuas: una herramienta computacional con aplicaciones en la teoria de reccion y ligamiento [Discontinuous grammars: a computational tool with applications in government and binding theory]. Revista Argentina de Linguistica, 2(2).
  • More on gapping grammars. In Proc.
  • American Journal of Computational Linguistics, 9(2).
  • Cercone and G. McCalla (eds.).
  • PAAMS, volume 2.
  • Mira et al.
  • Dahl, V. Cabestany et al.
  • In: Ono, H. Sadri and T. Kakas (eds.).
  • In: C. Ramakrishnan and S. Krishnamurthi (eds.).
  • In: K. Apt, V. Marek and D. Warren (eds.). The Logic Programming Paradigm: A year perspective. Springer-Verlag.
  • In: M. Pazienza (ed.). Information Extraction: towards scalable, adaptable systems.
  • In: Wooldridge, M.
  • In: Conen, W.
  • In: J.


  • Dix, L. Pereira and T. Przymusinski (eds.).
  • Jaffar, J.
  • In: A. Sobrino (ed.). Ensayos sobre programacion logica [Essays on logic programming].
  • Comment on implementing Government-Binding theories. In R. Levine (ed.), Formal Linguistics: Theory and Practice. Oxford University Press.
  • Garcia and Y.
  • Abramson and V. Logic Grammars. Springer-Verlag.
  • Dahl and P. Processing techniques for discontinuous grammars. In Meta-Programming for Logic Programming. MIT Press.
  • Logic Programming for Constructive Expert Systems.
  • Database Systems and Applications.

  • In: Proc.
  • In: Gottlob, G. Datalog-2 Conference.