Probabilistic Models of Natural Language Semantics

Author: Ingmar Schuster
Publisher:
ISBN:
Category:
Languages: en
Pages: 0

Book Description

Bayesian Natural Language Semantics and Pragmatics

Author: Henk Zeevat
Publisher: Springer
ISBN: 3319170643
Category: Language Arts & Disciplines
Languages: en
Pages: 256

Book Description
The contributions in this volume focus on the Bayesian interpretation of natural languages, which is widely used in artificial intelligence, cognitive science, and computational linguistics. This is the first volume to take up topics in Bayesian natural language interpretation and make proposals based on information theory, probability theory, and related fields. The methodologies offered here extend to the target semantic and pragmatic analyses of computational natural language interpretation. Bayesian approaches to natural language semantics and pragmatics are based on methods from signal processing and on the causal Bayesian models pioneered especially by Judea Pearl. In signal processing, the Bayesian method finds the most probable interpretation by finding the one that maximizes the product of the prior probability and the likelihood of the interpretation. It thus stresses the importance of a production model for interpretation, as in Grice's contributions to pragmatics or in interpretation by abduction.
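The "prior times likelihood" rule described above can be sketched in a few lines of Python. The interpretations, priors, and likelihoods below are invented toy numbers for illustration, not taken from the book:

```python
# Bayesian interpretation selection: pick the interpretation i that
# maximizes P(i) * P(utterance | i). All numbers here are toy values.

def most_probable_interpretation(utterance, interpretations, prior, likelihood):
    """Return the interpretation maximizing prior(i) * likelihood(utterance, i)."""
    return max(interpretations, key=lambda i: prior[i] * likelihood[(utterance, i)])

# Toy example: disambiguating the word "bank".
interpretations = ["riverbank", "financial-bank"]
prior = {"riverbank": 0.3, "financial-bank": 0.7}
likelihood = {("deposit money at the bank", "riverbank"): 0.05,
              ("deposit money at the bank", "financial-bank"): 0.9}

print(most_probable_interpretation("deposit money at the bank",
                                   interpretations, prior, likelihood))
# → financial-bank  (0.7 * 0.9 = 0.63 beats 0.3 * 0.05 = 0.015)
```

The same one-liner generalizes to any discrete set of candidate interpretations, which is why the production model (the likelihood term) does the pragmatic work.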

The Handbook of Contemporary Semantic Theory

Author: Shalom Lappin
Publisher: John Wiley & Sons
ISBN: 1119046823
Category: Language Arts & Disciplines
Languages: en
Pages: 771

Book Description
The second edition of The Handbook of Contemporary Semantic Theory presents a comprehensive introduction to cutting-edge research in contemporary theoretical and computational semantics. It features completely new content relative to the first edition, with contributions by leading semanticists who introduce core areas of contemporary semantic research while discussing current work. It is suitable for graduate courses in semantic theory and, for advanced researchers, as an introduction to current theoretical work.

Probabilistic Linguistics

Author: Rens Bod
Publisher: A Bradford Book
ISBN: 0262025361
Category: Language Arts & Disciplines
Languages: en
Pages: 465

Book Description
For the past forty years, linguistics has been dominated by the idea that language is categorical and linguistic competence discrete. It has become increasingly clear, however, that many levels of representation, from phonemes to sentence structure, show probabilistic properties, as does the language faculty. Probabilistic linguistics conceptualizes categories as distributions and views knowledge of language not as a minimal set of categorical constraints but as a set of gradient rules that may be characterized by a statistical distribution. Whereas categorical approaches focus on the endpoints of distributions of linguistic phenomena, probabilistic approaches focus on the gradient middle ground. Probabilistic linguistics integrates all the progress made by linguistics thus far with a probabilistic perspective. This book presents a comprehensive introduction to probabilistic approaches to linguistic inquiry. It covers the application of probabilistic techniques to phonology, morphology, semantics, syntax, language acquisition, psycholinguistics, historical linguistics, and sociolinguistics. It also includes a tutorial on elementary probability theory and probabilistic grammars.
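As a minimal illustration of the probabilistic grammars covered in the book's closing tutorial, here is a toy probabilistic context-free grammar sampler. The grammar, its rules, and its weights are invented for this sketch and are not from the book:

```python
import random

# A toy probabilistic context-free grammar: each nonterminal maps to a list
# of (probability, expansion) pairs. Rules and weights are illustrative only.
PCFG = {
    "S":  [(1.0, ["NP", "VP"])],
    "NP": [(0.6, ["the", "N"]), (0.4, ["N"])],
    "VP": [(0.7, ["V", "NP"]), (0.3, ["V"])],
    "N":  [(0.5, ["dog"]), (0.5, ["cat"])],
    "V":  [(1.0, ["sees"])],
}

def expand(symbol):
    """Recursively expand a symbol into a list of terminal words."""
    if symbol not in PCFG:               # terminal symbol: emit it
        return [symbol]
    r, acc = random.random(), 0.0
    for p, rhs in PCFG[symbol]:          # sample a rule by its probability
        acc += p
        if r <= acc:
            return [w for s in rhs for w in expand(s)]
    return [w for s in PCFG[symbol][-1][1] for w in expand(s)]  # float-rounding fallback

random.seed(0)
print(" ".join(expand("S")))
```

Repeated sampling draws sentences with probability equal to the product of the rule probabilities used in their derivation, which is the "gradient rules characterized by a statistical distribution" picture in miniature.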

Probabilistic Models of Pragmatics for Natural Language

Author: Reuben Harry Cohn-Gordon
Publisher:
ISBN:
Category:
Languages: en
Pages:

Book Description
Grice (1975) puts forward a view of linguistic meaning in which conversational agents enrich the semantic interpretation of linguistic expressions by recourse to pragmatic reasoning about their interlocutors and world knowledge. As a simple example, on hearing my friend tell me that she read some of War and Peace, I reason that, had she read all of it, she would have said as much, and accordingly that she read only part. It turns out that this perspective is well suited to a probabilistic formalization. In these terms, linguistic meaning is fully characterized by a joint probability distribution P(W, U) between states of the world W and linguistic expressions U. The Gricean perspective described above corresponds to a factoring of this enormously complex distribution into a semantics [[u]] : U -> (W -> {0, 1}), world knowledge P(W), and a pair of agents which reason about each other on the assumption that both are cooperative and have access to a commonly known semantics. This third component, of back-and-forth reasoning between agents, originates in work in game theory (Franke, 2009; Lewis, 1969) and has been formalized in probabilistic terms by a class of models often collectively referred to as the Rational Speech Acts (RSA) framework (Frank and Goodman, 2012). By allowing for the construction of models which explain in precise terms how Gricean pressures like informativity and relevance interact with a semantics, this framework allows us to take an intuitive theory and explore its predictions beyond the limits of intuition. But it should be more than a theoretical tool. To the extent that its characterization of meaning is correct, it should allow for the construction of computational systems capable of reproducing the dynamics of open-domain natural language.
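The factoring described above, with agents reasoning recursively over a commonly known semantics, can be sketched as a minimal RSA model. The "some"/"all" lexicon and the uniform world prior below are standard toy assumptions for the scalar-implicature case, not the dissertation's actual models:

```python
# Minimal Rational Speech Acts sketch for the "some"/"all" scalar implicature.
# Worlds: she read SOME-NOT-ALL or ALL of the book. The 0/1 lexicon SEM and
# the implicit uniform prior over worlds are toy modeling choices.

WORLDS = ["some-not-all", "all"]
UTTERANCES = ["some", "all"]
SEM = {("some", "some-not-all"): 1, ("some", "all"): 1,   # "some" is true of both
       ("all", "some-not-all"): 0, ("all", "all"): 1}     # "all" only of ALL

def normalize(d):
    z = sum(d.values())
    return {k: v / z for k, v in d.items()}

def literal_listener(u):
    return normalize({w: SEM[(u, w)] for w in WORLDS})    # L0: truth * uniform prior

def speaker(w):
    return normalize({u: literal_listener(u)[w] for u in UTTERANCES})  # S1

def pragmatic_listener(u):
    return normalize({w: speaker(w)[u] for w in WORLDS})  # L1: Bayes over S1

print(pragmatic_listener("some"))
# → {'some-not-all': 0.75, 'all': 0.25}: hearing "some", the pragmatic
#   listener infers "not all", though "some" is literally true of both worlds.
```

The implicature falls out of the recursion alone: an informative speaker in the ALL world would prefer "all", so "some" becomes evidence against that world.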
For instance, on the assumption that humans produce language pragmatically, one would expect systems which generate natural language to most faithfully reproduce human behavior when aiming to be not only truthful, but also informative to a hypothetical interlocutor. Likewise, systems which interpret language in a human-like way should perform best when they model language as being generated by an informative speaker. Despite this, standard approaches to many natural language processing (NLP) tasks, like image captioning (Farhadi et al., 2010; Vinyals et al., 2015), translation (Brown et al., 1990; Bahdanau et al., 2014) and metaphor interpretation (Shutova et al., 2013), only incorporate pragmatic reasoning implicitly (in the sense that a supervised model trained on human data may learn to replicate pragmatic behavior). The approach of this dissertation is to take models which capture dynamics of pragmatic language use and apply them to open-domain settings. In this respect, my work builds on research in this vein for referential expression generation (Monroe and Potts, 2015; Andreas and Klein, 2016a), image captioning (Vedantam et al., 2017) and instruction following (Fried et al., 2017), as well as work using neural networks as generative models in Bayesian cognitive architectures (Wu et al., 2015; Liu et al., 2018). The content of the dissertation divides into two parts. The first (chapter 2) focuses on the interpretation of language (particularly non-literal language) using a model of non-literal language previously applied to hyperbole and metaphor interpretation in a setting with a hand-specified and idealized semantics. Here, the goal is to instantiate the same model, but with a semantics derived from a vector space model of word meaning. In this setting, the model remains unchanged, but states are points in an abstract word embedding space - a central computational linguistic representation of meaning (Mikolov et al., 2013; Pennington et al., 2014). 
The core idea here is that points in the space can be viewed as a continuous analogue of possible worlds, and that linear projections of a vector space are a natural way to represent the aspect of the world that is relevant in a conversation. The second part of the dissertation (chapters 3 and 4) focuses on the production of language, in settings where the length of utterances (and consequently the set of all possible utterances) is unbounded. The core idea here is that pragmatic reasoning can take place incrementally, that is, midway through the saying or hearing of an utterance. This incremental approach is applied to neural language generation tasks, producing informative image captions and translations. The result of these investigations is far from a complete picture, but nevertheless a substantial step towards Bayesian models of semantics and pragmatics which can handle the full richness of natural language, and by doing so provide both explanatory models of meaning and computational systems for producing and interpreting language.
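The "linear projections pick out the relevant aspect of the world" idea can be illustrated with a toy sketch. The three-dimensional "embeddings" and the "size" axis below are invented for illustration, not real word2vec or GloVe vectors:

```python
# Toy illustration: word vectors as continuous "possible worlds", with a
# linear projection selecting the conversationally relevant dimension.
# The 3-d embeddings and the axis direction are invented toy values.

EMB = {"ant":      [0.1, 0.9, 0.2],
       "dog":      [0.5, 0.4, 0.7],
       "elephant": [0.9, 0.1, 0.3]}

SIZE_AXIS = [1.0, -1.0, 0.0]   # hypothetical direction encoding physical size

def project(word, axis=SIZE_AXIS):
    """1-d 'relevant aspect' of a word: its embedding dotted with the axis."""
    return sum(e * a for e, a in zip(EMB[word], axis))

for w in sorted(EMB, key=project):        # order words along the size axis
    print(w, round(project(w), 2))
```

Under this sketch, a hyperbole or metaphor model needs only the scalar `project(word)` values, not the full embedding, once the conversation fixes which axis is relevant.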

Natural Language Semantics Using Probabilistic Logic

Author: Islam Kamel Ahmed Beltagy
Publisher:
ISBN:
Category:
Languages: en
Pages: 352

Book Description
With better natural language semantic representations, computers can understand text more accurately and thus support more applications more effectively. However, no single semantic representation at this time fulfills all the requirements for a satisfactory representation. Logic-based representations like first-order logic capture many linguistic phenomena using logical constructs, and they come with standardized inference mechanisms, but standard first-order logic fails to capture the "graded" aspect of meaning in language. Other approaches to semantics, like distributional models, focus on capturing the "graded" semantic similarity of words and phrases but do not capture sentence structure in the same detail as logic-based approaches. Both aspects of semantics, structure and gradedness, are important for an accurate representation of language meaning. In this work, we propose a natural language semantics representation that uses probabilistic logic (PL) to integrate logical form with weighted, uncertain knowledge. It combines the expressivity and automated inference of logic with the ability to reason under uncertainty. To demonstrate the effectiveness of our semantic representation, we implement and evaluate it on three tasks: recognizing textual entailment (RTE), semantic textual similarity (STS), and open-domain question answering (QA). These tasks can exploit the strengths of our representation and its integration of logical representation and uncertain knowledge. Our semantic representation has three components, Logical Form, Knowledge Base, and Inference, all of which present interesting challenges, and we make new contributions in each of them. The first component is the Logical Form, which is the primary meaning representation. We address two points: how to translate input sentences to logical form, and how to adapt the resulting logical form to PL.
First, we use Boxer, a CCG-based semantic analysis tool, to translate sentences to logical form. We also explore translating dependency trees to logical form. Then, we adapt the logical forms to ensure that universal quantifiers and negations work as expected. The second component is the Knowledge Base, which contains the "uncertain" background knowledge required for a given problem. We collect the relevant lexical information from different linguistic resources, encode it as weighted logical rules, and add these to the knowledge base. We add rules from existing databases, in particular WordNet and the Paraphrase Database (PPDB). Since these are incomplete, we generate additional on-the-fly rules that could be useful. We use alignment techniques to propose rules relevant to a particular problem, exploring two alignment methods, one based on Robinson's resolution and the other on graph matching. We automatically annotate the proposed rules and use them to learn weights for unseen rules. The third component is Inference, which is implemented for each task separately. We use the logical form and the knowledge base constructed in the previous two steps to formulate the task as a PL inference problem, then develop a PL inference algorithm optimized for that particular task. We explore the use of two PL frameworks, Markov Logic Networks (MLNs) and Probabilistic Soft Logic (PSL), discuss which framework works best for each task, and present new inference algorithms for each framework.
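A minimal sketch of the kind of weighted-rule inference that MLNs perform, by exhaustive enumeration over a two-atom toy domain invented for illustration (real MLN systems use far more scalable inference):

```python
import itertools, math

# Markov-Logic-style sketch: a world is a truth assignment to ground atoms;
# its unnormalized probability is exp(sum of weights of satisfied rules).
# Atoms, rules, and weights below are toy choices, not the dissertation's.

ATOMS = ["man(socrates)", "mortal(socrates)"]
RULES = [
    (5.0, lambda w: (not w["man(socrates)"]) or w["mortal(socrates)"]),  # man -> mortal
    (2.0, lambda w: w["man(socrates)"]),                                 # soft evidence
]

def score(world):
    """Sum of weights of the rules this world satisfies."""
    return sum(wt for wt, rule in RULES if rule(world))

worlds = [dict(zip(ATOMS, vals))
          for vals in itertools.product([False, True], repeat=len(ATOMS))]
Z = sum(math.exp(score(w)) for w in worlds)            # partition function

# Marginal probability that mortal(socrates) holds.
p_mortal = sum(math.exp(score(w)) for w in worlds if w["mortal(socrates)"]) / Z
print(round(p_mortal, 3))
```

Because both rules are soft (finite weights), the query gets a graded probability rather than a hard entailment, which is exactly the "gradedness" that pure first-order logic cannot express.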

Semantics-Oriented Natural Language Processing

Author: Vladimir A. Fomichov
Publisher: Springer Science & Business Media
ISBN: 0387729267
Category: Science
Languages: en
Pages: 340

Book Description
Gluecklich, die wissen, dass hinter allen Sprachen das Unsaegliche steht. (Happy are those who know that behind all languages there is something unsaid.) Rainer Maria Rilke. This book shows in a new way that a solution to a fundamental problem from one scientific field can help to find the solutions to important problems that have emerged in several other fields of science and technology. In modern science, the term "Natural Language" denotes the collection of all languages used as a primary means of communication by people of any country or region; Natural Language (NL) thus includes, in particular, the English, Russian, and German languages. Applied computer systems that process natural language printed or written texts (NL-texts) or oral speech, with respect to the fact that the words are associated with meanings, are called semantics-oriented natural language processing systems (NLPSs). On one hand, this book is a snapshot of the current stage of a research program started many years ago and called Integral Formal Semantics (IFS) of NL. The goal of this program has been to develop formal models and methods that help overcome the difficulties of a logical character associated with the engineering of semantics-oriented NLPSs. The designers of such systems of arbitrary kinds will find in this book formal means and algorithms of great help in their work.

Modern Computational Models of Semantic Discovery in Natural Language

Author: Jan Žižka
Publisher: IGI Global
ISBN: 146668691X
Category: Computers
Languages: en
Pages: 353

Book Description
Language—that is, oral or written content that references abstract concepts in subtle ways—is what sets us apart as a species, and in an age defined by such content, language has become both the fuel and the currency of our modern information society. This has posed a vexing new challenge for linguists and engineers working in the field of language-processing: how do we parse and process not just language itself, but language in vast, overwhelming quantities? Modern Computational Models of Semantic Discovery in Natural Language compiles and reviews the most prominent linguistic theories into a single source that serves as an essential reference for future solutions to one of the most important challenges of our age. This comprehensive publication benefits an audience of students and professionals, researchers, and practitioners of linguistics and language discovery. This book includes a comprehensive range of topics and chapters covering digital media, social interaction in online environments, text and data mining, language processing and translation, and contextual documentation, among others.

Generalized Probabilistic Topic and Syntax Models for Natural Language Processing

Author: William Michael Darling
Publisher:
ISBN:
Category:
Languages: en
Pages:

Book Description