Word-sense disambiguation

Word-sense disambiguation (WSD) is the process of identifying which sense of a word is meant in a sentence or other segment of context. In human language processing and cognition, it is usually subconscious and automatic, but, given the pervasive polysemy of natural language, it can come to conscious attention when ambiguity impairs clarity of communication. In computational linguistics, it is an open problem that affects other language-processing tasks, such as discourse analysis, improving the relevance of search engines, anaphora resolution, coherence, and inference.

Given that natural language reflects neurological reality, as shaped by the abilities provided by the brain's neural networks, computer science has faced a long-term challenge in developing computers' ability to perform natural language processing and machine learning.

Many techniques have been researched, including dictionary-based methods that use the knowledge encoded in lexical resources, supervised machine learning methods in which a classifier is trained for each distinct word on a corpus of manually sense-annotated examples, and completely unsupervised methods that cluster occurrences of words, thereby inducing word senses. Among these, supervised learning approaches have been the most successful algorithms to date.

Accuracy of current algorithms is difficult to state without a host of caveats. In English, accuracy at the coarse-grained (homograph) level is routinely above 90% (as of 2009), with some methods on particular homographs achieving over 96%. On finer-grained sense distinctions, top accuracies from 59.1% to 69.0% have been reported in evaluation exercises (SemEval-2007, Senseval-2), where the baseline accuracy of the simplest possible algorithm of always choosing the most frequent sense was 51.4% and 57%, respectively.
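
The baseline mentioned above is simple enough to sketch in code. The following minimal Python example (assuming the NLTK library with its WordNet data installed) picks WordNet's first-listed synset, which for English is ordered by frequency in sense-tagged corpora, and thus implements the most-frequent-sense heuristic:

    from nltk.corpus import wordnet as wn

    def most_frequent_sense(word, pos=None):
        """Return WordNet's first-listed synset; English WordNet orders
        senses by their frequency in sense-tagged corpora."""
        synsets = wn.synsets(word, pos=pos)
        return synsets[0] if synsets else None

    sense = most_frequent_sense("bank", pos=wn.NOUN)
    if sense:
        print(sense.name(), "-", sense.definition())
        # e.g. bank.n.01 - sloping land (especially the slope beside a body of water)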

Variants

Disambiguation requires two strict inputs: a dictionary to specify the senses which are to be disambiguated and a corpus of language data to be disambiguated (in some methods, a training corpus of language examples is also required). The WSD task has two variants: "lexical sample" (disambiguating the occurrences of a small sample of previously selected target words) and the "all-words" task (disambiguating all the words in a running text). The "all-words" task is generally considered a more realistic form of evaluation, but its corpus is more expensive to produce, because human annotators have to read the definitions for each word in the sequence every time they need to make a tagging judgement, rather than once for a block of instances of the same target word.

History

WSD was first formulated as a distinct computational task during the early days of machine translation in the 1940s, making it one of the oldest problems in computational linguistics. Warren Weaver first introduced the problem in a computational context in his 1949 memorandum on translation.[1] Later, Bar-Hillel (1960) argued[2] that WSD could not be solved by "electronic computer" because of the need in general to model all world knowledge.

In the 1970s, WSD was a subtask of semantic interpretation systems developed within the field of artificial intelligence, starting with Wilks' preference semantics. However, since WSD systems were at the time largely rule-based and hand-coded, they were prone to a knowledge acquisition bottleneck.

By the 1980s large-scale lexical resources, such as the Oxford Advanced Learner's Dictionary of Current English (OALD), became available: hand-coding was replaced with knowledge automatically extracted from these resources, but disambiguation was still knowledge-based or dictionary-based.

In the 1990s, the statistical revolution advanced computational linguistics, and WSD became a paradigm problem on which to apply supervised machine learning techniques.

The 2000s saw supervised techniques reach a plateau in accuracy, and so attention has shifted to coarser-grained senses, domain adaptation, semi-supervised and unsupervised corpus-based systems, combinations of different methods, and the return of knowledge-based systems via graph-based methods. Still, supervised systems continue to perform best.

Difficulties

Differences between dictionaries

One problem with word sense disambiguation is deciding what the senses are, as different dictionaries and thesauruses will provide different divisions of words into senses. Some researchers have suggested choosing a particular dictionary, and using its set of senses to deal with this issue. Generally, however, research results using broad distinctions in senses have been much better than those using narrow ones.[3][4] Most researchers continue to work on fine-grained WSD.

Most research in the field of WSD is performed by using WordNet as a reference sense inventory for English. WordNet is a computational lexicon that encodes concepts as synonym sets (e.g. the concept of car is encoded as { car, auto, automobile, machine, motorcar }). Other resources used for disambiguation purposes include Roget's Thesaurus[5] and Wikipedia.[6] More recently, BabelNet, a multilingual encyclopedic dictionary, has been used for multilingual WSD.[7]

Part-of-speech tagging

In any real test, part-of-speech tagging and sense tagging have proven to be very closely related, with each potentially imposing constraints upon the other. The question of whether these tasks should be kept together or decoupled is still not unanimously resolved, but recently researchers have inclined toward testing them separately (e.g., in the Senseval/SemEval competitions, parts of speech are provided as input for the text to disambiguate).

Both WSD and part-of-speech tagging involve disambiguating or tagging words. However, algorithms used for one do not tend to work well for the other, mainly because the part of speech of a word is primarily determined by the immediately adjacent one to three words, whereas the sense of a word may be determined by words further away. The success rate for part-of-speech tagging algorithms is at present much higher than that for WSD, with the state of the art at around 96%[8] accuracy or better, compared to less than 75% accuracy in word-sense disambiguation with supervised learning. These figures are typical for English and may be very different from those for other languages.

Inter-judge variance

Another problem is inter-judge variance. WSD systems are normally tested by having their results on a task compared against those of a human. However, while it is relatively easy to assign parts of speech to text, training people to tag senses has proven to be far more difficult.[9] While users can memorize all of the possible parts of speech a word can take, it is often impossible for individuals to memorize all of the senses a word can take. Moreover, humans do not agree on the task at hand – given a list of senses and sentences, humans will not always agree on which sense a word belongs to.[10]

As human performance serves as the standard, it is an upper bound for computer performance. Human performance, however, is much better on coarse-grained than on fine-grained distinctions, which again is why research on coarse-grained distinctions[11][12] has been put to the test in recent WSD evaluation exercises.[3][4]

Pragmatics

Some AI researchers, such as Douglas Lenat, argue that one cannot parse meanings from words without some form of common-sense ontology. This linguistic issue is called pragmatics. As researchers agree, to properly identify senses of words one must know common-sense facts.[13] Moreover, common sense is sometimes needed to disambiguate words such as pronouns when the text contains anaphora or cataphora.

Sense inventory and algorithms' task-dependency

A task-independent sense inventory is not a coherent concept:[14] each task requires its own division of word meaning into senses relevant to the task. Additionally, completely different algorithms might be required by different applications. In machine translation, the problem takes the form of target word selection. The "senses" are words in the target language, which often correspond to significant meaning distinctions in the source language ("bank" could translate to the French "banque", that is, 'financial bank', or "rive", that is, 'edge of river'). In information retrieval, a sense inventory is not necessarily required, because it is enough to know that a word is used in the same sense in the query and in a retrieved document; which sense that is, is unimportant.

Discreteness of senses

Finally, the very notion of "word sense" is slippery and controversial. Most people can agree on distinctions at the coarse-grained homograph level (e.g., pen as writing instrument or enclosure), but go down one level to fine-grained polysemy, and disagreements arise. For example, in Senseval-2, which used fine-grained sense distinctions, human annotators agreed on only 85% of word occurrences.[15] Word meaning is in principle infinitely variable and context-sensitive. It does not divide up easily into distinct or discrete sub-meanings.[16] Lexicographers frequently discover in corpora loose and overlapping word meanings, and standard or conventional meanings extended, modulated, and exploited in a bewildering variety of ways. The art of lexicography is to generalize from the corpus to definitions that evoke and explain the full range of meaning of a word, making it seem like words are well-behaved semantically. However, it is not at all clear if these same meaning distinctions are applicable in computational applications, as the decisions of lexicographers are usually driven by other considerations. In 2009, a task – named lexical substitution – was proposed as a possible solution to the sense discreteness problem.[17] The task consists of providing a substitute for a word in context that preserves the meaning of the original word (potentially, substitutes can be chosen from the full lexicon of the target language, thus overcoming discreteness).

Approaches and methods

There are two main approaches to WSD – deep approaches and shallow approaches.

Deep approaches presume access to a comprehensive body of world knowledge. These approaches are generally not considered to be very successful in practice, mainly because such a body of knowledge does not exist in a computer-readable format outside very limited domains.[18] Additionally, given the long tradition in computational linguistics of attempting such approaches with hand-coded knowledge, it can in some cases be hard to distinguish linguistic knowledge from world knowledge. The first attempt was that by Margaret Masterman and her colleagues at the Cambridge Language Research Unit in England in the 1950s. This attempt used as data a punched-card version of Roget's Thesaurus and its numbered "heads" as indicators of topics, and looked for repetitions in text using a set-intersection algorithm. It was not very successful,[19] but it had strong relationships to later work, especially Yarowsky's machine-learning optimisation of a thesaurus method in the 1990s.

Shallow approaches do not try to understand the text, but instead consider the surrounding words. The rules for interpreting such contexts can be automatically derived by the computer, using a training corpus of words tagged with their word senses. This approach, while theoretically not as powerful as deep approaches, gives superior results in practice, owing to the computer's limited world knowledge.

There are four conventional approaches to WSD:

  • Dictionary- and knowledge-based methods: these rely primarily on dictionaries, thesauri, and lexical knowledge bases, without using any corpus evidence.
  • Semi-supervised or minimally supervised methods: these make use of a secondary source of knowledge, such as a small annotated corpus as seed data in a bootstrapping process, or a word-aligned bilingual corpus.
  • Supervised methods: these make use of sense-annotated corpora to train from.
  • Unsupervised methods: these eschew (almost) completely external information and work directly from raw, unannotated corpora. These methods are also known under the name of word sense discrimination.

Almost all these approaches work by defining a window of n content words around each word to be disambiguated in the corpus, and statistically analyzing those n surrounding words. Two shallow approaches used to train and then disambiguate are Naïve Bayes classifiers and decision trees. In recent research, kernel-based methods such as support vector machines have shown superior performance in supervised learning. Graph-based approaches have also gained much attention from the research community, and currently achieve performance close to the state of the art.
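
As an illustration of this window-based setup, the following minimal Python sketch (assuming scikit-learn; the tiny sense-tagged training set is invented for illustration, whereas real systems train on corpora such as SemCor) builds bag-of-words features from the n words around the target and trains a Naïve Bayes classifier:

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    def context_window(tokens, target_index, n=3):
        """Join the n tokens on each side of the target into one feature string."""
        left = tokens[max(0, target_index - n):target_index]
        right = tokens[target_index + 1:target_index + 1 + n]
        return " ".join(left + right)

    # Invented sense-tagged examples for the target word "bass".
    train = [
        (["he", "played", "the", "bass", "on", "stage"], 3, "bass/music"),
        (["she", "caught", "a", "bass", "in", "the", "lake"], 3, "bass/fish"),
        (["the", "bass", "line", "drives", "the", "song"], 1, "bass/music"),
        (["grilled", "bass", "with", "lemon"], 1, "bass/fish"),
    ]
    features = [context_window(tokens, i) for tokens, i, _ in train]
    labels = [sense for _, _, sense in train]

    classifier = make_pipeline(CountVectorizer(), MultinomialNB())
    classifier.fit(features, labels)

    test = ["she", "played", "a", "bass", "solo", "on", "stage"]
    print(classifier.predict([context_window(test, 3)])[0])  # likely "bass/music"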

Dictionary- and knowledge-based methods

The Lesk algorithm[20] is the seminal dictionary-based method. It is based on the hypothesis that words used together in text are related to each other and that the relation can be observed in the definitions of the words and their senses. Two (or more) words are disambiguated by finding the pair of dictionary senses with the greatest word overlap in their dictionary definitions. For example, when disambiguating the words in "pine cone", the definitions of the appropriate senses both include the words evergreen and tree (at least in one dictionary). A similar approach[21] searches for the shortest path between two words: the second word is iteratively searched among the definitions of every semantic variant of the first word, then among the definitions of every semantic variant of each word in the previous definitions and so on. Finally, the first word is disambiguated by selecting the semantic variant which minimizes the distance from the first to the second word.
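
For illustration, here is a minimal sketch of the simplified Lesk idea in Python (assuming NLTK with WordNet data; full implementations also count overlaps with example sentences and the glosses of related senses):

    from nltk.corpus import wordnet as wn

    STOPWORDS = {"the", "a", "an", "of", "from", "and", "or", "to", "in", "on"}

    def simplified_lesk(word, context_words, pos=None):
        """Pick the sense whose gloss shares the most words with the context."""
        context = {w.lower() for w in context_words} - STOPWORDS
        best, best_overlap = None, -1
        for sense in wn.synsets(word, pos=pos):
            gloss = set(sense.definition().lower().split()) - STOPWORDS
            overlap = len(gloss & context)
            if overlap > best_overlap:
                best, best_overlap = sense, overlap
        return best

    sentence = "the pine cone fell from the evergreen tree".split()
    sense = simplified_lesk("pine", sentence, pos=wn.NOUN)
    if sense:
        print(sense.name(), "-", sense.definition())
        # e.g. pine.n.01 - a coniferous tree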

An alternative to the use of the definitions is to consider general word-sense relatedness and to compute the semantic similarity of each pair of word senses based on a given lexical knowledge base such as WordNet. Graph-based methods reminiscent of spreading-activation research from the early days of AI have been applied with some success. More complex graph-based approaches have been shown to perform almost as well as supervised methods,[22] or even to outperform them in specific domains.[3][23] Recently, it has been reported that simple graph connectivity measures, such as degree, perform state-of-the-art WSD in the presence of a sufficiently rich lexical knowledge base.[24] Also, automatically transferring knowledge in the form of semantic relations from Wikipedia to WordNet has been shown to boost simple knowledge-based methods, enabling them to rival the best supervised systems and even outperform them in a domain-specific setting.[25]
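
As a sketch of the degree idea, the following Python fragment (assuming the networkx library; the toy sense graph is invented, standing in for a rich lexical knowledge base such as WordNet enriched with Wikipedia relations) picks, for each word, the candidate sense with the most connections among the candidate senses of the whole context:

    import networkx as nx

    # Candidate senses for each context word (invented labels).
    candidates = {
        "bank": ["bank#finance", "bank#river"],
        "interest": ["interest#money", "interest#curiosity"],
    }

    # Invented semantic relations between sense nodes.
    graph = nx.Graph()
    graph.add_edges_from([
        ("bank#finance", "interest#money"),
        ("bank#finance", "loan#finance"),
        ("interest#money", "loan#finance"),
        ("bank#river", "shore#geography"),
    ])

    # Induce the subgraph over all candidate senses and rank senses by degree.
    nodes = [s for senses in candidates.values() for s in senses if s in graph]
    subgraph = graph.subgraph(nodes)
    for word, senses in candidates.items():
        best = max(senses, key=lambda s: subgraph.degree(s) if s in subgraph else 0)
        print(word, "->", best)   # bank -> bank#finance, interest -> interest#money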

The use of selectional preferences (or selectional restrictions) is also useful: for example, knowing that one typically cooks food, one can disambiguate the word bass in "I am cooking basses" (i.e., it is not a musical instrument).
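
A toy Python sketch of this idea (the verb-preference table and the sense inventory are invented for illustration):

    # Selectional preferences: verb -> semantic class its object typically has.
    PREFERENCES = {"cook": "food", "play": "instrument"}

    # Invented sense inventory mapping semantic classes to senses of "bass".
    SENSES = {"bass": {"food": "bass/fish", "instrument": "bass/music"}}

    def disambiguate(verb, noun):
        """Choose the noun sense whose semantic class the verb prefers."""
        preferred_class = PREFERENCES.get(verb)
        return SENSES.get(noun, {}).get(preferred_class)

    print(disambiguate("cook", "bass"))  # -> bass/fish
    print(disambiguate("play", "bass"))  # -> bass/music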

Supervised methods

Supervised methods are based on the assumption that the context can provide enough evidence on its own to disambiguate words (hence, common sense and reasoning are deemed unnecessary). Probably every machine learning algorithm has been applied to WSD, including associated techniques such as feature selection, parameter optimization, and ensemble learning. Support vector machines and memory-based learning have been shown to be the most successful approaches to date, probably because they can cope with the high dimensionality of the feature space. However, these supervised methods are subject to a new knowledge acquisition bottleneck, since they rely on substantial amounts of manually sense-tagged corpora for training, which are laborious and expensive to create.

Semi-supervised methods

Because of the lack of training data, many word-sense disambiguation algorithms use semi-supervised learning, which allows both labeled and unlabeled data. The Yarowsky algorithm was an early example of such an algorithm.[26] It uses the ‘one sense per collocation’ and the ‘one sense per discourse’ properties of human languages for word-sense disambiguation. From observation, words tend to exhibit only one sense in most discourses and in a given collocation.[27]

The bootstrapping approach starts from a small amount of seed data for each word: either manually tagged training examples or a small number of surefire decision rules (e.g., 'play' in the context of 'bass' almost always indicates the musical instrument). The seeds are used to train an initial classifier, using any supervised method. This classifier is then used on the untagged portion of the corpus to extract a larger training set, in which only the most confident classifications are included. The process repeats, each new classifier being trained on a successively larger training corpus, until the whole corpus is consumed, or until a given maximum number of iterations is reached.
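
Schematically, the bootstrapping loop just described can be sketched as follows in Python; train and classify_with_confidence stand in for an arbitrary supervised method and its confidence estimate, and are assumptions rather than a specific published system:

    def bootstrap(seed_examples, untagged, train, classify_with_confidence,
                  threshold=0.9, max_iterations=10):
        """Grow a sense-tagged training set from seeds, as described above."""
        labeled = list(seed_examples)      # (context, sense) pairs
        pool = list(untagged)              # contexts without sense labels
        for _ in range(max_iterations):
            model = train(labeled)
            confident, remaining = [], []
            for context in pool:
                sense, confidence = classify_with_confidence(model, context)
                if confidence >= threshold:
                    confident.append((context, sense))  # promote to training data
                else:
                    remaining.append(context)
            if not confident:              # no confident classifications left
                break
            labeled.extend(confident)
            pool = remaining
            if not pool:                   # whole corpus consumed
                break
        return train(labeled)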

Other semi-supervised techniques use large quantities of untagged corpora to provide co-occurrence information that supplements the tagged corpora. These techniques have the potential to help in the adaptation of supervised models to different domains.

Also, an ambiguous word in one language is often translated into different words in a second language depending on the sense of the word. Word-aligned bilingual corpora have been used to infer cross-lingual sense distinctions, a kind of semi-supervised system.

Unsupervised methods

Unsupervised learning is the greatest challenge for WSD researchers. The underlying assumption is that similar senses occur in similar contexts, and thus senses can be induced from text by clustering word occurrences using some measure of similarity of context,[28] a task referred to as word sense induction or discrimination. Then, new occurrences of the word can be classified into the closest induced clusters/senses. Performance has been lower than for the other methods described above, but comparisons are difficult, since the induced senses must be mapped to a known dictionary of word senses. If a mapping to a set of dictionary senses is not desired, cluster-based evaluations (including measures of entropy and purity) can be performed. Alternatively, word sense induction methods can be tested and compared within an application. For instance, it has been shown that word sense induction improves Web search result clustering by increasing the quality of result clusters and the degree of diversification of result lists.[29][30] It is hoped that unsupervised learning will overcome the knowledge acquisition bottleneck, because such methods are not dependent on manual effort.
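
A minimal Python sketch of sense induction by clustering contexts (assuming scikit-learn; the handful of contexts is invented, whereas real systems cluster occurrences drawn from large corpora):

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.cluster import KMeans

    # Invented contexts of the ambiguous word "bank".
    contexts = [
        "deposited money at the bank downtown",
        "the bank lent us money for the mortgage",
        "fishing on the bank of the river",
        "the river bank was muddy after the rain",
    ]

    features = TfidfVectorizer(stop_words="english").fit_transform(contexts)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
    for context, label in zip(contexts, labels):
        print(f"induced sense {label}: {context}")
    # Contexts sharing "money" and contexts sharing "river" should fall into
    # separate clusters; new occurrences are then assigned to the closest cluster.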

Representing words in context through fixed-size dense vectors (word embeddings) has become one of the most fundamental building blocks in several NLP systems.[31][32][33] Even though most traditional word-embedding techniques conflate words with multiple meanings into a single vector representation, they can still be used to improve WSD.[34] A simple approach to employing pre-computed word embeddings to represent word senses is to compute the centroids of sense clusters.[35][36] In addition to word-embedding techniques, lexical databases (e.g., WordNet, ConceptNet, BabelNet) can also assist unsupervised systems in mapping words and their senses as dictionaries. Some techniques that combine lexical databases and word embeddings are presented in AutoExtend[37][38] and Most Suitable Sense Annotation (MSSA).[39] AutoExtend[38] presents a method that decouples an object's input representation into its properties, such as words and their word senses. AutoExtend uses a graph structure to map word objects (e.g., text) and non-word objects (e.g., synsets in WordNet) as nodes, and the relationships between nodes as edges. The relations (edges) in AutoExtend can express either addition or similarity between their nodes: the former captures the intuition behind the offset calculus,[31] while the latter defines the similarity between two nodes. In MSSA,[39] an unsupervised disambiguation system uses the similarity between word senses in a fixed context window to select the most suitable word sense using a pre-trained word-embedding model and WordNet. For each context window, MSSA calculates the centroid of each word-sense definition by averaging the word vectors of the words in its WordNet glosses (i.e., a short defining gloss and one or more usage examples) using a pre-trained word-embedding model. These centroids are later used to select the word sense with the highest similarity of a target word to its immediately adjacent neighbors (i.e., predecessor and successor words). After all words are annotated and disambiguated, they can be used as a training corpus in any standard word-embedding technique. In its improved version, MSSA can make use of word-sense embeddings to repeat its disambiguation process iteratively.
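
The gloss-centroid step can be sketched as follows in Python (assuming NumPy; the miniature sense inventory, its glosses, and the two-dimensional toy vectors are all invented, standing in for WordNet glosses and pre-trained embeddings such as word2vec or GloVe):

    import numpy as np

    # Invented mini sense inventory (real systems use WordNet definitions).
    SENSES = {
        "bank#finance": "an institution that accepts deposits of money",
        "bank#river": "the sloping land beside a body of water",
    }

    # Invented 2-d "embeddings" (real systems use pre-trained vectors).
    EMBEDDINGS = {
        "money": np.array([1.0, 0.0]), "deposit": np.array([0.9, 0.1]),
        "deposits": np.array([0.9, 0.1]), "institution": np.array([0.8, 0.2]),
        "water": np.array([0.0, 1.0]), "land": np.array([0.1, 0.9]),
        "sloping": np.array([0.2, 0.8]),
    }

    def centroid(words):
        """Average the embeddings of the words we have vectors for."""
        vectors = [EMBEDDINGS[w] for w in words if w in EMBEDDINGS]
        return np.mean(vectors, axis=0) if vectors else None

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    def disambiguate(word, context_words):
        """Pick the sense whose gloss centroid is closest to the context centroid."""
        ctx = centroid(context_words)
        best, best_sim = None, -1.0
        for sense, gloss in SENSES.items():
            if not sense.startswith(word + "#"):
                continue
            vec = centroid(gloss.lower().split())
            if ctx is None or vec is None:
                continue
            sim = cosine(ctx, vec)
            if sim > best_sim:
                best, best_sim = sense, sim
        return best

    print(disambiguate("bank", ["deposit", "money"]))  # -> bank#finance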

Other approaches

Other approaches vary in their methods:

Other languages

  • Hindi: A lack of lexical resources in Hindi has hindered the performance of supervised models of WSD, while unsupervised models suffer due to extensive morphology. A possible solution to this problem is the design of a WSD model by means of parallel corpora.[48][49] The creation of the Hindi WordNet has paved the way for several supervised methods that have been proven to produce higher accuracy in disambiguating nouns.[50]

Local impediments and summary

The knowledge acquisition bottleneck is perhaps the major impediment to solving the WSD problem. Unsupervised methods rely on knowledge about word senses, which is only sparsely formulated in dictionaries and lexical databases. Supervised methods depend crucially on the existence of manually annotated examples for every word sense, a requisite that can so far be met only for a handful of words for testing purposes, as is done in the Senseval exercises.

One of the most promising trends in WSD research is using the largest corpus ever accessible, the World Wide Web, to acquire lexical information automatically.[51] WSD has been traditionally understood as an intermediate language engineering technology which could improve applications such as information retrieval (IR). In this case, however, the reverse is also true: web search engines implement simple and robust IR techniques that can successfully mine the Web for information to use in WSD. The historic lack of training data has provoked the appearance of some new algorithms and techniques, as described in Automatic acquisition of sense-tagged corpora.

External knowledge sources

Knowledge is a fundamental component of WSD. Knowledge sources provide data which are essential to associate senses with words. They can vary from corpora of texts, either unlabeled or annotated with word senses, to machine-readable dictionaries, thesauri, glossaries, ontologies, etc. They can be[52][53] classified as follows:

Structured:

  1. Machine-readable dictionaries (MRDs)
  2. Ontologies
  3. Thesauri

Unstructured:

  1. Collocation resources
  2. Other resources (such as word frequency lists, stoplists, domain labels,[54] etc.)
  3. Corpora: raw corpora and sense-annotated corpora

Evaluation

Comparing and evaluating different WSD systems is extremely difficult because of the different test sets, sense inventories, and knowledge resources adopted. Before the organization of specific evaluation campaigns, most systems were assessed on in-house, often small-scale, data sets. In order to test an algorithm, developers must spend time annotating all word occurrences. And comparing methods, even on the same corpus, is not possible if different sense inventories are used.

In order to define common evaluation datasets and procedures, public evaluation campaigns have been organized. Senseval (now renamed SemEval) is an international word-sense disambiguation competition, held every three years since 1998: Senseval-1 (1998), Senseval-2 (2001), Senseval-3 (2004), and its successor, SemEval (2007). The objective of the competition is to organize different tasks, to prepare and hand-annotate corpora for testing systems, and to perform a comparative evaluation of WSD systems in several kinds of tasks, including all-words and lexical-sample WSD for different languages and, more recently, new tasks such as semantic role labeling, gloss WSD, lexical substitution, etc. The systems submitted for evaluation to these competitions usually integrate different techniques and often combine supervised and knowledge-based methods (especially to avoid bad performance when training examples are lacking).

In the years 2007–2012, the choice of WSD evaluation tasks grew, and the criterion for evaluating WSD changed drastically depending on the variant of the WSD evaluation task. The variety of WSD tasks is enumerated below:

Task design choices

As technology evolves, word-sense disambiguation tasks grow in different flavors, toward various research directions and for more languages:

  • Classic monolingual WSD evaluation tasks use WordNet as the sense inventory and are largely based on supervised/semi-supervised classification with manually sense-annotated corpora:[55]
    • Classic English WSD uses the Princeton WordNet as its sense inventory, and the primary classification input is normally based on the SemCor corpus.
    • Classical WSD for other languages uses their respective WordNets as sense inventories and sense-annotated corpora tagged in the respective languages. Researchers often also tap the SemCor corpus and bitexts aligned with English as the source language.
  • The cross-lingual WSD evaluation task focuses on WSD across two or more languages simultaneously. Unlike the multilingual WSD tasks, instead of providing manually sense-annotated examples for each sense of a polysemous noun, the sense inventory is built up on the basis of parallel corpora, e.g., the Europarl corpus.[56]
  • Multilingual WSD evaluation tasks focus on WSD across two or more languages simultaneously, using their respective WordNets or BabelNet as multilingual sense inventories.[57] They evolved from the Translation WSD evaluation tasks that took place in Senseval-2. A popular approach is to carry out monolingual WSD and then map the source-language senses into the corresponding target-word translations.[58]
  • The word sense induction and disambiguation task is a combined task evaluation where the sense inventory is first induced from a fixed training data set, consisting of polysemous words and the sentences in which they occur, and then WSD is performed on a different testing data set.[59]

Software

  • Babelfy,[60] a unified state-of-the-art system for multilingual Word Sense Disambiguation and Entity Linking
  • BabelNet API,[61] a Java API for knowledge-based multilingual Word Sense Disambiguation in 6 different languages using the BabelNet semantic network
  • WordNet::SenseRelate,[62] a project that includes free, open source systems for word sense disambiguation and lexical sample sense disambiguation
  • UKB: Graph Base WSD,[63] a collection of programs for performing graph-based Word Sense Disambiguation and lexical similarity/relatedness using a pre-existing Lexical Knowledge Base[64]
  • pyWSD,[65] Python implementations of Word Sense Disambiguation (WSD) technologies

Source: "Word-sense disambiguation", Wikipedia, Wikimedia Foundation, (2023, March 14th), https://en.wikipedia.org/wiki/Word-sense_disambiguation.

References
  1. ^ Weaver 1949.
  2. ^ Bar-Hillel 1964, pp. 174–179.
  3. ^ a b c Navigli, Litkowski & Hargraves 2007, pp. 30–35.
  4. ^ a b Pradhan et al. 2007, pp. 87–92.
  5. ^ Yarowsky 1992, pp. 454–460.
  6. ^ Mihalcea 2007.
  7. ^ A. Moro, A. Raganato, R. Navigli. Entity Linking meets Word Sense Disambiguation: a Unified Approach Archived 2014-08-08 at the Wayback Machine. Transactions of the Association for Computational Linguistics (TACL), 2, pp. 231-244, 2014.
  8. ^ Martinez, Angel R. (January 2012). "Part-of-speech tagging: Part-of-speech tagging". Wiley Interdisciplinary Reviews: Computational Statistics. 4 (1): 107–113. doi:10.1002/wics.195. S2CID 62672734.
  9. ^ Fellbaum 1997.
  10. ^ Snyder & Palmer 2004, pp. 41–43.
  11. ^ Navigli 2006, pp. 105–112.
  12. ^ Snow et al. 2007, pp. 1005–1014.
  13. ^ Lenat.
  14. ^ Palmer, Babko-Malaya & Dang 2004, pp. 49–56.
  15. ^ Edmonds 2000.
  16. ^ Kilgarrif 1997, pp. 91–113.
  17. ^ McCarthy & Navigli 2009, pp. 139–159.
  18. ^ Lenat & Guha 1989.
  19. ^ Wilks, Slator & Guthrie 1996.
  20. ^ Lesk 1986, pp. 24–26.
  21. ^ Diamantini, C.; Mircoli, A.; Potena, D.; Storti, E. (2015-06-01). "Semantic disambiguation in a social information discovery system". 2015 International Conference on Collaboration Technologies and Systems (CTS): 326–333. doi:10.1109/CTS.2015.7210442. ISBN 978-1-4673-7647-1. S2CID 13260353.
  22. ^ Navigli & Velardi 2005, pp. 1063–1074.
  23. ^ Agirre, Lopez de Lacalle & Soroa 2009, pp. 1501–1506.
  24. ^ Navigli & Lapata 2010, pp. 678–692.
  25. ^ Ponzetto & Navigli 2010, pp. 1522–1531.
  26. ^ Yarowsky 1995, pp. 189–196.
  27. ^ Mitkov, Ruslan (2004). "13.5.3 Two claims about senses". The Oxford Handbook of Computational Linguistics. OUP. p. 257. ISBN 978-0-19-927634-9.
  28. ^ Schütze 1998, pp. 97–123.
  29. ^ Navigli & Crisafulli 2010.
  30. ^ DiMarco & Navigli 2013.
  31. ^ a b Mikolov, Tomas; Chen, Kai; Corrado, Greg; Dean, Jeffrey (2013-01-16). "Efficient Estimation of Word Representations in Vector Space". arXiv:1301.3781 [cs.CL].
  32. ^ Pennington, Jeffrey; Socher, Richard; Manning, Christopher (2014). "Glove: Global Vectors for Word Representation". Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). Stroudsburg, PA, USA: Association for Computational Linguistics: 1532–1543. doi:10.3115/v1/d14-1162. S2CID 1957433.
  33. ^ Bojanowski, Piotr; Grave, Edouard; Joulin, Armand; Mikolov, Tomas (December 2017). "Enriching Word Vectors with Subword Information". Transactions of the Association for Computational Linguistics. 5: 135–146. doi:10.1162/tacl_a_00051. ISSN 2307-387X.
  34. ^ Iacobacci, Ignacio; Pilehvar, Mohammad Taher; Navigli, Roberto (2016). "Embeddings for Word Sense Disambiguation: An Evaluation Study". Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Berlin, Germany: Association for Computational Linguistics: 897–907. doi:10.18653/v1/P16-1085.
  35. ^ Bhingardive, Sudha; Singh, Dhirendra; V, Rudramurthy; Redkar, Hanumant; Bhattacharyya, Pushpak (2015). "Unsupervised Most Frequent Sense Detection using Word Embeddings". Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Denver, Colorado: Association for Computational Linguistics: 1238–1243. doi:10.3115/v1/N15-1132. S2CID 10778029.
  36. ^ Butnaru, Andrei; Ionescu, Radu Tudor; Hristea, Florentina (2017). "ShotgunWSD: An unsupervised algorithm for global word sense disambiguation inspired by DNA sequencing". Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: 916–926. arXiv:1707.08084.
  37. ^ Rothe, Sascha; Schütze, Hinrich (2015). "AutoExtend: Extending Word Embeddings to Embeddings for Synsets and Lexemes". Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Stroudsburg, PA, USA: Association for Computational Linguistics: 1793–1803. arXiv:1507.01127. Bibcode:2015arXiv150701127R. doi:10.3115/v1/p15-1173. S2CID 15687295.
  38. ^ a b Rothe, Sascha; Schütze, Hinrich (September 2017). "AutoExtend: Combining Word Embeddings with Semantic Resources". Computational Linguistics. 43 (3): 593–617. doi:10.1162/coli_a_00294. ISSN 0891-2017.
  39. ^ a b Ruas, Terry; Grosky, William; Aizawa, Akiko (December 2019). "Multi-sense embeddings through a word sense disambiguation process". Expert Systems with Applications. 136: 288–303. arXiv:2101.08700. doi:10.1016/j.eswa.2019.06.026. hdl:2027.42/145475. S2CID 52225306.
  40. ^ Gliozzo, Magnini & Strapparava 2004, pp. 380–387.
  41. ^ Buitelaar et al. 2006, pp. 275–298.
  42. ^ McCarthy et al. 2007, pp. 553–590.
  43. ^ Mohammad & Hirst 2006, pp. 121–128.
  44. ^ Lapata & Keller 2007, pp. 348–355.
  45. ^ Ide, Erjavec & Tufis 2002, pp. 54–60.
  46. ^ Chan & Ng 2005, pp. 1037–1042.
  47. ^ Stuart M. Shieber (1992). Constraint-based Grammar Formalisms: Parsing and Type Inference for Natural and Computer Languages. MIT Press. ISBN 978-0-262-19324-5.
  48. ^ Bhattacharya, Indrajit, Lise Getoor, and Yoshua Bengio. Unsupervised sense disambiguation using bilingual probabilistic models. Proceedings of the 42nd Annual Meeting on Association for Computational Linguistics. Association for Computational Linguistics, 2004.
  49. ^ Diab, Mona, and Philip Resnik. An unsupervised method for word sense tagging using parallel corpora. Proceedings of the 40th Annual Meeting on Association for Computational Linguistics. Association for Computational Linguistics, 2002.
  50. ^ Manish Sinha, Mahesh Kumar, Prabhakar Pande, Laxmi Kashyap, and Pushpak Bhattacharyya. Hindi word sense disambiguation. In International Symposium on Machine Translation, Natural Language Processing and Translation Support Systems, Delhi, India, 2004.
  51. ^ Kilgarriff & Grefenstette 2003, pp. 333–347.
  52. ^ Litkowski 2005, pp. 753–761.
  53. ^ Agirre & Stevenson 2006, pp. 217–251.
  54. ^ Magnini & Cavaglià 2000, pp. 1413–1418.
  55. ^ Lucia Specia, Maria das Gracas Volpe Nunes, Gabriela Castelo Branco Ribeiro, and Mark Stevenson. Multilingual versus monolingual WSD Archived 2012-04-10 at the Wayback Machine. In EACL-2006 Workshop on Making Sense of Sense: Bringing Psycholinguistics and Computational Linguistics Together, pages 33–40, Trento, Italy, April 2006.
  56. ^ Els Lefever and Véronique Hoste. SemEval-2010 task 3: cross-lingual word sense disambiguation. Proceedings of the Workshop on Semantic Evaluations: Recent Achievements and Future Directions, June 4, 2009, Boulder, Colorado.
  57. ^ R. Navigli, D. A. Jurgens, D. Vannella. SemEval-2013 Task 12: Multilingual Word Sense Disambiguation. Proceedings of the 7th International Workshop on Semantic Evaluation (SemEval), in the Second Joint Conference on Lexical and Computational Semantics (*SEM 2013), Atlanta, USA, June 14–15, 2013, pp. 222–231.
  58. ^ Lucia Specia, Maria das Gracas Volpe Nunes, Gabriela Castelo Branco Ribeiro, and Mark Stevenson. Multilingual versus monolingual WSD Archived 2012-04-10 at the Wayback Machine. In EACL-2006 Workshop on Making Sense of Sense: Bringing Psycholinguistics and Computational Linguistics Together, pages 33–40, Trento, Italy, April 2006.
  59. ^ Eneko Agirre and Aitor Soroa. SemEval-2007 task 02: evaluating word sense induction and discrimination systems. Proceedings of the 4th International Workshop on Semantic Evaluations, pp. 7–12, June 23–24, 2007, Prague, Czech Republic.
  60. ^ "Babelfy". Babelfy. Retrieved 2018-03-22.
  61. ^ "BabelNet API". Babelnet.org. Retrieved 2018-03-22.
  62. ^ "WordNet::SenseRelate". Senserelate.sourceforge.net. Retrieved 2018-03-22.
  63. ^ "UKB: Graph Base WSD". Ixa2.si.ehu.es. Retrieved 2018-03-22.
  64. ^ "Lexical Knowledge Base (LKB)". Moin.delph-in.net. 2018-02-05. Retrieved 2018-03-22.
  65. ^ alvations. "pyWSD". Github.com. Retrieved 2018-03-22.
Works cited
External links and suggested reading
  • Computational Linguistics Special Issue on Word Sense Disambiguation (1998)
  • Evaluation Exercises for Word Sense Disambiguation. The de facto standard benchmarks for WSD systems.
  • Roberto Navigli. Word Sense Disambiguation: A Survey, ACM Computing Surveys, 41(2), 2009, pp. 1–69. An up-to-date survey of the state of the art of the field.
  • Word Sense Disambiguation as defined in Scholarpedia
  • Word Sense Disambiguation: The State of the Art (PDF). A comprehensive overview by Nancy Ide & Jean Véronis (1998).
  • Word Sense Disambiguation Tutorial, by Rada Mihalcea and Ted Pedersen (2005).
  • Well, well, well ... Word Sense Disambiguation with Google n-Grams, by Craig Trim (2013).
  • Word Sense Disambiguation: Algorithms and Applications, edited by Eneko Agirre and Philip Edmonds (2006), Springer. Covers the entire field with chapters contributed by leading researchers. Book website: www.wsdbook.org.
  • Bar-Hillel, Yehoshua. 1964. Language and Information. New York: Addison-Wesley.
  • Edmonds, Philip & Adam Kilgarriff. 2002. Introduction to the special issue on evaluating word sense disambiguation systems. Natural Language Engineering, 8(4):279–291.
  • Edmonds, Philip. 2005. Lexical disambiguation. The Elsevier Encyclopedia of Language and Linguistics, 2nd Ed., ed. by Keith Brown, 607–23. Oxford: Elsevier.
  • Ide, Nancy & Jean Véronis. 1998. Word sense disambiguation: The state of the art. Computational Linguistics, 24(1):1–40.
  • Jurafsky, Daniel & James H. Martin. 2000. Speech and Language Processing. New Jersey, USA: Prentice Hall.
  • Litkowski, K. C. 2005. Computational lexicons and dictionaries. In Encyclopaedia of Language and Linguistics (2nd ed.), K. R. Brown, Ed. Elsevier Publishers, Oxford, U.K., 753–761.
  • Manning, Christopher D. & Hinrich Schütze. 1999. Foundations of Statistical Natural Language Processing. Cambridge, MA: MIT Press.
  • Mihalcea, Rada. 2007. Word sense disambiguation. Encyclopedia of Machine Learning. Springer-Verlag.
  • Resnik, Philip and David Yarowsky. 2000. Distinguishing systems and distinguishing senses: New evaluation methods for word sense disambiguation, Natural Language Engineering, 5(2):113–133.
  • Yarowsky, David. 2001. Word sense disambiguation. Handbook of Natural Language Processing, ed. by Dale et al., 629–654. New York: Marcel Dekker.
