"Spoken words … are symbols of the affections of the soul …" (Aristotle: Die Interpretatione I)
Recall from Week 12: there's no consensus on what semantic representations should be like.
NLP applications use various resources to capture particular aspects of meaning, but there is no formal theory of semantics that is universally useful.
A true theory of semantics would be a mapping from linguistic structures to the domain of "knowledge" or "reality" or "all the things we can talk about", so really it would be a theory of everything.
An artificial semantics engine would then be capable of artificial general intelligence.
"A semantic theory describes and explains the interpretative ability of speakers by accounting for their performance in determining the number and content of the readings of a sentence, by detecting semantic anomalies, by deciding on paraphrase relations between sentences, and by marking every other semantic property or relation that plays a role in this ability." (Katz & Fodor 1963)
"our goal must be to develop a theory capable of handling the kind of commonsensical inferences that people routinely, automatically, and generally subconsciously make when answering simple questions about simple stories" (Kornai, to appear)
Propositional logic: atomic statements and logical connectives only.
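As a toy illustration (a hypothetical example, not from the lecture): in propositional logic, whole statements are opaque atoms, so the only structure a formula can express comes from the connectives joining them.

```python
from itertools import product

def evaluate(formula, assignment):
    """Evaluate a formula tree: atoms are strings, connectives are tuples."""
    if isinstance(formula, str):          # atomic statement: look up its truth value
        return assignment[formula]
    op, *args = formula
    if op == "not":
        return not evaluate(args[0], assignment)
    if op == "and":
        return all(evaluate(a, assignment) for a in args)
    if op == "or":
        return any(evaluate(a, assignment) for a in args)
    raise ValueError(f"unknown connective: {op}")

# "The ball is red and the ball bounces": two atoms joined by 'and'.
# The shared subject 'the ball' is invisible to the logic.
f = ("and", "ball_is_red", "ball_bounces")
for values in product([True, False], repeat=2):
    assignment = dict(zip(["ball_is_red", "ball_bounces"], values))
    print(assignment, "->", evaluate(f, assignment))
```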
Some phenomena seem easy to handle, but only at first glance:
Red ball: $R(x) \wedge B(x)$
This looks good: red balls are exactly the things that are both red and a ball.
Large flea: $L(x) \wedge F(x)$
This is problematic: what is $L$? A large flea is still a tiny animal, so largeness only makes sense relative to a comparison class (see the sketch below).
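To make the contrast concrete, here is a toy model over an invented domain (all entities and sizes are made up for illustration): the intersective reading works as plain set intersection, but any fixed threshold for $L$ gives wrong results, and a workable $L$ must be relativized to a comparison class.

```python
# Invented toy domain: two balls, two fleas.
entities = {
    "ball1": {"kind": "ball", "color": "red",  "size_cm": 10.0},
    "ball2": {"kind": "ball", "color": "blue", "size_cm": 10.0},
    "flea1": {"kind": "flea", "color": "red",  "size_cm": 0.4},  # big for a flea
    "flea2": {"kind": "flea", "color": "red",  "size_cm": 0.1},
}

def red(e):  return entities[e]["color"] == "red"
def ball(e): return entities[e]["kind"] == "ball"

# "red ball" = R(x) AND B(x): plain intersection works.
print([e for e in entities if red(e) and ball(e)])          # ['ball1']

# A context-free L(x) such as size_cm > 5 calls no flea large:
print([e for e in entities if entities[e]["size_cm"] > 5])  # ['ball1', 'ball2']

# Relativizing L to a comparison class fixes this:
def large(e):
    kind = entities[e]["kind"]
    sizes = [v["size_cm"] for v in entities.values() if v["kind"] == kind]
    return entities[e]["size_cm"] > sum(sizes) / len(sizes)

print([e for e in entities if entities[e]["kind"] == "flea" and large(e)])
# ['flea1'] -- a large flea, though far smaller than any ball
```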
- text_to_4lang: a simple mapping from dependency parses to 4lang graphs (sketched below).
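A minimal sketch of the idea only, not the actual text_to_4lang implementation: assuming spaCy for dependency parsing and networkx for the graph, each dependency edge (head, relation, dependent) becomes a labeled edge between lemma nodes.

```python
# Illustrative sketch -- not the real text_to_4lang code.
import spacy
import networkx as nx

nlp = spacy.load("en_core_web_sm")

def parse_to_graph(sentence):
    """Map each dependency edge to a labeled edge between lemma nodes."""
    doc = nlp(sentence)
    graph = nx.MultiDiGraph()
    for token in doc:
        if token.dep_ == "ROOT":
            continue                      # the root has no incoming edge
        graph.add_edge(token.head.lemma_, token.lemma_, label=token.dep_)
    return graph

g = parse_to_graph("The large flea bit the red ball")
for head, dep, data in g.edges(data=True):
    print(f"{head} -{data['label']}-> {dep}")
# e.g. flea -amod-> large, bite -nsubj-> flea, ball -amod-> red, ...
```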