Tuesday, July 31, 2018

Rule Probabilities and Quantification

In Artificial Intelligence of the symbolic kind, probabilities often act as applicability constraints that are simply attached to the rules.
[In some survey of some part of the populace,] 80% of all British scientists are the children of scientists [the oldest child, an only child, etc.].
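Attached to a rule, this might be captured roughly as follows; probabilityOfRule, BritishScientist, Scientist and hasParent are invented names, so take this only as a sketch of the "probability simply attached" pattern:

;; Sketch: a probability attached to a rule as a plain annotation.
;; probabilityOfRule, BritishScientist, Scientist and hasParent are invented names.
(probabilityOfRule 0.8
    (=> (isa ?X BritishScientist)
        (thereExists ?Y
            (and (hasParent ?X ?Y)
                 (isa ?Y Scientist)))))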
The problem with examples of this kind is that they disguise how the probabilities end up affecting the quantifier structure. This is especially true when the knowledge-capture language supports exceptions (i.e. non-monotonic changes), because there the extremes are best captured as exceptions.
None of the cities of the Roman empire, excepting Rome itself, had more than five hundred thousand citizens.
(exceptWhen (equalSymbols ?CITY Rome)
    (=> (and (isa ?CITY City) (subOrganization ?CITY RomanEmpire)
             (population ?CITY ?POP))
        (lessThan ?POP 500000)))
(with appropriate temporal restrictions somewhere, e.g. implied by the Roman empire membership or on the population number).
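One possible placement, purely as a sketch: wrap the whole rule in a temporal context, here with holdsIn used loosely and ClassicalRomanEra as an invented constant.

;; Sketch: restrict the rule to the relevant period.
;; holdsIn is used loosely; ClassicalRomanEra is an invented constant.
(holdsIn ClassicalRomanEra
    (exceptWhen (equalSymbols ?CITY Rome)
        (=> (and (isa ?CITY City) (subOrganization ?CITY RomanEmpire)
                 (population ?CITY ?POP))
            (lessThan ?POP 500000))))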

At the same time, in the middle range of the distribution it would be most natural to speak of existential quantification: some cities had baths or amphitheaters, or a temple dedicated to a specific deified hero or to an oriental fertility cult.
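In the same notation, one such middle-range observation might be sketched as follows; Amphitheater and locatedInCity are invented names, so this only illustrates the intended existential form:

;; Sketch: 'some city of the Roman empire had an amphitheater'.
;; Amphitheater and locatedInCity are invented names.
(thereExists ?CITY
    (and (isa ?CITY City)
         (subOrganization ?CITY RomanEmpire)
         (thereExists ?VENUE
             (and (isa ?VENUE Amphitheater)
                  (locatedInCity ?VENUE ?CITY)))))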

Thus we observe a continuous model (probabilities in [0,1]) meeting a discrete model of quantifiers and exceptions, and we expect the continuous model to deal with data changes more gracefully, especially if the discrete representation is not automatically derived from the continuous one. Perhaps we could consider 1% and 99% as the cutoffs for 'none' and 'all' (not in principle, but in a specific case), and have rules that then conclude the discrete quantifier structure from the data distribution and its interpretation?
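Such bridging rules might be sketched in the same style; observedProportion and quantifierFor are invented bookkeeping predicates, and UniversalQuantifier, NoneQuantifier and ExistentialQuantifier are invented reified quantifier terms:

;; Sketch: conclude a discrete quantifier from the observed proportion,
;; using 0.99 and 0.01 as the 'all' / 'none' cutoffs (boundary cases glossed over).
;; observedProportion, quantifierFor and the quantifier constants are invented names.
(=> (and (observedProportion ?PROP ?COLLECTION ?P) (greaterThan ?P 0.99))
    (quantifierFor ?PROP ?COLLECTION UniversalQuantifier))
(=> (and (observedProportion ?PROP ?COLLECTION ?P) (lessThan ?P 0.01))
    (quantifierFor ?PROP ?COLLECTION NoneQuantifier))
(=> (and (observedProportion ?PROP ?COLLECTION ?P)
         (greaterThan ?P 0.01) (lessThan ?P 0.99))
    (quantifierFor ?PROP ?COLLECTION ExistentialQuantifier))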
