## Book: Practical Common Lisp

## Per-Word Statistics


The heart of a statistical spam filter is, of course, the functions that compute statistics-based probabilities. The mathematical nuances[254] of why exactly these computations work are beyond the scope of this book—interested readers may want to refer to several papers by Gary Robinson.[255] I'll focus rather on how they're implemented.

The starting point for the statistical computations is the set of measured values—the frequencies stored in `*feature-database*`, `*total-spams*`, and `*total-hams*`. Assuming that the set of messages trained on is statistically representative, you can treat the observed frequencies as probabilities of the same features showing up in hams and spams in future messages.

The basic plan is to classify a message by extracting the features it contains, computing the individual probability that a given message containing the feature is a spam, and then combining all the individual probabilities into a total score for the message. Messages with many "spammy" features and few "hammy" features will receive a score near 1, and messages with many hammy features and few spammy features will score near 0.

The first statistical function you need is one that computes the basic probability that a message containing a given feature is a spam. From one point of view, the probability that a given message containing the feature is a spam is the ratio of spam messages containing the feature to all messages containing the feature. Thus, you could compute it this way:

```lisp
(defun spam-probability (feature)
  (with-slots (spam-count ham-count) feature
    (/ spam-count (+ spam-count ham-count))))
```

The problem with the value computed by this function is that it's strongly affected by the overall probability that *any* message will be a spam or a ham. For instance, suppose you get nine times as much ham as spam in general. A completely neutral feature will then appear in one spam for every nine hams, giving you a spam probability of 1/10 according to this function.
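You can see this skew at the REPL. The sketch below is hypothetical: it uses a minimal `word-feature` class standing in for the book's feature class, and renames the function `naive-spam-probability` so it doesn't clash with the corrected definition that follows.

```lisp
;; Minimal stand-in for the book's feature class; SPAM-COUNT and
;; HAM-COUNT record how many spams and hams contained the feature.
(defclass word-feature ()
  ((spam-count :initarg :spam-count :accessor spam-count :initform 0)
   (ham-count  :initarg :ham-count  :accessor ham-count  :initform 0)))

;; The naive version: ratio of spams containing the feature to all
;; messages containing it. (Renamed here to avoid redefining the
;; corrected SPAM-PROBABILITY given later.)
(defun naive-spam-probability (feature)
  (with-slots (spam-count ham-count) feature
    (/ spam-count (+ spam-count ham-count))))

;; A perfectly neutral feature in a corpus with nine hams per spam:
;; it appears in one spam and nine hams, yet scores only 1/10.
(naive-spam-probability
 (make-instance 'word-feature :spam-count 1 :ham-count 9)) ; => 1/10
```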

But you're more interested in the probability that a given feature will appear in a spam message, independent of the overall probability of getting a spam or ham. Thus, you need to divide the spam count by the total number of spams trained on and the ham count by the total number of hams. To avoid division-by-zero errors, if either of `*total-spams*` or `*total-hams*` is zero, you should treat the corresponding frequency as zero. (Obviously, if the total number of either spams or hams is zero, then the corresponding per-feature count will also be zero, so you can treat the resulting frequency as zero without ill effect.)

```lisp
(defun spam-probability (feature)
  (with-slots (spam-count ham-count) feature
    (let ((spam-frequency (/ spam-count (max 1 *total-spams*)))
          (ham-frequency (/ ham-count (max 1 *total-hams*))))
      (/ spam-frequency (+ spam-frequency ham-frequency)))))
```
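To see the fix in action, here's a self-contained sketch; the `word-feature` class and the 9:1 corpus figures are stand-ins for the book's real definitions, but the arithmetic shows the normalized version rating the same neutral feature at 1/2 rather than 1/10.

```lisp
(defclass word-feature ()
  ((spam-count :initarg :spam-count :accessor spam-count :initform 0)
   (ham-count  :initarg :ham-count  :accessor ham-count  :initform 0)))

;; Hypothetical corpus: 100 spams, 900 hams (nine hams per spam).
(defparameter *total-spams* 100)
(defparameter *total-hams* 900)

(defun spam-probability (feature)
  (with-slots (spam-count ham-count) feature
    (let ((spam-frequency (/ spam-count (max 1 *total-spams*)))
          (ham-frequency (/ ham-count (max 1 *total-hams*))))
      (/ spam-frequency (+ spam-frequency ham-frequency)))))

;; The same neutral feature as before: one spam, nine hams.
;; spam-frequency = 1/100 and ham-frequency = 9/900 = 1/100, so the
;; normalized probability is 1/2, as a neutral feature should score.
(spam-probability
 (make-instance 'word-feature :spam-count 1 :ham-count 9)) ; => 1/2
```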

This version suffers from another problem—it doesn't take into account the number of messages analyzed to arrive at the per-word probabilities. Suppose you've trained on 2,000 messages, half spam and half ham. Now consider two features that have appeared only in spams. One has appeared in all 1,000 spams, while the other appeared only once. According to the current definition of `spam-probability`, the appearance of either feature predicts that a message is spam with equal probability, namely, 1.

However, it's still quite possible that the feature that has appeared only once is actually a neutral feature—it's obviously rare in either spams or hams, appearing only once in 2,000 messages. If you trained on another 2,000 messages, it might very well appear one more time, this time in a ham, making it suddenly a neutral feature with a spam probability of .5.

So it seems you might like to compute a probability that somehow factors in the number of data points that go into each feature's probability. In his papers, Robinson suggested a function based on the Bayesian notion of incorporating observed data into prior knowledge or assumptions. Basically, you calculate a new probability by starting with an assumed prior probability and a weight to give that assumed probability before adding new information. Robinson's function is this:

```lisp
(defun bayesian-spam-probability (feature &optional
                                  (assumed-probability 1/2)
                                  (weight 1))
  (let ((basic-probability (spam-probability feature))
        (data-points (+ (spam-count feature) (ham-count feature))))
    (/ (+ (* weight assumed-probability)
          (* data-points basic-probability))
       (+ weight data-points))))
```

Robinson suggests values of 1/2 for `assumed-probability` and 1 for `weight`. Using those values, a feature that has appeared in one spam and no hams has a `bayesian-spam-probability` of 0.75, a feature that has appeared in 10 spams and no hams has a `bayesian-spam-probability` of approximately 0.955, and one that has matched in 1,000 spams and no hams has a spam probability of approximately 0.9995.

- 8.5.2 Typical Condition Variable Operations
- InterBase Super Server для Windows
- Каталог BIN в SuperServer
- Минимальный состав сервера InterBase SuperServer
- SuperServer
- Classic vs SuperServer
- Рекомендации по выбору архитектуры: Classic или SuperServer?
- Улучшенное время отклика для версии SuperServer
- Helper match
- Как уменьшить размер документа Microsoft Word?
- Глава 5. Разработка и анализ бизнес-планов в системе Project Expert
- Как в документ Microsoft Word вставить текст, в котором отсутствует форматирование?