
© 2006-2017 Andrew Cooke (site) / post authors (content).

Efficient Spam Filtering With Mutt and SpamAssassin

From: andrew cooke <andrew@...>

Date: Fri, 12 Mar 2010 11:11:45 -0300

I've finally got my spam rates down to GMail levels - effectively none.
Here's how to do it.  This is a bit long and detailed, but it presents most
details of a coherent system that works well for me.


First, get Spamassassin installed and working.  In OpenSuse this means
installing the relevant packages.  I run spamd as a service and then use spamc
to call that.  This avoids the overhead of starting Spamassassin each time an
email arrives.
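On OpenSuse that is roughly the following (a sketch; the package and service names are assumptions and may differ on other distros):

```shell
# install spamassassin and run spamd as a service
sudo zypper install spamassassin
sudo systemctl enable --now spamd

# sanity check: spamc should hand the message back with X-Spam-* headers added
printf 'Subject: test\n\nhello\n' | spamc | grep X-Spam
```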

One reason GMail can filter spam so efficiently is that it can detect when
many people receive the same email.  On a local system you can do this in
three different ways.  The first way is to use Vipul's Razor, a centralized
service that allows you to pool resources with many other users.  It works
with Spamassassin, but needs to be installed and configured separately.

Vipul's Razor is also an OpenSuse package.  Instructions on configuring it
with Spamassassin are here - http://wiki.apache.org/spamassassin/RazorSiteWide
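Before Razor can be used it needs a local configuration and a registered identity. A sketch (the -home directory here is an assumption chosen to match the razor_config path below; see the link above for the full site-wide details):

```shell
# create the razor configuration, discover servers, and register an identity
razor-admin -home=/etc/mail/spamassassin/razor -create
razor-admin -home=/etc/mail/spamassassin/razor -discover
razor-admin -home=/etc/mail/spamassassin/razor -register
```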

The second way to exploit data from other emails is to use an external DNS
blacklist.  By default, Spamassassin is configured not to use external sources
of data (like Vipul's Razor and DNS blacklists).  To change this, edit the
flags in /etc/sysconfig/spamd (this is an OpenSuse-specific detail - other
distros will use a different mechanism).

I have: SPAMD_ARGS="-d -c --allow-tell"

(I'll explain --allow-tell later; the important thing here is that -L has been
removed).

Also, in /etc/mail/spamassassin/local.cf, I have:

# Enable the Bayes system
use_bayes               1

# Enable Bayes auto-learning
bayes_auto_learn              1

# Enable or disable network checks
skip_rbl_checks         0
use_razor2              1
razor_config /etc/mail/spamassassin/razor/razor-agent.conf


The Bayes system mentioned above allows Spamassassin to "learn" which email is
good and which is bad.  Again, I will describe Mutt macros that help with this
later.


Next, we need to configure procmail to call Spamassassin and then filter
spam.  To do this with Mutt I use the following mail folders (I am using
maildir; you can do something similar with mboxes):

.spam - this is where I put questionable emails.  These are borderline emails
and this folder needs to be checked regularly by hand (later I will describe
how Mutt macros can simplify this process).

.0-spam - this is where I put emails that were detected as spam, but which are
not "super obviously bad".  When starting out, this folder also needs to be
checked regularly (see the discussion of mailing lists below), but once
everything is working it can be left pretty much unattended - it then works as
an emergency backup, so that if you incorrectly filter something you can still
retrieve it.

/dev/null - this is where I send "super obvious" spam.

.learn-spam - this is used for Bayes (see later)

.learn-ham - this is used for Bayes (see later)
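These folders are plain maildirs, so they can be created up front with mkdir (a sketch; the base directory is an assumption - adjust it to match your own MAILDIR):

```shell
#!/bin/bash
# create each maildir used below, with the cur/new/tmp subdirectories
# that the maildir format requires
MAILDIR="$HOME/mail"
for d in .spam .0-spam .learn-spam .learn-ham
do
    mkdir -p "$MAILDIR/$d/cur" "$MAILDIR/$d/new" "$MAILDIR/$d/tmp"
done
```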

Given those, my .procmailrc file looks like this:


MAILDIR=$HOME/mail
DEFAULT=$MAILDIR/
LOGFILE=$HOME/log/procmail.log
LOGABSTRACT=all

# get spamassassin to check emails
:0fw: .spamassassin.lock
* < 256000
| spamc

# strong spam are discarded
:0
* ^X-Spam-Level: \*\*\*\*\*\*
/dev/null

# weak spam are kept just in case - clear this out every now and then
:0
* ^X-Spam-Level: \*\*\*\*\*
.0-spam/

# if it wasn't detected as spam, but is to a fake address, then we
# know it is spam, so learn from that
:0
* !^(From|To|cc|bcc)[ :].*(compute|andrew|root|webmaster|admin|postmaster).*@acooke\.org
* !^(From|To|cc|bcc)[ :].*@isti\.com
# add mailing lists below
* !^From[ :].*(snowmail_daily@...|Section@...|rforno@...|alert@...).*
{
  # save in case of screw-ups, mailing lists, etc
  :0 c
  .0-spam/

  :0
  .learn-spam/
}

# otherwise, marginal spam goes here for review
:0
* ^X-Spam-Level: \*\*
.spam/
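The star counts in those recipes can be confusing: X-Spam-Level contains one star per point of Spamassassin score, so the recipes above are score thresholds. As a sketch (a hypothetical helper, not part of the procmail setup, and omitting the fake-address recipe):

```shell
# route: map an X-Spam-Level header value (a run of '*') to a destination,
# mirroring the recipes above: 6+ stars are discarded, 5 stars go to
# .0-spam as a backup, 2+ stars go to .spam for manual review, and
# anything else stays in the inbox.
route() {
    stars=${#1}                      # the header is all stars, so length = count
    if   [ "$stars" -ge 6 ]; then echo "/dev/null"
    elif [ "$stars" -ge 5 ]; then echo ".0-spam"
    elif [ "$stars" -ge 2 ]; then echo ".spam"
    else echo "inbox"
    fi
}

route "******"   # -> /dev/null
route "**"       # -> .spam
```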


Earlier I said there were three ways to detect spam using emails to other
people.  The third way is the "fake address" trick above - I download all
email from my ISP that is addressed to acooke.org, even though I know that
only a few addresses are actually valid.  I then use email to invalid
addresses as an extra source of known spam.


With the above configured you should see Spamassassin being called correctly
in the logs (and Vipul's Razor being used too).


Next, some Mutt macros that help simplify all this:

macro index S "<tag-prefix><save-message>=.learn-spam<enter>" "move to learn-spam"
macro pager S "<save-message>=.learn-spam<enter>" "move to learn-spam"
macro index H "<tag-prefix><copy-message>=.learn-ham<enter>" "copy to learn-ham"
macro pager H "<copy-message>=.learn-ham<enter>" "copy to learn-ham"


These are used together with these crontab entries:

*/3 * * * * /home/andrew/bin/spam
*/3 * * * * /home/andrew/bin/ham


And these scripts (this is why --allow-tell was needed for spamd - it lets
these scripts update the server with new information):

> cat spam
#!/bin/bash

# pass every message in the learn-spam maildir to spamd as spam, then delete it
# (nullglob so the loop simply does nothing when the folder is empty)
shopt -s nullglob
for f in /home/andrew/mail/.learn-spam/cur/* /home/andrew/mail/.learn-spam/new/*
do
    spamc -L spam < "$f" > /dev/null
    rm "$f"
done

> cat ham
#!/bin/bash

# pass every message in the learn-ham maildir to spamd as ham, then delete it
shopt -s nullglob
for f in /home/andrew/mail/.learn-ham/cur/* /home/andrew/mail/.learn-ham/new/*
do
    spamc -L ham < "$f" > /dev/null
    rm "$f"
done


The idea here is that anything moved to .learn-spam (by pressing the S key) is
then learnt by the system as spam, while anything copied to .learn-ham is
learnt as ham (non-spam).  Note that S also deletes files.

In practice this means that you can use S to delete files in your inbox, or in
.spam, and the system will learn from them.  Similarly, if you see something
in .spam or .0-spam that should not be there, you can use H to "unlearn" it
(you must then also copy it manually to wherever you want to keep it).
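If you would rather not run spamd with --allow-tell, sa-learn can do the same training in batch, writing directly to the Bayes database (a sketch, not part of the setup above; it reads every message in the named maildir directories):

```shell
# train the Bayes database from the learn folders, then report its state
sa-learn --spam /home/andrew/mail/.learn-spam/cur /home/andrew/mail/.learn-spam/new
sa-learn --ham  /home/andrew/mail/.learn-ham/cur  /home/andrew/mail/.learn-ham/new
sa-learn --dump magic    # shows counts of messages learnt as spam and ham
```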


Finally, a note on mailing lists.  When you subscribe to a new mailing list it
will not be listed in the .procmailrc above, and so will be sent to .0-spam.
You'll realise that the email is missing, fix procmailrc, and use H + copy to
correct things.  That's a nuisance, but it happens quite infrequently so I
haven't tried to simplify it.

Oh, and also, flag a pile of known good emails as ham.  Without this it takes
the Bayes system a while to get started.

Andrew
