Thursday, September 06, 2007

if you want to predict conservative behavior...use the dehumanizing approach???

Several days ago I posted a link on my Facebook account to an article about the use of large-scale datasets to predict everything from wine prices to Supreme Court decisions.

I was rereading the article today and noticed this:

But evidence is mounting in favour of a different and much more "dehumanising" mechanism for combining human and super-crunching expertise. Several studies have shown that the most accurate way to exploit traditional expertise is merely to add the expert evaluation as an additional factor in the statistical algorithm. Ruger’s Supreme Court study, for example, suggested that a computer that had access to human predictions would rely on the experts to determine the votes of the more liberal members of the court (Stephen Breyer, Ruth Bader Ginsburg, David Souter and John Paul Stevens, in this case) – because the unaided experts outperformed the super-crunching algorithm in predicting the votes of these justices.


So if I am reading this correctly, the implication is that the more "dehumanising" mechanism (i.e., pure number crunching without expert opinion) is more accurate at predicting the votes of the conservative court members, while the more human-oriented approach of adding expert opinion as a variable to the equation yields better performance in predicting the votes of the liberal justices.
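
(For anyone who wants to see what that looks like mechanically, here is a minimal sketch in Python. The case features, expert forecasts, and votes below are made-up random data, purely to illustrate the idea of treating the expert's call as just one more column fed into the model; this is not the actual model from the Ruger study.)

    # Minimal sketch of "expert opinion as just another input variable".
    # All data here is fabricated for illustration; the real Ruger study
    # used actual case characteristics and real expert forecasts.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n_cases = 200

    # Hypothetical case-level features (e.g., lower-court direction, issue area).
    case_features = rng.normal(size=(n_cases, 3))

    # Hypothetical expert forecast of each vote (1 = reverse, 0 = affirm).
    expert_prediction = rng.integers(0, 2, size=(n_cases, 1))

    # Outcome to predict: the justice's actual vote.
    votes = rng.integers(0, 2, size=n_cases)

    # The "dehumanising" combination: the expert's call is appended as one
    # more column, weighted by the model like any other predictor.
    X = np.hstack([case_features, expert_prediction])
    model = LogisticRegression().fit(X, votes)

    print("weight on the expert's prediction:", model.coef_[0][-1])

The point is that the expert never gets the final say: the regression decides how much the expert's opinion is worth, the same way it weighs any other variable.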

Is it just me, or does anyone else think that using the word "dehumanising" in this article is a little ethically weighted and politically biased?

2 comments:

Unknown said...

But didn't you know? Conservatives and dehumanising go hand in hand ;)

Anonymous said...

Conservatives dread the issuance of souls.