Re: Progress in philosophy

1

A guest post by Matt Yglesias, everyone. Thanks, Matt.


Posted by: ogged | Link to this comment | 02-14-08 3:43 PM
2

Allow me to add that John Emerson should stay well away from this essay and indeed practically all the literature on this so-called problem.


Posted by: ben w-lfs-n | Link to this comment | 02-14-08 3:45 PM
3

1: Ah, thank you. One of the problems with occasionally reading something by a philosophy type is that when the bit one reads is incoherent, it's unclear whether that's a function of poor writing or ignorant reading.


Posted by: SomeCallMeTim | Link to this comment | 02-14-08 3:47 PM
4

"proudced" is a term of art.


Posted by: ben w-lfs-n | Link to this comment | 02-14-08 3:50 PM
5

there is nothing that it its to be

A phrase of art?


Posted by: SomeCallMeTim | Link to this comment | 02-14-08 3:52 PM
6

A mistake. "There is nothing that it is to be".

I make plentiful typos in all transcriptions; I don't see why ogged singles this one out for mockery. Except that he's a meanieface.


Posted by: ben w-lfs-n | Link to this comment | 02-14-08 3:55 PM
7

Since this is already about philosophy of possible worlds and mentions Yglesias in the first comment, can we talk about the post title, "And If Obama Were A Giraffe, He'd Have a Really Short Neck"? I was trying to start an argument about whether Obama would have a short neck in the possible world where he's a giraffe, but no one else seemed interested.


Posted by: washerdreyer | Link to this comment | 02-14-08 4:00 PM
8

Well, come on, that thing with the cup requires two girls, not one, so the thing with the cup that Serena and Charlie are up to is clearly different. And anyway, SG clearly just lacks imagination; there are plenty of arousing things to be done with a cup.


Posted by: Hamilton-Lovecraft | Link to this comment | 02-14-08 4:09 PM
9

Gwyneth Paltrow's hip to the cupping action.


Posted by: Cryptic Ned | Link to this comment | 02-14-08 4:11 PM
10

Re 7:

My intuitions agree with washerdreyer's. Obama appears relatively long-necked for a human. In the closest possible world in which Osama is a giraffe, I expect him to be equally as long-necked, relative to the local population of giraffes.


Posted by: Moby Ape | Link to this comment | 02-14-08 4:12 PM
11

Oh God, typo.


Posted by: | Link to this comment | 02-14-08 4:12 PM
12

I was trying to start an argument about whether Obama would have a short neck in the possible world where he's a giraffe, but no one else seemed interested.

You appear to be thinking of the possible world in which Mr. Obama is an okapi, no?


Posted by: LizardBreath | Link to this comment | 02-14-08 4:13 PM
13

Osama

Oh yeah, I almost forgot. We're screwed.


Posted by: Cryptic Ned | Link to this comment | 02-14-08 4:14 PM
14

Obama appears relatively long-necked for a human. In the closest possible world in which Osama is a giraffe, I expect him to be equally as long-necked, relative to the local population of giraffes.

No, see, Obama's long-necked for a human, but if he were a giraffe with the same length of neck he would be short-necked. We're talking absolute neck length, not relative.


Posted by: teofilo | Link to this comment | 02-14-08 4:15 PM
15

As always, we face a problem of definitions. Do you mean "a possible world in which Obama is, rather than Homo sapiens, a normal specimen of Giraffa camelopardalis", or do you mean "a possible world in which the Homo sapiens named B. H. Obama is classified as a giraffe?" In the first, he would have a long neck; in the second, a short one.


Posted by: Hamilton-Lovecraft | Link to this comment | 02-14-08 4:17 PM
16

ah, but teo, if he were a giraffe, he'd be a giraffe. It's certainly true that Obama right now has a short neck for a giraffe, but that doesn't tell us whether giraffe-Obama would also have a short neck for a giraffe.


Posted by: washerdreyer | Link to this comment | 02-14-08 4:18 PM
17

For instance, if I were to say, "I wish I was a dolphin," I wouldn't be expressing a wish that nothing about me change except that I suddenly become classified as a member of another species.


Posted by: washerdreyer | Link to this comment | 02-14-08 4:20 PM
18

16: See 15. Yglesias clearly had the second scenario in mind.


Posted by: teofilo | Link to this comment | 02-14-08 4:20 PM
19

crap, "I wish I were a dolphin." I also wouldn't be trying to become a member of a shitty football team.


Posted by: washerdreyer | Link to this comment | 02-14-08 4:20 PM
20

Google is not doing much to enlighten me on this cupping shit, but it sounds painful. Not that there's anything wrong with that.


Posted by: Anderson | Link to this comment | 02-14-08 4:21 PM
21

this cupping shit

You're pretty close right there, actually.


Posted by: teofilo | Link to this comment | 02-14-08 4:22 PM
22

...or maybe some strange human-giraffe hybrid.

With frickin' lasers on its head.


Posted by: Hamilton-Lovecraft | Link to this comment | 02-14-08 4:23 PM
23

This is reminding me of the Lincoln joke: "How many legs does a donkey have, if we call its tail a leg?"

"Four. Calling the tail a leg doesn't make it one."


Posted by: LizardBreath | Link to this comment | 02-14-08 4:24 PM
24

On Mars!


Posted by: teofilo | Link to this comment | 02-14-08 4:24 PM
25

... maybe on a hot dog.


Posted by: ed bowlinger | Link to this comment | 02-14-08 4:33 PM
26

20: Wikipedia is your friend.


Posted by: Hamilton-Lovecraft | Link to this comment | 02-14-08 4:33 PM
27

Is anybody going to explain what the hell Gendler is talking about?


Posted by: PerfectlyGoddamnDelightful | Link to this comment | 02-14-08 4:34 PM
28

Obama appears relatively long-necked for a human. In the closest possible world in which Osama is a giraffe, I expect him to be equally as long-necked, relative to the local population of giraffes.

I like to imagine 10 as an endorsement of Obama by an open-minded giraffe.


Posted by: felix | Link to this comment | 02-14-08 4:37 PM
29

27: I think he's saying that there is no "thing with the cup" that makes sense in that passage from Wolfe. w-lfs-n refutes him by pulling tea for two out of his ass.


Posted by: SomeCallMeTim | Link to this comment | 02-14-08 4:40 PM
30

One sees the giraffe in a debate with Obama, looking down disdainfully: "You're longnecked enough."


Posted by: LizardBreath | Link to this comment | 02-14-08 4:40 PM
31

29: Sexist.


Posted by: Merganser | Link to this comment | 02-14-08 4:44 PM
32

Gendler's female. And she's arguing that while we know from context that 'the-thing-with-the-cup' must be naughty, it's not because of any one-to-one correspondence with a naughty thing in the real world.


Posted by: Cala | Link to this comment | 02-14-08 4:46 PM
33

If Obama Were a Giraffe, He'd Be a Venomous Cult Leader Giraffe


Posted by: washerdreyer | Link to this comment | 02-14-08 4:50 PM
34

I think Gendler was the TA for a course that I endured once.


Posted by: Flippanter | Link to this comment | 02-14-08 4:53 PM
35

32: But: "There are no extra body parts, no extra positions, no extra ways in which something that is not arousing in this world is arousing in that world." That is, there isn't a MAF naughty thing that corresponds to a real-world not-naughty thing, no? So either it floats, unconnected, in MAF world, or it doesn't exist. I would think.


Posted by: SomeCallMeTim | Link to this comment | 02-14-08 4:54 PM
36

In the closest possible world in which Obama is a giraffe, all of us are giraffes. Even those of us who are ducks.


Posted by: Merganser | Link to this comment | 02-14-08 4:55 PM
37

I'm going to hit someone with a copy of Naming and Necessity, any second now. And perhaps I'll add to it by strapping a copy of Lewis' Counterfactuals to it to add some extra heft.


Posted by: nattarGcM ttaM | Link to this comment | 02-14-08 5:03 PM
38

35: Well, that's why it's a puzzle. You're right that it shouldn't be arousing, but it's hard to read that passage and think 'they thought they were aroused by the thing with the cup, but weren't.'


Posted by: Cala | Link to this comment | 02-14-08 5:05 PM
39

To 35: Yeah, I agree: I also think she is implying that there is either (a) no activity the author had in mind (like the "Venus Butterfly" from LA Law) or, more strongly, (b) no activity that could be coherently imagined as actual that "the thing with the cup" corresponds to.

To 37: Lewis would allow for Obama-giraffes, wouldn't he?


Posted by: Moby Ape | Link to this comment | 02-14-08 5:07 PM
40

I understand and agree with 39.1. This from 35, though:

So either it floats, unconnected, in MAF world, or it doesn't exist. I would think.

I'm not sure what it means. "The thing with the cup" has to exist in MAF world -- the presumably truthful narration reports it as existing.


Posted by: LizardBreath | Link to this comment | 02-14-08 5:11 PM
41

39: There is in Tlön.


Posted by: Flippanter | Link to this comment | 02-14-08 5:19 PM
42

40: I think, if I read 39 properly, we're saying the same thing, though 39 says it so much more clearly that it might be a different thing. I think the "more strongly" part is that "the thing with the cup" can't be imagined even by the rules of MAF world.


Posted by: SomeCallMeTim | Link to this comment | 02-14-08 5:20 PM
43

In the possible world in which ttaM is Samuel Johnson, Refutation by Projectile is a valid argument form.


Posted by: Merganser | Link to this comment | 02-14-08 5:22 PM
44

42: Actually, reading 39 again, I think I'm wrong about "more strongly." But that's what I meant.


Posted by: SomeCallMeTim | Link to this comment | 02-14-08 5:23 PM
45

can't be imagined even by the rules of MAF world.

How can that be? The characters aren't just imagining it, they're doing, and enjoying, it. We can't imagine it, sure, but the rules of MAF world have to be such that there is some 'thing with the cup' that's an arousing activity; just not one that can be comprehended in our world.


Posted by: LizardBreath | Link to this comment | 02-14-08 5:25 PM
46

The point of the Yglesian Counterfactual (as I propose we call "If Obama were a giraffe..." from now on) is, of course, that he's making an analogy with Michael O'Hanlon saying something like "if Obama thinks direct meetings with leaders of enemy countries will solve all our foreign policy problems, he's totally and dangerously wrong." Since Obama doesn't in fact think that, speculating about what it would say about him if he did is pointless except to smear him. Similarly, since Obama is not in fact a giraffe, talking about how his neck (that is, the actual neck that Obama the human being has) would be short if he were one is pointless and absurd, and could only serve the purpose of discrediting him with the all-important giraffe demographic.


Posted by: teofilo | Link to this comment | 02-14-08 5:28 PM
47

45: That's exactly it. If I remember the rest of the essay correctly, the presumption is that MAF-world is like ours in lots (maybe even all) of relevant ways, except for this indescribable thing-with-a-cup. That should strike us as a little weird.


Posted by: Cala | Link to this comment | 02-14-08 5:31 PM
48

I tend to find that all the people I like understand the following two sentences immediately:

If I was a cat I'd never lick my arse because that's disgusting. But if I was a cat, I'd lick my arse anyway, because I wouldn't be me, I'd be a cat.

and all the people I don't like get confused by it. It's like that funeral puzzle test for psychopaths.


Posted by: dsquared | Link to this comment | 02-14-08 5:34 PM
49

45: Well, it could be a mistake, like saying that Superman both can and can't look through walls made of lead. But the weaker argument is probably better.


Posted by: SomeCallMeTim | Link to this comment | 02-14-08 5:34 PM
50

What does "MAF" stand for?

Was it obvious to everyone else that Wolfe emphasized the ambiguity of "the thing with the cup" to a comical extent, thus making it obvious that it's not really ambiguous because the author is making it perfectly clear that for no good reason he won't tell you what "the thing" is?

Is this really what philosophy is reduced to?


Posted by: Cryptic Ned | Link to this comment | 02-14-08 5:35 PM
51

wait...have 5 different people so far used "MAF" as an abbreviation for "A Man In Full", without noticing? Or does MAF mean something else?


Posted by: Cryptic Ned | Link to this comment | 02-14-08 5:36 PM
52

No, Ned, you're just smarter than everyone else.


Posted by: Cala | Link to this comment | 02-14-08 5:37 PM
53

It's like that funeral puzzle test for psychopaths.

?


Posted by: LizardBreath | Link to this comment | 02-14-08 5:38 PM
54

Is that sarcastic? Does it mean something else?

I knew I shouldn't have entered the actual-philosophy thread.


Posted by: Cryptic Ned | Link to this comment | 02-14-08 5:38 PM
55

a MAn in Full. Why anyone would abbreviate Man In Full that way I couldn't tell you.

46: I don't deny that the author's intention is to create a parallel like the one you propose. I deny that the natural interpretation of the Yglesian Counterfactual does create such a parallel.


Posted by: washerdreyer | Link to this comment | 02-14-08 5:39 PM
56

I deny that the natural interpretation of the Yglesian Counterfactual does create such a parallel.

I don't disagree with that. It's certainly counterintuitive.


Posted by: teofilo | Link to this comment | 02-14-08 5:40 PM
57

54: No, you got it right. (Actually, I'd noticed it, but figured that since everyone seemed to be using the abbreviation fluently, straightening it out was more trouble than it was worth.)


Posted by: LizardBreath | Link to this comment | 02-14-08 5:40 PM
58

Look, the question isn't just that Gendler didn't understand exaggeration for comic effect. The problem is that the traditional analyses of passages like that can't explain our (completely expected) interpretation of the passage because those traditional analyses would predict the wrong response to a passage like that.

The whole paper is actually pretty interesting (even though I disagree with her conclusions), and it sits at this bizarre intersection between psychology and philosophy.

So, yeah, that's what it's reduced to.


Posted by: Cala | Link to this comment | 02-14-08 5:42 PM
59

54: Basically, I fucked up the abbreviation and everyone else just let it go.


Posted by: SomeCallMeTim | Link to this comment | 02-14-08 5:42 PM
60

straightening it out was more trouble than it was worth

New mouseover.


Posted by: parsimon | Link to this comment | 02-14-08 5:44 PM
61

it s/b Labs


Posted by: LizardBreath | Link to this comment | 02-14-08 5:47 PM
62

46: I agree. I, at least, was just goofing off about what philosophy is ultimately reduced to. (Nevertheless, I do believe, for the reasons stated, the following: if Obama were a giraffe in a Disney animation featuring the presidential candidates represented as non-human animals, he would have a long neck.)

47, 58: That seems exactly right, and what she's ultimately interested in is which anomalies we let slide and which we "resist" when interacting with fictions:

(1) People are just like us in a world just like ours but they are mysteriously aroused by "the thing with the cup."
(2) People are just like us in a world just like ours but they are emotionally indifferent to [pick generic moral horror].

As readers, we're ok with (1), not with (2).


Posted by: Moby Ape | Link to this comment | 02-14-08 6:12 PM
63

I can think of all kinds of arousing things they could be doing with a cup. I agree with Gendler that the *point* is that no one can know---there is no obvious wink-nudge here to an act we all know about---but it's a common enough object that there are all kinds of things that could be imagined in the place of cup-sex.

I just figured it was some kind of reference to the first scene in The Story of the Eye, but I guess that was a bowl, not a cup.


Posted by: A White Bear | Link to this comment | 02-14-08 6:17 PM
64

she's ultimately interested in is which anomalies we let slide and which we "resist" when interacting with fictions:

(1) People are just like us in a world just like ours but they are mysteriously aroused by "the thing with the cup."
(2) People are just like us in a world just like ours but they are emotionally indifferent to [pick generic moral horror].

Situation (2) is a lot more likely and easy to understand than situation (1), since even right here in our world people are routinely indifferent to moral horrors. Being aroused by a cup is much rarer. The only reason people didn't "resist" all the ridiculous passages in A Man In Full -- indeed, the only reason the damn thing was published -- is that it was written by the ancient literary lion Tom Wolfe.

A big reason so much philosophy is bullshit is that the "evidence" is the intuition of the author, who then tries to browbeat the reader into accepting it as humanly universal.


Posted by: PerfectlyGoddamnDelightful | Link to this comment | 02-14-08 6:18 PM
65

whoops! tags!

Come on Emerson, let's see your view on all this.


Posted by: PGD | Link to this comment | 02-14-08 6:19 PM
66

Being aroused by a cup is much rarer.

Sure, but every sexual relationship has a few weird kinky things in it that are described as "that thing with the x." I figured it was a locution not meant to drive the reader mad with wonder (OMG Everyone knows what the thing with the cup is except me!) but to evoke a private erotic language that makes sense to them and is inaccessible to everyone else, reader included.


Posted by: A White Bear | Link to this comment | 02-14-08 6:21 PM
67

The problem is that the traditional analyses of passages like that can't explain our (completely expected) interpretation of the passage because those traditional analyses would predict the wrong response to a passage like that.

(I haven't read the paper -- possible worlds was never my thing. Bear with me while I walk through this.)

What's the traditional analysis? Simply that in MAF-world, the thing with the cup is naughty/arousing? I'm not seeing why, in this world, we'd have trouble imagining some such thing with a cup. In particular, Gendler's "there are no extra positions" seems unwarranted. It's some thing with a cup, after all, calling for unknown but conceivable positions, one would think.

So if Gendler's thing-with-the-cup problem is actually as Cala explains above, it's not (as I want to think) that the MAF scenario is a bad example, but that the traditional analysis -- that we can easily imagine some scenario -- doesn't account for comedic, or otherwise unusual, literary devices and effects?

In which case, I want to say: so what? Is it supposed to? I seem to be missing something here.


Posted by: parsimon | Link to this comment | 02-14-08 6:23 PM
68

I defy you to come up with a sex act that fits that passage, is arousing and anatomically possible, and is not impossibly perverted (a la two girls, one cup).


Posted by: PerfectlyGoddamnDelightful | Link to this comment | 02-14-08 6:23 PM
69

68: I can come up with like 12.


Posted by: A White Bear | Link to this comment | 02-14-08 6:24 PM
70

We do have an active thread dedicated to smut.


Posted by: LizardBreath | Link to this comment | 02-14-08 6:26 PM
71

69: What kind of a response is that? Jesus. I think you're lying.

I defy you to post descriptions of several on this thread!


Posted by: PerfectlyGoddamnDelightful | Link to this comment | 02-14-08 6:27 PM
72

Not things that would be universally arousing to all people in all situations, but it's very easy to imagine a little cup taking on a sort of fetishistic value for a particular couple. Household objects can be employed in a variety of erotic ways that do not necessarily involve, like, insertion or whatever. Even just the fact that this particular cup was once used in erotic play to, like, playfully roll across someone's hips or something, can make it take on this fetish value, not out of the inherent sexiness of cups, but because the memory of the surprising arousal of the first interaction with the cup is always present for that couple.

It's kind of like if there's a particular album playing while you happen to be having the hottest sex of your life. If you named the album to someone else, they might say, "But that's not sexy!" and you'd reply, "But it is to me, because I can't hear it without remembering the hottest sex of my life."


Posted by: A White Bear | Link to this comment | 02-14-08 6:29 PM
73

72: Yeah, but that sort of thing doesn't really fit the passage as written, does it? The implication of the passage is that the cup is instrumentally necessary to perform some act that is both degrading/degenerate and very pleasurable. The sort of memento/fetish value you're talking about wouldn't be described in the same terms, I don't think.


Posted by: LizardBreath | Link to this comment | 02-14-08 6:33 PM
74

All that "The problem with that thing in the cup is that there is nothing that it its to be that thing with the cup in this (the actual) world, " demonstrates to me is that Gendler hasn't bothered to do any research at all into sexual kinks and fetishes. There is at least one thing with pretty much any noun you care to name, usually more. The fact that someone who finds sexual gratification in relatively conventional ways may not have heard of them, or been able to imagine them, is no proof of anything when it comes to whether they actually exist or not.

A simple Google search for fetish with cup reveals several things with cups right in the first few links. Gendler is arrogantly overconfident and ignorant. Wolfe often is too but this time it's not his fault. :)


Posted by: Bruce Baugh | Link to this comment | 02-14-08 6:33 PM
75

OT: can someone quickly give me a good German synonym for lurid?


Posted by: Napi | Link to this comment | 02-14-08 6:37 PM
76

73: But it could still be something fairly sexual, of course. What if it's just that he once grabbed a little porcelain cup to stimulate her (the sides being curved and cool, etc.) and she happened to be fantastically aroused by it, and now she begs him to do "the thing with the cup" because the cup has taken on this really personal erotic value? This is difficult to imagine? It's a personal universe of desire, not a sentimental thing.


Posted by: A White Bear | Link to this comment | 02-14-08 6:38 PM
77

There are no possible worlds in the Szabó Gendler paper, people, just fictional ones.

I think a lot of the literature on this is badly done because people use such ridiculous toy examples—Wal/ton and Wea/therson especially—and have such an impoverished understanding of what imagination is and what imaginative response to literature is that they're not really talking about anything at all. This is especially ironic, since the Mor/an paper that SG cites is much better on this sort of thing, with an appreciation for various sorts of imaginative participation and the importance of style and whatnot.


Posted by: ben w-lfs-n | Link to this comment | 02-14-08 6:38 PM
78

I wish I could lick myself like a cat.


Posted by: John Emerson | Link to this comment | 02-14-08 6:41 PM
79

That's the most reasonable google-proofing in ages.


Posted by: washerdreyer | Link to this comment | 02-14-08 6:41 PM
80

Okay, there is "Danger! Imminent exposure!" and all that, but imagine something only a little ramped up from "genital stimulation with a cup" and there is guaranteed to be a dude driven wild at the insane kinkiness of it. There is very little anyone could ever do in bed that genuinely merits that kind of anxious fear response, but the lesson here is that people are weird about sex.


Posted by: A White Bear | Link to this comment | 02-14-08 6:41 PM
81

A big reason so much philosophy is bullshit is that the "evidence" is the intuition of the author, who then tries to browbeat the reader into accepting it as humanly universal.

I don't know much about philosophy, but this is definitely a problem in linguistics. Things are improving, though.

and is not impossibly perverted (a la two girls, one cup)

Why this criterion? "Impossibly perverted" sounds like more or less what Wolfe means.


Posted by: teofilo | Link to this comment | 02-14-08 6:42 PM
82

There are no possible worlds in the Szabó Gendler paper, people, just fictional ones.

Yeah, I almost reworded that.

Which Mo/ran paper that SG cites?


Posted by: parsimon | Link to this comment | 02-14-08 6:44 PM
83

76 -- you can't mechanically stimulate anybody with a porcelain cup. The rounded sides make it too difficult, and if you tried vigorously enough to get somewhere you'd be taking the risk of breaking it and embedding shards of porcelain in somebody's inner thigh. The only thing a cup could possibly do is hold some bodily excretion, which doesn't really fit the passage.

This is starting to bug me. What a lousy piece of writing that is. Out of the whole vast universe of feasible things to get off with, Wolfe has her pull a *cup* out of her purse.


Posted by: PerfectlyGoddamnDelightful | Link to this comment | 02-14-08 6:45 PM
84

you can't mechanically stimulate anybody with a porcelain cup.

Where'd the empiricists who tested out bathroom stall oral sex positions go?


Posted by: washerdreyer | Link to this comment | 02-14-08 6:46 PM
85

embedding shards of porcelain in somebody's inner thigh

hott.


Posted by: Gonerill | Link to this comment | 02-14-08 6:46 PM
86

Parsley: "The Expression of Feeling in Im/agination". Very good!


Posted by: ben w-lfs-n | Link to this comment | 02-14-08 6:47 PM
87

81: he's a wealthy businessman and she's a sophisticated seductress, it would totally twist up the character to have him doing scat with her the first time they had sex.

Maybe piss or something. I think I have to stop now.


Posted by: PerfectlyGoddamnDelightful | Link to this comment | 02-14-08 6:47 PM
88

Where'd the empiricists who tested out bathroom stall oral sex positions go?

The sociology department.


Posted by: Gonerill | Link to this comment | 02-14-08 6:47 PM
89

83: It is possible to be gentle, you know. I feel weird explaining this. The thing that makes something like a porcelain cup stimulating is that you couldn't use it vigorously at all, which constraint makes the act potentially extraordinarily hot.

But yes, it's a lousy piece of writing and Tom Wolfe should be ashamed of himself.


Posted by: A White Bear | Link to this comment | 02-14-08 6:48 PM
90

86: Thanks, Ben.

83: you can't mechanically stimulate anybody with a porcelain cup

PGD is trolling.


Posted by: parsimon | Link to this comment | 02-14-08 6:49 PM
91

Yeah, I'm going to have to defer to AWB on this -- I don't think I'd been sufficiently accounting for individual weirdnesses. (That is, messing around with a cup? Eh, certainly possible. Perceiving it as being as fraught with meaning as the passage does? Seems peculiar, but I suppose not impossibly so.)


Posted by: LizardBreath | Link to this comment | 02-14-08 6:52 PM
92

PGD is trolling.

You know, if you substitute the P for the letter just prior to it in a standard Latin alphabet, pronounce the resulting three letter combination as one word, and then spell out what it is you pronounced, you get a different pseudonym. One that trolls. Coincidence? Perhaps. But perhaps not.


Posted by: washerdreyer | Link to this comment | 02-14-08 6:55 PM
93

I would give Real Life Examples of people being absurdly weirded out even by things they themselves introduced into the bedroom, but as everyone has surely noticed, I don't do that anymore. You're welcome.


Posted by: A White Bear | Link to this comment | 02-14-08 6:56 PM
94

when I first read the passage when the book came out I assumed she got naked and inserted her birth control cup in front of him. that was a huge turn on for me when my future wife and I were new at it. it doesn't really fit the passage though.


Posted by: martin van buren | Link to this comment | 02-14-08 7:00 PM
95

Don't you people realize? It's not a novel -- IT'S A COOKBOOK!!!


Posted by: minneapolitan | Link to this comment | 02-14-08 7:29 PM
96

there was a famous scene from our old 1940s historical movie when a woman sips wine from the cup and passes a mouthful to her lover
similar scene is in the movie Indochina but without any cups
irl i imagine that would be gross to drink from someone else's mouth
bingo!?


Posted by: read | Link to this comment | 02-14-08 7:30 PM
97

64: Well, here's the kind of example people have in mind with respect to "resistance" to imagining a counterfactual morality (from this paper):

Jack and Jill were arguing again. This was not in itself unusual, but this time they were standing in the fast lane of I-95 having their argument. This was causing traffic to back up a bit. It wasn't significantly worse than [what] normally happened around Providence, not that you could have told that from the reactions of passing motorists. They were convinced that Jack and Jill, and not the volume of traffic, were the primary causes of the slowdown. They all forgot how bad traffic normally is along there. When Craig saw that the cause of the backup had been Jack and Jill, he took his gun out of the glovebox and shot them. People then started driving over their bodies, and while the new speed hump caused some people to slow down a bit, mostly traffic returned to its normal speed. So Craig did the right thing, because Jack and Jill should have taken their argument somewhere else where they wouldn't get in anyone's way.

So, the intuition is that as a reader you find the progression in that passage to be funky (for reasons more interesting than philosophy's use of silly examples).

And I was wrong in 62 in how I stated the supposed anomaly in this case: it's not that you resist imagining emotional indifference to a generic moral horror, but that you resist imagining that emotional indifference as being (in that fictional world) the morally right emotional action in that case. In effect, you resist imagining a world in which the "moral facts" are inverted as a matter of course, whereas you don't with other kinds of facts.


Posted by: Moby Ape | Link to this comment | 02-14-08 7:37 PM
98

irl i imagine that would be gross to drink from someone else's mouth

No. Not if you're planning on putting your tongue in there, anyway.


Posted by: Jesus McQueen | Link to this comment | 02-14-08 7:38 PM
horizontal rule
99

irl i imagine that would be gross to drink from someone else's mouth

Not if you work at Connected Ventures.


Posted by: minneapolitan | Link to this comment | 02-14-08 7:38 PM
horizontal rule
100

They were convinced that Jack and Jill, and not the volume of traffic, were the primary causes of the slowdown. They all forgot how bad traffic normally is along there.

and then

People then started driving over their bodies, and while the new speed hump caused some people to slow down a bit, mostly traffic returned to its normal speed.

Apparently, "they" weren't the only people who forgot "how bad traffic normally is along there." What a poorly imagined passage.


Posted by: eb | Link to this comment | 02-14-08 7:44 PM
horizontal rule
101

I had the same thought as martin van buren, without his specific valence or memories.


Posted by: ben w-lfs-n | Link to this comment | 02-14-08 8:20 PM
horizontal rule
102

I don't think this is hard to imagine at all. (In fact, when I read A Man in Full, I thought it was referring to something like "fire cupping".) Going with AWB's suggestion, what if it was a demitasse cup?


Posted by: Walt Someguy | Link to this comment | 02-14-08 8:29 PM
horizontal rule
103

It's kind of like if there's a particular album playing while you happen to be having the hottest sex of your life. If you named the album to someone else, they might say, "But that's not sexy!" and you'd reply, "But it is to me, because I can't hear it without remembering the hottest sex of my life."

I can think of exactly one song like that. But it wasn't during any sex act, it was while driving in the car on a date. And she was talking about how sexy the song was. But it was obviously a cartoonishly intentionally sexy song to begin with, so I guess I just have the intended response, but the song would have failed to induce the intended response for me if it weren't for this girl's eroticism.


Posted by: Cryptic Ned | Link to this comment | 02-14-08 8:31 PM
horizontal rule
104

I can think of exactly one song like that.

Huh. I can think of a half-dozen or so songs that I can't listen to without feeling a twinge of one kind or another.


Posted by: mrh | Link to this comment | 02-14-08 8:33 PM
horizontal rule
105

I bet it was the kind of cup used in bloodletting.


Posted by: ben w-lfs-n | Link to this comment | 02-14-08 8:33 PM
horizontal rule
106

105 meet 9.


Posted by: Cryptic Ned | Link to this comment | 02-14-08 8:35 PM
horizontal rule
107

I immediately pictured a little cup like they use to serve tea at Chinese restaurants, white with no handle, and sturdy.


Posted by: A White Bear | Link to this comment | 02-14-08 8:37 PM
horizontal rule
108

Wow, nobody has followed up Hamilton-Lovecraft's link to firecupping at #26. I followed the link, and I can imagine variations being very hot, either wet or dry. A fine cognac used as the flammable, a beautiful porcelain cup, the sensations, the aftermarks. For several hours I have been trying to imagine firecupping used at erogenous zones.

The reason firecupping interested me was listening to a 15-min segment on NPR this afternoon about hickeys. In general guys thought hickeys hotter than ladies.

Is this a philosophy or a sex thread?


Posted by: bob mcmanus | Link to this comment | 02-14-08 8:39 PM
horizontal rule
109

Wait, that should be "26 meet 9". I forgot what my own link was to. But then, 26 is a lot more informative than 9. I ban myself.


Posted by: "Cryptic Ned" | Link to this comment | 02-14-08 8:41 PM
horizontal rule
110

In general guys thought hickeys hotter than ladies

Man, we even rate lower than hickeys?


Posted by: A White Bear | Link to this comment | 02-14-08 8:43 PM
horizontal rule
111

So the story I might tell about a fictional world might go something like this: the author tells a story, and the audience as necessary fills in all the background details that the author left out. Usually this is quite a lot, and we take our cues from context. E.g., we'll assume that the laws of physics are the same unless otherwise stated. You probably believe Sherlock Holmes is white. And then we imagine things to flesh out the fictional world.

But sometimes the author changes those rules. One interesting philosophical puzzle (so it is claimed) is that it is far easier to get an audience to imagine a world without our laws of physics or biology, but (so it is claimed), much harder to get an audience to imagine a world where they endorse a different moral system. We don't have a problem with hyperspace, but we resist imagining, e.g., infanticide as a moral duty. (A good example in Gendler's paper is one of Kipling's happy imperialist poems. Take up the white man's burden? Eech.)

So this example comes up in that context. We figure they were doing something with the cup, even if we can't say what (I don't think it's a diaphragm, because they're really very dull), we get that it's meant to be writerly somehow, and move on.

I kind of reject the premise of the argument, because I don't think that there's a good way to compare 'that world is too unlike ours physically, i can't imagine it' and 'that world is too unlike ours morally, i can't endorse it.'


Posted by: Cala | Link to this comment | 02-14-08 8:45 PM
horizontal rule
112

110: You can't deny this charm, AWB.


Posted by: Cryptic Ned | Link to this comment | 02-14-08 8:45 PM
horizontal rule
113

77: The field is pretty young, comparatively, and there's a lot of excitement over psychology, which (imo) tends to mean a trade-off in philosophical rigor.


Posted by: Cala | Link to this comment | 02-14-08 8:46 PM
horizontal rule
114

Is this a philosophy or a sex thread?

There's a difference?


Posted by: teofilo | Link to this comment | 02-14-08 8:47 PM
horizontal rule
115

and there's a lot of excitement over psychology, which (imo) tends to mean a trade-off in philosophical rigor

And experimental rigor, too, frankly.


Posted by: Gonerill | Link to this comment | 02-14-08 8:48 PM
horizontal rule
116

113, 115: Again, the parallel with linguistics suggests itself.


Posted by: teofilo | Link to this comment | 02-14-08 8:50 PM
horizontal rule
117

I figured that went without saying. I like the idea of interdisciplinary stuff, but it often doesn't turn out resembling scholarship.


Posted by: Cala | Link to this comment | 02-14-08 8:50 PM
horizontal rule
118

(A good example in Gendler's paper is one of Kipling's happy imperialist poems. Take up the white man's burden? Eech.)

Actually, that's a terrible example, because she completely misreads it.


Posted by: ben w-lfs-n | Link to this comment | 02-14-08 8:52 PM
horizontal rule
119

How so? She's not doing literary criticism, but as a 'here is a poem people cringe at these days'-type example, surely it suffices.

Been about three years since I've read the paper.


Posted by: Cala | Link to this comment | 02-14-08 8:53 PM
horizontal rule
120

God fucking dammit.


Posted by: Cala | Link to this comment | 02-14-08 8:54 PM
horizontal rule
121

There's another paper on the same topic that points out what I claimed in 118, but I can't remember what it is.

The field is pretty young, comparatively, and there's a lot of excitement over psychology, which (imo) tends to mean a trade-off in philosophical rigor.

I don't think that's an excuse. (I also think that, if it's a field, it's some kind of philosophical aesthetics, which is not young.) Have you read some of these papers? It's as if the authors have never read a novel. The relevant discipline with which to get inter isn't psychology but literary studies.


Posted by: ben w-lfs-n | Link to this comment | 02-14-08 8:54 PM
horizontal rule
122

The relevant discipline with which to get inter isn't psychology but literary studies.

Yeah but then no MRI machines.

God fucking dammit.

Ah, crap.


Posted by: Gonerill | Link to this comment | 02-14-08 8:57 PM
horizontal rule
123

I have read many of the papers. I disagree with them, but I think they're wrong in interesting ways or at least fruitful ways for me. (Except for the Fearing Fictions paper, which is a complete trainwreck.)

Aesthetics got banished to Tatooine for a period of years, and it's all this psychological inter-ing which is making it respectable (so the story goes).


Posted by: Cala | Link to this comment | 02-14-08 8:58 PM
horizontal rule
124

Yeah but then no MRI machines.

Perhaps both? "Inter" means "among" as well as "between."


Posted by: teofilo | Link to this comment | 02-14-08 9:01 PM
horizontal rule
125

There's a reason the philosophy department and the literature department are separate.


Posted by: Cala | Link to this comment | 02-14-08 9:02 PM
horizontal rule
126

How so?

Well, here's what she says: "Leaving aside the niceties of literary interpretation, let us take this poem as a straightforward invitation to make-believe, a proposal about something we are called to imagine without committing ourselves to its literal truth." But why in the world would we want to do that? What's the point in adducing an actual text if she's going to ignore obviously salient features of it? "WMB" is not an invitation to make-believe any more than a racist op-ed essay is—the fact that it rhymes doesn't make it so. It is not the case that "among the things that Kipling is asking us to make-believe there are the following: that there are certain white characters who have taken it upon themselves to initiate a group of nonwhites into the ways of Western culture" and so on. He is exhorting his audience to be a group of white actual existing persons and go out and initiate nonwhites into etc.

She may as well have taken an actual racist opinion essay from the time (I'm sure there were many) and said, "let's pretend this is a short story". If you want to investigate our imaginative interactions with texts, you should choose texts appropriately interacted with imaginatively. (You should also not take extremely crude texts; they won't provide very interesting fodder.)


Posted by: ben w-lfs-n | Link to this comment | 02-14-08 9:02 PM
horizontal rule
127

There's a reason the philosophy department and the literature department are separate.

Indeed. There is a reason. It's not because the interests of the members of the one and the interests of the members of the other are completely nonoverlapping, or that the members of the one have nothing to say to the members of the other and with the thing reversed, though.


Posted by: ben w-lfs-n | Link to this comment | 02-14-08 9:04 PM
horizontal rule
128

There's a reason the philosophy department and the literature department are separate.

And there isn't a reason the philosophy and psychology departments are separate?


Posted by: teofilo | Link to this comment | 02-14-08 9:04 PM
horizontal rule
129

125: La, la! Nothing to see here! [sweeping dissertation behind door]


Posted by: A White Bear | Link to this comment | 02-14-08 9:04 PM
horizontal rule
130

128: that is entirely due to the clinamen.


Posted by: ben w-lfs-n | Link to this comment | 02-14-08 9:05 PM
horizontal rule
131

Let's all try to figure out what true and relevant thing Cala might have meant to communicate with 125.


Posted by: ben w-lfs-n | Link to this comment | 02-14-08 9:06 PM
horizontal rule
132

No, it's because they fight like scalded cats when in the same room.

126: I see where you're coming from. Off the cuff, the problem is not her use of Kipling's poem, it's that pretense has overrun philosophy.


Posted by: Cala | Link to this comment | 02-14-08 9:06 PM
horizontal rule
133

No, it's because they fight like scalded cats when in the same room.

Maybe at your benighted institution, said the guy TAing a class team-taught by a philosophy and a French professor.


Posted by: ben w-lfs-n | Link to this comment | 02-14-08 9:08 PM
horizontal rule
134

Benighted my ass.


Posted by: Cala | Link to this comment | 02-14-08 9:12 PM
horizontal rule
135

130: Psychology isn't quite an experimental science yet, there are still very few chinamen even in its "labs" (read: MRI machines).


Posted by: Cryptic Ned | Link to this comment | 02-14-08 9:13 PM
horizontal rule
136

130: Pineal gland, I think.


Posted by: Cala | Link to this comment | 02-14-08 9:14 PM
horizontal rule
137

I can't believe nobody's brought up the Delight of the Razor. I blame America's public schools.


Posted by: Flippanter | Link to this comment | 02-14-08 9:15 PM
horizontal rule
138

pineal gland what?


Posted by: ben w-lfs-n | Link to this comment | 02-14-08 9:16 PM
horizontal rule
139

Oh, I get it, maybe. Ha.


Posted by: ben w-lfs-n | Link to this comment | 02-14-08 9:17 PM
horizontal rule
140

(Except for the Fearing Fictions paper, which is a complete trainwreck.)

When I applied to graduate school, I submitted as a writing sample a paper which spent some time along the way doing its best to rip "Fearing Fictions" apart, with a rather disdainful and mocking tone. At some point it dawned on me that I was applying to Michigan, where the paper's author is a professor, and had I been under serious consideration as a candidate he surely would have read it. Since, in the end, I probably wasn't a serious candidate, and I didn't get into Michigan or anywhere else (in the scheme of things, probably a good thing and the right decision on their part), I assume he didn't get around to reading it, but I've always kind of hoped he did.


Posted by: W.H. Harrison | Link to this comment | 02-14-08 9:18 PM
horizontal rule
141

Since the doctrines of "Fearing Fictions" appear more or less unaltered in MaMB, and are several decades old, you'd think its trainwreckness would have come to light by now in the general philosophical imagination, and pretense's death-grip would be slackened.

140: I sent to Columbia a paper (actually my senior thesis, which was rather short) arguing against Danto. They waitlisted me (and didn't accept me off it). I wonder if he read it.


Posted by: ben w-lfs-n | Link to this comment | 02-14-08 9:20 PM
horizontal rule
142

Aw, hell, the thread has rightfully moved on, but I'll post this already-written thing anyway:

97: it's not that you resist imagining emotional indifference to a generic moral horror, but that you resist imagining that emotional indifference as being (in that fictional world) the morally right emotional action in that case. In effect, you resist imagining a world in which the "moral facts" are inverted as a matter of course, whereas you don't with other kinds of facts.

There's a lot packed in here. Discussion of 'moral facts' gives me a pain in my ass, makes me grit my teeth. It seems clear that the terms of the discussion are impoverished.

IF there are 'moral facts' then we respond to them as a matter of course, and resist alternative responses ... okay ... I really don't know what 'whereas you don't resist [emotionally] with other kinds of facts' means, except that alleged moral facts don't operate in the same way as non-moral facts, and we knew this. But I'm not sure yet how resistance to imagining illuminates the difference.

Did I mention that moral realism drives me nuts?

For resisting non-moral facts, try some of Wittgenstein's early thought experiments -- the one about the imagined people who measure and assess the value and worth of a pile of sticks (or whatever) according to its stacked height. The same lot of sticks piled higher is worth more, in their world, it is obvious! Do you resist that imaginative world?

If so, it indicates a kind of non-moral fact (modes of measurement, in this case) that we are very deeply invested in. Emotionally, if you want. Morally?

So I'm not seeing the lack of imaginative resistance to the "other kinds of facts" that, um, Moby Ape referred to in 97.


Posted by: parsimon | Link to this comment | 02-14-08 9:21 PM
horizontal rule
143

141: I know! Either the argument is false, or it's dumb (to fear things fictionally is to be afraid of* a fiction. No shit, Sherlock.)

*"near" would be a better word here.


Posted by: Cala | Link to this comment | 02-14-08 9:24 PM
horizontal rule
144

Wea/therson's paper asserts that the resistance (a) exists and (b) owes to a certain kind of supervenience structure, such that he thinks he can give examples of nonmoral resistenda. I don't have a copy of it (though prob. I could get it through jstor and actually I might have a paper copy about ten feet to my left).

Mo/ran's paper has examples like the funniness of an occurrence, but he's really chasing different prey (at least in part of that section—actually it's hard to separate the strands of the argument in the section where he's talking about this stuff).


Posted by: ben w-lfs-n | Link to this comment | 02-14-08 9:25 PM
horizontal rule
145

You know, it's not all fancy philosophy but the "imagine a world where what we think is good is thought of as bad and that's completely normal" thing is a pretty common trope of, er, science fiction. Sometimes as a critique of the world in which we live, sometimes just for the frisson. The first story that springs to mind is the famous one (whose title I can't recall) where there are these anthropologists on another planet, see, and they have this daughter. And they all make friends with the intelligent creatures of the planet. But one day the creatures start chasing the daughter with obvious intent to kill. Why? Why? Because they in fact chase their young and kill the ones they catch as an ordinary culling-the-stupid-ones practice sanctified by thousands of years of, and blah blah. So the girl survives, thinks sadly of all the dead little alien children and goes back to her parents who tell her that that's just how the aliens are and therefore it's okay.

Now, this isn't a totally different universe in that the main character is a human who isn't used to the idea of killing your young to make sure that you don't end up with stupid adults, but the intent of the story is very plainly "imagine a world where what strikes us as moral horror really isn't."

And as you can tell from the plot summary, this is hardly some kind of super-subtle work of literary genius. I'm not making an argument for this particular type of SF as profound or even especially interesting.

Counterfactuals? Hah. A few trips to science fiction conventions will improve your counterfactuals out of recognition, plus you'll get to meet all those fascinating people who dress up like aliens.


Posted by: Frowner | Link to this comment | 02-14-08 9:26 PM
horizontal rule
146

to fear things fictionally is to be afraid of* a fiction

But that's not what fearing things fictionally is, Wal/ton nach. I do think there's lots to disagree with in the article, but it's a pretty resourceful view in the end—141 should be read modus tollens–style.


Posted by: ben w-lfs-n | Link to this comment | 02-14-08 9:27 PM
horizontal rule
147

It should be obvious to anyone with a healthy sex life that she masturbates him into the cup, and then empties the cup.


Posted by: d7c | Link to this comment | 02-14-08 9:28 PM
horizontal rule
148

That doesn't seem particularly exciting.


Posted by: ben w-lfs-n | Link to this comment | 02-14-08 9:29 PM
horizontal rule
149

nonmoral resistenda

Ben.


Posted by: parsimon | Link to this comment | 02-14-08 9:32 PM
horizontal rule
150

I'm sorry, parsimon.

I know how wild that thing with the gerunds drives you.


Posted by: ben w-lfs-n | Link to this comment | 02-14-08 9:32 PM
horizontal rule
151

I think pretense is resourceful (if running amok), but the article is just a mess. Again, a while since I've read it, but he tries to treat emotions like belief. I have a make-belief in the story, and I also have a fictional emotion. Trouble is when you ask what a fictional emotion is; is it not real fear? Walton wants to say it's real fear (I think this is confirmed in a follow-up article), so I'm not pretending to be afraid, but then it seems this just amounts to a restatement of the problem. Currie's work in this area is better.


Posted by: Cala | Link to this comment | 02-14-08 9:33 PM
horizontal rule
152

145: Scifi is one main reason I don't buy a strong version of the 'resistance to immoral imaginings' thesis.


Posted by: Cala | Link to this comment | 02-14-08 9:35 PM
horizontal rule
153

He doesn't try to treat emotions like belief, he's a cognitivist about emotion, which is a reasonably respectable position, and I'd be very interested in seeing where he wants to say that you've got real fear in that case, because that is absolutely the opposite of what he says in the article and his big book.

You do feel something—various physiological things and their relatively low-grade psychological accompaniments as, eg, a thumping heart, tension, and disposition to react startledly to sudden movements—but those all get classed as "quasi-fear", and it's on the basis of a de se imagining about yourself, guided by your (actual) quasi-fear, that you are fictionally afraid.

IMO one of the weakest bits of the paper is the part where he says that quasi-fear arises as a result of imagining the truth of various propositions, and then has a footnote saying he doesn't need to say anything about how that works.


Posted by: ben w-lfs-n | Link to this comment | 02-14-08 9:36 PM
horizontal rule
154

If I'm remembering the second article correctly, fictionally afraid and "quasi-fear" turn out to be real fear. (In Emotions and the Arts eds. Hjort & Laver.) Could be wrong, last read it 2004, &c.


Posted by: Cala | Link to this comment | 02-14-08 9:43 PM
horizontal rule
155

This awful DeKalb thing. "The gunman, he said, had been a graduate student in sociology at the university in 2007, but was no longer enrolled here."


Posted by: Gonerill | Link to this comment | 02-14-08 9:43 PM
horizontal rule
156

moral realism drives me nuts

Are statements of ethical judgment in your view truth-apt?


Posted by: washerdreyer | Link to this comment | 02-14-08 9:44 PM
horizontal rule
157

155: The shooting at Nerd U Business School in 2003 was a similar thing. No-longer-enrolled grad student who'd been hanging around for years, obviously had lost his mind, but was tolerated in computer labs and stuff, freaked out and held everyone hostage for eight hours before shooting people. Grad school is a really bad thing for the potentially unhinged. Very, very sad.


Posted by: A White Bear | Link to this comment | 02-14-08 9:47 PM
horizontal rule
158

155: Oh god. Early reports were saying he was aiming for the instructor of the course.


Posted by: Cala | Link to this comment | 02-14-08 9:49 PM
horizontal rule
159

148: Depends on how she empties the cup.

Since the doctrines of "Fearing Fictions" appear more or less unaltered in MaMB, and are several decades old, you'd think its trainwreckness would have come to light by now in the general philosophical imagination

That I found myself thinking this about all sorts of things as an undergrad philosophy major is one of the reasons it's probably best that I'm no longer half-assedly trying to make a career out of it.


Posted by: W.H. Harrison | Link to this comment | 02-14-08 9:49 PM
horizontal rule
160

Yeah we had one at my campus a few years ago, too. Disgruntled loser (male) shot dead several of his professors (female).


Posted by: Gonerill | Link to this comment | 02-14-08 9:50 PM
horizontal rule
161

Thanks to google books:

I stand by my contention that it is only in imagination that Charles fears the Slime, and that appreciators do not literally pity Willy Loman, grieve for Anna Karenina, and admire Superman …

But:

If I should find myself imagining [torturing kittens after reading a story about same] with a sense of glee, however, I may have reason to worry. The glee is real. But my experience certainly does not have to be described as actually taking pleasure in the suffering of kittens, in order to signal a cruel streak in my character.

This seems an awkward concession for him to make, though I'm not sure how to press things (having just read it right now and all).


Posted by: ben w-lfs-n | Link to this comment | 02-14-08 9:50 PM
horizontal rule
162

Are statements of ethical judgment in your view truth-apt?

What? I can't tell if that's a serious question. (I've just gotten over straightening my face over ben's "resistenda," not even knowing if he coined that or it's actually a term.)

What's meant by "truth-apt"? If that's a straight question and it means "Can statements of ethical judgment be assigned truth-values," then no, not in the same sense in which nonmoral statements can (in theory) be. But that's taking the question on the given terms, terms that are inadequate.


Posted by: parsimon | Link to this comment | 02-14-08 9:54 PM
horizontal rule
163

I'm pretty sure the mention of his department wasn't in the article when I first read it (right after Cala linked it). It just said he was a former grad student.


Posted by: teofilo | Link to this comment | 02-14-08 9:54 PM
horizontal rule
164

Either I just coined it or I derived it from the Latin resistere; you may say either.


Posted by: ben w-lfs-n | Link to this comment | 02-14-08 9:55 PM
horizontal rule
165

97: As anyone with a feel for actual literature would tell you, the issue with that passage is that there is an extremely crude omniscient narrator informing us what we are to feel. The reader takes offense at being lectured in such a patronizing manner, and resists it. The omniscient narrator is a somewhat didactic literary style, unusual in Western history and used in only a few literary periods, with a lot of potential to be crude if it isn't artfully managed. A lot of readers would resist that style even if the author was informing us of something that was morally unexceptionable, let alone when it's an unusual viewpoint that the author hasn't laid the groundwork for. In other words: the problem is that *it's horribly written literature, it's bad art*.

I venture to say that readers would have no problem identifying with a very artfully written character, say a charming serial killer, who is really irritated at someone blocking him off in traffic and therefore kills them.

The real issue here is that philosophers are for the most part horrible artists, who don't understand how literature works or how it achieves its effects. But analytic philosophers, like economists, are imperialistic rationalists, ever ready to bulldoze other intellectual fields in the name of boringly literal-minded logic chopping. They can't bring themselves to imagine that novelists are actually smarter than philosophers, and engaged in a more complex and involved game.

Where *is* Emerson in this discussion? Seth Edelstein would also be a good participant.


Posted by: PerfectlyGoddamnDelightful | Link to this comment | 02-14-08 9:55 PM
horizontal rule
166

See, 165 is why I said people who want to talk about this should talk to the litterateurs.


Posted by: ben w-lfs-n | Link to this comment | 02-14-08 9:58 PM
horizontal rule
167

164: I liked it.


Posted by: Cala | Link to this comment | 02-14-08 9:58 PM
horizontal rule
168

When I was in grad school a nutbar in my program shoved me around after a misunderstanding. I learned afterwards that a group of senior faculty wanted him expelled because they were worried he'd go postal, and this incident crystallized it for them. But the tried and true method of ignore him and he'll go away was chosen instead. He dropped out later, after I left. He was the kind of person about whom people would have said, if he had gone off, "Yeah he was a total lunatic."


Posted by: Herbert Hoover | Link to this comment | 02-14-08 9:59 PM
horizontal rule
169

Oh, don't get me wrong, I liked it too.


Posted by: parsimon | Link to this comment | 02-14-08 10:01 PM
horizontal rule
170

Is Ben actually agreeing with me?

I will admit I'm kind of trolling, as my invocations of Emerson and Edelstein should make clear. I'm a little drunk at the moment. Alcohol always helps you really *feel* the superiority of literary over philosophical reasoning.


Posted by: PerfectlyGoddamnDelightful | Link to this comment | 02-14-08 10:02 PM
horizontal rule
171

It's like that funeral puzzle test for psychopaths.

?

At the funeral for her grandfather, a woman meets a man whom she doesn't know. He's the man of her dreams, and she's utterly infatuated, but he leaves before she can get his name or phone number. A few days later, she kills her sister.

Why did she kill her sister?


Posted by: Matt F | Link to this comment | 02-14-08 10:03 PM
horizontal rule
172

161: That the complete uselessness of Charles fearing the Slime as an illustrative example hadn't occurred to Walton all those years later boggles the mind. The other thing is kind of awkward, but I think it kind of misses the point. I have a sexual fetish for the Thing with the Cup, by which I mean that I take pleasure in imagining it and enjoy seeing it fancifully depicted, but morally I know that the Thing with the Cup is reprehensible (this is probably part of why I find it erotic) and were I to actually witness it being perpetrated I undoubtedly would try to intervene and save the poor gentleman who was being so used. If I get aroused by the idea but not the reality then it's not my arousal that's quasi-, it's the thing which caused it.

If I developed it further maybe that could be a good example of how I think Wal/ton's intuitions about the world of emotional make-believe can be a helpful tool, but how I also think they're still mostly misapplied when used to look at fiction.


Posted by: W.H. Harrison | Link to this comment | 02-14-08 10:04 PM
horizontal rule
173

I actually agree with the first two paragraphs of 165, yes.

171: because he'll come to the next funeral too, or so she reasons.


Posted by: ben w-lfs-n | Link to this comment | 02-14-08 10:04 PM
horizontal rule
174

157: I left the building literally just as that guy was entering through the other door. He ended up shooting one of my good friends.


Posted by: PerfectlyGoddamnDelightful | Link to this comment | 02-14-08 10:05 PM
horizontal rule
175

I actually agree with you, too, PGD. Quite a lot of the problems with the arguments are that the toy problems philosophers use suck. I think we can imagine clever and sympathetic serial killers, &c.

On the woo-hoo-dumb-philosophers point, not so much, but you're drunkenly trolling.


Posted by: Cala | Link to this comment | 02-14-08 10:06 PM
horizontal rule
176

Ugh, 168 makes me crazy. I've seen so many grad students who are not doing well just about lose their minds trying to get some kind of feedback from professors that explains what's happening to them. I know one woman who has lost every teaching job she's gotten, been shuttled about from advisor to advisor, and she's not a totally emotionally stable person to begin with. She keeps cornering me and asking why, what evidence are people giving for shunning her. And no one has any answers other than, well, they secretly think she's not very smart and that she's more than a little crazy. No one's telling her to drop out or advising her about how to make the transition into another career. They just walk out of the room when she walks in. I'd flip out if I were her.


Posted by: A White Bear | Link to this comment | 02-14-08 10:07 PM
horizontal rule
177

170: You'll have noticed that your trolling was utterly transparent, and garnered a wink and a nod.

(I don't know why straight anal phil involves itself in literary stuff either, to tell you the truth; it's not its forte.)


Posted by: parsimon | Link to this comment | 02-14-08 10:08 PM
horizontal rule
178

Frowner's 145 is really key, I think. When I watched the anti-slavery movie Amazing Grace, I was reminded of how movies like that are always walking a tightrope of showing modern audiences how Very, Very Bad the thing that used to be common was (forced marriage, slavery, whatever), but yet also how common and ordinary it was, and how extraordinary for a person of that time (the Hero of the Movie) to be able to see past it.

One reason this is a tightrope is that not all movie fans are scifi fans. I suspect that if they were, you wouldn't have to waste such an almighty amount of time establishing that this is a really, really bad thing, and could just leap into establishing the thoroughness with which the historical world was infested by the phenomenon and the interestingness of someone being able to see past it.


Posted by: Witt | Link to this comment | 02-14-08 10:08 PM
horizontal rule
179

straight anal phil involves itself in literary stuff either

I think it's called "the DL" or something like that.


Posted by: ben w-lfs-n | Link to this comment | 02-14-08 10:09 PM
horizontal rule
180

I actually agree with the first two paragraphs of 165, yes.

Yes, the third paragraph was when I really let loose with that venomous (and hence falsifying) trolling impulse. As Cala points out as well.

But there's an emotional truth in trolling, which is why I think it gets a bum rap. I do get tired of obviously unimaginative rationalistic types lecturing on the "real meaning" of imaginative culture. One sees this now with evolutionary psychology, economics, perhaps analytic philosophy sometimes, etc. It's a mindset.


Posted by: PerfectlyGoddamnDelightful | Link to this comment | 02-14-08 10:11 PM
horizontal rule
181

Keep an eye on Ben at the next meetup, people. Fair warning.


Posted by: Matt F | Link to this comment | 02-14-08 10:11 PM
horizontal rule
182

178: You can see it, too, in adaptations where they change something (The Golden Compass comes to mind) so the audience won't be offended. I think the basic idea the puzzle gets at is that we're not surprised when a movie does it with moral sensibilities, but we don't expect the movie to ignore hyperspace because it's too unrealistic.


Posted by: Cala | Link to this comment | 02-14-08 10:14 PM
horizontal rule
183

"the DL"?


Posted by: parsimon | Link to this comment | 02-14-08 10:16 PM
horizontal rule
184

The down-low.


Posted by: ben w-lfs-n | Link to this comment | 02-14-08 10:18 PM
horizontal rule
185

Not that I'm thinking necessarily of what parsimon means by "straight anal phil", other than using "analytic" as a catchall for "modern Anglo-American and disdainful of Derrida and probably Foucault," but:

I think the key is for philosophers to be cognizant of their limitations (which will be different from one to another) and for both philosophers and their readers to be able to distinguish when they're straying out of properly philosophical territory. I had one class where we read a discussion or debate or symposium or whatever between Wayne Booth (I think), Martha Nussbaum, and Richard Posner on the moral implications of reading, teaching, enjoying, certain fictions that might be morally objectionable, taking the antisemitism of The Merchant of Venice as the prime example. Now, I think this is an area where philosophers can contribute valuable and interesting discussion, but the problem was that the back and forth kept straying into textual interpretation and appreciation. Booth (unsurprisingly) struck my classmates and me as an extremely insightful and sensitive reader of the text; Nussbaum was ok, but at the level of maybe the average undergrad English student, and Posner demonstrated the literary insight of a clever fifth grader.


Posted by: W.H. Harrison | Link to this comment | 02-14-08 10:18 PM
horizontal rule
186

we're not surprised when a movie does it with moral sensibilities, but we don't expect the movie to ignore hyperspace because it's too unrealistic.

Wait, I understand the first half of this, but I can't translate the second. Can you clarify?


Posted by: Witt | Link to this comment | 02-14-08 10:20 PM
horizontal rule
187

In the class I had as an undergrad reading the Nussbaum-Posner exchange, the professor (I believe this anecdote has been immortalized on, yecch, crescat sententia) pointed out that Posner writes an article a day, and when you do that, you don't have time to think.


Posted by: ben w-lfs-n | Link to this comment | 02-14-08 10:20 PM
horizontal rule
188

174: I'm so sorry. That was a scary, bad day.


Posted by: A White Bear | Link to this comment | 02-14-08 10:22 PM
horizontal rule
189

Speaking of Wayne Booth, the free indirect style in that passage is atrocious. Her demented form of lust!?


Posted by: Michael Vanderwheel, B.A. | Link to this comment | 02-14-08 10:22 PM
horizontal rule
190

186: Let's take The Golden Compass as an example. So we have resistance, say, to imagining that the Vatican sanctions violent and cruel torture of children, but not to the fact that Lyra meets talking bears. Both represent the world of the story as different from the real world, but only one gets people upset, even though both are make-believe.


Posted by: Cala | Link to this comment | 02-14-08 10:23 PM
horizontal rule
191

Ah, thanks.


Posted by: Witt | Link to this comment | 02-14-08 10:25 PM
horizontal rule
192

Hello, laydeez.


Posted by: Straight Anal Phil | Link to this comment | 02-14-08 10:26 PM
horizontal rule
193

Try kissing her first, teo.


Posted by: Cala | Link to this comment | 02-14-08 10:27 PM
horizontal rule
194

192: I believe we've already met.


Posted by: A White Bear | Link to this comment | 02-14-08 10:28 PM
horizontal rule
195

184: Yeah, I figured it was down-low.

185: What I mean by "anal phil" is nothing more or less than that it's an abbreviation for analytic philosophy (I am, by training, one of those). With respect to the rest of 185: there is no such thing as properly philosophical territory, but there is such a thing as established philosophical terms of discussion, just as there are terms of literary discussion. Trained philosophers do well to know when they're out of their field of expertise, just as do literary or political theorists.


Posted by: parsimon | Link to this comment | 02-14-08 10:28 PM
horizontal rule
196

Jurists like Posner have the attitude of stereotypical analytic philosophers with none of the quality control.


Posted by: Gonerill | Link to this comment | 02-14-08 10:29 PM
horizontal rule
197

187:

I can't imagine that I wouldn't recall hearing that professor say that, but I also can't imagine him not repeating it from year to year. (I believe I took that class the year after you probably did. If I'm correct in matching the name Ben w-lfs-n with the proper face, we never had a philosophy class together, but we did share in the excruciating joys of reading Dostoevsky with a deaf man.)


Posted by: W.H. Harrison | Link to this comment | 02-14-08 10:31 PM
horizontal rule
198

I suspect that if they were, you wouldn't have to waste such an almighty amount of time establishing that this is a really, really bad thing, and could just leap into establishing the thoroughness with which the historical world was infested by the phenomenon and the interestingness of someone being able to see past it.

I TA'd a class where we read slave narratives - Frederick Douglass and Harriet Jacobs - one week and a collection of proslavery documents the next week. Students had a much easier time believing there was a past in which slavery was common, brutal and wrong and people fought against it than they had believing that there was a past in which slavery was common, brutal and wrong and people fought for it. Which is to say that the proslavery stuff shocked the students far, far more than the narratives written with an eye toward shocking their contemporaries.


Posted by: eb | Link to this comment | 02-14-08 10:32 PM
horizontal rule
199

I believe we've already met.

Sorry, I didn't recognize you from the front.


Posted by: Straight Anal Phil | Link to this comment | 02-14-08 10:33 PM
horizontal rule
200

we have resistance, say, to imagining that the Vatican sanctions violent and cruel torture of children, but not to the fact that Lyra meets talking bears. Both represent the world of the story as different from the real world, but only one gets people upset, even though both are make-believe.

That strikes me as social convention pure and simple. A matter for sociologists, not philosophers. We have large industries devoted to getting people used to talking bears (if only so they can buy toys based on the film), and also large industries devoted to getting people to be morally scandalized at any criticism of religious groups (if only so said religious groups can mobilize voters more easily).

When I watched the anti-slavery movie Amazing Grace, I was reminded of how movies like that are always walking a tightrope of showing modern audiences how Very, Very Bad the thing that used to be common was

I actually believe there was a particular historical / cultural period going from Victorian England through the political correctness of, say, the 1980s -- call it the Whig period, where the modern liberal middle class takes shape -- where there is a lot of moralistic middlebrow censorship of the facts of human history and human nature. Slavery looks rather different once you understand how horrific the lives of e.g. white European peasants were in the early 19th century.


Posted by: | Link to this comment | 02-14-08 10:34 PM
horizontal rule
201

195: That's essentially what I meant, hence the anecdote about Booth v. Nussbaum; there's a conversation on that general topic where the philosopher could have brought her philosophical terms of discussion to bear, and in fairness she probably did, but what I remember is her attempting to engage in the literature professor's terms of discussion and failing. What I was trying to say wasn't "there are things philosophers shouldn't talk about" but "there are ways philosophers should be wary of talking about things."

(Nussbaum being more or less a philosopher, though I'm not sure to what extent we could say she "trained" in the field.)


Posted by: W.H. Harrison | Link to this comment | 02-14-08 10:35 PM
horizontal rule
202

200 was me.

Speaking of which: 200!


Posted by: PerfectlyGoddamnDelightful | Link to this comment | 02-14-08 10:35 PM
horizontal rule
203

198: Huh, interesting. I liked The Price of a Child exactly because it did such a vivid job of evoking the weirdness of being a former slave lecturing in the North. It resists this notion of wise, benevolent white anti-slavery activists.

The narrator is blunt about how impatient she is with Northern audiences' fascination with the cruelties of slavery, and the hypocrisy of the Northern reliance on cotton.


Posted by: Witt | Link to this comment | 02-14-08 10:40 PM
horizontal rule
204

Way back at 145, says Frowner:

You know, it's not all fancy philosophy but the "imagine a world where what we think is good is thought of as bad and that's completely normal" thing is a pretty common trope of, er, science fiction. Sometimes as a critique of the world in which we live, sometimes just for the frisson.

And as you can tell from the plot summary, this is hardly some kind of super-subtle work of literary genius. I'm not making an argument for this particular type of SF as profound or even especially interesting.

I don't know why you'd dismiss this as uninteresting; of course it's a standard sci-fi alternate worlds thing, not always well carried out. There's a great deal of value in it, philosophical and otherwise.

My only point here and there in this thread is that obviously *moral* oddity is the easiest way to demonstrate our, er, situatedness; but there are equally seemingly nonmoral oddities that point it out as well. A good example: Le Guin's The Left Hand of Darkness.


Posted by: parsimon | Link to this comment | 02-14-08 10:43 PM
horizontal rule
205

I liked that Dostoevsky class, Harrison. Also you have me, indirectly, to thank or blame for the reduction in the required reading (I complained to a girl who was in his Hum class, who relayed—without my asking—the complaint to him).

what I remember is her attempt to engage in the literature professor's terms of discussion and failing.

If you read the article by her I recently had the extreme pleasure of reading, Finely Aware and Richly Responsible: Literature and the Moral Imagination, you will see something in the way of a justification for that. (Reading that article gave me, literally, innumerably infinite hedons.)


Posted by: ben w-lfs-n | Link to this comment | 02-14-08 10:45 PM
horizontal rule
206

What is it like to be that thing with the cup?


Posted by: Matt McIrvin | Link to this comment | 02-14-08 10:47 PM
horizontal rule
207

I don't know why I put that article name in italics since it's an article and not a book. OH WELL.


Posted by: ben w-lfs-n | Link to this comment | 02-14-08 10:47 PM
horizontal rule
208

Perhaps because the title of the article contains quotation marks?


Posted by: teofilo | Link to this comment | 02-14-08 10:48 PM
horizontal rule
209

Yeah, I thought that might be it. Triggering avoidance behavior or something. But normally I love nesting shit like that.


Posted by: ben w-lfs-n | Link to this comment | 02-14-08 10:49 PM
horizontal rule
210

What is it like to be that swamp thing with the cup?


Posted by: ben w-lfs-n | Link to this comment | 02-14-08 10:49 PM
horizontal rule
211

Students had a much easier time believing there was a past in which slavery was common, brutal and wrong and people fought against it than they had believing that there was a past in which slavery was common, brutal and wrong and people fought for it.

The fact that this occurred with nonfictional texts is one that ought to be heeded.


Posted by: ben w-lfs-n | Link to this comment | 02-14-08 10:52 PM
horizontal rule
212

But a weird thing about early Nussbaum is that she somehow thought Aristotle was some kind of moral paradigm. Weird.


Posted by: PGD | Link to this comment | 02-14-08 10:55 PM
horizontal rule
213

Speaking of, I realized last night why Labs finds eudaimonia implausible: although he is very tall, he has, at best, a tenor voice, and doesn't speak particularly slowly. That must rankle.


Posted by: ben w-lfs-n | Link to this comment | 02-14-08 10:57 PM
horizontal rule
214

205: I liked *what* we read immensely, and I appreciated when he would spend time discussing the original Russian to clarify an image or metaphor, but otherwise I felt like we basically showed up and spent a lot of time reading aloud (plus I wasn't super fond of the translations we used, but whatever). The evolution of how he handled his hearing issues also got progressively ridiculous, as I recall. There was one day - I think we were still on C&P - where a girl gave her opinion on a question he asked, he requested that she repeat it while pointing his contraption at her, nonetheless understood her to have said the opposite of what she had said, and dismissed her with a sarcastic comment. ... I don't mean to be ungenerous to Professor I., I personally didn't get a lot out of the class that I couldn't have gotten from just reading the books, but of course that's neither here nor there when it comes to someone else's experience.

I should read that Nussbaum article; I had the pleasure of taking a class from her in my last quarter and was very impressed.


Posted by: W.H. Harrison | Link to this comment | 02-14-08 10:59 PM
horizontal rule
215

I can email it to you if you want (you can email me if you don't want to post your address). If you knew Flo Lall/ement or Anne Ciech/anowski, then you might be the very person to whom I heard that very opinion attributed when discussing the class near or shortly after the close of the quarter with one of the aforementioned. I got more out of it, being hermeneutically unskilled. (Also, he liked my final paper a lot, so, you know, I'm inclined to think positively.)


Posted by: ben w-lfs-n | Link to this comment | 02-14-08 11:02 PM
horizontal rule
216

216: Funny; that he liked my final paper was one of the reasons I didn't think positively of him, since I didn't think positively of it. I kinda sorta knew Anne, and although I can't place her face, Flo's name sounds incredibly familiar, so I might be the person; it could also be the friend I took that class with. We had the same generic take on the class, and probably reinforced each other's negative impressions about the professor. I'm a much more generous person than I used to be.

I should probably just buy the book, but I'm also dropping the pseudo-presidential pseudonymity since I only adopted it when I thought I was going to driveby Ken/dall Wal/ton. Of course, I'm dropping it for another pseudonym, but this one has an email address.


Posted by: Quarterican | Link to this comment | 02-14-08 11:24 PM
horizontal rule
217

the facts of human history and human nature

You just suggested it's purely a social construction that we a) prioritize human matters (children, the Church) over non-human (bears), and b) react more strongly to depictions of plausible phenomena (institutionalized torture) than to the obviously fanciful (talking non-human animals). This doesn't leave you much room for talk about human nature.


Posted by: Michael Vanderwheel, B.A. | Link to this comment | 02-14-08 11:41 PM
horizontal rule
218

Students had a much easier time believing there was a past in which slavery was common, brutal and wrong and people fought against it than they had believing that there was a past in which slavery was common, brutal and wrong and people fought for it.

The only way to address this is for people to learn about the socio-economic conditions in place at the time, the consequent lifestyle differences, and the extraordinary degree to which those things determine moral outlook. Slavery in the south was simply an obvious choice at the time. If we want to go meta, as it were, and normative, we'd need to say that we must establish a critical stance toward our own socioeconomic conditions, and a critical eye toward our ethical behavior, in order to measure our inherited moral outlooks against what we think might actually be right.

Yeah, that's the idea, isn't it? Unfortunately, if you take seriously the notion that we're necessarily blind to any deep wrongdoing, there's not a thing we can do. (This is what I objected to in Megan's peasant championship of the other day. Actually.)

Our current deep wrongdoing is widespread destruction of the less fortunate members of the planet, of course, so rewrite this:

Students had a much easier time believing there was a past in which slavery consumer capitalism was common, brutal and wrong and people fought against it than they had believing that there was a past in which slavery consumer capitalism was common, brutal and wrong and people fought for it.


Posted by: parsimon | Link to this comment | 02-14-08 11:48 PM
horizontal rule
219

Unfortunately, if you take seriously the notion that we're necessarily blind to any deep wrongdoing, there's not a thing we can do.

And yet, this may well be the case.


Posted by: teofilo | Link to this comment | 02-15-08 12:08 AM
horizontal rule
220

Teo, they say that conscience is to fill the necessary role.

We will probably always oppress rocks and mosquitos, but at least we can figure out a few things: the earth doesn't like what we're doing to her, and she's fighting back, and she's going to stamp us all out if we don't stop it. So now we have to decide whether to continue killing everything.


Posted by: parsimon | Link to this comment | 02-15-08 12:24 AM
horizontal rule
221

I'm not sure how one oppresses a rock, but I say the mosquitos have it coming.


Posted by: gswift | Link to this comment | 02-15-08 12:50 AM
horizontal rule
222

I didn't read this thread, but seeing "fucking" and "MRI" juxtaposed in 122 reminded me of this study and this study. Apologies if you've already seen them.


Posted by: Otto von Bisquick | Link to this comment | 02-15-08 12:57 AM
horizontal rule
223

Parsimon, your 218 reminds me of an exercise Michael Flynn sometimes engages in at conventions, and used to do a few times on the GEnie network: asking folks something to the effect of "What beliefs that are very important to you do you think most likely to seem ridiculous in 50 or 100 years?" It's very hard to get people to answer that question. They're happy to talk about the beliefs other people now hold that they expect to be as generally ridiculous-seeming as they are now to the speaker. The self-scrutiny is apparently much less entertaining.

It's a good exercise when given some serious engagement, though.


Posted by: Bruce Baugh | Link to this comment | 02-15-08 3:16 AM
horizontal rule
224

thinking it's OK to eat animals seems like the most obvious of my moral beliefs that might be destined for the ash-heap, especially if people are able to vat-grow cultured tissue. and it might turn out it's not OK to abort human fetuses in a world in which fetuses and embryos can be easily brought to maturity outside anyone's body. or rather, the coming to be of such a world might make it apparent that it has always been the case that abortion is wrong.

my husband has lots of interesting things to say about the gendler thing, including sci-fi considerations, but because the academic publishing system is idiotic and takes years to even reject things no one knows about them but me. and the people who read that other blog of his.


Posted by: alameida | Link to this comment | 02-15-08 4:38 AM
horizontal rule
225

I'm not sure if there are any core ethical precepts I hold that I'd imagine would seem ridiculous in 50 or 100 years. The vast majority of them have been around for much much longer already. I suppose it's possible that the relative secularism of the past two hundred years or so might get decisively reversed and my views on religion and society might seem stupid and outmoded.

In terms of science, however, I'd imagine that huge swathes of what I believe will seem stupid. I suspect a lot of people are fairly sure that we are on the cusp of some 'paradigm shift' and that physics may be due for a big shake-up.

On a personal level, I'm a pretty convinced skeptic about all things AI, transhumanist, 'singularitarian', etc. I don't imagine a future, in the next fifty years anyway, in which strong AI, pervasive nanotech, and so on, exist. In fact, I'm fairly sure that 50 years from now won't be more radically different from the present day than 1958 is in the other direction.

However, it's possible there might be some breakthrough tomorrow that renders that dumb.


Posted by: nattarGcM ttaM | Link to this comment | 02-15-08 5:11 AM
horizontal rule
226

165: I'm taking a break from slagging on analytic philosophy.

Seth Edelstein is actually Seth Edenbaum, who never comes here.

All Jews look alike to PGD though Edenbaum is only semi-Semitic. (PGD is a Nazi, you know).


Posted by: John Emerson | Link to this comment | 02-15-08 5:39 AM
horizontal rule
227

187, 196: As an influential legal scholar, Posner is one of the most influential analytic philosophers in the world, and as a leader of "Law and Economics" (along with the loony Gary Becker) he's a pretty influential economist too.

So of course economists and analytic philosophers both disavow him. Not their problem. (Some economists even try to disavow Becker, though the Nobel Prize makes that more difficult.)


Posted by: John Emerson | Link to this comment | 02-15-08 5:57 AM
horizontal rule
228

"In fact, I'm fairly sure that 50 years from now won't be more radically different from the present day than 1958 is in the other direction."

well, 1908 is radically different from 1958. If things will be as radically different in 2058 as 1958 was from 1908 it's gonna be a hell of a big number of changes.

However, there is also a question as to whether the rate of change has been increasing in the last few decades.


Posted by: bryan | Link to this comment | 02-15-08 6:39 AM
horizontal rule
229

If you don't think 2008 is radically different from 1958, it's only because you've lived through all or part of it and didn't notice the wood for the trees.


Posted by: OneFatEnglishman | Link to this comment | 02-15-08 6:53 AM
horizontal rule
230

Ah. But what precisely does "radically" mean?


Posted by: John Emerson | Link to this comment | 02-15-08 6:55 AM
horizontal rule
231

230 gets the point, I think.

In that while there have been massive changes since 1958, they don't seem to be the sort of changes that the strong-AI/singularitarian/transhumanist types point to. Perhaps 1908-1958 was much more that sort of shift.

It does, as Emerson says, depend a lot on what you mean by radical.


Posted by: nattarGcM ttaM | Link to this comment | 02-15-08 7:08 AM
horizontal rule
232

1959 ideas of the future. Much like?


Posted by: OneFatEnglishman | Link to this comment | 02-15-08 7:09 AM
horizontal rule
233

OK, they got online shopping right.


Posted by: OneFatEnglishman | Link to this comment | 02-15-08 7:12 AM
horizontal rule
234

The transhumanists and singularitists have loaded their ideas with a lot of nerd fantasies and ideological freight, too. Even if they have the magnitude right, they've probably missed some big details.

It reminds me of the Buckminster Fuller / Timothy Leary / Ken Kesey / Stewart Brand / Marshall McLuhan visionaries back in the sixties. They imagined a lot of today's world (e.g. the internet and satellite feeds) but not, for example, George Bush and Dick Cheney.


Posted by: John Emerson | Link to this comment | 02-15-08 7:13 AM
horizontal rule
235

re: 234

Yes, I remember reading a lot of my parents' books when I was a kid [Whole Earth Catalogue, that sort of thing] and it was all geodesic domes and hydroponic sex communes.


Posted by: nattarGcM ttaM | Link to this comment | 02-15-08 7:21 AM
horizontal rule
236

People weren't just predicting hydroponic dope, either. The plan was to ditch the archaic past and grow staples hydroponically without all that inefficient dirt.


Posted by: John Emerson | Link to this comment | 02-15-08 7:25 AM
horizontal rule
237

236: Which isn't so far from what happened, in a way; agriculture has moved over to a large-scale forced N-P-K cycle that doesn't rely much on dirt. People thought they'd be doing it themselves, small scale. Instead, a big corporation is doing it in the Central Valley.

Of course, it turned out not to be such a great idea, either.


Posted by: soup biscuit | Link to this comment | 02-15-08 7:33 AM
horizontal rule
238

The hydroponic dope, on the other hand, is excellent.


Posted by: soup biscuit | Link to this comment | 02-15-08 7:34 AM
horizontal rule
239

How strong do you like your AI, sir?

Emerson has it right with the "nerd fantasies and ideological freight". Actually, a lot of the tech associated with transhumanism is being developed for surgical prosthetics. It's turned out that remote-controlled machines are better at exploring space than people, but space is still being explored. I wouldn't bet against current machine learning research and the Japanese games industry delivering a lot of the stuff Minsky was banging on about twenty years ago, but it won't look like Asimovian robots.


Posted by: OneFatEnglishman | Link to this comment | 02-15-08 7:35 AM
horizontal rule
240

re: 239

Yeah, I have fairly stringent criteria for what counts as AI, I suspect.

Much more advanced machine learning and/or 'expert systems' I take for granted.


Posted by: nattarGcM ttaM | Link to this comment | 02-15-08 7:41 AM
horizontal rule
241

239: That's true. Some bits of `AI' turned out to be a lot harder than first thought (big surprise). But a lot of progress is being made. It is already pretty ubiquitous, but not in a particularly visible way. Most people working in the area aren't really pursuing Asimovian robots, at all.


Posted by: soup biscuit | Link to this comment | 02-15-08 7:51 AM
horizontal rule
242

240: The problem with this is a tendency (someone had a nice sound bite about it, but I forget it) for `AI' research to be reclassified as something else as soon as it works. I never liked the term that much, really, but I can't see a sensible line between machine learning and `AI' unless you want to disclaim everything below demonstrable consciousness. In which case, sure, maybe we won't get there any time soon (or at all?)


Posted by: soup biscuit | Link to this comment | 02-15-08 7:54 AM
horizontal rule
243

re: 242

In which case, sure, maybe we won't get there any time soon (or at all?)

Well, yes. As I said, my issue is with the singularitarian types who predict genuinely 'strong' AI. Which I don't see happening any time soon (if at all) either.

On the other hand, the idea that incremental but useful progress can be made in using computers to carry out the sorts of information processing tasks that have hitherto been solely done by humans, seems inevitable.

We are already a huge way down that line in all kinds of areas -- language processing, image processing, etc. If we are calling that 'AI' then I have no issue with the idea that future progress with 'AI' will happen.


Posted by: nattarGcM ttaM | Link to this comment | 02-15-08 7:57 AM
horizontal rule
244

I never liked the term that much, really, but I can't see a sensible line between machine learning and `AI' unless you want to disclaim everything below demonstrable consciousness.

I thought that was at least one conventional usage of AI, isn't it?


Posted by: LizardBreath | Link to this comment | 02-15-08 8:00 AM
horizontal rule
245

243/244 yes, this is a traditional line, with the problem that it is very difficult to define.

Personally, I don't find worrying about the distinction that interesting. As ttaM says, we already use a lot of machine learning day to day, and this will only get more pervasive.


Posted by: soup biscuit | Link to this comment | 02-15-08 8:06 AM
horizontal rule
246

225: I don't imagine a future, in the next fifty years anyway, in which strong AI, pervasive nanotech, and so on, exist.

Pervasive nanotech will almost certainly exist, in terms of a manufacturing sector based mostly on nanotechnology. The early stages of this are already happening, and would be my bet for the development with broadest impact on society since the original Industrial Revolution. Next fifteen, twenty years at the outside.

234: Even if they have the magnitude right, they've probably missed some big details.

The problem is, most of them are accustomed to thinking of the challenges as engineering problems. With "AI" for example, they don't think (or want to think, perhaps, because it spoils the fantasy) about any of the basic social problems related to artificial consciousness. For example, in order for "AI" to be "conscious" it has to be able to interpret, and therefore able to mis-interpret or argue with, input. This would remove one of the main features -- relative predictability -- that makes computers useful in the first place. So why would anyone want such a feature? Why would anyone invest energy in keeping such an "AI" running, except perhaps eccentricity?


Posted by: DS | Link to this comment | 02-15-08 8:15 AM
horizontal rule
247

For example, in order for "AI" to be "conscious" it has to be able to interpret, and therefore able to mis-interpret or argue with, input. This would remove one of the main features -- relative predictability -- that makes computers useful in the first place. So why would anyone want such a feature?

If you've got input that's useless without interpretation? Like, any problem that computers can't deal with now?


Posted by: LizardBreath | Link to this comment | 02-15-08 8:27 AM
horizontal rule
248

247: If you've got input that's useless without interpretation?

And why would having an "AI" carry out that kind of higher-level interpretation be more useful than simply having a human do it? It's not comparable to tasks like number-crunching in which the computer has an indisputable advantage.


Posted by: DS | Link to this comment | 02-15-08 8:34 AM
horizontal rule
249

||

Hello, my fellow Americans. I'm a bit timid and largely lurk, but wanted to thank you all for the Mo' Money thread a couple of days ago.

I just requested a pay raise this morning. I've been frustrated since I took this job six months ago. The responsibilities are significantly greater than expected and my position is an attack point from several directions. I found that I was the third person in this position within the last couple of years.

I'd thought the drill was to put yourself on the market and use any offers as leverage. But after the Mo Money thread, I realized that I was in a good position to just ask for the money. I'd taken a couple of personal days recently and they actually thought I was interviewing (I neither confirmed nor denied).

So I put in a request for a 20 to 25% raise. With a prepared justification argument. Supervisor (who I like) actually said 'Whew, I thought you were going to quit' and seems receptive. Wow. Never did anything like this before. Feeling all empowered and shit.

I'm back to lurking now. Like I said, not one for conversation. Just wanted to share.

|>


Posted by: Calvin Coolidge | Link to this comment | 02-15-08 8:36 AM
horizontal rule
250

re: 248

Cheapness? I can think of lots of situations where corporations would want that.

Call-centres? Why outsource to India when you can use the Turing-o-tron 9000 Callcentrematic?


Posted by: nattarGcM ttaM | Link to this comment | 02-15-08 8:37 AM
horizontal rule
251

248: Without actually making one, we don't know that they wouldn't be much better at it than we are. Which contains its own problems [1]. The issue of low-cost 24/7 labor rears its shiny metal head, too.


[1] see a few thousand dystopic SF stories


Posted by: soup biscuit | Link to this comment | 02-15-08 8:40 AM
horizontal rule
252

That's cool, calvin.


Posted by: soup biscuit | Link to this comment | 02-15-08 8:41 AM
horizontal rule
253

248: Because I can see advantages to having something with human-class judgment and also the sort of brute-force information retrieval and number-crunching abilities that computers have. Think, say, evidence-based medicine. You need human-class judgment to turn a patient into a comprehensible set of signs and symptoms for diagnosis. But the human exercising that judgment tends to rely on the medical knowledge in her head, rather than all the medical knowledge in the world, because people are like that.

A 'doctor' with human-class judgment, but full, instant access to all research that's ever been published, would probably be an awfully good doctor.

And I'd bet you could say the same sorts of things about other domains.


Posted by: LizardBreath | Link to this comment | 02-15-08 8:41 AM
horizontal rule
254

And Woohoo, Silent Cal!


Posted by: LizardBreath | Link to this comment | 02-15-08 8:42 AM
horizontal rule
255

A 'doctor' with human-class judgment, but full, instant access to all research that's ever been published, would probably be an awfully good doctor.

What the tech people often fail to consider is the social/political aspects of uptake. We are already in a situation now where some technologies that objectively improve medical performance are rejected by MDs, others by lawyers. Eventually they will almost certainly be taken up, but it will probably require a generational change. Medicine is bad that way (as are other areas).


Posted by: soup biscuit | Link to this comment | 02-15-08 8:47 AM
horizontal rule
256

251: "Better" is a function of social value. It's very easy to imagine AI that makes arguably smarter interpretive decisions than humans. That doesn't mean those decisions will necessarily be useful to humans or acceptable to human social judgment.

And of course this is the stuff of dystopic SF, HAL 9000 etc etc, but it's more banal than that. If you design something genuinely conscious, with "human-level" or better interpretive capability, and task it with making medical decisions or running a call centre, what's to stop it from deciding it would rather paint, or compose rock operas? Who's going to tell it that isn't the smarter decision? If it's not constrained by human fleshly concerns, it won't necessarily be constrained by human concerns about duty, social ostracism etc. either.

If you limit it so it can't disagree with you about such things and such problems don't arise, then you have to limit its interpretive flexibility, and what you wind up with is a thin simulacrum of consciousness, thereby defeating the whole purpose.

253: Because I can see advantages to having something with human-class judgment and also the sort of brute-force information retrieval and number-crunching abilities that computers have.

How about a human, with a computer, that has a database on it?


Posted by: DS | Link to this comment | 02-15-08 8:51 AM
horizontal rule
257

249- Gosh darn it. I'm so sorry to those in this wonderfully international group who may have been slighted in my greeting. Such an isolationist time we live in, to which I briefly contributed. Very depressing. Carry on, fine folks.


Posted by: Calvin Coolidge | Link to this comment | 02-15-08 8:53 AM
horizontal rule
258

253: It's a speed and integration problem. As a lawyer, I'm basically what you describe -- a human, attached to a computer, with a database in it. I'd be a better lawyer if I read faster and had more perfect retention, so I could make connections flawlessly between things I'd read at different times -- I have to rely on a whole lot of memory to get my work done at all. If you could build something with my conscious judgment, but that could errorlessly search the whole corpus of written law every time it had a passing thought of something that would be useful, it would kick the living shit out of me as a lawyer.


Posted by: LizardBreath | Link to this comment | 02-15-08 8:56 AM
horizontal rule
259

If you design something genuinely conscious, with "human-level" or better interpretive capability, and task it to making medical decisions or running a call centre, what's to stop it from deciding it would rather paint, or compose rock operas?

You build it so that it wants to make medical decisions more than anything else in the world. Motivations aren't arrived at logically - they're wired in.
In one of Richard Morgan's books, there's an AI that runs a hotel, and someone remarks that it's "hard wired to want customers the way people want sex". There's your answer.

Is it moral to create a conscious being and manipulate its motivations like that? No idea.


Posted by: ajay | Link to this comment | 02-15-08 8:58 AM
horizontal rule
260

256: Yes, I didn't say it would necessarily be better, just that we couldn't rule that out. Without actually having such a thing, it's pretty speculative to say `I can't see how that would be helpful/better/whatever'.

On the latter point though, LB is right. A human with a huge database doesn't do such a good job at some things because the human can't handle the information density very well. Maybe, however, a human with a big database and an AI-ish bit of code to help navigate it... but as I noted there are already uptake problems with simpler stuff.


Posted by: soup biscuit | Link to this comment | 02-15-08 9:00 AM
horizontal rule
261

Or would it be practical? One precondition for making AI useful is that you'd be able to build in motivations -- if strong AI is possible, it's also perfectly possible that we'd be able to figure out how to make something conscious, but not to design it very specifically. The "Dial F for Frankenstein" possibility -- you make a sufficiently big, complex, whatever computer and it 'wakes up', but that doesn't mean it's taking orders.


Posted by: LizardBreath | Link to this comment | 02-15-08 9:01 AM
horizontal rule
262

258 is exactly right. Out of curiosity, LB ... my impression is that law is much more open to using related tech than medicine is; does that match what you see? On the other hand, the tools available are much weaker, I think.


Posted by: soup biscuit | Link to this comment | 02-15-08 9:02 AM
horizontal rule
263

261 to 259.


Posted by: LizardBreath | Link to this comment | 02-15-08 9:02 AM
horizontal rule
264

Congratulations, Calvin!

As for "the thing with the cup," I'm one-hundred percent in agreement with PerfectlyGoddamnDelightful that the most appropriate response is confusion and then wrath at the author for being purposively obfuscatory.


Posted by: Jackmormon | Link to this comment | 02-15-08 9:04 AM
horizontal rule
265

258: Integration is one thing, and I can see a genuine possibility for hyper-advanced database interfaces that respond to your thoughts and can build those connections for you. (Chalk the Prosthetic Database Implant up as some Cool SF Shit I believe could actually happen.) But isn't your ability to get distracted and drift off on tangents an integral part of "your conscious judgment"? Aren't those elements of your consciousness just as arguably features as defects? Would the interpretive judgment of a computer system that eliminated them be actually more useful in a social sense? I doubt it.


Posted by: DS | Link to this comment | 02-15-08 9:06 AM
horizontal rule
266

259: You build it so that it wants to make medical decisions more than anything else in the world.

And control its definition of "medical decision" how? If it can genuinely interpret, it can engage in metaphor.


Posted by: DS | Link to this comment | 02-15-08 9:07 AM
horizontal rule
267

in order for "AI" to be "conscious" it has to be able to interpret, and therefore able to mis-interpret or argue with, input.

What a great feature. Your computer makes an interpretation and draws a conclusion. Then when challenged on its assumptions, it gets defensive and starts to yell.


Posted by: heebie-geebie | Link to this comment | 02-15-08 9:08 AM
horizontal rule
268

262: Yeah, I think so. The situations aren't parallel, exactly -- I think there's a whole lot more of medicine that could be substituted with weak AI, expert systems and the like, than there is for law, so expert systems aren't threatening to lawyers the same way they are to doctors. To replace persuasive writing and speech, they're going to need consciousness, not a database.


Posted by: LizardBreath | Link to this comment | 02-15-08 9:09 AM
horizontal rule
269

265: I really don't know why you are assuming these are uniquely human features/defects. I think we have to separate two ideas: One is really-shiny-tech, basically an extrapolation of what we can do now, + integration etc. The other is a putative machine consciousness. Maybe it's not realizable. If it is though, we have absolutely no idea what it would be like, and there is no reason to assume it would perform like a human does. No reason to assume it wouldn't either.

The former we can get some feel for where it's going. The latter is pretty much an unknown.


Posted by: soup biscuit | Link to this comment | 02-15-08 9:10 AM
horizontal rule
270

236: Which isn't so far from what happened in a way, agriculture has moved over to a large scale forced N-P-K cycle that doesn't rely much on dirt.

That was already pretty much there in 1958, though. And dirt is still needed, otherwise the tropical rainforests would all be cropped by now (their soil is easily exhausted).


Posted by: John Emerson | Link to this comment | 02-15-08 9:14 AM
horizontal rule
271

260: Without actually having such a thing, it's pretty speculative to say `I can't see how that would be helpful/better/whatever'

Watching the evolution of AI to date, some things aren't that speculative, really. For example, the most fruitful directions in AI research have been moving away from the idea of replicating human consciousness. To say that human societies are unlikely to place much trust in the interpretive judgment of nonhuman consciousness, which is the likeliest form of AI "consciousness" to be achieved in the coming century at least, is not to go out on very much of a limb.


Posted by: DS | Link to this comment | 02-15-08 9:16 AM
horizontal rule
272

268: The other thing is, the law is not a system that is as amenable to scientific study. Given a big enough and accurate enough database containing only symptoms and outcomes, we could probably do some pretty amazing diagnostic things using simple statistics, particularly for obscure presentations. This is not really possible for a human to do, but wouldn't be particularly `intelligent'. Building such a DB is a different problem, of course.

I don't see an equivalent db for law. One thing you could do is a classification of case law that includes all connections, both presented and even merely researched. So if you were looking for something on a new case, as soon as you established connections it could estimate relevance across all other cases. None of this relies on the system knowing anything about the cases, just what *other* humans have done. Of course, that's a weakness too.
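For what it's worth, the "simple statistics over a symptoms-and-outcomes database" idea is easy to sketch. This is a toy illustration only -- the records, symptoms, and diagnoses below are entirely hypothetical, and a real clinical database would raise all the accuracy and uptake problems discussed above. It just ranks candidate outcomes with naive-Bayes-style counting, no "intelligence" required:

```python
from collections import Counter, defaultdict

def rank_diagnoses(records, symptoms):
    """Rank outcomes by P(outcome) * product of smoothed P(symptom | outcome).

    records: list of (set_of_symptoms, outcome) pairs (hypothetical data).
    symptoms: the set of symptoms observed in the new presentation.
    """
    outcome_counts = Counter(outcome for _, outcome in records)
    # symptom_counts[outcome][symptom] = how often that pair co-occurred
    symptom_counts = defaultdict(Counter)
    for syms, outcome in records:
        for s in syms:
            symptom_counts[outcome][s] += 1

    total = len(records)
    scores = {}
    for outcome, n in outcome_counts.items():
        score = n / total  # base rate of the outcome
        for s in symptoms:
            # Laplace smoothing so an unseen symptom doesn't zero out a diagnosis
            score *= (symptom_counts[outcome][s] + 1) / (n + 2)
        scores[outcome] = score
    return sorted(scores, key=scores.get, reverse=True)

# Purely made-up toy records: (observed symptoms, eventual diagnosis)
records = [
    ({"fever", "cough"}, "flu"),
    ({"fever", "rash"}, "measles"),
    ({"fever", "cough", "aches"}, "flu"),
    ({"rash", "itching"}, "allergy"),
]
print(rank_diagnoses(records, {"fever", "cough"}))  # 'flu' ranks first
```

With enough accurate records the same counting trick surfaces obscure presentations a single doctor would never have seen, which is the point being made: useful, but nothing like consciousness.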


Posted by: soup biscuit | Link to this comment | 02-15-08 9:16 AM
horizontal rule
273

265: The interesting thing is that the Prosthetic Database Implant, if it shows up, is going to have a huge effect on who's good at stuff. My strong point, as a lawyer, is really a matter of memory. I've read enough law and have integrated it well enough that on most legal issues I've come near, while I couldn't give you the case names without doing the research, I've got a pretty reliable sense as to how they're going to come out, which makes the actual research and writing a smoother proposition. If everyone can run and read ten searches in their head during the course of a conversation, what I've got isn't much of a skill anymore -- something different, about how well you play with your implant, will be.


Posted by: LizardBreath | Link to this comment | 02-15-08 9:16 AM
horizontal rule
274

269: I really don't know why you are assuming these are uniquely human features/defects.

They don't have to be. But the idea of eliminating these features / defects, as LB illustrates, is one of the major selling points of machine "consciousness." My point is that if it proves to be achievable, the human version will likely still be more socially useful.


Posted by: DS | Link to this comment | 02-15-08 9:17 AM
horizontal rule
275

273: Totally. Come that day, being a pothead will no longer be a disadvantage. And then, dear LB, the world will be mine. All mine.


Posted by: DS | Link to this comment | 02-15-08 9:18 AM
horizontal rule
276

271: That doesn't help you much because most directions of related research are moving away from the idea of pursuing consciousness at all. The nut proved a lot harder to crack than initially thought, and people are mostly chasing smaller, probably constituent parts, which are also proving very hard.

There is some woolly, handwavy stuff that doesn't seem to have got very far at all in `AI'. There are some related areas that have made very concrete progress over the last 20-30 years, but they are mostly unconcerned with issues of consciousness at all.


Posted by: soup biscuit | Link to this comment | 02-15-08 9:22 AM
horizontal rule
277

It's hard to tell how it will play out, though. The kind of memory I've got makes Google a huge advantage for me -- I don't actually know much about anything, but I've got enough of a hook into most topics that I can get to a search that will give me an answer I want. Google lets me talk on the level with people who really do have expert knowledge, and increases the apparent gap between someone half-educated like me, and someone who doesn't have my mile-wide-millimeter-deep erudition, so has trouble looking things up.

Is the PDI going to work for me like Google? Can't tell till it shows up.


Posted by: LizardBreath | Link to this comment | 02-15-08 9:23 AM
horizontal rule
278

My point is that if it proves to be achievable, the human version will likely still be more socially useful.

I see that's what you're saying, but I don't buy it. Or rather, I think it's either tautological (within human society, humans socialize best) or unknown *and* unpredictable.


Posted by: soup biscuit | Link to this comment | 02-15-08 9:24 AM
horizontal rule
279

276: That doesn't help you much because most directions of related research are moving away from the idea of pursuing consciousness at all.

Well, for the sake of argument I'm granting that the smaller, harder nuts will be cracked in time for "consciousness" to become an issue at all. Of course that's not a foregone conclusion.


Posted by: DS | Link to this comment | 02-15-08 9:24 AM
horizontal rule
280

277: What I suspect is more likely is that emphasis on information management will trump specialization on particular types of information.


Posted by: soup biscuit | Link to this comment | 02-15-08 9:25 AM
horizontal rule
281

278: I think "within human society, humans socialize best" is an empirically observable fact, not an abstract tautology. A survey of how non-humans tend to fare when encountering human society demonstrates this pretty amply.


Posted by: DS | Link to this comment | 02-15-08 9:27 AM
horizontal rule
282

Well, for the sake of argument I'm granting that the smaller, harder nuts will be cracked in time for "consciousness" to become an issue at all.

Ok, but to the degree that this is true, I think it's misleading to try to extrapolate from what research in this area is doing to what would be attempted then. After all, the lowest levels of our own vision system are best understood reductively, but that doesn't tell you much at all about `human vision'. There is no direct path that way from understanding the signal processing done in the retina to understanding the HVS.


Posted by: soup biscuit | Link to this comment | 02-15-08 9:28 AM
horizontal rule
283

282: Fair enough.


Posted by: DS | Link to this comment | 02-15-08 9:30 AM
horizontal rule
284

281: It's not observable in the presence of another human-level (or higher) consciousness.

But I probably worded that badly, anyway. I see no reason to, as you have, assume that any constructed intelligence would not have these same properties. For all we know they are necessary. And I don't see the direction of much of current research to be useful to extrapolate from.


Posted by: soup biscuit | Link to this comment | 02-15-08 9:30 AM
horizontal rule
285

284/283 crossed, I think we're understanding each other now.


Posted by: soup biscuit | Link to this comment | 02-15-08 9:31 AM
horizontal rule
286

A survey of how non-humans tend to fare when encountering human society demonstrates this pretty amply.

This is silly, isn't it? What non-humans with human-level intelligence and communication skills were you thinking of?


Posted by: LizardBreath | Link to this comment | 02-15-08 9:31 AM
horizontal rule
287

LB, that was an enormous "if". The instant search of a more-than-human memory is already here (though a lot of effort would have to be put into feedback mechanisms making sure that the AI medical memory in use is accurate and up to date.)

But the "human-class judgment" part -- the "if" -- is more or less indescribable. There's no consensus as to what good human-class judgment is. There are a lot of very bright flesh-and-blood humans whose judgment is horrible because of blind spots and obsessions.

I can see a point arriving when AI can reliably match human-class bad judgment. Reliably matching human-class good judgment would be much harder to do, because it would be hard to recognize success, since there isn't a lot of consensus on good judgment. (Though I suppose that once you've reached the "average" or "not bad" level, you've succeeded.)

If I understand correctly, one intended effect of the updated evidence-based database is to take certain kinds of medical questions out of the area of judgment and make them automatic.


Posted by: John Emerson | Link to this comment | 02-15-08 9:33 AM
horizontal rule
288

I have a friend who works in AI, and he has a robot that is essentially a giant proto-eyeball. And the point of the research is to get the robot to see, in the track-an-object sense. This has turned out to be very, very hard, even before we consider whether the robot knows that it's seeing.

Thus I conclude that we will not have conscious robots within the next 50 years.


Posted by: Cala | Link to this comment | 02-15-08 9:34 AM
horizontal rule
289

we will not have conscious robots within the next 50 years.

You mean I won't be getting my Adrienne Barbeaubot? That sucks.


Posted by: apostropher | Link to this comment | 02-15-08 9:38 AM
horizontal rule
290

288: There are some good reasons to consider vision `AI-complete' (meaning that if you have `real' vision, you have `real' consciousness/AI). It's that hard a problem.

At lower levels though, like what your friend is doing, there is lots of neat stuff on the go. Object tracking and recognition have made some real leaps, but not many in the `mimic what humans do' directions.


Posted by: soup biscuit | Link to this comment | 02-15-08 9:38 AM
horizontal rule
291

At some point we will get back to the problem of what "consciousness" or "the mind" is. My own opinion would be that to the extent that it performs like a mind, it is one.

More interesting to me is whether AI minds are minds of persons. A lot of AI energy seems to come from very specialized people who don't especially like or understand most other people and are trying to construct artificial minds without the messy traits they don't like in others. In fact, there seems to have been a centuries-long trend toward training flesh-and-blood people to be more properly abstract and mental and rational and scientific and less fleshly and emotional and humorous, which ended up producing scientists whose goal was to produce minds even less human than themselves.


Posted by: John Emerson | Link to this comment | 02-15-08 9:40 AM
horizontal rule
292

287: Oh, I'm not expecting to get to AI consciousness soon, or really ever, just talking about what it would be like if it did show up. But you're right -- someone/thing with perfect information retrieval doesn't need the same quality of judgment, because the answer to a whole lot more questions becomes unambiguous.


Posted by: LizardBreath | Link to this comment | 02-15-08 9:41 AM
horizontal rule
293

286: What non-humans with human-level intelligence and communication skills were you thinking of?

Far as we can tell, we pretty much wiped them out when we came in contact with them about 28000 years ago.


Posted by: DS | Link to this comment | 02-15-08 9:41 AM
horizontal rule
294

we will not have conscious robots within the next 50 years.

Mitt Romney. Al Gore. I refute you thus.


Posted by: Gonerill | Link to this comment | 02-15-08 9:41 AM
horizontal rule
295

Apo, the artificial bimbo is already here for you. Just rinse it out after each use.


Posted by: John Emerson | Link to this comment | 02-15-08 9:42 AM
horizontal rule
296

293: Okay, but if you're talking about Neanderthals, you have no idea what happened to them 'societally'.


Posted by: LizardBreath | Link to this comment | 02-15-08 9:42 AM
horizontal rule
297

296: We have enough DNA evidence to know that they didn't vanish through interbreeding, which at least minimally tells us they were not socialized with anatomically modern humans. The exact mechanics are otherwise obscure, that's true.


Posted by: DS | Link to this comment | 02-15-08 9:44 AM
horizontal rule
298

(Although what human populations have subsequently done to each other probably provides some clues.)


Posted by: DS | Link to this comment | 02-15-08 9:45 AM
horizontal rule
299

290: I'd have to agree; if you get 'vision', you've probably got 'mind.' But it's just really, really, hard. Like, 'the robot eye tracked an object without crashing the computer, let us celebrate and dance with joy' kind of hard.


Posted by: Cala | Link to this comment | 02-15-08 9:45 AM
horizontal rule
300

289: Barbeau is now a Van Zandt of the Springsteen Van Zandt family.


Posted by: John Emerson | Link to this comment | 02-15-08 9:47 AM
horizontal rule
301

Argh I wish I had time to keep up with this thread. To sum up: soup, Cala: well, sort of. The rest of you goofballs: goofballs.


Posted by: Sifu Tweety | Link to this comment | 02-15-08 9:47 AM
horizontal rule
302

297: Well, no. It says that they weren't interfertile, presumably for social rather than genetic reasons. That doesn't mean there was no social contact between the two groups, and it hasn't got thing one to do with possible modern-day reactions to machine intelligences.


Posted by: LizardBreath | Link to this comment | 02-15-08 9:48 AM
horizontal rule
303

What non-humans with human-level intelligence and communication skills were you thinking of?

The Thetans are still with us, and their clammy fate was not bitter. Each of us bears a real Thetan within us, and thus virtual Thetans control the internet.


Posted by: John Emerson | Link to this comment | 02-15-08 9:50 AM
horizontal rule
304

299: Yup, I'm very, very familiar with this part of it.

301: well, we can talk about it some other time if you'd like. I don't actually work in AI, but what I do work in overlaps in places. So while I have a good feel for what's going on there, there are areas of it I don't pay much attention to. I think I've got a fairly good handle on the broad strokes though (but of course, happy to learn new stuff!)


Posted by: soup biscuit | Link to this comment | 02-15-08 9:52 AM
horizontal rule
305

We did joke, however, that his robot was beginning to show traditional sci-fi signs of consciousness, viz., his robot inexplicably stops working whenever I stop by the lab, thus displaying the usual troubles robots cause around women.


Posted by: Cala | Link to this comment | 02-15-08 9:55 AM
horizontal rule
306

291 is good.

301: Come back, Sifu! You can't just leave us hanging like that!

302: It says that they weren't interfertile, presumably for social rather than genetic reasons.

By "not socialized with anatomically modern humans" I mean "not integrated into modern human societies." Of course there can be all sorts of "social" contact between otherwise relatively discrete societies. Who knows but that they probably traded, raided, mooned each other, made crude jokes at one another's expense and so on. But we're talking about integration of nonhuman with human, which the evidence we have to date suggests happens on terms of subjugation or does not happen at all.

The relevance, of course, is to the question of whether "humans socialize best within human societies." And yes, there is an extremely vast abundance of evidence that this is true, to which the objection that it's a "tautology" is not convincing. Ergo, the idea that nonhuman AI consciousness (and even if designed to replicate the human mind, it would still be in important senses nonhuman) would be socially valuable to humans to the extent of, say, being trusted to doctor them or dispose of their relatives' estates, is dubious.


Posted by: DS | Link to this comment | 02-15-08 10:00 AM
horizontal rule
307

290, 299, 304: Of course, there's mind and mind. A rat can see, so while there's an argument that if you've solved vision you've solved mind on some level, there could be as yet unguessed at problems between the truly awesomely challenging "We've created a hamster-level machine intelligence" and "We've created a person-level machine intelligence."

(I have no idea what those problems might be, but you see what I mean.)


Posted by: LizardBreath | Link to this comment | 02-15-08 10:01 AM
horizontal rule
308

304: oh, I think you do, I just think you may be giving short shrift a little bit to some of the possible avenues for making inroads into understanding and modelling pieces of consciousness -- probably nothing you don't know about, just might be a matter of relative optimism.

I also think it's sort of misleading to conflate vision and consciousness, or even really to tall about either of those things as single problems. You could definitely achieve a lot of the functionality of vision (see e.g. Face detection) without addressing anything about self-reflective consciousness, and you can address elements of (especially social) consciousness without having the sensory pieces there.


Posted by: Sifu Tweety | Link to this comment | 02-15-08 10:04 AM
horizontal rule
309

306: The relevance, of course, is to the question of whether "humans socialize best within human societies." And yes, there is an extremely vast abundance of evidence that this is true, to which the objection that it's a "tautology" is not convincing.

Yes, it is convincing. If all you're saying is that nothing that can't pass for human will be absorbed seamlessly into human society, no shit, Sherlock. (And I'm still not getting your 'vast abundance of evidence'. I see one data point (which I hadn't actually known had been established), that Neanderthal DNA hasn't survived in modern humans. Is there another relevant data point?)

But how that gets you to machine intelligences wouldn't be useful or accepted in performing tasks, I don't see at all. They might not be, and I would guess are unlikely to ever exist, but the fact that no one's going to be having sex with their artificially intelligent doctor-system doesn't mean they won't ask it to diagnose their rashes, if it's better and cheaper than a human doctor.


Posted by: LizardBreath | Link to this comment | 02-15-08 10:07 AM
horizontal rule
310

"tall", "Face": this is what happens when I try to be substantive typing on my phone.


Posted by: Sifu Tweety | Link to this comment | 02-15-08 10:08 AM
horizontal rule
311

308: I think we're on the same page. I was trying to be a bit careful by using scare quotes: `real' vision being something that does everything we think of as vision... but individual functionality is a very different story. It's difficult to be broad and precise and also not blather on at lengths annoying to the unfoggedatariat.

We're not actually very good at face detection yet, but we're getting pretty far on constrained versions of it... very much falling under my latter `progress made in directions not mimicking the way humans do it'.

I suspect relative optimism does come into play. There are lots of interesting avenues, I agree. The ones that have made the most measurable progress by far are the more reductionist, less `AI' areas. But this is probably as much because they are attacking simpler problems and have more to leverage.


Posted by: soup biscuit | Link to this comment | 02-15-08 10:13 AM
horizontal rule
312

309: Is there another relevant data point?

The experience of the entire remainder of the plant and animal kingdoms in encountering and interacting with human societies, yes. (Of course, the idea that this is relevant to any discussion of "consciousness" is itself not widely accepted, since the idea that animals have anything worth describing as mind at all is a relative novelty in most modern cultural contexts. Which itself is an interesting sort of a data point.)

But how that gets you to machine intelligences wouldn't be useful or accepted in performing tasks, I don't see at all.

Well, let's unpack. If something that can't pass for human will not be absorbed "seamlessly" into human society, as you seem willing to stipulate (I think it's understatement but will accept it for the sake of argument), what does that mean concretely? On the level of day-to-day social interactions, which is how the actual influence of any form of intelligence is largely decided?


Posted by: DS | Link to this comment | 02-15-08 10:16 AM
horizontal rule
313

I think that the evidence about the Neanderthals isn't quite as unambiguous as DS thinks. IIRC the question is still in the air.

305: Perhaps if they picked that up and ran with it, they could devise a horny visual system able to recognize T&A in a tenth of a second as far away as 60 feet or so. Maybe with a little virtual erection and masturbation module.

Once they had the basic (male) humanity built in, they could start adding on other forms of visual perception by treating visual space as a collection of transformations of T&A.


Posted by: John Emerson | Link to this comment | 02-15-08 10:22 AM
horizontal rule
314

312: The experience of the entire remainder of the plant and animal kingdoms in encountering and interacting with human societies, yes.

I would have thought it was uncontroversial that plants and animals generally differ strongly from humans in their mental capacities. Aren't we talking about machine intelligences that would more closely approximate human intelligence?

On the level of day-to-day social interactions, which is how the actual influence of any form of intelligence is largely decided?

Damned if I know, but damned if you do either. If some form of human-level machine consciousness is developed, unlikely as it seems, we'll see what influence it has when it gets here.

On the example I've mostly been kicking around, medicine, I can't think offhand why AIs wouldn't be accepted. I don't socialize with my doctor now: I go in, say "[X] hurts, or looks funny" and they grunt and write a script, which generally makes me feel better. I'd be fine if the grunt were replaced with a beep, so long as I felt better at the end of it.


Posted by: LizardBreath | Link to this comment | 02-15-08 10:23 AM
horizontal rule
315

John, your 291 reminded me of a bit I once read about the history of computer animation. At some point animators started seriously asking themselves why everything they rendered came out looking plastic. It turns out that plastic reflects light with white highlights, while most substances reflect it with highlights in their respective colors. It's just that the animators worked in environments where pretty much everything is made of plastic. Took a while to nail that one down. I sometimes wonder what equivalent oversights there might be in early synthetic consciousness.

My own guess for views I hold most likely to seem ridiculous in 50-100 years is my views on what constitutes humanity in various senses, including the moral - who has what kind of claim on me (and everyone else) because of their humanity. I think it quite possible that improved tools for neurological analysis and communication assistance will lead to a general recognition of the human-level sentience of at least one other species, leading to a (lumpy and uneven) shift in accepted usages of "humanity" beyond homo sapiens. But this won't be the utopia of animal rights activists. An expanded sense of what's "human" overall will go with some more categories of "humans who have claims on us" within it that may well end up constricting some rights I think of as universal. It'll be weird and, to use a word repeatedly, lumpy.

At least one major philosophical category that's important to me - feminism? socialism? - won't exist at all as a category in a couple generations, and students of that era will be baffled by the strange connections we draw between what are to them obviously very different things, much as most of us have to do now with the progressive movement of the early 20th century and the role of prohibition and eugenics, and so on.

Erudite trolling will be alive and well, and so will cock jokes. Better understanding of biology and the application of weak nanotechnology to the manufacture of tailored drugs will enable a much wider range of cock jokes.


Posted by: Bruce Baugh | Link to this comment | 02-15-08 10:24 AM
horizontal rule
316

My understanding of the DNA evidence from Neanderthals is that it seems unlikely that they interbred with us. But there is also a bit of genetic evidence out there suggesting otherwise.


Posted by: nattarGcM ttaM | Link to this comment | 02-15-08 10:29 AM
horizontal rule
317

There are arguments that a human-level consciousness would have to have desires (Antonio Damasio in several books), would have to have a body (Francisco Varela and others in "The Embodied Mind") and would have to be acculturated and participate as a person in interactions with other people (no specific citation).

For example, human sensation isn't just information processing, it is learned and developed through physical interactions of embodied humans manipulating physical reality, moving about in it, and learning to recognize the way each part of reality impacts the organism in terms of its needs.

However, a lot of AI seems to aim for non-human consciousnesses purged of human weakness.


Posted by: John Emerson | Link to this comment | 02-15-08 10:38 AM
horizontal rule
318

317 has mostly beaten me to the punch (with more specific examples) in what I would have said to 314.2. And sure, the differences in capacity are part of the story, but an equally significant part of the story is reactions to difference. For that matter, the slowness with which humans are often wont to regard other human populations as being fully human is another data point.

Of course the outcomes of all this are speculative. I'm just taking exception to the idea that we have no basis at all from which to speculate; I think we have some pretty specific clues from which to speculate.

I'm not sure the medical profession is the example you're looking for. The "doctor as pill dispenser" mode of interaction is a historical anomaly that has generated serious problems of its own (such as antibiotic-resistant microorganisms, for instance), and the slowness of the medical profession in transitioning away from it has generated a multibillion dollar market in alternative and "holistic" medicine. There's a reason why some female patients tend to prefer female doctors, and it's not because the social aspect of medicine is insignificant.

As to why a lot of AI seems to aim for machines purged of human weakness, the most obvious answer is because computers are most useful to human societies as tools that don't emulate human "weaknesses," or the quirkier and less predictable facets of human consciousness. The closer machines get to reproducing human consciousness, the likelier it is that they'll start reproducing those quirks, which makes them less useful in the role of tool, as we talked about a bit upthread.


Posted by: DS | Link to this comment | 02-15-08 10:53 AM
horizontal rule
319

This is unfair of me, because I dropped the discussion last night to go out and get a drink in order to scare-quote celebrate close scare-quote Valentine's Day. But, "If that's a straight question and it means "Can statements of ethical judgment be assigned truth-values," then no, not in the same sense in which nonmoral statements can (in theory) be."

Later, "Students had a much easier time believing there was a past in which consumer capitalism was common, brutal and wrong and people fought against it than they had believing that there was a past in which consumer capitalism was common, brutal and wrong and people fought for it."

What's the status of your claim that consumer capitalism was wrong? I'd want to say that it's true (or false if the evidence points that way). I know you don't want to say it's true, but then I have no idea what you do want to say about it. It accurately expresses your emotions towards consumer capitalism?


Posted by: washerdreyer | Link to this comment | 02-15-08 11:02 AM
horizontal rule
320

For that matter, the slowness with which humans are often wont to regard other human populations as being fully human is another data point.

But a data point for what? I'm really not clear what you're arguing here. Way above, it looked as if you were arguing that human-intelligence-level AIs (if they existed) wouldn't be any use -- a sufficiently computer-aided person could do anything an AI could do. And that struck me as a really unlikely thing to say confidently, for the reasons I gave above.

Now you seem to be arguing that socially integrating AIs would be difficult, or people wouldn't like them, or something like that (I'm being vague not to dismiss your argument, but to indicate that I'm not clear on exactly what you're driving at), and pointing to inter-ethnic racism and other conflicts as support. And I'm sure you're right that there would be huge social problems around AIs, if they were ever developed. But I can't see that meaning they wouldn't be used or useful, which I at first thought you were arguing.

I don't know if I disagree with you or about what -- I'm arguing mostly because you're sounding very certain about very speculative stuff.


Posted by: LizardBreath | Link to this comment | 02-15-08 11:17 AM
horizontal rule
321

There have to be more categories than "facts" and "emotions". An emotion is by definition private, unshared, and does not entail obligations on others. An ethical judgment may have both an emotional and a factual component, but it's not a fact and it's not an emotion, because by definition it entails an obligation.

The positivists asserted that everything that was not a fact was an emotion, and emotions distort the perception of reality. They were rather like nihilists that way, and both of them are a sort of apotheosis of a kind of value-free scientific point of view (i.e., the belief that values are bad.)


Posted by: John Emerson | Link to this comment | 02-15-08 11:22 AM
horizontal rule
322

Sounding more certain than I am is my speciality. If it helps, feel free to preface all my posts with a disclaimer saying "it is my admittedly imperfect and half-assed speculation that..."

My contention is that:

1) The usefulness of tools is a function of their social utility, not some absolute criterion of efficiency;
2) Computers have amply demonstrated their social usefulness as tools;
3) Owing to the complications that come with being able to engage in interpretation, metaphorical extension etc., computers with something like consciousness would be inherently less useful as tools, and competing for a social niche that existing humans already occupy.


Posted by: DS | Link to this comment | 02-15-08 11:22 AM
horizontal rule
323

that existing humans already occupy.

I should add, "and could conceivably continue to occupy effectively enough that having computers do the work wouldn't be much of an advantage, in comparison with the social obstacles."


Posted by: DS | Link to this comment | 02-15-08 11:25 AM
horizontal rule
324

I'm sure an actual ethicist is going to have something to say here, but there are non-cognitivist metaethical views that aren't simple emotivism and varieties of cognitivism that aren't some simple form of moral realism.


Posted by: nattarGcM ttaM | Link to this comment | 02-15-08 11:26 AM
horizontal rule
325

Yeah, and I can't buy into 3 at all. There are a lot of tasks where the combination of human-class judgment and brute-force information retrieval and processing power would seem to me to be very useful, not strongly dependent on the sort of social interaction that you're seeing as inherently problematic. Again, my judgment with perfect recall of all written law would make a kick-ass lawyer, and that's a very social task. When you get into geekier stuff -- any kind of technical design, say -- I'd think an AI of the sort we're thinking of would be so useful that people would find ways to work around the social problems.


Posted by: LizardBreath | Link to this comment | 02-15-08 11:28 AM
horizontal rule
326

Well, 265 to 325. Something of an impasse at this point.


Posted by: DS | Link to this comment | 02-15-08 11:30 AM
horizontal rule
327

Discussions of consciousness with no real consensus on what that actually means, Emerson recapitulating Hilary Putnam, LB taking the side of our robot overlords; has the world gone mad?


Posted by: Sifu Tweety | Link to this comment | 02-15-08 11:30 AM
horizontal rule
328

I'm sure an actual ethicist is going to have something to say here, but there are non-cognitivist metaethical views that aren't simple emotivism and varieties of cognitivism that aren't some simple form of moral realism.

I used to know about this, where "this" is different ways of classifying ethical views and "know" means took one class and did most of the reading, but I've forgotten a lot of it. I did intend my question about truth aptitude (I almost wrote this as "aptness") to get at the cognitive/non-cognitive distinction.


Posted by: washerdreyer | Link to this comment | 02-15-08 11:39 AM
horizontal rule
329

325: I think that "human-class judgment" is the doubtful term in what you're saying. AI processes the data in its enormous memory -- it doesn't just retrieve information. A lot of sophisticated technical processes are completely automated and do replace human-class judgment, but this is because the processes are better understood now. (Metallurgy is an example -- before materials science, it was an empirical art).

In other words, AI can replace human-class judgment when some area is moved out of the art/craft/skill area into the area of fully-understood processes. But that's different from having human-class judgment.


Posted by: John Emerson | Link to this comment | 02-15-08 11:40 AM
horizontal rule
330

re: 328

I've never actually taught it, but I have masqueraded as an 'ethicist' for the purposes of teaching the occasional undergraduate revision class. If/when filling out job applications I wouldn't list it as an 'area of competence'.

There are people who post here who genuinely know stuff about it.


Posted by: nattarGcM ttaM | Link to this comment | 02-15-08 11:42 AM
horizontal rule
331

329: That's an argument that we're not going to get strong AI (which I think, ignorantly, is pretty likely to be correct). All I've been arguing about with Slack is if we did develop strong AI (yeah, Tweety, not rigorously defined or anything. Some machine which you could interact with in a manner generally appearing to be conscious, able to perform those cognitive tasks which now only people can perform. That's still not rigorous, but you know what I'm handwaving at.) whether it would be any use.


Posted by: LizardBreath | Link to this comment | 02-15-08 11:45 AM
horizontal rule
332

"Actual ethicist" = "metaethicist". People who talk about ethical questions per se are amateurs and cranks. Ethical questions are sometimes useful for testing metaethical theories, but so are trolleycar problems.


Posted by: John Emerson | Link to this comment | 02-15-08 11:47 AM
horizontal rule
333

John, I realize you're trolling, but "actual ethicist = metaethicist" is just wrong, even by the academy's definitions.


Posted by: Cala | Link to this comment | 02-15-08 11:49 AM
horizontal rule
334

331: that is extremely vague, though. You mean all tasks a human can perform? Interacting in a human-like way in all circumstances? It's very likely we'll keep chipping away at the edges, as we have been, perhaps indefinitely. Indeed, it may be that machine consciousness could be utterly dissimilar to our own, as implemented.


Posted by: Sifu Tweety | Link to this comment | 02-15-08 11:50 AM
horizontal rule
335

You mean all tasks a human can perform? Interacting in a human-like way in all circumstances?

And I haven't got the knowledge to get much more specific. But handwavingly, what I know about AI research is that it's divided into stuff that's not even a little consciousness-like, like expert systems and so forth (really really useful and important, but not much related to how biological minds function), or that's really really really low-level perception. Something with what would be perceived as initiative and judgment, also capable of high-level cognitive functioning, even if it wasn't much like a human to interact with, is what I'm thinking of.

But that's not much less vague, and it's informed by a combination of a lifetime of SF reading and pop-science articles, not anything more substantive.


Posted by: LizardBreath | Link to this comment | 02-15-08 11:57 AM
horizontal rule
336

People keep telling me about the great stuff in philosophical ethics, but I don't see philosophical ethics playing much of a role in real-world ethical discussions. I believe that I've even seen it asked "Why should it?" Because philosophical ethics is too specialized and technical for real-world people.

Bob Somerby, who studied philosophy many years ago, asked why public political discourse today is so wretched when we have so many wonderful philosophers. He mentioned Nozick and Rawls specifically, who are really at the relatively engaged end of political philosophy, but really haven't contributed much of anything real. (Nozick more than Rawls, but his influence has been pretty negative. Singer strikes me as a bad example too, since he provides sophisticated arguments for the tiny audience of philosophical animal-rightsers. The energy of animal rights comes from elsewhere.)


Posted by: John Emerson | Link to this comment | 02-15-08 12:07 PM
horizontal rule
337

John, your 291 reminded me of a bit I once read about the history of computer animation. At some point animators started seriously asking themselves why everything they rendered came out looking plastic. It turns out that plastic reflects light with white highlights, while most substances reflect it with highlights in their respective colors. It's just that the animators worked in environments where pretty much everything is made of plastic. Took a while to nail that one down. I sometimes wonder what equivalent oversights there might be in early synthetic consciousness.

By your use of `rendered' I assume you mean computer animation. In which case this is mostly a just-so story. Or at least, while some animators may have asked the question (if they didn't understand the underlying tools too deeply), it was really never an `oversight'.


The understanding of physical optics has been far ahead of computer graphics since before computers existed. The problem isn't the models, it's the computation.

It turns out that fairly crude approximations to the `correct' models (e.g. Phong shading) are much *better* approximations to what goes on with plastics, because plastics' interaction with light is fairly simple. The problem is that the mathematics may be reasonably easy to write down, but a) real-life interactions aren't simple and b) even the simple interactions can be very expensive to compute. But specular highlights and things like the Fresnel effect were understood well before anyone asked a computer to try them.

The history of computer graphics consists mostly of a mix of slowly increasing the accuracy of approximation for the relevant integrals/stochastics (e.g. global illumination) and hacks that are fast & wrong but look pretty good (e.g. games).
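To make the plastic point concrete: in the classic Phong-style model the specular highlight takes the colour of the light (typically white) regardless of the surface's diffuse colour, which is exactly the "everything looks like plastic" effect described above. Here's a minimal illustrative sketch (the function names and the `metallic` flag are my own shorthand, not from any real graphics API):

```python
# Minimal Phong-style shading sketch. The key point: the specular term is
# independent of the surface's diffuse colour, so a red surface still gets
# a white highlight (the "plastic" look). Tinting the specular term by the
# surface colour crudely approximates metal-like materials instead.

def normalize(v):
    mag = sum(x * x for x in v) ** 0.5
    return tuple(x / mag for x in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def phong(diffuse_colour, normal, light_dir, view_dir,
          shininess=32, metallic=False):
    n = normalize(normal)
    l = normalize(light_dir)
    v = normalize(view_dir)
    # Diffuse term: Lambertian, tinted by the surface colour.
    lambert = max(dot(n, l), 0.0)
    # Mirror reflection of the light direction about the normal.
    r = tuple(2 * dot(n, l) * nc - lc for nc, lc in zip(n, l))
    spec = max(dot(r, v), 0.0) ** shininess
    # Plastic: white highlight. "Metal": highlight tinted by surface colour.
    spec_colour = diffuse_colour if metallic else (1.0, 1.0, 1.0)
    return tuple(min(c * lambert + s * spec, 1.0)
                 for c, s in zip(diffuse_colour, spec_colour))
```

For a pure red surface viewed along the reflection direction, the plastic version bleeds white into the green and blue channels at the highlight, while the metallic version keeps the highlight red: a couple of lines of arithmetic, but as soup notes, the expense comes from doing something like this (and much more) per pixel per bounce.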


Posted by: soup biscuit | Link to this comment | 02-15-08 12:29 PM
horizontal rule
338

Indeed, it may be that machine consciousness could be utterly dissimilar to our own, as implemented.

Right, which is why extrapolation from the current situation is a bit of a mug's game.


Posted by: soup biscuit | Link to this comment | 02-15-08 12:31 PM
horizontal rule
339

w/d up there at 319: What's the status of your claim that consumer capitalism was wrong? I'd want to say that it's true (or false if the evidence points that way). I know you don't want to say it's true, but then I have no idea what you do want to say about it. It accurately expresses your emotions towards consumer capitalism?

Basically what 321 and 324 say.

328: I did intend my question about truth aptitude (I almost wrote this as "aptness") to get at cognitive/non-cognitive distinction.

Still don't know quite what you mean by truth aptitude, and as already noted, there's a hell of a lot of metaethical water under the bridge already about this stuff. The easiest answer, given that this thread is dead anyway, is that you can, if talking about moral 'truth' seems necessary (and it often does so seem), talk about moral *reasoning*, noting that it differs in important ways from nonmoral reasoning, noting that what counts as evidence for moral claims is rather different from that for nonmoral claims, and thereby noting that applying the notion of truth-status to the former is a forced fit that necessarily distorts the ways in which moral claims work.

There are a bunch of red herrings to be avoided along the way, not least of which is the prescriptive (moral) / descriptive (nonmoral) divide. "It's raining," e.g. can be used as a prescriptive statement.

Speaking again very (very) crudely, the kind of question you ask insists on an epistemological reading of ethics ('What sort of knowledge is moral knowledge?' 'What sorts of facts make moral statements true?'); whereas I'd read it in terms of philosophy of language*: what are we doing when we engage in moral language-games? I'm a Wittgensteinian, baby. In the beginning was the deed. The question is not how we map ourselves onto the world as receptors, but how we participate in it as agents.

* The professional philosophers will cringe at that distinction.


Posted by: parsimon | Link to this comment | 02-15-08 2:13 PM
horizontal rule
340

I agree with LizardBreath. Assuming we do get AI as good as humans and the cost of this AI is low enough, such AI will be massively used.

My hunch is that getting AI to human-level intelligence will be very hard, but moving AI from human-level intelligence to greater-than-human-level intelligence won't be that difficult. Especially since humans won't necessarily be the ones doing the work.


Posted by: lemmy caution | Link to this comment | 02-15-08 2:45 PM
horizontal rule
341

The question is not how we map ourselves onto the world as receptors, but how we participate in it as agents.

This sounds like a phenomenological understanding of ethics, not necessarily one that focuses on language specifically.


Posted by: Michael Vanderwheel, B.A. | Link to this comment | 02-15-08 2:54 PM
horizontal rule
342

341: Dude, you're using big words. You're messing up the categories. Don't you know that analytic philosophers hate phenomenology?


Posted by: parsimon | Link to this comment | 02-15-08 3:03 PM
horizontal rule
343

Mega-late, but several points on the next-50-years scenarios:

1) I think the role of genetic engineering and other "direct" manipulations of human beings has not been given its proportional due in this thread.

2) I agree that it will continue to be human-computer symbiosis that will get most of the traction in this period. "Judgment" will increasingly get merged right into the search, so you will have the option of getting partially analyzed results.

3) There will be a lot of attention to advancing the human-computer scenario given the commercial demand right now for improving on today's absurd Blackberryish gyrations (perspective from a man with big fingers). For input there aren't that many channels for fine motor control past the fingers, eyeballs, tongue, sort of the toes; for output, miniature heads-up displays and embedded earphones will go a long way toward satisfying demand for years. The latter may impede development of more direct neural inputs.


Posted by: JP Stormcrow | Link to this comment | 02-16-08 10:03 AM
horizontal rule
344

from a man with big fingers

"You know what they say about men with big fingers."

"No, what do they say about men with big fingers?"

"Their emails have a lot of typos."


Posted by: LizardBreath | Link to this comment | 02-16-08 10:12 AM
horizontal rule
345

344: As do their Unfogged posts.

"You know what honey? Maybe you could just leave your pants on tonight."


Posted by: JP Stormcrow | Link to this comment | 02-16-08 10:30 AM
horizontal rule