Re: Skynet Watch

1

I wonder whether it's going to have an effect on law firm staffing.

That's just cruel, LB.


Posted by: peep | Link to this comment | 06-21-10 7:01 AM
horizontal rule
2

I could actually imagine it being easier to make a Jeopardy-answerer than a general question-answerer. Most of the answers are going to be trivia items - people, places, etc. - which allows for a lot of narrowing down. Not that it's not an achievement, but I wonder how applicable it will be to more open-ended questions like "What cases developed jurisprudence in maritime wages?"


Posted by: Minivet | Link to this comment | 06-21-10 7:05 AM
horizontal rule
3

Eh, I mean, it's very cool, and a good indication of the (impressive!) state of machine learning research, but I would caution you against thinking this implementation specifically is going to generalize. For all that they contain wordplay and so on, Jeopardy questions are actually fairly regularized in their form, and it's working with the whole corpus of "answers" from the show's history, if I remember right. Also, like Deep Blue, I have a feeling their approach (throw a fuck-ton of money at building an enormous supercomputer, and then apply a kitchen-sink approach to each query) is going to end up being too inefficient to be useful for real-text searches; anybody can throw a weighted combination of every search method at a problem, but the more you do that, the more tuning is required, and the less well it generalizes.

If you want to look for Skynet within IBM, I'd say the Blue Brain project is a somewhat more interesting place to look.

All those caveats aside, yeah, neat! It'll probably win at Jeopardy.


Posted by: Sifu Tweety | Link to this comment | 06-21-10 7:11 AM
horizontal rule
4

Having now skimmed most of the article, I'm kind of disappointed - I was expecting a conceptual breakthrough from the intro and, as Sifu says, it's just throwing more and more computer power at a hodgepodge of statistical techniques. I suspect we're going to need better theories of consciousness, and how the brain represents meaning, before we can get satisfying improvements in tasks like this.


Posted by: Minivet | Link to this comment | 06-21-10 7:19 AM
horizontal rule
5

Could someone explain why it's so hard, and why a supercomputer is needed? I had the opposite reaction to LB.


Posted by: David | Link to this comment | 06-21-10 7:20 AM
horizontal rule
6

Another thing about Jeopardy questions, thinking about it: I've noticed that one of the reasons guessing works so well with Jeopardy questions is that the questions involving some obscure subject matter often have as their answers (reverse "answer" and "question" there if you're pedantic) the most famous concept or personage within that subject area. So if there's a question about (say) an Assyrian king, the answer's probably going to be "Sargon".
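The guessing strategy described above amounts to a popularity prior, and can be sketched in a couple of lines. Everything here is invented for illustration: the topic labels and the "fame" scores are stand-ins for something like corpus frequency, not anything Watson actually uses.

```python
# Toy version of the guessing heuristic described above: for a clue on
# an obscure topic, just guess the most famous entity associated with
# that topic. The topics and fame scores below are invented.
fame_by_topic = {
    "assyrian kings": {"Sargon": 940, "Ashurbanipal": 610, "Shalmaneser": 210},
    "melville novels": {"Moby-Dick": 990, "Omoo": 120, "Typee": 150},
}

def best_guess(topic):
    scores = fame_by_topic[topic]
    # Return the entity with the highest fame score for this topic.
    return max(scores, key=scores.get)
```

With made-up numbers like these, the Assyrian-king clue comes back "Sargon" every time, which is the point: a strong prior can do surprisingly well before any actual question-answering happens.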


Posted by: Sifu Tweety | Link to this comment | 06-21-10 7:22 AM
horizontal rule
7

6: Yeah, a bit like how crosswords are full of "Etna" and "iota" and "eke."


Posted by: Minivet | Link to this comment | 06-21-10 7:24 AM
horizontal rule
8

I don't mean to be dismissive, exactly. Like I said, it is very impressive that we're this good at document classification. But I think the missed questions in the article are illustrative: when it fucks up, it fucks up both badly and incomprehensibly.

In terms of grand ML projects I'd say the Netflix prize might be somewhat more impressive: improving the rates at which you can predict what movies somebody likes, based merely on what they've seen before? That's very hard. I certainly can't do that with any kind of consistency.
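For a sense of what "predicting ratings" means mechanically, here is a minimal baseline of the kind Netflix Prize entrants started from: global mean plus per-user and per-movie bias terms. The ratings below are invented toy data; the real contest entries layered far more sophisticated models on top of a baseline like this.

```python
from collections import defaultdict

# Invented toy data: (user, movie) -> star rating.
ratings = {
    ("alice", "Alien"): 5, ("alice", "Amelie"): 4,
    ("bob", "Alien"): 2, ("bob", "Amelie"): 1,
    ("carol", "Amelie"): 5, ("carol", "Alien"): 4,
}

mu = sum(ratings.values()) / len(ratings)  # global mean rating

def biases(index):
    """Average deviation from the global mean, per user (index 0) or per movie (index 1)."""
    sums, counts = defaultdict(float), defaultdict(int)
    for key, r in ratings.items():
        sums[key[index]] += r - mu
        counts[key[index]] += 1
    return {k: sums[k] / counts[k] for k in sums}

user_bias, movie_bias = biases(0), biases(1)

def predict(user, movie):
    # Predicted rating = global mean + user's bias + movie's bias.
    return mu + user_bias.get(user, 0.0) + movie_bias.get(movie, 0.0)
```

On this toy data the global mean is 3.5, so a generous rater (alice) gets predictions pushed up and a harsh one (bob) pushed down, before any notion of taste similarity enters at all.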


Posted by: Sifu Tweety | Link to this comment | 06-21-10 7:25 AM
horizontal rule
9

Assurbanipal, surely.


Posted by: David | Link to this comment | 06-21-10 7:27 AM
horizontal rule
10

Another point to note: if you'll notice, the scientists (well, if you count Wolfram -- maybe "scientist" is more accurate) quoted in the article are sort of minimizing expectations, and the people spinning grand dreams of medical and legal question-answering services are either trying to sell million-dollar servers, the software that runs on them, or the consulting time to hook them up.


Posted by: Sifu Tweety | Link to this comment | 06-21-10 7:29 AM
horizontal rule
11

I take due notice of your note.


Posted by: Minivet | Link to this comment | 06-21-10 7:31 AM
horizontal rule
12

I find it charming that IBM is still trying to sell supercomputers.


Posted by: Eggplant | Link to this comment | 06-21-10 7:32 AM
horizontal rule
13

11: you noticed my note, I notice. Noted!


Posted by: Sifu Tweety | Link to this comment | 06-21-10 7:34 AM
horizontal rule
14

improving the rates at which you can predict what movies somebody likes, based merely on what they've seen before?

Do we know if they liked what they saw? If they're terrible at choosing movies they'll like, it would be hard to extrapolate much.


Posted by: heebie-geebie | Link to this comment | 06-21-10 7:41 AM
horizontal rule
15

14: I believe in the actual Netflix prize the metric was user-submitted ratings, rather than simply whether they watched (or, since it's Netflix, received and then mailed back unwatched) them.


Posted by: Sifu Tweety | Link to this comment | 06-21-10 7:42 AM
horizontal rule
16

Actually, the article's kind of frustrating; it gives enough information that I can guess at several things they might be attempting, but not really enough to tell if they're actually doing something new. At the core, it sounds like they're using good old-fashioned latent semantic analysis, which is just taking a big pile of documents and figuring out combinations of words which tend to appear together or not. You can use it to solve SAT analogy problems about as well as actual SAT takers, but that doesn't seem like a "to the bunkers!" moment either.
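Cosma's description of latent semantic analysis can be sketched directly: build a term-document count matrix, truncate its SVD, and compare documents in the resulting low-dimensional "topic" space. The tiny five-word matrix below is invented; real LSA runs over thousands of documents.

```python
import numpy as np

# Invented term-document count matrix: rows are words, columns are
# four toy documents (two about shipping, two about dermatology).
vocab = ["ship", "cargo", "wage", "rash", "skin"]
X = np.array([
    [2, 1, 0, 0],   # ship
    [1, 2, 0, 0],   # cargo
    [1, 1, 0, 0],   # wage
    [0, 0, 2, 1],   # rash
    [0, 0, 1, 2],   # skin
], dtype=float)

# LSA: keep only the top k singular directions as latent "topics".
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
doc_vecs = (np.diag(s[:k]) @ Vt[:k]).T   # one k-dimensional vector per document

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
```

In the reduced space the two shipping documents end up nearly identical and orthogonal to the dermatology ones, even though no document shares every word with its neighbor; that co-occurrence smoothing is all LSA is doing.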


Posted by: Cosma Shalizi | Link to this comment | 06-21-10 7:43 AM
horizontal rule
17

15: that's correct, the prize was for predicting ratings.


Posted by: Cosma Shalizi | Link to this comment | 06-21-10 7:44 AM
horizontal rule
18

I don't think Sargon counts as Assyrian, does he? Too early.


Posted by: essear | Link to this comment | 06-21-10 7:45 AM
horizontal rule
19

12: It seems like IBM is mostly a consultancy these days, with a sideline in mad science.


Posted by: foolishmortal | Link to this comment | 06-21-10 7:48 AM
horizontal rule
20

Funny how the article doesn't mention that Wolfram|Alpha is a machine only capable of answering "I don't know what to do with that input" to any question that isn't precisely tailored to what it knows.


Posted by: essear | Link to this comment | 06-21-10 7:51 AM
horizontal rule
21

20: I use Wolfram|Alpha semi-regularly for simple technical queries like "what is the refractive index of hydrogen gas." Anything more complicated seems to confuse it.


Posted by: togolosh | Link to this comment | 06-21-10 8:11 AM
horizontal rule
22

"I could actually imagine it being easier to make a Jeopardy-answerer than a general question-answerer"

Oh, definitely. I mean, I imagine Google could knock up a rough-and-ready version in an hour or two. It's a much simpler task for an algorithm to identify an item from a list of known characteristics than it is to generate (on a generalised basis rather than within a given format) the most pertinent characteristics for a known item. Now, Watson seems to be a great deal more sophisticated than that, but the point is that Jeopardy-style questions are much easier for computers than conventional questions - witness the difficulties Wolfram Alpha often encounters with fairly simple queries.
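The easier of the two tasks, identifying an item from known characteristics, can be sketched as a crude overlap score. The entities and attribute sets here are invented illustrations, not anything resembling a real system's knowledge base.

```python
# Crude sketch of "identify an item from a list of known
# characteristics": score each known entity by how many clue words
# match its attribute set. Entities and attributes are invented.
entities = {
    "Sherlock Holmes": {"detective", "deerstalker", "violin", "london"},
    "Hercule Poirot":  {"detective", "belgian", "moustache"},
}

def identify(clue):
    words = set(clue.lower().split())
    # Pick the entity whose attribute set overlaps the clue the most.
    return max(entities, key=lambda name: len(words & entities[name]))
```

Going the other way, from a known item to its most pertinent characteristics for an arbitrary audience and format, has no comparably simple scoring trick, which is Ginger Yellow's point.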


Also, the article unintentionally highlights one of the many glorious ironies of the internet.

"Type that clue into Google, and you'll get first-page referrals to "elementary, my dear watson" but none to deerstalker hats"

This statement is no longer true, thanks to its very publication.


Posted by: Ginger Yellow | Link to this comment | 06-21-10 8:17 AM
horizontal rule
23

Wolfram|Alpha seems roughly equivalent to that Pocket Ref book, but with more graphs. Which is cool, but like so many things Stephen Wolfram does, not quite up to its billing.


Posted by: Sifu Tweety | Link to this comment | 06-21-10 8:18 AM
horizontal rule
24

It is hard to live up to your billing sometimes.


Posted by: Opinionated Y2K | Link to this comment | 06-21-10 8:19 AM
horizontal rule
25

22: I had that same thought as I read it.


Posted by: Cosma Shalizi | Link to this comment | 06-21-10 8:29 AM
horizontal rule
26

7: Etna is a constant lately. On the order of Omoo.


Posted by: Mister Smearcase | Link to this comment | 06-21-10 8:32 AM
horizontal rule
27

Ione Skye is the cutest of all the frequent crossword answers.


Posted by: Moby Hick | Link to this comment | 06-21-10 8:37 AM
horizontal rule
28

22, 25: n-gram googlewhacking for n > 3: the next great artificial intelligence challenge.


Posted by: Sifu Tweety | Link to this comment | 06-21-10 8:37 AM
horizontal rule
29

26: Etna is a constant lately

Yes, most of the recent eruptions have been small ones on the flank.


Posted by: JP Stormcrow | Link to this comment | 06-21-10 8:45 AM
horizontal rule
30

That's what she said.


Posted by: Moby Hick | Link to this comment | 06-21-10 8:50 AM
horizontal rule
31

Whatever happened to that thing where IBM was trying to get into wearable computing? I want my contact-displays! Maybe I'll just have to wait for Apple to get there, and then put up with them gluing it to my eye and only letting me get programs from the App Store.


Posted by: pdf23ds | Link to this comment | 06-21-10 9:03 AM
horizontal rule
32

26: "Lately"? I'm just glad "esne" and to a lesser extent "eft" are deprecated.


Posted by: Minivet | Link to this comment | 06-21-10 9:03 AM
horizontal rule
33

Huh. While the article seemed pretty clear that they weren't doing anything wildly new, I was still very impressed because I was thinking of Jeopardy questions as not significantly different from natural language questions generally.


Posted by: LizardBreath | Link to this comment | 06-21-10 9:06 AM
horizontal rule
34

I was thinking of Jeopardy questions as not significantly different from natural language questions generally.

The guy at the deli keeps getting angry when I say, "What is one pound of Swiss cheese?"


Posted by: Moby Hick | Link to this comment | 06-21-10 9:09 AM
horizontal rule
35

I beat the IBM machine despite totally blanking on such obscure words as "saddlebag" and "footlocker".

It got 4 out of 5 of the corporate conglomerates but had no idea on the "before and after" questions.


Posted by: Cryptic ned | Link to this comment | 06-21-10 9:10 AM
horizontal rule
36

26 Across: Hurler Hershiser famed among crossword solvers.


Posted by: JP Stormcrow | Link to this comment | 06-21-10 9:15 AM
horizontal rule
37

a nearly horizontal entrance to an underground mine, and a crossword favorite.


Posted by: alameida | Link to this comment | 06-21-10 9:16 AM
horizontal rule
38

35: I liked that rather than "ditty bag" its preferred answer was "Papa's Got a Brand New Bag".

32: Not a fan of the late Eugene T. Maleska?


Posted by: Mr. Blandings | Link to this comment | 06-21-10 9:19 AM
horizontal rule
39

Obligatory reminder that the British Ministry of Defence built Skynet decades ago; it's now in operation but, as far as I know, has nothing to do with the UK's nuclear weapons or its small fleet of killer robots.


Posted by: ajay | Link to this comment | 06-21-10 9:19 AM
horizontal rule
40

33: it is impressive! It just isn't necessarily particularly generalizable, not least because the questions are explicitly constructed to have a single, unequivocally correct answer. A question like "is this document materially concerned with shipping regulations?" or "is this paper relevant to the treatment of persistent skin rashes in elderly patients?" or whatever is very different.


Posted by: Sifu Tweety | Link to this comment | 06-21-10 9:25 AM
horizontal rule
41

32: You know, my undergraduate major was Medieval Studies. While the classes I took made no coherent sense at all, I did take a bunch of medieval history classes. Never saw the word 'esne' other than in a crossword. I think some desperate puzzle designer made it up back in 1937, and other puzzle designers have been copying it since then.


Posted by: LizardBreath | Link to this comment | 06-21-10 9:30 AM
horizontal rule
42

40: Even worse, what's really wanted is "Here is the client's proposed course of action. [[Paragraph or seventy in natural language follows.]] Is it compliant with all applicable shipping regulations?"


Posted by: Cosma Shalizi | Link to this comment | 06-21-10 9:38 AM
horizontal rule
43

41: It appears to have only been in anything like common use in Old English, and then in some 19C retro works (Ivanhoe). Old English is effectively a different language; I don't see how that puzzle-designer lived with himself.


Posted by: Minivet | Link to this comment | 06-21-10 9:44 AM
horizontal rule
44

42: What would be almost as useful would be a reliable answer to "What regulations apply?" Although, now that I looked at the interactive thing Ned pointed out, and got a look at the type of wrong answers Watson was coming up with, I don't think it's all that close to being able to do that kind of work.


Posted by: LizardBreath | Link to this comment | 06-21-10 9:47 AM
horizontal rule
45

44: If that ever could be automated, my guess is that regulations would increase in number and complexity to the point that nobody would be able to understand the regulations without the software. I'm growing more and more convinced that society needs more social and political movements that involve torch-bearing crowds who smash machines.


Posted by: Moby Hick | Link to this comment | 06-21-10 9:51 AM
horizontal rule
46

Answering the unambiguous trivia questions isn't very impressive, but some other things are. From the NYT article:

Over the rest of the day, Watson went on a tear, winning four of six games. It displayed remarkable facility with ... sophisticated wordplay ("Classic candy bar that's a female Supreme Court justice" -- "What is Baby Ruth Ginsburg?").

...
During one game, a category was "All-Eddie Before & After," indicating that the clue would hint at two different things that need to be blended together, one of which included the name "Eddie." The $2,000 clue was "A 'Green Acres' star goes existential (& French) as the author of 'The Fall.' " Watson nailed it perfectly: "Who is Eddie Albert Camus?"

Sophisticated or not, a computer with that understanding of wordplay is impressive. This right here today isn't generalizable to legal briefs, of course, but it's a lot closer than anything else I've seen or read about in real life.


Posted by: Cyrus | Link to this comment | 06-21-10 9:51 AM
horizontal rule
47

Obligatory reminder that the British Ministry of Defence built Skynet decades ago; it's now in operation but, as far as I know, has nothing to do with the UK's nuclear weapons or its small fleet of killer robots

Correct. It's a satellite communications network.


Posted by: Ginger Yellow | Link to this comment | 06-21-10 9:52 AM
horizontal rule
48

OH, THAT'S ORIGINAL.


Posted by: OPINIONATED NED LUDD | Link to this comment | 06-21-10 9:52 AM
horizontal rule
49

48: What is the sincerest form of flattery?


Posted by: Moby Hick | Link to this comment | 06-21-10 9:53 AM
horizontal rule
50

46: That's the sort of thing that had me wildly impressed. Looking at the interactive feature in the article, I'm less so -- the wrong answers on the wordplay questions make it look as if it's not 'understanding' the wordplay in any meaningful sense (like, the wrong answers aren't well-formed in terms of the wordplay).


Posted by: LizardBreath | Link to this comment | 06-21-10 9:56 AM
horizontal rule
51

46: I'm not sure those are any harder than general Jeopardy questions (they might even be easier). It's a specially labelled category where you know you have to find two unrelated 2-word answers that overlap, right?


Posted by: Eggplant | Link to this comment | 06-21-10 9:59 AM
horizontal rule
52

46: yeah, I imagine there's some hardcoded trickery for the "Before & After" categories; it doesn't seem like it would be too hard to slice the question in two and try to splice the top options together in various ways. In general, I suspect they have specific algorithms to deal with each of the common wordplay categories (before and after, starts with a letter, ends with a suffix, etc.). That certainly seems like the easiest way to do it, especially since the nature of the wordplay is signaled in the category name.
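The slice-and-splice idea sketched above is easy to make concrete: take candidate answers for each half of the clue and join any pair that overlaps on a shared word. This is a guess at the kind of category-specific trick involved, not IBM's actual code.

```python
# Hypothetical sketch of the "Before & After" trick: splice any pair
# of candidates where the last word of one is the first word of the
# other. Not IBM's actual method; the candidates are supplied by the
# caller (in Watson's case, presumably its top answers per half-clue).
def splice(before_candidates, after_candidates):
    spliced = []
    for b in before_candidates:
        for a in after_candidates:
            b_words, a_words = b.split(), a.split()
            if b_words[-1].lower() == a_words[0].lower():
                spliced.append(" ".join(b_words + a_words[1:]))
    return spliced
```

On the article's example, splicing the top candidates for each half (`["Eddie Albert"]` and `["Albert Camus"]`) yields `["Eddie Albert Camus"]`, and non-overlapping pairs like "Eva Gabor" simply drop out.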


Posted by: Sifu Tweety | Link to this comment | 06-21-10 9:59 AM
horizontal rule
53

The pwner and pwnee category is more difficult.


Posted by: Eggplant | Link to this comment | 06-21-10 10:01 AM
horizontal rule
54

If that ever could be automated, my guess is that regulations would increase in number and complexity to the point that nobody would be able to understand the regulations without the software.

I think that this has basically happened with US taxes. Is there any movement toward standardized clauses in law? (this paragraph defines escrow obligations for the bank, these paragraphs define duty, breach, causation, and damage...).


Posted by: lw | Link to this comment | 06-21-10 10:01 AM
horizontal rule
55

54: And that is exactly what I was thinking of.


Posted by: Moby Hick | Link to this comment | 06-21-10 10:02 AM
horizontal rule
56

I'm constantly frustrated by how Nosflow|Alpha is unable even to answer efficiently such questions as "What grammar-little-bitchery might be applicable in responding to this blog comment?".


Posted by: Merganser | Link to this comment | 06-21-10 10:03 AM
horizontal rule
57

31: The iEye will be announced by Steve Jobs in 2020. It will be banned in 2021 due to people watching 3D porn while driving.


Posted by: togolosh | Link to this comment | 06-21-10 10:06 AM
horizontal rule
58

57: Or mass transit will finally flourish beyond the wildest dreams of environmentalists.


Posted by: Moby Hick | Link to this comment | 06-21-10 10:08 AM
horizontal rule
59

54: I think that does happen occasionally, as with the Uniform Commercial Code.


Posted by: Minivet | Link to this comment | 06-21-10 11:15 AM
horizontal rule
60

Could someone explain why it's so hard?

The short answer is that every human being has experiences that inform their understanding, and computers don't have experiences at all. Every human being is the outcome of a couple billion years of adaptive evolution, has instincts and senses and emotions, desires and fears. Computers lack all these, and so lack the essemtial background common to all human understanding of natural language.

One of the ways this difference manifests is in dealing with ambiguity -- natural language is full of ambiguous constructs, and humans apparently resolve them on the basis of that shared background and past experiential learning.

We don't know how to build hardware that has the kind of fine-grained massive and multileveled parallelism seen in the human brain. We don't even know how the brain is organized and connected except in the most general terms. We are only beginning to understand the rudiments of how the brain processes language. So direct modelling of the biological processes of cognition is right out.

For a literary treatment of one approach to natural language "Artificial Intelligence" (the connectionist approach pursued by, among others, Doug Lenat at U Illinois Champaign-Urbana), see Richard Powers' Galatea 2.2


Posted by: joel hanes | Link to this comment | 06-21-10 11:20 AM
horizontal rule
61

46: Those particular examples should actually be comparatively easy to achieve using the standard sort of word-correlation techniques I was talking about.


Posted by: Cosma Shalizi | Link to this comment | 06-21-10 11:23 AM
horizontal rule
62

54: I think the morass that is the tax code is that way in part by design. Complexity makes cheating easier, makes it easier to play "oops, I misunderstood" when caught cheating, and generates huge amounts of money for tax preparation firms.


Posted by: togolosh | Link to this comment | 06-21-10 11:32 AM
horizontal rule
63

62: And, on the other side, makes it easier to get campaign cash for adding deductions and the like. A certain amount of complexity is necessary as the economy is complicated, but past that, increasing complexity just helps the elites defraud the people one way or another.


Posted by: Moby Hick | Link to this comment | 06-21-10 11:39 AM
horizontal rule
64

I'm going to print out 60 and post it above my desk.


Posted by: Witt | Link to this comment | 06-21-10 11:45 AM
horizontal rule
65

63: I'm not sure the tax code needs to reflect the complexity of the economy. Only if you want to tax different things at different rates do you run into real problems. Clearly some people think that's desirable (quite apart from the graft angle), but I think the price paid in tax code obfuscation is not worth the payoff. If the government wants to support some activity or other, let them cut a check and make it an explicit subsidy rather than using the tax code to hide the shifting of the burden onto other taxpayers.


Posted by: togolosh | Link to this comment | 06-21-10 11:48 AM
horizontal rule
66

Uniform Commercial Code.

Getting US states to cooperate is a pretty low bar to have cleared. Do even the Bahamas use the UCC? Any other place at all?

I was thinking of stuff like this, except functional. Ideally, a set of structured documents that could be used by legislators in individual countries to at least define terms, potentially also rules.

Taxes are complicated because an extra clause in the tax code is worth hundreds of thousands of dollars in campaign donations. Until that changes, the tax code will not simplify.


Posted by: lw | Link to this comment | 06-21-10 11:50 AM
horizontal rule
67

Boy, I hated Galatea 2.2.

I don't love 60 either, but I'm not sure I have it in me to explain exactly why. Maybe it's because it's actually University of Illinois Urbana-Champaign?

Nah, that's not it.

Maybe it's because it's sort of glib in its lyricism, and discounts the fact that (1) you don't have to understand something completely to model it -- you model it in order to understand it and (2) everything in there applies almost as well to modeling the behavior of cockroaches, and nobody ever says that the rich behavorial world of a cocktail can never be recreated inside a sterile, dead computer.


Posted by: Sifu Tweety | Link to this comment | 06-21-10 12:09 PM
horizontal rule
68

I'm going to print out 60 and post it above my desk.

60 is welcome and true, but I hope we don't need to be reminded of it on a daily basis! I mean, we know that, right?


Posted by: parsimon | Link to this comment | 06-21-10 12:10 PM
horizontal rule
69

The tax code is a thing of beauty, subtlety, and fine complexity. I sometimes envy tax lawyers. Note that much of the complexity, however, comes down to problems in figuring out what is and what is not "income."

And to 54 and 66, most contracts and other legal documents are more or less standardized, very excessively in my view (that is, even very good lawyers on very big transactions rely mindlessly on precedent and form language, to their detriment and my gain).


Posted by: Robert Halford | Link to this comment | 06-21-10 12:12 PM
horizontal rule
70

the rich behavorial world of a cocktail can never be recreated inside a sterile, dead computer.

It depends on the computer's cooling system, really.


Posted by: nosflow | Link to this comment | 06-21-10 12:17 PM
horizontal rule
71

66: I think some of that goes on in business regulation, though in a purely nonbinding way, like the International Conference on Harmonization of Technical Requirements for Registration of Pharmaceuticals for Human Use (which is very big-business/regulator cozy, based on its membership). There are also some treaty obligations to pass certain laws, although that's harmonizing substance, not detail.


Posted by: Minivet | Link to this comment | 06-21-10 12:18 PM
horizontal rule
72

He who devises a computer that can experience pain and boredom will have made a truly impressive advance.


Posted by: joel hanes | Link to this comment | 06-21-10 12:18 PM
horizontal rule
73

But, you know, ascientific or not the rough contours of 60 are true. Human brains did evolve over millions of years (billions, I guess, if you're starting at abiogenesis). People do indeed have instincts, senses, emotions, desires, and fears. When you get into arguing that computers lack those things, you'll get into trouble, probably (computers pretty obviously do have senses, "desires" are pretty easy to model, "fears" seem like they should probably be rolled up into "emotions", and if you accept the somatic marker hypothesis (which you are by no means required to) modeling emotions should be pretty doable as well) but that's okay.

The thing about humans resolving ambiguity by reference to shared background and past experiential learning is probably true enough, although I don't know that it's the whole story (extra-linguistic cues probably play a role, among other things). Experiential learning can obviously be modeled (e.g. like they're doing with the Jeopardy bot), although it's not totally clear how biologically realistic this is at the moment.

It's certainly true that we don't really have the capacity to build a computer that models a human brain per se, but whether that's a deal-breaker is very much up in the air (you might be able to model aspects of natural language processing, you might be able to get good results by integrating more sensory input, who knows. It's a very active area of research), although it's clear that you can definitely model a lot of the things brains can do without building a toy brain per se (which makes sense! You can model a lot of the things e.g. buildings can do without building a whole other building to test on). Direct modeling of biological cognition is of course possible -- look at the Blue Brain project, among other things -- if not on the scale you'd need to do anything practical.

And, in conclusion, I hated that book.


Posted by: Sifu Tweety | Link to this comment | 06-21-10 12:20 PM
horizontal rule
74

70: wow. That's awesome. I guess I know what I'm thinking about while I should be working today.


Posted by: Sifu Tweety | Link to this comment | 06-21-10 12:21 PM
horizontal rule
75

Computers are, however, very adept at determining Hottest Media Personalities (Off-Air).


Posted by: foolishmortal | Link to this comment | 06-21-10 12:26 PM
horizontal rule
76

67: that's what makes horse races. I certainly don't think it's Powers's strongest work, but I liked it -- maybe because I follow the AI scene a little, and knew a little about Lenat's research (and Reid's famous criticism thereof).

Did you read Gain? Or Gold Bug Variations? Or Echo Maker? I thought all of those were better.

Sorry about reversing Urbana and Champaign. Also, I misspelled "essential". I shall strive to improve.


Posted by: joel hanes | Link to this comment | 06-21-10 12:28 PM
horizontal rule
77

76.1: I have no idea why I disliked it so much. I read it quite a while ago. I have not read anything else by Powers, no.


Posted by: Sifu Tweety | Link to this comment | 06-21-10 12:31 PM
horizontal rule
78

Also you responded quite amiably to my grumpiness, which was nice of you, and I note.


Posted by: Sifu Tweety | Link to this comment | 06-21-10 12:32 PM
horizontal rule
79

No matter how well you model fermentation a computer will never know what it means to be drunk.


Posted by: Eggplant | Link to this comment | 06-21-10 12:34 PM
horizontal rule
80

Brinebots excepted.


Posted by: Eggplant | Link to this comment | 06-21-10 12:35 PM
horizontal rule
81

Is a Chinese room more likely to get flushed when it drinks?


Posted by: Sifu Tweety | Link to this comment | 06-21-10 12:36 PM
horizontal rule
82

Brinebots drink rotgut which is a rough mock-up sort of simulation of liquor. We're not there yet.


Posted by: Cryptic ned | Link to this comment | 06-21-10 12:36 PM
horizontal rule
83

77: Having read your stuff at the Institute for lo these many years, I suspect that you might like Gain if you read it.

(I think I remember that you like John McPhee, and Gain has some of that body-of-knowledge info-dump trait so characteristic of McPhee's writing.)

And any time you want to denigrate my own writing by calling it lyrical, you go right ahead.


Posted by: joel hanes | Link to this comment | 06-21-10 12:40 PM
horizontal rule
84

81 is bad on so many levels.

I think it would be a lot more interesting to build intelligence that wasn't a copy of human intelligence; not least because we'll never agree on whether we've done it.


Posted by: clew | Link to this comment | 06-21-10 12:43 PM
horizontal rule
85

67: No artificial intelligence will ever create anything to equal the genius of the cockroach/cocktail clause.


Posted by: peep | Link to this comment | 06-21-10 12:47 PM
horizontal rule
86

66.last: Sadly, you are correct.


Posted by: togolosh | Link to this comment | 06-21-10 12:48 PM
horizontal rule
87

84: you know, it probably is, but I thought about whether or not it was, like, totally racist, and given the actual distribution of the ALDH2 gene I argue that... well, who knows.


Posted by: Sifu Tweety | Link to this comment | 06-21-10 12:52 PM
horizontal rule
88

Relevant.


Posted by: teofilo | Link to this comment | 06-21-10 12:54 PM
horizontal rule
89

I don't get the last panel.


Posted by: parsimon | Link to this comment | 06-21-10 12:59 PM
horizontal rule
90

It reveals the backstory.


Posted by: teofilo | Link to this comment | 06-21-10 12:59 PM
horizontal rule
91

The last panel reveals that Utahraptor's claim that intelligent machines don't exist yet is not true. T-Rex would like to rebut the claim, but earlier he was warned by the machines that if he makes their existence known they will eliminate him.


Posted by: nosflow | Link to this comment | 06-21-10 1:01 PM
horizontal rule
92

E-LIM-MIN-NAT-ED!


Posted by: nosflow | Link to this comment | 06-21-10 1:02 PM
horizontal rule
93

So, it's funny because dinosaurs are extinct?


Posted by: Moby Hick | Link to this comment | 06-21-10 1:02 PM
horizontal rule
94

Moby you are cruising for an elimination.


Posted by: nosflow | Link to this comment | 06-21-10 1:03 PM
horizontal rule
95

I stopped with the hyphens, mostly.


Posted by: Moby Hick | Link to this comment | 06-21-10 1:06 PM
horizontal rule
96

That particular comic is probably not the best introduction for someone not already familiar with Dinosaur Comics.


Posted by: teofilo | Link to this comment | 06-21-10 1:07 PM
horizontal rule
97

This one elicited audible laughter from me.


Posted by: nosflow | Link to this comment | 06-21-10 1:11 PM
horizontal rule
98

91: Oh. I've never read that comic before, I don't think.


Posted by: parsimon | Link to this comment | 06-21-10 1:12 PM
horizontal rule
99

Everyone should read dino comix!


Posted by: nosflow | Link to this comment | 06-21-10 1:17 PM
horizontal rule
100

It is certainly a remarkable way to reuse art.


Posted by: Moby Hick | Link to this comment | 06-21-10 1:20 PM
horizontal rule
101

Megan wouldn't like this.


Posted by: nosflow | Link to this comment | 06-21-10 1:31 PM
horizontal rule
102

99: XKCD is better.


Posted by: togolosh | Link to this comment | 06-21-10 1:36 PM
horizontal rule
103

Only if you want to tax different things at different rates do you run into real problems.

As Halford, Esq. correctly notes, this is completely wrong. Measuring business income is not trivial.

The thing about complex regulations, and American law generally, is that the words are just a starting point for the development of an acceptable range of interpretation. A lot of the really abusive tax shelter crap was a problem of the big CPA firms and the high-end tax bar pissing away decades' worth of credibility about how that should be done.


Posted by: Not Prince Hamlet | Link to this comment | 06-21-10 1:43 PM
horizontal rule
104

103: Not trivial, but not exactly rocket surgery, either. It's done routinely and there are generally accepted ways of keeping track of stuff. No need to add complexity by taxing different kinds of income at different rates.

My preference for simplicity is a lost cause anyway, so no point arguing over it.


Posted by: togolosh | Link to this comment | 06-21-10 1:52 PM
horizontal rule
105

Is a chinese room more likely to get flushed when it drinks?

I got into an argument with a Chinese colleague about this who seemed rather offended by the idea that people might refer to something as an "Asian flush". He insisted that Westerners are just as likely to get flushed when drinking, and that the flushing has no unpleasant effects, and that it doesn't make people less likely to be alcoholic. Google seems to support me in saying that all of these claims are wrong.


Posted by: essear | Link to this comment | 06-21-10 1:59 PM
horizontal rule
106

104.1: It's done routinely by well-paid accountants. Those "generally accepted ways of keeping track of stuff" are known as GAAP, and the results they produce are as debatable as tax accounting. I agree with you that we'd be well-served by doing away with special rates for capital gains, but it's not like the Code or Regs were a whole lot shorter during the period when that was how it worked.


Posted by: Not Prince Hamlet | Link to this comment | 06-21-10 1:59 PM
horizontal rule
107

105: I'm Irish and Italian by ancestry, but I flush very easily after one or two drinks. I also get flush from embarrassment/nerves more easily that most people I know. I don't play poker for anything but low stakes and I'd make a shitty shoplifter.


Posted by: Moby Hick | Link to this comment | 06-21-10 2:03 PM
horizontal rule
108

106: True. I don't want to suggest that corporate accounting is a doddle. I've done accounts for small companies and it sucked even with straightforward wholesale dry goods and no weird assets.


Posted by: togolosh | Link to this comment | 06-21-10 2:05 PM
horizontal rule
109

I don't play poker for anything but low stakes and I'd make a shitty shoplifter.

I guess you better hold on to your job then.


Posted by: peep | Link to this comment | 06-21-10 2:05 PM
horizontal rule
110

I never associated the East Asians with being "flushed" while drinking. More like being incredibly drunk and sick to their stomach while drinking.


Posted by: Cryptic ned | Link to this comment | 06-21-10 2:06 PM
horizontal rule
111

105, 110: It's definitely a real thing, but I couldn't cite percentages.


Posted by: Not Prince Hamlet | Link to this comment | 06-21-10 2:08 PM
horizontal rule
112

111: here ya go.

Plus bonus cow costume!


Posted by: Sifu Tweety | Link to this comment | 06-21-10 2:09 PM
horizontal rule
113

112.2: I have several friends who have been known to look pretty much like that.

OT: Yggles wins the award for best blog post heading in recent memory.


Posted by: Not Prince Hamlet | Link to this comment | 06-21-10 2:13 PM
horizontal rule
114

I remember reading from Saiselgy that there was some rightwing outfit (Club for Growth?) that lobbies under the radar against any tax code simplification, just to make the process frustrating for people and thus encourage anti-tax sentiment. Or maybe that's just a convenient side effect of fighting to preserve all the loopholes? Either way, if there's anything to this, I'd hope it could gain a bit more widespread awareness. I've never heard it mentioned outside the aforementioned blog.

On preview: concomitant mentions of Yggles purely coincidental.


Posted by: persistently visible | Link to this comment | 06-21-10 2:15 PM
horizontal rule
115

113.2: The URL for that entry and its headline make a curious pair.


Posted by: nosflow | Link to this comment | 06-21-10 2:18 PM
horizontal rule
116

Oh yeah. The Asian flush was well known, and fair game for teasing amongst my friends.


Posted by: Megan | Link to this comment | 06-21-10 2:20 PM
horizontal rule
117

110 -- Seriously, how is it possible that someone who was in college in the past 30 years -- in the sciences, no less -- has never heard of the Asian flush?


Posted by: Robert Halford | Link to this comment | 06-21-10 2:23 PM
horizontal rule
118

114: IIRC they've also lobbied against withholding, on the theory that if everyone had to write a big fat check to the government taxes would be even less popular. I think the so-called "Fair Tax" is in part motivated by a desire to remind people about taxes every time they make a purchase.


Posted by: togolosh | Link to this comment | 06-21-10 2:24 PM
horizontal rule
119

Speaking of Germans and flushing (not that anybody else is connecting them), I think I've mentioned how creepy I find German toilets. You know there are problems in a society when, "I want a good view of my poo before it goes to the sewer" is something people say to their plumbing fixture designers.


Posted by: Moby Hick | Link to this comment | 06-21-10 2:25 PM
horizontal rule
120

110 -- Seriously, how is it possible that someone who was in college in the past 30 years -- in the sciences, no less -- has never heard of the Asian flush?

How is it possible that someone from China hasn't? I'm puzzled. Though he also likes to joke about how the Japanese think sake is as strong as wine when really it has the same alcohol content as American beer, which is also false, so I guess this person just isn't to be trusted on matters alcohol.


Posted by: essear | Link to this comment | 06-21-10 2:37 PM
horizontal rule
121

Oh hey, Btock-style (not like that!) question: yesterday I bought some prepackaged hummus and put it in my hotel refrigerator. Today I discover that the hotel refrigerator was unplugged the whole time. How bad is hummus likely to get in ~20 hours, at room temperature?


Posted by: essear | Link to this comment | 06-21-10 2:38 PM
horizontal rule
122

121: Not bad at all would be my guess.


Posted by: peep | Link to this comment | 06-21-10 2:42 PM
horizontal rule
123

Can't you feel not-cold?


Posted by: Moby Hick | Link to this comment | 06-21-10 2:42 PM
horizontal rule
124

Is it open, essear? I'm guessing it's fine either way, since there's not much to spoil, especially if it has oil and lemon juice in it.


Posted by: A White Bear | Link to this comment | 06-21-10 2:43 PM
horizontal rule
125

120: Possibly you encountered the Chinese version of a bro?

121: Try it and report back! (I'd eat it if it smelled all right.)


Posted by: Not Prince Hamlet | Link to this comment | 06-21-10 2:44 PM
horizontal rule
126

Can't you feel not-cold?

I'm not sure how I didn't notice. Opened the refrigerator, tossed it in, didn't pay attention, I guess.

It's not open. But no lemon juice in it. Says "Keep Refrigerated" on the label. Nothing obviously amiss, though. I guess I'll eat it.


Posted by: essear | Link to this comment | 06-21-10 2:46 PM
horizontal rule
127

120: If the one type of sake I have tried is representative of the flavor of the drink, I think I'd rather have unrefrigerated hummus.


Posted by: Moby Hick | Link to this comment | 06-21-10 2:47 PM
horizontal rule
128

Hummus lasts forever in the fridge, so I bet it would last pretty well not-in the fridge.


Posted by: Sifu Tweety | Link to this comment | 06-21-10 2:48 PM
horizontal rule
129

Tastes pretty good.


Posted by: essear | Link to this comment | 06-21-10 2:50 PM
horizontal rule
130

The hummus I usually buy has "sell by" dates a week or two ahead. I've never gone past because I like hummus. I'm trying to eat "not cheese" as my pre-sleep snack.


Posted by: Moby Hick | Link to this comment | 06-21-10 2:51 PM
horizontal rule
131

OT: As several of our UK commenters know, I'm coming to London in late July and want to put out meetup feelers. But I also have a dear friend who's spending the whole summer there doing research and would love to possibly get together with some nice people if you're up for it! She's lovely, charming, friendly, etc. Anyhow, if anyone's up to go out with a friend of mine after work or something, let me know and I'll put you in touch!


Posted by: A White Bear | Link to this comment | 06-21-10 2:52 PM
horizontal rule
132

Huh -- I'm just about to put up a meetup post for the first week in July, when I'll be there. Your friend should show for that one.


Posted by: LizardBreath | Link to this comment | 06-21-10 2:59 PM
horizontal rule
133

Oh excellent!


Posted by: A White Bear | Link to this comment | 06-21-10 3:00 PM
horizontal rule
134

132: Ah crap -- she will be visiting her family from 7/1-11. Poop.


Posted by: A White Bear | Link to this comment | 06-21-10 3:03 PM
horizontal rule
135

Re 121: I've let a lot of comments go by without comment, but I want to say that I believe I've garnered an unfair reputation for eating weird things. I'm actually an unusually conservative eater. If there's not a package, with a sell-by date on it, I'm usually disinclined to eat it, because I don't trust my eyes and nose to tell me what's fresh and safe and what's not. There was the one time with the cottage cheese, which I thought had magically become bleu cheese, but (1) that was only once, (2) it had a package with an expiration date, which had not passed, and (3) John Emerson told me it was okay to eat. And I've learned my lesson--cottage cheese that looks and smells and tastes like bleu cheese isn't good for eating, regardless of what the expiration date might say.


Posted by: Brock Landers | Link to this comment | 06-21-10 3:20 PM
horizontal rule
136

There was the one time

"But I don't think the cup actually melted into liquid. It just became brown and squatty and deformed. The egg inside seemed okay."


Posted by: Mr. Blandings | Link to this comment | 06-21-10 3:28 PM
horizontal rule
137

IF ONLY 135 WERE TRUE, I MIGHT STILL BE ALIVE TODAY.


Posted by: OPINIONATED MELANGE OF MELTED PLASTIC AND EGG | Link to this comment | 06-21-10 3:29 PM
horizontal rule
138

Oh, that too I guess, arguably. But that egg was perfectly fine. Not expired--that's why I wanted to eat it.

So 1.5 incidents, at most.


Posted by: Brock Landers | Link to this comment | 06-21-10 3:30 PM
horizontal rule
139

Oh, and the thing about the moldy bread, I guess.


Posted by: Brock Landers | Link to this comment | 06-21-10 3:33 PM
horizontal rule
140

Does it help if we tell you that it's endearing? At least I find it endearing.


Posted by: heebie-geebie | Link to this comment | 06-21-10 3:33 PM
horizontal rule
141

Let's not forget the time your wife, who knows you best of all, left you breast milk to drink.


Posted by: Cryptic Ned | Link to this comment | 06-21-10 3:39 PM
horizontal rule
142

Does it help if we tell you that it's endearing? At least I find it endearing.

But that's what troubles me. It's not that I mind the comments. They don't bother me. But, the reputation is unearned. It's not the real me. The thing you find endearing is a falsehood.


Posted by: Brock Landers | Link to this comment | 06-21-10 3:40 PM
horizontal rule
143

142: Of whom among us is this not true? I'm not sure what the stereotype of me here is anymore, but it certainly used to be something that was alarmingly dissimilar from how my IRL friends perceived me.


Posted by: A White Bear | Link to this comment | 06-21-10 3:44 PM
horizontal rule
144

This is the internet, Btock. We can only ever be fond of personas here. You've developed a much-loved persona here.

It isn't like my presentation here is filled with nuance.


Posted by: Megan | Link to this comment | 06-21-10 3:44 PM
horizontal rule
145

I think it's part of you now, Brock.


Posted by: Eggplant | Link to this comment | 06-21-10 3:44 PM
horizontal rule
146

143: We just believe that you are unaware of this aspect of yourself.


Posted by: heebie-geebie | Link to this comment | 06-21-10 3:45 PM
horizontal rule
147

Of whom among us is this not true? I'm not sure what the stereotype of me here is anymore, but it certainly used to be something that was alarmingly dissimilar from how my IRL friends perceived me.

I think the stereotype of you here is someone whose IRL friends are all bizarre maniacs, so that's very likely.


Posted by: Cryptic ned | Link to this comment | 06-21-10 3:46 PM
horizontal rule
148

Oh, now that I'm thinking more I guess there was also the hamburger from out of the bottom of the dirty fish tank. But I was intoxicated at the time. (And also I guess the stuffing from the inside of a pillow--same.) Maybe there's something else I've shared in the past but am forgetting now. Fine, I guess the reputation has some basis of support. As long as people understand that, as a general matter, if you proffer inedible foodstuffs, I'm not automatically going to scarf them down.


Posted by: Brock Landers | Link to this comment | 06-21-10 3:52 PM
horizontal rule
149

The friend in 131 is not a bizarre maniac, by the way!


Posted by: A White Bear | Link to this comment | 06-21-10 3:55 PM
horizontal rule
150

148: Like Heebie said, endearing. A little frightening, but endearing.


Posted by: LizardBreath | Link to this comment | 06-21-10 3:58 PM
horizontal rule
151

if you proffer inedible foodstuffs, I'm not automatically going to scarf them down.

Not without discussing it at some length first.


Posted by: Sifu Tweety | Link to this comment | 06-21-10 3:58 PM
horizontal rule
152

143 and 144 are among the truest things ever writ herein. Hereon. Upon.


Posted by: parsimon | Link to this comment | 06-21-10 3:58 PM
horizontal rule
153

there was also the hamburger from out of the bottom of the dirty fish tank. But I was intoxicated at the time. (And also I guess the stuffing from the inside of a pillow--same.)

Was the hamburger soggy? How did I miss these the first time around?


Posted by: heebie-geebie | Link to this comment | 06-21-10 4:07 PM
horizontal rule
154

The hamburger was pulled from the bottom of a fishtank--of course it was soggy.

(I'm not about to wage battle against the hoohole over this. I lose those fights every time I try.)


Posted by: Brock Landers | Link to this comment | 06-21-10 4:12 PM
horizontal rule
155

I once ate a Twinkie that had been buried in a potted plant for a year.


Posted by: clew | Link to this comment | 06-21-10 4:14 PM
horizontal rule
156

The hamburger was pulled from the bottom of a fishtank--of course it was soggy.

Hey, I was just trying to give you the benefit of the doubt. Maybe the fish tank was empty. But by all means eat a hamburger from the bottom of a tank full of nasty algae water and fish. It's, uh, endearing.


Posted by: heebie-geebie | Link to this comment | 06-21-10 4:23 PM
horizontal rule
157

155 is also special.


Posted by: heebie-geebie | Link to this comment | 06-21-10 4:23 PM
horizontal rule
158

I realize my 64 made more sense inside my head. I meant to communicate that I thought 60 was a useful summary for pushing back against the worst of the technotopia offenders (The human brain is just like a computer, if you break it down into little parts! See?), but even more than that I was responding to this:

natural language is full of ambiguous constructs, and humans apparently resolve them on the basis of that shared background and past experiential learning.

Sifu is of course right that there are other things going on as well. But speaking as someone who spends inordinate amounts of time trying to figure out why miscommunication happens, what leads people toward one of many ambiguous interpretations, how to structure meetings and policy discussions so that people don't end up talking at cross-purposes, etc., I thought it was a wonderful reminder of things to take into context.

It's most apparent in a cross-cultural context, of course, but there are lots of times when you think someone shares a background and experience with you -- or even worse, you don't even stop to think about it -- and then you turn out to be wrong, and something terribly important founders.

I hate avoidable disasters, is what I'm saying, and joel managed to synthesize something useful about one of my hobbyhorses, and I was/am grateful.

She says, hoping that her communication is clear and unambiguous.


Posted by: Witt | Link to this comment | 06-21-10 4:29 PM
horizontal rule
159

I said I was intoxicated. It was the last White Castle burger, and we were all too drunk to drive out and buy more. What was I supposed to do? (Someone threw it in there in an effort to feed the fish, since we were out of fish food, but the fish ignored it.)


Posted by: Brock Landers | Link to this comment | 06-21-10 4:32 PM
horizontal rule
160

Wait, with the bun and everything?


Posted by: jms | Link to this comment | 06-21-10 4:34 PM
horizontal rule
161

Brock, the hamburger was with bun? And was in there long enough for you to determine that the fish were ignoring it? Okay, you seem to have survived.

Carry on.


Posted by: parsimon | Link to this comment | 06-21-10 4:37 PM
horizontal rule
162

Yes, with the bun. I'm not sure what else goes into "and everything". I can't say I recall for sure, but I don't think there was any ketchup or mustard, for example. Just your typical burger/pickles/bun. Maybe cheese, who knows.

This is the opposite of the direction I intended 135 to lead.


Posted by: Brock Landers | Link to this comment | 06-21-10 4:38 PM
horizontal rule
163

(The human brain is just like a computer, if you break it down into little parts! See?)

By some definition of "computer", sure it is. How that is meaningful or useful is of course a much bigger question.


Posted by: Sifu Tweety | Link to this comment | 06-21-10 4:39 PM
horizontal rule
164

And was in there long enough for you to determine that the fish were ignoring it?

We're not talking about a matter of weeks. Less than a few hours. This all happened over the course of a single night.


Posted by: Brock Landers | Link to this comment | 06-21-10 4:40 PM
horizontal rule
165

Did you count the fish before and after that night?


Posted by: LizardBreath | Link to this comment | 06-21-10 4:42 PM
horizontal rule
166

I... can't think about this hamburger. If I think right at it, I may never ingest food again.


Posted by: A White Bear | Link to this comment | 06-21-10 4:43 PM
horizontal rule
167

Less than a few hours.

And now I'm outright laughing. It's cool -- you apparently didn't mind chowing the thing down, and didn't suffer ill effects beyond any hangover that may have occurred, so it's all good.


Posted by: parsimon | Link to this comment | 06-21-10 4:47 PM
horizontal rule
168

Think about the bun, AWB! Sitting in the fish tank for some period "less than a few hours!"

OK Brock, now tell us about the pillow!


Posted by: jms | Link to this comment | 06-21-10 4:48 PM
horizontal rule
169

(whimper)


Posted by: A White Bear | Link to this comment | 06-21-10 4:51 PM
horizontal rule
170

Sifu, somehow I suspect you and I don't hang around the same type of techno-cheerleaders. I'm not talking about people making an approximately right, loose analogy.


Posted by: Witt | Link to this comment | 06-21-10 4:52 PM
horizontal rule
171

Let's not forget the bizarre (and scary) loss of weight. I was worried for you there.


Posted by: md 20/400 | Link to this comment | 06-21-10 5:03 PM
horizontal rule
172

Less than a few hours.

Fuck me, in tears at my desk again.

I'm actually an unusually conservative eater.


Posted by: JP Stormcrow | Link to this comment | 06-21-10 5:04 PM
horizontal rule
173

168.2: RTFA! (I tried to find it, but the hoohole wins again.) Short story: inadvertently ingested LSD (I think?), and I somehow became convinced the stuffing from the pillow was cotton candy, and I ate it all. Mmm, fiber.

But I wouldn't do that in my right mind.

(Incidentally, in looking in the archives, unsuccessfully, I did come across this over-two-year-old statement of the fact presented in 135:

I have a strong aversion to any food that does not come in a sealed package with a clear expiration date printed on it.

So, see, as I said, really, other than a few unfortunate aberrations, I'm very conservative with this stuff.)


Posted by: Brock Landers | Link to this comment | 06-21-10 5:05 PM
horizontal rule
174

other than a few unfortunate aberrations, I'm very conservative with this stuff

Your aberrations have made up for in magnitude what they lack in quantity.


Posted by: CJB | Link to this comment | 06-21-10 5:10 PM
horizontal rule
175

Other than a few unfortunate aberrations, I've never killed anyone.


Posted by: Charles Manson | Link to this comment | 06-21-10 5:14 PM
horizontal rule
176

Just your typical burger/pickles/bun. Maybe cheese, who knows.

And, of course, fish poop. Not to put too fine a point on it.


Posted by: Jesus McQueen | Link to this comment | 06-21-10 5:14 PM
horizontal rule
177

Aside from the fish poop, algae is pretty much the same as lettuce.


Posted by: Not Prince Hamlet | Link to this comment | 06-21-10 5:15 PM
horizontal rule
178

So, see, as I said, really, other than a few unfortunate aberrations, I'm very conservative with this stuff.

A strong aversion to eating such wacky things as, say, all fresh produce doesn't really contrast with the aberrations -- it just fits into a grand, delightful crazy-eatin' whole.


Posted by: redfoxtailshrub | Link to this comment | 06-21-10 5:25 PM
horizontal rule
179

Short story: inadvertently ingested LSD (I think?), and I somehow became convinced the stuffing from the pillow was cotton candy, and I ate it all. Mmm, fiber.

This is a story straight out of Go Ask Alice. Fantastic! (It is not much more usual to eat a pillow while tripping than it is to eat one at any other time.)


Posted by: redfoxtailshrub | Link to this comment | 06-21-10 5:28 PM
horizontal rule
180

inadvertently ingested LSD (I think?)

I'm afraid to ask what you thought you were eating.


Posted by: essear | Link to this comment | 06-21-10 5:32 PM
horizontal rule
181

"desires" are pretty easy to model...modeling emotions should be pretty doable as well)

so instrumentalist. The computer still won't want to do anything. Until we make some kind of new breakthrough computers will remain just very sophisticated tools for their programmers.

Of course, that's not incompatible with computers deskilling lawyers. No reason they should become more humanlike to do that. The striking thing about Deep Blue is how un-human it was, how it didn't replicate a human chess player's modes of thought but instead optimized the strengths of the computer. Watson seems the same way.

Direct modeling of biological cognition is of course possible -- look at the Blue Brain project, among other things -- if not on the scale you'd need to do anything practical.

This doesn't necessarily get you out of the chinese room. I'm sure it would be helpful, though.


Posted by: PGD | Link to this comment | 06-21-10 5:53 PM
horizontal rule
182

It was the last White Castle burger, and we were all too drunk to drive out and buy more.

I actually find it a little more disturbing to eat a White Castle burger than a burger from the bottom of a fishtank. I really don't get White Castle.

It is not much more usual to eat a pillow while tripping than it is to eat one at any other time

very true!


Posted by: PGD | Link to this comment | 06-21-10 6:04 PM
horizontal rule
183

178: well, you say that, but I was actually eating some lettuce just about a week ago, and a caterpillar climbed right out of the bowl. He'd been hiding in the lettuce. Caterpillar poop: better or worse than fish poop? No different, I'd say.


Posted by: Brock Landers | Link to this comment | 06-21-10 6:07 PM
horizontal rule
184

was actually eating some lettuce just about a week ago, and a caterpillar climbed right out of the bowl

The perils of organic produce.


Posted by: Tassled Loafered Leech | Link to this comment | 06-21-10 6:09 PM
horizontal rule
185

It is not much more usual to eat a pillow while tripping than it is to eat one at any other time

I'm aware of this. It's one major reason for the "(I think?)" in 173. It was during an outdoor music festival, if that adds any explanatory value. Regardless, the whole thing's on video.


Posted by: Brock Landers | Link to this comment | 06-21-10 6:12 PM
horizontal rule
186

It is not much more usual to eat a pillow while tripping than it is to eat one at any other time

I'm aware of this. It's one major reason for the "(I think?)" in 173. It was during an outdoor music festival, if that adds any explanatory value. Regardless, the whole thing's on video.


Posted by: Brock Landers | Link to this comment | 06-21-10 6:12 PM
horizontal rule
187

183: Just think of it as pre-butterfly poop. I mean, everyone likes butterflies.


Posted by: Stanley | Link to this comment | 06-21-10 6:12 PM
horizontal rule
188

186: If you put it up on YouTube and give us the link, we'll all send you money. You could buy yourself White Castle hamburgers!


Posted by: LizardBreath | Link to this comment | 06-21-10 6:15 PM
horizontal rule
189

Or, of course, anything else that it seemed like a good idea to eat.


Posted by: LizardBreath | Link to this comment | 06-21-10 6:16 PM
horizontal rule
190

I only like artisanal hand-selected butterfly poop produced by butterflies that are not coerced or caged.


Posted by: essear | Link to this comment | 06-21-10 6:16 PM
horizontal rule
191

so instrumentalist. The computer still won't want to do anything. Until we make some kind of new breakthrough computers will remain just very sophisticated tools for their programmers.

You say this, and have said it before, but that doesn't make it not silly. You can assert it all you want, but still: silly.

The chinese room is kind of a clever paradox, and makes potentially interesting points about the philosophical grounding of "Strong AI", but it explicitly has fuck-all to do with what is actually possible in terms of simulating language acquisition on a computer. That you keep bringing this position up indicates to me that you completely fail to understand this elemental point, which Searle was entirely clear on thirty years ago.

Also, if you can build a chinese room? You understand how language works. It is not possible to do it otherwise.


Posted by: Sifu Tweety | Link to this comment | 06-21-10 6:17 PM
horizontal rule
192

183: Through the wonders of techmology, I'm able to wash lettuce and such right in the comfort of my own kitchen.


Posted by: Not Prince Hamlet | Link to this comment | 06-21-10 6:18 PM
horizontal rule
193

The striking thing about Deep Blue is how un-human it was, how it didn't replicate a human chess player's modes of thought but instead optimized the strengths of the computer.

And if you were aware of any of the developments in chess AI over the past thirteen years, you would know that the current state of the art revolves around much more human-like strategies hinging less on how many moves you can look ahead and more on inference from specific learned sub-configurations on the board.

And just in case you're going to go and read about that and come back and say "but the human programmers are telling the machine some of the sub-configurations to look for!" take a moment to think about how human players learn strategy.
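To make "inference from learned sub-configurations" concrete, here's a minimal toy sketch in Python. The patterns, weights, and board encoding are all invented for illustration; no real engine works from a table this small, but the shape of the idea — score a position by the learned patterns it matches, rather than by brute-force lookahead — is the same:

```python
# Toy pattern-based position evaluator: instead of searching many plies
# ahead, score a position by summing the weights of learned
# sub-configurations it contains. Patterns and weights are invented.

# A position is a dict mapping square names to pieces ("wP" = white pawn).
LEARNED_PATTERNS = [
    # (name, predicate over a position, weight in centipawns)
    ("doubled_white_pawns_e",
     lambda p: p.get("e4") == "wP" and p.get("e5") == "wP", -30),
    ("rook_on_open_file",
     lambda p: p.get("d1") == "wR" and not any(
         p.get("d%d" % r, "").endswith("P") for r in range(2, 8)), +25),
    ("knight_on_rim",
     lambda p: "wN" in (p.get("a4"), p.get("h4")), -15),
]

def evaluate(position):
    """Sum the weights of all matched patterns (from white's point of view)."""
    return sum(w for _, match, w in LEARNED_PATTERNS if match(position))

pos = {"d1": "wR", "a4": "wN", "e2": "wP"}
score = evaluate(pos)  # rook on open file (+25) + knight on rim (-15) = 10
```

A human player's "learned strategy" is, loosely, a much larger and fuzzier version of that table, acquired from teachers and games rather than typed in.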


Posted by: Sifu Tweety | Link to this comment | 06-21-10 6:20 PM
horizontal rule
194

192: In the text-based adventure game that is Brock's life, you only get to use the command "WASH" after helping the appropriate kindly villager who teaches it to you.


Posted by: essear | Link to this comment | 06-21-10 6:23 PM
horizontal rule
195

Tweety, fess up: you're the Nerd Terminator, here to compile a target list for Skynet.


Posted by: Not Prince Hamlet | Link to this comment | 06-21-10 6:25 PM
horizontal rule
196

The very idea!


Posted by: nosflow | Link to this comment | 06-21-10 6:27 PM
horizontal rule
197

194:

You are in the kitchen. There is a basket of vegetables.
>> Wash vegetables
You do not know how to do this
>> Google vegetable washing
You are very hungry
>> Put vegetables in sink
The sink is full of frozen hamburgers. You are faint from hunger.
>> Turn on water
The sink water courses over the hamburgers, turning the buns slimy. You notice that the vegetables are covered in insects. You are faint from hunger.
>> Ask internet about eating insects
The internet is encouraging. You are faint from hunger.
>> Eat everything
YOU WIN


Posted by: Sifu Tweety | Link to this comment | 06-21-10 6:27 PM
horizontal rule
198

Sifu's offended because secretly he is, himself, a robot.


Posted by: essear | Link to this comment | 06-21-10 6:27 PM
horizontal rule
199

I don't intend to be defending the claims of strong AI, necessarily, by the way. The goal of exactly simulating human consciousness is either impossibly far off, impossible, or merely deeply silly. But to leap from that to "well, natural language processing is pretty much impossible" or "you can't model emotions in a computer!" bespeaks a vast ignorance of what's actually happening in machine learning.


Posted by: Sifu Tweety | Link to this comment | 06-21-10 6:30 PM
horizontal rule
200

And to say "well, even if you could simulate emotions, they wouldn't be emotions" in an internet thread makes me want to make fun of you, as may be clear by this point.


Posted by: Sifu Tweety | Link to this comment | 06-21-10 6:31 PM
horizontal rule
201

But the truth is I'm IBM's latest project: a computer that can simulate a friendly, non-judgmental, relentlessly positive blog commenter.


Posted by: Sifu Tweety | Link to this comment | 06-21-10 6:33 PM
horizontal rule
202

I know we've had this entire conversation previously, but washing fruits and vegetables never seems worthwhile. Especially something like lettuce. What's water going to do? It's not like I could scrub the things with soap.

I do sometimes dip them under the faucet briefly, but my heart's never in it. If the food was so dirty that I genuinely believed it needed to be washed, I probably just wouldn't eat it.


Posted by: Brock Landers | Link to this comment | 06-21-10 6:33 PM
horizontal rule
203

Or, of course, anything else that it seemed like a good idea to eat.

A tin can, perhaps. Maybe Brock has satyr ancestry.


Posted by: Megan | Link to this comment | 06-21-10 6:34 PM
horizontal rule
204

bespeaks a vast ignorance of what's actually happening in machine learning

Well, I definitely have that, but: are computers really being programmed to have emotions (or emotional responses to things)? If so, how does that work?


Posted by: Robert Halford | Link to this comment | 06-21-10 6:35 PM
horizontal rule
205

THAT'S RIGHT, BUDDY. NO SENSE AT ALL IN WASHING LETTUCE. DON'T LISTEN TO THE HATERS.


Posted by: OPINIONATED CATERPILLAR | Link to this comment | 06-21-10 6:35 PM
horizontal rule
206

201: Shearer, apparently, was version 1.0.


Posted by: LizardBreath | Link to this comment | 06-21-10 6:36 PM
horizontal rule
207

197: To get the good ending to the game you have to figure out what to do with the BEAN THING.


Posted by: essear | Link to this comment | 06-21-10 6:36 PM
horizontal rule
208

Good Christ, Brock, how did you feel the day after you ate the pillow? What does that do to one's system?


Posted by: Jackmormon | Link to this comment | 06-21-10 6:39 PM
horizontal rule
209

Brock, I'm part water engineer and part vegetarian, and I wouldn't lie to you. I assure you that putting the produce under a stream of running water can dislodge solids that aren't part of the produce from the surface. Sometimes even solids that are clinging to the lettuce by several tiny caterpillar feet.


Posted by: Megan | Link to this comment | 06-21-10 6:39 PM
horizontal rule
210

Maybe Brock has satyr ancestry.

Nice work there, tying in Brock's Neighbors-At-The-Window incident.


Posted by: persistently visible | Link to this comment | 06-21-10 6:41 PM
horizontal rule
211

I actually agree with Sifu (and have the hubris to disagree with Searle) : I think the Chinese room understands the Chinese language.

My remark at 60 was not an effort to explain why natural language processing is impossible, but merely to explain why it is difficult.

Many early AI investigators optimistically thought that we'd have HAL-like machines conversing with us by the turn of the millennium, and the popular imagination and SF writers were, I suppose, guilty of inflating these expectations. However, optimists were repeatedly confounded by the difficulty of reliably resolving the ambiguities in natural language that humans resolve through "common sense" -- the required background being so innate in humans that few seem to have understood the extent of the required knowledge of the world and of the enormous context that we bring to bear on determining "what makes sense".

The problem of giving computers common sense has turned out to be a difficult one.

Sifu's remark that computers have senses is, I think, a bit of a stretch. Computers have "sight" through cameras, but sight is just a part of vision, and no one yet has a good model of the complex image processing done in the first layers of the visual cortex. People aren't cameras -- much of sight lies in discrimination: in which details we notice and which we suppress.

No computer I know of has even a scrap of the kind of integrated touch/pressure/hot/cold/pain/pleasure sensory membrane that covers my entire body.

I'm not saying these problems are insuperable; I don't think that they are. I am saying that early optimists were surprised at the number of such problems and at their depth, and that to my knowledge the problem of making a computer understand natural language and respond appropriately using natural language is still considered difficult after decades of research.

I'd love to talk with HAL or Shalmaneser before I die, but I don't expect to have the opportunity.


Posted by: joel hanes | Link to this comment | 06-21-10 6:41 PM
horizontal rule
212

I'm part water engineer and part vegetarian

So you call your mom "Mwater" and your dad "Potater"?


Posted by: Stanley | Link to this comment | 06-21-10 6:42 PM
horizontal rule
213
Dec. 15, 2005: OR-OSHA issues five additional citations against Threemile Canyon Farms. Inspection documentation obtained by the agency establishes Threemile has a policy of denying OR-OSHA compliance officers on-site unless they have a warrant. The Farm later confirmed this policy in the February 3, 2006 article titled; "State Cites Threemile Canyon Farm over health, safety issues" in the Tri-City (WA) Herald newspaper. The violations include: 1) Not providing enough bathrooms for employees 2) Failing to provide workers with information about pesticides used in the break room, 3) Allowing workers to eat their lunches in the break room where an insecticide not approved for use around humans had been applied, 4) Allowing trash receptacles to overflow, 5) Not maintaining screens in the break room windows (OR-OSHA inspection number 308460757(93))

You know what happens when you don't provide bathrooms, or when you forbid your workers to take bathroom breaks? They go to the bathroom in the fields, with no place to wash up after themselves.

It's appalling for the workers, and it's not very nice for the people eating the produce later.


Posted by: Witt | Link to this comment | 06-21-10 6:42 PM
horizontal rule
214

Don't bring those things near my family.


Posted by: Megan | Link to this comment | 06-21-10 6:45 PM
horizontal rule
215

209: Then why does washing your hands with just water not do much good?


Posted by: Brock Landers | Link to this comment | 06-21-10 6:45 PM
horizontal rule
216

204: well, it depends how you define emotion. Robots have been programmed to respond to aversive or attractive stimuli in basic conditioned-response kind of ways for decades, so if you think lobsters have emotions, then there you go. If you define emotion as human-like reactions with physiological elements and tears and whatever, then no, nobody's simulated that particularly, but there has been a lot of work in the affective computing realm at making robots that learn behaviors that are recognizable to human interlocutors as emotional response (see e.g. kismet). If you define emotion as a complex learned reaction to stimuli that causes changes in decision-making strategy or changed sensory profile or whatever, there's been lots of work, but you wouldn't particularly recognize it as "computer being programmed to have emotions", as it's mostly on the level of modeling behavior. But if you wanted to make, say, a chess program that got really angry when it lost and subsequently played shitty, I could point you to some relevant papers.

If you define emotion as, you know, the gladness suffusing the heart of the devout man as he gazes upon a lily in the morning dew, well, no, the devout-man-gazing-upon-a-lily box thought-experiment remains intact.


Posted by: Sifu Tweety | Link to this comment | 06-21-10 6:45 PM
horizontal rule
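A sketch of the "chess program that gets angry when it loses and subsequently plays badly" idea from 216, purely as a toy: the "emotion" here is just a scalar internal state that losses push up and that degrades action selection. Everything in it (the class and method names, the update constants) is invented for illustration, not taken from any real affective-computing system.

```python
import random

class FrustratedAgent:
    """Toy agent whose 'emotion' is a scalar that modulates decision-making."""

    def __init__(self):
        self.frustration = 0.0  # 0 = calm, 1 = maximally tilted

    def update(self, won):
        # Losses raise frustration, wins calm it down; clamp to [0, 1].
        delta = -0.3 if won else 0.2
        self.frustration = max(0.0, min(1.0, self.frustration + delta))

    def choose(self, moves_ranked_best_first):
        # The more frustrated the agent, the more likely it ignores the
        # best move and plays something random instead.
        if random.random() < self.frustration:
            return random.choice(moves_ranked_best_first)
        return moves_ranked_best_first[0]

agent = FrustratedAgent()
for _ in range(5):
    agent.update(won=False)
print(agent.frustration)  # 1.0 after five straight losses
```

Nothing here is "feeling" anything, of course; it's just the behavioral-modeling sense of emotion described in 216, in which an internal state changes the decision-making strategy.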
217

209: DON'T LISTEN TO HER, BROCK! YOU CAN'T TRUST PEOPLE LIKE THAT! YOUR METHODS WORK GREAT. JUST STAY YOUR COURSE.


Posted by: OPINIONATED CATERPILLAR | Link to this comment | 06-21-10 6:47 PM
horizontal rule
218

211: I think we actually aren't disagreeing; you're saying that the kind of holistic simulation projects that are traditionally defined as "AI" are extremely difficult and far off, and I totally agree. I'm just saying that there's no individual piece that we aren't learning more about how to model all the time, and all of those pieces are basically (as far as we know) susceptible to being fruitfully understood through computational models.

I usually don't put it that way because it seems nuanced and dismissive, when the truth is that the amount of progress that's been made in just the past 10 years in modeling tasks that were previously thought to be intractably "human" and judgment-based and whatever is pretty astonishing. The grand failure of the Minsky-Chomsky nexus of big-thinking wrong people is no longer particularly relevant to what is or isn't possible; they just had the wrong approach.


Posted by: Sifu Tweety | Link to this comment | 06-21-10 6:49 PM
horizontal rule
219

For the record, I believe the lettuce in which the caterpillar resided had in fact been washed. Not very enthusiastically, maybe, but still.


Posted by: Brock Landers | Link to this comment | 06-21-10 6:50 PM
horizontal rule
220

Thanks! Interesting!


Posted by: Robert Halford | Link to this comment | 06-21-10 6:51 PM
horizontal rule
221

You know, them salad spinners'll set a caterpillar whipping right out of the lettuce, Brock. It's like a carnival ride for them.


Posted by: Sifu Tweety | Link to this comment | 06-21-10 6:51 PM
horizontal rule
222

Washing your hands with water doesn't get off loose pieces of dirt and whole caterpillars?

If you're talking about dirt that stays after putting your hands under the water, I'd guess that ionized clay particles are adhering more closely to the oils on your skin than to the slightly ionized water. But lettuce doesn't have a charged surface, to my knowledge, so I bet washing vegetables is more effective than washing your hands. I also think that for your purposes, getting any larger clumps of dirt or uppity caterpillars off the lettuce would suffice.


Posted by: Megan | Link to this comment | 06-21-10 6:52 PM
horizontal rule
223

221: STOP HARASSING THE POOR MAN! HE KNOWS WHAT HE'S DOING!


Posted by: OPINIONATED CATERPILLAR | Link to this comment | 06-21-10 6:53 PM
horizontal rule
224

AND WHO ARE YOU CALLING UPPITY, MISSY?


Posted by: OPINIONATED CATERPILLAR | Link to this comment | 06-21-10 6:53 PM
horizontal rule
225

Calling you uppity, tubeworm.

Please do not send a plague.


Posted by: Megan | Link to this comment | 06-21-10 6:57 PM
horizontal rule
226

I'd love to talk with HAL or Shalmaneser before I die, but I don't expect to have the opportunity.

Joel Hanes was a yonderboy.


Posted by: snarkout | Link to this comment | 06-21-10 7:04 PM
horizontal rule
227

I have a beer that had been forgotten in my car's trunk in hot weather for weeks; I rediscovered it and put it in my fridge yesterday, and I expect it will be fine when I drink it tonight. As a gesture of solidarity with Brock.


Posted by: persistently visible | Link to this comment | 06-21-10 7:06 PM
horizontal rule
228

I only like artisanal hand-selected butterfly poop produced by butterflies that are not coerced or caged.

You can't coerce butterflies. Butterflies are free! and flighty.


Posted by: | Link to this comment | 06-21-10 7:08 PM
horizontal rule
229

218 : yes, I think we're in vehement agreement.

What do you think of the book recommended at 196 ? I'm years out of date, and could use a concise refresher.

As for 201 and "friendly, non-judgmental, relentlessly positive" -- apparently you boot from different ROMs when working at the Institute.

Veering wildly into the washing vegetables thread : the kinds of bacteria that live on your skin are more likely to cause sickness than the kinds of bacteria that live on lettuce -- unless the lettuce is contaminated a la 213.

The cell membrane of most bacteria is made of a "lipid bilayer" -- it's a grease bubble. Also, healthy human skin is covered with sebum, a light grease that tends to protect bacteria unless you use soap (soap has the wonderful property of dissolving grease, including lipid bilayers -- it kills bacteria by breaking down the cell membrane).


Posted by: joel hanes | Link to this comment | 06-21-10 7:11 PM
horizontal rule
230

I find insufficiently-washed lettuce to be inedibly gritty. I buy it, separate the leaves, wash each one thoroughly on both sides, shake it quite dry, and then wrap it in paper towels for later use. Maybe this is not encouraging to the lazy produce-eater, but lettuce wants washing.


Posted by: A White Bear | Link to this comment | 06-21-10 7:11 PM
horizontal rule
231

I would probably not drink that beer without a taste test first, but that's because I believe in so-called skunking, which may or may not have been exposed as an urban legend by now.

Brock, it really is useful to wash your veggies, especially greens. Dunk them in a bowl of water a few times, then into the salad-spinner, yay! They'll last longer that way, too.


Posted by: parsimon | Link to this comment | 06-21-10 7:13 PM
horizontal rule
232

What do you think of the book recommended at 196 ? I'm years out of date, and could use a concise refresher.

I don't know it. I can recommend textbook-y books, but I haven't read any lay overviews in a long time.


Posted by: Sifu Tweety | Link to this comment | 06-21-10 7:13 PM
horizontal rule
233

As for 201 and "friendly, non-judgmental, relentlessly positive" -- apparently you boot from different ROMs when working at the Institute.

It's just so difficult to model these things. Computers will never be friendly, non-judgmental and relentlessly positive like humans are.


Posted by: Sifu Tweety | Link to this comment | 06-21-10 7:13 PM
horizontal rule
234

I would probably not drink that beer without a taste test first

Maybe I'm giving PV more credit than due, parsi, but I suspect he or she is going to smell it and/or sip it rather than going for a pop-the-top-and-start-chugging approach. But, hey, I could be wrong.


Posted by: Stanley | Link to this comment | 06-21-10 7:17 PM
horizontal rule
235

222: But lettuce doesn't have a charged surface, to my knowledge,...

And you call yourself a science type person. Get some romaine and a voltmeter, then comment.


Posted by: | Link to this comment | 06-21-10 7:19 PM
horizontal rule
236

oops. 228 was me


Posted by: Turgid JacobianI only like artisanal hand-selected butterfly poop produced by butterflies that are n | Link to this comment | 06-21-10 7:20 PM
horizontal rule
237

235 was me.


Posted by: Moby Hick | Link to this comment | 06-21-10 7:21 PM
horizontal rule
238

234: SHOTGUN


Posted by: Sifu Tweety | Link to this comment | 06-21-10 7:22 PM
horizontal rule
239

Jesus. I messed up my back, and apparently my comment boxes.


Posted by: Turgid Jacobian | Link to this comment | 06-21-10 7:22 PM
horizontal rule
240

What is a drink of beer, if not a taste test? Or did you mean you wouldn't drink it without a third-party taste test?

I believe in so-called skunking, which may or may not have been exposed as an urban legend by now

I have no idea what this sentence could mean. I've always understood "skunking" just to mean beer that went bad (or perhaps never was right), whatever the cause. How could that be an urban legend? I've had bad beers plenty of times. Does the term "skunking" refer to some more specific phenomomnom that I'm unfamiliar with?


Posted by: Brock Landers | Link to this comment | 06-21-10 7:23 PM
horizontal rule
241

I thought skunked beer had froze and thawed too many times, or something.


Posted by: heebie-geebie | Link to this comment | 06-21-10 7:24 PM
horizontal rule
242

I can recommend textbook-y books

Would you then, please ?


Posted by: joel hanes | Link to this comment | 06-21-10 7:25 PM
horizontal rule
243

241 et seq.: it's caused by light.


Posted by: Sifu Tweety | Link to this comment | 06-21-10 7:26 PM
horizontal rule
244

241: I think one time is too many times, especially if we're talking bottled beer.


Posted by: Moby Hick | Link to this comment | 06-21-10 7:26 PM
horizontal rule
245

Text Booky Boo-ook! (Bow-wow.)


Posted by: heebie-geebie | Link to this comment | 06-21-10 7:26 PM
horizontal rule
246

From the link in 242:
And it's been said that bottled beer can become light-struck in less than one minute in bright sun, after a few hours in diffuse daylight,

This is nonsense. I've had beer that's been sitting in ice-water, out in the sun, for plenty long and it tasted like the champagne of beers.


Posted by: heebie-geebie | Link to this comment | 06-21-10 7:28 PM
horizontal rule
247

I've encountered a new low in problematic hotel wifi. Every time I try to go to a webpage I get redirected to a Qwest Consumer Protection Program page that insists my computer has a possible virus on it and asks me to remove said virus.


Posted by: essear | Link to this comment | 06-21-10 7:29 PM
horizontal rule
248

I suspect he or she is going to smell it and/or sip it

I know. I spoke quickly. I meant that I'd view that beer with a wary eye, that's all.

I was trained to think that skunked beer was that which had been fully chilled and then brought back to room temperature, and even subjected to heat (in a closed car, say), and then chilled again later.

I really think this is probably bullshit. Ish. Though I've had beers that don't seem to stand up well to that kind of treatment.


Posted by: parsimon | Link to this comment | 06-21-10 7:30 PM
horizontal rule
249

242: by the complex process of looking behind me, I come up with Bishop's book, which is great and totally readable and gets you pretty much through kernel methods, Russell and Norvig's book, which is older but also pretty good (and Norvig is chief scientist at Google), and Sutton and Barto's book on reinforcement learning.


Posted by: Sifu Tweety | Link to this comment | 06-21-10 7:30 PM
horizontal rule
250

248 before seeing 243. Interesting!


Posted by: parsimon | Link to this comment | 06-21-10 7:33 PM
horizontal rule
251

249 : thanks. Just what I was looking for.


Posted by: joel hanes | Link to this comment | 06-21-10 7:34 PM
horizontal rule
252

237: Why don't you remove the virus from your computer, then?


Posted by: heebie-geebie | Link to this comment | 06-21-10 7:35 PM
horizontal rule
253

Who else has tried to start drinking and found that the case of Old Milwaukee Light (cans) is frozen? Because that's what happens when you put a case of beer in the trunk of a car and the day's high temperature was still 30 degrees below freezing. Of course, you can't bring the beer inside to thaw for the same reason that you couldn't remove it from the trunk. You're 16 years old. The next thing you know, you're driving in a car with six underage peers, the defrost going on high, eight cans of beer on the dash, the windows open so you don't cook. Of course, no worries as you've already driven by the house of the only cop and he's home. Your sanest friend is in the back, muttering that every oncoming set of headlights could be a state trooper. And, just after the illegal U-turn, he's right.


Posted by: Moby Hick | Link to this comment | 06-21-10 7:36 PM
horizontal rule
254

MY MASTER HAS TOLD ME TO CRY.


Posted by: OPINIONATED SIFU THREEPIO | Link to this comment | 06-21-10 7:37 PM
horizontal rule
255

246: Miller and Corona have negligible hopping, so they're not much degraded by UV.


Posted by: joel hanes | Link to this comment | 06-21-10 7:38 PM
horizontal rule
256

237: Why don't you remove the virus from your computer, then?

Soap to dissolve its lipid bilayer doesn't work, because capsids are made of protein.

Luckily either someone at the hotel has shut down the program that was using the hotel wi-fi to send spam, or some hotel employee dealt with Qwest somehow.


Posted by: essear | Link to this comment | 06-21-10 7:38 PM
horizontal rule
257

253: Are you writing the midwestern suburban version of Less Than Zero?


Posted by: Robert Halford | Link to this comment | 06-21-10 7:39 PM
horizontal rule
258

257: Suburban? We were three hours from the nearest suburb.


Posted by: Moby Hick | Link to this comment | 06-21-10 7:40 PM
horizontal rule
259

Moby brings authenticity when Unfogged talks about hay.


Posted by: Megan | Link to this comment | 06-21-10 7:43 PM
horizontal rule
260

I've spent an hour or so (not recently) trying to find that thread. As far as I can tell, it's gone.


Posted by: Moby Hick | Link to this comment | 06-21-10 7:47 PM
horizontal rule
261

I don't intend to be defending the claims of strong AI, necessarily, by the way.

well, that's nice, and I don't intend to claim that computers can't do lots of cool stuff, and won't do cooler stuff in the future. In that sense we don't disagree.

The goal of exactly simulating human consciousness is either impossibly far off, impossible, or merely deeply silly. But to leap from that to "well, natural language processing is pretty much impossible" or "you can't model emotions in a computer!" bespeaks a vast ignorance of what's actually happening in machine learning.

I never said natural language processing is impossible or that you can't model emotions. My only point is that you shouldn't *assume* you can do this just because you have a lot of computational power. Maybe computation is just so different from what's going on when people do their language thing or react to their emotions that there will be insurmountable complexities going from one to the other, so your computational parallel to people talking and feeling will forever be liable to bizarre slip-ups or have to be structured by programmers for narrow tasks. Emotions are central to human cognition after all. We really don't know, and I suspect we are several basic breakthroughs away from knowing. As I understand it, it's not very controversial to say that a computer will never be able to predict the exact weather at a specific location six months in advance, because the system is just too complex and cannot even in principle be cracked by computing power. How do you know the brain is simpler than the weather?

If you define emotion as a complex learned reaction to stimuli that causes changes in decision-making strategy or changed sensory profile or whatever....If you define emotion as, you know, the gladness suffusing the heart of the devout man as he gazes upon a lily in the morning dew, well, no, the devout-man-gazing-upon-a-lily box thought-experiment remains intact.

Is the point here that "complex learned reaction to stimuli" is man talk suitable for the rat experiments in the hard sciences, but "devout man gazing upon a lily" is pussy shit for liberal arts majors so it's probably not important anyway? Emotions are emotions. That whole comment 216 was, ummm, highly reductionist.


Posted by: PGD | Link to this comment | 06-21-10 7:51 PM
horizontal rule
262

Because you wanted to re-read the best conversation Unfogged ever held? I understand.


Posted by: Megan | Link to this comment | 06-21-10 7:52 PM
horizontal rule
263

I've spent an hour or so (not recently) trying to find that thread. As far as I can tell, it's gone.

Yahoo frequently works better than google for finding things in the archives.

It's the only thing for which I use Yahoo search.


Posted by: NickS | Link to this comment | 06-21-10 7:58 PM
horizontal rule
264

As I understand it, it's not very controversial to say that a computer will never be able to predict the exact weather at a specific location six months in advance, because the system is just too complex and cannot even in principle be cracked by computing power. How do you know the brain is simpler than the weather?

Whoa there. Predicting the exact weather is difficult because of chaos: you have to know initial conditions very very very precisely to get the right answer. This doesn't mean the complexity isn't simulable in principle; you could produce something that simulates the complexity perfectly well, and over an ensemble of runs accurately predicts the statistical properties of real weather. You just can't give it accurate enough inputs. So there's no lesson you can draw from this about the brain: it's true that even with the most advanced imaginable brain simulator, you wouldn't be able to predict what I'll be thinking in six hours. This doesn't mean you can't imagine a brain simulator that accurately models everything a brain does.


Posted by: essear | Link to this comment | 06-21-10 7:59 PM
horizontal rule
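The distinction essear draws in 264 (chaotic-but-simulable versus exactly predictable) is easy to demonstrate. A minimal sketch, using the logistic map x → 4x(1−x) as a standard chaotic toy standing in for "weather" (an illustration only, not a weather model):

```python
def logistic(x, steps):
    """Iterate the chaotic logistic map x -> 4x(1-x) for `steps` steps."""
    for _ in range(steps):
        x = 4.0 * x * (1.0 - x)
    return x

x0 = 0.3
perturbed = x0 + 1e-12   # a tiny error in the "initial conditions"

# Each run is exact and repeatable -- the dynamics are perfectly simulable --
# but the tiny initial error grows exponentially, so the two trajectories
# eventually decorrelate completely:
print(abs(logistic(x0, 10) - logistic(perturbed, 10)))   # still tiny
print(abs(logistic(x0, 60) - logistic(perturbed, 60)))   # typically order-1
```

The statistical properties of long runs are stable and predictable even though any particular trajectory, started from imperfectly measured conditions, is not.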
265

How do you know the brain is simpler than the weather?

Well, the brain is smaller than the weather.


Posted by: Cryptic ned | Link to this comment | 06-21-10 8:00 PM
horizontal rule
266

252 fails the Turing test

247: I just experienced something similar on my Fargo trip last month. Trying to use a tiny, portable computer to access a network which will allow you to communicate with a friend who is only able to communicate via computer messages because she is being paid a large amount of money to stay in a prison-like medical research facility for three weeks, and then being unable to because the network provided by your hotel is unaccountably paranoid and stupid: A problem the science fiction writers of the 1950s would never have imagined. Fargo is a harsh mistress.


Posted by: Natilo Paennim | Link to this comment | 06-21-10 8:00 PM
horizontal rule
267

Bing seemed to be working better than Google for Unfogged searching for a while, but lately I haven't had much luck with it. Maybe they're aping Google too well now.

Speaking of Bing, today a random elderly stranger out of the blue told me I looked like I could be "Bill Gates' little boy". In response to my sort of quizzical gaping she said "it's a compliment!" and I mumbled a confused thanks and walked away as quickly as I could.


Posted by: essear | Link to this comment | 06-21-10 8:02 PM
horizontal rule
268

254: If my understanding is correct, no finite degree of precision in specifying initial conditions is sufficient to allow accurate simulation of a true chaotic system. That's actually the definition I have for "chaotic" : arbitrarily small differences in initial conditions can produce arbitrarily large differences in system behavior. By this understanding, we may someday have much better ten-day forecasts, but we will never have an accurate thirty-day weather forecast.

Interestingly, computational chaos was first discovered by a weather modeler.


Posted by: joel hanes | Link to this comment | 06-21-10 8:09 PM
horizontal rule
269

Thank you NickS. Yahoo could not have been easier. I think I found it, but it seems shorter than I recall. See here.


Posted by: Moby Hick | Link to this comment | 06-21-10 8:13 PM
horizontal rule
270

254: If my understanding is correct, no finite degree of precision in specifying initial conditions is sufficient to allow accurate simulation of a true chaotic system. That's actually the definition I have for "chaotic" : arbitrarily small differences in initial conditions can produce arbitrarily large differences in system behavior. By this understanding, we may someday have much better ten-day forecasts, but we will never have an accurate thirty-day weather forecast.

This is a practical limitation, not an in-principle one. In principle, the better you measure the initial conditions, the longer into the future you can forecast the weather. However, small errors in the initial conditions grow exponentially and overwhelm the long-term accuracy of your simulation, so small improvements in results require very large improvements in input.

All that aside, you seem to be missing my point, so maybe I didn't make it well enough. The question about modeling the brain isn't like the question about predicting the actual real-world weather at some point in the future. It's more like the question of modeling something that has all the same properties as weather, without having to get the inputs of the current state of the real world exactly right. And that's a tractable problem.


Posted by: essear | Link to this comment | 06-21-10 8:14 PM
horizontal rule
271

well, that's nice, and I don't intend to claim that computers can't do lots of cool stuff, and won't do cooler stuff in the future. In that sense we don't disagree.

In the sense where you either didn't read or didn't understand what I said, yes, I suppose so.

Your next paragraph is really quite uninformed, but I will try to unpack it.

[... partial, but already endless unpacking omitted ...]

No, you know what, I'm actually not. You don't know what you're talking about, and you don't seem interested in what you're talking about, so engaging you is going to be one of those horrible arguments where you keep saying the same thing and thinking it's a point, when in fact you're just asserting the same things over and over. You can't just say "well, maybe emotions and language are something that can't be modeled computationally" if you have no idea of what scientists currently believe about what emotions and language actually are. I mean, you can, but then your argument boils down to "but what if science is wrong?"

Well, yeah, good point. What if everything we know is wrong? Fuckin' magnets, how do they work, right?


Posted by: Sifu Tweety | Link to this comment | 06-21-10 8:14 PM
horizontal rule
272

If my understanding is correct, no finite degree of precision in specifying initial conditions is sufficient to allow accurate simulation of a true chaotic system. That's actually the definitition I have for "chaotic" : arbitrarily small differences in initial conditions can produce arbitrarily large differences in system behavior.

This is my definition, too. Somewhere on the web I got into a big argument because I maintain that the Jurassic Park explanation - "Chaos means that a butterfly flaps its wings in China and we have a thunderstorm here!" is wildly misleading. There's nothing in the definition about having far-reaching impact. You could have a chaotic function that only took on values between 2 and 4. And now I've resuscitated the argument and I'm having it with myself.


Posted by: heebie-geebie | Link to this comment | 06-21-10 8:17 PM
horizontal rule
273

By some definition of "computer", sure it is. How that is meaningful or useful is of course a much bigger question.

How about "not more powerful than a Turing machine" (or, more obviously, less)? I think that's a significant statement, given certain popular opinions.


Posted by: pdf23ds | Link to this comment | 06-21-10 8:17 PM
horizontal rule
274

Maybe PGD is secretly Roger Penrose. The brain is doing NP-complete things! The brain is quantum! Gravity collapses the microtubule wavefunction buzzword buzzword doddering old age!


Posted by: essear | Link to this comment | 06-21-10 8:18 PM
horizontal rule
275

Is the point here that "complex learned reaction to stimuli" is man talk suitable for the rat experiments in the hard sciences, but "devout man gazing upon a lily" is pussy shit for liberal arts majors so it's probably not important anyway?

No.

Emotions are emotions.

Fine. Could you define them for me, please? Cast your definition in terms of subjective experience, intersubjective reality, physiological and neurological implications, behavioral effects, and the epistemology of all of these. Then I can respond to you meaningfully about whether or not computers can model them. The point of the dewy lily example is that it consists of a vast framing, which implies that you have to solve the problem of simulating an entire human consciousness, embedded in the world, in order to model emotions, and by any meaningful physiological/neurological definition of what "emotion" means, that's not true.

That whole comment 216 was, ummm, highly reductionist.

I love that you say this right after saying "emotions are emotions".


Posted by: Sifu Tweety | Link to this comment | 06-21-10 8:20 PM
horizontal rule
276

Fuckin' magnets, how do they work, right?

They hold construction paper and pizza coupons to the fridge.


Posted by: Moby Hick | Link to this comment | 06-21-10 8:20 PM
horizontal rule
277

274: not that Penrose isn't also annoying, but I think that just may be giving PGD too much credit.


Posted by: Sifu Tweety | Link to this comment | 06-21-10 8:21 PM
horizontal rule
278

In principle, the better you measure the initial conditions, the longer into the future you can forecast the weather.

This does not accord with what I had thought I understood.

Rather than argue (I'm no expert, and often discover that I am misinformed) I shall quietly consult The Authorities for my own satisfaction. If I discover that I'm mistaken, I'll 'fess up.


Posted by: joel hanes | Link to this comment | 06-21-10 8:23 PM
horizontal rule
279

264 and 270 are exactly right.

And, so long as we're mentioning AI textbooks, there's a free draft copy of Sutton and Barto's great book online.


Posted by: Cosma Shalizi | Link to this comment | 06-21-10 8:24 PM
horizontal rule
280

274 : we agree about Penrose, at least. What a shame.


Posted by: joel hanes | Link to this comment | 06-21-10 8:24 PM
horizontal rule
281

272: The definition usually requires something like the existence of points with dense orbits, which is kind of like "far-reaching", though at the moment I'm not coming up with a good reason for this to be a necessary part of the definition. Maybe just because otherwise a simple exponentially growing function would be called "chaotic" when it really isn't.


Posted by: essear | Link to this comment | 06-21-10 8:24 PM
horizontal rule
282

"Topological transitivity" is the technical term.


Posted by: essear | Link to this comment | 06-21-10 8:24 PM
horizontal rule
283

278: See lecture 4 here (plus readings).


Posted by: Cosma Shalizi | Link to this comment | 06-21-10 8:26 PM
horizontal rule
284

The relentlessly positive non-judgmental simulation problem has proved surprisingly difficult, admittedly.


Posted by: LizardBreath | Link to this comment | 06-21-10 8:26 PM
horizontal rule
285

I suppose it's easier to accept the word of some guy on the internet when he uses his real name.


Posted by: essear | Link to this comment | 06-21-10 8:27 PM
horizontal rule
286

285: Cosma can assign homework. That's always helpful.


Posted by: Sifu Tweety | Link to this comment | 06-21-10 8:28 PM
horizontal rule
287

279 : I believe I sit corrected. Thanks.


Posted by: joel hanes | Link to this comment | 06-21-10 8:28 PM
horizontal rule
288

281: But something like the double pendulum is chaotic, yet certainly has bounded range. Having dense orbits doesn't mean it's not a compact space or anything.


Posted by: heebie-geebie | Link to this comment | 06-21-10 8:28 PM
horizontal rule
289

I love that you say this right after saying "emotions are emotions".

Why? That's a highly antireductionist thing to say.


Posted by: nosflow | Link to this comment | 06-21-10 8:28 PM
horizontal rule
290

281: Maybe just because otherwise a simple exponentially growing function would be called "chaotic" when it really isn't.

Yes, exactly: you want to get at more than just exponential separation of initial conditions, towards the idea that an arbitrarily small perturbation can induce (eventually) any qualitatively permitted pattern of behavior. This is why the usual definition invokes a dense orbit plus infinitely many periodic orbits.
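To make those two ingredients concrete (a toy sketch of my own, using the logistic map): a single typical orbit is "far-reaching," visiting every subinterval at finite resolution, while periodic points sit right among the wandering ones.

```python
# The two extra ingredients of the usual chaos definition, illustrated
# with the logistic map x -> 4x(1-x) on [0, 1].

def f(x):
    return 4.0 * x * (1.0 - x)

# A typical orbit visits every one of 20 subintervals of [0, 1],
# a finite-resolution stand-in for a dense orbit.
x = 0.2
bins = set()
for _ in range(100000):
    x = f(x)
    bins.add(min(int(x * 20), 19))
print(len(bins))  # 20: every subinterval gets visited

# And periodic orbits are sprinkled among the wandering ones:
print(f(0.75))  # 0.75 is a fixed point, since 4 * 0.75 * 0.25 = 0.75
```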


Posted by: Cosma Shalizi | Link to this comment | 06-21-10 8:28 PM
horizontal rule
291

"Topological transitivity" is the technical term.

I thought this meant a map between topological spaces which commuted with a group action.


Posted by: heebie-geebie | Link to this comment | 06-21-10 8:29 PM
horizontal rule
292

261

... Emotions are central to human cognition after all. We really don't know, and I suspect we are several basic breakthroughs away from knowing. As I understand it, it's not very controversial to say that a computer will never be able to predict the exact weather at a specific location six months in advance, because the system is just too complex and cannot even in principle be cracked by computing power. How do you know the brain is simpler than the weather?

You can't predict the weather 6 months in advance because it is basically random. So if there is a similar random component in human behavior you won't be able to predict it (other than in a statistical sense) but you may be able to simulate it perfectly well. Just as a computer can simulate tossing a coin in a way that is effectively indistinguishable from an actual sequence of coin tosses but cannot predict the exact results of the actual sequence of coin tosses.
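A toy version of the distinction, in Python (nothing here is specific to weather or brains): the simulated coin matches the statistics of a real one, yet its individual flips tell you nothing about any particular real toss.

```python
# Simulation vs. prediction: a pseudo-random coin is statistically
# indistinguishable from a fair coin, but predicts nothing about the
# next toss of any actual coin.
import random

rng = random.Random()  # internal state unknown to any observer
flips = [rng.random() < 0.5 for _ in range(100000)]
print(sum(flips) / len(flips))  # close to 0.5, like the real thing

# Simulating the ensemble is easy; predicting one specific future toss
# would require knowing the exact state (here, the seed) of the coin.
```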


Posted by: James B. Shearer | Link to this comment | 06-21-10 8:29 PM
horizontal rule
293

288: Oh, sure. But it's "far-reaching" within the space it's defined on. After all, the Earth is compact, more or less.


Posted by: essear | Link to this comment | 06-21-10 8:30 PM
horizontal rule
294

289: because I got confused, that's why.

275.last stricken from the record, and replaced with a complaint that PGD seems to be dismissing the bulk of modern brain science as reductionist, which, okay?


Posted by: Sifu Tweety | Link to this comment | 06-21-10 8:30 PM
horizontal rule
295

293: Sure. My beef is that the hand-wavy butterfly nonsense would lead a lay-person to believe that the key idea is that a tiny flap can have a wildly gigantic effect. Which, as you say, is true of an exponential function. So it's a poor heuristic.
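To put the beef in code (a sketch, with maps of my own choosing): a plain exponential also turns a tiny perturbation into a gigantic effect, but predictably and in closed form; the chaotic map amplifies perturbations just as fast while staying bounded and erratic.

```python
# "Tiny flap, gigantic effect" alone doesn't distinguish chaos from
# ordinary exponential growth.

eps = 1e-10

# Doubling, x -> 2x: enormous effect from a tiny change, but perfectly
# predictable; the separation is exactly eps * 2**n, in closed form.
a, b = 1.0, 1.0 + eps
for _ in range(60):
    a, b = 2 * a, 2 * b
print(b - a)  # about eps * 2**60, i.e. over 1e8

# Logistic map, x -> 4x(1-x): the same exponential separation rate, but
# both orbits stay bounded in [0, 1] and their difference is erratic.
def f(x):
    return 4.0 * x * (1.0 - x)

u, v = 0.3, 0.3 + eps
for _ in range(60):
    u, v = f(u), f(v)
print(u, v)  # both still in [0, 1], but effectively unrelated
```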


Posted by: heebie-geebie | Link to this comment | 06-21-10 8:32 PM
horizontal rule
296

Oh, christ, my stomach hurts from laughter and my eyes are wet, Brock Landers. If my eyes wore pants, they would be ruined, and I would send you the bill, Brock Landers. I love this thread.

I don't know much about AI but Erik Mueller was a family friend when I was growing up, and I gather he's done some important work in the field. We had a beautiful woodcut by his dad of Ben Shahn's face (link seems to be dead), five foot tall, hanging in the hallway. It used to scare the bejeezus out of me.


Posted by: k-sky | Link to this comment | 06-21-10 8:32 PM
horizontal rule
297

285: Like this is a plausible real name?

286: Not as helpful as you'd think, he muttered darkly.

291: It may be that also, but not in ergodic theory or dynamics.


Posted by: Cosma Shalizi | Link to this comment | 06-21-10 8:35 PM
horizontal rule
298

We should have an Unfogged journal club discussion of Sharkovskii's theorem. Starting with figuring out the most reasonable way to spell S(h)arkovski(i)(y?).


Posted by: essear | Link to this comment | 06-21-10 8:35 PM
horizontal rule
299

Of course, we won't really have solved AI until our computers know to eat hamburgers left in fish tanks.


Posted by: Cosma Shalizi | Link to this comment | 06-21-10 8:36 PM
horizontal rule
300

Oh, balls, I just commented in the year-old hay bale thread.


Posted by: k-sky | Link to this comment | 06-21-10 8:38 PM
horizontal rule
301

270

This is a practical limitation, not an in-principle one. In principle, the better you measure the initial conditions, the longer into the future you can forecast the weather. ...

I believe the real problem with predicting the weather is that noise (the solar constant not really being constant, for example) is continually being introduced into the system. Otherwise you might be able to get around the initial conditions problem by taking into account the entire time history of the system.


Posted by: James B. Shearer | Link to this comment | 06-21-10 8:38 PM
horizontal rule
302

300: Which means you found it!


Posted by: LizardBreath | Link to this comment | 06-21-10 8:39 PM
horizontal rule
303

298: "Шарковський"


Posted by: Cosma Shalizi | Link to this comment | 06-21-10 8:39 PM
horizontal rule
304

301: Fair enough. Initial conditions + boundary conditions, if you like. I can't say that for weather I have a clear sense of which effects are the first to cause things to go wrong.


Posted by: essear | Link to this comment | 06-21-10 8:40 PM
horizontal rule
305

Otherwise you might be able to get around the initial conditions problem by taking into account the entire time history of the system.

Oh, well, I already did that, it doesn't help.


Posted by: Cryptic ned | Link to this comment | 06-21-10 8:42 PM
horizontal rule
306

but not in ergodic theory or dynamics.

Oh, right, I knew that definition. At some point.


Posted by: heebie-geebie | Link to this comment | 06-21-10 8:42 PM
horizontal rule
307

301: Well, obviously the Earth isn't a closed system. It can be treated as such as an approximation, but to be completely accurate (in principle) you have to start including larger and larger subsets of the Hubble volume in greater and greater detail. Whether you treat those as part of the system or boundary conditions is probably not a substantial distinction.


Posted by: pdf23ds | Link to this comment | 06-21-10 8:43 PM
horizontal rule
308

Oh, also. My 273 totally pwnd essear's 274.


Posted by: pdf23ds | Link to this comment | 06-21-10 8:44 PM
horizontal rule
309

You're all pwns in my deterministic game of life. Dance, my pretties.


Posted by: heebie-geebie | Link to this comment | 06-21-10 8:46 PM
horizontal rule
310

306: Might you have been thinking of topological conjugacy?


Posted by: Cosma Shalizi | Link to this comment | 06-21-10 8:46 PM
horizontal rule
311

Well, the brain is smaller than the weather.

but wider than the sky


Posted by: Higglety-pigglety, Emily Dickinson | Link to this comment | 06-21-10 8:46 PM
horizontal rule
312

302: Moby found it in 269.


Posted by: k-sky | Link to this comment | 06-21-10 8:46 PM
horizontal rule
313

This sounds like something out of Desk Set.


Posted by: fake accent | Link to this comment | 06-21-10 8:47 PM
horizontal rule
314

310: Perhaps. Let's never let my old advisor see this thread. It's embarrassingly close to what I actually did my dissertation on. And then promptly forgot all my definitions. Apparently.


Posted by: heebie-geebie | Link to this comment | 06-21-10 8:49 PM
horizontal rule
315

307: If you're going to that level of precision, you (or your prediction machine) are gravitationally coupled to the weather, raising all kinds of interesting problems.


Posted by: Cosma Shalizi | Link to this comment | 06-21-10 8:51 PM
horizontal rule
316

314: In some ways, that sounds like an ideal dissertation.


Posted by: Cosma Shalizi | Link to this comment | 06-21-10 8:55 PM
horizontal rule
317

I took four years off and am now wading through the dissertation that built on mine (but not the dynamical systems part; a cohomology part), and it's hard and painful and ugh. I was definitely not cut out for the research track.


Posted by: heebie-geebie | Link to this comment | 06-21-10 8:58 PM
horizontal rule
318

Huh, I guess I missed that hay-baling thread the first time. I probably would have had something to say.


Posted by: teofilo | Link to this comment | 06-21-10 9:01 PM
horizontal rule
319

There are standard conventions for transliteration. Even computers can do it!


Posted by: fake accent | Link to this comment | 06-21-10 9:02 PM
horizontal rule
320

The relentlessly positive non-judgmental simulation problem has proved surprisingly difficult, admittedly.

Prickly has been a rising success, however.


Posted by: teofilo | Link to this comment | 06-21-10 9:03 PM
horizontal rule
321

"Rising"? "Rousing," I guess.


Posted by: teofilo | Link to this comment | 06-21-10 9:04 PM
horizontal rule
322

321: Getting the emotions right is hard work.


Posted by: JP Stormcrow | Link to this comment | 06-21-10 9:05 PM
horizontal rule
323

I remember wanting to pull up a quote from some Hamlin Garland story when I read the hay thread, but I was too lazy. Also, I wasn't sure if the passage I found was as relevant to whatever comment made me think of it as I thought it would be, or if it was the right passage at all. Anyway, there are some good agrarian-set stories in Main-Travelled Roads.


Posted by: fake accent | Link to this comment | 06-21-10 9:06 PM
horizontal rule
324

I just read the Brock section of the thread again and hurt myself laughing again.


Posted by: k-sky | Link to this comment | 06-21-10 9:06 PM
horizontal rule
325

I know this thread is about other things now, and was in fact created solely to bring the Brock section into existence, but in this section of the article,

What makes language so hard for computers, Ferrucci explained, is that it's full of "intended meaning." When people decode what someone else is saying, we can easily unpack the many nuanced allusions and connotations in every sentence. He gave me an example in the form of a "Jeopardy!" clue: "The name of this hat is elementary, my dear contestant." People readily detect the wordplay here -- the echo of "elementary, my dear Watson," the famous phrase associated with Sherlock Holmes -- and immediately recall that the Hollywood version of Holmes sports a deerstalker hat. But for a computer, there is no simple way to identify "elementary, my dear contestant" as wordplay.

"We" and "people" are doing a whole lot of work there. Someone who hasn't read Sherlock Holmes or been exposed to catchphrases therefrom is in a similar position to that of the computer.


Posted by: fake accent | Link to this comment | 06-21-10 9:15 PM
horizontal rule
326

274 - Respect your elders; that toilet paper didn't tile itself.


Posted by: snarkout | Link to this comment | 06-21-10 9:19 PM
horizontal rule
327

||

Dear Drupal: I hate you. No, that's not really true. I hate me, for missing something incredibly obvious that would have allowed me to stop working prior to 11:30 at night. But I'm feeling sorry for myself, so you'll have to take the blame.

|>


Posted by: Sifu Tweety | Link to this comment | 06-21-10 9:31 PM
horizontal rule
328

[[

I don't believe I ever realized the extent to which the internet made living alone possible for me. People! It is so isolating to only be attached to the rest of the world with (*gasp*) your phone. (But I did get to talk to my grandpa today. Yay.)

[>


Posted by: paren | Link to this comment | 06-21-10 9:42 PM
horizontal rule
329

I'm back...perhaps I shouldn't comment again if it's going to set Sifu off. Things seem to be getting rather emotional. Anyway, after this comment and the next I'll bow out.

last stricken from the record, and replaced with a complaint that PGD seems to be dismissing the bulk of modern brain science as reductionist, which, okay?

yes, that's right. The scientific method is reductionist, and that's OK, when it gets cashed out for something helpful. But it's not so helpful when people start making scientistic authority claims about stuff science actually does not understand.

The point of the dewy lily example is that it consists of a vast framing, which implies that you have to solve the problem of simulating an entire human consciousness, embedded in the world, in order to model emotions, and by any meaningful physiological/neurological definition of what "emotion" means, that's not true.

Why not? It seems pretty obvious to me that any higher-order emotion is going to depend in a meaningful way on a large fraction of the individual's entire emotional state, and also on a substantial amount of their past experiences and emotional history. It's true that the need to simulate the whole consciousness to fully understand its parts could make it hard to do science, and in many ways one would need to abstract away from that to make some progress. (Although, as I understand it, a lot of neuroscientists do feel it's necessary to tackle the system level directly.) But it's an empirical question how much you gain or lose by doing that. I'm not sure science has demonstrated that the process of abstraction it engages in with respect to the emotions and personality has paid off yet, at least not judging by all the crudely reductionist articles about brain scans, neurotransmitters, and the genetic determinism of everything that I've been reading over the past 15-20 years.

Of course, no one would want to stop neuroscience, even if you'd want it to be more modest in its claims today.

You can't just say "well, maybe emotions and language are something that can't be modeled computationally" if you have no idea of what scientists currently believe about what emotions and language actually are. I mean, you can, but then your argument boils down to "but what if science is wrong?"

This is just a pure appeal to the authority of science, which is grounded in successes of the physical sciences which have so far not been replicated in the human sciences. Are people happier and wiser today because science has cracked emotion? The successes in language are most impressive in a technological / usefulness sense (artificial translators, etc.), it's not like computers are anywhere close to self-generating interesting language. I don't think that current scientific definitions of "what emotions and language actually are" would, if modelled computationally, get you very far toward the real thing, hence the appeal to science as the source of authority here is the appeal to physical science successes that may or may not end up applying here...even models of purely physical / medical biological processes are a long way from complete.

This whole dustup started in an argument in another thread about the singularity, which is the ultimately silliest form of deep AI. Against that backdrop I think my perspective has a lot to recommend it, but as you move away from that toward "hey, let's just see how far we can push computers and how we can operationalize some cool theories about cognition" it does, in fact, get closer to pointless luddism. But I think the problems with doing more ordinary AI stuff are related to the complexity of consciousness in general, which I thought was a point well made in comment 60.


Posted by: PGD | Link to this comment | 06-21-10 9:45 PM
horizontal rule
330

That is very long and does not mention hay.


Posted by: Moby Hick | Link to this comment | 06-21-10 9:49 PM
horizontal rule
331

emotions and personality has paid off yet

this wasn't bad.

How many suicides have Prozac and other crudely reductionistic SSRIs prevented? It's not a complete understanding, but it beats shock therapy. Gary Marcus writes very nicely about brain science, does not seem crude or overreaching to me.


Posted by: lw | Link to this comment | 06-21-10 9:52 PM
horizontal rule
332

it's true that even with the most advanced imaginable brain simulator, you wouldn't be able to predict what I'll be thinking in six hours. This doesn't mean you can't imagine a brain simulator that accurately models everything a brain does.

Yes, there are different senses of simulation I was confusing there. But: when we've built our brain simulator, it's possible we'll have built something that cannot feel subjective experience, and so is not a brain itself, and that will also be limited in its usefulness in predicting what any actual brain will feel, think, or do. I'm sure it will be a very useful device; with some programming interventions it would make a very useful robot, and it would be unimaginably advanced compared to science today. But you know, maybe not quite as awesome as you'd think.


Posted by: PGD | Link to this comment | 06-21-10 9:52 PM
horizontal rule
333

Poor Parenthetical. Even her pause/play symbols are frowny.


Posted by: Eggplant | Link to this comment | 06-21-10 9:53 PM
horizontal rule
334

You want hay? Here's some hay.


Posted by: teofilo | Link to this comment | 06-21-10 9:53 PM
horizontal rule
335

at least not judging by all the crudely reductionist articles about brain scans, neurotransmitters, and the genetic determinism of everything that I've been reading over the past 15-20 years

Are we talking about peer-reviewed science articles here, or articles in the popular media?


Posted by: Blume | Link to this comment | 06-21-10 9:55 PM
horizontal rule
336

That's some hay, but not very much.


Posted by: Moby Hick | Link to this comment | 06-21-10 9:55 PM
horizontal rule
337

Teofilo is much more efficient in his hay posts than PGD.


Posted by: Eggplant | Link to this comment | 06-21-10 9:55 PM
horizontal rule
338

stuff science actually does not understand

What is this stuff, and how do you know that?

It seems pretty obvious to me that any higher-order emotion is going to depend in a meaningful way on a large fraction of the individual's entire emotional state, and also on a substantial amount of their past experiences and emotional history.

Define "higher-order emotion". Anger? Jealousy? Schadenfreude? Melancholia? Humility before a truly exceptional piece of architecture? Love? The joy of discovery?

I'm not sure science has demonstrated that the process of abstraction it engages in with respect to the emotions and personality has paid off yet, at least not judging by all the crudely reductionist articles about brain scans, neurotransmitters, and the genetic determinism of everything that I've been reading over the past 15-20 years.

So, just to be clear, you're not talking about actual scientific research, you're talking about popular science articles in newspapers and so on, based (most likely) on press releases from university PR departments, and skewed towards those institutions that are best able to work the science reporting system, yes?

This is just a pure appeal to the authority of science, which is grounded in successes of the physical sciences which have so far not been replicated in the human sciences.

Are you kidding? Do you have polio?

Are people happier and wiser today because science has cracked emotion?

I imagine a non-trivial proportion of those receiving psychiatric treatment (as well as their families) would answer in the affirmative, as long as you define "cracked" as "gained some partial understanding of".

it's not like computers are anywhere close to self-generating interesting language

Could you offer some evidence for this assertion?

I don't think that current scientific definitions of "what emotions and language actually are" would, if modelled computationally, get you very far toward the real thing

And what are those definitions?

This whole dustup started in an argument in another thread about the singularity, which is the ultimately silliest form of deep AI

Leaving aside that "deep AI" isn't a thing (I don't think), this is exactly what irritated me about your previous comment. The singularity is, indeed, a massively stupid premise, but your counterargument fought stupidity with uninformed assertion, which is hardly better.


Posted by: Sifu Tweety | Link to this comment | 06-21-10 9:55 PM
horizontal rule
339

How much do you want?


Posted by: teofilo | Link to this comment | 06-21-10 9:58 PM
horizontal rule
340

My endlessly long response to PGD's endlessly long comment was partially pwned. OH WELL: hay!


Posted by: Sifu Tweety | Link to this comment | 06-21-10 9:58 PM
horizontal rule
341

The human sciences haven't even been able to keep you from getting pwned in your own house!


Posted by: Blume | Link to this comment | 06-21-10 9:59 PM
horizontal rule
342

339: Judging from the irrigation rig and the trucks, that is less than 2,000 acres.


Posted by: Moby Hick | Link to this comment | 06-21-10 10:03 PM
horizontal rule
343

Define "higher-order emotion".

Breakfast pleasure.


Posted by: teofilo | Link to this comment | 06-21-10 10:06 PM
horizontal rule
344

342: Well yeah, but there's only so much you can fit in a single photograph.


Posted by: teofilo | Link to this comment | 06-21-10 10:07 PM
horizontal rule
345

Are we talking about peer-reviewed science articles here, or articles in the popular media?

Well, scientific journal articles are usually much more carefully qualified and there's a dialogue that can be self-correcting over time. But there's a connection between scientific articles and how they end up getting interpreted in the popular media...the "depression is just like diabetes, your brain is short of serotonin!" era didn't come out of nowhere. The cultural authority of science has a lot to do with it. The reductionism inherent in the scientific method, which can actually lead a lot of working scientists to be pretty modest (in my experience, anyway), gets imported into how people think about things in the popular culture.


Posted by: PGD | Link to this comment | 06-21-10 10:07 PM
horizontal rule
346

328: It really is amazing how much the internet can become part of one's life.


Posted by: fake accent | Link to this comment | 06-21-10 10:08 PM
horizontal rule
347

Anyway, here's the whole thing. It's not all hay, but I think most of it is.


Posted by: teofilo | Link to this comment | 06-21-10 10:10 PM
horizontal rule
348

346: And it's like it's made the future totally unpredictable.


Posted by: JP Stormcrow | Link to this comment | 06-21-10 10:11 PM
horizontal rule
349

343: Let's start a campaign to convince people that this is a well-known concept in German called "Frühstücksvergnügen".


Posted by: essear | Link to this comment | 06-21-10 10:18 PM
horizontal rule
350

Well, scientific journal articles are usually much more carefully qualified and there's a dialogue that can be self-correcting over time.

This is part of it, yes. But then there's also the fact that science reporting in popular media gets the actual content of the study wrong probably 80% of the time. Truly, if the only way you learn about current science is through articles in the popular media, you aren't learning about science.

I mean, just to go back to your previous comment, you lump "brain scans, neurotransmitters, and the genetic determinism of everything" in together. Now, neurotransmitters have been well-studied for decades, and while there is certainly lots of new information about them, they are very, very different from fMRI (which is what you meant by "brain scans"), which is a very recent technology that has a lot of promise but is also susceptible to interpretation problems because of (among other things) its somewhat gross temporal resolution, expensive data collection, and the relatively complex statistical analysis required to analyze it. It is also perhaps the most thoroughly misunderstood scientific advance in years, such that reading about it in the popular press will, to a first approximation, teach you absolutely nothing at all. Neither of these things has anything in particular to do with the "genetic determinism of everything", except that the latter two concepts are very popular in the lay media.

Discounting the work of scientific researchers based on the portrayal of various unrelated "hip" technologies in the popular press is... well, let's just say it doesn't give your assertions a lot of rhetorical force.


Posted by: Sifu Tweety | Link to this comment | 06-21-10 10:21 PM
horizontal rule
351

347: Sure, for a desert, that is a lot of hay.


Posted by: Moby Hick | Link to this comment | 06-21-10 10:22 PM
horizontal rule
352

they are very, very different from fMRI (which is what you meant by "brain scans"), which is a very recent technology that has a lot of promise but is also susceptible to interpretation problems because of (among other things) its somewhat gross temporal resolution, expensive data collection, and the relatively complex statistical analysis required to analyze it.

Use a different sequence and put a different body part in the machine, and you have much of my life. Not that we are much worried about temporal resolution.


Posted by: Moby Hick | Link to this comment | 06-21-10 10:24 PM
horizontal rule
353

put a different body part in the machine

If I'd known it was going to be that kind of party...


Posted by: teofilo | Link to this comment | 06-21-10 10:25 PM
horizontal rule
354

351: Thank you.


Posted by: teofilo | Link to this comment | 06-21-10 10:25 PM
horizontal rule
355

If I'd known it was going to be that kind of party...

I'd have put my needle in the threshed byproducts.


Posted by: fake accent | Link to this comment | 06-21-10 10:32 PM
horizontal rule
356

347: Sure, for a desert, that is a lot of hay.

Also for a dessert. Unless you're a big hay eater, I guess.


Posted by: Stanley | Link to this comment | 06-21-10 10:35 PM
horizontal rule
357

350: Speaking of science reporting, I enjoyed this.


Posted by: essear | Link to this comment | 06-21-10 10:35 PM
horizontal rule
358

Hay, there.


Posted by: fake accent | Link to this comment | 06-21-10 10:40 PM
horizontal rule
359

There's nothing like eating hay when you're faint


Posted by: The White King | Link to this comment | 06-21-10 10:55 PM
horizontal rule
360

Textbooks just want to be free!

http://www-stat.stanford.edu/~tibs/ElemStatLearn/

http://www.inference.phy.cam.ac.uk/mackay/itila/

http://www.gaussianprocess.org/gpml/chapters/

The above are all machine learning texts. There are many others online; in my stash of PDFs I have books on calculus, linear algebra, analysis, computer vision, spectral algorithms, and more. (If anyone wants them and can't work the Google, email me.)

Good thing there is no-one in the office to hear my guffawing. Go Brock!


Posted by: W. Breeze | Link to this comment | 06-22-10 2:50 AM
horizontal rule
361

I actually agree with Sifu (and have the hubris to disagree with Searle): I think the Chinese room understands the Chinese language.

Searle is full of shit because his argument is predicated on the assumption that the processes underlying artificial intelligence have to be at least analogous to those involved in natural intelligence to count. Whereas in reality natural intelligence is recognised purely by the observed relationships between inputs and outputs, as is clear from the fact that the concept of intelligence was understood long before people even knew that the processing happened in the brain.

This was written up in proper English in a learned journal by somebody famous, but it's twenty years since I read it so I can't remember who.

(In other news, I'm sick of that stupid pseud, and reverting to my real name.)


Posted by: chris y (OFE) | Link to this comment | 06-22-10 3:42 AM
horizontal rule
362

Searle is full of shit

I wholeheartedly support this statement.


Posted by: W. Breeze | Link to this comment | 06-22-10 3:48 AM
horizontal rule
363

My reaction, in general, to philosophy of mind is: "Great, you have a hypothesis. Now go do an experiment." But philosophers don't want to do that. They'd rather wiffle about qualia and waffle about zombies than actually validate their theories against the evidence. Science eats philosophy as more phenomena become amenable to measurement, and I expect philosophy of mind will soon fall prey if it isn't already being consumed.


Posted by: W. Breeze | Link to this comment | 06-22-10 3:56 AM
horizontal rule
364

Sadly, humanities types like to combat the infiltration of science by using words like "brains" and "minds" instead of "people." One can, any goddamn day of the week, find some talk to go to by an English professor who wouldn't know neuroscience or linguistics if they bit him on the ass talking about how [author] uses "language" to affect your "brain," with the surprising result of confirming all the literary-critical arguments he's made in his life. No examination or measurement of actual human brains is necessary for this approach.


Posted by: A White Bear | Link to this comment | 06-22-10 4:12 AM
horizontal rule
365

364. I feared that might be so, but I hoped it wasn't.

How's your foot?


Posted by: chris y (OFE) | Link to this comment | 06-22-10 4:31 AM
horizontal rule
366

No examination or measurement of actual human brains is necessary for this approach.

Probably a good thing, if they've all been pre-empted by the Faculty of Mad Science.

["It's funny. You don't normally meet many mad social scientists."]


Posted by: ajay | Link to this comment | 06-22-10 4:41 AM
horizontal rule
367

I'm not prepared to fight the 'philosophy is done for, it can't resist teh virile thrusting progression of science' wars all over again.

But yeah, I tend to agree with 361. Then again, I find myself leaning unfashionably* Ryle-wards on this sort of stuff anyway.

* although I expect that wheel is turning/has turned.


Posted by: nattarGcM ttaM | Link to this comment | 06-22-10 4:41 AM
horizontal rule
368

There can be no innocents in this war, ttaM.

I don't think philosophy is done for. I can't see a science of ethics any time soon, for example. Certain areas of philosophy might be past their use-by date, however (but don't tell Brock!).


Posted by: W. Breeze | Link to this comment | 06-22-10 4:52 AM
horizontal rule
369

362: I do too, actually. I just didn't want to go down that particular road.


Posted by: Sifu Tweety | Link to this comment | 06-22-10 7:11 AM
horizontal rule
370

This whole dustup started in an argument in another thread about the singularity, which is the ultimately silliest form of deep AI

Was I in that thread? If so, I don't remember PGD being involved. Anyway, I resent people beating up on the Singularity, (3) and (2) especially. I'm not really a big fan of (1) though. It's like this conversation here never gets above the level of

Hey, man, have you heard? There's this bunch of, like, crazy nerds out there, who think that some kind of unspecified huge nerd thing is going to happen. What a bunch of wackos! It's geek religion, man.

(OK, maybe a little above that.)

Biscuit conditionals are like compile-time macros.


Posted by: pdf23ds | Link to this comment | 06-22-10 11:37 AM
horizontal rule
371

Sifu's remark that computers have senses is, I think, a bit of a stretch. Computers have "sight" through cameras, but sight is just a part of vision, and no one yet has a good model of the complex image processing done in the first layers of the visual cortex.

Actually, we have a pretty good model of that (for monkeys, anyway). There's a comprehensive description of the (fairly up to date) state of scientific knowledge in this book. What we don't have is a good model for what happens at later stages of the visual processing pathways, and even less of a model for how the mind then interprets and makes use of the results of the processing.


Posted by: Ginger Yellow | Link to this comment | 06-22-10 12:41 PM
horizontal rule
372

371 : thanks. Always glad to have my misconceptions corrected.


Posted by: joel hanes | Link to this comment | 06-22-10 12:54 PM
horizontal rule
373

365: Good enough to go to work today, but it may have been too taxing. It's a long commute, a lot of walking, and a big campus. We'll see! Hopefully I'll be able to go tomorrow!


Posted by: A White Bear | Link to this comment | 06-22-10 2:29 PM
horizontal rule
374

#define ANALOGY_BAN 1

#ifndef ANALOGY_BAN

Biscuit conditionals are like compile-time macros.

#endif


Posted by: nosflow | Link to this comment | 06-22-10 2:31 PM
horizontal rule
375

370: It was within the past week, but if you mention the S word Sifu Tweety will get angry.


Posted by: Eggplant | Link to this comment | 06-22-10 2:31 PM
horizontal rule
376

375: only when I'm buggy.


Posted by: Sifu Tweety | Link to this comment | 06-22-10 2:34 PM
horizontal rule
377

I thought the prickliness was a feature.


Posted by: teofilo | Link to this comment | 06-22-10 2:35 PM
horizontal rule
378

Or hungry, as it turns out. There are biscuits if you are.


Posted by: Eggplant | Link to this comment | 06-22-10 2:36 PM
horizontal rule