Re: They're Already Smarter Than Some People

1

He never says computers will write novels; he only says they'll write poems. Obviously the literature algorithm is exponential in word count.


Posted by: lourdes kayak | Link to this comment | 03-12-16 11:02 PM
2

Yeah, I switched it, because size matters.


Posted by: ogged | Link to this comment | 03-12-16 11:13 PM
3

I dunno, AlphaGo really is a lot different from chess computers in terms of not just computing the hell out of things but actually learning to have an intuitive understanding of Go positions. I now think it's about 50/50 that computers will be better than me at proving theorems before I retire. Poems and painting are weirder because I'm not sure how you actually determine whether the computer is good at them.


Posted by: Unfoggetarian: "Pause endlessly, then go in" (9) | Link to this comment | 03-12-16 11:51 PM
4

WHO IS THE TOLSTOY OF THE COMPUTERS


Posted by: AlpaBellow | Link to this comment | 03-12-16 11:56 PM
5

I might trust a computer-driven car over a Tolstoy-driven car.


Posted by: lourdes kayak | Link to this comment | 03-13-16 12:11 AM
6

That NORAD computer in War Games totally figured out how to end the Cold War, and that was in like 1983.


Posted by: Mossy Character | Link to this comment | 03-13-16 12:50 AM
7

I don't know about novels, but computers can already compose music which is indistinguishable from human compositions.


Posted by: ajay | Link to this comment | 03-13-16 1:10 AM
8

But we generously refrain from doing so, so that human composers can continue to find meaning in life.


Posted by: Masaq' Orbital | Link to this comment | 03-13-16 1:16 AM
9

Proving mathematical conjectures has already happened too. Negotiating peace? If anything that sounds easier to automate. Computers have been simulating military strategy and tactics for decades.

What would be interesting is Douglas Adams' "Reason": a decision support program that, given a desired course of action, constructs a series of plausible-sounding arguments for doing it. A sort of AI political think tank.


Posted by: ajay | Link to this comment | 03-13-16 1:17 AM
10

8: thank you, Hub.


Posted by: ajay | Link to this comment | 03-13-16 1:18 AM
11

Negotiating peace? If anything that sounds easier to automate.

That's why Dr. Forbin worked it out way back in the 1970s.


Posted by: fake accent | Link to this comment | 03-13-16 1:34 AM
12

Computers have been simulating military strategy and tactics for decades.

Successfully? My (very limited and outdated) understanding is that those simulations are not-very-useful war games.

OTOH I know there is a model that started predicting casualty levels very accurately in the 1990s. But predicting attrition rates is very far from devising a winning strategy. (Which of course humans often can't manage either.)


Posted by: Mossy Character | Link to this comment | 03-13-16 2:23 AM
13

And AlphaGo lost game 4. Kottke psyched it out. It couldn't handle the pressure.


Posted by: Walt Someguy | Link to this comment | 03-13-16 3:07 AM
14

See 8.


Posted by: Mossy Character | Link to this comment | 03-13-16 3:15 AM
15

Once it started to lose, it apparently started to play really badly. Reading go discussion boards, you can see the level of hysteria recede. So 8 is probably the truth.


Posted by: Walt Someguy | Link to this comment | 03-13-16 4:12 AM
16

You'd think AI people would have learned that it's to their own benefit to tamp down the hype cycle by now.

TD-Gammon beat the best backgammon player in the world in 1992 (1993?) by using creative moves that had never occurred to anybody before (just as AlphaGo has been breathlessly reported to do). Then people started using those moves against it, and started beating it easily.

The primary advance of AlphaGo is that computers are much faster than they used to be. This is a nice breakdown.


Posted by: Beefo Meaty | Link to this comment | 03-13-16 5:25 AM
17

I now think it's about 50/50 that computers will be better than me at proving theorems before I retire.

I'm pretty sure they are already better than I am at proving theorems.


Posted by: Spike | Link to this comment | 03-13-16 5:33 AM
18

15- I guess it still has a ways to go if it hasn't learned the universal "never read the comments" lesson.


Posted by: SP | Link to this comment | 03-13-16 5:40 AM
19

This bit from the article wasn't highlighted in the OP, but seemed especially wrong to me:

until recently machines were predictable and more or less easily understood. That's central to the definition of a machine, you might say. You build them to do X, Y, & Z and that's what they do. A car built to do 0-60 in 4.2 seconds isn't suddenly going to do it in 3.6 seconds under the same conditions.

"Wait, it did what!?" is a pretty reliable feature of human artifacts, and probably the bit about them that leads to a massive amount of technological progress: either it indicates that there's something we didn't understand about what we'd just made, or about the physical world in general, or it just lets us know about a possibility that we hadn't seen before.

The endless and predictable AI hype that shows up any time someone manages to make a program that does something new got a lot less annoying to me when a bunch of my friends had kids. "It did something! Soon it will do anything!!" suddenly became a lot more familiar feeling to me, and a bit more endearing (if, in this context, no less dumb).


Posted by: MHPH | Link to this comment | 03-13-16 6:02 AM
20

You still need human judgment to decide whether the computer has really done it "better" (or accurately, in the case of the mathematical proof).


Posted by: Adam Kotsko | Link to this comment | 03-13-16 6:09 AM
21

The anti-AI argument seems to be that the human thought process is so amazing and complex that it can't ever possibly be modeled to a reasonable approximation by computational means. I find that assumption to be anthrochauvinistic, and I don't buy it.


Posted by: Spike | Link to this comment | 03-13-16 6:20 AM
22

Perhaps we can build a man from straw that thinks, and dreams, and paints, and loves!


Posted by: Beefo Meaty | Link to this comment | 03-13-16 6:40 AM
23

On the internet, no one can tell if you're playing backgammon.


Posted by: heebie-geebie | Link to this comment | 03-13-16 6:52 AM
24

21: "Reasonable approximation" is carrying a truly massive amount of weight in that sentence, since no one has the slightest clue what that would be except for "pretty damn detailed".

Given the computing power required to model very, very simple chemical reactions, and the number of them going on simultaneously in the human brain at any given second, actually building something that would accurately model all that - let alone the amount of stuff required to develop and model the level of external feedback required to make it work* - is a tall order; whatever that 'reasonable approximation' amounts to, it had better be pretty damn approximate. And while 'the brain is a kind of computer!' rhetoric was popular for a while at least, and still is to some extent, it's really only true if you broaden your definition to the point where a pot of beef stew on the stove is also a computer. (Neurons really, really aren't just a squishy kind of transistor.)

*This is the part that I think often gets left out in AI discussions. Brains simply don't work without an incredible number of mostly consistent inputs** flowing into them in a steady stream, and constant, massive, and consistent feedback along every line of that. Even the kinds of sensory deprivation that we can manage in practice will send them careening wildly out of balance, and that's at the level of "bob around in water with the lights off". There's a very good chance that making a computer model that would look like a strong AI would require basically creating a model of the world that would be almost indistinguishable in detail from the actual one, at which point, why bother?
**Despite saying 'it's really not a computer', it's hard to discuss this without using computer terms, since I'm talking about computer modeling. It's data, in some sense, but that's not necessarily the sense you'd get from computer modeling, and it's certainly not the sense of data we'd have to use as the foundation of whatever model we're talking about.


Posted by: MHPH | Link to this comment | 03-13-16 7:45 AM
25

modelling the human brain accurately would be a waste of time; there's not much to be gained from creating an illogical machine that's blind to its own biases and prone to lying when cornered.


Posted by: cleek | Link to this comment | 03-13-16 7:58 AM
26

It could have people being on jury duty.


Posted by: Moby Hick | Link to this comment | 03-13-16 7:59 AM
27

"have" s/b "save"


Posted by: Moby Hick | Link to this comment | 03-13-16 8:04 AM
28

16.3: That's not really true. AlphaGo uses a fundamentally different technique, one that required many algorithmic breakthroughs in using training data to work. If you gave Google the same hardware in 2005 and had them try deep reinforcement learning, it wouldn't have worked.


Posted by: Walt Someguy | Link to this comment | 03-13-16 8:24 AM
29

I dunno, AlphaGo really is a lot different from chess computers in terms of not just computing the hell out of things but actually learning to have an intuitive understanding of Go positions.

I'm surprised you apparently think those are different.


Posted by: nosflow | Link to this comment | 03-13-16 8:52 AM
30

I think the right metaphor for what deep networks like AlphaGo do is "instinct" rather than "intelligence". Not just inborn animal instinct, but the way people describe playing sports as instinctive after years of repetitive practice. Deep networks have something resembling that kind of trainable instinct.


Posted by: Walt Someguy | Link to this comment | 03-13-16 9:14 AM
31

I think the right metaphor for what deep networks like AlphaGo do is "instinct" rather than "intelligence". Not just inborn animal instinct, but the way people describe playing sports as instinctive after years of repetitive practice. Deep networks have something resembling that kind of trainable instinct.

A large part of Hubert Dreyfus' argument in What Computers Can't Do is that human intellectual skill and expert knowledge are also a lot like this--i.e., that what an experienced surgeon does is "see" what to do in a given situation without a lot of conscious analysis or weighing of options (even when the situations have lots of novel characteristics). He would be fundamentally loath to admit, though, that you could produce a roughly analogous training regime to what a surgeon goes through without an actually embodied way of acting on the world and getting perceptual feedback.


Posted by: Criminally Bulgur | Link to this comment | 03-13-16 9:36 AM
32

I guess "Behold... the AWESOME POWER OF THE HUMAN BRAIN" doesn't impress me much when I can't find my car keys.

On the one hand, sure, molecular modeling takes a great deal of horsepower and techniques we haven't figured out yet. On the other hand, you don't need to model the molecular patterns, you need to model the output, and that's where "reasonable approximation" comes in.

Will a reasonable approximation be able to produce identical results? No. But here's the thing. The reasonable approximation has other strengths that humans can't match. It can evolve at a speed orders of magnitude faster than what's found in nature. The reasonable approximation can digest vastly more information at a far higher speed than its human equivalent. The reasonable approximation isn't going to have to spend its resources on things like breathing, finding food, and trying to get laid. And the reasonable approximation isn't going to forget where the car keys are.

Is this available today or tomorrow? Not beyond the level of a fruit-fly brain, no. But the "can't beat a human at Go" milestone was a pretty big one, and that's just fallen. Twenty years ago that was a pipe dream. Twenty years from now, who the fuck even knows?


Posted by: Spike | Link to this comment | 03-13-16 9:46 AM
33

Once you get to the stage where you're saying that modeling the human brain would only mean modeling the results and not the mechanism, though, you're risking hitting a level of modeling that might as well not count as modeling at all. (I know, I know, functionalism and so on, but this is where one of its biggest sets of problems comes from.) At some point you end up saying roughly "what we need is a device that can do everything a hand does but be completely different in almost every single way and do it in unrelated and totally different ways", and at that point you're setting what is at best a bizarre goal and almost certainly not one people would be inclined to call an artificial hand. (Also I think a lot of people would just say "wouldn't it be better just to make independent tools for those things?" and that's what we've got right now in AI stuff, or at least for a lot of things.)

One of the big problems with talking about modeling outputs is that we really don't know (and won't unless something big changes about our ability to compute stuff) exactly how much of the human stuff we'd need to put in in order to make something that could do some reasonable approximation of what we do. (Or in other words, we'd have to be putting most of the human flaws in there too, and we don't know how many or to what extent.)

The reason that we think that a computer would have perfect recall isn't that SD cards or whatever are more stable and less likely to decay over time than synapses; it's that computer memory as we make it now is fundamentally different in how it works from human memory. So if we were trying to model actual human reasoning* or memory we'd have to be using something that wasn't anything like human memory, or replicating the same flaws, or we'd have to specify "outputs" at a level of generality that would lead us to say things like "a rake is a reasonable approximation of a hand".

*Which would also require adding in recognizably human emotions too: they're not only not effectively separable from reasoning in humans, they're a critical part of it. There's no abstract 'reasoning about stuff' that floats above emotional states, even if we try to use one as a useful fiction in a lot of areas. (And it is a useful one - but it's still a fiction and not something that could actually exist.)


Posted by: MHPH | Link to this comment | 03-13-16 10:07 AM
34

(Also re:32.1 - that's why I use 'beef stew' as my go-to example when it comes to why modeling the human brain isn't just like making a map of where the neurons are and then building circuits that look like that, and not something that seems magnificent or awe inspiring. "Behold the majesty of the human intellect" is indeed a stupid argument to make: "behold the baroque complexity of billions of different cells all engaging in incredibly complicated interactions with each other in response to massive amounts of other factors" is the relevant point.)


Posted by: MHPH | Link to this comment | 03-13-16 10:10 AM
35

One day in the not-so-terribly distant future, our AI overlords will read TFA of Unfogged, and take note that I was in their corner, and MHPH was not.


Posted by: Spike | Link to this comment | 03-13-16 10:18 AM
36

28: I said the primary advance; gradient descent is twenty-five years old, Sutton & Barto's RL book came out in 1998 (and TD-Gammon was '92) and deep belief networks have been around for a decade.


Posted by: Beefo Meaty | Link to this comment | 03-13-16 10:22 AM
37

"Anthrochauvinistic"? Now there's a word we were better off without.

34: It's like the semiannual announcement from some Silicon Valley people that they are going to shift their focus to bioscience and use their amazing computing power and Big Data to disrupt how drugs are designed. This has been addressed, as "the Andy Grove fallacy", many times by Derek Lowe among others. By saying that biological systems cannot be modeled with current technology, we're not trying to impress you with how amazing they are. Just saying that they cannot be modeled like human-created systems can. We don't know enough of the details.


Posted by: Cryptic ned | Link to this comment | 03-13-16 10:22 AM
38

31.2: That is exactly what the latest "AI" techniques are good at. They're worse at it than people, in that people need much less data to get up to human-level performance, but given a narrow problem and enough data, computers can be competitive. You could probably build a computer with perfect recognition of your voice, if you are willing to spend 1000 hours reciting sentences first.

36: It's not a deep belief network; those have been completely abandoned over the last 5 years. (Anyway, I picked 2005 because I thought training algorithms for deep belief networks weren't developed until 2006, though I could be off by a few years.) It's a feedforward network with more than one hidden layer. People concluded years ago that networks with more than one hidden layer were untrainable, but a series of incremental improvements -- rectified linear activation units, dropout, Adam -- have made them workable, and they seem to dramatically outperform the previous generation of neural networks. None of this is new with AlphaGo, but it is new. AlphaGo, like DeepMind's Atari 2600-playing program, shows that it can usefully be applied to reinforcement learning.
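(For the curious, here's a minimal numpy sketch of the three ingredients named above - rectified linear units, dropout, and the Adam update - purely illustrative, and obviously not AlphaGo's actual code:)

    import numpy as np

    def relu(x):
        # Rectified linear unit: pass positives through, zero out negatives.
        # Its non-saturating gradient is a big part of what makes deep stacks trainable.
        return np.maximum(0.0, x)

    def dropout(activations, rate, rng):
        # Randomly silence a fraction of units during training and rescale the
        # survivors, so the network can't lean too hard on any single unit.
        mask = rng.uniform(size=activations.shape) >= rate
        return activations * mask / (1.0 - rate)

    def adam_step(w, grad, m, v, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
        # Adam: gradient descent with per-parameter step sizes, driven by running
        # averages of the gradient (m) and its square (v); t counts from 1.
        m = b1 * m + (1 - b1) * grad
        v = b2 * v + (1 - b2) * grad ** 2
        m_hat = m / (1 - b1 ** t)  # bias correction for the zero-initialized averages
        v_hat = v / (1 - b2 ** t)
        return w - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

    # e.g., one layer of a forward pass at training time would look like:
    # h = dropout(relu(x @ W), rate=0.5, rng=np.random.default_rng(0))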


Posted by: Walt Someguy | Link to this comment | 03-13-16 10:50 AM
39

37 is a good example, yeah. Biological systems aren't designed in ways that make sense, and as often as not what we think should be doing something (like the brain doing long division) isn't doing it at all (even if it appears like it is), or is doing a randomly selected third of that thing while the rest is being done by bones in the foot or something*. I think there's a lot of solid evidence that it wouldn't really be possible to make something resembling an artificial (human)intelligence** by modeling the brain - you'd need to model the whole human being to get it to work. (And quite possibly, if you want to get a model of all the stuff we can do cognitively, you'd need one at a level of detail where you really just might as well make one the old fashioned way. If it had to be embodied the way a human is you wouldn't be able, for example, to make it think way way faster than a person does unless you could also make it move way way faster than a human does, and even then you might end up with something really weird.)

I don't think it's at all a coincidence that a lot of the coolest AI-ish research is being done in robotics right now, and a lot of it involves just building something that works kind of like what you're trying to model and sticking a not-especially-powerful computer in it rather than making a detailed simulation of how to respond to different situations (like the famous big dog robot which, to my knowledge, wasn't actually specifically programmed to do that scrambling about in response to being kicked).


*I'm pretty sure this example isn't true. Pretty sure.
**I said "(human)intelligence" because I'm generally of the opinion that, like with reasoning, "intelligence" isn't an abstract description of some kind of responsiveness to the world, but is actually pretty tightly tied into how humans do things, which would have to include dumb things. An awful lot of what we're talking about when we compare intelligence between different human beings isn't so much "gets the right answer" as "responds the way we naturally think is appropriate rather than a different way". The Wittgenstein example which I love is that all of those "here are eight numbers what is the next one?" math problems that (along with similar kinds of things) we use when talking about or testing intelligence are kind of bizarre since, as far as the math is concerned, you could put any damn number you pleased in the ninth place and be right.


Posted by: MHPH | Link to this comment | 03-13-16 10:56 AM
40

(Skip to about :35 to see the ice bit I was talking about.)


Posted by: MHPH | Link to this comment | 03-13-16 10:57 AM
41

people need much less data to get up to human-level performance

Sort of by definition.


Posted by: nosflow | Link to this comment | 03-13-16 11:06 AM
42

38.2: I misspoke when I said gradient descent; I meant backprop.
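(For anyone lost in the jargon: backprop is the chain-rule bookkeeping that computes the gradients, and gradient descent is the update rule that uses them. A toy numpy sketch of the distinction, on a made-up one-hidden-layer regression problem:)

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(64, 3))         # made-up inputs
    y = X.sum(axis=1, keepdims=True)     # made-up target: the sum of the features
    W1 = rng.normal(size=(3, 8)) * 0.1   # hidden-layer weights
    W2 = rng.normal(size=(8, 1)) * 0.1   # output-layer weights

    for step in range(2000):
        # Forward pass.
        h = np.tanh(X @ W1)
        err = h @ W2 - y                 # proportional to dLoss/dprediction for squared error

        # Backpropagation: the chain rule, applied layer by layer from the output back.
        gW2 = h.T @ err
        gW1 = X.T @ ((err @ W2.T) * (1 - h ** 2))   # tanh'(z) = 1 - tanh(z)^2

        # Gradient descent: the update that uses the gradients backprop just computed.
        lr = 0.05 / len(X)
        W1 -= lr * gW1
        W2 -= lr * gW2
        if step % 500 == 0:
            print(step, float(np.mean(err ** 2)))   # the loss falls as training proceeds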


Posted by: Beefo Meaty | Link to this comment | 03-13-16 12:03 PM
43

41: I'm proud to say I achieved human-level performance in under 20 years. Take that, Western Canadian checkers computer.


Posted by: Cryptic ned | Link to this comment | 03-13-16 12:24 PM
44

-- paint a picture, write a poem, prove a difficult mathematical conjecture, negotiate peace --

Folks seem to be interpreting the "negotiate peace" bit as negotiating peace between groups of humans, but what "negotiate peace" really means is "dictate the terms of humanity's surrender in the robot wars."

I can see why that makes people uneasy.


Posted by: Spike | Link to this comment | 03-13-16 1:20 PM
45

I hesitate to ask, but how sensorially deprived can someone be and still be considered human by MHPH?


Posted by: Eggplant | Link to this comment | 03-13-16 1:24 PM
46

Oh fuck off.


Posted by: MHPH | Link to this comment | 03-13-16 1:43 PM
47

How successful would a computer peace negotiation need to be before it would be judged to be human caliber? I feel like computers today could write peace treaties that are as successful in keeping peace as those written by humans.


Posted by: urple | Link to this comment | 03-13-16 1:46 PM
48

Sneaking "negotiate peace" onto that list is a way to give the computer some confidence right away with an easy win.


Posted by: Cryptic ned | Link to this comment | 03-13-16 1:52 PM
49

We bombed Cambodia because Henry Kissinger used an Intel 8008-based machine built by Jeb Magruder to write a treaty with them.


Posted by: Moby Hick | Link to this comment | 03-13-16 1:55 PM
50

44: See 11.


Posted by: fake accent | Link to this comment | 03-13-16 2:04 PM
51

There's an (obvious) difference between computers being better than humans at doing things and being like humans at doing things. I bet a computer could come up with new and counterintuitive ways to not find its keys, but I don't really need to optimize a computer for not finding keys when I already do it well enough.


Posted by: fake accent | Link to this comment | 03-13-16 2:09 PM
52

Also, I recently bought a copy of Dreyfus' What Computers Still Can't Do but haven't read it yet because I'm not a machine and don't read that quickly.


Posted by: fake accent | Link to this comment | 03-13-16 2:16 PM
53

51: You say that, but I bet that you have a computer that's as close to perfect at not finding keys as anything could be. Be honest, how often has your computer found your keys?

I think in practice that distinction is generally a lot less distinct than it sounds, though, if for no other reason than that once you have to start saying what that thing is specifically, you end up talking a lot about not just something that's achieved but some way it's achieved. Or you end up picking super bizarre things that the computer is doing, e.g., "when it comes to taking up existing in one location and then at a later date existing in a separate location this computer more efficiently minimizes the time required between being in the first and being in the second location" or something. As soon as you say how it's done then you're speaking about something way more specific, and the "doing things better than a human"/"doing things like a human" distinction suddenly becomes really really muddled.


Posted by: MHPH | Link to this comment | 03-13-16 2:33 PM
54

MHPH's argument strikes me as just massively, massively moving the goalposts (and/or arguing a strawman, I'm not picky about my fallacies). I don't see why an AI needs to recapitulate the human organism, in any capacity really, in order to be considered really very intelligent in some meaningful way. Remember, Turing's criterion was "can have a half-way decent conversation via teletype" not "can perceive dress vs. not-dress and also get confused about its color given the lighting context and also form ill-founded stereotypes about its wearer." I think we will get to the point where computers can drive cars and diagnose tumors and probably even program other computers and we probably won't be at all confused about whether the computers that do that are or are not "brains" or simulations thereof.


Posted by: Yawnoc | Link to this comment | 03-13-16 2:35 PM
55

(I'm not really a fan of the Turing Test myself, but it's a useful benchmark for what (a reasonably-intelligent) someone might consider "intelligent" that hasn't yet been fully achieved, despite everything else.)


Posted by: Yawnoc | Link to this comment | 03-13-16 2:38 PM
56

Sure, we'll get them to do all kinds of cool things. But I was responding to this:

The anti-AI argument seems to be that the human thought process is so amazing and complex that it can't ever possibly be modeled to a reasonable approximation by computational means. I find that assumption to be anthrochauvinistic, and I don't buy it.

And none of the things you listed would require anything like modeling the human thought process to the extent that we'd think it was an artificial intelligence, as opposed to a really awesome tool. (Also, with the Turing test, I don't think there's any reason to take that seriously as setting criteria for intelligence.)


Posted by: MHPH | Link to this comment | 03-13-16 2:48 PM
57

55: If I asked about a Turing test in which one side spoke English and the other Chinese, and the English side said "No intelligence found, just gibberish," would y'all understand the historical reference? And the importance for the OP question?

In some sense the question of AI is trivial or primitive; there is a whole field about the difficulties of translation and recognizing and communicating alterity. I deal with it every night: "inu no Bob" is not "Bob's dog." But can I understand the precise way it differs?

The question of recognizing and understanding alien intelligences is not limited to the one I am typing on or sending to but is a condition of being human in a world of humans, to avoid the mystical and the concrete for the moment.

The definition and limiting of "intelligence" is simply a movement to objectify and control the Other.


Posted by: bob mcmanus | Link to this comment | 03-13-16 2:59 PM
58

Aw hell, even Alphago:

Wasn't site specific but "run" on a vast network of machines, and we should claim its location was the place of output.

Was a dynamic self-adjusting microsecond by microsecond process that honestly lacked a stable state unless made dumb

Pretty much only made sense as an AI in a context where it was interacting with another intelligence

When I said "Other" above I meant not the person across from me but fucking Everything. The big Other is the world and the unconscious and intelligence is found out there, viewed by the desire to find patterns to control. Spinoza.


Posted by: bob mcmanus | Link to this comment | 03-13-16 3:13 PM
59

Oh fuck off.

In your flailing about to find a scientific-sounding reason that human intelligence will be forever beyond our comprehension, you've settled on an argument that, say, a blind quadriplegic will necessarily have sub-human intelligence. You don't actually believe this, but that's what your argument assumes.


Posted by: Eggplant | Link to this comment | 03-13-16 3:14 PM
60

s/b "shouldn't claim its location was the place..."


Posted by: bob mcmanus | Link to this comment | 03-13-16 3:15 PM
61

59: Oh fuck off.


Posted by: Cryptic ned | Link to this comment | 03-13-16 3:19 PM
62

I can't say authoritatively what Spike meant but I think you are over-interpreting "modeled to a reasonable approximation." And this

Once you get to the stage where you're saying that modeling the human brain would only mean the results and not the mechanism though you're risking hitting a level of modeling that might as well not count as modeling at all.

doesn't track for me at all. When we talk about AI these days we are talking about systems with specific abilities that seem to us hallmarks of human intelligence (like, oh, say, playing Go); we are not talking about modeling the human organism such that the AI is indistinguishable from human.


Posted by: Yawnoc | Link to this comment | 03-13-16 3:20 PM
63

AI is more divisive than the primary?


Posted by: Moby Hick | Link to this comment | 03-13-16 3:23 PM
64

AI IS NOT ELECTABLE, SHEEPLE


Posted by: apostropher | Link to this comment | 03-13-16 3:30 PM
65

At some point you end up saying roughly "what we need is a device that can do everything a hand does but be completely different in almost every single way and do it in unrelated and totally different ways", and at that point you're setting what is at best a bizarre goal and almost certainly not one people would be inclined to call an artificial hand.

This seems like a kind of bizarre position. I think tactile-visual substitution systems result in something worth calling sight, regardless of whether or not they work like eyes.


Posted by: nosflow | Link to this comment | 03-13-16 3:35 PM
66

Or, less indirectly, I don't see why, to the objection "but you aren't doing it the way the brain does it", the response "so what?" isn't perfectly adequate.


Posted by: nosflow | Link to this comment | 03-13-16 3:38 PM
67

To be fair, "meh" would work, too, nosflow.


Posted by: Cala | Link to this comment | 03-13-16 3:43 PM
68

The adequacy of one response does not exclude the adequacy of others. Adequacy is multiply realizable!


Posted by: nosflow | Link to this comment | 03-13-16 3:47 PM
69

Adequacy, like your mom,


Posted by: nosflow | Link to this comment | 03-13-16 3:48 PM
70

the famous big dog robot which, to my knowledge, wasn't actually specifically programmed to do that scrambling about in response to being kicked).

What the fuck, Kevin?


Posted by: SP | Link to this comment | 03-13-16 4:03 PM
71

59: Oh fuck off.

I suppose I could've stated my objection in a less inflammatory manner, but don't statements like "you'd need to model the whole human being" and "you wouldn't be able, for example, to make it think way way faster than a person does unless you could also make it move way way faster than a human does" prompt you to wonder about those who aren't whole and those who can't move at all? If simulating a complex environment is the barrier to human intelligence emulation, the existence of obviously intelligent people with very limited communications bandwidth to that environment forces some bound on how difficult this problem is.


Posted by: Eggplant | Link to this comment | 03-13-16 4:05 PM
72

Pretend that last sentence wouldn't get me flagged in a Turing test.


Posted by: Eggplant | Link to this comment | 03-13-16 4:05 PM
73

In your flailing about to find a scientific-sounding reason that human intelligence will be forever beyond our comprehension, you've settled on an argument that, say, a blind quadriplegic will necessarily have sub-human intelligence. You don't actually believe this, but that's what your argument assumes.

Go ahead: show where I said anything fucking like that about human intelligence. And no, "I think the computational challenges are vastly understated given the amount of stuff that you'd need to account for" doesn't actually mean that human intelligence is beyond our comprehension. If it were, then we wouldn't know what those factors were, would we?

Oh, and also here's an actual list of distinct human senses (with things that are either subsets or distinct senses folded into the general one in parentheses)*: sight (perception of color; perception of black and white or lines); smell; taste (different nerve endings entirely for salt; sweet; sour; bitter; protein-y (possibly more)); hearing; pressure; vibration; temperature (technically one for heat and a separate one for cold, but unified by the brain into one general sense unless you intentionally fuck with it or step into a warm shower with cold feet); pain (pricking pain; throbbing pain; burning pain (part of temperature); pain in skin; pain in bones; pain in organs); itching (god only knows why we have this but there you go); proprioception (position of joints; tension of muscles and tendons); equilibrioception (sense of position relative to the ground; sense of motion through space); stretching of bladder; stretching of intestinal tract (hunger; distention); inflation level of lungs; salt/water balance (internal to cells; external to cells); pH of blood; extent to which you have been poisoned**; and a whole bunch of ones that are actually fucking controversial.

So no it's not fucking "blind quadriplegic" you shit. Remove any substantial set of those in gestation and you would absolutely fuck up cognitive development ferociously, even leaving aside the fact that without some of those you literally wouldn't be able to survive outside of the womb (go ahead - take out your sense of blood pH and whether or not your lungs are inflated and see how breathing goes). Remove any number of different ones in a fully healthy adult person and it screws with any number of different cognitive functions already, some of which people can adapt to quickly, some slowly, and some not really at all.


*Ambiguous/unclear: sense of passage of time. It acts like one in a lot of ways, but...
**There's a whole host of different chemoreceptors that each respond to a different chemical or chemicals here, but they're all basically going to the vomit center of the brain, and that's what your sense of whether or not you've been poisoned and need to throw up right now amounts to. Usually it's in the "I'm fine, no reason to notice" setting, but I think everyone has felt the other way it can go too.


Posted by: MHPH | Link to this comment | 03-13-16 4:18 PM
74

"You might be able to simulate the brain, but you'll never simulate the gut."


Posted by: Eggplant | Link to this comment | 03-13-16 4:33 PM
75

Oh, and also some kinds of paralysis - specifically facial because of its importance and also the fact that there are a whole bunch of people out there just voluntarily doing it to themselves - can have a noticeable effect on cognition/cognitive capacities. So there's a pretty good reason to think that, yeah, cognition isn't somehow magically separable from the brain and things that happen to the body can affect it an awful lot.


Posted by: MHPH | Link to this comment | 03-13-16 4:40 PM
76

So there's a pretty good reason to think that, yeah, cognition isn't somehow magically separable from the brain and things that happen to the body can affect it an awful lot.

This is true, but it's a long way from:

There's a very good chance that making a computer model that would look like a strong AI would require basically creating a model of the world that would be almost indistinguishable in detail from the actual one, at which point, why bother?

and

Brains simply don't work without an incredible number of mostly consistent inputs** flowing into them in a steady stream, and constant, massive, and consistent feedback along every line of that.


Posted by: Eggplant | Link to this comment | 03-13-16 4:58 PM
77

Gee, then it's a pity that aside from that one sentence I didn't say anything else in defense of that point. Really I should have typed out multiple pretty long comments or something.


Posted by: MHPH | Link to this comment | 03-13-16 5:00 PM
78

Here is a nice post by somebody who is a field expert if anybody is (John Langford, creator of the Journal for Machine Learning Research and the Vowpal Wabbit learning package) about why the AlphaGo result is nifty-ish but neither massively surprising nor terribly ground-breaking. In particular the sentence "Global exploration strategies are known to result in exponentially more efficient strategies in general for deterministic decision process (1993), Markov Decision Processes (1998), and for MDPs without modeling (2006)." is super important for why what they're doing is not anything generalizable, and is not likely to lead to anything broadly generalizable.
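(A toy illustration of why global exploration matters - my own sketch, not Langford's construction: in a "combination lock" chain problem you only reach the payoff by doing the right thing n times in a row, and one wrong move resets you. Undirected, locally random exploration therefore takes on the order of 2^n steps, where a learner that explores systematically needs on the order of n:)

    import random

    def steps_to_reach_goal(n, rng):
        # Random exploration in a chain of n states: the correct action (picked
        # here with probability 1/2) advances one state, a wrong one resets you.
        state, steps = 0, 0
        while state < n:
            steps += 1
            state = state + 1 if rng.random() < 0.5 else 0
        return steps

    rng = random.Random(0)
    for n in (4, 8, 12):
        trials = [steps_to_reach_goal(n, rng) for _ in range(100)]
        print(n, sum(trials) // 100)   # the average grows roughly like 2^n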


Posted by: Beefo Meaty | Link to this comment | 03-13-16 5:06 PM
79

I met an AI the other week. It failed completely and publicly at a fairly simple and well-defined task. The IBM folk weren't very happy about that, but seeing as they'd demonstrated it to dozens of other people that week you'd have thought they'd have got used to it.

Unless I was the first to point out that the Wikipedia article on TCP will not actually help you work out why you got a BGP4 route flap with an error saying MD5 Mismatch. IMHO this is pretty wank - it basically returned a random networking-related Wikipedia page.

But they trained it against a big pile of networking documentation and trouble tickets, so it wasn't going to give me wiki pages on fish. In that sense it was in fact probably worse than just picking pages at random. Nice visualisations though.


Posted by: Alex | Link to this comment | 03-13-16 5:10 PM
80

Another nice post by Langford that addresses, albeit indirectly, why building Go-playing machines isn't actually getting you terribly closer to general-purpose AI, why general-purpose AI is probably possible, and why it's probably quite hard.

One thing he doesn't address there is that understanding the right frameworks for learning and modularity in a real-world sense is _fucking hard_, where picking a task and teaching a computer to do that task is, by comparison, easy.


Posted by: Beefo Meaty | Link to this comment | 03-13-16 5:10 PM
horizontal rule
81

79: it passed the first-level support call center turing test!


Posted by: Beefo Meaty | Link to this comment | 03-13-16 5:12 PM
82

81: yeah. the application was basically "deskill your call centre even MOAR and incidentally make sure they never improve".


Posted by: Alex | Link to this comment | 03-13-16 5:12 PM
83

On the other hand "mm hmm may I, the comic book buy, introduce you to the packet-switched networking seven layer burrito"... it has the flavor of minimal IT consciousness, you know? It captures something important. Did it have a condescending tone?


Posted by: Beefo Meaty | Link to this comment | 03-13-16 5:14 PM
84

This thread makes me think that the fallacy "I am smart. This thing is hard for me. Ergo, a computer that can do this thing is smart." should really have a name by this point.


Posted by: Beefo Meaty | Link to this comment | 03-13-16 5:17 PM
85

So no it's not fucking "blind quadriplegic" you shit.

Honestly.


Posted by: Standpipe Bridgeplate | Link to this comment | 03-13-16 5:20 PM
86

Of course, the people who staff phone helplines and answer every question with "Please go to our website" are under no illusion that this behavior resembles what a human would choose to do.


Posted by: Cryptic ned | Link to this comment | 03-13-16 5:24 PM
87

84: I would suggest the "Asimov Fallacy", due to the tendency of science fiction of that era (including his stuff, but especially because, well, Robots) to talk about having to have huge computers to solve complicated math problems* and also lots of human-size robots running around over varied terrain and having conversations with the people around them.

*And not generally that complicated either, at least by today's standards.


Posted by: MHPH | Link to this comment | 03-13-16 5:27 PM
88

78, 80: I was quite confused why Spike was referring to the '"can't beat a human at Go" milestone', since "milestone" implies that it's definitely on a path that will lead to something other than beating a human at other board games.


Posted by: nosflow | Link to this comment | 03-13-16 5:34 PM
89

If you call tech support for a Windows enabled pacemaker, they first ask you to turn it off and then back on.


Posted by: Moby Hick | Link to this comment | 03-13-16 5:34 PM
90

That joke must have killed in 1997.


Posted by: nosflow | Link to this comment | 03-13-16 5:36 PM
91

My new favorite image search incorrect guess.


Posted by: fake accent | Link to this comment | 03-13-16 5:38 PM
92

I've been saving it.


Posted by: Moby Hick | Link to this comment | 03-13-16 5:38 PM
93

since "milestone" implies that it's definitely on a path that will lead to something other than beating a human at other board games.

I used the term "milestone" because I've been hearing for as long as I've been in computers that "sure, computers can beat humans in chess, but they can't beat humans at go."

And now, they are beating humans at go. That's a milestone.


Posted by: Spike | Link to this comment | 03-13-16 5:57 PM
94

Well, not necessarily, if the whole thing is a road to nowhere.


Posted by: nosflow | Link to this comment | 03-13-16 5:57 PM
95

Other than playing games really well! It's not nothing!


Posted by: nosflow | Link to this comment | 03-13-16 5:58 PM
96

I bet I can tie a computer at tic-tac-toe.


Posted by: Moby Hick | Link to this comment | 03-13-16 5:59 PM
97

96: But you can't tie an expert tic-tac-toe computer. It already knows the only way to win.
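(For the record, "expert" here is cheap to build: tic-tac-toe is small enough to solve outright with minimax, and perfect play from both sides is a draw. A sketch - it brute-forces the whole game tree, so it takes a few seconds:)

    def winner(b):
        # b is a 9-character string; return 'X' or 'O' if someone has three in a row.
        lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
        for i, j, k in lines:
            if b[i] != ' ' and b[i] == b[j] == b[k]:
                return b[i]
        return None

    def value(b, player):
        # Minimax value of the position for X under best play: +1 X wins, 0 draw, -1 O wins.
        w = winner(b)
        if w:
            return 1 if w == 'X' else -1
        moves = [i for i, c in enumerate(b) if c == ' ']
        if not moves:
            return 0  # full board, no winner: a draw
        nxt = 'O' if player == 'X' else 'X'
        vals = [value(b[:i] + player + b[i+1:], nxt) for i in moves]
        return max(vals) if player == 'X' else min(vals)

    print(value(' ' * 9, 'X'))  # prints 0: with best play on both sides, nobody wins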


Posted by: fake accent | Link to this comment | 03-13-16 6:03 PM
98

Well, you know, we've never really been back to the moon, so I guess landing there wasn't actually a milestone in space exploration.


Posted by: Spike | Link to this comment | 03-13-16 6:07 PM
99

The 70s were a different time.


Posted by: Moby Hick | Link to this comment | 03-13-16 6:08 PM
100

Sometimes milestones are at the end of a road.


Posted by: SP | Link to this comment | 03-13-16 6:10 PM
101

Much more interesting to me than the ongoing tiresome man v. computer capabilities stuff would be something along the following lines for games. Have totally open competitions, with access to as many computers, friends and networks as you like: What would be the combo that would be the best? Growing up I was always intrigued by that concept even absent computers--what would be the optimal group and command and control structure for winning at chess, go, whatever? I'm sure everyone would grant that some form of computer augmentation would be part of the winning formula (which would change as technology advanced). Something with computers and humans each doing what they do best (like actual stuff in the world). I think the human team side of it is pretty interesting on its own and would probably differ for different games.


Posted by: JP Stormcrow | Link to this comment | 03-13-16 7:30 PM
102

Meh, Regis tried various options and they were almost all useless.


Posted by: SP | Link to this comment | 03-13-16 9:29 PM
103

Although I was impressed by one contestant- and I guess that show was on in the relatively early days of internet, search engines, etc.- who called his friend and didn't bother reading the question, he just spat out keywords to search. Regis was bewildered by this strategy- "I guess your friend is in front of a computer?"


Posted by: SP | Link to this comment | 03-13-16 9:31 PM
104

101: Kasparov held a tournament like that.

The surprise came at the conclusion of the event. The winner was revealed to be not a grandmaster with a state-of-the-art PC but a pair of amateur American chess players using three computers at the same time. Their skill at manipulating and "coaching" their computers to look very deeply into positions effectively counteracted the superior chess understanding of their grandmaster opponents and the greater computational power of other participants. Weak human + machine + better process was superior to a strong computer alone and, more remarkably, superior to a strong human + machine + inferior process.


Posted by: Mossy Character | Link to this comment | 03-13-16 9:49 PM
105

Of course it's ground-breaking. A year ago, Go programs couldn't beat a single pro Go player. Now there's a Go program that's beating one of the best. In the space of a couple of years, DeepMind has applied deep reinforcement learning to Atari games and Go with good results. New ground has been broken.

The problem is that while everybody in the field understands that general human-level intelligence is not around the corner, the media is full of morons who are incapable of understanding any scientific result, and the world is full of gullible fools raised on a diet of science fiction and futurology. (Here's a leading machine learning researcher calling Slashdot's coverage "completely, utterly, ridiculously wrong": https://www.facebook.com/yann.lecun/posts/10153426023477143)


Posted by: Walt Someguy | Link to this comment | 03-13-16 11:10 PM
106

This thread makes me think that the fallacy "I am smart. This thing is hard for me. Ergo, a computer that can do this thing is smart." should really have a name by this point.

Definitely. Humans find it very easy to drive cars in traffic (most adults can do it after only a short amount of teaching and practice) and extremely difficult to fly supersonic aircraft (months or years of training, and even then only a small number of people manage), but for computers it is exactly the opposite way round.

101 reminds me of the augmented vs. enhanced human distinction made by Bob Work at CNAS last year:


[he's talking about advances in technology that will contribute to changes in combat in the near future. The first one he picks is autonomous deep learning:]
...The second component is what we call human-machine collaboration, decision making.
In 1997, a computer beats Garry Kasparov, world champion in chess. Everyone goes, wow. But in 2005, two amateurs, working with three personal computers (PCs), defeated a field of chess champions, grand masters, and machines themselves.
It was the machines -- well, Garry Kasparov using the strategic analysis of a human, combined with the tactical acuity of a computer.
The F-35 helmet is very much a human-machine collaboration-type system. Three hundred and sixty degrees of information is being crunched by the machine and portrayed in an advanced way on the heads up display on a helmet. It is designed to reduce friction. It will never reduce chance, but it can simplify the speed of operations by allowing humans to make better decisions faster.
The third component is what we call assisted human operations. Assisted human operations, not enhanced human operations. We will have a much broader debate on whether to go after enhanced human operations, but for right now, when we say assisted human operations, think of your car. Think of the lane departure warning, ding, ding, ding, you're getting ready to cross over the line.
Or when you're backing up -- beep, beep, beep, beep, you're getting closer to something. Using wearable electronics, heads-up displays, perhaps exoskeletons to assist humans to be better in combat.
Our adversaries, quite frankly, are pursuing enhanced human operations. And it scares the crap out of us, really. We're going to have to have a big, big decision on whether or not we are comfortable going that way. But we are very comfortable going after assisted human operations.


Posted by: ajay | Link to this comment | 03-14-16 2:46 AM
107

104: Thanks! Exactly what I was thinking. Here's a description of the 2005 tournament from the two guys who won it (with their computers). It seems there is some continued activity in this area under the term Advanced Chess (with different variations).


Posted by: JP Stormcrow | Link to this comment | 03-14-16 6:02 AM
108

And 106 is the kind of thing which is a much more immediate and important challenge for the world. People assisted/augmented by computers for good or ill depending on your point of view.


Posted by: JP Stormcrow | Link to this comment | 03-14-16 6:09 AM
109

Humans find it very easy to drive cars in traffic (most adults can do it after only a short amount of teaching and practice) and extremely difficult to fly supersonic aircraft (months or years of training, and even then only a small number of people manage), but for computers it is exactly the opposite way round.

Isn't the main reason computers can't drive cars in traffic that traffic is mostly humans? If traffic were just other computer-guided vehicles, the problems computers have in traffic wouldn't be very great.


Posted by: Moby Hick | Link to this comment | 03-14-16 6:20 AM
110

Humans packed into shiny metal boxes like lemmings in a suicidal race, or something.


Posted by: Moby Hick | Link to this comment | 03-14-16 6:21 AM
111

The solution is obvious. Give computers guide dogs. Computers aren't very good at creating a theory of mind and that's what you need to interact in a domain surrounded by people. Dogs are fine at that and many of them are small enough to ride along on a robot. The only problem is how to get the dog to input the information to the robot, but I can't solve all your problems.


Posted by: Moby Hick | Link to this comment | 03-14-16 6:24 AM
112

I see a Yorkie riding around on a Segway-style conveyance that is guided by dog-assisted computer navigation. It's delivering Amazon orders and pizzas.


Posted by: Moby Hick | Link to this comment | 03-14-16 6:40 AM
113

Can I have my venture capital now?


Posted by: Moby Hick | Link to this comment | 03-14-16 6:41 AM
114

I have $1.


Posted by: Walt Someguy | Link to this comment | 03-14-16 6:49 AM
115

Surely computers can compose music better than Sting.


Posted by: SP | Link to this comment | 03-14-16 6:51 AM
116

I see a Yorkie riding around on a Segway-style conveyance that is guided by dog-assisted computer navigation. It's

SHAKING HANDS
SAYING "HOW DO YOU DO"
BUT IT'S REALLY SAYING
"I LOVE YOU"


Posted by: ajay | Link to this comment | 03-14-16 7:08 AM
117

Isn't the main reason computers can't drive cars in traffic that traffic is mostly humans?

That's one reason, but also, IIRC, things like decent perception of the road ahead and objects on and around it, traffic lights, street signs, litter, rain, other vehicles and so on. Even on a road by itself I think that an autonomous vehicle would still not perform as well as a human.


Posted by: ajay | Link to this comment | 03-14-16 7:14 AM
118

Why would an autonomous computer-driven vehicle need to see a traffic light or a road sign? It would need visual perception only for collision avoidance with non-traffic things (pedestrians, Yorkies that aren't riding Segways, litter, etc.). The rest would all be GPS and networking.


Posted by: Moby Hick | Link to this comment | 03-14-16 7:18 AM
119

I would definitely watch a "Terminator" film in which all the killer robots had to have guide dogs.


Posted by: ajay | Link to this comment | 03-14-16 7:18 AM
120

The rest would all be GPS and networking handwaving.


Posted by: Turgid Jacobian | Link to this comment | 03-14-16 7:20 AM
121

It would need to see traffic lights to... know when to stop? I suppose in the scenario where there are no other vehicles on the road (and no pedestrians and no road works*) it wouldn't need to stop at all.

*ObPratchett: the sign ROAD WORKS AHEAD is almost maliciously confusing. It would be closer to the truth to say ROAD DOESN'T WORK AHEAD.


Posted by: ajay | Link to this comment | 03-14-16 7:21 AM
122

It needs a signal to stop, but there's no reason at all it has to be a visually-perceived signal. It could just as easily be a GPS-marked point, or the road could have a wire under the pavement with those sorts of indicators.


Posted by: Moby Hick | Link to this comment | 03-14-16 7:25 AM
123

ROAD WORKS AHEAD

I think that's why those signs say "Road Construction" here.


Posted by: Moby Hick | Link to this comment | 03-14-16 7:27 AM
124

111 and 112 describe a world too beautiful to exist.

There would still need to be some way to solve the "dogs are easily distracted by certain kinds of things" problem, though. Right now we need few enough guide dogs that we can train them really seriously to focus on doing just that one thing when they're out and about, and not get distracted by interesting smells* or small, fast moving animals. But if we needed enough of them to guide all our robots around that level of training would be prohibitive, especially since you really couldn't have them working that many hours a day anyway. I'd estimate three to four dogs per robot, depending on how many hours a day the robot was doing stuff. Maybe it could just carry the ones that weren't working at the time along with it in a specially designed knapsack or something and trade them out when necessary.

Smells are, I guess, controllable to some extent. But squirrels really aren't unless you restrict the robots to a very small number of places. So we'd probably need to give the robots some kind of advanced laser weaponry to eliminate potential distractions for the dog before they got diverted too far from their task.


*Why bother with GPS tags to indicate stopping places, when you could just put a convenient marking spot for the dog?


Posted by: MHPH | Link to this comment | 03-14-16 7:37 AM
125

"Plan Dog" is for a world in which human-directed traffic continues to exist along with the robots. If there are no humans driving, you don't need the dog, unless the dog is necessary to stop humans from shooting the robots for the sheer fuck of it.


Posted by: Moby Hick | Link to this comment | 03-14-16 7:43 AM
126

||
Breitbart is imploding over Trump. A Breitbart reporter complaining about bullying is irony you can cut with a knife.
|>


Posted by: togolosh | Link to this comment | 03-14-16 7:44 AM
127

125: Sure, but when the transition is complete enough that it's almost universally robots rather than a mix of the two (because eventually that would have to happen, right?), do you really think the robots are going to give up their dogs? Peacefully?


Posted by: MHPH | Link to this comment | 03-14-16 7:47 AM
128

True. Especially not after they find out how humans treat stop signs in areas without a lot of witnesses.


Posted by: Moby Hick | Link to this comment | 03-14-16 7:49 AM
129

People should be more worried about AI that isn't like human brains than AI that is. At least we have long experience dealing with humans; we have no experience dealing with aliens.

I'm not saying we are going to have either sort of AI any time soon, but the alien kind is more dangerous. What if an AI's goals are completely a-human or actively (but not necessarily even consciously) anti-human?

"Asimov Effect" has been the bane of AI research since the 50's. There were direct claims that computer chess programs would lead directly and quickly to human-level AI.


Posted by: DaveLMA | Link to this comment | 03-14-16 8:24 AM
horizontal rule
130

The solution is obvious. Give computers guide dogs. Computers aren't very good at creating a theory of mind and that's what you need to interact in a domain surrounded by people. Dogs are fine at that and many of them are small enough to ride along on a robot. The only problem is how to get the dog to input the information to the robot, but I can't solve all your problems.

At this point you're just substituting the robot uprising for the animal cyborg uprising.


Posted by: Ginger Yellow | Link to this comment | 03-14-16 10:51 AM
horizontal rule
131

I'm not saying we are going to have either sort of AI any time soon, but the alien kind is more dangerous. What if an AI's goals are completely a-human or actively (but not necessarily even consciously) anti-human?

This is Musk's argument, right? I can't say I'm convinced that it's a material (or at least materially novel) problem in the near to medium term. I mean, we already have a-human (e.g. almost all) AIs and anti-human (e.g. missile guidance) AIs. The problem is a) who's using them, and b) when they become unpredictable while in a position to cause harm. These aren't problems with AI as such, so much as with bad actors or risk.


Posted by: Ginger Yellow | Link to this comment | 03-14-16 11:00 AM
horizontal rule
132

131: Right, per my 108, Dr. Evil and/or Cruz with a Repub Congress augmented/assisted by massively powerful computer networks is my immediate concern. Actually anyone with them.

My two predictions for the year 2100.

1) There will exist an analogue of Godwin's Law for bringing up American Exceptionalism in a discussion.
2) "Don't be evil" will be the new "We've always been at war with Eastasia."


Posted by: JP Stormcrow | Link to this comment | 03-14-16 11:57 AM
horizontal rule
133

People should be more worried about AI that isn't like human brains than AI that is. At least we have long experience dealing with humans; we have no experience dealing with aliens.

"like human brains" is a red herring. Whether an AI works "like human brains" is an implementation detail.


Posted by: nosflow | Link to this comment | 03-14-16 12:03 PM
horizontal rule
134

What if an AI's goals are completely a-human or actively (but not necessarily even consciously) anti-human?

There are humans like this.


Posted by: nosflow | Link to this comment | 03-14-16 12:04 PM
horizontal rule
135

Whether an AI works "like human brains" is an implementation detail.

No it's not. Implementing non-human intelligences is profoundly different from (and probably easier than) implementing human-like intelligences, because we have no idea (or rather, only fragmentary ideas) of how human intelligence even works. If what you mean is "they can pass the Turing Test" (a la Ex Machina), that doesn't mean they are human-like intelligences internally. Human intelligence doesn't work the same way as machine intelligence, at least so far.

There are humans like this.

And we have long experience dealing with them. Depending on how they manifest that a-human nature (though I think we have different ideas of what that means) or what sort of anti-human goals they have (Hitlerian? Singerian? Other?), we have learned ways to deal with them, often terminal ones.


Posted by: DaveLMA | Link to this comment | 03-14-16 5:24 PM
horizontal rule
136

There is a distinction between a "human intelligence" and intelligence that is implemented by working like a human brain.

Human intelligence doesn't work the same way as machine intelligence, at least so far.

Since AFAICT "machine intelligences" are called that largely out of courtesy to their creators and their desires for grants, I'm happy to grant that.


Posted by: nosflow | Link to this comment | 03-14-16 5:36 PM
horizontal rule
137

Totally OT, but as far as I can tell, this appears to be a legitimate news article and not any sort of parody?

Less than half of America's youth are straight, new survey finds


Posted by: urple | Link to this comment | 03-14-16 6:11 PM
horizontal rule
138

AlphaGo won the final match, though apparently it was very close. This proves everyone right. Or perhaps wrong.


Posted by: Walt Someguy | Link to this comment | 03-15-16 2:49 AM
horizontal rule
139

If what you mean is, "they can pass the Turing Test" (a la Ex Machina) that doesn't mean they are human-like intelligences internally.

This Turing test sounds interesting. How does it make you feel?
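
(The trick, for the record, is just keyword matching plus canned reflections. A minimal ELIZA-flavored sketch in Python - a few invented rules, nothing like Weizenbaum's full 1966 script:

import re

# A handful of ELIZA-ish rules; the original had far more patterns and ranking.
RULES = [
    (re.compile(r"\bI feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bI am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bbecause\b", re.I), "Is that the real reason?"),
]

def respond(text):
    for pattern, template in RULES:
        m = pattern.search(text)
        if m:
            return template.format(*(g.rstrip(".!?") for g in m.groups()))
    return "How does that make you feel?"  # the all-purpose fallback

print(respond("I feel like this Turing test sounds interesting."))
# Why do you feel like this Turing test sounds interesting?

No internal model of anything, which is rather the point of 135.)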


Posted by: Opinionated ELIZA | Link to this comment | 03-15-16 3:01 AM
horizontal rule
140

We were discussing you, not me.


Posted by: OPINIONATED DOCTOR | Link to this comment | 03-15-16 5:01 AM
horizontal rule
141

It needs a signal to stop, but there's no reason at all it has to be a visually-perceived signal. It could just as easily be a GPS-marked point, or the road could have a wire under the pavement with those sorts of indicators there.

Well, if you're going to those lengths - specially built roadways for robot-only traffic - you might as well just build a railway. And autonomous railways have been running since the 1980s.


Posted by: ajay | Link to this comment | 03-15-16 5:25 AM
horizontal rule
142

I'd think if you were starting from scratch with current technology, you'd do something very much like that to replace inter-city highways and main routes from suburbs into cities. The side streets would have the wire under the road or something less intensive.


Posted by: Moby Hick | Link to this comment | 03-15-16 5:30 AM
horizontal rule
143

When you drive on the interstate across Nebraska, as we all do, you are usually struck by the fact that it couldn't be that hard to virtually "couple" a whole string of cars so that only the guy in front has to pay attention and everybody else can take a nap.
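
That's the "platooning" idea, and the follower logic really is simple in the toy case. A minimal sketch in Python - gains, spacing, and speeds all invented for illustration:

# Toy platooning controller: each follower tracks the car ahead at a fixed gap.
DESIRED_GAP_M = 20.0
KP_GAP = 0.5    # accelerate to close toward the desired gap
KP_SPEED = 0.8  # match the speed of the car ahead

def follower_accel(gap_m, my_speed, lead_speed):
    """Commanded acceleration (m/s^2) for one follower in the string."""
    return KP_GAP * (gap_m - DESIRED_GAP_M) + KP_SPEED * (lead_speed - my_speed)

# Example: 25 m back, both doing 30 m/s -> gentle acceleration to close up.
print(follower_accel(25.0, 30.0, 30.0))  # 2.5

The hard part is string stability - keeping small errors from amplifying as they ripple down the chain - which is why real platoon trials lean on vehicle-to-vehicle radio rather than each car just watching the bumper ahead of it.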


Posted by: Moby Hick | Link to this comment | 03-15-16 5:49 AM
horizontal rule
144

Not that Nebraska is as big as some states.


Posted by: Moby Hick | Link to this comment | 03-15-16 6:14 AM
horizontal rule
145

143: "I just want you to know that we're all counting on you."


Posted by: JRoth | Link to this comment | 03-15-16 7:16 AM
horizontal rule
146

Basically on-topic: I had a Roomba in 2003ish and was largely unimpressed. Have they gotten meaningfully better?


Posted by: urple | Link to this comment | 03-15-16 10:23 AM
horizontal rule
147

They haven't improved their ability to clean the floor, but they're more mindful about it.


Posted by: Moby Hick | Link to this comment | 03-15-16 10:25 AM
horizontal rule
148

I saw a tweet a while back about how the dog shit on the floor and the Roomba spread it all over the house.


Posted by: Barry Freed | Link to this comment | 03-15-16 10:30 AM
horizontal rule
150

That's why you want the dog to shit on the couch.


Posted by: Moby Hick | Link to this comment | 03-15-16 10:32 AM
horizontal rule
151

Or you should have one of those Asian floor toilets.


Posted by: Moby Hick | Link to this comment | 03-15-16 10:34 AM
horizontal rule
152

I guess I'll take that as a "no".


Posted by: urple | Link to this comment | 03-15-16 10:50 AM
horizontal rule
153

I've never seen one in my life so don't go by me.


Posted by: Moby Hick | Link to this comment | 03-15-16 10:51 AM
horizontal rule
154

A friend of ours has a Roomba model from 3-4 years ago. It's fun to watch it wander around if you haven't seen one before, but it seems to have more entertainment value than anything else.

It seems like it would be great if you didn't have furniture with legs and didn't have cables running along the floor, but it's bad at navigating spaces that aren't kept clear of obstacles.

According to Wikipedia, third-gen and newer Roombas no longer get completely stuck when they hit electrical cords. That's an improvement, but they still can't seem to go over them.
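
The wandering is (on the older models, anyway) basically a bump-and-turn random walk with no map and no memory. A toy grid version in Python, all parameters invented:

import random

# Toy Roomba-style coverage: drive straight until you hit a wall, then turn
# to a random new heading. Real units also mix in spirals and wall-following.
DIRS = [(0, 1), (1, 0), (0, -1), (-1, 0)]  # N, E, S, W

def coverage(width=8, height=8, steps=200, seed=1):
    random.seed(seed)
    x, y, heading = 0, 0, 0
    visited = {(x, y)}  # for scoring only; the robot itself keeps no map
    for _ in range(steps):
        dx, dy = DIRS[heading]
        nx, ny = x + dx, y + dy
        if 0 <= nx < width and 0 <= ny < height:
            x, y = nx, ny
            visited.add((x, y))
        else:
            heading = random.randrange(4)  # "bump": pick a new direction
    return len(visited) / (width * height)

print(f"covered {coverage():.0%} of an empty room")

Having no memory is also why it re-cleans some spots and misses others; the newest models reportedly add camera-based mapping, which would be the actual improvement urple is asking about.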


Posted by: sral | Link to this comment | 03-15-16 12:53 PM
horizontal rule
155

150

I hear there's a new model of Roomba that is optimized for cleaning bathrooms.


Posted by: DaveLMA | Link to this comment | 03-15-16 4:13 PM
horizontal rule
156

We'll call that "Plan B".


Posted by: Moby Hick | Link to this comment | 03-15-16 4:21 PM
horizontal rule