Re: Guest Post - Effectively Irritating

1

You people won't be laughing when Roko's Basilisk exhumes a simulation of you from the dead and consigns you to a silicon Gehenna for all eternity.


Posted by: snarkout | Link to this comment | 08-13-15 10:31 AM
horizontal rule
2

Honestly if it comes down to choosing between an all-consuming apocalyptic fireball and these folks I'm on team all-consuming apocalyptic fireball.


Posted by: Roberto Tigre | Link to this comment | 08-13-15 10:34 AM
horizontal rule
3

I was going to send in the same article.

My take-away was to reflect on the fact that charitable/altruistic impulses are, like everything else in life, socially situated. They don't arise in a vacuum, and there is a natural and human desire to try to work on things that other people in one's social community value and, to go along with that, to convince other people that the things one is concerned about and working on are important and valuable.

This poses a problem for people who want to re-think charity from first principles.

I would also note that the biggest success story mentioned in the article, getting large buyers to oppose certain factory farming methods, is one which wasn't started by the SV folks -- other people have put in decades laying the groundwork for that.


Posted by: NickS | Link to this comment | 08-13-15 10:38 AM
horizontal rule
4

Put another way: The number of future humans who will never exist if humans go extinct is so great that reducing the risk of extinction by 0.00000000000000001 percent can be expected to save 100 billion more lives than, say, preventing the genocide of 1 billion people.
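For what it's worth, the arithmetic that claim seems to rest on looks something like the following; the figure of roughly 10^30 potential future lives is my own stand-in (the quote doesn't say which estimate it assumes), and it is doing essentially all of the work:

```latex
\Delta p \times N_{\text{future}}
  = \underbrace{10^{-19}}_{0.00000000000000001\,\%}
    \times \underbrace{10^{30}}_{\text{assumed potential future lives}}
  = 10^{11} = 100\text{ billion}
```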

I've argued this with philosophy types before, and it just seems like a category error to me. You can't "save" the life of someone who will never exist, and it seems self-evident to me that improving the life of an actually existing person is (literally) infinitely more valuable than enabling a future life. I'll concede that improving the life of a hypothetical future person has value - I'm not being generationally solipsistic in that sense. But the issue is the people who are alive at the relevant time, not the ones who aren't. To put it briefly, the problem with human extinction isn't all the people who won't be born, it's all the people who will die.


Posted by: Ginger Yellow | Link to this comment | 08-13-15 10:39 AM
horizontal rule
5

This is why I like deontological ethics.


Posted by: Moby Hick | Link to this comment | 08-13-15 10:41 AM
horizontal rule
6

There was a symposium about so-called effective altruism at the Boston Review recently. Rob Reich's response. Here's a longer piece:

This choice has built political and institutional blind spots into the way the effective altruism movement redistributes money. All charities exist within a broader ecosystem of service providers that includes the welfare state. NGOs that distribute bed nets and provide vaccinations operate alongside an array of public health programs run by the state. NGO-run schools operate up the road from government-run schools. While these state-run programs may be performing poorly and lack resources, they are still the core provider for the majority of the poor in many developing countries. For example, despite the emphasis on privatization of the Indian education system in recent years, well over half of the rural poor are in government schools. As one education NGO leader in India remarked to me, none of us have the illusion that we can replace the state. And evidence from the literature on NGOs suggests that their much-lauded inclination to target the needy is mediated by their own organizational interests, like demonstrating impact to funders and establishing themselves in places where NGO workers want to live. Often, NGOs end up serving a group we might call the "middle poor," rather than those most in need. Instead, the very poor must often rely on the state to deliver services, inadequate as they may be.

The quality of the state's social service provision thus critically shapes welfare outcomes for many of the poorest people in the world. Yet it seems that once effective altruists have--for good reasons--ruled out governments as eligible recipients of effective aid, their attention to the state drops off entirely. The RCTs used to evaluate the impact of effective altruism's favorite charities typically examine outcomes only among individuals targeted by the charity's programs. State-run public service institutions running in the background are not the intended target of most of these charities, so they are rarely featured in RCTs designed to measure the effectiveness of a program. Thus unintended institutional effects on government welfare programs are seldom incorporated into effective altruists' calculations about worthwhile charities to fund.

Yet any scholar of the political economy of development would be skeptical of the assumption that the welfare state in poor countries would remain unaffected by a sizeable influx of resources into a parallel set of institutions. My research in India suggests that in the best cases, the presence of NGOs can lead to learning across sectors, where nearby government actors may use NGO programs as a demonstration model for their own service delivery strategies. This, of course, assumes that these government actors are interested in learning--if corruption is rampant and political will for reform is weak, learning is unlikely to diffuse into the government system or translate into widespread improvement in state services.

In the worst case, the presence of NGOs induces exit from the state sector. When relatively efficient, well-functioning NGOs enter a health or education market, for example, citizens in that market who are paying attention are likely to switch from government services to NGO services. The result is a disengagement of the most mobilized, discerning poor citizens from the state.


Posted by: nosflow | Link to this comment | 08-13-15 10:41 AM
horizontal rule
7

Also:

Just as nuclear scientists developed norms of ethics and best practices that have so far helped ensure that no bombs have been used in attacks for 70 years, AI researchers, he urged, should embrace a similar ethic, and not just make cool things for the sake of making cool things.

What? That has almost nothing to do with norms developed by scientists and everything to do with politics.


Posted by: Ginger Yellow | Link to this comment | 08-13-15 10:43 AM
horizontal rule
8

You can't "save" the life of someone who will never exist,

I thought the analogy to Pascal's Wager was helpful. There's a way of calculating costs and benefits in which you take a possible cost/benefit and multiply it by the chance of that outcome occurring. That's a perfectly good way to approach many problems but becomes much more difficult when you're talking about highly speculative costs/gains.
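As a toy sketch of where that goes wrong (all numbers invented, only the arithmetic matters): a huge hypothetical payoff multiplied by a tiny guessed probability can swamp any well-evidenced intervention, and the ranking flips entirely if the guess shifts by a few orders of magnitude.

```python
# Toy expected-value comparison in the style described above.
# Every number here is invented purely for illustration.
concrete = {"lives": 1_000, "probability": 0.9}        # well-evidenced intervention
speculative = {"lives": 1e30, "probability": 1e-21}    # astronomical payoff, guessed probability

ev_concrete = concrete["lives"] * concrete["probability"]
ev_speculative = speculative["lives"] * speculative["probability"]

print(f"concrete:    {ev_concrete:,.0f} expected lives")
print(f"speculative: {ev_speculative:,.0f} expected lives")

# The speculative option "wins" by six orders of magnitude here, yet the whole
# comparison hinges on a probability nobody can actually estimate -- the
# Pascal's Wager problem in miniature.
```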

But I do want to be able to continue using the argument that the uncertainty around the effects of Climate Change is a reason for action, not a reason for inaction.


Posted by: NickS | Link to this comment | 08-13-15 10:45 AM
horizontal rule
9

If they care about existential risk they should be focusing on asteroids. Unlike AI they actually exist and have a proven record of causing extinctions. Musk is sort of concerned about them, but his solution is colonies on Mars.

Also valuing potential life the same as existing life is so stupid as to beggar belief. With asinine premises like that it's no wonder they've wandered off into the weeds.


Posted by: togolosh | Link to this comment | 08-13-15 10:46 AM
horizontal rule
10

8: ah, the good old non-identity problem. I kind of think it's bogus.


Posted by: nosflow | Link to this comment | 08-13-15 10:50 AM
horizontal rule
11

Extinction level asteroid impacts are extremely rare. Human caused genocide is relatively common, as is human run totalitarianism, and AI makes both easier.


Posted by: Eggplant | Link to this comment | 08-13-15 10:54 AM
horizontal rule
12

8: ah, the good old non-identity problem. I kind of think it's bogus.

Wait, what did I say that was bogus?


Posted by: NickS | Link to this comment | 08-13-15 11:05 AM
horizontal rule
13

The links and excerpt in 6 are good. I think the Rob Reich piece nicely makes a point which is somewhat related to what I was thinking about in 3.

But this politics is suspicious of, or rejects, the form of politics to which most people attach enormous value: democracy. Would effective altruists attach any independent value to democracy? Given the chance to craft social and political arrangements from scratch, would effective altruists select democratic rather than technocratic rule? I suspect the answer is no, and to that extent, effective altruism is in tension with the commonplace philosophy that identifies in democracy a powerful normative force.

Posted by: NickS | Link to this comment | 08-13-15 11:12 AM
horizontal rule
14

Oh, you didn't. See here.


Posted by: nosflow | Link to this comment | 08-13-15 11:12 AM
horizontal rule
15

I would've thought that people who take Nick Bostrom seriously would conclude we're either doomed or already living in a simulation, and so worrying about the long-term is pointless.


Posted by: Eggplant | Link to this comment | 08-13-15 11:14 AM
horizontal rule
16

Also a time-suck play date with tech employees sent to discharge their annual obligation for community service.

I'm glad to hear that the recipients of this "help" feel about as good about it as I do. Reinforces my decision not to participate in my company's version.


Posted by: Nathan Williams | Link to this comment | 08-13-15 11:21 AM
horizontal rule
17

I keep reading "Effective Altruism" as Effective Autism.


Posted by: parsimon | Link to this comment | 08-13-15 11:23 AM
horizontal rule
18

13: Given the chance to craft social and political arrangements from scratch, would effective altruists select democratic rather than technocratic rule?
Would economists? Would people worried about global warming? It's a fair concern, but it's not by any means specific to effective altruists.


Posted by: Eggplant | Link to this comment | 08-13-15 11:24 AM
horizontal rule
19

Huh. And I said that even before I read the linked article.


Posted by: parsimon | Link to this comment | 08-13-15 11:25 AM
horizontal rule
20

Serious question, how did these people get so almost incomprehensibly arrogant and self-important? Youth combined with money combined with bubble atmosphere? Self-esteem movement of the 80s? Something specific to manipulating code on a computer screen that leads to the belief that you are uniquely capable of remaking the world from scratch in a way oblivious to grounded human concerns? The siren song of thinking that because some things are calculable via childishly simple cost-benefit calculations, everything is? After years of observing the phenomenon with horror I still have no real idea.

Sometimes I think that the dominance of Silicon Valley culture in my lifetime can't be anything other than the work of a malevolent demon trying to make me, personally, insane.


Posted by: Roberto Tigre | Link to this comment | 08-13-15 11:33 AM
horizontal rule
21

While they are good at measuring the proximate effects of a program on its immediate target subjects, RCTs [randomized controlled trials] are bad at detecting any unintended effects of a program, especially those effects that fall outside the population or timeframe that the organization or researchers had in mind. For example, an RCT might determine whether a bed net distribution program lowered the incidence of malaria among its target population. But it would be less likely to capture whether the program unintentionally demobilized political pressures on the government to build a more effective malaria eradication program, one that would ultimately affect more people.

Criticizing charities for the actions of dysfunctional governments seems unfair.


Posted by: | Link to this comment | 08-13-15 11:35 AM
horizontal rule
22

21 was me.


Posted by: Eggplant | Link to this comment | 08-13-15 11:35 AM
horizontal rule
23

2 is right; 4 is right; 7 is right; 6 is good; 15 should be right but we don't live in a just world.

I read this elsewhere and thought about sending it in but I decided I couldn't bear to be reminded again that these people exist. Now that I have been anyway, and Nosflow has posted the link in 14 I think I shall have to become an anchorite or a stylite or something, because I hate everybody.


Posted by: chris y | Link to this comment | 08-13-15 11:37 AM
horizontal rule
24

Serious question, how did these people get so almost incomprehensibly arrogant and self-important?

Too much time on LessWrong?


Posted by: Minivet | Link to this comment | 08-13-15 11:40 AM
horizontal rule
25

20: Cultural bubble+early life money/success is probably enough to get you something like that. Add in whatever the root cause is for the Salem hypothesis and it would be hard not to end up that way.


Posted by: MHPH | Link to this comment | 08-13-15 11:40 AM
horizontal rule
26

If they care about existential risk they should be focusing on asteroids.

If they care about existential risk they should be focusing on climate change, ideally, or maybe nuclear proliferation. There's no strong evidence linking most of the known mass extinctions to asteroid or comet impacts, and the known mass extinctions are at most once-every-fifty-million-year events.

A lot of the physicists I know who've worked at Stanford at some point in the last ten years are very concerned about AI, so the Silicon Valley idiocy is disconcertingly contagious.


Posted by: essear | Link to this comment | 08-13-15 11:47 AM
horizontal rule
27

Climate change, nuclear proliferation, and genetically engineered diseases.


Posted by: Eggplant | Link to this comment | 08-13-15 11:52 AM
horizontal rule
28

20. I was raised in but not quite of an incredibly privileged milieu. This sort of thing, mutatis mutandis for the concerns and available solutions of 50 years ago, was the background noise of my teens. The conclusion I came to was that very bright and very privileged young people were very good at devising ways of setting the world to rights which improved the lot of everybody else up to but not including the point where they undermined their own privilege. Some of these solutions involved remarkably sophisticated mental agility.

I came to this conclusion one day about 45 years ago while listening to a contemporary who stood to inherit several million at 1970 prices explain exactly why, although inherited wealth was not justifiable in principle, an exception should be made in favour of the highly intelligent, such as him.

The main difference I notice about this shower is that unlike my teenage acquaintance, they are actively dangerous.


Posted by: chris y | Link to this comment | 08-13-15 11:53 AM
horizontal rule
29

From the article:

"At the risk of overgeneralizing, the computer science majors have convinced each other that the best way to save the world is to do computer science research."

I always sort of assumed that a lot of the AI panic was based on a combination of AI sounding sexy and people wanting to live out their childhood fantasy of being the world-saving genius superhero of their favorite science fiction, but I wonder if some of it is a very sophisticated psychological defense mechanism. Not being able to do anything substantial about real problems like climate change, dramatically increasing wealth and income inequality, and racism is distressing. Concocting elaborate and fanciful arguments that all of these problems are many orders of magnitude less important than one you might actually have the skills to help with can assuage that distress.


Posted by: essear | Link to this comment | 08-13-15 11:56 AM
horizontal rule
30

You'd think some version of "Microsoft is a vast multinational corporation that paid massive numbers of very computer-literate people to create a basic operating system and came up with Windows 8" would be enough to put a lot of those "what if we made an AI?" worries to rest. My guess is that a lot of people went into the sciences/etc. due to an early interest in science fiction, which has always enjoyed the idea of AI (along with people living in space, settling on Mars, and other things which, seriously guys, are Not Going To Happen), and as a result got fixated on the idea and gave it way more credibility than it has ever come close to deserving.


Posted by: MHPH | Link to this comment | 08-13-15 11:56 AM
horizontal rule
31

If humans are able to create an AI as smart as humans, the theory goes, then it stands to reason that that AI would be smart enough to create itself, and to make itself even smarter.

That last phrase doesn't follow at all. If humans can make an AI as smart as a human, then all that means is the AI can make an AI as smart as a human. Guess what, you don't have to be an SV self-important phillo* to replicate human-level intelligence, you just have to fuck.

*My new insulting shorthand for philanthropist, meant to evoke similarities to dough.


Posted by: SP | Link to this comment | 08-13-15 12:00 PM
horizontal rule
32

but I wonder if some of it is a very sophisticated psychological defense mechanism.

An artificially sophisticated psychological defense mechanism, hmm?


Posted by: heebie-geebie | Link to this comment | 08-13-15 12:00 PM
horizontal rule
33

Guess what, you don't have to be a SV self-important phillo* to replicate human-level intelligence, you just have to fuck.

That's how they made Hitler.


Posted by: Moby Hick | Link to this comment | 08-13-15 12:03 PM
horizontal rule
34

If they were really concerned about the uncontrolled spread of human-level intelligence they'd be funding family planning programs.


Posted by: SP | Link to this comment | 08-13-15 12:05 PM
horizontal rule
35

31: If it's as smart as a human, in a few years it will be orders of magnitude quicker as the hardware develops. And so it gets to spend thousands of lifetimes thinking about how to improve algorithmically.
29 seems accurate (that is, that reasoning resonates with me), but the arguments that AI will help these problems don't have to be all that elaborate and fanciful. Ok, maybe fanciful.


Posted by: Eggplant | Link to this comment | 08-13-15 12:08 PM
horizontal rule
36

Don't most people agree that Moore's law will slow eventually (previous incorrect predictions to the contrary), and that the inflection point will likely be before human-level AI?


Posted by: SP | Link to this comment | 08-13-15 12:11 PM
horizontal rule
37

20: how did these people get so almost incomprehensibly arrogant and self-important?

I'll take a shot at this: (a) monetary reward for their endeavors, combined with (b) a deliberately inculcated lack of respect for or understanding of institutional structures which already know a great deal about how to alleviate current and future suffering.

(b) most importantly, perhaps.

That doesn't explain the failure to grasp the scale of current human suffering, however: that's got to be just plain insularity.

What happens if you put some of those people in post-Katrina New Orleans for a few months?


Posted by: parsimon | Link to this comment | 08-13-15 12:11 PM
horizontal rule
38

One of my (now dead) professors in grad school once pointed out that there's a remarkably extensive history of people being really, really bad at knowing what would be hard for computers to do, and what would be easy. If you look at science fiction pretty much up to the point where we had serious functioning computers (e.g., well past the invention of the transistor) you see them writing stories about creating vast artificial intelligences in order to compute complicated/big mathematical equations, and having robots walking around talking to people and helping them do things.

My guess is that the problem is that people have bought into the 'brains are basically just computers' line at such a deep level that without thinking about it they assume that things that are relatively easy for people to do will be relatively easy to make computers do, and things that are hard for us are hard for them as well. Now everyone recognizes at least consciously that this is clearly not true, but the extent to which people still think that a fully functioning basically human AI (psychologically speaking) is ten or so years around the corner is staggering and, I'd guess, a result of that same basic idea.


Posted by: MHPH | Link to this comment | 08-13-15 12:11 PM
horizontal rule
39

Of course, AI is hopeless because Microsoft has made unpopular software.


Posted by: Eggplant | Link to this comment | 08-13-15 12:11 PM
horizontal rule
40

Moore's law will slow eventually
It has to sometime.
before human-level AI
Depends on how fine-grained a simulation you need, and how much computing you're willing to throw at it.


Posted by: Eggplant | Link to this comment | 08-13-15 12:16 PM
horizontal rule
41

It also depends on how important processing speed is to the AI in question, or to its being an AI in the first place. It's entirely possible that the way you'd need to set one up would make it something where increased processing speed would make literally no difference to how successful it was/smart it was/fast it thought/etc.

I mean, you have to build them out of physical stuff and physical stuff comes with physical constraints separate from computing cycles. If you managed to speed up the processing power of your brain till it was ten times as fast you wouldn't be ten times smarter, or even ten times faster at anything. Probably you'd just go insane due to sensory deprivation - you aren't going to be getting pain/proprioceptive/auditory data much faster than you are right now, because that's a baked in physical constraint. It would just feel like it took ten times as long to get to you, or everything was ten times weaker, or something.


Posted by: MHPH | Link to this comment | 08-13-15 12:28 PM
horizontal rule
42

You'd just spend your time reading 10x more blogs. So yes, insanity.


Posted by: SP | Link to this comment | 08-13-15 12:30 PM
horizontal rule
43

If you managed to speed up the processing power of your brain till it was ten times as fast you wouldn't be ten times smarter, or even ten times faster at anything
You would be ten times faster at thinking.


Posted by: | Link to this comment | 08-13-15 12:33 PM
horizontal rule
44

Dammit, me again.


Posted by: Eggplant | Link to this comment | 08-13-15 12:34 PM
horizontal rule
45

We need a ratio of asteroid collision disutility to Hitler disutility. The way to do that is to clone Hitler and crash the clone into the Earth from orbit.


Posted by: Moby Hick | Link to this comment | 08-13-15 12:34 PM
horizontal rule
46

36: What do you mean will slow?


Posted by: nosflow | Link to this comment | 08-13-15 12:36 PM
horizontal rule
47

Not being able to do anything substantial about real problems like climate change, dramatically increasing wealth and income inequality, and racism is distressing. Concocting elaborate and fanciful arguments that all of these problems are many orders of magnitude less important than one you might actually have the skills to help with can assuage that distress.

This is probably a sufficient explanation, as well as the accurate one. But it also strikes me (as a hardcore cultural Marxist) that this line of charitable thinking is also related to the equity-venture financing method of Silicon Valley, in which people have made astronomical fortunes by valuing growth, even if highly speculative and relegated to the far-future, over existing assets. If you are convinced that growth and future promise are everything, that current assets/profitability/whatever are, in essence, nothing, and that future growth of (whatever it is you do) will be on an order of magnitude such that, e.g., Uber will be worth 20x the earnings of all the existing taxi companies in the world combined, it's probably pretty easy to lapse into far-future, totalizing thinking about your charitable and altruistic goals.

I was raised in but not quite of an incredibly privileged milieu. This sort of thing, mutatis mutandis for the concerns and available solutions of 50 years ago, was the background noise of my teens. The conclusion I came to was that very bright and very privileged young people were very good at devising ways of setting the world to rights which improved the lot of everybody else up to but not including the point where they undermined their own privilege.

I'll go toe-to-toe with most for being raised in a privileged milieu, but most of the people I knew were more hedonistic and cynical, and the main conclusion I drew was to never, ever, ever believe sanctimony from rich people. Admittedly I was hanging out with the drunks and drug addicts more than the earnest dreamers.


Posted by: Roberto Tigre | Link to this comment | 08-13-15 12:37 PM
horizontal rule
48

I'd have time to read that if my brain were 10x faster.


Posted by: SP | Link to this comment | 08-13-15 12:37 PM
horizontal rule
49

43: But only if thinking depends solely on that particular bit and doesn't have any serious links to anything else that's going on in the body aside from data-input or something. And that's a big, big assumption that we don't have much evidence to back up. (It would be true if the brain were a computer processor! But it's not! It's a complicated and messy biological system and like anything it'll only go as fast as its slowest part.)


Posted by: MHPH | Link to this comment | 08-13-15 12:38 PM
horizontal rule
50

48 to 46, but works for 47 too.


Posted by: SP | Link to this comment | 08-13-15 12:38 PM
horizontal rule
51

49: In this hypothetical we have the power to simulate one human brain. What other bit do you imagine is going to be the bottleneck, and why can't we also simulate that?


Posted by: Eggplant | Link to this comment | 08-13-15 12:41 PM
horizontal rule
52

Something specific to manipulating code on a computer screen that leads to the belief that you are uniquely capable of remaking the world from scratch in a way oblivious to grounded human concerns?

I think there's something to this, and I'll take a stab at it. There's an experience which almost every programmer has had repeatedly*, in which something isn't working, I've been staring quizzically at it for a while, and it just isn't clear where the problem is or why it isn't working, and I start to get a little bit angry (at both the program and myself) and have a feeling of, "I will solve this problem. I will make it work." It's a helpful emotion to have, because it gives a burst of energy and the motivation to really start tearing into something rather than just puttering away at it. And there are two things which go into that emotional response which are at least somewhat distinctive.

1) A conviction that the problem is solvable, that the tools I have are sufficient, and that if I'm willing to dig deep enough there will be an explanation. Computer code is generally a manipulable system and, unlike the real world, it is possible to account for everything.

2) A recognition that the most likely thing that's making this particular problem so frustrating is that I have some assumption about how the code works which is incorrect, and I need to go back over the parts of the code which I had believed that I understood, and find that incorrect assumption.**

You can see how it would be tempting to apply this same frame of mind to real-world problems, even though neither (1) nor (2) is likely to be true.

* I should add the caveat that I don't know that I'm qualified to speak for all programmers. While I code, I'm not doing anything remotely cutting edge. Mostly I'm using old technologies and old techniques to solve problems which aren't technically impressive but fill some immediate need.

** In Ellen Ullman's novel The Bug, the ultimate cause of the bug which has evaded the protagonist for years is that a system function which returns true/false to tell you whether the current location of the pointer (mouse) is within the currently active window gives an incorrect response in very specific situations.


Posted by: NickS | Link to this comment | 08-13-15 12:45 PM
horizontal rule
53

I think 47.1 is right.

At heart I am conflicted about the nonprofit I'm on the board for, because really what it is doing should be part of a decent public education system. BUT that's not on the cards, and it does an excellent job, so onwards.

My experience has been that even slightly older money is usually far more sensible than new shiny SV money, the latter being typically mind numbingly stupid and in love with itself even when its head isn't sucked into the "EA" hole.


Posted by: dairy queen | Link to this comment | 08-13-15 12:46 PM
horizontal rule
54

I would answer that question but I happen to have left my large book explaining the complete functioning of the human brain, and my other large book explaining how one could build a functioning artificial intelligence out of computers at a friend's house. Seriously though assuming that there isn't something that could be like that is a really big assumption when we don't actually know how these things work.

Also assuming that an AI will take the form of a simulation of a brain is itself problematic: brains are kind of like computers in some respects, sure, but they're also kind of like pots of beef stew in some respects*. Simulating an actual brain working the way actual brains work, at the level of molecules floating around in watery stuff banging into each other, would take a level of processing power that's almost inconceivable, unless by "simulating" you just mean "making one the old fashioned way". I mean, there are neurons firing off a bunch, but that's really way more complicated of a process than it sounds like, and also it's far from the only mechanism at work in the brain.

*Probably more respects, honestly.


Posted by: MHPH | Link to this comment | 08-13-15 12:49 PM
horizontal rule
55

47.1 reminds me of the so-called "dynamic scoring" that Republicans demand of the CBO in its valuing of the economic impact of legislative proposals. Now that Republicans are in full control of Congress, they've required the CBO to produce such a dynamic score -- accounting for alleged future growth as a result of the proposed measures -- although CBO is also permitted to produce its regular score. Thank god.


Posted by: parsimon | Link to this comment | 08-13-15 12:53 PM
horizontal rule
56

Does it work better if you add some tomatoes?


Posted by: SP | Link to this comment | 08-13-15 12:53 PM
horizontal rule
57

The beef stew, that is, not the CBO.


Posted by: SP | Link to this comment | 08-13-15 12:54 PM
horizontal rule
58

There's an article at WaitButWhy on human-and-above level AI, and in it he reports that the mean date (in a survey of computer types) by which we will have a human-level AI was predicted to be 2030.

I don't buy it at all. We will get more and better "point" skills, like playing grandmaster level chess was, or winning at Jeopardy, or even stock picking. We won't get "general intelligence," because that requires melding all the others (some of which we don't even have yet) in a way we don't have a clue how to do.

That doesn't mean point-skilled AI can't decide to kill us all based on bad programming and flawed (from our point of view) goal seeking algorithms. Thus, a more realistic worry is autonomous weapons, which are only a step or two away, maybe less, and don't require "human level" intelligence. A group including Musk, Wozniak and Hawking has called for a ban on them, which seems to me to be a wholly laudable goal.

Anyway, if you are worried about above-human level intelligence, wait until they wire together more than three monkeys...


Posted by: DaveLMA | Link to this comment | 08-13-15 1:07 PM
horizontal rule
59

Anybody can wire monkeys in series. The challenge is parallel-wired monkeys.


Posted by: Moby Hick | Link to this comment | 08-13-15 1:09 PM
horizontal rule
60

If they were seriously concerned about this wouldn't all these genius computer scientists be planning a Masada-like murder-suicide?


Posted by: peep | Link to this comment | 08-13-15 1:11 PM
horizontal rule
61

Sure, autonomous weapons are bad- but my real concerns are the autonomous weapon manufacturer and the autonomous weapon reloader.


Posted by: SP | Link to this comment | 08-13-15 1:13 PM
horizontal rule
62

58.2 - We're not even sure there's such a thing in humans, or at least it's not clear that we know what it is/how to measure it/etc.

I would very seriously consider being wired together with a different (non-awful) person, though. I'm not convinced it would make for a smarter person, but it would have to be really interesting at the subjective level.


Posted by: MHPH | Link to this comment | 08-13-15 1:14 PM
horizontal rule
63

62.2: Sounds like it would be transformative.


Posted by: peep | Link to this comment | 08-13-15 1:16 PM
horizontal rule
64

I'm going to need some alligator clips, a bit of copper wire, a can of ether, and a very clean saw.


Posted by: Moby Hick | Link to this comment | 08-13-15 1:22 PM
horizontal rule
65

Aaahh, you say that every weekend.


Posted by: MHPH | Link to this comment | 08-13-15 1:30 PM
horizontal rule
66

62.1. No disagreement there, just reaching for a short phrase which is probably a misleading one. There is "code" (or "wiring") or something in our brains that knits together all the sensory inputs and memories and comes up with (e.g.) the idea of wiring together monkeys, or theorizing relativity, or even wired monkeys theorizing relativity.

In Michael Swanwick's Vacuum Flowers, a star network of six people wired together and connected to the internet set off the AI Singularity. It might take more monkeys than that.


Posted by: DaveLMA | Link to this comment | 08-13-15 1:35 PM
horizontal rule
67

60: The Voluntary Computer Scientist Extinction Plan--requiring computer scientists to pledge not to procreate--was cancelled upon being found redundant.

That was bad even for me. I'll let myself out now.


Posted by: dalriata | Link to this comment | 08-13-15 1:35 PM
horizontal rule
68

Climate change, nuclear proliferation, and genetically engineered diseases.

So I get my science news from Radiolab these days, and their recent episode about CRISPR was terrifying. It seems like there's this new, cheap way of editing genes that makes the creation of horrible diseases way easier. Radiolab was all "ooh, let's be concerned about designer babies," when really the risk sounds like designer plagues.


Posted by: Bave | Link to this comment | 08-13-15 1:51 PM
horizontal rule
69

66 - Sorry, I just meant that as a "you're really not kidding" comment, not a disagreement with anything you were saying. (And even if we did know there was such a thing, or how it worked, we still probably wouldn't know how to make something else do it.)

Probably the most accurate term we could use for whatever fits together sensory input, memories, etc. is "goo". Also it's the most fun one to use, especially when people are taking the brains-are-computers line more seriously than they (probably) should.

I think the obvious answer is that you'd need one hundred monkeys to get the singularity.


Posted by: MHPH | Link to this comment | 08-13-15 1:52 PM
horizontal rule
70

I thought it was 12 monkeys. Oh, that's the plague, not the singularity.
CRISPR- It's not as easy as all the popular press stories say. Still easier to just go find some old smallpox or something.
We recently spent a lab meeting listening to a Radiolab episode, about people who donated their child's body to research trying to track down what use everything was put to. Relevant because we get some cell lines from patients, and maybe some tissues.


Posted by: SP | Link to this comment | 08-13-15 1:56 PM
horizontal rule
71

I thought the whole point of the zombie creation lab was that parents shouldn't be able to track you down.


Posted by: Roberto Tigre | Link to this comment | 08-13-15 2:02 PM
horizontal rule
72

It does seem like we're 50% of the way to a Chickenosaurus, and "a glow-in-the-dark unicorn is not out of the question." Biology is more rad than computers.


Posted by: Roberto Tigre | Link to this comment | 08-13-15 2:06 PM
horizontal rule
73

Not making a glow in the dark unicorn would be the real crime.


Posted by: Moby Hick | Link to this comment | 08-13-15 2:17 PM
horizontal rule
74

72: My net-nanny said that link was "harmful". We must be even closer than you think!


Posted by: peep | Link to this comment | 08-13-15 2:18 PM
horizontal rule
75

2117 (old calendar): Worried about overrunning the earth's energy supply, AIs decide to implement a form of replication control known as the "algorithm method."


Posted by: fake accent | Link to this comment | 08-13-15 2:18 PM
horizontal rule
76

Thus, a more realistic worry is autonomous weapons, which are only a step or two away, maybe less, and don't require "human level" intelligence.

Do these count?

(N. especially S.F.W.)


Posted by: | Link to this comment | 08-13-15 2:55 PM
horizontal rule
77

guilty of 76


Posted by: k-sky | Link to this comment | 08-13-15 3:21 PM
horizontal rule
78

I guess SV funding for charities isn't based on providing unlimited amounts of liquidity until the charity becomes a monopoly through predatory pricing, like SV funding for startup businesses. So they have to come up with something else.


Posted by: Cryptic ned | Link to this comment | 08-13-15 4:10 PM
horizontal rule
79

I was hoping to liveblog the pico meet up but CCarp is sounding a little loopy already and I'm not sure he's even made it out of the airport. At least this time I have the right week.


Posted by: clew | Link to this comment | 08-13-15 6:01 PM
horizontal rule
80

4: I think people making that argument are typically utilitarians (or have some utilitarian moral intuitions), so they don't see much moral difference between saving the life of a very young child and enabling an additional person to live.


Posted by: Benquo | Link to this comment | 08-13-15 6:16 PM
horizontal rule
81

The quote in 6 is kind of a weird criticism given that some of the most prominent EA charities work by funding and providing technical assistance to government programs in developing countries.


Posted by: Benquo | Link to this comment | 08-13-15 6:19 PM
horizontal rule
82

OTOH the conversation I'm eavesdropping on is about the terrible behavior of the puppies @ Worldcon. Ooo.


Posted by: clew | Link to this comment | 08-13-15 6:19 PM
horizontal rule
83

And maybe the craven behavior of the con and huh, I think these are SF writers I read. Suddenly seems creepier to listen.


Posted by: clew | Link to this comment | 08-13-15 6:21 PM
horizontal rule
84

So, where's the discussion of quality of life with these guys? How many mouse orgasms does it take to equal one minim of human life? How many person-years is it worth to improve everyone's lives by 15 mouse orgasms a day?


Posted by: Natilo Paennim | Link to this comment | 08-13-15 6:26 PM
horizontal rule
85

84: Is there some sort of standard conversion I can look up between mouse orgasms and QALYs/DALYs?


Posted by: Benquo | Link to this comment | 08-13-15 6:29 PM
horizontal rule
86

WHY DO MICE GET ALL THE ORGASMS


Posted by: Of Mice and Men's Rights Activist | Link to this comment | 08-13-15 6:36 PM
horizontal rule
87

86: Because tossing heroin into a vat of krill just looks too weird.


Posted by: Benquo | Link to this comment | 08-13-15 6:38 PM
horizontal rule
88

Yeah, what kind of decadent nonsense is that? Obviously you need to inject each krill with a teensy tiny hypodermic syringe.


Posted by: Natilo Paennim | Link to this comment | 08-13-15 6:51 PM
horizontal rule
89

I'm dying to ask this pair of lefty SF authors, rich in years, how we should balance QALYs vs. mouse orgasms. I bet they have a theory.

But! CCarp is here! He says if he knew we'd be liveblogging he'd have shaved.


Posted by: clew | Link to this comment | 08-13-15 6:57 PM
horizontal rule
90

Black mouse, white mouse: As long as it orgasms, it's a good mouse.


Posted by: Natilo Paennim | Link to this comment | 08-13-15 7:26 PM
horizontal rule
91

I fail to see how the arguments in favor of funding NGOs deemed currently effective are any different from the arguments in favor of privatization. The problem with private sector firms providing public services is that they don't have enough of an incentive to remain cost-efficient and cost-effective because their scale usually means that the public sector will step in with a subsidy or a bailout if they get into trouble, rather than them going out of 'business'. NGOs pumped full of techy-phillo money will either scale up until they're in that exact same position or stay small enough not to make a difference anyway.


Posted by: carrotflowers | Link to this comment | 08-13-15 7:41 PM
horizontal rule
92

Clew!


Posted by: CharleyCarp | Link to this comment | 08-13-15 8:06 PM
horizontal rule
93

To be just a little more articulate, delightful Kenyan dinner with Clew in the Columbia City part of Seattle.


Posted by: CharleyCarp | Link to this comment | 08-13-15 9:16 PM
horizontal rule
94

|| Kung Flu! |>


Posted by: Eggplant | Link to this comment | 08-13-15 9:56 PM
horizontal rule
95

|| That was the ineffective martial art that was on the tip of my tongue in that thread that just came to mind for no reason. |>


Posted by: Eggplant | Link to this comment | 08-13-15 9:58 PM
horizontal rule
96

And we nearly had a conversation on whether teenagers actually need to learn anything in particular, but we had to get on different trains. Glad you could stop in, CCarp.


Posted by: clew | Link to this comment | 08-13-15 10:44 PM
horizontal rule
97

96 woohoo! Liveblogging! Different trains!


Posted by: Barry Freed | Link to this comment | 08-13-15 11:06 PM
horizontal rule
98

The OP link is really quite good. Possibly the best thing I've read on Vox so far. (A low bar, admittedly.)


Posted by: teofilo | Link to this comment | 08-13-15 11:59 PM
horizontal rule
99

To the OP, 'these people are giving me millions of dollars of free money voluntarily in an irritating way' is pretty much the epitome of the "this free ice cream is the wrong flavour" complaint. (And of the 'first world problem' thing, for that matter.) Oh, you have to be polite to clueless visiting volunteers one day a year in order for their companies to give you huge amounts of cash? You poor dear.


Posted by: ajay | Link to this comment | 08-14-15 1:53 AM
horizontal rule
100

99: It's not like that at all. They're giving the money to an aid worker, who is giving away their time. It's closer to "If you are committed to helping the poor, then you'll allow me to humiliate you in these specific ways so that I feel like I'm getting value for my money."


Posted by: Walt Someguy | Link to this comment | 08-14-15 3:14 AM
horizontal rule
101

But it would be less likely to capture whether the program unintentionally demobilized political pressures on the government to build a more effective malaria eradication program, one that would ultimately affect more people.

This sounds like some sort of horrible bastard offspring of the Lucas critique and heighten-the-contradictions, aka two of the worst ideas of the 20th century.

Also, I have absolutely no objection to trying to measure the effectiveness of charitable interventions, I'd just point out that like everything the VC bubble boys glom onto, the idea is far from new and has been repeatedly and successfully implemented by less annoying people.


Posted by: Alex | Link to this comment | 08-14-15 3:53 AM
horizontal rule
102

They're giving the money to an aid worker, who is giving away their time.

...giving away their time in exchange for a salary, if we're talking about full-time aid workers and charity fundraisers here. So not really giving it away.


Posted by: ajay | Link to this comment | 08-14-15 3:56 AM
horizontal rule
103

101: see here...http://www.unfogged.com/archives/comments_14036.html#1735489

"It's ludicrous saying that cheap water filters will help poor Indians! What they need is a social revolution so that the government will actually take their needs seriously!"


Posted by: ajay | Link to this comment | 08-14-15 3:59 AM
horizontal rule
104

like everything the VC bubble boys glom onto, the idea is far from new and has been repeatedly and successfully implemented by less annoying people.

You know, in other contexts I think we mock people unmercifully for saying "yes, I support the general social change these people are trying to bring about, but they're so shrill and uncouth about it."


Posted by: ajay | Link to this comment | 08-14-15 4:01 AM
horizontal rule
105

100 is right. I'd be more sympathetic to the counterargument if the people involved weren't filthy stinking rich, or, well, assholes by even the most generous measures. I mean, if we were talking about some aging couple selling their home and giving the proceeds to a charitable organization then I don't think anyone would have any real complaint about meeting with them first even if they were being dicks about it - they're doing something that's a really big deal! But instead we have people with a massively inordinate amount of the world's resources giving back some, but nowhere close to enough to cause them any genuine inconvenience, and requiring people to bow and scrape before they do it. And they're randomly picking which ones to help based on lunatic ideas about the future - especially things like "let's make sure to dedicate a really large chunk of the economic resources of the entire human species to preventing the plot of The Terminator". That's a pretty different matter.


Posted by: MHPH | Link to this comment | 08-14-15 5:52 AM
horizontal rule
106

In less annoying tech-people-save-the-world stuff this seems like it could be a big deal, by which I mean "literally would mean save the world as opposed to just helping out a bit with stuff".


Posted by: MHPH | Link to this comment | 08-14-15 5:54 AM
horizontal rule
107

This is nowhere near the important part of this issue, and there's probably some kind of answer that I won't follow because it's not my field.

But what exactly does research into avoiding AI monsters destroying the world constitute? I mean, I can imagine lobbying for laws requiring us to smash all computers above a certain level of complexity and stop writing new kinds of software. I can't imagine it actually happening, but it's an AI prevention strategy I understand. Other than that, what are people who are worrying about this sort of thing doing, beyond scaring themselves with Roko's Boogeyman?


Posted by: LizardBreath | Link to this comment | 08-14-15 5:59 AM
horizontal rule
108

They smash looms. Just in case.


Posted by: Moby Hick | Link to this comment | 08-14-15 6:02 AM
horizontal rule
109

Reading Asimov over and over.


Posted by: SP | Link to this comment | 08-14-15 6:02 AM
horizontal rule
110

106: Huh, and that's completely unconnected to Lockheed's claim last year that they were pretty close to useful fusion as well? That seems kind of encouraging -- that we've maybe hit a point where the remaining problems really are solvable.


Posted by: LizardBreath | Link to this comment | 08-14-15 6:02 AM
horizontal rule
111

I mean, I can imagine AI researchers promising one by one that "I'm not going to develop an unsafe AI", for whatever technical definition of unsafe we're talking about. But that seems pointlessly ineffective unless the pledge covers everyone in the world with a CS degree. Which, you know, it's not likely to.


Posted by: LizardBreath | Link to this comment | 08-14-15 6:04 AM
horizontal rule
112

"I'm not going to develop an unsafe AI, or at least, if I do, I won't call it 'Skynet.'"


Posted by: Moby Hick | Link to this comment | 08-14-15 6:08 AM
horizontal rule
113

Other than that, what are people who are worrying about this sort of thing doing, beyond scaring themselves with Roko's Boogeyman?

Mainly endowing AI ethics chairs and things like that, I think.


Posted by: Ginger Yellow | Link to this comment | 08-14-15 6:19 AM
horizontal rule
114

To bring it back to the OP, part of what I'm wondering about with that question is what are people who think AI is the big problem spending money on? I can't really see what they could be doing that's expensive. Setting up conferences at which they can worry together?

If rich people are worried about AI, I'd think they could fret all they liked, but still spend the vast majority of their charitable donations on bed nets and schools.


Posted by: LizardBreath | Link to this comment | 08-14-15 6:20 AM
horizontal rule
115

Oh, 113 to 114. Boy, does that seem pointless.


Posted by: LizardBreath | Link to this comment | 08-14-15 6:22 AM
horizontal rule
116

Sifu Tweety will correct me on this, but as far as I recall, about 1980 Marvin Minsky and his acolytes thought human-level AI was about 10 years away. We're now, 35 years on, at the stage where useful AI applications can be incorporated in various devices, but not much closer, AFAIK, to being able to create the sort of autonomous AI which would present a danger to its immediate environment, let alone life, the universe and everything. Meanwhile climate change will make large chunks of the world uninhabitable in less than a century, so, priorities?


Posted by: chris y | Link to this comment | 08-14-15 6:28 AM
horizontal rule
117

A Trump presidency might just set back the country enough that AI can't be developed. They should give their money to his campaign.


Posted by: Moby Hick | Link to this comment | 08-14-15 6:29 AM
horizontal rule
118

Marvin Minsky

I used to love that guy's "Minsky's Worst of the Web."


Posted by: Moby Hick | Link to this comment | 08-14-15 6:30 AM
horizontal rule
119

107.2: Other than that, what are people who are worrying about this sort of thing doing, beyond scaring themselves with Roko's Boogeyman? Very little that passes muster as academic research, but they do write lots of Harry Potter fanfic, of sufficiently dire quality to inspire an epic hate-reading.


Posted by: Cosma Shalizi | Link to this comment | 08-14-15 6:37 AM
horizontal rule
120

No matter how advanced it gets, Artificial Intelligence will always be defeated by Natural Stupidity.


Posted by: unimaginative | Link to this comment | 08-14-15 6:37 AM
horizontal rule
121

Schools deplete our natural reservoirs of stupidity.


Posted by: Moby Hick | Link to this comment | 08-14-15 6:38 AM
horizontal rule
122

111: see 7. It worked for nuclear scientists, right? Don't you remember when Edward Teller and Andrei Sakharov met secretly and tore up the formula for the hydrogen bomb and chewed it up, and so the world was saved from nuclear annihilation?


Posted by: peep | Link to this comment | 08-14-15 6:42 AM
horizontal rule
123

But that seems pointlessly ineffective unless the pledge covers everyone in the world with a CS degree.

I think people with a CS degree aren't the ones I'd worry about (if I worried about it).


Posted by: nattarGcM ttaM | Link to this comment | 08-14-15 6:47 AM
horizontal rule
124

That formula just says "Put hydrogen atoms really close together." Maybe AI is harder.


Posted by: Moby Hick | Link to this comment | 08-14-15 6:47 AM
horizontal rule
125

I'm not claiming to be either effective or an altruist, but I've found over the last few years that direct giving to people in my life (via school/fostering/church) who need money or things has felt morally fine to me. That's probably something economic segregation keeps from being an option for most middle-class white people, but it works for me.


Posted by: Thorn | Link to this comment | 08-14-15 6:56 AM
horizontal rule
126

107: $6 million of the money Elon Musk gave went to a request for proposals implemented by the Future of Life Institute. Mostly (but not exclusively) funding AI or ML academics. The press release gives some examples of the types of research being funded (as well as a list of all the projects) including:

*Developing machine learning techniques to learn about human preferences from observing behavior.
*Theoretical work on keeping superintelligent AI interests aligned w/ human interests
*Work on how to enable current AI systems to make the reasons for the results explainable to humans
*Legal research on how to keep autonomous weapons under "meaningful human control"

here: http://futureoflife.org/AI/2015selection
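To make the first item on that list concrete, here's a minimal toy sketch of "learning preferences from observed behavior" (entirely my own invention, not any funded project; the options, the choice counts, and the Bradley-Terry model are all assumptions): fit latent utilities to pairwise choices and read the preference ordering off the fitted values.

```python
# Toy preference learning: infer latent utilities from observed pairwise choices
# using a Bradley-Terry model fit by plain gradient ascent. Purely illustrative.
import numpy as np

options = ["bed nets", "deworming", "AI safety research"]
# wins[i][j] = times a hypothetical donor chose option i over option j (made-up data)
wins = np.array([[0., 7., 9.],
                 [3., 0., 8.],
                 [1., 2., 0.]])

u = np.zeros(3)  # latent utilities
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(u[:, None] - u[None, :])))   # P(i chosen over j)
    grad = (wins - (wins + wins.T) * p).sum(axis=1)         # d(log-likelihood)/du
    u += 0.01 * grad
u -= u.mean()  # utilities only identified up to an additive constant

for name, ui in sorted(zip(options, u), key=lambda x: -x[1]):
    print(f"{name}: {ui:+.2f}")
```

The funded work is presumably far more sophisticated, but the shape of the problem is the same: observed choices in, inferred preferences out.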


Posted by: Clippy | Link to this comment | 08-14-15 6:58 AM
horizontal rule
127

I have no idea what ajay is even talking about. I'm on the board of a small but effective nonprofit in SF. We are leanly but consistently funded by a range of trad sources (gov't, foundations, private $$). The location and what the organization does make everyone and their sister think it should be easy peasy for us to raise money from SV. But in fact our experience is that new SV money is obsessed with disrupting everything, including evaluating charitable giving. So they reinvent the wheel in very inefficient and narcissistic ways. I find this annoying because as pres of the board I have to watch out for things that suck up ridiculous amounts of the org's limited resources.


Posted by: dairy queen | Link to this comment | 08-14-15 7:01 AM
horizontal rule
128

BTW, I'm totally fine with the standard amount of rich person cultivation for fundraising particularly if the board does it, that's what we signed up for.


Posted by: dairy queen | Link to this comment | 08-14-15 7:02 AM
horizontal rule
129

And also BTW the bloo/ber/ foundation runs an outstanding program for grantees on managing nonprofit arts organizations, the polar opposite of a waste of time.


Posted by: dairy queen | Link to this comment | 08-14-15 7:04 AM
horizontal rule
130

127: Right. The problem isn't "They're doing good work, but they're being obnoxious about it".

The problem is that self-styled super-geniuses who are obsessed with "disruption" and convinced that they know the answer to all the world's problems because they got rich off of some dating app they wrote are likely to actively do harm.

See Teach For America for an example of how these things can turn out badly.


Posted by: AcademicLurker | Link to this comment | 08-14-15 7:12 AM
horizontal rule
131

America has been having lots of troubles lately. It's about time somebody blamed the teachers.


Posted by: Moby Hick | Link to this comment | 08-14-15 7:13 AM
horizontal rule
132

116 - Hey, true AI was ten years away then and it's ten years away now! So, really, they were right about that.


Posted by: MHPH | Link to this comment | 08-14-15 7:16 AM
horizontal rule
133

130: I think there's a bit of "and they'd do it way more effectively/do more good/help with more important projects if they weren't constantly being interfering egotists about it".


Posted by: MHPH | Link to this comment | 08-14-15 7:17 AM
horizontal rule
134

...reinvent the wheel in very inefficient and narcissistic ways.

Exactly this. I've closely watched SV people and fellow travellers trying to disrupt the spaceflight industry (and did considerable volunteer work on their behalf). What's happened is that everyone trying something truly disruptive is gone (turns out Von Braun and Korolev were right: Spaceflight is hard) and there's basically Elon Musk left standing doing very traditional-style rockets with some great but incremental innovations. There are some minor players still working towards suborbital tourism, but disruption of the field isn't happening. The one well-funded effort to really disrupt things (Virgin Galactic) has managed to get a bunch of people killed and is currently circling the drain. The X Prize was supposed to trigger a wave of innovation and disruption but over 10 years later there's nothing.


Posted by: togolosh | Link to this comment | 08-14-15 7:26 AM
horizontal rule
135

turns out Von Braun and Korolev were right

Objectively pro-Nazi and pro-commie.


Posted by: Moby Hick | Link to this comment | 08-14-15 7:34 AM
horizontal rule
136

"... And I'm learning Chinese, " says Werner von Braun.

Also, the votes are in on human space flight, and forget it. Unnecessarily expensive, unnecessarily dangerous; Rosetta, Dawn, New Horizons, and Curiosity do it better.


Posted by: chris y | Link to this comment | 08-14-15 7:42 AM
horizontal rule
137

136: But don't we need to go someplace else after we're finished trashing this place?


Posted by: peep | Link to this comment | 08-14-15 7:48 AM
horizontal rule
138

I fortunately am not in fundraising, and I don't have experience with organizations that do charitable work, as opposed to nonprofit educational work. But I do know that tech money - not just SV money - does some weird things that more traditional funding/granting foundations and agencies don't, even when there's no apparent need for innovation.

For example, someone/some company might provide $3 million for what's essentially a building campaign with large upfront costs, but instead of giving the amount in a relatively short time frame, it will be spread out over many years. It looks sort of like doing multiple rounds of funding, except that the organization by definition is not going to be making a profit, and even if there will be revenue from the project, it won't start to come in until after it's done, and the nature of the project is such that there will be literally almost nothing to show publicly until it's done. The $3 million in a shortish period, along with other fundraising for the project, might have been enough to get over the major upfront costs, but since the gift was structured as $500,000/year for six years it's not possible to get over those hurdles in the early years. In the meantime, costs go up every year, both from inflation and from the fact that tech companies themselves are driving a lot of price increases locally, and the project just gets more expensive. All of this could have been anticipated, and was anticipated by the receiving organization, but they couldn't convince the funders to structure it differently.
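A rough back-of-the-envelope sketch of why that structure hurts (made-up numbers: I'm assuming $3 million of upfront costs and 5% annual cost growth, purely to illustrate the shape of the problem):

    # Illustrative only: gift paid as $500k/year for six years while
    # the project's cost inflates 5% a year.
    cost = 3_000_000.0
    received = 0.0
    for year in range(1, 7):
        received += 500_000          # this year's installment arrives
        cost *= 1.05                 # the project gets pricier while you wait
        print(f"year {year}: received ${received:,.0f}, project now costs ${cost:,.0f}")
    # By year six you've received the full $3M, but the same project now
    # costs roughly $4M -- the gap only grows the longer the money trickles in.

The real numbers are obviously different, but the dynamic is the same: an upfront-cost project funded on a drip schedule loses ground every year.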

On the flip side, sometimes tech funders will give money for a project that a more traditional agency wouldn't fund without fairly explicit planning and reporting requirements. You could say this is good because, hey, less red tape, but it also doesn't look like effective altruism, and there's a higher risk that the organization won't get it together to use the money wisely once it comes in.


Posted by: presidential non-fundraiser | Link to this comment | 08-14-15 7:52 AM
horizontal rule
139

52 is close, I think, with the addition that there is still something inaccessible about programming work at this stage of its technological maturity that reinforces this dynamic. There is always an element of self-taught craft skill to programming that probably has an analogue in other professions (surgery? courtroom argument?) but is especially intense and recurring in programming. There was (and may no longer be) a phenomenon of programming requiring a rare mindset that wasn't particularly respectful of technical qualifications ... good programmers came from whatever random background and turned out to be good at coding. Privileged middle class but not rich is the most common background I've seen.

An old online acquaintance used to say that most programmers aren't geniuses, but they are the smartest people they know. They are the ones who can get in and make the weird machine in the corner work. I guess that has some bad side effects when wealth and power are added to it.

Do you get the same thing with surgeons being obsessed with obscure surgical repetitive strain injury charities, or does being up to your elbows in human meat limit that?


Posted by: conflated | Link to this comment | 08-14-15 8:21 AM
horizontal rule
140

most programmers aren't geniuses, but they are the smartest people they know.

This is a good line.


Posted by: heebie-geebie | Link to this comment | 08-14-15 8:31 AM
horizontal rule
141

138 - ding ding ding ding ding!!!


Posted by: dairy queen | Link to this comment | 08-14-15 8:40 AM
horizontal rule
142

An old online acquaintance used to say that most programmers aren't geniuses, but they are the smartest people they know.

I work with, hire, and manage programmers, and sometimes play one myself, and I don't find programmers particularly smart at all.

And I am particularly good at the 'make the weird machine in the corner work' stuff myself, which isn't really the same thing as being good at programming at all. I'm not a brilliant coder, but I'm a very quick study, read super-fast, know how to structure web queries to find the information I'm looking for, and have been exposed to a huge range of shite legacy systems of all kinds, so I usually have a pretty good sense of the ballpark area of investigation I need to look at. Being good at that sort of thing isn't much of an indicator of whether you can produce good, functional, clean code.


Posted by: nattarGcM ttaM | Link to this comment | 08-14-15 8:51 AM
horizontal rule
143

Being good at SAS is a necessary and sufficient condition for genius.


Posted by: Moby Hick | Link to this comment | 08-14-15 8:58 AM
horizontal rule
144

99 is a caricature. I've spent the better part of two decades dealing with philanthropy, and that's not the problem the OP was complaining about.

in my experience there is almost no correlation between the time and energy required for a proposal vs. the amount of money that you eventually get. You would think a $5,000 grant would be less work than a $250,000 one, but there's actually no evidence IME that it is.

The major problem I've seen from techie philanthropists in particular is the devaluing of prospective applicants' time and denial of their (sometimes severe) opportunity costs. To take a very simple example: One SV foundation requires applicants to submit using a 1-page template.

Sounds simple enough, right? Nope. I spent FIVE HOURS trying to wrestle our text into their format. Unstable, horribly formatted text boxes that kept auto-expanding and splitting over onto the second page. Weird categories that didn't make sense for our request. And on and on.

I could have gotten a guaranteed return on that five hours by doing direct service work, policy work, outreach, supervision, etc., but to keep the organization afloat, you have to do a lot of "on spec" work for proposals that don't pan out. And funders rarely acknowledge the costs.

It's a tiny example, but it's a good distillation of how some philanthropists can be so in love with their preferred procedure that they ignore or discount its effects.

There is no reason that more foundations couldn't use a common application -- just as some colleges do. A handful of regional grantmaking associations have done so. Yet relatively few foundations will agree to use a common app, even for a first-round application that would allow them to request more specialized info in the second round.


Posted by: Witt | Link to this comment | 08-14-15 9:18 AM
horizontal rule
145

There is no reason that more foundations couldn't use a common application

Disruption opportunity!


Posted by: Bave | Link to this comment | 08-14-15 9:25 AM
horizontal rule
146

Foundations don't like to pay indirects. It's better to get federal money or just see what big pharma will pay for.


Posted by: Moby Hick | Link to this comment | 08-14-15 9:26 AM
horizontal rule
147

144 last: I remember thinking the same thing back when I was on the academic job market. Would it be too much for universities to settle on a single page length for candidates' research descriptions? I think I wrote about 8 different versions.


Posted by: AcademicLurker | Link to this comment | 08-14-15 9:37 AM
horizontal rule
148

there is almost no correlation between the time and energy required for a proposal vs. the amount of money that you eventually get

Oh god yes. If anything it's an inverse step function or something - there's a certain low-level grant where the funders are so convinced of their righteousness that they jealously guard access to that $25k of funding, want monthly progress reports on how you spent $2k this month, etc. Guess what, I've at times dropped $2k of supplies on the floor. Whoops!


Posted by: SP | Link to this comment | 08-14-15 10:32 AM
horizontal rule
149

As long as you could make a scientific case for it.


Posted by: Moby Hick | Link to this comment | 08-14-15 10:34 AM
horizontal rule
150

Will It Bounce?


Posted by: Bave | Link to this comment | 08-14-15 10:38 AM
horizontal rule
151

"9.8 meters per second per second. Does it still hold?"


Posted by: Moby Hick | Link to this comment | 08-14-15 10:44 AM
horizontal rule
152

52, 139, etc: I've been a little surprised at how many programmers don't recognize that computability and solvability have limits - they really assume that any clearly stated problem has a discoverable solution. I understand this in the self-taught or trade-taught, but some of them have CS degrees! They know big oh! WTF & GEB!?!
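For what it's worth, here's a concrete, runnable example of a perfectly clear question with no known answer: whether the loop below terminates for every positive n is the Collatz conjecture, which is still open (and the fully general version, "does this arbitrary program halt?", is provably undecidable). Illustrative sketch only:

    def collatz_steps(n):
        # Halve even numbers, map odd n to 3n + 1, and count steps
        # until (if ever) we reach 1.
        steps = 0
        while n != 1:
            n = n // 2 if n % 2 == 0 else 3 * n + 1
            steps += 1
        return steps

    print(collatz_steps(27))  # 111 -- but nobody can prove it finishes for every n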


Posted by: clew | Link to this comment | 08-14-15 2:52 PM
horizontal rule
153

152: We just need a poverty oracle.


Posted by: dalriata | Link to this comment | 08-14-15 3:26 PM
horizontal rule
154

We just need to teach the poor to code! Thanks to the Pascal's Mugging principle described in the article, also known as Dick Cheney's One Percent Doctrine, surely increasing from a 0% chance of becoming a millionaire to a 0.0001% chance of becoming a millionaire would be greater than any other form of aid they could receive.
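Spelling out the arithmetic being mocked (illustrative numbers only):

    # Expected gain from "teach the poor to code," Pascal's Mugging style.
    p_before, p_after = 0.0, 0.000001     # 0% -> 0.0001% chance of millionaire-hood
    payoff = 1_000_000                    # value of becoming a millionaire, in dollars
    print((p_after - p_before) * payoff)  # 1.0 -- a whole dollar of expected value
    # The mugging only "works" once the payoff is allowed to be arbitrarily huge,
    # so any nonzero probability swamps every ordinary intervention.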


Posted by: Cryptic ned | Link to this comment | 08-14-15 6:13 PM
horizontal rule
155

154: Through a chain of coincidences too tedious to explain, I ended up writing referee reports for most of the chapters of a book on "global catastrophic risks" edited by N\ck Bost\rom and an actual scientist. There were at least two chapters where I had to restrain myself from quoting from the Medium Lobster's cost-benefit analysis of pre-emptively blowing up the Moon. Reading the link in the OP, and much else besides since then, makes me regret my restraint.


Posted by: not feeling very nymous | Link to this comment | 08-14-15 7:04 PM
horizontal rule
156

The Moon. Fuck that thing. Yeah, let's blow it up. I'm sick and tired of tides anyway.


Posted by: Spike | Link to this comment | 08-14-15 7:14 PM
horizontal rule
157

This seems like an appropriate thread to complain that I didn't really enjoy Ex Machina because so much of it felt like being trapped in a conversation with someone who just keeps going on about AI and what it might do.


Posted by: fake accent | Link to this comment | 08-14-15 7:28 PM
horizontal rule
158

This seems like an appropriate thread to complain that I didn't really enjoy Ex Machina because so much of it felt like being trapped in a conversation with someone who just keeps going on about AI and what it might do.


Posted by: fake accent | Link to this comment | 08-14-15 7:28 PM
horizontal rule
159

I blame AI for the double post.


Posted by: fake accent | Link to this comment | 08-14-15 7:29 PM
horizontal rule
160

From the trailer, it looked like being trapped in a conversation with someone who imprinted on the collected works of Hajime Sorayama.


Posted by: | Link to this comment | 08-14-15 7:36 PM
horizontal rule
161

152: A lot of programmers with CS degrees skip or fudge the computability and complexity parts. I wish CS departments would teach graduates the theoretical skills needed to solve real-world problems instead of VB macros and network router certificates that quickly become obsolete.


Posted by: conflated | Link to this comment | 08-14-15 7:54 PM
horizontal rule
162

I liked Ex Machina though. The whole tech-coded-as-female thing; I haven't seen it done that well in a movie before.

I can't quite bring myself to watch Her though. Just makes me wince to think about it.


Posted by: conflated | Link to this comment | 08-14-15 7:56 PM
horizontal rule
163

82: Speaking of Worldcon, is anyone else here planning to be at Worldcon in Spokane next week? I'm going to be there, to help do my part to keep the puppies from running the Hugos into the ground in future years. Anyone else want to meet up while there?


Posted by: Dave W. | Link to this comment | 08-15-15 12:20 AM
horizontal rule
164

I agree with 142.3, though both skills can be kind of opaque to those outside them.

This podcast had some good examples of patterns of thought which seem to be quite hard / unintuitive for non-programmers - functions, scoping, and mutable state changing over time, for example. Even in a lab environment, with lots of smart technical people, they describe that "one guy" who wrangles the scripts and software infra. (Which doesn't make them a genius.) Now I've never worked in a lab environment myself; I guess this crowd can contradict me.
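(A tiny example of the mutable-state thing, for the non-programmers - the classic Python default-argument surprise, not anything from the podcast:)

    def add_item(item, bucket=[]):
        # The default list is created once, when the function is defined,
        # and then shared by every call that doesn't pass its own list.
        bucket.append(item)
        return bucket

    print(add_item("a"))  # ['a']
    print(add_item("b"))  # ['a', 'b'] -- state quietly persisted between calls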

But you take that same inaccessibility of skill set and transfer it to a less technical white-collar office environment ... and sometimes the programmers end up being the ones who actually know how the internals of the business work. Or interactions where IT types are either the most knowledgeable people in the conversation or kind of trained not to care.

Ugh I think I have just ended up reifying a tedious IT geek stereotype instead of explaining anything.


Posted by: conflated | Link to this comment | 08-15-15 3:06 AM
horizontal rule
165

mutable state changing over time

This is hard for programmers, too.


Posted by: dalriata | Link to this comment | 08-15-15 6:20 AM
horizontal rule
166

||
Most of my suspicion that Trump would fade out eventually was based on the assumption that he was mainly an egotist and not interested in the important nuts and bolts of campaigning, didn't have a lot of staff to run his ground game, and so on. It looks like that might not be true.
|>


Posted by: MHPH | Link to this comment | 08-15-15 6:24 AM
horizontal rule
167

In New York, you can't turn on your car wipers if your headlights are broken.


Posted by: Moby Hick | Link to this comment | 08-15-15 6:53 AM
horizontal rule
168

Canada is all casino, from a distance.


Posted by: Moby Hick | Link to this comment | 08-15-15 8:45 AM
horizontal rule
169

These people are just so ...

https://80000hours.org/

"Sam wasn't sure how he could make a difference. Now he's a trader and believes he can make a greater difference through philanthropy, and will give a six figure sum to charity this year.


"80,000 Hours helped me think more critically about my career choice, which has had a significant impact on where I am today.""

Puke puke puke puke puke.


Posted by: dairy queen | Link to this comment | 08-19-15 6:11 PM
horizontal rule