Re: The AI Chat

1
It feels a bit like the early days of the web felt.
This is accurate. I don't know where this goes, but it'll be really, really interesting. I have a friend who's been "interviewing" ChatGPT like it's a candidate for a job at his (tech) company. And it's done really, really well. To which my response has been "that simply shows that your interview method isn't very good". But maybe I'm wrong.
Posted by: Chetan Murthy | Link to this comment | 12- 4-22 8:33 PM
2

I was talking about this earlier, and what it produces is precisely bullshit by Harry Frankfurt's definition -- utterances produced without any regard, one way or the other, for their truth or relationship to reality. It's a funny thing to have automated.

Some of the results are terrific, though -- you should try getting it to write sea shanties about your professional life. Those have been turning out great.


Posted by: LizardBreath | Link to this comment | 12- 4-22 9:45 PM
3

Nothing will top Translation Party.


Posted by: fake accent | Link to this comment | 12- 4-22 10:12 PM
4

And it can write (some kinds of) code for you! Which, given its difficulties with some basic word problems, is odd.

The bat/ball saga, part two.


Posted by: Minivet | Link to this comment | 12- 4-22 10:13 PM
5

I instructed it to write a Ronald Reagan speech against car dependence with a folksy analogy. It read more like a student essay, but it did analogize cars to cake (in that cake is good, but too much is bad for you).


Posted by: Minivet | Link to this comment | 12- 4-22 10:14 PM
6

Incredibly, Translation Party is still up, but if it's re-running the prompts, the translations have improved so much that this thread may not be as funny as it was in 2009.


Posted by: fake accent | Link to this comment | 12- 4-22 10:19 PM
7

Presumably someone has asked the chatbot to create poems in the English language modeled after real poems that end in "Fuck you, clown!"?


Posted by: fake accent | Link to this comment | 12- 4-22 10:23 PM
8

I really would hate to see the guy who writes the sea shanties for our office lose his job due to automation.


Posted by: Moby Hick | Link to this comment | 12- 4-22 10:25 PM
9

This guy's tweet--that GPT-3 is like having an infinite number of really dumb employees--was funny, and then I got to thinking about what my computer actually does: https://twitter.com/ZackKorman/status/1599317547509108736


Posted by: snarkout | Link to this comment | 12- 5-22 5:16 AM
10

Yeesh. Anyone with a basic understanding of molecular biology (each amino acid is encoded by three nucleotides) would do better at answering this. It seems great at giving supremely confident and totally incorrect answers. (Was it trained on tweets and online comment sections? Probably not because it doesn't answer half the queries with Nazi propaganda.)
It seems like a combination of a good NLP algorithm, Google calculator, and Wikipedia.


Posted by: SP | Link to this comment | 12- 5-22 6:58 AM
11

Probably not because it doesn't answer half the queries with Nazi propaganda.

That's probably the biggest function of its appropriateness filter, which people have also found ways around.


Posted by: Minivet | Link to this comment | 12- 5-22 7:02 AM
12

A work colleague has been setting it JS coding problems--asking it to solve some of the basic things we actually do at work, not setting it the sort of puzzles people get set in interviews--and the output is pretty decent.


Posted by: nattarGcM ttaM | Link to this comment | 12- 5-22 7:47 AM
13

I was talking about this earlier, and what it produces is precisely bullshit by Harry Frankfurt's definition -- utterances produced without any regard, one way or the other, for their truth or relationship to reality. It's a funny thing to have automated.

It's interesting though to consider how much of human utterance is bullshit in this sense. All fiction is bullshit, for a start. All Carey ritual communication is bullshit.


Posted by: ajay | Link to this comment | 12- 5-22 8:01 AM
14

Right, this is why it works, and why chatbots all the way back to ELIZA work - there's a lot of predictable, repeating structure in language itself, which makes it possible to guess plausible text. (Similarly, if you're learning a new language, one of the first things you'll be taught is help strategies: the phrases you need in order to respond validly to other people and explain that you didn't understand what they said or don't know the meaning of some word.)
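To make the point concrete: even the crudest possible language model exploits that repeating structure. The sketch below (toy corpus and function names are mine, not anything from the thread) builds a bigram table and walks it, so every adjacent word pair in the output is a pair that really occurred in the input -- locally plausible, globally indifferent to truth:

```python
import random
from collections import defaultdict

# Hypothetical toy corpus; any text works.
corpus = (
    "the cat sat on the mat and the dog sat on the rug "
    "and the cat saw the dog and the dog saw the cat"
).split()

# Count which word follows which.
followers = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    followers[a].append(b)

def babble(start, n_words=8, seed=0):
    """Walk the bigram chain, always picking a word actually seen after the current one."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n_words - 1):
        nxt = followers.get(out[-1])
        if not nxt:  # dead end: no word ever followed this one
            break
        out.append(rng.choice(nxt))
    return " ".join(out)

print(babble("the"))
```

Scale the table up from bigrams to billions of parameters and you get something like the pattern-matching the comments above are describing.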


Posted by: Alex | Link to this comment | 12- 5-22 8:10 AM
15

I would say fiction isn't bullshit -- that is, the one sentence I gave pointing to Frankfurt's definition might fit fiction, but to the extent it does that's bad drafting on my part. In a fictional context the concept of bullshit doesn't squarely apply. Lots of the texts produced by AI are purporting to be non-fiction, and to that extent are bullshit; the ones that purport to be fiction seem to me to have a relationship to human-generated fiction analogous to bullshit.


Posted by: LizardBreath | Link to this comment | 12- 5-22 8:22 AM
16

I love the Bullshit reference; spot on for people who are familiar with it.

And this from 14 seems like the key,

there's a lot of predictable, repeating structure in language itself, which makes it possible to guess plausible text

ChatGPT is just astoundingly good at finding and reproducing those patterns. Of course you can find where it breaks down, but even at this relatively early point in the development of these tools, a lot of the code it produces is helpful, its translations are about as good as the other online systems, and it's often just delightful.


Posted by: ogged | Link to this comment | 12- 5-22 8:38 AM
17

There's a book about reduced contexts for speech that I liked a lot, Forms of Talk by Goffman. He considers radio talk (superficially open and spontaneous; isn't) at length. That, and say QVC, seem like places where this might work -- endless praise of cubic zirconia or kitchen gadgets. Punditry also, maybe: could an automated Michael Tracey defeat 1,000 automated 5-year-olds?


Posted by: lw | Link to this comment | 12- 5-22 8:48 AM
18

But could an automated Maxine Waters beat 1,000 Michael Traceys?


Posted by: Barry Freed | Link to this comment | 12- 5-22 9:06 AM
19

Could 1,000 automated Michael Traceys defeat an automated Maxine Waters?


Posted by: ajay | Link to this comment | 12- 5-22 9:06 AM
20

GOD DAMMIT BARRY


Posted by: ajay | Link to this comment | 12- 5-22 9:07 AM
21

All Carey ritual communication is bullshit.

I know you're not talking about my Mariah.


Posted by: heebie-geebie | Link to this comment | 12- 5-22 9:15 AM
22

I assume it's about Cubs baseball.


Posted by: Moby Hick | Link to this comment | 12- 5-22 9:18 AM
23

7: I did that this morning (using asterisks to avoid the filter) but it treated the "Fuck you, clown" as diegetic, adding after it one final verse about the jarring intrusion.


Posted by: Minivet | Link to this comment | 12- 5-22 9:28 AM
24

20 ha, you owe me a beer


Posted by: Barry Freed | Link to this comment | 12- 5-22 9:33 AM
25

21: I'm pretty sure that all she wants for Christmas isn't me.


Posted by: ajay | Link to this comment | 12- 5-22 9:38 AM
26

I asked it my favorite brain-teaser, about someone arriving at a train platform at a random time but boarding one of two equally frequent lines 80% of the time. It got it conceptually right but flubbed the illustration:

This situation is possible because the trains on line A and line B may have different schedules, such that one line is more likely to arrive at the station platform first. For example, if line A has a train that arrives at the platform at 7:35, 7:45, 7:55, and 8:05, while line B has a train that arrives at 7:40, 7:50, and 8:00, then line A is more likely to arrive first when the man arrives at the platform at a random time between 7:30 and 8:00. Over time, the man will tend to board line A more often because it is more likely to arrive at the platform first.

It would have been completely correct if the example had had line A arriving at 7:38, 7:48, etc.


Posted by: Minivet | Link to this comment | 12- 5-22 9:39 AM
27

Good thread https://twitter.com/studentactivism/status/1599753552401813504?s=46&t=lzdbu8uahzaQ1N0HkDb7sw


Posted by: Barry Freed | Link to this comment | 12- 5-22 9:58 AM
28

I just asked it to come up with an SF plot synopsis with at least two twists. It was pretty by-the-numbers, but the twists were real twists (even if it needed an extra push to reveal #2).


Posted by: Minivet | Link to this comment | 12- 5-22 3:34 PM
29

Nice post from someone using these tools to learn a (difficult) programming language. Note that "hallucination" is the term for these models making stuff up that's not connected to reality. https://simonwillison.net/2022/Dec/5/rust-chatgpt-copilot/


Posted by: ogged | Link to this comment | 12- 5-22 6:15 PM
30

The existence of a viable Republican party demonstrates that we as a society have inadequate bullshit filters, and ChatGPT is a nuclear-powered firehose bullshit generator, so we're pretty fucked.


Posted by: Hamilton-Lovecraft | Link to this comment | 12- 6-22 1:54 PM
31

Ham-Love!


Posted by: teofilo | Link to this comment | 12- 6-22 1:57 PM
32

It reminds me slightly of Douglas Adams' "Reason" decision support software from "Dirk Gently".


Posted by: Ajay | Link to this comment | 12- 6-22 2:28 PM
33

I remember that. It was done for the C.I.A. or America in general.


Posted by: Moby Hick | Link to this comment | 12- 6-22 2:43 PM
34

Some of you are linguists right? https://maximumeffort.substack.com/p/i-taught-chatgpt-to-invent-a-language


Posted by: torque | Link to this comment | 12- 6-22 8:41 PM
35

34: Interesting! I think what all of this mostly demonstrates is just that lots of kinds of writing, even pretty complex types, are highly formulaic, and with enough of a database the AI can imitate them very well.


Posted by: teofilo | Link to this comment | 12- 6-22 9:15 PM
36

Yeah, I think it's finally time to hang up my doggerel hat. The AI is a way better poet/lyricist than I am.


Posted by: Natilo Paennim | Link to this comment | 12- 7-22 10:08 AM
37

I wonder if it can make infinite versions of "This Is Just To Say."


Posted by: Minivet | Link to this comment | 12- 7-22 10:20 AM
38

BUFFALO BUFFALO BUFFALO


Posted by: AI-GENERATED GRANDMA | Link to this comment | 12- 7-22 10:32 AM
39

Update: its version was horrible, and even coaching on what was wrong did not improve it - it insisted on lines being complete (if short) clauses.


Posted by: Minivet | Link to this comment | 12- 7-22 10:35 AM
40

Further update: I asked it to critique the difference between the original and its version, and it gave something quite sound. I asked it to rewrite incorporating that critique, and it got a lot closer, but inexplicably threw plums in near the end (which were not part of its version).


Posted by: Minivet | Link to this comment | 12- 7-22 10:51 AM