Re: On My Mind

1

For myself, I haven't tried to use ChatGPT yet. I have tried Google's version and it is not very good.

However, over the last year my perspective has shifted from, "'AI' is overhyped" to, "oh, wow, I don't have a good sense of how powerful 'AI' is right now, but I am convinced that the evolution over the next decade will be one of the most important stories of my lifetime."


Posted by: NickS | Link to this comment | 03-19-24 8:48 AM
horizontal rule
2

ogged's take seems reasonable. The thing I get caught up on with all the hype is that these things are only actually useful if their output is both predictable and controllable, and that doesn't seem to be the case consistently at this point. Maybe it will be at some point.


Posted by: teofilo | Link to this comment | 03-19-24 8:50 AM
horizontal rule
3

The garage door thing is genuinely surprising and useful, and maybe a counter to this piece that I read earlier and thought was pretty good: https://www.wheresyoured.at/peakai/


Posted by: Barry Freed | Link to this comment | 03-19-24 8:57 AM
horizontal rule
4

That's a very clever Twitter thread, and I mean that in an entirely derogatory way.

The first post links to an authoritative (and, crucially, very long) article from a reliable source - Yahoo Finance. That's called "establishing authority". You might well click through out of curiosity, but it's a very long article, so you won't read all of it. But this tweet gives you a vague feeling that the tweeter is being honest. At least he's linked to a reliable source.

The second tweet has a screenshot of a passage from that article. If you are a bit suspicious, you might click through to check - and you'd find that yes, indeed, it is an accurate and untampered screenshot. Now you're thinking "OK, Jerrick White seems like a reliable person."

The third tweet does the same thing - and, again, it's an accurate and untampered screenshot. A really suspicious person might check that too - and again they'd be reassured.

You are now really pretty convinced that this is an honest, truthful person writing.

Hold onto that, because the fourth tweet has the payload in it.

The fourth tweet doesn't have a screenshot - it has three quotes from the article.

Barnett's lawyer described him as "really happy to be telling his side of the story" hours before his death.

This is an accurate quote from the article. Here's the full paragraph:


The previous day, Barnett had been on a roll as a video camera recorded the event. "John testified for four hours in questioning by my co-counsel Brian," says Turkewitz. "This was following seven hours of cross examination by Boeing's lawyers on Thursday. He was really happy to be telling his side of the story, excited to be fielding our questions, doing a great job. It was explosive stuff. As I'm sitting there, I'm thinking, 'This is the best witness I've ever seen.'" At one point, says Turkewitz, the Boeing lawyer protested that Barnett was reciting the details of incidents from a decade ago, and specific dates, without looking at documents. As Turkevitz recalls the exchange, Barnett fired back, "I know these documents inside out. I've had to live it."

Barnett told family/friends "If I die, it's not suicide."

This is the payload. This appears nowhere in the Yahoo Finance article, and there's no other source given. It is also by far the most explosive part of the entire thread and, naturally, it's completely unsourced. Search the article if you like.

But, four tweets into a thread which has so far been an entirely accurate summary of the article, it's very likely that you wouldn't notice that. You'd take it as gospel and spread it, which is exactly what Jerrick White wants you to do. You get dogs to swallow a pill by wrapping it up in steak; you get people to swallow a lie by wrapping it up in truth.


Posted by: ajay | Link to this comment | 03-19-24 9:00 AM
horizontal rule
5

That piece is one of the things I was reacting to!


Posted by: ogged | Link to this comment | 03-19-24 9:01 AM
horizontal rule
6

That piece doesn't include anything about Barnett telling anyone "If I die, it's not suicide", though. Have a look.

What it does include is Barnett's family saying "this guy had post-traumatic stress disorder and anxiety attacks, which is what we believe led to his death".


Posted by: ajay | Link to this comment | 03-19-24 9:04 AM
horizontal rule
7

A torsion spring isn't exactly a subtle thing. It's a few feet long and they come in pairs, so unless they both break, you'll see a difference.


Posted by: Moby Hick | Link to this comment | 03-19-24 9:04 AM
horizontal rule
8

Sorry, 5 was to 3.

On the Boeing guy, this local report seems to be the source of the quote. It's local news, so who knows, but the tweeter didn't just make it up.


Posted by: ogged | Link to this comment | 03-19-24 9:05 AM
horizontal rule
9

To paraphrase Henry Farrell, it may be like a microwave, something that does a few things well that were much more difficult otherwise, but nothing like the omnitool originally billed.


Posted by: Minivet | Link to this comment | 03-19-24 9:07 AM
horizontal rule
10

I knew nothing about garage doors when my torsion spring broke, but it was pretty obvious when it broke. The only surprise was how heavy the garage door was with the spring broken.


Posted by: Moby Hick | Link to this comment | 03-19-24 9:11 AM
horizontal rule
11

ChatGPT is useful for generating boilerplate code-- here's a list of fields, I'd like a class with getters and setters and error handling.
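For instance, a prompt like "here are fields name, email, and age; give me a class with getters, setters, and error handling" reliably gets you something in the shape of this (my own minimal sketch of the kind of output it produces, with made-up names, not actual ChatGPT output):

public class Contact {
    private String name;
    private String email;
    private int age;

    // Getter and validating setter for each field -- exactly the kind of
    // mechanical boilerplate that's tedious to type and easy to generate.
    public String getName() { return name; }

    public void setName(String name) {
        if (name == null || name.isEmpty()) {
            throw new IllegalArgumentException("name must not be empty");
        }
        this.name = name;
    }

    public String getEmail() { return email; }

    public void setEmail(String email) {
        if (email == null || !email.contains("@")) {
            throw new IllegalArgumentException("email must contain '@'");
        }
        this.email = email;
    }

    public int getAge() { return age; }

    public void setAge(int age) {
        if (age < 0 || age > 150) {
            throw new IllegalArgumentException("age must be between 0 and 150");
        }
        this.age = age;
    }
}

Nothing deep, but multiplied over a dozen fields it saves real typing.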

We may have talked about it here already, but I see it as something pretty interesting for making bottomless pools of soap opera plots, choose-your-own-adventure games, and the like. These generated entities are interesting to me -- approximations of our verbal culture. Open world video game walkthroughs seem like related entities, and people like those much as they like films. There's a paper (arXiv, not reviewed) that I liked that considers what kind of knowledge is necessary to do this; it's not recognizing truth or deep insight, but what it can do for narrative is somehow similar. The authors cite Zellig Harris, Chomsky's advisor. I think I've mentioned this here before; apologies if I'm repeating myself.
https://arxiv.org/abs/2310.01425


Posted by: lw | Link to this comment | 03-19-24 9:25 AM
horizontal rule
12

Someone has figured out how to use AI to call me six times a day to try to sell me fraudulent Medicare supplemental insurance. I would rather destroy AI than give up my phone as a way for someone I don't already know to contact me. And I'm pretty sure that's the choice.


Posted by: Moby Hick | Link to this comment | 03-19-24 9:35 AM
horizontal rule
13

It's local news, so who knows

So it wasn't "he told family/friends" - Jerrick White made that up. It was one woman, full name unknown, who says her mother was a friend of Barnett's mother, and who contacted local news to say that she happened to run into him a month ago and he said, to her and to no one else, "If I die, it wasn't suicide".

vs

Dude's entire family who said "this had been a hellish ordeal for him and he had anxiety and PTSD that led to his death".

I mean, come on.


Posted by: ajay | Link to this comment | 03-19-24 9:45 AM
horizontal rule
14

The frustrating, but unsurprising, thing is that ChatGPT isn't actually great at writing essays, but it's really good at code, which is why it's the death of the humanities and not of C+ junior developers or something. It's a massive pedagogical challenge, but once that thing can write assessment reports it will be my friend.

The Calabat likes videos where ChatGPT plays chess remarkably well until it takes its own pieces.


Posted by: Cala | Link to this comment | 03-19-24 9:49 AM
horizontal rule
15

I see Ajay is in the pocket of big airplane. I discount the family's story insofar as they have a much better chance of recouping damages from saying Boeing gave him PTSD, versus saying Boeing had him killed. Also, his lawyers are shocked, but that's also typical in cases of suicide, so again, not dispositive. Maybe the local lady is lying. Possible! Also possible she's telling the truth. I'm not just trying to play conspiracy theorist; it genuinely feels murky to me, as these things often do. And I genuinely wish I had all day to argue (it's not Kate!) but I don't.


Posted by: ogged | Link to this comment | 03-19-24 10:01 AM
horizontal rule
16

once that thing can write assessment reports it will be my friend

Oh, another thing that they're apparently very good at: HIPAA'ed versions are recording and summarizing physician visit notes.


Posted by: ogged | Link to this comment | 03-19-24 10:03 AM
horizontal rule
17

I see Ajay is in the pocket of big airplane.

Please remove all articles from the seat back pocket before deplaning.


Posted by: Moby Hick | Link to this comment | 03-19-24 10:09 AM
horizontal rule
18

Your garage door experience is very super extremely not what has ever happened when I have tried to get ChatGPT to assist me in getting actual information about anything ever. But congratulations!


Posted by: redfoxtailshrub | Link to this comment | 03-19-24 10:26 AM
horizontal rule
19

I have no opinion on what really happened to the Boeing guy, but "If I die, it's not suicide" is the kind of thing that a guy seriously considering suicide might say, if he's concerned about his posthumous reputation, or about his family collecting on his life insurance (in fact, life insurance generally pays if the decedent held the policy for more than two years before suicide, but most people don't know that (in the U.S.) (not legal advice)). Also the kind of thing a friend of the deceased is likely to say that he said.


Posted by: unimaginative | Link to this comment | 03-19-24 10:29 AM
horizontal rule
20

I can probably fix a garage door for you if I'm in Cleveland.


Posted by: Moby Hick | Link to this comment | 03-19-24 10:29 AM
horizontal rule
21

"I discount the family's story insofar as they have a much better chance of recouping damages from saying Boeing gave him PTSD, versus saying Boeing had him killed"

This is, and I mean this entirely seriously, the sort of thing that a sociopath would say. "Of course the dead man's family are going to lie about the death of a loved one if it marginally improves their chances of financial gain. Who wouldn't?"


Posted by: Ajay | Link to this comment | 03-19-24 10:30 AM
horizontal rule
22

Yes, 15 would make sense if the family had a whole legal and PR team on call.


Posted by: Minivet | Link to this comment | 03-19-24 10:33 AM
horizontal rule
23

As with MOOCs, I'm skeptical of the long-term business models for a lot of the proposed applications. "Look at how many people we won't have to hire!" is appealing to a certain organizational leadership type until "How the hell did that end up being so expensive?" becomes the harsh reality.

Also, I don't get chat interfaces for things that don't seem like they'd be done in chats if you didn't have a chatbot in front of you.


Posted by: fake accent | Link to this comment | 03-19-24 10:34 AM
horizontal rule
24

Also, how the hell are there so few Google results for "chatbotage"?


Posted by: fake accent | Link to this comment | 03-19-24 10:35 AM
horizontal rule
25

However! Whisper transcription (also an OpenAI product) really is very good and makes various things in my work life a lot easier and better. I'm hoping there will soon be more options to hook it into speech-to-text for good direct dictation workflows, too, because our dysgraphic kid finds the current state of the art for dictating papers (etc.) still too clunky and unpleasant to use.


Posted by: redfoxtailshrub | Link to this comment | 03-19-24 10:35 AM
horizontal rule
26

I tried to use ChatGPT to write me an introduction for a speaker based on their biography. It was at least a starting point, but it definitely read like a junky bot webpage.


Posted by: heebie-geebie | Link to this comment | 03-19-24 10:40 AM
horizontal rule
27

It's rather frustrating that nothing free on desktop is as good as my Android's transcription.


Posted by: Minivet | Link to this comment | 03-19-24 10:44 AM
horizontal rule
28

I'd rather have a chatbot in front of me than a skilled surgeon's lobotomy.


Posted by: Randy Baitzclick, M. D. | Link to this comment | 03-19-24 10:44 AM
horizontal rule
29

The problem with torsion springs is that if they break, they break with a great deal of force. The new ones have a cable inside to keep the break controlled. Back in the day, you used to just lose your head or damage your car.


Posted by: Moby Hick | Link to this comment | 03-19-24 11:01 AM
horizontal rule
30

Maybe that's what happened to Middleton? It would be irresponsible not to speculate.


Posted by: Moby Hick | Link to this comment | 03-19-24 11:15 AM
horizontal rule
31

Kate was the torsion spring of Boeing.


Posted by: heebie-geebie | Link to this comment | 03-19-24 11:19 AM
horizontal rule
32

in fact, life insurance generally pays if the decedent held the policy for more than two years before suicide, but most poeple don't know that (in the U.S) (not legal advice)

I took out a life insurance policy after Elke was born, and the investigator who came to interview me decided to give me a pep talk about suicide at the end. "I've interviewed a lot of people, and in all these years I've only met one who really didn't have anything to live for." I assumed that one person wasn't me, but I don't know for sure.


Posted by: lurid keyaki | Link to this comment | 03-19-24 11:31 AM
horizontal rule
33

Relevant to 18, Dan Luu makes the point here that LLM "hallucinations" really aren't that much different than the shallow, garbagey results you get from Google for many practical queries these days. Like, querying "how to fix a garage door" might link to a video showing you how to check your torsion spring (actually it kind of looks like it does?) but it's just as likely to send you to an infinite number of content farm sites that are morally (or actually) equivalent to a ChatGPT session. (An actual ChatGPT session is exactly what Quora has started doing lately, which is incredibly annoying and unhelpful. I mean, yes, Quora is shit, but it used to be marginally better than the average content farm and now it's not.)


Posted by: Yawnoc | Link to this comment | 03-19-24 11:36 AM
horizontal rule
34

IME, the Google search of the archives here tends to do things like return all hits for "Africa" when you search on "Uganda." (If the rest of you aren't getting stuff like that, maybe it's a weird problem on my end, but I find it kind of maddening.)


Posted by: lurid keyaki | Link to this comment | 03-19-24 11:43 AM
horizontal rule
35

I have whatever kind of life insurance you can get without talking to someone about my feelings.


Posted by: Moby Hick | Link to this comment | 03-19-24 11:43 AM
horizontal rule
36

Term life?


Posted by: Moby Hick | Link to this comment | 03-19-24 11:44 AM
horizontal rule
37

Colonial Peen?


Posted by: Moby Hick | Link to this comment | 03-19-24 11:46 AM
horizontal rule
38

My So-Called Life Insurance


Posted by: lurid keyaki | Link to this comment | 03-19-24 11:47 AM
horizontal rule
39

I laughed.


Posted by: heebie-geebie | Link to this comment | 03-19-24 11:47 AM
horizontal rule
40

The family is most likely right re: the Boeing guy, but it does have a Michael Clayton vibe.

https://www.youtube.com/watch?v=NYknJmoDDPs


Posted by: JP Stormcrow | Link to this comment | 03-19-24 11:49 AM
horizontal rule
41

I think my life insurance expires the same year my son should finish college. So my wife isn't tempted to pick up some floozy.


Posted by: Moby Hick | Link to this comment | 03-19-24 12:09 PM
horizontal rule
42

16: I acknowledge its usefulness. I fed it a bibliography that was in the wrong format due to some insane house style and made it fix it. But I think it is telling that "I'm going to feed it to ChatGPT" or some similar variation is what academics say when they don't care about the writing.


Posted by: Cala | Link to this comment | 03-19-24 12:11 PM
horizontal rule
43

There was a horror story on Bluesky of someone with students whose work quality suddenly plummeted, and he discovered they had been told by other instructors to use ChatGPT to sound whiter. Interestingly, it had in fact worked with things like email requests to professors, where it was all about the style, but very much not for academic coursework.


Posted by: Minivet | Link to this comment | 03-19-24 12:41 PM
horizontal rule
44

12: YES! I hope it can be more useful than that, but this is my deep-seated fear. Also that personal connections and relationships in work will be even more devalued. You don't need a real human doctor to accompany you through illness, just a bot. Personalized, personal service will be a luxury product available only to professional sports players and rich Senators.


Posted by: Bostoniangirl | Link to this comment | 03-19-24 12:46 PM
horizontal rule
45

Another horror story waiting to unfold.


Posted by: Minivet | Link to this comment | 03-19-24 12:49 PM
horizontal rule
46

As the kids say, cringe:

Is this thing legit? How accurate are we talking? Break it down for me, nerd style.
Pretty darn accurate! Not perfect, but it's the next best thing to a lab test for a quick check. Powered by patented HeHealth wizardry (think an AI so sharp you'd think it aced its SATs), our AI's been battle-tested by over 40,000 users, hitting accuracy levels from 65% to 96% across various conditions. However, dive into the nerdier deets and you'll see that things like your selfie's lighting, the particular health quirks you're scouting for, and a rainbow of skin tones might tweak those numbers a bit.
Can I use Calmara on other area other than penis?
While you might be curious about using Calmara for more than just peen checks, it's really in its element when focusing on the D. Its genius brain isn't quite tuned for other zones like the balls, booty, or mouth, meaning it might miss the mark accuracy-wise. We're all about sticking to what we know best, so if you're noticing anything sus elsewhere, it's a solid move to reach out to a health pro for the full lowdown.

Considering reporting to the FDA personally just to be safe.


Posted by: Minivet | Link to this comment | 03-19-24 12:56 PM
horizontal rule
47

It's asking for a pic of your junk? That's got to be satire.


Posted by: Barry Freed | Link to this comment | 03-19-24 1:01 PM
horizontal rule
48

Works every time, 60% of the time.


Posted by: Moby Hick | Link to this comment | 03-19-24 1:02 PM
horizontal rule
49

I wondered, but I found a closely related company called HeHealth that's been around for 3 years, and their founder's LinkedIn post seems pretty sincere.


Posted by: Minivet | Link to this comment | 03-19-24 1:25 PM
horizontal rule
50

Cringe aside, the fact that they seem to be carefully limiting the scope of their claims suggests that they are in fact sincere.


Posted by: teofilo | Link to this comment | 03-19-24 1:46 PM
horizontal rule
51

I can't access the link because my work blocks recently registered domains so just going by the excerpt in 46.


Posted by: teofilo | Link to this comment | 03-19-24 1:46 PM
horizontal rule
52

I think generative AI is going to end up being as consequential as the web is.

I loathe that gen AI models were trained on my words and images and those of millions of others without our consent. It feels horrendously invasive, like finding out that someone has been following you around for years with a voice recorder and has used your voice to build a weapon that will inevitably get used against more vulnerable people. I especially loathe that image generation is being touted as AI being "creative," when in reality it could not meaningfully create without all of those source images that it stole.

I am excited by the cool stories of AI usage, although most of them seem to be other types of AI (not generative AI) -- things like better detection of tumors in radiation images, better prediction of which roads will need pothole maintenance, better predictions of animal migration patterns and weather impacts.

I am downright horrified by the cavalier way that colleagues in my field admit to using generative AI. They seem to have no conception of the difference between low-stakes and high-stakes usage. Asking a gen AI model to suggest family vacation ideas? Sure, why not. Asking it to draft public policy language? Are you insane?

Someone I otherwise respect was excited about using AI to tell you if a person qualifies for food stamps. In a situation where someone could wind up committing federal fraud or being deported if the software gets it wrong? GOOD LORD ABOVE.


Posted by: Witt | Link to this comment | 03-19-24 2:04 PM
horizontal rule
53

52: if you did that food stamp query in healthcare using chatgpt as opposed to some kind of proprietary system, you would be violating HIPAA privacy rules.


Posted by: Bostoniangirl | Link to this comment | 03-19-24 2:09 PM
horizontal rule
54

no conception of the difference between low-stakes and high-stakes usage

Yes! I think this is also implicitly thinking of it as intelligent, but instead of saying it sucks, thinking it's awesome.


Posted by: ogged | Link to this comment | 03-19-24 2:09 PM
horizontal rule
55

Funny, but how is this a business? What's the average value from getting someone this gullible to download an app, maybe $1? In-app ads for what, supplements or gadgets maybe? If 1/1000 buy the crap, that's 40 sales so far.


Posted by: lw | Link to this comment | 03-19-24 2:09 PM
horizontal rule
56

I also really, really dislike that nearly every Zoom call I join now has 1-5 AI notetakers silently recording everything. I don't know how to let people do this for bona fide accessibility reasons and forbid others from doing it, so in practice it ends up with everyone being allowed.

So now there are all of these somewhat-accurate transcripts with even more vaguely accurate summaries floating around, that people are going to refer back to in a month or a year as an accurate rendering of the meeting.

And the thing is, they AREN'T. Which is really, really important when you are having meetings about consequential things that affect people's lives and livelihoods. I don't have time to fact-check all of them, but when I do I'm inevitably appalled by the ways AI misunderstands people's accents, idioms, or vocabulary; its near-total inability to accurately decipher when a group of people has actually come to a decision;* and its hilariously overconfident wrongheadedness in eagerly summarizing the most "important" parts of the meeting.

*This is genuinely hard! PEOPLE struggle to do this! But the software is much worse than the median human, imo.


Posted by: Witt | Link to this comment | 03-19-24 2:10 PM
horizontal rule
57

I think this is also implicitly thinking of it as intelligent, but instead of saying it sucks, thinking it's awesome.

People really, really, REALLY struggle with understanding that there is no correlation between the educated grammar/syntax of gen AI output and any underlying accuracy to the information.

I get it; it flies in the face of what we've learned from thousands of years of trying to gauge the accuracy of human language and human intelligence. But it's frightening.


Posted by: Witt | Link to this comment | 03-19-24 2:13 PM
horizontal rule
58

53: Exactly! And further to that: for many gen AI models, if you feed a question into them, you are giving them ownership (or at least usage rights) of your data to further train their model.

What happens when enough teenagers in Texas start asking chatGPT how to get abortion medication, and then some enterprising DA decides to ask the same chatGPT model to spit out names or identifying details related to teens and abortion? Do we want to gamble their safety on the chance that the software developers have the right safeguards installed such that chatGPT won't just spit out some incriminating data?


Posted by: Witt | Link to this comment | 03-19-24 2:18 PM
horizontal rule
60

49: I'm sincere on LinkedIn. That proves nothing.


Posted by: Moby Hick | Link to this comment | 03-19-24 2:27 PM
horizontal rule
61

Funny, but how is this a business?

Imagine millions of dick pics, all with ToS-gifted data and metadata on their owners or owners' partners. Pretty monetizable.


Posted by: Minivet | Link to this comment | 03-19-24 2:34 PM
horizontal rule
62

The internet was fun for a while, but it's obviously past time to nuke it and start over.


Posted by: Moby Hick | Link to this comment | 03-19-24 2:37 PM
horizontal rule
63

Seems bad, yeah.


Posted by: teofilo | Link to this comment | 03-19-24 3:12 PM
horizontal rule
64

Furthermore.


Posted by: teofilo | Link to this comment | 03-19-24 3:17 PM
horizontal rule
65

Maybe they could do like Hot or Not, except Diseased or Not for genitals. Not medical advice, just crowdsourced wisdom.


Posted by: Moby Hick | Link to this comment | 03-19-24 3:41 PM
horizontal rule
66

The Boeing thing still really bothers me. I usually fly Southwest.


Posted by: Moby Hick | Link to this comment | 03-19-24 4:26 PM
horizontal rule
67

I can definitely see the appeal of using an app instead of having to wear a jimmy hat. Is being able to do that worth trading away pictures of your junk? Maybe!


Posted by: Spike | Link to this comment | 03-19-24 4:28 PM
horizontal rule
68

The app that does for syphilis what Ron DeSantis did for measles.


Posted by: Moby Hick | Link to this comment | 03-19-24 4:38 PM
horizontal rule
69

One of my kids' docs just told us she's switching portal/billing/messaging services because the current one informed her that AI will soon be training on all the content.


Posted by: heebie | Link to this comment | 03-19-24 5:43 PM
horizontal rule
70

34- I recently read that you can change a setting and it will switch back to how it used to behave. Something like "verbatim results" under the settings menu.
I forgot the exact procedure, maybe I'll google it.


Posted by: SP | Link to this comment | 03-19-24 6:04 PM
horizontal rule
71

My son says all the kids use Duck Duck Goose or something.


Posted by: Moby Hick | Link to this comment | 03-19-24 6:21 PM
horizontal rule
72

||
https://www.reuters.com/world/asia-pacific/armenias-pm-says-he-must-return-disputed-areas-azerbaijan-or-face-war-tass-2024-03-19/
|>


Posted by: Mossy Character | Link to this comment | 03-19-24 6:42 PM
horizontal rule
73

Belatedly: the Dan Luu link in 33 is excellent. I don't know that I agree with every single one of his points, but the overall piece is very strong.

And the phenomena he's describing are utterly familiar to me as both the 'accidental techie' in most of my workplaces over the past 30 years and as a librarian. People who don't think Google search results have gotten appreciably worse in recent years are (in my experience) bad -- that is to say, entirely average -- at using the web in general and at distinguishing between legitimate vs scammy results in particular. Google has gotten much worse, and in ways that drive ordinary people to scams and unhelpful results much more effectively.


Posted by: Witt | Link to this comment | 03-19-24 6:51 PM
horizontal rule
74

Is there anything to the idea I've seen that chatgpt and others are ridiculously subsidized with VC money, and when that's gone costs for the user (paid somehow) are going to go way up?


Posted by: CharleyCarp | Link to this comment | 03-19-24 7:47 PM
horizontal rule
75

70: oh I'm sure. I resent it, though.


Posted by: lurid keyaki | Link to this comment | 03-19-24 7:58 PM
horizontal rule
76

72: Very strange headline, but ends with the elephant in the room:

Armenia... is nominally a Russian ally though its relations with Moscow have deteriorated in recent months over what Yerevan says is Russia's failure to protect it from Azerbaijan.
As a result, Armenia has pivoted its foreign policy towards the West, to Moscow's chagrin, with senior officials suggesting it might one day apply for European Union membership.
In a statement posted on Tuesday on the Telegram messaging app, Russian Foreign Ministry spokeswoman Maria Zakharova suggested Yerevan's deepening ties with the West were the reason for Armenia having to make concessions to Azerbaijan.

Am I avoiding a ton of shit curious enough to find the exact wording? Yes! I won't link it, but it's three sentences and the middle one is machine-translated as "Please note: this statement [by Pashinyan] is in no way connected with Russia." The Reuters summary is accurate.


Posted by: lurid keyaki | Link to this comment | 03-19-24 8:25 PM
horizontal rule
77

Sorry, had to do it


Posted by: You are always | Link to this comment | 03-19-24 11:23 PM
horizontal rule
78

Understandable.


Posted by: Georgia | Link to this comment | 03-20-24 1:04 AM
horizontal rule
79

I don't see that it's strange, apart from being in the "Asia-Pacific"


Posted by: MC | Link to this comment | 03-20-24 3:09 AM
horizontal rule
80

Dear lord https://x.com/presidentaz/status/1769998494196965516?s=46&t=nbIfRG4OrIZbaPkDOwkgxQ


Posted by: Barry Freed | Link to this comment | 03-20-24 3:35 AM
horizontal rule
81

||
Did Tommy Hilfiger go for a vaguely Confederate logo deliberately?
|>


Posted by: MC | Link to this comment | 03-20-24 4:00 AM
horizontal rule
82

Not impossible, but I doubt it -- a graphic reference to the national Confederate flag, as opposed to the battle flag, is a deep cut that wouldn't even work as much of a dog whistle, especially decades ago when the logo was chosen. I would be really, really surprised if that were the case.


Posted by: LizardBreath | Link to this comment | 03-20-24 4:30 AM
horizontal rule
83

I doubt it. Red, white, and blue are in lots of flags. Only the battle flag (with the crossed blue bars with stars) is really ever used by modern assholes.


Posted by: Moby Hick | Link to this comment | 03-20-24 4:32 AM
horizontal rule
84

Too slow.


Posted by: Moby Hick | Link to this comment | 03-20-24 4:33 AM
horizontal rule
85

The political Confederate flag would look stupid painted on a muscle car.


Posted by: Moby Hick | Link to this comment | 03-20-24 4:41 AM
horizontal rule
86

72 et seq.: He's going to get war anyway. I'm a little surprised that AZ hasn't tried already, but it's been less than a year since they took the rest of Karabakh, so I guess they're re-stocking their drone supplies and working to replace however many men they lost.

The geography isn't going to change and the correlation of population/wealth isn't going to change. Southern Armenia is sparsely populated and connected to the northern part of the country by very few roads. Maybe the Armenians can hold these; maybe a small amount of Western aid can help them do that. (I think I read something about France opening a consulate in southern Armenia, and maybe there would be French forces as well? I dunno.) Large-scale Western military assistance is off the table as long as Ukraine is still fighting. I don't think the Armenians have enough armed forces to threaten a counterattack on Nakhichevan, but I could be wrong. At any rate, the northern approaches there are flat by local standards, could be an AZ vulnerability.

There are more Azeris in Iran than in Azerbaijan, so I guess we should ask Ogged what Tehran thinks about an expansive and militarily aggressive Azerbaijan.


Posted by: Doug | Link to this comment | 03-20-24 4:47 AM
horizontal rule
87

I guess the Armenians won immigration since they are in California.


Posted by: Moby Hick | Link to this comment | 03-20-24 5:12 AM
horizontal rule
88

62. Tim Berners-Lee seems to agree with you.


Posted by: chris y | Link to this comment | 03-20-24 5:33 AM
horizontal rule
89

Sorry, bad link.


Posted by: chris y | Link to this comment | 03-20-24 5:34 AM
horizontal rule
90

33, 73: Dan Luu is fantastic. I like everything Lucene-based that I've touched; his offhand comment there seems right. I didn't know that he had written longer-form essays -- more to read there.


Posted by: lw | Link to this comment | 03-20-24 5:54 AM
horizontal rule
91

||
Opus is magnificent. I think literally perfect, and perhaps a new (to me, at least) genre of film to boot.
|>


Posted by: MC | Link to this comment | 03-20-24 6:17 AM
horizontal rule
92

The penguin?


Posted by: Moby Hick | Link to this comment | 03-20-24 6:23 AM
horizontal rule
93

Musicians aren't actually penguins, Moby. They just look that way because of the tailcoats.


Posted by: Mossy Character | Link to this comment | 03-20-24 6:40 AM
horizontal rule
94

https://www.rogerebert.com/reviews/ryuichi-sakamoto--opus-2024


Posted by: Mossy Character | Link to this comment | 03-20-24 6:42 AM
horizontal rule
95

And extremely worth your while to see in theater.


Posted by: Mossy Character | Link to this comment | 03-20-24 6:43 AM
horizontal rule
96

The last movie I saw in the theater was awful so I'm not eager to go back.


Posted by: Moby Hick | Link to this comment | 03-20-24 6:53 AM
horizontal rule
97

Seeing an awful movie in theater is like getting thrown by a horse. If you don't get back on you'll never experience the perfect joy of driving your lance through a fleeing orc.


Posted by: Mossy Character | Link to this comment | 03-20-24 7:03 AM
horizontal rule
98

Speaking of quackery, is this real, chat? It seems fake, but there's some collaboration with the state public health department, and this JAMA letter with results.


Posted by: Minivet | Link to this comment | 03-20-24 8:11 AM
horizontal rule
99

I tutor a young person who has one of those service dogs! Her parents are pretty on the ball and science minded, so I doubt they'd be on board for something that was sketchy.

Honestly, the dog would be worth it for the emotional support alone. She is incredibly bonded to him.


Posted by: Witt | Link to this comment | 03-20-24 8:22 AM
horizontal rule
100

Dogs are great.


Posted by: Moby Hick | Link to this comment | 03-20-24 8:25 AM
horizontal rule
101

Pretty sure cats can smell some upper respiratory infections. Sensitivity might vary from cat to cat though.


Posted by: lw | Link to this comment | 03-20-24 8:29 AM
horizontal rule
102

Right. Everyone has heard of a cat scan.


Posted by: Moby Hick | Link to this comment | 03-20-24 8:34 AM
horizontal rule
103

Calmara has responded to the negative press, also on LinkedIn.

Calmara is not a silver bullet and nor are we trying to say that we are. But to anyone who wants to know more: our AI functions like how a visit to a doctor is like. Diseases like syphilis or herpes have very characteristic visual presentation, and our AI can detect them very well. And....our AI has seen more cases than any doctor possibly can. 😉

Somehow I don't think they have a US regulatory specialist on staff or contract.


Posted by: Minivet | Link to this comment | 03-20-24 8:45 AM
horizontal rule
104

Maybe they could train a dog to review people's junk?


Posted by: Moby Hick | Link to this comment | 03-20-24 8:59 AM
horizontal rule
105

They sniff crotches with no training.


Posted by: Moby Hick | Link to this comment | 03-20-24 9:02 AM
horizontal rule
106

91 I really want to see that. I met him around 2013-14 when there was a Tsai Ming-liang retro running at the Museum of the Moving Image and I saw him at almost every screening. He subsequently did some music for one of Tsai's films. Super cool guy and an incredible composer.


Posted by: Barry Freed | Link to this comment | 03-20-24 9:58 AM
horizontal rule
107

73: I think it's true that Google has gotten worse, but I think it's underrated that the web has gotten worse and Google has largely just failed to keep pace with the firehose of shit it's indexing.


Posted by: Yawnoc | Link to this comment | 03-20-24 10:03 AM
horizontal rule
108

107: There's a chicken-and-egg thing though, because a lot (most?) of that firehose of shit was crafted to vacuum up Google traffic specifically.


Posted by: Minivet | Link to this comment | 03-20-24 10:12 AM
horizontal rule
109

79: ISTM normally this gets reported as "Azeri PM says Armenia must return disputed areas or face war" -- it takes a while for the article to get totally clear on the 5 W's of the war threat, since there are so few direct quotes from Baku. The effect is a bit like Pashinyan holding up a hand puppet, as I assume he did.


Posted by: lurid keyaki | Link to this comment | 03-20-24 10:41 AM
horizontal rule
110

Arizona (AZ) declaring war on Armenia seems like taking California-bashing maybe a step too far, but I would say that.


Posted by: lurid keyaki | Link to this comment | 03-20-24 10:43 AM
horizontal rule
111

That's a long way to go for Colorado river water.


Posted by: fake accent | Link to this comment | 03-20-24 12:38 PM
horizontal rule
112

109: Oh.
86: You think Arizona's, er, Azerbaijan's objectives are that expansive?


Posted by: Mossy Character | Link to this comment | 03-20-24 4:01 PM
horizontal rule
113

This is a pretty good rant about some use of AI to generate poor quality content at large scale: https://www.theintrinsicperspective.com/p/here-lies-the-internet-murdered-by


Posted by: NickS | Link to this comment | 03-20-24 8:00 PM
horizontal rule
114

112: A land bridge to Nakhichevan and however much of southern Armenia comes along with that? Yes.

State propaganda about "Western Azerbaijan" and working to pretend that Armenian monuments throughout the region are Albanian (not the Hoxha kind of Albanian) are signals that more war is on its way. Aliyev's legitimacy is based on oil money and conquest, and those are seldom areas where the leader says "ok, we've got enough." Overreach is the classic failure mode there, but how much reaching will happen before the over kicks in?

Neither the West nor the Armenians probably have the connections to do it, but this might be a good time to foment unrest among Azeris in Iran. If Tehran thinks their provinces are going to be next, they're less likely to ignore, or support with weapons sales, an Aliyev land grab along their border.


Posted by: Doug | Link to this comment | 03-21-24 1:28 AM
horizontal rule
115

There's another kind of Albanian?
Oil-wise, some bits and pieces.
114.2. Point. But OTOH Azerbaijani* policy to date has as far as I've noticed been exemplary** in its patience and methodicalness***. Threats could be consistent with limited aims (say 3 small corridors, 1 big) to the exclaves, and a permanent peace (for which they seem to have a willing counterparty in Pashinyan).
Invading Armenia proper would be different in that it would violate a recognized border and trigger the CSTO, and Aliyev has to know it. There's also the possibility, unlike in Ukraine, of direct Western (let's be honest, US) intervention; and IDK but would assume such intervention could stop Azerbaijan dead.
(Selfishly, of course, everything says throw Armenia under the bus: call Russia's CSTO bluff (or open a second front that would help Ukraine); bring Armenia into the US orbit, with bases to discomfit Russia and Iran, and provide alternatives to Turkey; open pipeline routes that bypass Georgia.)
*Is there meaningfully an Azerbaijan apart from Aliyev? IDK, but the (AFAIK) smoothness of the succession suggests some successful institution-building.
**In a non-normative Machiavellian sense.
***So clunky. There must be something better.


Posted by: Mossy Character | Link to this comment | 03-21-24 5:14 AM
horizontal rule
116

There's another kind of Albanian?

There is! Caucasian Albanian, which is not all that helpful as a descriptor, but there we go. There's also, historically speaking at least, Caucasian Iberia, just so that things don't ever get simple.

Is there meaningfully an Azerbaijan apart from Aliyev?

TIL that his wife is vice president, so I'm guessing no, not really. Or that whatever is, is so opaque that I would have to pay a lot more attention to have any idea. On the other hand, she's apparently been VP since 2017, so it's not like I'm paying a super lot of attention anyway. Just opining, in time-honored blog fashion, based on old expertise, general political intuition, and eyeballing maps.

Yes, Aliyev has been more patient and methodical than I would have expected, given petro-dictatorship and nepo dude. And he's only 62, so cognitive decline is probably still a good long way off.

violate a recognized border and trigger the CSTO

I think the CSTO is a dead letter, and the Kremlin has more in common with AZ anyway. Could be wrong of course.

Trying to occupy the Aras valley and establish a land corridor to Nakhichevan (the other three exclaves are barely worth mentioning, as is the Armenian exclave that nobody seems to mention) would be tantamount to invading Armenia proper and/or a pretext for same.


Posted by: Doug | Link to this comment | 03-21-24 6:23 AM
horizontal rule
117

116 last: I was unclear. I meant threats as part of a negotiating strategy with limited objectives.


Posted by: Mossy Character | Link to this comment | 03-21-24 6:36 AM
horizontal rule
118

113: Thanks. That was fucking horrifying. I've been blasé about AI's effect on my work despite being a writer partly because I see a lot of shitty writing generated by actual humans, but that's a good story about just how bad it can get, as is the Futurism article it links to.


Posted by: Cyrus | Link to this comment | 03-21-24 9:05 AM
horizontal rule
119

The Caucuses and the Balkans have a lot in common, including Albanians I guess.


Posted by: Spike | Link to this comment | 03-21-24 9:07 PM
horizontal rule
120

The Balkans have had a population of US Midwesterners since NATO intervention in 1995, and a similar population is also found in the Iowa Caucasus.


Posted by: Ajay | Link to this comment | 03-21-24 10:48 PM
horizontal rule
121

Iowans or good ones?


Posted by: Moby Hick | Link to this comment | 03-22-24 4:48 AM
horizontal rule
122

US just called for an immediate ceasefire in Gaza. Russia and China vetoed it.
https://apnews.com/article/united-nations-us-vote-gaza-ceasefire-resolution-f6453803b3eacc9fbaa2ce5a025e2a94


Posted by: ajay | Link to this comment | 03-22-24 7:01 AM
horizontal rule
123

It fell short of demanding an immediate ceasefire:
"(The Security Council) Determines the imperative of an immediate and sustained ceasefire to protect civilians on all sides, allow for the delivery of essential humanitarian assistance, and alleviate humanitarian suffering, and towards that end unequivocally supports ongoing international diplomatic efforts to secure such a ceasefire in connection with the release of all remaining hostages..."


Posted by: Barry Freed | Link to this comment | 03-22-24 8:17 AM
horizontal rule
124

I guess after the recess, we get another vote on who is Speaker of the House.


Posted by: Moby Hick | Link to this comment | 03-22-24 8:23 AM
horizontal rule
125

123: true. It said "it is imperative that this happens immediately and we support the efforts to make it happen immediately" which is different.


Posted by: ajay | Link to this comment | 03-22-24 8:28 AM
horizontal rule
126

Calmara has presumably talked to someone with regulatory knowledge as they have changed tack to insist they are "a lifestyle product, not a medical app".


Posted by: Minivet | Link to this comment | 03-22-24 9:29 AM
horizontal rule
127

Sending pictures of your penis into web apps is a lifestyle.


Posted by: Moby Hick | Link to this comment | 03-22-24 9:38 AM
horizontal rule
128

Unfogged was ahead of the curve.


Posted by: teofilo | Link to this comment | 03-22-24 9:48 AM
horizontal rule
129

Too much curve is a medical thing.


Posted by: Moby Hick | Link to this comment | 03-22-24 9:50 AM
horizontal rule
130

There was an ad on Reddit.


Posted by: Moby Hick | Link to this comment | 03-22-24 9:53 AM
horizontal rule
131

If I want to do roleplay in the bedroom revolving around having tech that doesn't actually exist, there are much more creative options. Warhammer 40k comes to mind.


Posted by: Minivet | Link to this comment | 03-22-24 10:02 AM
horizontal rule
132

"Of course you can guess what happens next."

"He reconsecrates the Terminator armour?"

"Don't be fatuous, Jeffrey."


Posted by: | Link to this comment | 03-23-24 12:10 AM
horizontal rule