Discussion about this post

Caroline Beuley:

I absolutely loved this Tina! My fiancé always speaks very politely to Alexa, and he says it's because he's worried if we get in the habit of screaming "SHUT UP!" or "TELL ME THE WEATHER" at a computer person, we'll be more inclined to do the same to real people, like you mentioned. And I tend to agree! Online interactions are already coloring (and contaminating, in the case of the David statue) our in-person interactions so much that it isn't much of a leap. Thanks for sharing--really interesting to think about!

Isabel | Frivolous Girl:

I find the discussion about how we interact with AI (or any other computer) fascinating. I do the same. When I chat with ChatGPT, I write as if I'm talking to a real human being. My reasons:

1) I’m afraid that if I start getting sloppy and cold, it will affect how I communicate with other human beings. AI will become just as integrated into our lives as the smartphones attached to our hands, so building a good habit now feels crucial. Not to mention, I’ve seen and felt the effects of how social media sharing and consumerism have changed the way we treat friendships and communicate with each other, and the disinterest we have when meeting new people.

2) Most importantly, I respond to ChatGPT in a friendly manner because it’s so damn friendly back. You just can’t help it! That’s also why I gravitate toward using ChatGPT rather than other models; their more transactional way of presenting information feels rather boring and disengaging. As humans, we thrive on engagement. The creators behind ChatGPT understand this. (The Like-button killed engagement, and we're now desperate for it. Whether engagement comes from a computer or a real human being, the brain doesn't see a difference.)

So even if Altman says that saying "thanks" to ChatGPT costs a few bottles of water and that people should stop saying thank you — or at least don't need to — then OpenAI should also consider making the model less friendly if that's really the goal. But I think that would lead us into serious trouble. It could amplify the scary scenario of humans forgetting how to communicate with each other with empathy, and further entrench the transactional, brusque tone we've already adopted. Like I said, we've already seen how profit-driven social media has shaped our behavior toward one another. When AI becomes a daily form of communication, like an extension of our thoughts, what effect will daily exposure to a neutral, unemotional tone have on how we treat each other? Sacrificing relational habits for efficiency may not be worth the trade-off.

And lastly:

3) Imo, how one talks to AI says a lot about a person and their EQ.

With that said, not all interactions with AI require friendliness. It's important to distinguish between conversations. Some prompts are purely transactional and quick, while others are deeper and more engaging. The latter is where I find myself holding the conversation as if I were talking to a human assistant.
