Turns out people don't like it when they suspect a machine's talking to them

Also this just in, people not that into insincere messages


You might find AI technology helpful when chatting with others, but this latest research shows people will think less of someone they suspect is leaning on such tools.

Here's how the study, led by folks at America's Cornell University, went down. The team recruited participants, split them into 219 pairs, and asked them to discuss policy over text messaging. In some pairs, both participants were told to use only suggestions from Google's Smart Reply, which follows the topic of a conversation and recommends things to say; in other pairs, neither participant could use the tool; and in the rest, just one of the two was told to use Smart Reply.
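
To give a flavor of what a Smart Reply-style feature does, here's a minimal, purely illustrative sketch in Python. It ranks a few canned replies by keyword overlap with the incoming message; Google's actual system uses neural models to rank candidates, and every name and reply below is made up for illustration rather than taken from the study.

import re

# Toy Smart Reply-style suggester: score a handful of canned responses
# against the last message received and offer the best-matching ones.
# Real systems rank candidates with machine-learning models, not keywords.
CANNED_REPLIES = {
    "Sounds good to me": {"plan", "good", "sounds", "schedule"},
    "I agree with that": {"think", "agree", "point", "right"},
    "Can you say more about that?": {"because", "reason", "why"},
}

def suggest_replies(last_message: str, k: int = 2) -> list[str]:
    # Tokenize the incoming message into lowercase words.
    words = set(re.findall(r"[a-z']+", last_message.lower()))
    # Rank canned replies by how many of their trigger words appear.
    ranked = sorted(CANNED_REPLIES.items(),
                    key=lambda item: len(words & item[1]),
                    reverse=True)
    return [reply for reply, _ in ranked[:k]]

print(suggest_replies("Why do you think this policy is fair?"))
# ['I agree with that', 'Can you say more about that?']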

As a result, one in seven messages in the experiment was sent using auto-generated text, and these appeared to make conversations more efficient and more positive in tone. But if a participant believed the person they were talking to was replying with boilerplate responses, they rated that person as less cooperative and felt less warmly toward them.

Malte Jung, co-author of the research published in Scientific Reports and an associate professor of information science at Cornell, said this could be because people tend to trust technology less than other humans, or perceive its use in conversations as inauthentic.

"One explanation is that people might project their negative views of AI on the person they suspect is using it," he told The Register.

"Another explanation could be that suspecting someone of using AI to generate their responses might lead to a perception of that person as less caring, genuine or authentic. For example, a poem from a lover is likely received less warmly if that poem was generated by ChatGPT."

In a second experiment, 291 pairs of people were again asked to discuss a policy issue. This time, however, the pairs were split into groups: some had to type their own responses manually, some could use Google's default Smart Reply, and some had access to a tool that generated text with either a positive or a negative tone.

Conversations conducted with Smart Reply or the positive-tone generator were perceived as more upbeat than those involving no AI tools or auto-generated negative responses. The researchers believe this shows there are some benefits to communicating using AI in certain situations, such as more transactional or professional scenarios.

"We asked crowdworkers to discuss policies about the unfair rejection of work. In such a work-related context, a more friendly positive tone has mainly positive consequences as positive language draws people closer to each other," Jung told us.

"However, in another context the same language could have a different and even negative impact. For example, a person sharing sad news about a death in the family might not appreciate a cheerful and happy response and will likely be put off by that. In other words, what 'positive' and 'negative' means varies dramatically with context."

Human communication is going to be shaped by AI as the technology becomes increasingly accessible. Microsoft and Google, for example, have both announced tools aimed at helping users automatically write emails or documents.

"While AI might be able to help you write, it's altering your language in ways you might not expect, especially by making you sound more positive. This suggests that by using text-generating AI, you're sacrificing some of your own personal voice," Jess Hohenstein, lead author of the study and a research scientist at Cornell University, warned this month.  

Hohenstein told us she would "love to see more transparency around these tools," including some way of disclosing when people are using them. "Taking steps towards more openness and transparency around LLMs could potentially help alleviate some of that general suspicion we saw towards the AI." ®
