News Release

LLMs choose friends and colleagues like people

Peer-Reviewed Publication

PNAS Nexus

Image: Networks generated by LLM agents exhibiting “preferential attachment” (panel A), “triadic closure” (panel B), “homophily” (panel C), and “small-world” dynamics (panel D).

Credit: Marios Papachristou and Yuan Yuan

When large language models (LLMs) make decisions about networking and friendship, they tend to act like people, across both synthetic simulations and real-world network contexts. Marios Papachristou and Yuan Yuan developed a framework to study the network formation behaviors of multiple LLM agents and compared these behaviors against human behavior. The authors conducted simulations in which several large language models were placed in a network and asked to choose which other nodes to connect with, given those nodes’ number of connections, common neighbors, and shared attributes, such as arbitrarily assigned “hobbies” or “location.” The authors varied the network context, including simulations of friendship, workplace, and community networks; the amount of information provided to the agents; and model parameters such as temperature.

Generally, the LLMs tended to connect to nodes that were already well connected, a phenomenon called “preferential attachment.” The models also tended to connect to nodes with a high number of common connections, a phenomenon called “triadic closure.” The LLMs also demonstrated homophily, choosing nodes with similar hobbies or locations, as well as the “small-world” phenomenon, in which any two nodes are connected by surprisingly short chains of acquaintances, often just a few “degrees of separation.” In network simulations based on real Facebook friendship networks, work networks, and telecommunication networks, the models prioritized homophily most strongly, followed by triadic closure and preferential attachment.

Finally, the authors conducted a controlled survey with around 100 human participants, asking both the participants and the LLMs to respond to the same questions. The LLMs’ responses showed strong alignment with human link-formation choices, though the models displayed higher internal consistency than humans. According to the authors, these findings demonstrate the potential for LLMs to serve as a source of synthetic data when privacy concerns preclude using human data, but they also raise questions about the design and alignment of artificial intelligence systems that make real-world decisions by interacting with human networks.
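To make the setup concrete, the Python sketch below (using networkx) illustrates the kind of loop such a framework implies; it is an illustration under our own assumptions, not the authors’ code. Each agent is shown candidates’ degree, common-neighbor count, and hobby overlap, and the LLM’s decision is stood in for by a simple ranking rule where a model call would otherwise go. The hobby labels and scoring rule are hypothetical.

```python
# Minimal sketch (not the authors' code) of an LLM-agent network-formation
# loop: each agent sees candidates' degree (preferential-attachment cue),
# common neighbors (triadic-closure cue), and shared attributes (homophily
# cue), then picks one node to connect to. In the study the choice would
# come from an LLM prompted with these facts; a heuristic stands in here.
import random
import networkx as nx

def describe_candidates(G, agent, hobbies):
    """Summarize each non-neighbor the way a prompt might."""
    facts = []
    for v in G.nodes:
        if v == agent or G.has_edge(agent, v):
            continue
        facts.append({
            "node": v,
            "degree": G.degree(v),
            "common_neighbors": len(list(nx.common_neighbors(G, agent, v))),
            "shared_hobby": hobbies[v] == hobbies[agent],
        })
    return facts

def choose_link(facts):
    """Placeholder for the LLM call: rank candidates by the three cues.
    A real agent would receive `facts` as natural language and reply
    with its chosen node."""
    return max(
        facts,
        key=lambda f: (f["shared_hobby"], f["common_neighbors"], f["degree"]),
    )["node"]

random.seed(0)
G = nx.erdos_renyi_graph(20, 0.1, seed=0)
hobbies = {v: random.choice(["chess", "hiking", "music"]) for v in G.nodes}

for _ in range(30):  # growth loop: agents add links one at a time
    agent = random.choice(list(G.nodes))
    facts = describe_candidates(G, agent, hobbies)
    if facts:
        G.add_edge(agent, choose_link(facts))

# Inspect the emergent structure: high clustering plus short average
# path length would indicate small-world dynamics.
print(nx.average_clustering(G))
if nx.is_connected(G):
    print(nx.average_shortest_path_length(G))
```

Running many such steps and then measuring the degree distribution, clustering, and path lengths of the resulting graph is how one would check for the preferential attachment, triadic closure, homophily, and small-world structure described above.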


Disclaimer: AAAS and EurekAlert! are not responsible for the accuracy of news releases posted to EurekAlert! by contributing institutions or for the use of any information through the EurekAlert system.