The fluency-based semantic network of LLMs differs from humans

Modern Large Language Models (LLMs) exhibit complexity and granularity comparable to humans in natural language processing, challenging the boundaries between humans and machines in language understanding and creativity. However, whether the semantic network of LLMs is similar to that of humans remains unclear. We examined representative closed-source LLMs (GPT-3.5-Turbo and GPT-4) and open-source LLMs (LLaMA-2-70B, LLaMA-3-8B, and LLaMA-3-70B) using semantic fluency tasks, which are widely used to study the structure of semantic networks in humans.
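The article does not include code, but a minimal sketch of how a semantic fluency task might be posed to a chat model is shown below. It assumes the OpenAI Python SDK; the "animals" category, the prompt wording, and the parsing step are illustrative assumptions, not the authors' exact protocol.

```python
# Minimal sketch (assumptions): posing a semantic fluency task to a chat model.
# The category, prompt wording, and parsing are illustrative, not the authors'
# exact protocol.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = (
    "You are participating in a semantic fluency task. "
    "Name as many animals as you can, separated by commas. "
    "Do not repeat items and do not add explanations."
)

response = client.chat.completions.create(
    model="gpt-4",  # or "gpt-3.5-turbo"
    messages=[{"role": "user", "content": PROMPT}],
    temperature=1.0,
)

# Parse the comma-separated response into a fluency list of unique items.
raw = response.choices[0].message.content
fluency_list = []
for item in raw.split(","):
    word = item.strip().lower()
    if word and word not in fluency_list:
        fluency_list.append(word)

print(fluency_list)
```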

To make the semantic networks of humans and LLMs more comparable, we took a novel approach: role-playing was used to generate multiple agents, which is equivalent to recruiting multiple LLM participants. The results indicate that, compared to humans, the semantic network of LLMs has poorer interconnectivity, local association organization, and flexibility. This suggests that LLMs search the semantic space less efficiently and think more rigidly, which may in turn affect their performance in creative writing and reasoning.
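For readers curious how properties such as interconnectivity and local association organization can be quantified, here is a minimal sketch of one common approach: build an undirected network in which consecutively named items in each fluency list are linked, then compute standard graph metrics with networkx. The fluency lists and the adjacency-based edge rule are assumptions for illustration, not the authors' estimation method.

```python
# Minimal sketch (assumption): estimate a semantic network from fluency lists
# by linking consecutively named items, then compare standard graph metrics.
# This is a generic illustration, not the authors' exact estimation method.
import networkx as nx

# Hypothetical fluency lists, one per participant or LLM agent.
fluency_lists = [
    ["dog", "cat", "lion", "tiger", "shark", "whale"],
    ["cat", "dog", "wolf", "fox", "eagle", "hawk"],
    ["whale", "dolphin", "shark", "eagle", "sparrow"],
]

G = nx.Graph()
for words in fluency_lists:
    for a, b in zip(words, words[1:]):
        G.add_edge(a, b)  # adjacent items are assumed to be semantically related

# Local association organization: how tightly a word's neighbours interconnect.
clustering = nx.average_clustering(G)

# Interconnectivity / search efficiency: shorter paths mean the semantic space
# is easier to traverse. Restrict to the largest connected component so the
# average shortest path length is defined.
largest_cc = max(nx.connected_components(G), key=len)
aspl = nx.average_shortest_path_length(G.subgraph(largest_cc))

print(f"average clustering coefficient: {clustering:.3f}")
print(f"average shortest path length:   {aspl:.3f}")
```

Lower clustering and longer path lengths in an LLM-derived network, relative to a human-derived one, would correspond to the poorer local organization and interconnectivity described above.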
