
Large Language Models Are Bad Dice Players: LLMs Struggle to Generate Random Numbers from Statistical Distributions





Computer Science > Computation and Language

[Submitted on 8 Jan 2026 (v1), last revised 21 Apr 2026 (this version, v3)]

Large Language Models Are Bad Dice Players: LLMs Struggle to Generate Random Numbers from Statistical Distributions

Minda Zhao, Yilun Du, Mengyu Wang

As large language models (LLMs) transition from chat interfaces to integral components of stochastic pipelines and of systems approaching general intelligence, the ability to faithfully sample from specified probability distributions has become a functional requirement rather than a theoretical curiosity. We present the first large-scale, statistically powered audit of native probabilistic sampling in frontier LLMs, benchmarking 11 models across 15 distributions. To disentangle failure modes, we employ a dual-protocol design: Batch Generation, in which a model produces N = 1000 samples within one response, and Independent Requests, comprising N = 1000 stateless calls. We observe a sharp protocol asymmetry: batch generation achieves only modest statistical validity, with a 7% median pass rate, while independent requests collapse almost entirely, with 10 of 11 models passing none of the distributions. Beyond this asymmetry, we show that sampling fidelity degrades monotonically with distributional complexity and worsens as the sampling horizon N increases. Finally, we demonstrate how these failures propagate into downstream real-world tasks, introducing systematic biases: models fail to enforce uniform answer-position constraints in multiple-choice question generation and systematically violate demographic targets in attribute-constrained text-to-image prompt synthesis. These findings indicate that current LLMs lack a functional internal sampler, and that applications requiring statistical guarantees need external sampling tools.
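The pass/fail criterion in an audit like this can be sketched with a standard Pearson chi-square goodness-of-fit test. The following is a minimal illustration in Python, assuming a fair-die Uniform{1..6} target: the function names, the fixed critical value, and the mode-collapsed "model" that answers every stateless request with its single most likely token are our illustrative assumptions, not the paper's actual harness.

```python
import random
from collections import Counter

def chi_square_uniform(samples, categories):
    """Pearson chi-square statistic against a uniform hypothesis over `categories`."""
    n = len(samples)
    categories = list(categories)
    expected = n / len(categories)  # equal expected count per category under H0
    counts = Counter(samples)
    return sum((counts.get(c, 0) - expected) ** 2 / expected for c in categories)

def passes_uniformity(samples, categories, critical_value):
    """Pass iff we fail to reject uniformity (statistic below the critical value)."""
    return chi_square_uniform(samples, categories) < critical_value

CRIT_DF5_A05 = 11.07  # chi-square critical value, df = 5, alpha = 0.05

# A faithful external sampler: 1000 draws of a fair six-sided die.
random.seed(0)
faithful = [random.randint(1, 6) for _ in range(1000)]

# A caricature of the "independent requests" failure mode: every stateless
# call returns the same most-likely answer, so the distribution collapses.
degenerate = [4] * 1000

print(passes_uniformity(faithful, range(1, 7), CRIT_DF5_A05))
print(passes_uniformity(degenerate, range(1, 7), CRIT_DF5_A05))  # False
```

The degenerate sampler fails by an enormous margin (its statistic is on the order of 5000), which is the shape of the "10 of 11 models pass none of the distributions" result; a genuinely random sampler passes with probability 1 − α. The same test applies directly to the paper's downstream MCQ task, with answer positions A–D as the categories.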
Comments: Accepted to ACL 2026 (Main Conference)
Subjects: Computation and Language (cs.CL); Artificial Intelligence (cs.AI); Machine Learning (stat.ML)
Cite as: arXiv:2601.05414 [cs.CL] (or arXiv:2601.05414v3 [cs.CL] for this version), https://doi.org/10.48550/arXiv.2601.05414

Submission history
From: Minda Zhao
[v1] Thu, 8 Jan 2026 22:33:12 UTC (318 KB)
[v2] Mon, 20 Apr 2026 01:22:06 UTC (457 KB)
[v3] Tue, 21 Apr 2026 13:09:33 UTC (458 KB)
    Article Info
    Source
    arXiv AI
    Category
    ◬ AI & Machine Learning
    Published
    Apr 27, 2026
    Archived
    Apr 27, 2026
    Full Text
    ✓ Saved locally