Authors: Shek Lun Leung & Alexander Näslund
Affiliation: KTH Royal Institute of Technology
Date: December 2025
This paper investigates the "simulation-reality gap" in contemporary humanoid robots powered by Large Language Models (LLMs). We argue that while behavioral mimicry is reaching unprecedented levels of fidelity, a fundamental distinction persists between sophisticated simulation and authentic philosophical autonomy. This work evaluates the ethical implications of this convergence, specifically focusing on human accountability and AI welfare.
- Full Research Paper (PDF): The complete 15-page paper including the "History of Robotics" appendix and full citations.
- Source Files: LaTeX source code for the manuscript.
- The Simulation-Reality Gap: Analysis of why mimicry != autonomy.
- AI Welfare & Moral Patienthood: Proposing evaluative metrics for non-biological sentience.
- Accountability Frameworks: Defining human responsibility in the deployment of autonomous agents.
A historical overview of the field, from Asimov's Three Laws and the 1955 Dartmouth Proposal to the current era of deep learning and humanoid embodiment, is included in the final section of the paper.
@techreport{LeungNaslund2025,
  title={Autonomy in AI: Exploring Subjectivity in Humanoid AI},
  author={Leung, Shek Lun and N{\"a}slund, Alexander},
  year={2025},
  institution={KTH Royal Institute of Technology}
}

This paper was typeset in LaTeX. The modular source code (using a main.tex and separate chapter files) can be found in the /src directory.
This research was developed under the guidance of Anders Hedman during the 'Artificial Intelligence in Society' course at KTH. We are grateful for his insights and feedback on the intersection of ethical perspectives, superintelligence, and apocalyptic AI.