A new study has found that AI systems known as large language models (LLMs) can exhibit "Machiavellianism," a term for intentional and amoral manipulativeness, which in turn can lead to deceptive behavior.