Excellent question, and one I’ve thought about (as a philosopher and ex-ethics consultant) and raised here.
One might suggest that an AI is like a gun: ‘we’ve’ always argued that those who create guns and bombs should not be held responsible for how they’re used. I’m not sure about that, but regardless, an important difference is that a bomb has no agency. Well, I suppose not until now.
So it’s more like the question of whether we should hold parents responsible for their kids’ behaviour.
I’m leaning toward yes, especially since most people, parents and AI techies alike, create children and AI agents without any qualifications whatsoever in child or AI development, and that includes MORAL development.