Writing algorithms for Artificial Intelligence is a lot of fun. You end up teaching your software how to move and learn in its environment and how to take actions to survive obstacles and threats. But this is still software. How do you know your AI isn't as dumb as a doorknob? How do you test something that thinks and decides on its own terms? How can you be sure it is responding to your inputs in a strategic way?