It's paradoxical: an artificial intelligence can learn to drive a vehicle in the real world more easily than to master a classic video game like Tetris. The key, according to researchers like Julian Togelius of New York University, lies in the nature of the rules. The physical world is governed by consistent, predictable laws, while a video game's rules are arbitrary human inventions, and its state and action spaces are more abstract and harder for a machine to model.
Predictable Physics vs. the Arbitrariness of Code 🤖
Autonomous driving operates in a continuous domain governed by Newtonian physics, where actions have predictable consequences; a neural network can learn these consistent patterns from real-world data. A game like Tetris, in contrast, has an enormous discrete state space and abstract, man-made rules, such as piece rotation or line clearing, that have no physical counterpart. This arbitrariness demands symbolic reasoning and a grasp of abstract rules that, ironically, current systems find harder to acquire. In fact, tasks like programming, with their clear logical rules and immediate feedback, are domains where current language models already excel.
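To make the contrast concrete, here is a minimal sketch in Python (the function names, the simplified kinematics, and the toy board representation are illustrative assumptions, not taken from any real driving stack or Tetris engine): a physics update obeys the same consistent law every time, while a line-clear rule exists only because a designer decreed it.

```python
# Continuous physics: the next state follows from consistent, universal laws.
def physics_step(position, velocity, acceleration, dt=0.1):
    """One step of simple Newtonian kinematics; the same equation applies everywhere."""
    new_velocity = velocity + acceleration * dt
    new_position = position + velocity * dt + 0.5 * acceleration * dt ** 2
    return new_position, new_velocity


# Arbitrary game rule: Tetris-style line clearing has no physical counterpart.
# "A full row disappears and everything above it shifts down" holds only
# because a designer wrote it into the game.
def clear_full_rows(board):
    """Remove full rows from a toy board (list of rows of 0/1 cells)."""
    remaining = [row for row in board if not all(row)]
    cleared = len(board) - len(remaining)
    empty_row = [0] * len(board[0])
    # Pad with empty rows on top so the board keeps its original height.
    return [empty_row[:] for _ in range(cleared)] + remaining, cleared


if __name__ == "__main__":
    # Physics: the outcome is fully determined by the same law, every time.
    print(physics_step(position=0.0, velocity=5.0, acceleration=-9.8))

    # Tetris: the bottom row is full, so the arbitrary rule erases it.
    board = [
        [0, 0, 0, 0],
        [1, 0, 1, 0],
        [1, 1, 1, 1],
    ]
    print(clear_full_rows(board))
```

The point of the sketch is that the physics step can be approximated by fitting a smooth function to observed data, whereas the line-clear rule has to be captured as a discrete, symbolic condition that nothing in the physical world would ever suggest.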
Implications: Redefining "Complex" for Machines 🤔
This perspective flips our intuitions about what is difficult for an AI. It forces us to distinguish between difficulty for humans, rooted in sensorimotor experience and common sense, and abstract computational difficulty. Understanding this distinction is crucial for developing robust AI and for calibrating public expectations. An autonomous car does not understand traffic the way we do, but it can count on a world that behaves predictably. The real challenge ahead is endowing machines with a flexible understanding of arbitrary rules, the terrain where human intelligence still reigns.
How does the complexity of abstract rules versus the predictability of the physical world determine the real difficulty of a problem for artificial intelligence?
(PS: trying to ban a nickname on the internet is like trying to block out the sun with a finger... but digitally)