Here's the thing about AI: you get what you optimize for. If you optimize for a specific skill, like chess or StarCraft, your final system will possess this skill and nothing else. It won't generalize to any other task.

To generalize, you must optimize for generality itself.
To be clear, optimizing for task-specific skill can be valuable. It gets you somewhere. But now, we're at a stage in the development of AI where generalization has become, inevitably, the bottleneck to skill acquisition.
Of course, if you can amass a sufficiently dense sampling of situations within a sufficiently narrow domain, you can always train a machine learning model -- but it will break down as soon as it encounters anything it has never seen before.

That's modern deep learning.
And for many high-value real-world tasks, that happens just about every day. Consider self-driving cars, or domestic robotics. You can't enumerate the set of possible situations a driver might ever encounter -- billions of miles are not nearly enough.
You can't even enumerate the set of possible kitchens a robot might operate in. If you ever want to be able to deploy an L5 self-driving system or a human-level domestic robot, you have to figure out how to implement broad cognitive abilities -- beyond task-specific skills.
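The "dense sampling, then break down" failure mode can be sketched with a toy example. This is purely illustrative: the target function (a sine wave) and the nearest-neighbour "model" are stand-ins for a real task and a real deep learning system, chosen because memorization-plus-interpolation is the behavior in question.

```python
import math

# Dense sampling of a narrow domain: sin(x) on [0, 2*pi], 1001 points.
train_x = [i * 2 * math.pi / 1000 for i in range(1001)]
train_y = [math.sin(x) for x in train_x]

def predict(x):
    """A 1-nearest-neighbour 'model': pure memorization of the training set."""
    i = min(range(len(train_x)), key=lambda j: abs(train_x[j] - x))
    return train_y[i]

# In-domain: dense sampling makes interpolation nearly exact.
in_domain_err = max(abs(predict(x) - math.sin(x)) for x in [0.1, 1.7, 3.3, 5.9])

# Out-of-domain: any query outside the sampled interval breaks down badly.
out_domain_err = abs(predict(20.0) - math.sin(20.0))
```

On the sampled interval the worst-case error is tiny (bounded by the sampling step), while at x = 20.0 the model just returns the nearest memorized value and is off by roughly the full amplitude of the function -- skill without generalization.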