AI companies often claim to have achieved a level of “intelligence” in their technologies comparable to having a highly educated assistant in your pocket, akin to someone with a PhD. Claus Wilke contends that this framing is misguided.
He grants that AI models already possess the persistence a PhD demands (provided funding is available) and emphasizes that extraordinary intelligence is not a prerequisite. What, then, prevents current AI models from performing at a PhD level? The answer, in his view, lies in their limited capacity for reasoning, introspection, and self-reflection, and in their inability to construct and refine an accurate mental model of a field over time. Crucially, because PhD-level research operates at the frontier of human knowledge, it demands handling novel situations and data sets that have rarely been explored or documented.
In practical terms, most people are likely more interested in a capable assistant than in one with PhD-level expertise. They want prompt solutions, akin to those of an industrious intern.
A PhD-level assistant would likely respond to a query with further questions and, after a considerable delay of five to seven years, provide an “answer” that may be correct but remains shrouded in uncertainty. The upside is that such an answer opens pathways for deeper exploration in subsequent research. However, you would have to navigate that journey independently or engage another PhD-level assistant, as the original one will likely have moved on to other interests.