Project Q*, OpenAI, the Chinese Room, and AGI

jawaskan
Posts: 1
Joined: Fri Nov 24, 2023 6:47 pm

Project Q*, OpenAI, the Chinese Room, and AGI

Post by jawaskan »

Me: “If I put a mouse into a jack-o-lantern, will it be able to breathe?”

ChatGPT: “No, a mouse should not be placed inside a jack-o-lantern or any other enclosed space. Jack-o-lanterns are typically hollowed-out pumpkins, and sealing any living creature inside can lead to serious harm or death due to lack of air and proper ventilation…”
[The rest of ChatGPT’s answer was boilerplate about animal care.]

I offer a prediction: OpenAI’s much-hyped quantum leap towards AGI involves a hybrid system joining an LLM with a non-linguistic modeling system capable of spatial (or spatiokinetic) modeling.

It’s been reported that Sam Altman’s now-reversed ouster had to do with a second quantum leap (the first being the dispatching of vanishing gradients) towards artificial general intelligence (AGI) through OpenAI’s Q* project (https://www.reuters.com/technology/sam- ... 023-11-22/).

If Q* does, as has been claimed, have arithmetic abilities akin to a 7th grader’s, my bet is that the breakthrough involves representations that are more like scale models than like sentences or code. To justify this prediction, let me make some more general points about what ChatGPT (and other LLMs) lack and what they must gain to approach AGI.

The reason for ChatGPT’s inadequate answer to the above mouse query (and to others like it that I have posed) is that ChatGPT is trapped in the Chinese Room. It has access only to arbitrary linguistic shapes. Because semantic regularities are often mirrored by linguistic ones, it can answer many queries in ways that seem eerily intelligent. Even so, it will always lag behind human intelligence.

One might think that the problem has to do with the expressions lacking ‘grounding’ in the real world. But purely from an engineering perspective, what ChatGPT lacks is internal representations that are more richly isomorphic to the real world, the way that scale models are, and the way that many computational models are: the ones used everywhere in engineering (civil, mechanical, etc.), science (biology, astronomy, meteorology, etc.), and even gaming. [I’ve published on this extensively if you’d like to get into the weeds.] For human-like reasoning and planning, what’s required is internal models of the kind that support boundless spatial and mechanical inferences.
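To make “richly isomorphic” concrete, here is a throwaway Python illustration of my own (nothing to do with anything OpenAI has built): the variables map onto real quantities, the update rule mirrors the physics, and the inference comes from running the model rather than from patterns in text.

[code]
# A toy "scale model in software": each variable maps onto a real quantity
# (height in metres, speed in m/s), and the update rule mirrors the physics.
# The answer falls out of running the model, not out of pattern-matching on text.

def time_to_fall(height_m: float, dt: float = 0.001) -> float:
    """Simulate a dropped object; return the time until it hits the ground."""
    g = 9.81          # gravitational acceleration, m/s^2
    velocity = 0.0    # downward speed, m/s
    t = 0.0           # elapsed time, s
    while height_m > 0.0:
        velocity += g * dt
        height_m -= velocity * dt
        t += dt
    return t

print(round(time_to_fall(20.0), 2))   # roughly 2.02, close to sqrt(2*20/9.81)
[/code]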

ChatGPT can’t answer the mouse question because it lacks a decent non-linguistic model of the jack-o-lantern-mouse-air system that can be used to infer that air will still enter the jack-o-lantern, that the mouse could happen upon an exit, and so on.
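For vividness, here is a crude sketch of my own of what even a toy non-linguistic model of that system could look like: the pumpkin as a shell of wall cells, the carved face as openings, and “can air reach the mouse?” as connectivity to the outside. (This is a cartoon, not a guess at OpenAI’s actual machinery.)

[code]
# Toy non-linguistic model of the jack-o-lantern / mouse / air system: the
# pumpkin is a square shell of wall cells with carved openings, and "can air
# reach the mouse?" is answered by flood-filling from the outside. The verdict
# is read off the model's structure, not off word co-occurrence statistics.

from collections import deque

WALL, OPEN = "#", "."

def build_jack_o_lantern(size=9, openings=((4, 0), (0, 4))):
    """A square 'pumpkin': walls on the border, hollow inside, carved holes."""
    grid = [[OPEN] * size for _ in range(size)]
    for i in range(size):
        grid[0][i] = grid[size - 1][i] = grid[i][0] = grid[i][size - 1] = WALL
    for r, c in openings:                      # the carved face / lid gap
        grid[r][c] = OPEN
    return grid

def air_reaches(grid, cell):
    """Flood-fill from the space surrounding the pumpkin; does air reach `cell`?"""
    size = len(grid)
    start = (-1, -1)                           # a point in the outside air
    seen, frontier = {start}, deque([start])
    while frontier:
        r, c = frontier.popleft()
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (nr, nc) in seen or not (-1 <= nr <= size and -1 <= nc <= size):
                continue                       # visited, or too far from the pumpkin
            if 0 <= nr < size and 0 <= nc < size and grid[nr][nc] == WALL:
                continue                       # blocked by the pumpkin's flesh
            seen.add((nr, nc))
            frontier.append((nr, nc))
    return cell in seen

carved = build_jack_o_lantern()
sealed = build_jack_o_lantern(openings=())
print(air_reaches(carved, (4, 4)))   # True: air still gets in through the holes
print(air_reaches(sealed, (4, 4)))   # False: a truly sealed pumpkin would suffocate it
[/code]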

The next quantum leap towards AGI, then, can only come from a system that does more than manipulate language: one that can pair linguistic representations with models of what those sentences describe. Hence my prediction. [The advance after that will involve the ability to simulate other minds.]
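In cartoon form, the pairing I am predicting would look something like the following hand-wavy sketch of my own, where the “LLM” is a stub and the non-linguistic medium reuses build_jack_o_lantern and air_reaches from the sketch above: language in, scene out; scene in, verdict out; verdict in, language out.

[code]
# Hand-wavy sketch of the hybrid system I am predicting. The "LLM" here is a
# stub, and the simulator reuses the flood-fill model from the previous sketch.

def parse_with_llm(question: str) -> dict:
    """Stand-in for an LLM that maps a sentence onto a scene specification."""
    # A real system would derive this structure from the text itself.
    return {"container": "jack-o-lantern", "occupant": "mouse",
            "openings": [(4, 0), (0, 4)], "query": "air_reaches_occupant"}

def run_model(scene: dict) -> bool:
    """The non-linguistic step: consult the spatial model, not word statistics."""
    grid = build_jack_o_lantern(openings=scene["openings"])
    return air_reaches(grid, (4, 4))      # the occupant sits in the hollow interior

def verbalize_with_llm(can_breathe: bool) -> str:
    """Stand-in for an LLM turning the model's verdict back into a sentence."""
    return ("Yes, air will still get in through the carved openings."
            if can_breathe else "No, a fully sealed pumpkin would suffocate it.")

question = "If I put a mouse into a jack-o-lantern, will it be able to breathe?"
print(verbalize_with_llm(run_model(parse_with_llm(question))))
[/code]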

If OpenAI is *legitimately* excited/frightened by some quantum leap towards AGI, it involves a hybrid system combining an LLM with a non-linguistic modeling medium.

If anyone desires it, I can tie this to grade-school mathematical reasoning, explaining how mental matchsticks and the like can keep an LLM’s arithmetical, algebraic, and geometrical reasoning tied to its targets. I also have some thoughts (discussed in my Models and Cognition, 2003) on how this approach to deep learning will lead to superintelligence. For now, though, my bet has been placed.
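For the curious, one crude taste of the matchstick idea (again, an illustration of my own): numerals paired with piles of discrete tokens, so that arithmetic claims get checked against a model of what they are about rather than against the statistics of the symbols.

[code]
# "Mental matchsticks" in the crudest form: numerals are paired with piles of
# discrete tokens, and arithmetic claims are checked against the piles.

def matchsticks(n: int) -> list:
    """Represent the number n as a pile of n matchstick tokens."""
    return ["|"] * n

def check_sum(a: int, b: int, claimed: int) -> bool:
    """Push the two piles together; does the result match the claimed pile?"""
    return len(matchsticks(a) + matchsticks(b)) == len(matchsticks(claimed))

def check_product(a: int, b: int, claimed: int) -> bool:
    """Lay out a rows of b matchsticks each; does the total match the claim?"""
    rows = [matchsticks(b) for _ in range(a)]
    return sum(len(row) for row in rows) == len(matchsticks(claimed))

print(check_sum(7, 5, 12))       # True: the piles line up
print(check_sum(7, 5, 13))       # False: one token has no partner
print(check_product(3, 4, 12))   # True: three rows of four
[/code]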