FeepingCreature:

Chains of thought! Forcing the model to give the "TRUE" or "FALSE" response first robs it of any chance to actually reason its way to an answer. Instead, I recommend prompting the AI to produce, in order (sketched below):
- a set of relevant points,
- further inferences that can be drawn from those points,
- THEN an explanation leading up to its ultimate answer,
- THEN the actual TRUE or FALSE.
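As a minimal sketch of what I mean, here's how the prompt might be structured; the template wording and the helper names (`build_prompt`, `parse_verdict`) are my own illustration, not any particular library's API, and you'd pass the prompt to whatever LLM client you use:

```python
# Hypothetical structured-reasoning prompt: the model must write out its
# "thinking" (points, inferences, explanation) before the final verdict.
PROMPT_TEMPLATE = """Claim: {claim}

Answer in exactly this order:
1. Relevant points: list the facts that bear on the claim.
2. Inferences: further conclusions that follow from those points.
3. Explanation: reasoning that leads to a verdict.
4. Verdict: TRUE or FALSE (on the last line, with nothing after it).
"""

def build_prompt(claim: str) -> str:
    """Fill the template so the model 'thinks' in text before answering."""
    return PROMPT_TEMPLATE.format(claim=claim)

def parse_verdict(completion: str) -> bool:
    """Read the verdict off the final line, ignoring the reasoning above it."""
    last_line = completion.strip().splitlines()[-1].upper()
    return "TRUE" in last_line

if __name__ == "__main__":
    print(build_prompt("The Great Wall of China is visible from the Moon."))
```

The key design point is that the verdict comes last, so by the time the model emits TRUE or FALSE it has already generated (and can condition on) its own reasoning.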

This may seem like a lot of effort, but keep in mind that the AI has no consciousness or inner train of thought as we do; if we want it to actually "think about a problem", the thinking has to take place inside the text it outputs. Even students get to use a scratch pad.
