There seems to be a common intuition that parts of a system can't understand the system they are in without stepping outside of it. This is mostly applied to ideological and political issues ("Ideology is everywhere, so you can't step outside it and thus can never fully understand it"), but I've seen it applied to artificial intelligence as well ("A computer program can't fully understand itself" is treated as self-evident by some).
Is there something to this intuition, or is it just rhetoric? I can't think of any obviously necessary reason why a part of a system shouldn't be able to perceive or understand the system as a whole.