PeteOlcott wrote: ↑Thu May 04, 2023 6:24 am
I always work inward from the boundary conditions. I am trying to precisely
define the nature of knowledge as the basis for the architecture for an AI
mind. It seems that you are saying something along the lines that you simply
do not believe that knowledge exists. That is not the kind of knowledge that
I am referring to.
I believe that knowledge exists, though I don't demand 100% certainty of it. I think we need to acknowledge the potential for fallibility in most things, especially empirically arrived-at conclusions; however, there can still be a useful distinction between knowledge and other beliefs. IOW, we have rigorous, if not infallible, criteria. Yes, sometimes we may end up considering something to be knowledge that later turns out not to be the case. But it still allows us to work with some beliefs and not others. This is much how science works.
My issue with your first definition was that it leaves out, for example, pretty much all of science. Now that's fine, if that works for you and you make it all clear. But I do think it is useful to consider many conclusions in science to be knowledge, even if that way of working is not perfectly infallible. There are other things I would include within knowledge even though the criteria can't rule out every possible error, misinterpretation, or falsehood. I think that's also realistic.
Otherwise I don't think there's much left over. And I don't think your example of seeing the TV in your living room works as completely infallible knowledge, for a few reasons. If you want an AI that does math and can deduce certain conclusions using word definitions, well, fine, keep it really restrictive. I think even with analytical conclusions errors are still possible in particular instances, but there's no reason an AI couldn't be built with that limited skill set.
Also, I think there's a problem with the example of the person seeing their TV. Me personally, if I think I am sitting in my living room seeing my TV, I will treat that as certain - perhaps I'll wake up or something, but I'm also confident of my ability to differentiate dream from waking. That said, it's not really knowledge in the sense of community knowledge.
The latter has to do with things where we can check the justification and draw conclusions that get added to the knowledge of the community. I mean, at least usually.
Of course one can define knowledge as one wants. I don't know what your goals for the AI are, so that would affect the criteria. But, for example, if the AI is going to navigate physical environments, it'll need some kind of fallible but very effective set of heuristics. Fallible because it may misinterpret shadows, forms, depth of field, and so on. If it is analyzing camera data, then again, unless you want it to throw out pretty much every conclusion, it's going to need a conception of knowledge that is potentially fallible. If it is working with language in communication with humans, again, potentially fallible. Semantics is not like the rules of chess; the meanings of words are to some degree fuzzily defined.
If the AI is going to do math problems, fine.
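To make that concrete, here's a minimal sketch in Python of what a fallibilist criterion might look like inside such an AI. Everything here is hypothetical and illustrative (the names Belief, counts_as_knowledge, KNOWLEDGE_THRESHOLD, and the confidence numbers are my own, not anything from your architecture): beliefs that clear a high but deliberately sub-certain justification bar get treated as knowledge, and new evidence can demote them back to mere belief.

```python
from dataclasses import dataclass

# Hypothetical sketch of a fallibilist belief store: high-confidence,
# well-justified beliefs are treated as "knowledge" but stay revisable.

@dataclass
class Belief:
    claim: str
    confidence: float   # 0.0-1.0, e.g. from a sensor model or classifier
    justification: str  # checkable by the community, per the post above

KNOWLEDGE_THRESHOLD = 0.99  # rigorous, but deliberately short of 1.0

def counts_as_knowledge(b: Belief) -> bool:
    """Fallibilist criterion: knowledge = belief that clears a high,
    but not infallible, justification bar."""
    return b.confidence >= KNOWLEDGE_THRESHOLD

def revise(b: Belief, new_confidence: float) -> Belief:
    """New evidence can demote 'knowledge' back to mere belief."""
    return Belief(b.claim, new_confidence, b.justification)

tv = Belief("there is a TV in my living room", 0.999, "direct observation")
shadow = Belief("that shape is a person", 0.80, "camera frame analysis")

print(counts_as_knowledge(tv))      # True  -- treated as knowledge
print(counts_as_knowledge(shadow))  # False -- kept as a fallible heuristic belief
# And if later evidence undermines the TV belief:
print(counts_as_knowledge(revise(tv, 0.40)))  # False -- revised away
```

The point of the sketch is just that the knowledge/belief line can be rigorous and usable without being infallible: nothing in it ever reaches certainty, yet the AI can still act on some beliefs and not others.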
It's been a while since I looked at Gettier problems, but it seems to me those deal with conclusions in empirical situations, not analytical types of conclusions. So, if you are going to restrict knowledge to just analytical conclusions, then Gettier isn't relevant. It's not that those problems are solved; rather, the whole realm of empirical conclusions is taken off the table.