Re: The Problem of Light and Speed in a Vacuum
Posted: Fri Feb 01, 2019 9:54 pm
surreptitious57 wrote: ↑Fri Feb 01, 2019 5:56 am
    Logic wrote:
        any symbol written on paper can and will be mis interpreted
    Symbols are a subset of language, and like any language they can be open to misinterpretation, but the symbols of mathematics are very rigorous indeed, much more so than those of non-mathematical language [words].
    I think the symbol grounding problem is more a question for philosophy than for mathematics or science, for it is fundamentally metaphysical, and metaphysics has absolutely nothing to do with mathematics or science.
    Would machine intelligence of the future be capable of solving the problem if it is beyond human capability? It would certainly have superior processing capability, but would that automatically produce an answer?
    Scientists and mathematicians have no problem conveying their ideas using mathematical language, so they will carry on doing so, because the solution to the symbol grounding problem is not something that concerns them. But what guarantee is there that the problem itself will not be open to misinterpretation if it is ever solved?
    The symbols themselves are not actually that important, since, like words, it is the meaning that really matters.

The symbol-grounding problem is a practical problem.
It is about ensuring that the content/information the “sender” is attempting to relay is interpreted in exactly the same sense by the receiver.
It requires that words have strict and objective meanings.
There are only two ways to ensure this:
1. Prescriptive linguistic rules (not ideal).
2. Consensus from zero-knowledge.
Still, communication science has a well-developed toolkit of strategies for dealing with transmission errors.
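For illustration, one classic strategy from that toolkit is an error-correcting code: the sender adds redundant parity bits so the receiver can detect and even repair corruption introduced in transit. Here is a minimal sketch of a Hamming(7,4) code in Python (not something from the thread, just a standard textbook example):

```python
def hamming74_encode(d):
    """Encode 4 data bits as 7 bits by adding 3 parity bits (Hamming(7,4))."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4          # parity over codeword positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4          # parity over codeword positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4          # parity over codeword positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    """Correct at most one flipped bit, then return the 4 data bits."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3   # 1-based position of the error, 0 if none
    if syndrome:
        c[syndrome - 1] ^= 1          # flip the corrupted bit back
    return [c[2], c[4], c[5], c[6]]

# A single transmission error is silently repaired:
sent = hamming74_encode([1, 0, 1, 1])
sent[3] ^= 1                          # noise flips one bit in transit
assert hamming74_decode(sent) == [1, 0, 1, 1]
```

The point of the analogy: this only works because sender and receiver agree in advance, and exactly, on what every bit position means. That is the kind of strict, objective shared convention that natural language lacks.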
In a context free of any risk, where errors in communication have absolutely no negative consequences, you are free to express yourself however you like.
Dream, theorize, imagine, write poetry.
There is zero need for precision. Only aesthetics.
There are a few AI/robotics research houses that claim to have solved the problem, and they have, somewhat: in well-controlled environments they have managed to get robots to recognise, learn, and agree on meaning.
It is very far from being useful between humans.
We don’t like rules - robots love them.
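The "agree on meaning" part can be sketched as a toy naming game (in the spirit of Steels-style language games; the parameters and word format below are purely illustrative, not any lab's actual method). Agents start with no shared vocabulary, repeatedly pair up to name an object, and follow a rigid rule: on a successful exchange both drop all competing words; on a failure the hearer adopts the speaker's word.

```python
import random

def naming_game(n_agents=5, n_objects=3, rounds=3000, seed=0):
    """Toy naming game: agents converge on shared names for objects
    through repeated pairwise interactions and a fixed update rule."""
    rng = random.Random(seed)
    # Each agent maps object -> set of candidate words it has heard or invented.
    vocab = [{o: set() for o in range(n_objects)} for _ in range(n_agents)]
    for _ in range(rounds):
        speaker, hearer = rng.sample(range(n_agents), 2)
        obj = rng.randrange(n_objects)
        if not vocab[speaker][obj]:
            # Speaker has no word for this object yet: invent one.
            vocab[speaker][obj].add(f"w{rng.randrange(10**6)}")
        word = rng.choice(sorted(vocab[speaker][obj]))
        if word in vocab[hearer][obj]:
            # Success: both agents discard competing words for this object.
            vocab[speaker][obj] = {word}
            vocab[hearer][obj] = {word}
        else:
            # Failure: hearer adds the speaker's word to its candidates.
            vocab[hearer][obj].add(word)
    return vocab

# After enough rounds the population typically settles on one name per object.
vocab = naming_game()
```

It works precisely because the agents obey the update rule without exception, which is the point of the line above: robots love rules, and the controlled environment supplies them.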