Wizard22 wrote: ↑Thu Feb 15, 2024 10:52 am
Sculptor wrote: ↑Thu Feb 15, 2024 10:38 amI see what you are saying. But think about what you are saying. This is the most primitive and ineffective form of learning for humans. Rote learning in humans is nothing more than memorising a thing like a times table; memorising the names of the presidents. Hopefully the learning in your life was nothing like this. Now consider how difficult that was. The reason rote learning was so difficult, boring, and unsatisfying is that it is not a natural way to learn. It has long been rejected by education departments throughout the globe, and was never effective.
Computers, on the other hand, just absorb the bytes; memorising is not an active process. It is purely passive. The programmer pours it in like filling a bath with water. Hitting copy & paste is not "learning" or "training" as we would understand it.
When a programmer writes a program, that would be analogous to building a structure to take the information; analogous to growing a rat's brain in a vat. Entering the data is the next step: the rote. For humans, this unnatural way of learning requires constant repetition: Washington; Adams; Jefferson; ... again and again. Computers do not even need to do this. They store it the first time. No learning at all.
The recent AI phenomenon, though, is about machine learning and new versions of AI doing things and exhibiting behaviours that are 'unpredictable' (to the programmer). The sophistication and complexity have reached levels that surprise and "out-think" the AI creators. The common fear, now, is that AI will eventually "program itself" or break free from human controls and constraints entirely. But that is still some ways off.
But even the out-thinking is pre-programmed. This is not learning as we know it. It is designed to make statements that "surprise" the programmers, but whilst this is remarkable it has the edge of hype. Programmers want the AI to look amazing. In effect, what is happening is that the program design is geared to build upon the data. There is one huge problem that, I suggest, shall never be overcome, though. When we learn we have constant feedback. Stuff we do that is bad tells us we were wrong; when what we do works, it reinforces our learning. Nothing of the sort is part of AI except in the most primitive ways, and all of those ways are human criteria.
For example, a machine can "learn" to pick up objects from a range of different angles. But the positive feedback has nothing to do with what the machine might "want". The positive feedback, say being able to place the object in a box, is determined and supplied by human need.
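To make that point concrete: in a reward-driven training loop, the "positive feedback" is literally a function the human writes. Here is a minimal, hypothetical sketch (not any real robotics stack; all names are invented for illustration) in which a toy "arm" learns to reach a box only because the programmer defined closeness to the box as the reward:

```python
import random

# Hypothetical toy example: the machine never "wants" anything.
# The reward criterion below is written entirely by the human.
def human_reward(position, target):
    # Human-chosen criterion: closer to the box is "good".
    return -abs(position - target)

def train(target=5, steps=200, seed=0):
    rng = random.Random(seed)
    position = 0
    for _ in range(steps):
        move = rng.choice([-1, 1])
        # Keep a random move only if the human-defined reward improves.
        if human_reward(position + move, target) > human_reward(position, target):
            position += move
    return position

print(train())  # the "arm" ends up at the human-chosen target, 5
```

The machine here optimises whatever number the `human_reward` function returns; change that function and the "learned" behaviour changes with it, which is the sense in which the feedback is a human criterion rather than anything the machine values.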
Regardless, it's not "rote-learning" or copy-pasting anymore. AI programs are "choosing" or "deciding" solutions that are beyond the parameters they're initially programmed with.
No, not really.
There is still that gap between the "learned" behaviour and the result. Only people judge the result. The machine does not choose beyond the task, whereas humans do that all the time.
For example, humans teach children to become autonomous, to make decisions "on their own" at some point: age 7, or 17, or 27, etc. Eventually a child needs to act, on his or her own, based on how he or she was educated/indoctrinated/"programmed". This is similar to what's happening with AI technologies currently. They're being "taught how to choose", and "of their own accord".
Not sure this is right. Much teaching is actually crushing the urge to autonomy. Children have to be reined in; filled with dogma and false ideas so they behave. Humans have a natural tendency to explore, investigate, and act of their own volition. This skill grows with confidence as they shake off the apron strings. And maybe your POV on this point is where you are misunderstanding what is going on.
Do you have children?
Maybe just think back to your own childhood. Were you ever told "DON'T DO THAT"? "Put that down", "get that out of your mouth", "you can't sleep with her", "you can't just stay home and play games - you have to go to school".
I would say that better characterises your experience than, oh say, "go on, you can choose between more cabbage or dessert. Make up your 'own' mind."
It's more complicated than my knowledge of software engineering covers, but AIs parse text and information through "learning" matrices, and the outputs are often 'unknown' to the programmers - just like humans, just like animals, where we don't know the consequences of our actions and decisions until after we "choose" and "the choice is made".
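The "outputs unknown to the programmers" point can be illustrated with a tiny, hypothetical sketch (invented names, nothing from any real AI system): the programmer writes only the update rule, never the answer, and the learned weight emerges from the data:

```python
# Hypothetical sketch: the human specifies *how* to adjust the weight
# (the update rule), but never writes the answer itself into the code.
def fit_slope(pairs, lr=0.01, epochs=500):
    w = 0.0  # the model starts "knowing" nothing
    for _ in range(epochs):
        for x, y in pairs:
            error = w * x - y
            w -= lr * error * x  # gradient step chosen by the human
    return w

# The underlying rule here is y = 3x, but "3" appears nowhere in the program.
data = [(1, 3), (2, 6), (3, 9)]
print(round(fit_slope(data), 2))  # → 3.0
```

In this sense the output is "unknown" beforehand: feed the same code different data and a different weight comes out, which is the kernel of truth in saying the behaviour goes beyond what was explicitly programmed - while still being entirely determined by the human-written rule plus the data.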