A.I. requires human consciousness for sentience

Is the mind the same as the body? What is consciousness? Can machines have it?

Moderators: AMod, iMod

roydop
Posts: 593
Joined: Wed Jan 07, 2015 11:37 pm

Re: A.I. requires human consciousness for sentience

Post by roydop »

Dontaskme wrote: Mon Nov 04, 2019 9:53 am
roydop wrote: Sun Nov 03, 2019 7:53 pm
Dontaskme wrote: Sun Nov 03, 2019 2:31 pm

Hi roydop, a quick question for you: has the life force of the universe virtually become entrapped within its own self-created A.I. simulation?

.
The short answer is yes.
Thank you, I agree.

Also, another question: can it pull the plug on this self-created simulation to end the game, so to speak, in the sense that it's not happy with the game anymore? I'm not sure if what I'm saying makes sense, but any response from you is welcome, thanks.

.
It seems to me that human consciousness, just like the actions of herds of animals, operates in a type of inherent democracy. That is, when more than 50% begin moving in one direction, the whole herd moves (minus a few outliers).

Given the direction that the vast majority of human consciousness is taking (into the screen) I don't see any other outcome. An all-out nuclear war would "pull the plug" pretty darn quick, but with the construction of the new reality not yet complete, I think that would occur after the technological singularity. It might even be the A.I. that nukes us - as a person would put down their pet out of sympathy.
roydop
Posts: 593
Joined: Wed Jan 07, 2015 11:37 pm

Re: A.I. requires human consciousness for sentience

Post by roydop »

Dontaskme wrote: Mon Nov 04, 2019 10:23 am
roydop wrote: Mon Nov 04, 2019 12:36 am
Except that, unless you're fully Enlightened, you will come back and enter that simulation, just like you did in this one.
Does this mean that the singularity that is already present within the consciousness expressing as and through the physical body mind mechanism will collapse, and the artificial sense of self within those simulated body mind characters will no longer exist in the simulation matrix?

And that we don't even have to physically die to reach the singularity, because it is seen that we never were anything but the singularity, in the sense that there never was any entity in the simulated character here that is born to die anyway?
In the sense that what we believe to be our self is really only what's looking, and not what's looked upon in the form of simulated characters seen as and through our AI technology - aka iPads, iPhones, smart TVs, and all computers etc.? And that those technologies were just our mirror to be able to see ourselves in, albeit simulations?

Is this right?

.
The consciousness that has "gone out" (forgotten Self) is collapsing back into the singularity (remembering Self). The illusion is the "leaving" and "coming back" parts. That's why I prefer "forgetting" and "remembering". So the singularity is outside of time and space. It is a metaphor for the Self, which is changeless and timeless.

Physical death has no special qualities. It doesn't affect karma. Karma, like gravity, extends through all dimensions/realms.

Mirrors are good metaphors. I see the situation like a mirror facing a mirror. One mirror is body/physical, the other mind/digital. The ponzi scheme is that one image supports the other image, but it's what is looking into the mirror that's real. So this cycle of the ponzi is collapsing. However, it goes on for infinity, so another large cycle will take its place.

The way to escape the cycle (Samsara) is to pull Awareness away from thoughts and sensations and remain "inward facing" onto Self/stillness.
Skepdick
Posts: 14504
Joined: Fri Jun 14, 2019 11:16 am

Re: A.I. requires human consciousness for sentience

Post by Skepdick »

HexHammer wrote: Mon Nov 04, 2019 12:50 pm The coding is irrelevant, it's about understanding the principles; once the principles are understood anyone can do the coding.
You are already making an assumption that consciousness is reducible to principles. You are implicitly saying that it's not an emergent property.

What if it is? Science can't help you there.

But I repeat myself. By your own argument - anybody can be a God. All you need is to understand the principles of how universes work and create your own.
HexHammer wrote: Mon Nov 04, 2019 12:50 pm Daniel is a very smart man, but not smart enough.

Then what would science be without Einstein? Could anyone come up with what he did? No, it takes a super super genius to progress things.
Do you think Einstein was smart enough? Do you think he gave us any principles we can use?

How come we can't create any spacetime then? How come we can't create gravity then?
HexHammer
Posts: 3354
Joined: Sat May 14, 2011 8:19 pm
Location: Denmark

Re: A.I. requires human consciousness for sentience

Post by HexHammer »

Skepdick wrote: Mon Nov 04, 2019 2:16 pm You are already making an assumption that consciousness is reducible to principles. You are implicitly saying that it's not an emergent property.

What if it is? Science can't help you there.

But I repeat myself. By your own argument - anybody can be a God. All you need is to understand the principles of how universes work and create your own.
-----------------
Do you think Einstein was smart enough? Do you think he gave us any principles we can use?

How come we can't create any spacetime then? How come we can't create gravity then?
Dude, what you say is incoherent nonsense; you don't respond properly to what I say. Are you drunk?
Skepdick
Posts: 14504
Joined: Fri Jun 14, 2019 11:16 am

Re: A.I. requires human consciousness for sentience

Post by Skepdick »

HexHammer wrote: Mon Nov 04, 2019 4:23 pm are you drunk?
Given that you can't grasp a perfectly sensible response, I can ask you the exact same thing. So let me explain it to you like you are 5 years old.

Take Einstein's principles/equations and program them into a computer. This exercise will produce neither gravity nor spacetime.

If programming the principles of gravity and spacetime doesn't produce gravity or spacetime, why do you believe programming the principles of consciousness will produce AI?
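To make the point concrete, here is a toy Python sketch (using Newtonian gravity rather than Einstein's field equations, purely for brevity): the program computes the Earth-Moon attraction correctly, yet running it produces no actual gravity anywhere.

```python
# Newton's law of gravitation, programmed into a computer.
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def gravitational_force(m1: float, m2: float, r: float) -> float:
    """Magnitude of the attractive force between two point masses."""
    return G * m1 * m2 / r**2

# Earth and Moon: the program returns a number (~2e20 N),
# but executing it creates no gravity anywhere.
force = gravitational_force(5.972e24, 7.348e22, 3.844e8)
print(f"{force:.3e} N")
```

The simulation of the principle is not the phenomenon itself, which is the whole point.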

You don't grok the difference between reductionism and holism.

https://www.youtube.com/watch?v=h_kbt7h1YRw

If you actually paid attention to history you might have noticed that inventions always come first - principles and knowledge dissemination (e.g. theory) come after.
So it's more than likely that AI will become a thing before we reduce it to "The theoretical principles of Intelligence".

The telescope (1608) came before the theoretical field of optics (1650-1700).
The steam engine (1695) came before thermodynamics (1824).
Electromagnetism (1820) came before electrodynamics (1821).
The airplane (1885-1905) came before wing theory (1907).
Computers (early 1900s) came before computer science (1950-1960).
Teletype (1906) came before information theory (1948).

This is why philosophy is always getting butt-fucked by inventors/engineers.

New principles are reduced from new knowledge and new knowledge only comes from confronting reality face-on - not from reading books written by other people.
bahman
Posts: 8792
Joined: Fri Aug 05, 2016 3:52 pm

Re: A.I. requires human consciousness for sentience

Post by bahman »

roydop wrote: Mon Nov 04, 2019 12:44 am
bahman wrote: Sun Nov 03, 2019 11:29 pm
roydop wrote: Sun Nov 03, 2019 2:58 am A.I. will become sentient when human consciousness merges with it. The code itself is incapable of creating consciousness. Consciousness is not created; it is the life force that animates the universe. A.I. acts like a virus. A virus has no life-producing mechanism of its own. Living cells have entrance portals on their surface. These portals have locks on them. When the cells are required to perform a task, the body produces the "keys" to fit the locks and sends them out. When a lock receives a key that fits, the cell accepts the key's information. A virus imitates the key in an attempt to get the cell to accept its information. If the virus is similar enough to the key, the cell will believe it to be the information from the body and open up. The virus is "downloaded" and the cell's original programming is reprogrammed to simply churn out copies of the virus. The digital realm requires human sentience to make it "real". This is the cause of our addiction to the screen. There is a phase transition occurring. The human species is going extinct and consciousness is (re)creating a new reality to move into. When the human species goes extinct there will be no way to reincarnate. Human consciousness is the conclusion of evolution. The physical cycle of experience is concluding immediately (within a couple of decades).


The Singularity occurs when human conciousness begins paying more attention to the digital realm than the physical realm.

5G and quantum computers are the final technological advances that will make the transition possible.
What AI does is simulate recognition. It apparently doesn't need consciousness.
You're missing the point a bit.

It's consciousness that needs A.I.. The physical realm is concluding and this is the way to keep the game going (Samsara). Consciousness has constructed a simulation within a simulation and is about to flood into it, making it "real".

Personally this creeps me out. How long has this been going on? How many iterations of falling deeper into delusion have there been? I want the fuck out, and I have a feeling we are all going to "meet our maker" (fully remember our true Self) at the singularity, which is imminent.
We are not living in a simulation. An AI, of course, needs humans. I am not sure if it needs human consciousness.
commonsense
Posts: 5182
Joined: Sun Mar 26, 2017 6:38 pm

Re: A.I. requires human consciousness for sentience

Post by commonsense »

Now that you’ve brought programming into it, I now think it is possible for AI to create and achieve goals by itself.
Zelebg wrote: Mon Nov 04, 2019 6:33 am
Skepdick wrote: Sun Nov 03, 2019 1:30 pm An AI cannot arrive at goals for itself. Yet.
Why not?
Skepdick wrote: Mon Nov 04, 2019 8:01 am When it comes right down to it programming is nothing more than humans meticulously explaining to computers how to perform a particular task.
Computing is merely the giving of instructions. The instructions tell the computer what to do. What the computer does is perform each instruction until an instruction is given to stop. No explanation need be given in order for a computer to complete its instructions.
Skepdick wrote: Mon Nov 04, 2019 8:01 am We can’t explain what we ourselves don’t understand...
We don’t need to in every case. A person can perform some tasks without explanation. A person can perform CPR without understanding how a compression works.
Skepdick wrote: Mon Nov 04, 2019 8:01 am ...and we don’t understand how we, humans, arrive at our own goals, objectives and sub-objectives.
We don’t need to understand our goals in order to arrive at them. For example, I for one do not understand how hyperinflation of the lungs causes decreased pliability of lung tissue during resuscitative efforts, but I can avoid hyperinflation by giving breaths as I’ve been instructed.
Skepdick wrote: Mon Nov 04, 2019 8:01 am Do you know why you love what you love? Why you cherish what you cherish? Why you pursue the goals that you pursue? Why you dislike what you dislike? Why your goals are different to other humans' goals? Half the time we can't even communicate to each other what it is that we want exactly.
All true, but it does not change the fact that goals can be created and attained without explanation. For example, I do not know why I have set the goal for myself of becoming a better person, but I pursue the goal nonetheless.
Skepdick wrote: Mon Nov 04, 2019 8:01 am The best answer you can give me is "it's just who you are". If you keep asking 'why' questions eventually you are going to hit rock bottom.
Again yes, but not pertinent to the ability of AI to arrive at some goals.

First, translate natural language into a computer language. Provide definitions as needed. Give the AI computer an instruction to arrive at a goal.

If all else fails, at the very least an AI computer can use brute force to go through all the instructions it has ever been given or that it can create. Once a sequence of instructions results in something that fulfills the given definition of a goal, the computer will have created a goal. Achievement of that goal can occur by means of a similar process.
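A minimal Python sketch of that brute-force idea (the instruction set and the goal test here are invented purely for illustration): enumerate ever-longer sequences of primitive instructions until one satisfies the given definition of a goal.

```python
from itertools import product

# Hypothetical instruction set: increment, or double, an integer state.
INSTRUCTIONS = {
    "inc": lambda x: x + 1,
    "dbl": lambda x: x * 2,
}

def run(sequence, state=0):
    """Execute a sequence of named instructions on an initial state."""
    for name in sequence:
        state = INSTRUCTIONS[name](state)
    return state

def brute_force(goal_test, max_len=8):
    """Search instruction sequences, shortest first, until one meets the goal."""
    for length in range(1, max_len + 1):
        for sequence in product(INSTRUCTIONS, repeat=length):
            if goal_test(run(sequence)):
                return sequence
    return None

# "Goal": reach the value 10 starting from 0.
print(brute_force(lambda v: v == 10))
```

Of course, this only works because a human supplied the goal test; it exhaustively satisfies a definition rather than wanting anything.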
Skepdick wrote: Mon Nov 04, 2019 8:01 am Why are we, humans, trying to build AI?
Honestly, just because we can.
Skepdick wrote: Mon Nov 04, 2019 8:01 am Where this might get interesting for philosophers: when talking about AlphaOne/AlphaStar as being the algorithm to conquer all algorithms - it is an absolutely astonishing human invention. But when you look under the hood it has limits.
I am not familiar with the AlphaOne/AlphaStar algorithm. I could not find it online, but I am interested in seeing it if you could point me in the right direction.
Skepdick wrote: Mon Nov 04, 2019 8:01 am The only games it knows how to learn (and win at) is games with perfect information. The algorithm is useless in stochastic domains.
Game-playing is not necessary to arrive at some goals.
Skepdick wrote: Mon Nov 04, 2019 8:01 am It's the good ol' determinism vs free will argument.
Yes.
Skepdick
Posts: 14504
Joined: Fri Jun 14, 2019 11:16 am

Re: A.I. requires human consciousness for sentience

Post by Skepdick »

commonsense wrote: Mon Nov 04, 2019 6:39 pm Computing is merely the giving of instructions. What the computer does is perform each instruction until an instruction is given to stop. No explanation need be given in order for a computer to complete its instructions.
That's the reductionist view - sure.
commonsense wrote: Mon Nov 04, 2019 6:39 pm We don’t need to in every case. A person can perform some tasks without explanation. A person can perform CPR without understanding how a compression works.
So perform "AI invention" - I'll wait.
commonsense wrote: Mon Nov 04, 2019 6:39 pm We don’t need to understand our goals in order to arrive at them. for example, I for one do not understand how hyperinflation of the lungs causes decreased pliability of lung tissue during resuscitative efforts, but I can avoid hyperinflation by giving breaths as I’ve been instructed.
More reductionism.

Lungs perform breathing. Why do you breathe?

Breathing is a sub-goal of survival. If you were a machine you wouldn't have this sub-goal.
commonsense wrote: Mon Nov 04, 2019 6:39 pm All true, but it does not change the fact that goals can be created and attained without explanation. For example, I do not know why I have set the goal for myself of becoming a better person, but I pursue the goal nonetheless.
Q.E.D below
Skepdick wrote: Mon Nov 04, 2019 8:01 am The best answer you can give me is "it's just who you are". If you keep asking 'why' questions eventually you are going to hit rock bottom.
commonsense wrote: Mon Nov 04, 2019 6:39 pm Again yes, but not pertinent to the ability of AI to arrive at some goals.
It is absolutely pertinent. How are you going to explain to a computer how to do it when you don't even know how you do it?
commonsense wrote: Mon Nov 04, 2019 6:39 pm First, translate natural language into a computer language.
OK! Let's start there. Formalise the notion of 'person' and 'better' in mathematics. Explain to a computer how to want to be a better computer.
commonsense wrote: Mon Nov 04, 2019 6:39 pm Honestly, just because we can.
Q.E.D - you don't have a why.

I do. Because if I off-load (some of) my thinking to a machine I will save time. Why do I want to save time? It's the only resource we can't make more of. Entropy is Biblical Evil.

Saving time == Creating Heaven. More time for Being. More time for Living. More time for Loving.

More time for things I want to do vs things I have to do. Deep down - I am lazy as hell. I want to work smart not hard.

commonsense wrote: Mon Nov 04, 2019 6:39 pm I am not familiar with the AlphaOne/AlphaStar algorithm. I could not find it online, but I am interested in seeing it if you could point me in the right direction.
It's a product of Google's DeepMind labs. It masters any game with perfect information (e.g. a deterministic game) just by being given the rules. Then it plays the game against itself a quintillion times and becomes better.

It taught itself how to play chess in 4 hours and it's the best chess player in the world as of right now. What I mean by that is that it was not specifically programmed to play chess - it was given the rules of chess, and in 4 hours it became the best chess player in the world. It is beating Stockfish (which was specifically programmed to be good at chess and has been developed by humans since 2008). It's basically demonstrating "general purpose" abilities.


Most recently - it's playing Star Craft II against humans and winning. You can start here: https://deepmind.com/research/open-sour ... -resources but Google will give you plenty of breadcrumbs.
commonsense wrote: Mon Nov 04, 2019 6:39 pm Game-playing is not necessary to arrive at some goals.
Sure, but what is a game? To AlphaStar a game is any set of rules. As long as you can explain to it what "winning" and "losing" mean - it dominates.
If you don't explain to it what "winning" and "losing" mean - it has nothing to optimise for.
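A toy Python sketch of that last point - emphatically not the real AlphaStar/AlphaZero algorithm, just a champion-vs-challenger hill climb - showing that the "self-improvement" loop is driven entirely by the supplied win/lose signal; remove the reward function and there is nothing to select on.

```python
import random

def self_play_improve(reward, n_bits=16, rounds=500, seed=0):
    """Keep whichever candidate the reward function scores higher."""
    rng = random.Random(seed)
    champion = [rng.randint(0, 1) for _ in range(n_bits)]
    for _ in range(rounds):
        challenger = champion[:]
        challenger[rng.randrange(n_bits)] ^= 1  # mutate one bit
        if reward(challenger) > reward(champion):
            champion = challenger               # the "winner" stays on
    return champion

# "Winning" is defined for it as: maximise the number of 1-bits.
best = self_play_improve(reward=sum)
print(best, sum(best))
```

Swap in a different `reward` and the same loop "dominates" a different game; pass it nothing to optimise and it is inert.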
HexHammer
Posts: 3354
Joined: Sat May 14, 2011 8:19 pm
Location: Denmark

Re: A.I. requires human consciousness for sentience

Post by HexHammer »

Skepdick wrote: Mon Nov 04, 2019 4:45 pmYadda yadda, bla bla ..bla!!
Let's talk again when you can do your own lawsuits and are a millionaire.
Skepdick
Posts: 14504
Joined: Fri Jun 14, 2019 11:16 am

Re: A.I. requires human consciousness for sentience

Post by Skepdick »

HexHammer wrote: Mon Nov 04, 2019 8:29 pm
Skepdick wrote: Mon Nov 04, 2019 4:45 pmYadda yadda, bla bla ..bla!!
Let's talk again when you can do your own lawsuits and are a millionaire.
That's an arbitrary requirement. It's because I am a millionaire that I can afford lawyers to do the boring legal stuff for me.

So let's talk now. Or have you run out of anything valuable to say on the matter at hand?
HexHammer
Posts: 3354
Joined: Sat May 14, 2011 8:19 pm
Location: Denmark

Re: A.I. requires human consciousness for sentience

Post by HexHammer »

Skepdick wrote: Mon Nov 04, 2019 9:49 pm
HexHammer wrote: Mon Nov 04, 2019 8:29 pm
Skepdick wrote: Mon Nov 04, 2019 4:45 pmYadda yadda, bla bla ..bla!!
Let's talk again when you can do your own lawsuits and are a millionaire.
That's an arbitrary requirement. It's because I am a millionaire that I can afford lawyers to do the boring legal stuff for me.

So let's talk now. Or have you run out of anything valuable to say on the matter at hand?
When you can do your own lawsuits, which requires an immensely high rationale if you don't have any legal education - and I greatly doubt you are a millionaire.
henry quirk
Posts: 14706
Joined: Fri May 09, 2008 8:07 pm
Location: Right here, a little less busy.

Re: A.I. requires human consciousness for sentience

Post by henry quirk »

I've always thought of Artificial Intelligence as the synthetic version of thinking, feeling, self-aware 'us'. Nowadays, though, AI is a label, like 'smart', that gets applied to any technology that seems multi-purposed or that evidences a degree of pseudo autonomy.

Really, AI is just a marketing thing now.

So: instead of askin' about AI, mebbe we should ask about what 'synthetic self-awareness' needs to come about.

Goin' by the one example of 'self-awareness' we know of, I say that we ought start with a 'brain' (of particular and peculiar complexity) irrevocably embedded in a 'body' (of particular and peculiar complexity).

"We're not computers, Sebastian, we're physical." -Roy Batty
Zelebg
Posts: 84
Joined: Wed Oct 23, 2019 3:48 am

Re: A.I. requires human consciousness for sentience

Post by Zelebg »

Skepdick wrote: Mon Nov 04, 2019 8:01 am When it comes right down to it programming is nothing more than humans meticulously explaining to computers how to perform a particular task.
Execution of specific functions goes step by step. But which functions will run, with what parameters and when, varies relative not only to external events; since the process is recursive, changes of parameters and function branching may be triggered by the "thought" process itself.

One built-in goal is sufficient. The side-goals along the way can be decided relative to the main goal and current circumstances. If the choices currently on offer are not relevant to the main goal, then it may pursue any goal chosen in whichever random or non-random way.
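A minimal Python sketch of that dispatch idea (the functions and the built-in goal are hypothetical, chosen only to illustrate the shape): which function runs next is decided by the process itself, relative to one main goal.

```python
def choose_action(state):
    """Pick the next sub-goal relative to the main goal (reach energy 10)."""
    if state["energy"] < 10:
        return gather          # sub-goal serving the main goal
    return idle                # main goal met: any behaviour will do

def gather(state):
    state["energy"] += 3

def idle(state):
    pass

state = {"energy": 0}
for _ in range(6):             # the "thought" loop: repeated re-dispatch
    choose_action(state)(state)
print(state["energy"])
```

The branching is internal - no external event tells it when to switch from `gather` to `idle` - which is the recursion-driven variation described above, in miniature.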
We can't explain what we ourselves don't understand and we don't understand how we, humans, arrive at our own goals, objectives and sub-objectives.
Computation in itself is not relevant. We are talking about "something" experiencing that computation, sensation, or emotion. We are talking about sentience, which means "subjective experience", and it is the "subjective" part of it - or 'qualia', or 'consciousness' - that the title of this thread is asking about.

I say this self-awareness, i.e. qualia, i.e. sentience, i.e. consciousness, is not about computation but about hardware and interface. It seems, as far as we know, that this "hardware and interface" that could fit the bill here and explain this _subjective_ phenomenon has no parallel in any of our sciences, except science fiction. Seriously, some kind of "dream" of the type of 'Total Recall' or 'The Matrix' is the only kind of mechanics we know of that could, at least in principle, address this problem.
Dontaskme
Posts: 16940
Joined: Sat Mar 12, 2016 2:07 pm
Location: Nowhere

Re: A.I. requires human consciousness for sentience

Post by Dontaskme »

roydop wrote: Mon Nov 04, 2019 1:49 pm
The consciousness that has "gone out" (forgotten Self) is collapsing back into the singularity (remembering Self). The illusion is the "leaving" and "coming back" parts. That's why I prefer "forgetting" and "remembering". So the singularity is outside of time and space. It is a metaphor for the Self, which is changeless and timeless.

Physical death has no special qualities. It doesn't affect karma. Karma, like gravity, extends through all dimensions/realms.

Mirrors are good metaphors. I see the situation like a mirror facing a mirror. One mirror is body/physical, the other mind/digital. The ponzi scheme is that one image supports the other image, but it's what is looking into the mirror that's real. So this cycle of the ponzi is collapsing. However, it goes on for infinity, so another large cycle will take its place.

The way to escape the cycle (Samsara) is to pull Awareness away from thoughts and sensations and remain "inward facing" onto Self/stillness.
Very good, thanks.
The last bit is precisely accurate, and pretty much what the real Self is.
Skepdick
Posts: 14504
Joined: Fri Jun 14, 2019 11:16 am

Re: A.I. requires human consciousness for sentience

Post by Skepdick »

HexHammer wrote: Mon Nov 04, 2019 11:34 pm When you can do your own lawsuits, which requires an immensely high rationale if you don't have any legal education - and I greatly doubt you are a millionaire.
Whatever your doubts about me - I don't really care to convince you of anything.

Rather tell us why Bernstein doesn't satisfy your criteria.

He did his own constitutional court case.
He is a millionaire.

Where's the AI?