Simulating subjective experience (i.e. re/constructing it within an
artifact) presents the same difficulty as showing that God is on the
inside. Alan Watts cut into an apple to show his children that God is
on the inside, and showed that every time you cut into another piece
of the apple, you just see more of its outside (i.e. one cannot reach
God through analysis). If we could access subjective experience in
any way other than by directly experiencing it, would it not cease to
be subjective? We can talk about it, and we can measure its
correlates (e.g. EEG activity), but words and electrical signals are
not subjective experiences.
It is comparatively easy to simulate emotion in artifacts. Maturana
said that an automobile has emotion. "You put it in first gear and you
have a powerful car. You say, 'Look how powerful this car is in
first!' It's aggressive, because when you scarcely touch the
accelerator, vrrooom! It takes off!" But isn't that metaphorical, we
ask? "To a certain extent, but more than metaphorical it is
'isophorical,' that is, it refers to something in the same class. You
put the car in fifth and you travel at a higher speed, and the car is
peaceful, fluid, serene. What is happening there? Each time you change
gears, you change the internal configuration of the automobile and it
does different things. Emotions correspond precisely to that, from the
biological perspective they are internal changes in configuration that
transform the reactivity of the living being, such that the living
being in the relational space is different."
(http://www.tierramerica.info/2000/1126/questions.html)
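To make Maturana's "isophor" concrete, here is a minimal sketch of my
own (not his): an object whose internal configuration determines how it
reacts to an identical input, just as shifting gears changes how the
car answers the accelerator. The class and the ratio numbers are
invented for illustration (Python):

# Gear ratios are made-up numbers, chosen only for contrast.
class Car:
    RATIOS = {1: 4.0, 5: 0.8}

    def __init__(self):
        self.gear = 1  # the internal configuration

    def shift(self, gear):
        self.gear = gear  # a change of configuration: the "emotion" changes

    def respond(self, accelerator):
        # identical stimulus, configuration-dependent reactivity
        return accelerator * self.RATIOS[self.gear]

car = Car()
print(car.respond(0.1))  # first gear: 0.4 -- "aggressive"
car.shift(5)
print(car.respond(0.1))  # fifth gear: 0.08 -- "peaceful, fluid, serene"

Nothing "felt" is required for the sketch to work: the change in
reactivity is entirely a change in internal configuration.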
Note the role of the observer in attributing emotion to the
automobile. The same happens with empathy. When we observe a mother
bear become "angry" when a foreign agent approaches her cub, we do
not (cannot?) know for certain that she actually feels anger. We
infer that she does when she expresses herself in a way that we
recognize as homomorphic with our own expressions when we feel anger.
(The "inference" may not be logical, but a simpler computation of
difference.) Ditto for how we respond to Johnny 5, WALL-E, etc.
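Such a "computation of difference" could be as simple as comparing an
observed expression against templates of our own expressions and
attributing whichever differs least. A minimal sketch in Python; the
feature names and numbers are invented:

# Attribute an emotion by nearest-template difference. The features
# (loudness, approach speed, stillness) and their values are invented.
OWN_TEMPLATES = {
    "anger": (0.9, 0.8, 0.1),
    "calm":  (0.1, 0.1, 0.9),
}

def difference(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

def attribute(observed):
    # no logic and no understanding -- just the smallest difference
    # from our own expression templates
    return min(OWN_TEMPLATES,
               key=lambda e: difference(observed, OWN_TEMPLATES[e]))

print(attribute((0.8, 0.9, 0.2)))  # -> "anger"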
Empathy requires the ability to model another's emotional state (here
I do not mean "model" in the sense of an imaginative construct, but in
the Conant-Ashby sense, as a homomorphism), and I do not suppose that
requires understanding (of the kind humans acquire through language and
complex, imaginative models) or personhood (although it may approach
personhood especially as it develops or is developed to respond to
persons). Here are two projects using computers that detect human
emotional states and respond accordingly -- note that they are both
attempts at understanding and personhood:
1. Cambridge Ideas - The Emotional Computer
(http://www.youtube.com/watch?v=whCJ4NLUSB8)
2. The New Face of Autism Therapy
(http://www.popsci.com/science/article/2010-05/humanoid-robots-are-new-therapists)
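Returning to "model" in the Conant-Ashby sense: a many-to-one map h
from the other's (rich, inaccessible) states onto our coarse emotion
labels is a homomorphism when transitions commute, i.e.
h(T(s, e)) == S(h(s), e). A toy Python sketch, with states and events
invented for illustration:

# The other's transitions: (state, event) -> next state.
T = {
    ("resting", "threat"):  "aroused",
    ("aroused", "threat"):  "charging",
    ("aroused", "retreat"): "resting",
}
# Our many-to-one map onto coarse labels.
h = {"resting": "calm", "aroused": "angry", "charging": "angry"}
# Our coarse model's transitions over those labels.
S = {
    ("calm", "threat"):   "angry",
    ("angry", "threat"):  "angry",
    ("angry", "retreat"): "calm",
}
# The homomorphism condition, checked over every modeled transition.
for (state, event), nxt in T.items():
    assert S[(h[state], event)] == h[nxt], (state, event)
print("h is a homomorphism")

If the assertions hold, the two-label model tracks the other's
dynamics without representing them in full -- empathy as homomorphy
rather than understanding.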
Aside: Do Animals Feel Empathy?
(http://www.scientificamerican.com/article.cfm?id=do-animals-feel-empathy)
Sincerely,
Joshua
On Mon, Jan 24, 2011 at 11:20 AM, Natasha Vita-More <natasha@natasha.cc> wrote:
> Hi Jennifer,
>
> You wrote:
>
>>In the meantime, I apologize if my own posts have actually contributed
>>to the rise of confusion :) Here is a post about the simulation of
>>subjective experience. It's in addition to my previous posts.
>
> You are not confusing things, Jennifer. I think it is a question of how
> we want to talk about simulation, and your interest appears to be in
> simulation as having potential in computational systems, which is
> timely. I'm not a religious person, so I don't see it pertaining to God,
> but I do see it as pertaining to how we are all part of an evolving
> cybernetics.
>
>>Like I stated before, I'm interested in the Simulation of Empathy, well
>>known in humans, but considered impossible in computers (for AI
>>purposes). Empathy being the mental simulation of the experience of the
>>other.
>
> Empathy may be the most needed and also the most difficult experiential
> behavior to obtain. To have empathy, an agent needs to understand the
> thoughts, feelings, and state of another agent/person. To have empathy,
> then, the agent would also have to have "personhood." So what is
> personhood, if not being alive, self-aware, and able to make decisions?
> How can something make decisions if it is not alive and self-aware?
> Certainly AI makes decisions and interacts with its environment, but it
> is not alive. So the issue is: what makes AI alive?
>
>>Some believe that it is impossible for AI to feel: subjective
>>experience is ambiguous, and ambiguity causes error in a computer's
>>calculations. Some believe that subjective experience comes from a
>>heart and soul, and that computers do not have a soul and thus can
>>never experience subjectivity.
>
> AI is narrow. "Strong AI" is where we would have to begin, which takes
> us to the baby steps of A[G]I (i.e., artificial general intelligence,
> hereinafter "AGI"), which is where AI was originally headed before its
> winter (the inability to achieve its original directive of producing
> human-level intelligence). AGI offers the potential for being
> self-aware and able to make decisions based on "experience". Through
> its experience in its learning, it could obtain personhood at the
> juncture where the idea of life and death becomes redefined based on
> semi- and non-biological or synthetic systems which develop
> self-awareness and may want rights similar to the rights of humans.
>
> With all this said, empathy could be obtainable by AGIs. But I have to
> return to my original post on this one, if I may. A brain that is
> transferred or copied onto a computational system would also transfer
> or copy its mind (in the material sense), and that mind would contain
> the feelings, emotions, and sensorial memory of the biological person.
> If the AGI could relate to this, it would also become familiar with and
> experience the feelings, emotions, and sensory memory of the human. So
> the merging of humans and technology becomes even more blurred, and the
> AGI would learn empathy through its own experiential behavior.
>
> All my best,
> Natasha
>
> Natasha Vita-More
>
>