Some of you know me, but I'll briefly introduce myself for those who don't: I am Salvatore Iaconesi. I tend to act in a very performative way in domains spanning scientific research, the arts, and the invention of new practices and technologies. I currently teach a course called "Cross-media experimentations" at the faculty of architecture of Rome's University La Sapienza, and I am the president of a peculiar publishing house called FakePress: we don't publish any books, but we create wonderful publications for bodies, objects, architectures and environments.
I've been really enjoying the discussion on simulation, as it closely concerns many of the things I do. And this is why I decided to give my contribution after being a passive, but sincerely interested, reader for a while.
Yesterday we at FakePress published a webpage that performs a simple real-time analysis of the internet's emotional state. You can see it here:
If you wait a bit while the indexes get calculated, you will see a minimal graph forming before your eyes. This graph is generated by querying several real-time search engines such as Collecta and OneRiot, and by using the APIs provided by services such as Twitter, FriendFeed and Facebook. User contributions from the last 40 minutes of the web are analyzed to find sentences and phrasings expressing emotional states. Emotions are classified using the simple Plutchik scheme, and we use about 2000 synonyms and their derivatives to identify the terms we need.
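As a rough illustration of this kind of classification step (not the actual FakePress code), here is a minimal Python sketch: a tiny, made-up fragment of a Plutchik lexicon stands in for the ~2000-synonym list, and the sentences are invented examples rather than real user contributions.

```python
# Minimal sketch: classify short user contributions into Plutchik's eight
# basic emotions by matching words against a synonym lexicon.
# The lexicon below is a tiny illustrative fragment, not the real list.
import re
from collections import Counter

PLUTCHIK_LEXICON = {
    "joy":          {"happy", "glad", "delighted", "joyful"},
    "trust":        {"trust", "confident", "reliable"},
    "fear":         {"afraid", "scared", "terrified", "worried"},
    "surprise":     {"surprised", "astonished", "amazed"},
    "sadness":      {"sad", "unhappy", "miserable"},
    "disgust":      {"disgusted", "gross", "revolting"},
    "anger":        {"angry", "furious", "outraged"},
    "anticipation": {"eager", "expecting", "hopeful"},
}

def classify(sentence):
    """Return the set of Plutchik emotions whose trigger words appear."""
    words = set(re.findall(r"[a-z']+", sentence.lower()))
    return {emo for emo, triggers in PLUTCHIK_LEXICON.items()
            if words & triggers}

def emotional_state(sentences):
    """Aggregate emotion counts over a stream of recent contributions."""
    counts = Counter()
    for s in sentences:
        counts.update(classify(s))
    return counts

posts = [
    "i am so scared about tomorrow",
    "really happy and delighted today!",
    "this news makes me angry and worried",
]
print(emotional_state(posts))
```

The aggregated counts are what a graph like ours would then plot over a sliding 40-minute window.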
It is something we have already seen in a number of different forms (including the beautiful "We Feel Fine" by Sep Kamvar and Jonathan Harris), and we created it in this form to achieve a precise set of results.
One thing first: I must tell you all that I was a little sad when I saw the distinct predominance of "fear" as an emotion on a global scale.
With this said, I would like to explain how this little production interacts with the themes of simulation, and briefly describe the path that brought FakePress to experiment with it.
Language is a curious beast, and when we confronted the objective of identifying emotions on a linguistic level we faced an endless series of problems: idioms, cultural backgrounds, contexts, dependencies on time and place, slang, irony and much more. What we did was operate in two directions: on one side, simulating various interpretative scenarios; on the other, settling for a simplified scenario.
While the second path was pretty straightforward (it involved analyzing only sentences having a set of definite structures), the first idea (simulating interpretative scenarios for emotions) had a very suggestive feel to it, and it proved very interesting. What we came up with was a modular system in which we could configure a scenario in the form of a multidimensional matrix populated with weighting values, which helped the system select the contents most significant from the point of view of emotional expression, both in terms of word usage and sentence structure.
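A hedged sketch of that weighting-matrix idea, with all feature names, weights and sentences invented for illustration (the real system's matrix is multidimensional; here one dimension is kept for brevity):

```python
# A "scenario" as a matrix of weights over linguistic features. Each
# candidate sentence is scored by summing the weights of the features it
# exhibits; only high-scoring contributions are kept for emotion analysis.
# Feature names, weights and the threshold are illustrative assumptions.
SCENARIO = {
    "first_person": 1.5,   # "I feel ..." is a strong self-expression cue
    "exclamation":  0.8,
    "emotion_word": 2.0,
    "question":    -0.5,   # questions are often not self-expression
}

EMOTION_WORDS = {"afraid", "happy", "angry", "sad", "love", "hate"}

def features(sentence):
    s = sentence.lower()
    words = set(s.split())
    return {
        "first_person": s.startswith("i ") or " i " in s,
        "exclamation":  "!" in s,
        "emotion_word": bool(words & EMOTION_WORDS),
        "question":     "?" in s,
    }

def score(sentence, scenario=SCENARIO):
    f = features(sentence)
    return sum(w for name, w in scenario.items() if f[name])

def select(sentences, threshold=2.0):
    """Keep only the contributions most significant for emotion analysis."""
    return [s for s in sentences if score(s) >= threshold]

posts = ["i feel so afraid today!", "what time is the meeting?"]
print(select(posts))
```

Swapping in a different weight matrix is what lets the same machinery simulate a different interpretative scenario.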
We identified three procedures as truly effective: turning the process on itself, establishing feedback mechanisms; opening up the system to interaction; and finding ways in which the information we were gathering could be shown in an extremely synthetic and expressive form.
If you sum these things up, you can easily see that what we were unconsciously (yet scientifically) searching for was a simulative, emotional "machine": something organic enough to be highly expressive in a naturally synthetic fashion, dependent on relations with both external entities (interaction) and internal ones (feedback). A low-information, highly expressive form of publication of complex information (emotion) using information representation, transmission and simulation.
This is part of a series of researches we performed using a multitude of technologies and approaches.
The OneAvatar project, for example ( http://www.artisopensource.net/OneAvatar/ ), saw us creating a wearable technology whose objective was to "publish" digital sensations on a physical body, opening up scenarios for innovative forms of communication: sensorial stimuli sent multidirectionally across digital avatars and physical bodies, with the possibility to remix sensorialities, share them, and connect them to physical locations in time and space (at the Fabbrica del Vapore in Milan this sensorial transmission was connected to a hybrid videogame happening in both the digital and physical worlds).
Or the Dead on Second Life project ( http://www.artisopensource.net/dosl/main.html ), in which we used artificial intelligences and autonomous avatars to bring "back to (a) second life" Karl Marx, Coco Chanel and Franz Kafka. In this project we simulated the behaviour of these three characters through linguistic, physical and social behaviour modeling techniques. Here too the objective was, in more than one way, the design and implementation of a very low-information "machine", where by "low dose of information" we mean the simple, natural perception of the designed character: a truly complex piece of information, yet expressed in a very simple, natural form.
In all these projects, three elements arise as significant for the discussion: the research on simulation as an act of modeling that is both interactive and self-generated; the representation strategies and methodologies; and the search for highly expressive, low-information mechanisms.
Back to the beginning (the emotional graph), these three steps become truly operative. Along the lines of the OneAvatar project (and of the Conference Biofeedback project some of you already saw in Munich: http://www.artisopensource.net/2009/12/05/conference-biofeedback/ ), we are about to transform the emotional graph into a wearable technology. By possibly switching to a more complex classification scheme (Keltner?) and turning the system into something that "you can wear to fit" by configuring it (what emotions do I want to wear? whose? regarding what places/times? etc.), we will confront all three of the aforementioned issues.
The act of identifying an emotion will be paired with an act of simulation_for_evaluation and an act of simulation_for_reenactment, with a form of transcoding between the way the system identifies an emotion and the way we represent it on the body.
The act of representation will also involve several layers of simulation, as representing an emotion with (for example) a flashing LED is of little, basic interest. It is, instead, truly interesting to give a natural sensation to the way chosen to represent an emotion (for example, something to do with a chill for fear, the throat for anger, etcetera, or by following formalisms that approach the issue in ways comparable to chakras or others).
And, last, the focus on low dosages of highly expressive information as the key to successful implementations of natural forms of communication and simulation. You touch fire with your finger, you get burnt: a low dosage of very expressive information.
I hope I haven't bothered you all with too long a text; I wanted to get this all out together, as I feel it is a very compact research path.
I would love to discuss with you all the design of the wearable device we are building, as we are at that stage in which every contribution opens up entire new areas for research.
regards to you all,
On Sun, Jan 31, 2010 at 10:40 PM, roger malina <firstname.lastname@example.org> wrote:
> in both cases the simulation is presented as visual output-=in one
> case a dynamic graph and the other a video
> the fact that simulations have to be converted to visual output
> (or sonified) introduces very strange biases in how the simulation
> is displayed and interpreted-we tend to over emphasise structure even if
> a very small effect to guide the eye
> the scientific method itself is changing, as simulations acquire
> the status of explanations
Yasmin_discussions mailing list
Yasmin URL: http://www.media.uoa.gr/yasmin