Send Yasmin_discussions mailing list submissions to
yasmin_discussions@ntlab.gr
To subscribe or unsubscribe via the World Wide Web, visit
https://ntlab.gr/mailman/listinfo/yasmin_discussions_ntlab.gr
or, via email, send a message with subject or body 'help' to
yasmin_discussions-request@ntlab.gr
You can reach the person managing the list at
yasmin_discussions-owner@ntlab.gr
When replying, please edit your Subject line so it is more specific
than "Re: Contents of Yasmin_discussions digest..."
THIS IS THE YASMIN-DISCUSSIONS DIGEST
Today's Topics:
1. Re: dangerous art and dangerous science (YASMIN DISCUSSIONS)
----------------------------------------------------------------------
Message: 1
Date: Thu, 11 Jul 2019 12:13:54 -0400
From: YASMIN DISCUSSIONS <yasmin_discussions@ntlab.gr>
To: Jon Ippolito <jippolito@maine.edu>, yasmin_discussions@ntlab.gr
Subject: Re: [Yasmin_discussions] dangerous art and dangerous science
Message-ID:
<mailman.17.1562867705.33654.yasmin_discussions_ntlab.gr@ntlab.gr>
Content-Type: text/plain; charset=utf-8; format=flowed
Hello everyone,
Before joining in the discussion, I would like to introduce myself. My
name is Sofian Audry (https://sofianaudry.com/). I am an artist and
researcher working at the crossroads of artificial intelligence art and
computer art. I am Assistant Professor of New Media at the University of
Maine, where I teach creative programming and AI for art and design, and
where I run the Art + Artificial Agents laboratory (http://a3-lab.com/).
Over the past decade I have been working at the crossroads between
machine learning and art through research-creation / art practice. I am
in the process of writing a book on machine learning and art which
explores how artists have (and haven't) been working with machine
learning since the 1950s, and the significance of art practice as an
alternative approach to machine learning (and more generally to AI).
One of the dangers of AI I would like to point out is the degree of
confusion which seems to surround AI at the moment, presumably in large
part because most of the content is being generated by the media (who
don't really care about understanding the science) or by the
communication/marketing departments of GAFAM and other corporations that
sell AI (who care about the social acceptability and marketability of
their product). AI is a very wide field and a highly ambiguous term. I
think the discussion becomes most interesting if we are able to
contextualize our reflections around what is specifically new with
current-day AI, and how this novelty specifically affects art,
science, society, etc. in ways that differ from previous iterations of AI.
I think it's important first to mention that the reason we are talking
so much about AI today is closely related to a specific sub-sub-branch
of AI which used to be called "connectionism" or "neural nets" --
rebranded as "deep learning". To make a long story short, in the
mid-2000s this field (which had been around since the 1940s but had been
largely abandoned by most of the AI community) made a comeback thanks
to a series of breakthroughs in which researchers were able to train big,
multi-layered neural nets in ways previously not possible. They also
showed that these new systems were scalable to huge databases (i.e. the
more data, the more they can learn). This was timely because (1)
these findings were immediately applicable to *many* problems, not only
in computer vision but also in bioinformatics, speech, language, finance,
etc., and (2) the commercialization of the internet created big databases
that companies could suddenly monetize in unprecedented ways. In
summary: after five decades, AI finally became (highly) profitable and
companies were ready to reap the benefits.
Another thing I need to mention is that as profitable as these new
algorithms are, they are also extremely limited and kind of "dumb". [1]
The kind of "intelligence" they have is really basic: they still need a
lot of data to learn even simple things (much more than humans and
animals do), and when they perform well they can usually do so only on
one very specific task (e.g. driving a car, recognizing faces, composing
music that sounds like Chopin, etc.). We are thus very far from animal,
let alone human-level, cognition.
So if machine learning, and in particular "deep learning", is driving the
current "AI hype", what does this mean for the arts?
I agree with Jon that contemporary AI is not so much in a rupture with
earlier algorithmic approaches such as stochastics/chance, complexity
and symbolic AI (e.g. expert systems) -- at least, not in the way that
the media portray it (i.e. as smart, magical "black boxes" that can learn
anything and will soon surpass humans in all cognitive tasks). Yet it is
true that deep learning is also very different in essence from more
traditional approaches within AI.
There are three broad areas where artists can work with ML, which
correspond to the components that make up a machine learning system: (1)
the learning process, (2) the model/machine, and (3) the data. Most artists
are working on the side of data, using "readymade" algorithms trained on
datasets. The most interesting works are those in which the artists
create their own dataset. For example, to create her work "Mosaic Virus",
UK artist Anna Ridler photographed 10,000 tulips which she acquired in
the Netherlands. She then trained a type of neural net known as a GAN
(Generative Adversarial Network), which is able to generate new
images of tulips. Ridler also exhibits the dataset itself as a separate
artwork. She reflects on the expensive, repetitive, tiring, long process
required to create the dataset for the algorithm to digest, which is
very akin to craft.
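To make this workflow a bit more concrete for readers who code, here is a
minimal sketch of the kind of pipeline involved. This is not Ridler's
actual code: it only assumes a standard PyTorch/torchvision setup and a
hypothetical folder of one's own photographs, which is precisely the part
an artist replaces with their own dataset.

import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

latent_dim = 100  # size of the random noise vector fed to the generator

# Generator: maps noise vectors to 64x64 RGB images.
generator = nn.Sequential(
    nn.ConvTranspose2d(latent_dim, 256, 4, 1, 0), nn.BatchNorm2d(256), nn.ReLU(),
    nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(),
    nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(),
    nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.BatchNorm2d(32), nn.ReLU(),
    nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Tanh(),
)

# Discriminator: maps 64x64 RGB images to a single real/fake score (a logit).
discriminator = nn.Sequential(
    nn.Conv2d(3, 32, 4, 2, 1), nn.LeakyReLU(0.2),
    nn.Conv2d(32, 64, 4, 2, 1), nn.LeakyReLU(0.2),
    nn.Conv2d(64, 128, 4, 2, 1), nn.LeakyReLU(0.2),
    nn.Conv2d(128, 256, 4, 2, 1), nn.LeakyReLU(0.2),
    nn.Conv2d(256, 1, 4, 1, 0), nn.Flatten(),
)

# The artist's own dataset (hypothetical path; ImageFolder expects the images
# to sit inside at least one subfolder).
dataset = datasets.ImageFolder(
    "./tulips",
    transform=transforms.Compose([
        transforms.Resize(64), transforms.CenterCrop(64),
        transforms.ToTensor(),
        transforms.Normalize([0.5] * 3, [0.5] * 3),
    ]),
)
loader = DataLoader(dataset, batch_size=64, shuffle=True)

criterion = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4, betas=(0.5, 0.999))

for epoch in range(50):  # number of passes over the dataset is arbitrary here
    for real, _ in loader:
        noise = torch.randn(real.size(0), latent_dim, 1, 1)
        fake = generator(noise)

        # 1. Train the discriminator to tell real photographs from generated ones.
        loss_d = criterion(discriminator(real), torch.ones(real.size(0), 1)) + \
                 criterion(discriminator(fake.detach()), torch.zeros(real.size(0), 1))
        opt_d.zero_grad(); loss_d.backward(); opt_d.step()

        # 2. Train the generator to fool the discriminator.
        loss_g = criterion(discriminator(fake), torch.ones(real.size(0), 1))
        opt_g.zero_grad(); loss_g.backward(); opt_g.step()

# After training, "tulips" that never existed are sampled from pure noise.
with torch.no_grad():
    samples = generator(torch.randn(16, latent_dim, 1, 1))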
Some artists explore the learning process itself for its aesthetic
potential. This was especially present in pre-2000s artworks, for
example in Karl Sims' Galapagos, where the audience could participate in
an evolutionary process of artificial lifeforms (which is a form of machine
learning), or in Nicolas Baginsky's The Three Sirens, a robotic improv
jazz band whose robots learned in real time using simple neural
nets. Baginsky describes the evolution of the robot performance through
time, starting with very random music, stabilizing into lively yet more
organized music, and eventually becoming too conservative and a bit boring.
The model (i.e. the algorithmic structure that is being trained by the
learning procedure on the dataset) plays an important role in a machine
learning system. Different types of models afford different aesthetic
effects, and often involve profoundly different approaches in terms of
practice. As a result, specific artistic movements and genres have
attached themselves to specific kinds of models: for example,
evolutionary art mostly concerns parametric functions, whereas the
emerging "neuro-aesthetics" movement largely concerns itself with the
aesthetic and conceptual potential of generative deep learning neural
networks.
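As a toy illustration of what I mean by a parametric-function model (my own
sketch, unrelated to the specific works cited), an evolutionary loop simply
mutates and selects a small vector of parameters. In interactive pieces such
as Sims' Galapagos, the numeric fitness below would be replaced by the
audience's choices.

import random

def render(params):
    # The "model" is a fixed parametric function of its parameters.
    a, b, c = params
    return [a * t * t + b * t + c for t in range(100)]

def fitness(params):
    # Stand-in for aesthetic judgement; arbitrary numeric target, for illustration.
    curve = render(params)
    return -abs(sum(curve) / len(curve) - 42.0)

# Random initial population of parameter vectors.
population = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(20)]

for generation in range(100):
    population.sort(key=fitness, reverse=True)
    parents = population[:5]  # selection: keep the "best" individuals
    population = [
        [p + random.gauss(0, 0.1) for p in random.choice(parents)]  # mutation
        for _ in range(20)
    ]

best = max(population, key=fitness)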
In terms of practice, machine learning is in many ways very different
from traditional computing practice, where one has to program all the
rules of the system. Computer programming is more akin to an engineering
approach, where one needs to build a software architecture for what one
has in mind, trying to turn one's ideas into code. With machine
learning, the user (e.g. the artist or the data scientist) will instead
provide direct examples to the machine learning system, but let the
system make its own decisions. Hence, this practice is closer to that of
experimental science, where one chooses parameters, runs the experiment,
sees the results, makes adjustments, and builds the work through this
iterative process of trial and error.
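A toy comparison of the two practices (my own sketch, with made-up data): in
the first case the decision rule is written by hand; in the second the same
decision is learned from labelled examples with an off-the-shelf classifier,
and one iterates by inspecting the result and adjusting the data or the
parameters.

# Engineering approach: the programmer writes the rule explicitly.
def is_bright_rule(pixel_mean):
    return pixel_mean > 0.6  # a hand-chosen threshold

# Machine-learning approach: provide examples, let the system decide.
from sklearn.linear_model import LogisticRegression

examples = [[0.1], [0.3], [0.55], [0.7], [0.9]]  # e.g. mean brightness of images
labels = [0, 0, 0, 1, 1]                         # 0 = "dark", 1 = "bright"

model = LogisticRegression()
model.fit(examples, labels)          # run the "experiment"
print(model.predict([[0.65]]))       # inspect the result, adjust, repeat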
Machine learning opens new forms of generative art that interface with
the world [2]. Artist Memo Akten talks about how generative deep neural
nets reveal aspects of our collective consciousness (which are,
nowadays, owned by huge multinational corporations and ironically stored
"in the cloud") [3]. About his project Everything that Happens will
Happen Today, where a generative neural network was trained on a dataset
of GPS paths taken by anonymous participants in NYC, the artist Brian
House writes: "The intelligence of AI is not spontaneous, but
socialized. It is uncanny not because it acts as if it were human, but
because it is humans, plural." [4]
Sofian Audry, PhD, MA, MSc
Assistant Professor of New Media
School of Computing and Information Science
5711 Boardman Hall #238 | +1 207 581-2951
http://sofianaudry.com
University of Maine | Orono, ME 04469
http://umainenewmedia.org | http://imrccenter.com | http://umaine.edu
[1] Yoshua Bengio (one of the most prominent scientists in the field)
once compared them to "toasters".
[2] This was suggested to me in an interview with Dutch artists
Driessens and Verstappen, who have been working with generative
algorithms since the 1990s. They mentioned that deep learning allowed
them to connect generative systems to the world -- rather than
generating "from scratch" using algorithmic processes.
[3]
https://utvilsm.blogspot.com/2019/06/keeper-of-our-collective-consciousness.html
[4] https://brianhouse.net/works/everything_that_happens_will_happen_today/
On 2019-07-10 1:37 p.m., Jon Ippolito wrote:
> Hi Roger,
>
> You raise fascinating questions about the ethics of AI science to go
> with the questions I raised about AI art. Accountability is a huge
> question for machine learning in general, since the inscrutability of
> evolved neural networks prevents us from auditing them for the machine
> equivalent of mental illness or bigotry. I'm curious if other legal
> scholars may have thought about the gnarly questions that emerge from
> AI-mediated evidence in courtrooms.
>
> I suppose it's possible that physicists will come up with new laws to
> govern new evolutionary paradigms, as you suggest in your previous
> email. Earlier this week Nobel-winning physicist Frank Wilczek
> reviewed three books on the "end of physics" in the Wall Street
> Journal [paywalled]:
>
> https://www.wsj.com/articles/have-we-come-to-the-end-of-physics-11562334798
>
> The Stuart Kauffman book you liked wasn't among them, but Wilczek
> cited Sabine Hossenfelder ("Lost in Math"), Richard Dawid ("String
> Theory and the Scientific Method") and John Horgan ("The End of
> Science"). Spoiler: Wilczek thinks physics is fine. He's not worried
> that it has plateaued because physicists can use that foundation to
> make new instruments to look deeper--presumably including AI assistants.
>
> For my part, I'm skeptical that new instruments or theories built upon
> them will clear up the remaining mysteries of the physical world. In
> the 1960s and 70s scientists like Kauffman already helped usher in a
> sea-change in accounting for systems far from equilibrium -- what we now
> call complexity science. Its insights are bewitching, especially for
> artists: we can emulate the sound of rain from random noise, or make
> realistic-looking scenery with fractals, or just get lost in the
> vertiginous Mandelbrot Set. Complexity science even helps us predict
> the weather -- but only to a degree, and nothing like the precise
> clockwork of ballistics. And it seems to me that the technologies our
> mastery of physics has made possible, from algo trading to fracking
> to DDOS attacks, are making the world less predictable rather than
> more so.
>
> Speaking of unpredictability, Bill Joel asked how today's AI differs
> from John Cage's use of the I Ching in his compositions. Bill is right
> that algorithmic art has a long history. Chance-based music goes back
> at least to Europe in the 1700s -- including a game attributed to
> Mozart -- and certainly embodies a similar inscrutability to today's
> machine learning. (Not much point in cross-examining a pair of dice to
> find out why it rolled snake eyes.)
>
> That said, the unpredictability of today's machine learning derives
> not from a simple chance operation, nor from an expert system. It can
> be "trained" on many types of data and contexts, but that training is
> an organic process that results in a mess of spaghetti code that works
> but is difficult to tease apart.
>
> I'll defer to my colleague Sofian Audry to chart the various types of
> AI and how artists have employed them. Till then, as Roger suggests
> I'll append a brief bio below and look forward to learning from others
> on this list!
>
> jon
> ________________
> Jon Ippolito is Professor of New Media and Director of the Digital
> Curation program at the University of Maine. His current
> projects--including the Variable Media Network, ThoughtMesh, and his
> co-authored books At the Edge of Art and Re-collection--aim to expand
> the art world beyond its traditional preoccupations.
>
>> On Jul 10, 2019, at 5:00 AM, yasmin_discussions-request@ntlab.gr wrote:
>>
>> Message: 1
>> Date: Wed, 10 Jul 2019 09:50:59 +0200
>> From: YASMIN DISCUSSIONS <yasmin_discussions@ntlab.gr>
>> To: yasmin_discussions@ntlab.gr
>> Subject: [Yasmin_discussions] dangerous art and dangerous science
>> Message-ID:
>> <mailman.8.1562745214.33654.yasmin_discussions_ntlab.gr@ntlab.gr>
>> Content-Type: text/plain; charset="UTF-8"
>>
>> yasminers
>>
>> we have a few new members who have joined the yasmin art/sci/tech village.
>> Let me encourage all new members to send in a short email introducing
>> themselves and their interests. In a healthy village, when you meet a new
>> person on the street, you do this!
>>
>>
>> Meanwhile, I hope other members will join in the discussion on AI and
>> ethics/dangerous art.
>> I would like to add to the discussion soup that AI is indeed
>> potentially dangerous, but in Terry Irwin's language we have not done
>> deep transition design -- and expected, predictable, negative aspects are
>> being addressed too late. Major institutions are only now setting up
>> programs on AI and ethics (e.g. MIT), when this should have started at
>> least 25 years ago.
>>
>> As an astrophysicist I am aware of the growing impact on the way that
>> science is done, with AI beings actually being the ones making the
>> discovery and not the human being. In the 'normal' way of doing science
>> one can talk to the scientist and ask probing questions about the
>> methodology, the validity of the verification, and implicit biases.
>> Unfortunately scientists now admit that the discovery is made by the
>> AI, but it's impossible to interrogate the AI scientist rigorously. At
>> what point does the AI scientist become an actual co-author? And when
>> the work done by the AI is found to be erroneous, does the AI
>> scientist retract the paper, and does the university dismiss the AI
>> scientist for academic fraud? I have been using this line of argument
>> to push for us to start transition-redesigning science, so we can
>> anticipate and mitigate the predictable 'dangerous science' that will
>> result when we accept scientific results as fact even though humans
>> cannot validate or confirm them. How can you replicate a scientific
>> experiment or analysis when the AI being is unable to explain what it
>> did? Judge John Marshall of Dallas, Texas, who worked on the early
>> Apollo program, has been trying to argue that the same problem is now
>> arising frequently in legal cases where AI is being used to analyse
>> the evidence and recommend a verdict, but it is impossible to
>> cross-examine the witness as is normal in court. These are all
>> anticipatable dangers.
>>
>> Similarly, we now see videos of artificial beings that are
>> indistinguishable on video from a real person's activity, edited
>> using AI techniques -- such as the videos now circulating of
>> famous people saying things they never said themselves, in which the AI
>> being, an animated X, is totally believable. These techniques were
>> developed by members of the art and technology village. Dangerous art
>> indeed.
>>
>> Maybe the yasmin villagers have some suggestions of how we go forward
>> in the age of dangerous art and dangerous science.
>>
>>
>> Roger is in Paris
>>
>>
>>
------------------------------
Subject: Digest Footer
_______________________________________________
Yasmin_discussions mailing list
Yasmin_discussions@ntlab.gr
https://ntlab.gr/mailman/listinfo/yasmin_discussions_ntlab.gr
------------------------------
End of Yasmin_discussions Digest, Vol 10, Issue 3
*************************************************