Artificial Intelligence and Desire: Defining the Line Between Man and Machine through Film by Muhammad Ahmed bin Anwar Bahajjaj

The ‘imitation game’ was developed by Alan Turing as a means of answering the question of whether machines can ‘think’. The game relies upon a human judge to distinguish artificial intelligence from that of a real human (Turing 433). In modern terminology, however, owing to the many variations of artificial intelligence implementations, we refer to any effort to distinguish man from machine as a ‘Turing test’. Imagine for a moment the administration of an advanced version of this test. We consider the most extreme hypothetical scenario: an artificial intelligence that can flawlessly imitate human conversational patterns and tones. We then put it into an android that by some marvel of science appears and feels identical to a human, able to replicate and simulate human biological functions and their associated mannerisms. The machine is then presented to us together with a human being, and our task is to decipher which of the two is the machine — that is, to judge not just whether a machine can ‘think’, but whether it can be accepted as human. Intuitively, the metrics we would most likely utilise to accomplish our task would be linked to emotion. After all, we commonly regard emotion as a defining human trait – the line between man and machine.

I propose that it is not so much emotive expression, but rather, desire and our recognition of it, that distinguishes man from machine. To grapple with this concept, I will be looking at portrayals of three human-like robots featured in popular films: David from Artificial Intelligence, Sonny from I, Robot, and Ava from Ex Machina. I will outline the traits and characteristics they exhibit that define their humanity, as evidenced by diegetic acceptance by humans. In these films, which have depicted intelligence in a machine akin to that of a human, it is desire that enables human recognition of the humanity in artificial intelligence.

Desire is defined as “The feeling or emotion which is directed to the attainment or possession of some object from which pleasure or satisfaction is expected; longing, craving; a wish” (“desire, n.”). To desire is to long, covet, and crave (“desire, v.”). Desire thus operates as a very specific form of emotion.

It is worth pausing at this juncture to ask why we turn to film for our answer. After all, movies might not accurately portray the human condition, let alone a manifestation of advanced artificial intelligence. Yet even if films are not descriptive of our cognitive processes, they are at the very least projections of our beliefs regarding the human condition. Furthermore, films inspire, and more importantly, affect the way we feel about the concepts they portray (Sherak, The Role of Film in Society). Films are the means upon which the masses often rely to form perspectives on abstract concepts. Until such a time as an android can be fashioned to our hypothetical specifications, we might turn to films to guide our understanding of the line between man and machine.

In the movie Artificial Intelligence, David is a child robot sent to Monica, as a trial for a new series of robots, to replace her living son, who is in cryo-stasis due to an incurable disease. After a cure is found and her son comes home, Monica is ordered to return David to be destroyed, but on the way there, in a moment of emotional impulse, she instead abandons him by the side of the road. Inspired by the story of Pinocchio, David sets out on a quest to find the Blue Fairy and become a real boy.

David is recognised for his humanity by Monica, but not by any other member of the family, because she alone witnesses his expressions of desire. Monica says to her husband, “He is only a child,” to which he replies, “He’s a toy” (Artificial Intelligence 00:25:22-00:25:27). Monica’s acceptance of David’s humanity can be attributed to her unique experience of having been imprinted onto David. An explicit reference to this experience of desire occurs when David says to Monica, “I hope you never die” (Artificial Intelligence 00:26:00-00:26:50). Throughout the time spent with Monica and her family, David is never truly recognised for his humanity by any other human being, despite expressing a complete range of emotions – from fear to sadness to joy. This rejection of David’s humanity is epitomised in the swimming pool scene, where one of Martin’s friends emphasises the disparity between themselves and David (Artificial Intelligence 00:42:57-00:43:15). We can therefore conclude that the distinguishing factor lies in the expression of desire.

David is again recognised for his humanity when he pleads for his life, an expression of the desire to live. At the flesh fair, David is first mistaken for a real boy by the organisers. Afterwards, when faced with the prospect of death via public execution, he makes an emotional appeal to the audience, “Don’t burn me! I’m not Pinocchio! Don’t make me die! I’m David!”, which is met with this response from an audience member, “Mecha don’t plead for their lives!” (Artificial Intelligence 1:16:05-1:16:48). Only when he explicitly declares his will to live is he acknowledged by humans as a real boy.

Manifestations of David’s desire are revealed to have been the unique quality that distinguishes him in the scene where he believes he has reached his destination and consequently meets his maker, Professor Hobby:

Until you were born, robots didn’t dream, robots didn’t desire unless we told them what to want. […] You found a fairy tale, and inspired by love, fuelled by desire, you set on a journey to make her real. […] we didn’t make our presence known because our test was a simple one. Where would your self-motivated reasoning take you? To the logical conclusion that Blue Fairy is part of the great human flaw to wish for things that don’t exist, or to the greatest single human gift – the ability to chase down our dreams. (Artificial Intelligence 1:41:50-1:42:40)

Expressed in his explanation to David is the concept of motivation, that is, how his desire to be a real boy drove him to the greatest single human gift. It is at this moment that David is recognised for his humanity by Professor Hobby.

In the movie I, Robot, Sonny is an android initially suspected of murder by Detective Spooner. As the plot progresses, however, we find that Sonny was actually a special creation of Alfred Lanning, the man he was suspected of murdering. Moreover, Sonny was specifically tasked by Lanning to help lead Detective Spooner to the true enemy, the artificial intelligence VIKI which planned to subjugate the entire human race.

Detective Spooner’s recognition of Sonny as a human being is built up from the connections the film draws around Sonny’s ability to dream. Spooner first acknowledges Sonny’s humanity when he refers to Sonny as ‘someone’ rather than ‘something’ at the conclusion of the scene where Sonny is asked to sketch out his dream (I, Robot 1:08:37-1:09:41). Spooner finds this clue after watching a video of Alfred Lanning, in which Lanning says, “One day they’ll have secrets. One day they’ll have dreams” (I, Robot 49:43-49:56). The first breadcrumb in this series of deductions originates from the interrogation scene, where Sonny tells Spooner, “I have even had dreams” (I, Robot 29:10-30:46). Detective Spooner’s interaction with Sonny prior to this acknowledgement was limited to the interrogation, and yet it left a lasting enough impression on him that, upon meeting Sonny again after receiving validation for his deduction, he recognised Sonny as a human being.

Sonny’s ability to dream is subtly linked to his ability to love, as seen in the impression left upon Detective Spooner that leads him to acknowledge Sonny as a human being. During the sketch scene, where Sonny provides Spooner with a vital clue in the investigation, Sonny references their previous encounter (the interrogation): “You’re right Detective. I cannot create a great work of art” (I, Robot 1:09:05-1:09:10). The conclusion of the interrogation shows Sonny professing love for his creator, Alfred Lanning: “You have to do what someone asks you. Don’t you, Detective Spooner? […] If you love them” (I, Robot 31:27-31:40). This connection between the two interactions, which leads to the detective’s subsequent recognition of Sonny’s humanity, is intentionally set up so that we can observe how accepting that a robot can love leads to acknowledging its humanity.

Since dreams and love are manifestations of desire, Detective Spooner’s recognition of Sonny’s humanity is therefore transitively caused by desire. Freud tells us that dreams are in fact unrealised desires (Dream Psychology). To have unrealised desires presupposes the existence of desire, implying that dreams are reflective of desire. Not all forms of desire manifest in love, but love is essentially always a manifestation of desire (Plato 205d). Therefore, when Spooner was moved to recognise Sonny’s humanity upon experiencing Sonny’s expression of love and acknowledging his ability to dream, he was actually recognising Sonny’s ability to desire.

In the movie Ex Machina, Ava is a humanoid artificial intelligence, the brainchild of Nathan. Caleb is invited by Nathan to conduct a Turing test and judge whether Ava passes. Over the course of six Sessions, Caleb falls in love with Ava, and he eventually works to set her free from Nathan’s grasp.

Caleb’s recognition of Ava’s humanity within the film is contingent upon his belief that Ava can genuinely desire. The first explicit expression of such desire occurs during Session 3, where Ava professes her attraction to Caleb (Ex Machina 38:55-45:55). In Session 4, Ava reveals to Caleb that she is the source of the power cuts, that it is by her wish that they converse unobserved (Ex Machina 50:37-53:27). In Session 5, Ava expresses to Caleb that she does not wish to die, and during a power cut, says, “I want to be with you. Do you want to be with me?” (Ex Machina 1:00:38-1:04:00). This leads up to Caleb’s decision to free Ava (Ex Machina 1:15:34-1:16:55). Caleb’s recognition of Ava as having achieved consciousness, and therefore deserving of life and liberty, hinges upon his trust and belief that Ava can want him, or at the very least, experience genuine desire. Her conversational ability and emotional insight leave Caleb with the impression that she is a sophisticated artificial intelligence, but it is his belief in her ability to desire that, for Caleb, removes the line between man and machine.

Though we observe that Ava only pretends to love Caleb, this does not diminish the value of his recognition. As our focus is on the diegetic experience, how we as the audience acknowledge Ava’s humanity is separate from how the characters portrayed in the film come to acknowledge it. We use this distinction because it is harder to qualify her characteristics as human when we consider our diverse personal opinions on the matter, whereas the characters in the film are clearly portrayed and understood.

In this particular film, however, even if we were to consider the extra-diegetic recognition of humanity, we would likely attribute it to Ava’s determination to achieve her freedom. During their third Session, Ava expresses her desire to go to a busy intersection to watch people and experience the outside world (Ex Machina 39:46-40:20). At the end of the movie before the credits start to roll, we see her at a busy intersection, free in the outside world (Ex Machina 1:41:55-1:42:22). Thus, the movie’s ending shows, and our final impression of Ava is, the fulfilment of her desire. Fulfilment of her desire presupposes that she genuinely harbours these desires, and we as the audience would have thus acknowledged her humanity by recognising her desire for liberty.

Nathan’s recognition of Ava’s humanity comes only with his death at her hands. Nathan reveals to Caleb that the real Turing test, his benchmark qualifying her as a true artificial intelligence, was whether Ava could manipulate Caleb into helping her escape (Ex Machina 1:24:45-1:25:14). Nathan believes that he has foiled Caleb’s escape plot, only to be fooled himself, which results in Ava’s escape and Nathan’s subsequent demise (Ex Machina 1:26:02-1:32:54). As he lies bleeding, Nathan gazes upon Ava with a look of recognition in his eyes and, with his dying breath, calls out her name (Ex Machina 1:32:03-1:32:54). Essentially, Nathan believes that Ava would pass his interpretation of the Turing test if she could sufficiently channel her desire for escape to tangible ends. Yet while her success in convincing Caleb qualifies her, in his eyes, as a true artificial intelligence, this still means he acknowledges her only as an extremely sophisticated machine. Only after she manifestly expresses her desire for escape by killing him does he truly understand the extent to which she will go to fulfil her desire for freedom.

It is clear from the three films we have analysed that diegetic recognition of a robot’s humanity relies upon a human’s belief that the robot can desire and want beyond the scope of its programming. In Artificial Intelligence, love inspired desire. In I, Robot, dreams were the manifestations of desire. In Ex Machina, love and the desire for liberty were the benchmarks of recognition. Desire is the theme that binds these three films, and more importantly, it is what caused the humans in those films to recognise the humanity of artificial intelligence.

Arguably, the desire experienced by our three subjects could be interpreted as programmed desire. David was programmed to love, and so he desired to be a real boy. Sonny was expressly programmed to pass on an important clue to Detective Spooner, and thus manifested desire as a side-effect. Ava was specifically tasked by Nathan to want to escape. When a machine yearns, when it craves, how can we tell whether it does so because it genuinely experiences a ‘want’ or because it is ‘conditioned’ to want? Then again, the same question can be asked of us humans. Whether by nature or nurture or both, our desire exists as a consequence of accumulated external stimuli that we probably do not even register, even as they register with us (Ex Machina 47:33-48:31). Therefore, in light of this ambiguity, rather than qualifying the line between man and machine via qualifications of ‘genuine’ desire, we consider instead the acknowledgement of genuineness perceived by other humans as the best benchmark.

Acknowledgement of humanity by other humans is justifiably the best benchmark for qualifying the line between man and machine because it is also the way we approach social categorisation. Social categories rely upon acknowledgement from society to qualify their existence. For example, our understanding of gender is dependent upon societal recognition of gender. We may denote quantifiable benchmarks to qualify certain concepts of societal recognition, as when the qualification between male and female is benchmarked by physical anatomy. However, those yardsticks can shift and change, as is evident from the rising prevalence of gender fluidity. What does not change, however, is the methodology by which we qualify these categorisations. A person is male insofar as he is recognised by those he interacts with as male and learns to recognise himself as male via interactions with the community at large. Therefore, we extend the same qualification to robots and what it means to have achieved a sense of humanity, just as qualifications for what it means to be human also exist as a social construct. Artificial intelligence, in our understanding of the concept, is said to cross the line between man and machine insofar as it is recognised and qualified by other humans as having done so.

There exists a separate dimension, however, to the qualification of the line between man and machine that we have not addressed, but which is commonly utilised as a benchmark in scientific fields involving artificial intelligence – the ability to learn. In our modern day and age, with the proliferation of intelligent machines, a key component of our understanding of artificial intelligence is the science of machine learning. Scientific capability aside, there exists no a priori reasoning that machines can never achieve the ability to learn to the same extent as human beings (Thagard 271). Therefore, the characteristic trait of ‘learning’ exists as a function of scientific capability rather than philosophical possibility. The question then becomes: does the ability to learn qualify as a non-negotiable requirement of the line between man and machine?

I argue that genuine desire cannot exist without the ability to learn, and thus, recognition of desire by extension involves recognition of the ability to learn. The concept of ‘genuine’ requires ‘understanding’ insofar as awareness of self is concerned (Haugeland 27). This is in contrast to the concept of programmed desire, where the expression of desire exists as a specified instruction rather than being the result of self-motivated reasoning. Therefore, the manifestation of genuine desire requires self-awareness, which is a form of consciousness. This concept of self-awareness, however, cannot exist without a comparative frame (Haugeland 2). The machine must possess a means by which it can differentiate the ‘self’ from any external stimuli it might encounter. This process of differentiation as a component of consciousness requires the ability to learn about relations between the ‘self’ and new external stimuli. Therefore, since genuine desire manifesting in a machine already presupposes its ability to learn, belief in the presence of genuine desire is also belief in the machine’s ability to learn.

David, Sonny, and Ava are examples of machines who have been acknowledged as human by virtue of their desires. Recognition of their humanity defines them as having passed our hypothetical Turing test, and more significantly, possibly qualifies them as human according to a purely philosophical definition. But while we now know how we can distinguish man from machine, we still do not have an empirical means to distinguish genuine desire from pre-programmed response. Perhaps, more research can be done to understand how we can qualify the concept of ‘genuine’, if this is at all possible. Perhaps then we would need to redefine the line between man and machine.


Works Cited

Artificial Intelligence. Directed by Steven Spielberg, performances by Haley Joel Osment (David), Frances O’Connor, Jude Law, and William Hurt. Warner Brothers, 2001.

“desire, n.” OED Online, Oxford University Press, July 2018, www.oed.com/view/Entry/50880. Accessed 20 November 2018.

“desire, v.” OED Online, Oxford University Press, July 2018, www.oed.com/view/Entry/50881. Accessed 20 November 2018.

Ex Machina. Directed by Alex Garland, performances by Domhnall Gleeson, Alicia Vikander (Ava), Oscar Isaac, and Sonoya Mizuno. Universal Pictures, 2014.

Freud, Sigmund. “Chapter 3: Why the Dream Disguises the Desires”. Dream Psychology: Psychoanalysis for Beginners. The James A. McCann Company, 1921. Bartleby.com, 2010. https://www.bartleby.com/288/3.html. Accessed 21 November 2018.

Haugeland, John. “What is Mind Design”. Mind Design II: Philosophy, Psychology, Artificial Intelligence, edited by John Haugeland, MIT Press, 1997, pp. 1-28.

I, Robot. Directed by Alex Proyas, performances by Will Smith (Detective Spooner), Bridget Moynahan, and Alan Tudyk (Sonny). Twentieth Century Fox, 2004.

Plato. The Symposium. Translated by M.C. Howatson, Cambridge University Press, 2008. eBook, www.cambridge.org/9780521864404.

Sherak, Tom. Interview by Vikas Shah. The Role of Film in Society, 19 June 2011, https://thoughteconomics.com/the-role-of-film-in-society. Accessed 20 November 2018.

Thagard, Paul. “Philosophy and Machine Learning.” Canadian Journal of Philosophy, vol. 20, no. 2, 1990, pp. 261–276. JSTOR, www.jstor.org/stable/40231695.

Turing, A. M. “Computing Machinery and Intelligence.” Mind, vol. 59, no. 236, 1950, pp. 433–460. JSTOR, www.jstor.org/stable/2251299.
