Two media news items caught my eye in recent days. In the first, the BBC revealed that it is using artificial intelligence, programmed with the profile of the BBC4 audience, to trawl its archive catalogue and create a schedule for an evening’s programming on that channel: https://www.broadcastnow.co.uk/tech/bbc-randd-uses-ai-to-uncover-hidden-gems-for-bbc4/5129603.article Meanwhile, in Belgium, AI is being used in a script-editing capacity to analyse and criticise potential work being considered for production: https://www.broadcastnow.co.uk/tech/ai-is-rewriting-the-tv-script/5129743.article
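The Broadcast piece gives few technical details, but one can imagine a profile-driven scheduler of this kind as a simple content-based ranker. The sketch below is entirely hypothetical: the profile weights, the programme data and the greedy slot-filling are my own inventions for illustration, not the BBC’s actual method.

```python
# Hypothetical sketch: rank archive programmes against an audience profile
# and greedily fill an evening slot. All names and numbers are invented.
from dataclasses import dataclass


@dataclass
class Programme:
    title: str
    minutes: int
    tags: frozenset


# An invented "audience profile": weights over genre tags.
PROFILE = {"documentary": 0.9, "arts": 0.8, "archive": 0.7, "comedy": 0.3}

# A toy stand-in for the archive catalogue.
ARCHIVE = [
    Programme("Arena: Cinema Night", 60, frozenset({"arts", "documentary"})),
    Programme("Top of the Pops 1983", 30, frozenset({"archive"})),
    Programme("Panel Show Repeat", 30, frozenset({"comedy"})),
    Programme("Storyville Special", 90, frozenset({"documentary"})),
]


def score(prog, profile):
    """Affinity of a programme to the profile: sum of matching tag weights."""
    return sum(profile.get(tag, 0.0) for tag in prog.tags)


def build_schedule(archive, profile, slot_minutes=180):
    """Greedily fill the evening slot with the highest-scoring programmes."""
    ranked = sorted(archive, key=lambda p: score(p, profile), reverse=True)
    schedule, used = [], 0
    for prog in ranked:
        if used + prog.minutes <= slot_minutes:
            schedule.append(prog)
            used += prog.minutes
    return schedule


for p in build_schedule(ARCHIVE, PROFILE):
    print(p.title, p.minutes)
```

Even this toy version makes the obvious limitation visible: a pure affinity ranking will keep serving the profile’s favourites, with no mechanism for the unpredictability I come back to at the end of this piece.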
The reason these stories resonated at this particular moment is that I am currently following two drama series in the well-established and recently expanding sci-fi sub-genre speculating on the implications of creating artificial human beings: HBO’s Westworld season 2 on Sky Atlantic and Humans season 3 on Channel 4. Wondering how feasible some of what these series present might be, I mused on the possibility of creating an AI television critic, one perhaps better placed to judge the plausibility of the plot and character developments – it looks like that may not have been such a frivolous thought after all.
There are plenty of robots with humanoid form or characteristics in the history of science fiction, but it was in Stanley Kubrick’s 2001: A Space Odyssey, 50 years ago, that the implications of attempting to replicate human consciousness in machine form received their most brilliantly considered treatment. Put simply, the conundrum Kubrick and Arthur C. Clarke explored was that the greater the success in replicating human behaviour, the greater the likelihood that human failings – unreliability, vengefulness, the tendency towards violence – will emerge, and this has become a standard trope of the AI genre. The genre also offered thoughtful film-makers the chance to explore what the essence of humanity is – it is often remarked that the computer HAL is the “most human” character in 2001, and the astronauts were presented as efficient and emotionless to emphasise this. Having re-created human consciousness, the only way forward was the next stage in human evolution, presented at the end of the film.
Science fiction has always been way ahead of science fact and highly speculative, but the best sci-fi has often had a grounding in technological possibility. So, as robotic and AI technologies progressed, the humanoid robots came to be played by actors rather than presented as something highly mechanical. The original film of Westworld gave us a clearly mechanical Yul Brynner, but then a new trope appeared – as the quality of sci-fi robotics improved, it became difficult to know for sure who was human and who artificial, allowing dramatic “reveals” which are now becoming a cliché. Ridley Scott’s Alien (1979) shocked us when Ian Holm’s character turned out to be an android, and the rest of the franchise continued the trope. The same director then took it much further in Blade Runner (1982), which was essentially about AI consciousness and its comparison to humanity, though very much from the “replicant” point of view. Rutger Hauer’s Batty gets the best speech of the film (one of the best in all cinema) and we are left unsure whether Harrison Ford’s Deckard may be a replicant himself. For the actors involved, it meant a choice between playing their characters as they would a normal human being or using subtle indicators of their mechanical nature. A whole new acting skill developed, seen at its best in Steven Spielberg’s A.I. Artificial Intelligence (2001).
On TV, Star Trek: The Next Generation (syndicated, 1987-94) allowed the development of human characteristics in an android character, Lt. Cdr. Data, over the course of its run, which in turn allowed the writers to explore themes of humanity and emotional responses through him and the actor, Brent Spiner, to develop an acting style which moved from the mechanical towards the human.
Both Westworld and Humans aspire to examine the essence of humanity through the creation of artificial individuals, though they go about it in very different ways. Both employ the established tropes of the genre, including the “reveal” of the artificial natures of characters we had assumed were human (notably Bernard in Westworld and D.I. Karen Voss in Humans). Both also contain a godlike creator figure (by coincidence, the surviving one of a duo of innovators), in each case played by a veteran actor (William Hurt in Humans, pretty much reprising his role from Spielberg’s AI; Anthony Hopkins in Westworld).
Westworld also explores questions of free will in a scenario where the characters and back stories of the “hosts” have been created by a writer. Indeed, the most interesting parts, especially in the current season, are those where the development of the narrative is questioned by those within it who have responsibility for it. However, such philosophical moments are few and soon give way to yet another of the shoot-outs or violent set-pieces of which the series is so fond (well, it has a lot of time to fill). Moreover, especially in the first season, these soon become tiresome because the hosts are simply re-built, and death and destruction cease to have any dramatic impact. Season 2 suffers less from this, and its shoot-outs carry more dramatic weight, but they still tire through sheer regularity, and I preferred the fractured narrative of season 1. One nice touch in season 2 was when a group of the characters arrived in the neighbouring Shogun World and the writer remarked on how some of the “western” storylines had been replicated there, reminding us of the classic westerns which had been adapted from Kurosawa’s samurai epics.
Overall, Westworld cannot work as a western, even in part, because it simply isn’t one – it is sci-fi. But that doesn’t seem to stop the programme makers trying to have their cake and eat it too. It looks fantastic and is superbly put together but is ultimately far from satisfying. Humans, on the other hand, knows what it is and where it is going and is thus, for me, the better series. In its first season, by concentrating on the human reaction to interacting with domestic “synths” and on those synths secretly programmed with consciousness, it probed the theme of what it is to be human. In its second, with the move towards consciousness for all synths, the intervention of technology companies and disquiet amongst the human population, it became more issue-based. That has continued into the third season, in which docile, more mechanical orange-eyed synths have replaced the now fully conscious green-eyed ones, who are perceived as a threat, kept in isolated camps, and whose “human rights” are now the focus.
Whereas the actors playing the hosts in Westworld seem to be giving naturalistic performances, on the assumption that the replication of human emotions has been perfected, in Humans the synths are recognisable as such (unless they are being deceptive) because of their green eyes and blue blood (no bloody shootouts here!) as well as their perfect make-up (do synths put their own make-up on, or are they built with it already there? – that’s one for our AI critic, I think). The performances of the actors playing the synths, exemplified by Gemma Chan as Mia, are very well judged to be both mechanical and nuanced.
But would I have any confidence in one of the synths as a BBC4 scheduler? The device in question is intended to understand the essence of the channel, but, as far as I am concerned, unpredictability is a key element in a successful schedule, so I hope that has been programmed in, as well as a good sense of humour.