Terri Favro

No Lucy, No Spock: Artificial Intelligence, 'Splained


I know, I know. You’re probably thinking: of course an AI could trick us into believing we’re watching the real Lucy! Look at all the Lucille Ball data points an AI has to learn from: six seasons (180 episodes) of I Love Lucy, the Turner Classics Live podcast about her life, the Aaron Sorkin-penned biopic “Being the Ricardos” with Nicole Kidman as Lucy, the YouTube clips of her kibitzing with Carol Burnett –– even the 2001 screwball comedy Rat Race, with its busload of hapless Lucy impersonators going wahhhhhhh when a wayward cigarette sets one Lucy’s hair on fire. So yeah, you could easily find yourself watching a deepfake version of I Love Lucy with Lucy and Ethel as grape-stomping astronauts.

My point is, the real (carbon-based) Lucille Ball was an unlikely innovator. An AI couldn’t have predicted Lucy, because her success was unprecedented and unpredictable: she didn’t fit into any pattern for creative success of her era. A small-town girl from a poor family with little formal education, a former model and B-movie glamour girl who wasn’t afraid to look goofy, she went on to become the first millionaire on television, co-create (with husband Desi Arnaz) the TV sitcom format, and launch an entertainment dynasty.

She married Cuban-born Desi at a time when marriage between a Latino and a white American woman was scandalous. She was the first woman to appear openly pregnant on television (although she was forbidden to use the word pregnant – Lucy was ‘expecting’).

She and Desi spearheaded the three-camera sitcom, filmed on 35 mm in front of a live studio audience in Los Angeles (rather than broadcasting live from a closed studio in New York and relying on kinescope recordings for rebroadcast in other time zones, the way other TV shows of the early fifties did). By insisting on high-quality film (paid for out of their own pockets), Lucy and Desi chose a more expensive route that preserved I Love Lucy in perpetuity, giving us the “rerun”.

She and Desi co-founded their own TV production powerhouse, Desilu, buying up sound stages in 1957 from the failing motion picture studio RKO, where Lucy had been a contract player in the 1930s, turning up in a Marx Brothers movie and a string of Three Stooges shorts. On TV, Desilu produced hit after hit, from I Love Lucy to The Untouchables to Mission: Impossible to the original Star Trek. (Fun fact: the only reason Star Trek made it to air in 1966 was because Lucy overruled her nervous board of directors and personally green-lit the show. Desi was out of the picture at this point.)

No Lucy. No Spock.

Lucy boldly went where no one –– man or woman –– had gone before. She was a game changer. (As was Star Trek's creator, Gene Roddenberry.) There was no data point for her.


Although AIs can look to the past to predict what might work in the future, they don’t innovate, though they may help us do so. AIs recognize patterns more quickly than any one human ever could, which is a great boon to humanity if you’re doing research into, say, which genes or viruses are likely to cause cancer, which weather patterns will trigger a once-in-a-century storm, or how fast a candy factory production line can run before Lucy and Ethel have to start stuffing chocolates into their mouths.

AIs give us a backwards-looking snapshot of what we liked yesterday, not where the zeitgeist will be tomorrow. If, for example, an AI could travel back to the RKO studio in 1938, where Lucy was making Room Service with the Marx Brothers, and start green-lighting pictures based on past hits, we might still be watching movies like Ziegfeld Girl. If it works, why change it?

But having said all that, there’s a place for generative AI in the creative process. ChatGPT has been described as “autocorrect on steroids”. I see a great future for it as a voice assistant with the ability to carry on a seamless conversation about where I left my keys, phone and glasses. It could help writers with outlining, copy editing and even brainstorming story ideas, especially if I can do that by talking to it. Just like Spock asking the computer on Star Trek for its expert opinion on whether it's a good idea to land on Altair. (Answer: yes, because that episode provided the name for the first Altair minicomputer. Topic for another blog.)

But to be honest, I'm still waiting for generative AI to make its pitch as a tool for humanity: most of us have greeted it with a mixture of delight, awe, terror and resignation. OpenAI might want to consider IBM's strategy for easing consumers into the desktop computer age: turning Charlie Chaplin's iconic silent movie character, the Little Tramp, into an everyday guy trying to figure out how to use a "Tool For Modern Times", as the campaign for IBM PCs was called in the eighties. (I worked for one of IBM's ad agencies at the time.)


I love robots. I wrote a book about them, “Generation Robot: A Century of Science Fiction, Fact, and Speculation”. In 1969 my father (an electrician) was accidentally turned into a roboticist when the world’s first industrial robot, UNIMATE, showed up in the factory where he worked. Dad was assigned to be UNIMATE’s technician. Eventually he started building robots in our backyard, basement and vineyard. Some of them even worked, like our self-mowing lawnmower. (The automated wine bottling system, not so much.)

For “Generation Robot”, I tracked down the origins of UNIMATE and, surprise, surprise, the inspiration came from the world of science fiction – Isaac Asimov’s “I, Robot”. True, Asimov was himself a scientist (a chemistry prof), but his robots show little hard science: as one MIT mechanical engineering prof told me, heat, sunlight, dust and grease all play merry hob with mechanical devices, something my father witnessed in the 1960s when UNIMATE went berserk and started hurling scalding auto parts at assembly line workers.

Robots are not infallible or eternal. One roboticist at Carnegie-Mellon University pointed out to me that humans have a longer life span than the average robot, which will probably only last about as long as your car, your mobile phone or other manufactured devices.

While my main focus in “Generation Robot” was the mechanical robots of Asimov and real-life robotics manufacturer Boston Dynamics, a roboticist at Carnegie-Mellon held up his phone and told me: “This is a robot’s brain. All we’re trying to do is give it arms and legs.” Ironically, I discovered that more than one AI researcher was inspired by a revolutionary creative idea – HAL from 2001: A Space Odyssey, a joint effort by scientist-science fiction writer Arthur C. Clarke and filmmaker Stanley Kubrick. Some science imitates art, rather than the other way around.


In a conversation with an AI researcher in 2017, I asked how AIs learn. Here was his answer, quoting from the 2018 edition of “Generation Robot”:


Xavier swipes his phone to show me rooms designed by an artificial intelligence. All of the images are a bit out of focus, as though they were snapped by a drunken party guest reeling through the hallways of the designated hotel for a destination wedding. They have the faintly anonymous, blandly designed look of messy hotel rooms: rumpled beds, floor-to-ceiling curtains, cluttered surfaces. A few suggest something more intimate, possibly even romantic: Victorian-era beauty tables with crystal bottles and lace trims. At least that’s what I read into them: everything is a bit indistinct. These could be glimpses of rooms from one of the dreams I often have of houses I’ve never visited, my subconscious creating an interior from childhood memories or magazines I might have flipped through at my hair stylist’s. That’s probably a good metaphor for these rooms –– dream rooms, imaginings. None of them exist outside of the collective minds of artificial intelligence: they’re like children flipping through a stack of old home décor magazines, then drawing picture after picture of their own designs, shouting “Room! Room! Room!” And where did they derive those data-driven ideas of “room”? From you, me, and everyone else who posts photos of rooms (or our lives inside them) on Facebook, Instagram, and anywhere else that the AI hive mind can crawl.

“They’re still a bit small and blurry, but getting better all the time,” says Xavier, sounding a bit like a proud papa trying to downplay his gifted offspring’s talents.

As Xavier puts it, these interiors were designed by “non-embodied algorithms” – in other words, a HAL-like AI that interacts with us through our devices –– phones, cars, our home-heating system, smart appliances, and who knows what else.

“What’s the difference between machine learning, deep learning, and Good Old-Fashioned Artificial Intelligence?” I ask.

“GOFAI is a rules-based system that uses a common-sense approach to problem solving. What would we do? The assumption is,” he says meaningfully, “that we know ourselves. That we can give the AI a set of rules based on common sense. But machine learning means writing down many examples and letting the machines come up with the rules. Deep learning is based in neuroscience. It uses the human brain as a metaphor and a source of inspiration but does not directly try to replicate a brain. We don’t tell the AI principles: we show it examples and let it come up with its own principles.”

Xavier goes on to explain that he uses supervised learning to train AIs: “It gives you a random answer and you tell it when it’s wrong. It’s a remarkably simple learning algorithm. Our own nets only take about a week to train.”
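(For the code-curious, here’s a minimal sketch of what Xavier describes, with made-up data and a deliberately tiny “model”: it starts with random guesses, gets told when it’s wrong, and nudges itself until a rule emerges from the examples. Real systems are vastly more elaborate, but the loop is the same in spirit.)

```python
import random

# Made-up labeled examples: (hours of rehearsal, did the audience laugh?)
examples = [(1, 0), (2, 0), (3, 0), (6, 1), (7, 1), (9, 1)]

weight = random.uniform(-1, 1)   # start with random guesses
bias = random.uniform(-1, 1)

for _ in range(100):                                    # many passes over the examples
    for hours, laughed in examples:
        guess = 1 if weight * hours + bias > 0 else 0   # the AI's answer
        error = laughed - guess                         # "you tell it when it's wrong"
        weight += 0.1 * error * hours                   # nudge toward the right answer
        bias += 0.1 * error

# After training, ask about a case it has never seen
print("Will 8 hours of rehearsal get laughs?", weight * 8 + bias > 0)
```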

Sitting in this funky coffee bar, with Xavier talking about “training” disembodied algorithms, I feel like I’m at the edge of something large and unpredictable, possibly unruly, definitely disruptive, and quietly seductive. AIs are starting to sound…er…alive. These overachieving, hyper-intelligent virtual children may still be in kindergarten, but they sure learn fast.

“So playing a game –– getting answers right and wrong – is how an AI learns?” I ask. “Are there ever times when they don’t learn from their mistakes?”

Xavier nods. “Sometimes the AI will start to assume that because it had a particular outcome once in the past, it’ll have that outcome again. That’s called over-fitting. ‘Oh, I remember: when I did X, I lost.’ So we have to use huge data sets with lots of examples, so big that the AI couldn’t possibly just memorize them. That forces it to learn principles.”
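(Over-fitting, in the same toy terms: a “model” that simply memorizes its handful of training examples looks perfect on those and is useless on anything new, which is exactly why Xavier’s enormous data sets matter. The data below is invented, of course.)

```python
# A tiny, made-up training set mapping hours of rehearsal to laughs
train = {1: 0, 2: 0, 6: 1, 7: 1}

def memorizer(hours):
    # Perfect on the training set, clueless the moment it meets a new case
    return train.get(hours, "no idea")

def learned_rule(hours):
    # A principle distilled from many examples generalizes to new ones
    return 1 if hours >= 5 else 0

print(memorizer(8), learned_rule(8))   # "no idea" versus a sensible answer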

I’m skeptical about the prediction that AIs will replace the creative work of writers of books, films, etc. Generative AIs are uncannily good at predicting what word will follow another. But that's not what creative writing is. It’s about imagination, trauma, joy, love and all the other human experiences that go into building characters and stories, and surprising readers.
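(To make “predicting what word will follow another” concrete, here’s a toy version, emphatically not how ChatGPT works under the hood: count which word most often follows each word in some text, then always guess the most common follower. Notice that a predictor like this can only echo the past; it can never surprise you.)

```python
from collections import Counter, defaultdict

text = "i love lucy and i love desi and i love robots".split()

# Tally which word tends to come next after each word
followers = defaultdict(Counter)
for word, nxt in zip(text, text[1:]):
    followers[word][nxt] += 1

def predict(word):
    # Always guess the most frequent follower seen in the training text
    return followers[word].most_common(1)[0][0]

print(predict("i"))      # "love"
print(predict("love"))   # "lucy", because the past said so
```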

Much of the time, even writers themselves don’t really understand why they’re writing what they’re writing. According to Pulitzer Prize-winning author Robert Olen Butler’s “From Where You Dream: The Process of Writing Fiction”, writing is more of an emotional process than an intellectual one, drawing on our inner dreamscape rather than on data points. As Butler puts it, “Fiction writers are the writer directors of the cinema of inner consciousness.”

Dreaming up imaginary people doing imaginary things isn’t a rational act, and often takes the writer (and readers) into unexpected places. That’s the joy of creation. If you try to create stories based on data, what you’ll get is new books, movies and TV shows that are do-overs of old books, movies and TV shows. Say goodbye to the shock of the new and hello to the predictable and derivative.

Would an AI be able to write something original if it achieves true consciousness? Will the so-called Singularity result in an artificial version of Lucy?

In the words of Mr. Spock, that seems most illogical.

But if it happens...wahhhhhhhh!


