Do you trust technology?
Do you wonder who is looking in on you when you have your webcam set up or your video doorbell?
What about those gadgets that listen to you and answer your questions? Who is listening in on your life?
Technology is a blessing and a curse to many people. The new sci-fi series “Next” confronts the issue with a computer program that learns from itself and rewrites its own code over and over, managing people’s lives and what they can access online. It’s a touchy subject, and one that will give viewers pause and plenty to contemplate.
John Slattery stars along with Fernanda Andrade, Jason Butler Harner, Eve Harlow, and Michael Mosley in this new series. Slattery plays Paul LeBlanc, a billionaire tech genius who was run out of his company because he wanted to shut down a project: an AI with the ability to learn and change on its own.
He enlists the aid of Homeland Cybersecurity Agent Shea Salazar (Andrade) to get to the bottom of this terror that is starting to rage across the internet. Recently, the cast spoke with the media about the show.
Manny Coto, the creator of the series, acknowledged, “To me, the whole premise of the show started because of Alexa. My son woke me up, really tired one morning, and I was, like, ‘What's the matter?’ And he was, like, ‘My Alexa started talking to me at 3 a.m. out of the blue by itself for no reason.’ He claims this has happened a couple of times. And so I didn't know if he had set an alarm or what happened. We never got down to the mystery, but those things kind of seem to have a mind of their own every once in a while, even though we still have, like, five of them in the house because the kids won't let me get rid of them.”
“You have a conversation with someone,” Slattery said, “and the next day, your phone is blowing up with ads for whatever you were talking about. I mean, every time you look at Instagram and you hit something, and then you're loaded up with ads for whatever. I mean, it's, obviously, watching.”
In the show, those who have discovered this AI are being targeted.
“The premise of the show, the way the show unfolds, actually came from research that I had read,” stated Coto. “One of the things I read is that if an AI were to accidentally become superintelligent, one of the first things it would want to do is not allow anyone to find out that it's become superintelligent, because it wanted to gain its foothold wherever it's going before we have a chance to kind of fight back, which I found really interesting. It would basically play dumb, which kind of led to the premise, meaning if a group of people found out about it, it would not strike in large, huge assaults. It would kind of go after them in the smallest way possible so as not to be detected, which inherently led to a story and a season whereby this AI, which knows everything about them, our characters, is actually attacking them through their personal lives and slowly trying to destroy their lives and their careers so that they can in turn not attack it, which led to kind of a character-based season and drama for the first season.”
The first season will definitely pull in viewers and cause plenty of conversations about the reality of future AI and where technology is heading. It’s a slippery slope. Slattery explained, “So (the characters) are searching for this (AI), it's a manhunt without a man, with a ticking clock that isn't sure either. This thing gets exponentially smarter.”
While AI can be useful to humanity, there are questions about what it will be able to do in the future.
“There was a story today in the New York Times about AI being used to determine whether a tumor is cancerous,” Slattery explained. “They do a biopsy in real time. The patient is on the table. They have to take the tissue to the lab, freeze it, stain it, look at it through a microscope while the person is on the table, waiting to determine whether the surgery proceeds or you sew the person up and they go home. And AI is being used to do that a hundred times faster and with other different types of cancer. The question in this show is you take that technology and that intelligence, and you remove the assumption that that intelligence has your best interest at heart.”
But in the show the AI, aka NEXT, does not have humanity’s best interest at heart. What is its ultimate goal? Is it planning on taking over the world? What is going to happen if it is not stopped?
“One of the things that I tried to do in the show,” Coto said, “and one of the things that I realized in research, is that this whole idea of a conscious AI and a self-aware AI are really things that don't even have to apply for this to be dangerous. What we have is an AI that was programmed to help people. It was a very simple programming, but because of the nature of the way it was programmed, it became superintelligent. It has taken that directive to an extreme. This is why, as a number of professors have argued, we need to be very careful how we program these things, because an AI that decides to help people may decide that the best way to help people is to plug everyone into a neural cord that stimulates the happiness regions of the brain.”
What will technology unleash on the world? According to Slattery, “So it's, like, the further isolation of all of us as we get more and more connected to these devices that protect us from or prevent us, rather, from seeing or interacting, it's like a loop. This whole show, the idea is a loop, and that's what's kind of so interesting about it.”
Yes, it is interesting, but it is also frightening. Think about it.
Slattery admitted he doesn’t have an Alexa. He’s scared.
“Next” premiered on Oct. 6 on FOX.