I, Robot
Our technology is growing ever more capable, especially in the field of artificial intelligence.
Since Elon Musk's takeover of Twitter, mass market journalism's favorite platform has become quite the subject of controversy -- among mass market journalists, at least. Part of the controversy has revolved around some of the users, especially, at least until his latest suspension, Kanye West, whose unhinged antisemitic and pro-Nazi rants have provided more than a little fodder for the press.
West's tweets are, beyond question, utterly abhorrent. Whether they're the result of a disordered mind or a more purposeful attempt to gain publicity, probably only West himself and the people closest to him truly know. He certainly has shown every sign in public of some form of serious psychiatric distress. It's likely not an act.
But be that as it may, the rapper's antics have thrust the question of content moderation to the fore once again. Once his court-ordered purchase of the platform was finally complete, Musk himself made moderation an issue by letting previously banned or suspended users, including West, back on, ostensibly in the name of free speech, and with predictable results.
It's probably fair to say Musk never really understood how and why Twitter worked, or what it was really for. Maybe he still doesn't. He's first and foremost a tech entrepreneur with a Silicon Valley mindset, not someone with a firm grasp on the dynamics of the public square.
(He also may not have understood how vital Twitter is, and has been, to people in war zones, protestors in autocratic or totalitarian countries, or disaster relief and emergency response agencies, but that's a matter for another time.)
As a techie, Musk did have something right going in: the question of bots. As is the case with all proprietary social media platforms, Twitter is chock full of them, and despite its management's best efforts, the company has never been able to police them fully and effectively.
That's not always a bad thing. Some bots are perfectly benign, designed to post and repost content at specific times to automate a function which doesn't really need human hands. In that respect, they save time, money and people, the three greatest resources of any company. In fact, on many non-proprietary platforms, especially Mastodon (which is similar to Twitter in many ways), you can purposefully create a bot and label it as such.
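For the technically curious, a benign scheduling bot of the kind described above can be little more than a loop that checks a posting calendar. The sketch below is a minimal illustration in Python; the `post()` function is a hypothetical stand-in for whatever API call a real platform library (such as Mastodon's) would provide, and the schedule entries are invented examples.

```python
import datetime

# A benign scheduling bot reduced to its essentials: a calendar of
# (time, text) entries, and a function that returns whatever is due.
SCHEDULE = [
    (datetime.datetime(2022, 12, 12, 9, 0), "Good morning! Today's specials are up."),
    (datetime.datetime(2022, 12, 12, 17, 0), "Reminder: we close at 6 pm."),
]

def due_posts(schedule, now):
    """Return the texts of all posts whose scheduled time has arrived."""
    return [text for when, text in schedule if when <= now]

def post(text):
    # Hypothetical stand-in for a real platform API call; a library
    # like Mastodon.py would supply the genuine article.
    print(f"[bot] {text}")

if __name__ == "__main__":
    now = datetime.datetime(2022, 12, 12, 12, 0)
    for text in due_posts(SCHEDULE, now):
        post(text)
```

Nothing here requires human hands once it's running, which is precisely the appeal: the same handful of lines posts the morning specials every day, on time, for free.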
Others are much more nefarious. And they've been a problem on social media from the beginning. In fact, their first use by foreign governments, especially Russia, to interfere in American politics didn't come with the 2016 election cycle. Evidence now suggests Russian intelligence deployed social media bots as a disinformation tool in support of Ron Paul's quixotic presidential candidacy in 2007, to great effect.
Most likely it was a test run. And in all likelihood, the Russians were thrilled with the results (well, not of the election itself). They subsequently used the same tactics, along with a few other tricks in their bag, throughout Europe, including in Ukraine, and no doubt continue to use them today despite our heightened awareness and the attention of our very capable national security institutions.
Part of the reason for the effectiveness of bots lies in human nature itself. We're all at least a little susceptible to confirmation bias (it's what makes cable news so profitable, and scam artists so effective), and the vast majority of people don't have the ability to really distinguish between a well-designed fake account and a real person, at least not unless they know the person in real life. A bot telling us what we want to hear can easily capture our attention; one masquerading as someone we would, or theoretically should, like to know can get us to click "approve" on a friend or follow request, making us unwitting accomplices in the process.
Were bots the only problem, along with phishing scams and other security penetration techniques, it would be serious enough. But looming on the near horizon is something even more worrisome: The possibility our machines can actually get out ahead of us, leveraging the same psychological tendencies bots capitalize on to be so effective, but in their own creative and self-serving ways.
It's a theme we've seen for many years in science fiction -- a genre whose purpose is to try to peer into the future, after all -- but now it's becoming an increasingly real possibility. Advances in artificial intelligence are accelerating and the technology is becoming ever more sophisticated. So much so that even some of the practitioners are starting to get more than a little worried:
A guide to why advanced AI could destroy the world - Vox
In and of itself, the threat posed by AI is one we've been long familiar with. Since the dawn of the Computer Age, movies have been based on the idea. In Stanley Kubrick's brilliant 2001: A Space Odyssey (1968), a rogue computer on a space mission kills most of its human crew thanks to a conflict in its programming -- in essence, a moral dilemma it can only solve through murder.
In Colossus: The Forbin Project (1970, based on a 1966 novel), a highly advanced American computerized defense system, designed to take the human element out of the decision to launch nuclear weapons, becomes sentient and conspires with a similar Russian system to take over the world.
The two movies take slightly different approaches. In 2001, the mistake is in the initial programming done by humans. In the sequel, 2010: The Year We Make Contact, it turns out there was some skullduggery involved, of course: Why HAL 9000 Went Insane -- 2010: The Year We Make Contact (1984) - YouTube.
In Colossus, the problem is a computer designed too well, with more capability than its designers intended. The consequences are utterly unforeseen. That's probably even closer to what we're seeing in AI now. At least, that seems to be the subject of a great deal of internal conflict at Google, where several engineers and ethics experts have been dismissed or suspended over the last couple of years for expressing reservations, or even outright fears, about computer sentience.
The long-term implications of all this should give each of us pause. In the near term, the issues are much simpler. We don't have to wait for Elon Musk to plant a computer chip in our brains in order to have a problem. In a very real sense, we're already deeply connected to our computers; almost everyone has a Magic Rectangle in their hands already, and not only in the most advanced countries. We're already awash in automation, so much so we barely even think about it.
Which means, if we don't make the constant effort to be fully human -- to use the tools of human cognition, judgment, experience, discernment and decency -- we may suddenly wake up one morning to the realization we ourselves have already become the robots:
“The saddest aspect of life right now is that science gathers knowledge faster than society gathers wisdom.” (Isaac Asimov)
Odds and Ends
Of course, the technology itself is neutral (at least for now). And to be clear, Whigs are no Luddites; the quest for human progress, including through ever more capable machines, is one we fully embrace. The issue is people. Fortunately, when we focus on the positive, we can accomplish some amazing things. Take a minute or so to check out this NASA video on the Artemis I mission:
Ride Along with Artemis Around the Moon (Official NASA Video) - YouTube
We are literally taking our first steps toward reaching for the stars. Not only that, but we're also doing a better job than ever of finding the stars we may want to visit someday (hint: they're the ones with planets similar to ours in the neighborhood). We're also getting a much better look at some of the objects a lot closer to home, and it's amazing:
Just wait until we turn this baby on Enceladus.
And finally, people aren't the only ones getting into the holiday spirit:
Look: Wildlife officials rescue deer with antlers wrapped in Christmas lights - UPI.com
I'm sure ol' Rudolph, could he talk, would claim it was all an accident. But I'm not so sure. As everyone knows, there's no end to the mischief a young buck can get up to.
See you next week.
Kevin J. Rogers is the executive director of the Modern Whig Institute. He can be reached at director@modernwhig.org. When not engaged with the Institute he publishes independently to Commentatio on Substack.
___________________________________________________________
The Modern Whig Institute is a 501(c)(3) civic research and education foundation dedicated to the fundamental American principles of representative government, ordered liberty, capitalism, due process and the rule of law.