Many readers will recognize this post’s title as a reference to the first long-distance electronic message transmitted in the U.S. over Samuel Morse’s Baltimore-to-Washington telegraph line in May 1844. The phrase is Old Testament in origin (Numbers 23:23), but it captures the awe, wonder, and religious grounding with which the 19th century greeted this amazing new ability to transmit information faster than a man on horseback.
One of the things that has buoyed my spirits over 40+ years in this business is the feeling that I was contributing to something positive. Since the time of my family’s first black rotary-dial phone on a two-party line in our Bronx apartment, I have been awed at this marvelous capability to reach out to people and connect as an inherently good thing. I remember one of my instructors at AT&T saying, “This is the network people use to call the cops and the fire department.” We felt we were doing God’s work.
In networking, our job is connecting things (people, smartphones, servers, sensors …), and that is still a necessary and desired service. Increasingly, however, we are connecting people to this new thing (relative to the PSTN) called a social network, whose irresponsible behavior has already had negative repercussions in the society at large. While companies like Google and Facebook may have launched with the most noble of intentions, their business strategy of collecting and using (maybe “weaponizing”) people’s web queries to sell more targeted content in a totally unregulated environment has contributed to a true social crisis.
The Social Dilemma, an excellent documentary now airing on Netflix, dissects the crisis. The film explores the business goals, questionable tactics, and numerous negative impacts of the social media industry. What we get is a tutorial on the devious strategies social media companies use to keep people clicking and the early evidence of the harm being inflicted on the social fabric here and abroad. At the core of these strategies are some of our favorite technologies — artificial intelligence (AI) and machine learning (ML).
The New Mantra: “You Are the Product”
At the root of the dilemma is the fact that the public doesn’t fully comprehend how the digital economy works. With the advent of pay TV, people have come to understand why they get more commercials on broadcast TV, but most would be hard-pressed to explain why their Gmail or Facebook accounts are “free.”
As I learned in my marketing classes many years ago, TV ads are an expensive way to broadcast a message to a wide audience to build brand status and hopefully convert a small percentage of those viewers into buyers. The real genius of Google was figuring out how to improve the advertiser’s batting average by targeting ads based on users’ searches — if they’re looking at lawnmowers, let’s bombard them with gardening ads.
Most people seem to tolerate, and in many cases, appreciate how widening the search area for products can lead to a more comprehensive search of alternatives and better results. Similarly, Facebook has provided a wonderful service for reconnecting or remaining connected to friends and relatives and sharing pictures and the like. So how do you get from there to something evil?
The first thing to recognize is that these sites’ key objective is to keep users online and clicking so they can serve up more and more targeted ads to them. You may have noticed that the one juicy tidbit you were looking for from that clickbait news blurb is always the last element in a series of 20-some screens — each of which is populated with lord knows how much more clickbait. In this environment, you can imagine the ability to pick the “get ‘em in the door” tidbit is a highly prized skill and, regrettably, one we can teach machines to do.
You may wonder why your search for lawnmowers veers into a marathon of dog-surfing videos. The filmmakers point to the platforms’ growing reliance on factors based in human psychology to keep ‘em clicking. Unfortunately, those factors include such things as self-image (Facebook “Likes”), fear scenarios (Buy gold), and stuff that is just batshit crazy (pick your political site).
What’s So Bad About That?
Not being a science fiction fan, I have spent little time worrying about scenarios where computers collaborate to take over the world — we have trouble enough getting the accounting program to run reliably. The one evil scenario that I hadn’t envisioned, and the one that inspired my title, was that technologists would willingly abdicate control of user content selection to ML algorithms to exploit information about personal choices without any thought beyond making money.
The hydrogen that powers this Death Star is another of our tech creations, big data. This is data that, after being monitored, categorized, and brutally analyzed, ultimately serves to build amazingly detailed user preference files on, in the cases of Facebook and Google, literally billions of individuals. Thankfully some tech providers — Apple and Microsoft come to mind immediately — pointedly stay out of the mess of trading their customers’ information to advertisers. You can still hate them for other reasons, but they at least respect your right to basic privacy.
On this veritable ocean of information about people’s contacts, interests, habits, and who knows what else, Facebook and Google can focus some of the most powerful computer systems on earth (big profits buy lots of tech). This advertising bounty has also allowed them to operate at the leading edge of the AI revolution. While you thought AI might be all about predicting hurricanes and the like, these guys were using it to decide if cuddly puppies or the principles of white supremacy would be more likely to get you to stay tuned in.
The crazy thing is that at this point the whole thing is on automatic pilot. Once the machine is set in motion, it keeps optimizing outcomes, selecting content, and spinning out key performance indicators to track how well it’s doing. The key is, this process is totally machine-driven, and the machine doesn’t distinguish whether the preference is for Likes from your seventh-grade classmates, extreme political views, or support for a campaign of genocide against some group that has fallen out of favor with the authorities.
When the algorithm determines a certain category of content rings your bell, that bell will continue to ring. And, when it looks like it’s losing you, the machine will tack until it gets it right.
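The engagement loop described above can be sketched in a few lines of code. This is a deliberately simplified illustration — a toy epsilon-greedy bandit, not anything resembling Facebook’s or Google’s actual systems — and every category name and click-through rate in it is invented for the example. It shows the key point: the machine optimizes clicks and only clicks, indifferent to what the content is.

```python
import random

class EngagementOptimizer:
    """Toy epsilon-greedy recommender: keep serving whatever content
    category has historically kept this user clicking. Purely
    illustrative; all names and numbers are made up."""

    def __init__(self, categories, epsilon=0.1, seed=42):
        self.rng = random.Random(seed)
        self.epsilon = epsilon
        self.clicks = {c: 0 for c in categories}  # clicks observed per category
        self.shows = {c: 0 for c in categories}   # times each category was shown

    def pick(self):
        # Occasionally explore ("tack") in case engagement is slipping...
        if self.rng.random() < self.epsilon:
            return self.rng.choice(list(self.clicks))
        # ...otherwise exploit the category that "rings the bell" most.
        return max(self.clicks, key=lambda c: self.clicks[c] / (self.shows[c] or 1))

    def feedback(self, category, clicked):
        self.shows[category] += 1
        self.clicks[category] += int(clicked)

# Simulate a user who, unfortunately, clicks most on outrage content.
TRUE_CLICK_RATE = {"puppies": 0.2, "outrage": 0.6, "gardening": 0.1}
opt = EngagementOptimizer(list(TRUE_CLICK_RATE))
for _ in range(1000):
    c = opt.pick()
    # The machine doesn't care *what* the category is, only whether it clicks.
    opt.feedback(c, opt.rng.random() < TRUE_CLICK_RATE[c])

print(max(opt.shows, key=opt.shows.get))
```

Run it and the optimizer converges on the highest-engagement category — here the simulated “outrage” content — without anyone ever telling it to. The KPI is the click-through estimate; the editorial judgment is nowhere in the loop.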
Who Says You Can’t Sell Genocide?
Probably the most chilling story in the documentary was the Myanmar government’s use of Facebook to help turn the general population against the minority Rohingya people, against whom it then perpetrated a genocidal campaign that left countless dead and over 900,000 refugees.
Apparently, smartphone vendors flooding the Myanmar market in the mid-2010s were incentivized to pre-install Facebook and establish Facebook accounts for all new subscribers. That maneuver opened the floodgates of highly inflammatory anti-Rohingya government propaganda in the already divided and politically volatile region. Chalk up a victory for the cyborgs at Facebook.
I don’t think you need a map to draw a line from the “echo chamber” of precisely targeted inflammatory content on Facebook and others to the highly divisive political climate we currently face in the U.S. I don’t hold social media entirely culpable for this sad state of affairs; I place equal blame on the “legitimate” news media itself. Journalism seems to have drifted from accurate, dispassionate reporting of unbiased facts to a relentless diatribe obviously biased toward a particular political position. So, when you get down to it, what differentiates our public news media (CBS, NBC, ABC, CNN, Fox, etc.) from inane rants on Facebook?
Conclusion: Yes, We Should Care (We’ve Learned What AI Can Do)
The documentary drew heavily on inputs from people from Google, Facebook, Twitter, and other social media giants who participated directly in the initial plans to “monetize” these platforms, and clearly understood how the game is played — to wit, they invented the game. One of the most interesting parts of the documentary was that none of these very smart people could come up with a single concise, comprehensive sentence that crystalized the problem.
This practice has become so institutionalized that it touches every aspect of the industry, including the most important one: how they make money. This is not an evil that can be dispatched with the swipe of a pen, but something that must be addressed at its roots. What should we consider “fair play” in dealing with people’s personal information and the tools we can legitimately use to promote products or ideas — or should that “idea” market even be part of this? Tristan Harris, formerly Google’s design ethicist, turned to nostalgia as a way to frame the issue, recalling a quieter time when the FCC banned junk food ads during the Saturday morning kids’ shows. While Harris didn’t look old enough to remember that time, he used the story very effectively to describe a potential government role in these proceedings.
Of course, to date the government has been monumentally deficient in crafting meaningful regulation for this or any other new information technology. As I watch hour after hour of negative political ads, I can’t help but think that our elected officials are as complicit in this as anyone else. What is clear is that none of the Band-Aid solutions about the social media giants’ specific editorial practices will have any meaningful impact whatsoever.
Most of us who wound up in this tech business do want to be working for good. In this tainted tale, Facebook and Google have realized their utopian vision of enabling people to access information, share pictures and news, keep in touch with distant friends, and reconnect with people from their past, but unwittingly they have also created this dark underside that absolutely needs to be brought to heel.
As technology professionals, our role will be crucial. Like it or not, as useful as AI and ML tools are in doing good, it is increasingly clear that their use can lead to very negative societal impacts — and that these will only intensify unless somebody starts laying out some ground rules for their use. Most importantly, these rules must be clear and understandable by any moderately educated person. Confusing, overly broad, or ill-conceived regulation will do little to curb the negative impacts of AI and ML, and even less to let these technologies’ positive potential shine.
In any event, watch the film. I guarantee, it will be an eye-opener.
This post is written on behalf of BCStrategies, an industry resource for enterprises, vendors, system integrators, and anyone interested in the growing business communications arena. A supplier of objective information on business communications, BCStrategies is supported by an alliance of leading communication industry advisors, analysts, and consultants who have worked in the various segments of the dynamic business communications market.