I have resisted getting a Fitbit. As friends started getting the little devices that track your movement and tell you whether you are active enough, the idea seemed appealing: my writing schedule had ramped up, and I found myself sitting at the computer much of the day. I could use a reminder to get up and move. But I’ve not bought one. The idea of a Fitbit or any of the other measurement contraptions bothers me. The notion that life is measurable, plottable on a graph or insertable into an equation, irks.
Technology suffers from delusions of grandeur. Yes, it is powerful, but some technology creators treat it as a skeleton key for life’s minor complications, a first-world fix for first-world problems. Give the algorithm the data, and the app will figure out your next move for you.
Technological innovation regularly claims to replicate the complexities of human decision-making. Last year, news bounced around of possible Facebook auto-replies, a program that could read your previous replies and mimic you when friends posted a link or a picture. (That idea must have died, because I can’t find any reference to it now.) We have an app that can turn your thoughts into Shakespeare, one that texts to appease romance-seeking girlfriends, one for marital squabbles, another for finding nearby threesomes…
And now the Apple Watch straps all of that and more to our wrists. This is not the habit-tracking journal of old, but a tempting and vulnerable storage device. Mere weeks after the naked-star-photo hacking, and a year after the National Security Agency data scandal broke, the Apple Watch offers us the opportunity to put our identifying, financial, and health data all in one place. But what happens when actuarial number-crunchers get our heart-rate data? Of course they will get that data, quite easily if the government controls insurance: it’s easily compiled and therefore easily required on insurance forms. Failing that, the Apple Watch has an on-off switch. They’ll tap it. And if the government doesn’t tap it, rogues will hack it. Hacking is about trophies for breaking the rules, and that kind of data concentration will be too tempting to resist. Any promises of security Apple might make, I find hardly reassuring.
It wouldn’t be worth the risk even if all the data intake and analysis in these wonder apps worked. But it doesn’t. Just look up the many hilarious adventures in spell check. Predictive text produces pretty convincing post-modern language, but probably only because PoMo thought has myopic vision and logic flaws built into its system. What gives anyone confidence in computer chips’ ability to anticipate the intricacies of the human mind?
Life Imitates Science Fiction
If this all sounds like science fiction, that might be because it is science fiction—epic and canonical science fiction. Isaac Asimov’s “Foundation” trilogy is to science fiction novels what the “Lord of the Rings” is to fantasy novels or what “Star Wars” is to event blockbusters. It is the standard, and the standard is a story about the predictive power of science.
For the non-sci-fi readers, a quick recap: [Spoiler warning for anyone who hasn’t read the trilogy but wants to. I know it seems odd to have a spoiler warning for a 75-year-old book, but sci-fi readers have their rules, and I do give away the story’s central twist.]
Hari Seldon, the catalyst character, discovers psychohistory, a scientific discipline that can extrapolate future events, provided the society is large enough and remains ignorant of the discipline. As the story starts, Seldon foresees the collapse of society, followed by millennia of suffering and chaotic rule. Seldon figures he can reduce the period of suffering to a single millennium by guiding his chosen scientists to follow his psychohistory formula.
The plot has Seldon’s scientists negotiating various Seldon Crises, decision points Hari Seldon predicted. But Seldon didn’t predict the rise of The Mule, a mysterious man who defies the math of psychohistory. The Mule has a genetic trait Seldon thought extinct, one that allows him to influence others. He is also sterile, so he has no children and, therefore, no connection to the future to stay his will to power. He does as he pleases. He is unpredictable.
The Mule throws off Seldon’s calculations, and one of Seldon’s future leaders of his secret society of scientists, Preem Palver, has to restore the Seldon Plan by anticipating individuals’ behavior, a much harder task than predicting the behavior of masses of people.
The Comfort of a Grand Plan
Man often seeks the comfort of some grand and predictable plan. So Seldon’s mass data analysis hopped a portal out of sci-fi imaginations and into the real world a long time ago. For example, Paul Krugman, the influential Nobel Prize-winning economist with the bad record, went into economics because he thought economics was the closest real discipline to Asimov’s fictional psychohistory and simpler to apply than actual history. It’s economics as psychohistory for the real world, where the greatest good for the greatest number is a matter of finding the right equation.
Famous economist Friedrich Hayek called such grasping to quantify the free mind a “pretense of knowledge,” claiming that “the curious task of economics is to demonstrate to men how little they really know about what they imagine they can design.” Essentially, Hayek understood there is always a Mule—whole societies of them.
In their quest for a comforting grand plan, however, Krugman and his acolytes at least stick to Seldon’s insight that you need masses of data to predict societal events or trends. Even then, Krugman’s track record shows it doesn’t work very well. (Note the length of that post, and it’s the edited version. There are academic papers and book chapters about the things Krugman has gotten wrong. I chose The Spectator post because it discusses Krugman’s incivility.)
From Prediction to Control
But our current temptation to an app-driven life by the numbers goes beyond the pretense of knowledge. The automated-life apps use mass data to anticipate individual responses. That is science fiction even for science fiction. In the Foundation trilogy, Palver didn’t actually predict individual action. He dealt with the problems The Mule had created by running a long program of mental adjustment on key characters in order to restore the Seldon Plan. He didn’t predict individual choices; he steered them.
Even science fiction knows that the predictive powers modern techies imagine computers possess depend upon controlling individuals. By having us constantly stop and measure, the apps shrink the scope of our lives from living to capturing. That’s what makes them seem to work: our shrinking outlook.
A life well lived isn’t compatible with constant measurement. These little gadgets draw our focus to numbers, from calorie counts to Twitter followers. And when we focus on the little stats, we often miss what makes life worth living.