

Can Predictive Technology Make Us Less Predictable?


Over at the NY Times, Anna North asks if we can become more creative by using an unusual search engine called Yossarian that purports to help us see things in new ways—ways that go beyond the predictable associations we’re inclined to make when thinking about people, things, ideas, events, etc. What fascinates me about this possibility is that in order for it to be true, prediction needs to be the antidote to predictability. Without inferring where your mind is prone to wandering, neither a person nor an algorithm stands a chance of presenting something to you in a new light.

In everyday life, predictability is associated with consistency. In many cases consistency is a good thing. If your friends are so reliable that you can confidently predict they’ll stay loyal and true, you’re in with a good crowd. If you can predict how long it will take you to drive to work, you can reliably arrive on time without needing to get up earlier than necessary or feeling rushed.

Prediction has become an important feature in information and communication technology. Auto-complete makes searching on Google such an efficient experience that it can feel like mind-reading. Prediction also enables recommendation engines on services like Amazon and Pandora to determine what we’d like to purchase or listen to. And prediction makes it possible for fitness trackers to recommend when we should take a nap and Google Now to anticipate the weather we’ll encounter when traveling. Looking toward the future, some are hoping that when the Internet of Things matures, our refrigerators will recognize when we’re running out of groceries and contact stores to replenish our stock before it runs out.

While inferring what we want can save us time, make it easier for us to accomplish goals, and expedite finding things we expect will bring us pleasure, predictive technology can also create problems. Privacy scholars like Ryan Calo note that if marketers can use big data to predict when we’re susceptible to lowering our guard, they can capitalize on our vulnerabilities. A related concern was expressed when Facebook ran its infamous emotion contagion experiment. If social media companies can predict, with ever-finer precision, what makes users eager to engage with their platforms, they can design features that manipulate us accordingly.

On a more fundamental level, Cass Sunstein and others who discuss filter bubbles have expressed concern that algorithms that personalize information to fit our expectations of relevance can be bad for democracy: the echo chambers they create can incline us toward embracing narrow—if not extremist—worldviews, eschewing diversity, and favoring conformity.

I’ve recently joined the cadre of critics by arguing that there’s a significant cost when predictive texting—as exemplified by QuickType, a new feature in Apple’s iOS 8 operating system—removes friction from communication by having algorithms guess what we’re likely to say. When your devices do the work of being you, you’re susceptible to becoming a predictable facsimile of yourself who gives others your second best.

In this context, predictability raises deep philosophical questions. Nicholas Carr wonders: “Where does the algorithm end and the self begin?” David Holmes states:

Philosophers have debated the nature of ‘free will’ for centuries, with some arguing that it’s the highest expression of a human’s selfhood and others arguing that free will is an illusion — our choices are merely the product of millennia of genetic variation and future circumstance. Now as algorithms continue to push and pull on our digital actions, and shape our behavior in their own data-corrupted images of ourselves, these debates over free will and identity may zoom off on an entirely new vector in the digital age. First humans claimed God pulled the strings. Then we said it was genetics. Now the agents of our fates are slowly becoming corporate-controlled robots.

Ultimately, then, prediction is complicated. Without prediction, we’d be unable to function and would experience the world as an unmanageable, chaotic haze. But when certain types of predictive software guide our behavior, the negative tradeoffs trump the benefits. The line separating the helpful from the problematic can be hard to find. Even something as banal as predictive shopping might weaken our anticipatory dispositions.

To get a better sense of why some people are optimistic about predictive technology, I talked with Jarno Mikael Koponen, co-founder of the predictive discovery app Random and advocate of the idea that in order to “unleash the power of data,” “humanists, designers and data scientists” should all be contributing to conversations about how to build “adaptive and predictive systems.”

Jarno told me he’s concerned about the filter bubbles caused by algorithms, editorial curators, and our own personal outlooks: they “need to be pierced” in order for us “to explore the unknown.”

There needs to be a curiosity-provoking experience—an intervention—that enables you to go beyond the familiar. An experience that exposes you to new choices and diversity. I believe that to create such an experience we need an adaptive interface and system that learns about the things that might surprise you.

As I understand it, this appears to be Jarno’s main insight. If technology presents people with information in a purely random manner, there’s no reason to expect they’ll have any interest in what they’re shown or find it unusual. Let’s call this the predictability paradox: If people have predictable interests and are more likely to consider something new (thereby behaving contrary to stereotype) if the novelty has some connection to things they’re already keen on, pivoting folks outside of their comfort zones requires pinpointing where the zones end and determining what lies just beyond them—or at least not so far away as to seem utterly alien.
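Jarno doesn’t describe an implementation, but the intuition behind the predictability paradox can be sketched in a few lines. The Python sketch below is purely illustrative: the interest vectors, the cosine-similarity measure, and the low/high thresholds are my own hypothetical stand-ins, not anything drawn from Random or Yossarian. Instead of ranking candidates by maximum similarity to a user’s interests (too familiar) or serving them at random (too alien), it keeps only the items in a middle band of similarity.

```python
# Hypothetical sketch of the "predictability paradox": surface items that
# sit just beyond a user's comfort zone, rather than the most similar
# items (echo chamber) or random ones (alien noise). All vectors and
# threshold values are illustrative assumptions, not a real system.

import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length interest vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def stretch_recommendations(profile, candidates, low=0.3, high=0.6):
    """Keep items related enough to feel relevant (>= low) but
    unfamiliar enough to provoke curiosity (<= high), surfacing the
    least familiar of those first."""
    scored = [(cosine_similarity(profile, vec), item)
              for item, vec in candidates.items()]
    band = [(score, item) for score, item in scored if low <= score <= high]
    return [item for score, item in sorted(band)]

# Toy data: a reader whose interests skew toward philosophy and psychology.
# Vector dimensions: philosophy, psychology, art, sports.
profile = [0.9, 0.7, 0.1, 0.0]
candidates = {
    "more philosophy essays": [1.0, 0.5, 0.0, 0.0],  # too familiar (~0.98)
    "surrealist painting":    [0.3, 0.2, 0.9, 0.0],  # just beyond (~0.45)
    "sports statistics":      [0.0, 0.0, 0.1, 1.0],  # utterly alien (~0.01)
}
print(stretch_recommendations(profile, candidates))  # ['surrealist painting']
```

On this toy data, only “surrealist painting” lands in the band: connected to the reader’s interests through a shared dimension, but not so close as to be an echo, and not so distant as to seem utterly alien.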

Now, I’m not so sure that people can’t develop new tastes and sensibilities by being thrown into the deep end of the unfamiliar. Still, I do have personal experience that suggests gentle nudging can be effective. I regularly teach an Introduction to Philosophy course that exposes students to what I consider to be some of the most interesting ideas ever formulated. But the majority of students who take this class aren’t majoring in Liberal Arts subjects and would, I suspect, find the subject matter alienating if I didn’t continually make connections between what famous thinkers say in a rarefied way and what we collectively experience and care about on an everyday basis. This means I’ve got to constantly predict what students will find relatable and what they’ll deem weird, and come up with examples, stories, and concepts that bridge the gaps.

It will take time to see whether tools like Yossarian or Random can genuinely expand our horizons. The key thing to keep in mind, though, is that while prediction might be necessary to combat predictability, software by itself can only do so much. Unless we’re already internally motivated to see things anew, it won’t matter what we view. If it isn’t clear why the things we see are significant, we’ll look right past them. And unless we’re open to changing our minds—which can be immensely difficult, given the power of prejudice and cognitive bias—we’ll dismiss new insights that challenge what we predictably agree with.