IN THE EARLY 2000s, Alex Pentland was running the wearable computing group at the MIT Media Lab—the place where the ideas behind augmented reality and Fitbit-style fitness trackers got their start. Back then, it was still mostly folks wearing computers in satchels and cameras on their heads. “They were basically cell phones, except we had to solder it together ourselves,” Pentland says. But the hardware wasn’t the important part. The way the devices interacted was. “You scale that up and you realize, holy crap, we’ll be able to see everybody on Earth all the time,” he says—where they went, who they knew, what they bought.

And so by the middle of the decade, when massive social networks like Facebook were taking off, Pentland and his fellow social scientists were beginning to look at network and cell phone data to see how epidemics spread, how friends relate to each other, and how political alliances form. “We’d accidentally invented a particle accelerator for understanding human behavior,” says David Lazer, a data-oriented political scientist then at Harvard. “It became apparent to me that everything was changing in terms of understanding human behavior.” In late 2007 Lazer put together a conference entitled “Computational Social Science,” along with Pentland and other leaders in analyzing what people today call big data.

In early 2009 the attendees of that conference published a statement of principles in the prestigious journal Science. In light of the role of social scientists in the Facebook-Cambridge Analytica debacle—slurping up data on online behavior from millions of users, figuring out the personalities and predilections of those users, and ostensibly using that knowledge to influence elections—that article turns out to be prescient.