Lior Strahilevitz has researched and taught Internet privacy law for 15 years. When he started, most law schools didn’t offer courses in the subject—there wasn’t a ton of interest in the topic, he remembers, much less scholarship. The dog-eared casebook he issued to his students that first semester, back in 2003, still rests on the shelf in his well-appointed office at the University of Chicago Law School. It was the first edition ever published.

The internet has changed considerably in the intervening decade and a half. Strahilevitz has watched carefully, tracking the rise of social media and the debates over user privacy rights that inevitably followed. With Facebook’s Mark Zuckerberg set to testify before Congress in Washington this week, in part to atone for his company’s role in the brewing Cambridge Analytica scandal, we swung through Hyde Park to chat with Strahilevitz about the state of digital privacy in 2018. (The conversation has been edited for length and clarity.)

When you started teaching your own classes, what were you pulling from, and what were your students aware of in the field?

This was before social media really, or before what we now think of as social media. The story I started with that year was the Davos story. It was famous at the time. Laurie Garrett was her name. Garrett was a journalist who went to Davos for the World Economic Forum, and she wrote an email back to five or 10 friends, this sort of breathless, “Oh, it’s so fabulous, rubbing elbows with all these really fabulous people.” And it was one of the first things that went viral. One of her friends forwarded it to 30 more, and then 50 more got it, and over the span of a couple of days, a whole bunch of people who didn’t know Laurie Garrett the journalist were making all kinds of judgments about her, and it was getting written up in mainstream news outlets. I started with that story on the first day back in 2003, not realizing that it was basically proto-social media. It was just email!

But it was the same problem, which is a pervasive problem in privacy: almost nobody keeps all their secrets or sensitive information to themselves. You tell your best friend secrets, you tell your spouse secrets, you tell your doctor secrets. Your lawyer, your priest. And the expectation is that by sharing information with some people, you’re not opening yourself up to all that information being printed on the front page of the New York Times.

The basic Laurie Garrett problem continues to be the core of a lot of the really interesting issues in 2018. [For example,] “I took a dumb personality test on Facebook thinking that some psychologist in Great Britain was going to get some information for his academic research. No big deal. And then it turns out this information went to Cambridge Analytica, which used it to manipulate what I saw during the election. Whoa.”

Sharing with someone you trust, or sharing for a particular purpose, doesn’t necessarily mean that you lose all control over the information. That’s the core problem that I’ve been thinking about now for 16 or 17 years.

I assume the problem has been made more acute by the rise of social media and the growth of tech companies generally? Is that something you foresaw when you started?

I started to. I remember when a colleague showed me Friendster. I had written this article called “A Social Networks Theory of Privacy.” It’s probably the most cited piece of scholarship I’ve written. It was written just a few years before the rise of social media.

But it basically talked about this problem of, can you share information with someone and still have an expectation of privacy in it? And it drew on social network theory, which is largely a sociological discipline that talks about the odds that information shared between two people in a network, A and B, will ultimately permeate the network. So I used social network theory to analyze the problem of how to figure out when something jumps from being private to public and how to figure out if somebody can reasonably expect that the information they share with someone will stay private. And a couple years after that, Mark Zuckerberg became someone whose name people were hearing.

So yes, I think the problem predates social media. I don’t know if I anticipated that social media would be what it was. If I had, I’d be a richer man than I am now. [Laughs] But I actually don’t think social media is totally new. It’s just a more efficient network for disseminating this Laurie Garrett type of thing.

Having looked at this over the span of two decades now, do you think users of these technologies approach them with more skepticism than they used to?

They’ve adjusted. I do a lot of quantitative research that surveys Americans and really tries to pin down what their expectations of privacy are. There’s a big generational divide. Younger people adhere strongly to the belief that sharing information with one person does not mean sharing it with the whole world. They are used to having two Facebook accounts—one their parents see and one their parents don’t see—or managing their online persona so that there is one version of them that gets presented to prospective employers and another one that gets presented to their close friends. That’s something that strikes Millennials as completely obvious and intuitive.

Baby Boomers, at least in our data, are much less likely to believe that. I think Baby Boomers by and large believe that if it’s on the internet, everybody knows about it. And this kind of curation of their online identities that young people do is something that older people by and large just don’t get.

I’ve seen you describe Internet privacy as an “intermediate good.” Can you unpack that a little bit for me?

I think sometimes people think that privacy is the goal. And I would say privacy is never the goal. Privacy is something that furthers really important social goals, like participation in democratic decision-making, like people seeking out medical assistance and figuring out whether they are HIV positive or not.

Privacy furthers psychological interests. When we’re constantly being watched, we can’t relax, we can’t experiment with different things. Our patterns of consumption and behavior will become totally mainstream. Knowing, in the privacy of my own home, that I can go ahead and listen to some crazy band or read some book by someone who is on the extreme left or right and not all my friends will know about it—it’s that process of being able to consume information privately that allows me to become the authentic version of me, not the sort of bland version of me that is more palatable to those around me.

But I think that advocates for privacy sometimes get into trouble when they say things like, “privacy is more important than security.” Well maybe, but only because privacy gets us something. Privacy is just the state of control over information about yourself. And sometimes we like that and sometimes we don’t like that.

So we frame things as privacy-versus-speech, or privacy-versus-security, or privacy-versus-innovation. Those are the kinds of debates we often hear about in Washington, D.C., for example, or in court. You have to ask yourself: What is privacy in this particular setting getting us? Once you start understanding that privacy is an intermediate good, it allows you to more easily identify those situations in which privacy ought not to win.

The last few months, we’ve seen a backlash against some of these tech companies. We’ve seen contracts being rewritten, policies updated, terms and conditions hitting your inbox. Why do you think there’s this activity now, compared to other times? Is it a string of events causing it? What’s shifted?

I think it’s two things. The problem of identity theft—data breaches and the identity theft that stems from those breaches—has been around now for a long time and it’s not getting any better. People don’t get any sense that, as hard as banks and credit card companies and governments are trying to stop identity theft, substantial progress is being made. In any given year, around one in 20 Americans are going to be victimized by identity theft. Those numbers are not declining in any significant way. In contrast to other problems like spam email and telemarketing phone calls, the problem of identity theft hasn’t been solved. There were some huge breaches, like the Target breach, that really got people exercised.

I think the second thing is Cambridge Analytica. Facebook has had privacy goofs before; Mark Zuckerberg has apologized for privacy mistakes Facebook made at least a dozen times. And they’ve been serious! Each one generates two or three days of bad headlines for Facebook, Zuckerberg apologizes, and it goes away. Nine months later, something else happens, he apologizes again, people are exercised for three days, and then it goes away.

What’s really different about Cambridge Analytica, which Facebook didn’t see coming, is that it got tied into bitterness over the presidential election. Bitterness over the presidential election is not going away, at all. Facebook has done terrible things on privacy before, but something else has always pushed those stories out of the headlines. This time, a large swath of the American public, and particularly young people, are so upset about the election and so upset about Trump that, in ways Facebook couldn’t have anticipated, some of that blame is now moving in Facebook’s direction. That’s why I think this is probably the gravest danger that Facebook has faced since the company’s IPO. They haven’t figured out a way to make this story go away. And it’s not clear that they can, given the sustained scrutiny about the 2016 election.

Having looked at all this quantitative data over the years, do you think that energy and interest will be sustained past this election-related surge?

I don’t know yet, in part because we haven’t run a national survey since Cambridge Analytica. What we’ve seen over time is that, for the most part, attitudes about privacy are pretty stable. They changed a little bit after the Edward Snowden revelations, but not hugely.

Why do you think users have been so passive when it comes to privacy issues and understanding what they’re getting into? Or maybe that’s a mischaracterization? But that’s my sense of it, at least.

Initially, people really loved Facebook. There’s a reason that company was growing like crazy: 2 billion people use it. People really enjoy connectivity. They loved finding old friends, loved seeing what acquaintances from high school were up to, loved checking out their old significant others. I think that delivered a lot of value to people.

One of the problems Facebook is trying to figure out is that the newer version of Facebook is not making people as happy as Facebook did back in the good old days. That’s a business problem for them. It’s not clear it’s a legal problem for government to solve. Government would probably make it worse.

When social media is exciting and making you feel more connected to the world, you’re not going to care so much about privacy. Then, once Facebook is making you unhappy and it’s being used in a way that’s manipulative, or you’re just seeing people in your social circle fighting all the time about politics, Facebook doesn’t go in with this deep reservoir of goodwill. People are going to say, “Oh, they aren’t respecting my privacy, that’s a problem.”

Do you think that the regulations in Europe that are coming on board will have any impact stateside? Or are those legal systems too different?

Those are cutting-edge issues in litigation right now. One issue that’s percolating up through the European courts is the question of the “right to be forgotten,” and whether the “right to be forgotten” has what we call extraterritorial application.

The basic problem is this: European citizens under the General Data Protection Regulation, which is the regulatory framework for privacy in Europe, have the right to remove information about them that’s old or no longer relevant or misleading. Old embarrassing stuff, like a bad debt that I incurred seven or eight years ago. I can petition Google and, if I’m successful, then Google will de-index that result from eight years ago.

OK, so far so good.… The problem is that people are still going to want that information. So let’s say someone is applying to be a nanny in Paris, and an employer in Paris would like to know more about this person who might be watching their kids. So they contact a company in America that does a Google search, and the Google search that comes back is not going to reflect the “right to be forgotten”—it’ll reflect whatever Google thinks is relevant. And it may be that the information is visible in the United States but is not visible in Europe.

To a lot of European privacy regulators, that’s a big problem, and a loophole in the “right to be forgotten,” one that justifies requiring Google to remove, worldwide, search results that are found to be irrelevant. Google’s position, which resonates with how a lot of Americans would think about this issue, is this: If Google has to censor its search results in France in order to comply with French or European law, so be it. But now you’re telling us in America that there’s information that we’re not going to have access to because of French law? No no no no. We believe in free speech, free Internet!

That’s the challenge. I’ve got no idea how it’s going to get resolved. But it’s an almost irreconcilable conflict.

When it comes to the higher-profile companies, the bigger tech companies, is it your sense that they are starting to take this stuff more seriously than they have in the past? Or are they just doing enough to save face while preserving their business models?

There have always been companies that care more about this stuff and companies that care less. I would say that in the past, the market has not rewarded companies that were really good at privacy very much. There’s a great search engine if you care about privacy called DuckDuckGo. Nobody uses it. Its market share is way less than 1 percent of web searches. Companies see that DuckDuckGo’s market share is way below the market share of other search engines and they say, “Do we want to go for the one-tenth of 1 percent market share, or do we want to go for the 30 percent market share?” There are a few exceptions, but for the most part, doing privacy really well doesn’t get rewarded by the market.

And in this country, there’s been relatively lax regulation and relatively lax enforcement of the laws we do have on the books.… The Federal Trade Commission, which is the main government agency responsible for enforcing privacy, has no authority to fine anyone a penny until that company has entered into a consent decree (basically, a settlement) that gives the FTC the power to pursue monetary fines. And even when the FTC has this power, the biggest fines they are going for are $20 million, $40 million. It sounds like a lot of money, but to Google or Facebook, it’s not a lot of money.

So, think about that. We’ve charged a federal agency with protecting us against privacy invasions and we’ve given it essentially no meaningful enforcement power for first violations. You get one free massive goof, and then the FTC might be able to force you into a settlement that gives it the power to fine you going forward. That seems to indicate that Washington, D.C. doesn’t take consumer privacy all that seriously.

There are other things that tech companies worry about. They worry about class action attorneys more than they worry about the FTC. They worry a little bit about state attorneys general. They worry a lot more about ticking off all their customers. What has Facebook really worried is not so much the FTC; it’s the Delete Facebook movement. That would really be a problem for Facebook. Their user engagement is already dwindling. Figuring out a way to generate that goodwill—that’s a really [hard] problem, and I don’t envy them the task.

Before I let you go, I’d be curious how your Internet use has changed over the years, having studied all this. Do you feel like you use the Internet the way you would have if you were completely ignorant of it all?

No. [Laughs] I’m not a Delete Facebook person—I never joined. I never joined because I was really wary about exactly how much information would be collected, and how valuable that information about everyone I’m connected with would be. And I don’t feel like an idiot for not having joined Facebook. I did join Twitter, so it’s somewhat hypocritical. But I assume basically everything I do on Twitter is fully public and that the whole world can see it, and so I act accordingly. And I think people get into big trouble on Twitter when they assume people can’t see it, that there is some expectation of privacy.

I’m wary of installing apps on my smartphone. I check out the permissions. I basically never authorize anything to use my camera or microphone, because a lot of abuses have come to light where people authorize the camera or microphone on their phone and it gets used for things they really can’t imagine.

Tesla is recording in real time lots and lots of driver information, about reaction times, about your propensity to drive in a safe or unsafe way, and they are amassing that in a proprietary database. What are they going to do with that? God only knows. But it wouldn’t surprise me if, down the road, that data gets used to inform pricing models for life insurance, pricing models for health insurance if you’re on the individual market, creditworthiness. There are all kinds of things that correlate with particular sorts of driving behavior. People have no idea what they are doing when they readily agree to give up that data.

I live a pretty boring lifestyle. I’d probably be better off sharing more data. [Laughs] It would probably get me a better rate on a mortgage or something. But I’ve just decided I don’t want to do it.