OMMA Panel: Bad Science – Online Marketing Technology is Creating a Monster

Is the flip side of better ad targeting a nosier, more intrusive marketer who knows more about us than we care to share? It seems clear now that digital media must confront and resolve its natural tensions with privacy and security concerns before the industry can move forward. The weird science of behavioral tracking now snoops into our browsing, search, shopping, e-mail, and perhaps even online social network habits and physical location. Microsoft, Yahoo and AOL are consolidating networks and technologies in order to link together many of these user habits into deep profiles that allow precise targeting at scale. Have private companies ever been entrusted with so much information about private citizens? Despite industry assurances that personally identifiable data are not involved, marketers now have privacy advocates, the FTC, and legislators taking a hard look at whether a new age of marketing science requires new levels of regulation. What hurdles will the industry need to vault? Could digital marketing really survive if the fully opt-in system that many advocates propose is adopted? Are the best laid plans of marketing science going to hit a wall called privacy rights?

MODERATOR:
Wendy Davis, Senior Writer, MediaPost’s OnlineMediaDaily
SPEAKERS:
Eileen Harrington, Deputy Director of the Bureau of Consumer Protection, Federal Trade Commission
Jeff Hirsch, President & CEO, Revenue Science
Frank Pasquale, Professor of Law, Seton Hall Law School
Ari Schwartz, VP and Chief Operating Officer, The Center for Democracy and Technology
Mike Zaneis, Vice President, Public Policy, Interactive Advertising Bureau
Q: For years, BT campaigns have collected data anonymously and used it to serve people ads. If it is anonymous, what's the problem?
Ari: We don't consider it to be anonymous. If you have data about someone over time, that isn't anonymous. Consumers have shown interest, and the debate is about how much data is being collected about them.
Frank: The tailoring of the user experience is important. Users want to know they have control over what is being collected about them. Users should have a persistent opt-out opportunity to maintain privacy for as long as they want.
Mike: There is a danger in collecting info about people. But just because you can track someone over time doesn't mean that it's not anonymous. If it is not traceable back to a specific individual, there is less to be concerned about.

Q: Aren’t there instances where people who have been identified?
A: Mike: Yes, but lets not brand an entire industry because there is a potential for that to happen.
A: Hirsch: The debate is about what is PII and what isn’t. And that’s where the line is.

Ari: But you are taking your business model and applying it to policy. I'm suggesting that there are business models that definitely have something to fear. And that's where you start to run afoul of the benefits of interactive media.
Q: Who owns the data? And does it matter from a privacy point of view?
Hirsch: Who owns it is partially a business issue. Whoever created the data should own it.
Eileen: The issue is also about control, not just privacy. The issue in the policy arena is that the facts are obscured. Congress and the FTC have continued to try to collect really good factual information about exactly what is being collected and how it is being used. Businesses that do this are obviously reluctant to be forthcoming in meaningful ways. And the practice is constantly changing. Trying to define what is PII is a red herring. We just want to know what is being collected, how it is being used, and how it can be combined with other data. Those are the questions. The longer the debate goes on without really good, candid, reliable information, the more probable it is that good policy won't be made. There's a game of chicken being played. The longer businesses refuse to come forward, the more likely it is that policy will develop that people will complain about.
Frank: But there is also a question of what is information that should be disclosed and what are the business and trade secrets needed to conduct business. There need to be a lot more computer scientists and engineers involved in order to get at the guts of this.
Eileen: The FTC has a pretty good record for maintaining the confidentiality of trade secrets.
Mike: The process the FTC has undertaken has been the right one, and it has been very deliberative. The private meetings and town halls have been a deliberative process, focused on what the business model is and what the right approach is. We get into trouble when we start to speculate about what might be possible in the future. There are a lot of things that are possible in an open-architecture environment. We need to focus on what IS going on vs. what we can imagine in our wildest dreams.

Q: But who would have imagined that AOL would have released search data?
Mike: But people knew they were collecting it. They just didn’t expect that it would be released.
Jeff: My mother didn’t understand that AOL was tracking all of her searches.
Ari: It's about user control. If you run a business model that is counter to user control, policy makers will look into it. I'm not picking a winner or a loser; that's just the reality of the process.
Eileen: There are types of information that are very sensitive, like searching for information on health conditions and then having ads served against it. I don't think that most people know much about how data is being collected and used, or that it is explained clearly in a privacy policy. The difference between online and offline is the volume of data, and the potential for aggregation that simply isn't possible offline. What has changed since we started looking at behavioral targeting? Everything. The volume, the usage, the techniques, all of it. This is a very juicy policy issue in Washington, and business models are going to develop in a way that gives consumers clear notice and a meaningful choice, or they are going to get dumped.
Q: There has been self-regulation for years, so why are we still having this discussion?
Jeff: Not everyone follows the guidelines, and the new BT companies that leverage ISP data raised the issue again and proved that people weren't following the guidelines.
Eileen: Meaningful self-regulation means policing. There are excellent examples of meaningful policing. Most advertising policing is done through self-regulatory policy with real sanctions.
Q: How do you sanction a company that says it isn't going to follow that self-regulation policy?
Eileen: If companies don’t comply, then we take enforcement action.
Frank: I have a lot of worries about self-regulation. There are a lot of opportunities for a race to the bottom.
Eileen: The reason it makes sense, if it worked, is that technology and practices change so quickly that any regulatory policy would be static and out of date.
Ari: We've had 8 years of self-regulation, and we're still having these problems. We're going to need backstop legislation. If we can get to a self-regulatory model and back that up with legislation, we'll be much better off.
Mike: We have that already, around email and children's spaces. But it is a harm-based model, and we haven't come up with a definition of harm. The industry recognizes that we have a relationship with customers, and that we need their trust in order to have a business. That's why we need a broader initiative than just the NAI.
Q: How do you compete with companies that don't follow the practices?
Jeff: We see business models where the data is being used without a publisher’s knowledge. But that is a business issue, not a privacy issue.
Q: Does the technology involved change the standards? Does ISP targeting require different policy?
Mike: The same piece of data in the hands of one company can be traced back differently than if it is in the hands of another company.