Thursday, July 14, 2011

Rebecca MacKinnon

This is the shit! This talk just about sums up in one place the things about Big Tech that are missing from every piece of softball coverage of Big Tech. I add Rebecca MacKinnon to the short list of Big Tech critics: Cory Doctorow, Eben Moglen, Eli Pariser, Rebecca MacKinnon, Marc Rotenberg and EPIC, the EFF...


I happen to think that a lot of the time, the toughness of the coverage is tempered by starry-eyed fondness for the coolness of the technology, or even by an awareness that the newspaper/journalism group has interconnections with Facebook, Apple, Amazon, Google, Microsoft, Twitter, and relies on them for its bread and butter in the form of the underlying delivery platform for its content. It's hard to fight the powers when they're big, have their tentacles most everywhere, and everyone has been told that this is the new way to do *promotion* of anything. I presume that if I went to the Speakers Bureau web pages for the organizations that represent Pariser or MacKinnon or Doctorow, they'd all have the F and the T in their sidebar.

There is nothing any of us can do about it in 2011. You still use Facebook in the short term because that is where loads of people will be to read what you say. But that sense of economic need, expressed as a massive vote of approval made of millions (?) of little one-off decisions that "I need to be there because my people are there," tends to keep the inertia going. It also provides fuel for the rebuttal that hey, we must be doing something right because we are massively *popular*. I would tend to want to be there too. I admit it. The fear of "missing out" stops me jumping ship.

Also, I think it's terrible that third-party services require you to throw in with one of a very small number of Big Tech entities. Khan Academy, the popular and hypergenerous open learning website and organization, as of right now only supports accounts from Facebook or Google. So you have two choices: use Big Tech and, in a tiny, incremental way, be complicit in all of their creepy initiatives, or be shut out from participating.

Thursday, July 7, 2011

Negative space and surveillance

This idea has been rattling around my head for a while. Let's say you have five people in a room. Four have either a favorable or a shrugging attitude toward surveillance, whether in the form of cameras at intersections, GPS tracking in iPhones, or other manifestations. The fifth person is opposed to it. Surveillance in this scenario works by people volunteering to report the GPS coordinates of their silhouettes. Let's say this takes place in 2050: GPS is finer-grained by then, and if you favor radical "sharing", you can elect to broadcast a three-dimensional map of your silhouette 24 hours a day. A certain part of the population thinks this is cool. The fifth person is a privacy advocate, or worries about false positives, and so has taken advantage of government and/or corporate opt-out mechanisms. The point of this picture is that the fifth person's silhouette may still be derivable from the silhouettes of the other four and the rectangular box they are all sitting in.

I think this can be extrapolated to some other kinds of things. I don't have the data but intuitively I feel that process of elimination is something to bear in mind. Suppose that at some point in the near future, we have additional intrusive surveillance systems in U.S. society but our way of trying to be fair about it is to institute "Do-Not-Track" lists. Depending upon other factors, the person who has opted out may still be trackable by something like the "negative space" story.
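Here is a minimal sketch of that process-of-elimination idea, assuming a purely hypothetical setup: a room divided into a grid of floor cells, four people who broadcast their positions, and a building sensor that reports only a total occupancy count. None of these names or systems are real; it just shows how opting out of broadcasting doesn't keep you out of the picture.

```python
# Toy sketch of "negative space" inference (hypothetical scenario, not any real system).
# Four people opt in and broadcast which cell of a small room grid they occupy.
# A building sensor reports only the total number of occupied cells.
# The opted-out fifth person's location falls out by process of elimination.

ROOM_CELLS = {(x, y) for x in range(3) for y in range(3)}  # 3x3 grid of floor cells

# Self-reported positions from the four people who opted in
broadcasts = {
    "alice": (0, 0),
    "bob": (0, 2),
    "carol": (2, 0),
    "dave": (2, 2),
}

# The building's occupancy sensor says five cells are occupied in total
total_occupied = 5

def infer_opt_out_positions(broadcasts, total_occupied, room_cells):
    """Return the candidate cells for anyone who did NOT broadcast."""
    reported = set(broadcasts.values())
    hidden_count = total_occupied - len(reported)
    if hidden_count <= 0:
        return set()  # everyone is accounted for
    # The non-broadcasting occupant(s) must be somewhere in the remaining cells
    return room_cells - reported

candidates = infer_opt_out_positions(broadcasts, total_occupied, ROOM_CELLS)
print(f"The opted-out person is in one of {len(candidates)} cells: {sorted(candidates)}")
# With 4 of 9 cells reported, opting out only narrows you to 5 cells,
# and every additional opt-in shrinks that "negative space" further.
```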

Wednesday, July 6, 2011

More "Scunthorpe" syndrome, in China

More false positives come under the dragnet of internet censorship in China. This time, rather than "Scunthorpe," it's searches for "Jiang," the character that also means "river," according to this piece on TPM.

"If you want to discuss rivers of any kind online right now in China, you're going to be out of luck," writes Sarah Lai Stirland.

As is often the case, not only is something legitimate swept up in the dragnet, but the dragnet is also really arbitrary. "For example," Stirland writes, "Weibo has also blocked the phrase 'myocardial infarction,' but not the phrase 'heart attack.'"
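To make the mechanism concrete, here is a toy sketch of that kind of keyword blocklist. The blocked terms are assumed from what the TPM piece reports, not Weibo's actual rules, and the queries are hypothetical.

```python
# Toy keyword blocklist (assumed from the TPM piece; not Weibo's real implementation).
BLOCKED_TERMS = {"江", "心肌梗塞"}  # "jiang"/"river" and "myocardial infarction"

def is_censored(query: str) -> bool:
    # Block any query containing a blocked term anywhere inside it.
    return any(term in query for term in BLOCKED_TERMS)

for q in ["长江", "江河湖海", "心脏病发作"]:  # Yangtze River; "rivers, lakes, seas"; "heart attack"
    print(q, "->", "BLOCKED" if is_censored(q) else "ok")
# Every river-related phrase is swept up, while "heart attack" sails through.
```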

Friday, July 1, 2011

Scunthorpe / Doctorow

This Cory Doctorow talk is interesting in a few ways. To start with a simple false-positive problem that I had never heard of before, there is something he mentions in passing called "the Scunthorpe problem": the town of Scunthorpe is hard to search for from within corporate firewalls that crudely, dumbly censor certain strings and substrings, because a naughty word happens to be embedded in its name. One of the impacts of false positives is just extra nuisance. People were also remarking on this a year or so ago when Google started their "instant" search: for certain naughty words and subjects, it won't search instantly, and you have to do the awesome work of pressing return yourself. And there are also lots of false positives and weird inconsistencies in Google's inappropriateness dragnet. ("'Cocaine' is blocked, but 'marijuana' and 'heroin' are not," said Bianca Bosker in this Huffington Post story.)
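As a concrete illustration of how such a crude filter misfires, here is a toy substring blocklist of my own invention (not anyone's actual firewall or Google's rules):

```python
# Toy illustration of the Scunthorpe problem: a made-up substring blocklist,
# not any real firewall's actual rules.
BANNED_SUBSTRINGS = ["cunt", "ass"]

def is_blocked(text: str) -> bool:
    # Crude, dumb matching: flag the text if any banned string appears anywhere in it.
    lowered = text.lower()
    return any(bad in lowered for bad in BANNED_SUBSTRINGS)

for query in ["Scunthorpe town council", "classic assessment", "harmless phrase"]:
    print(f"{query!r} -> {'BLOCKED' if is_blocked(query) else 'ok'}")
# 'Scunthorpe town council' -> BLOCKED  (false positive: the town name contains a banned string)
# 'classic assessment'      -> BLOCKED  (false positive: so do "classic" and "assessment")
# 'harmless phrase'         -> ok
```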

I am thrilled that Doctorow gave this talk and I want to stick my middle finger in the air. When something is *diffuse*, the diffuseness can be used as a benefit by someone. If the action and the consequence are far apart, it's much harder to make a case that one is causing the other. Doctorow talks about eating junk and then gaining weight. The consequence comes on a whole other day and by that time, the fifteen different instances when you ate well or ate poorly are often far apart and hard to tie together in cause and effect. And Doctorow is suggesting, among other points, that Facebook takes advantage of a similarly diffuse timeline.

The way that some privacy conversations are contextualized as toy privacy, inherently comedic, with nothing really at stake, irritates me. For example, observations of a supposed social phenomenon in which kids/teenagers are undergoing a paradigm shift in their approach to privacy are usually expressed in amused or detached terms, and although it is a related topic, there is a latent grin, a latent sense that nothing serious goes on there. The only things at stake, the examples used in the discussions, are party photos of someone wearing a punchbowl on their head, or this very irritating contextualization where a parent scratches their head that a kid is writing candidly about sex, or about gossip. It's like the parent feels an obligation to be permissive and merely report rather than act.

Also, the latent grin comes from the parent's sense that it's only the new generation's equivalent of whatever they themselves did. Looked at Playboy when they weren't supposed to. Of course this is an extremely broad statement and every family is different. But there is a giant shrug going on with regards to Facebook and things like it.

And there's also something there which is a little hard to articulate, but it feels to me like a divide-and-conquer. Old people sheepishly and apologetically devalue themselves, and play up their differences from younger people rather than play up their common interest in fighting back. Old people are embarrassed about young people's deeper grasp of new technology, and feel, in some cases, like it's futile to try to intervene. They would just be lost. Cue the cliche about parents deploying censorship software, and the general sensation that the kid will always find a way around it, because the kid knows 100 times more about computers than the parent does. And what this does is cloud the possibility that a big sell-out, either to commerce or to DHS or both, *hurts everyone*. The stakes are high, not silly, not solely the stuff of comedy-of-errors or a socially-based Trauma Lite, in which the kid is sullen for two days and then gets over it.

I'm being vague about what the hell the harm would be, specifically. I'm hurting for specific examples and if I can't think of any, maybe it isn't really a problem or isn't as serious as I'm making it out to be. Panopticon - SO WHAT? Intuitively I think there is harm, but I'm trying to always substantiate.

I'm looking for something more tangible than "it's creepy." Sellouts to commerce are bad because databasing is a precursor to identity theft, and because commerce doesn't give a shit if you take up smoking and die of lung cancer.

Sellouts to DHS are bad because there is a danger that DHS will do harm to false positives, either accidentally or deliberately, in the service of their "One-Percent Doctrine," bordering on a "The Lottery" doctrine. And even if a particular government has an excellent record of ethics at a given time, the infrastructure remains, the privacy norms have been eroded, and they stay eroded when new people come into power who are less ethical.

There are genuine social interconnections of compassion and concern when someone you know is a false positive who gets hurt, and these social networks are not gated off by age lines. This is our common interest from 9 to 99. If a 14-year-old girl seems inherently trite, with trite interests and trite language, remember that the children of the false positives sent to Guantanamo for a bounty are also young people, and I think it's entirely possible that ubiquitous and accepted surveillance will play a role in the next round of such an atrocity. Would it be better, finer, more tailored because of the general climate of disclosure? I don't know. It depends on where you want to set your bar. This dovetails with some other posts here about how it becomes socially acceptable to let "just one" false positive incident happen now and again, especially if the false positive's skin color, economics, or culture are different from the observer's own.

Thank you Andy, for sending me the link! Also, discussions with Christopher are extremely influential in writing this blog.