Saturday, December 15, 2007

Court upholds using the Fifth Amendment to refuse to disclose your password

The U.S. District Court for the District of Vermont has held that you can invoke the Fifth Amendment privilege against self-incrimination and refuse to give up the password you have used to encrypt data files.

Here are the essential facts in United States v. Boucher, 2007 WL 4246473 (November 29, 2007):

On December 17, 2006, defendant Sebastien Boucher was arrested on a complaint charging him with transportation of child pornography in violation of 18 U.S.C. § 2252A(a)(1). At the time of his arrest government agents seized from him a laptop computer containing child pornography. The government has now determined that the relevant files are encrypted, password-protected, and inaccessible. The grand jury has subpoenaed Boucher to enter a password to allow access to the files on the computer. Boucher has moved to quash the subpoena on the grounds that it violates his Fifth Amendment right against self-incrimination.

The district court held that Boucher could invoke the Fifth Amendment and refuse to comply.

I did an earlier post about this general issue and, as I explained there, in order to claim the Fifth Amendment privilege the government must be (i) compelling you (ii) to give testimony that (iii) incriminates you. All three of these requirements have to be met or you cannot claim the Fifth Amendment privilege. (And if you voluntarily comply by giving up your password, you can’t try to invoke the privilege later because a court will say that you were not compelled to do so – you did so voluntarily.)

In the earlier post or two I did on this issue, I was analyzing a scenario, which has come up in a few instances (though not in any reported cases I’m familiar with), in which someone is stopped by Customs officers while entering or leaving the U.S. In my scenario, which is the kind of circumstance I’ve heard about, the officers check the person’s laptop, find it’s encrypted and demand the password. The question then becomes whether the laptop’s owner can (i) invoke the Fifth Amendment privilege or (ii) invoke Miranda. As I’ve written before, to invoke Miranda you have to be in custody, and you arguably are not here. And to be “compelled” under the Fifth Amendment, you have to be commanded to do something by judicial process or some analogous type of official coercion (like losing your job); you probably (?) don’t have that here, either.

But in the Boucher case, he had been subpoenaed by a federal grand jury which was ordering him to give up the password, so he was being compelled to do so.

As to the second and third requirements, the district court held that giving up the password was a testimonial, incriminating act:
Compelling Boucher to enter the password forces him to produce evidence that could be used to incriminate him. Producing the password, as if it were a key to a locked container, forces Boucher to produce the contents of his laptop. . . .

Entering a password into the computer implicitly communicates facts. By entering the password Boucher would be disclosing the fact that he knows the password and has control over the files on drive Z. The procedure is equivalent to asking Boucher, `Do you know the password to the laptop?’ . . .

The Supreme Court has held some acts of production are unprivileged such as providing fingerprints, blood samples, or voice recordings. Production of such evidence gives no indication of a person's thoughts . . . because it is undeniable that a person possesses his own fingerprints, blood, and voice. Unlike the unprivileged production of such samples, it is not without question that Boucher possesses the password or has access to the files.

In distinguishing testimonial from non-testimonial acts, the Supreme Court has compared revealing the combination to a wall safe to surrendering the key to a strongbox. The combination conveys the contents of one's mind; the key does not and is therefore not testimonial. A password, like a combination, is in the suspect's mind, and is therefore testimonial and beyond the reach of the grand jury subpoena.
United States v. Boucher, supra.

The government tried to get around the testimonial issue by offering “to restrict the entering of the password so that no one views or records the password.” The court didn’t buy this alternative:
While this would prevent the government from knowing what the password is, it would not change the testimonial significance of the act of entering the password. Boucher would still be implicitly indicating that he knows the password and that he has access to the files. The contents of Boucher's mind would still be displayed, and therefore the testimonial nature does not change merely because no one else will discover the password.
United States v. Boucher, supra.

So Boucher wins and the court quashes the subpoena, which means it becomes null and void and cannot be enforced.

I applaud the court’s decision. I’ve argued for this outcome in chapters I’ve written for a couple of books and in some short articles (and in discussions with my students). I think this is absolutely the correct result, but I strongly suspect the government will appeal the decision. Let’s hope the appellate court goes along with this court.

There is, again, a caveat: Remember that Boucher had been served with a grand jury subpoena so there was no doubt he was being compelled to give up the password. The airport scenario is much more difficult, because compulsion is not as obvious. We won’t know whether anyone can take the Fifth Amendment in that context unless and until someone refuses to provide their password to Customs officers and winds up litigating that issue in court.
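
One last, non-legal aside: the combination-versus-key analogy the court relied on maps fairly cleanly onto how password-based encryption actually works, which is why data like the files on Boucher’s laptop is inaccessible without the passphrase. Here is a minimal sketch of the idea in Python (the passphrases and file contents are invented, and it assumes the third-party “cryptography” package is installed); it is only an illustration, not a description of the software at issue in the case:

import base64
import os

from cryptography.fernet import Fernet, InvalidToken
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def key_from_passphrase(passphrase: str, salt: bytes) -> bytes:
    # Derive the symmetric key from a passphrase known only to the owner.
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt, iterations=480_000)
    return base64.urlsafe_b64encode(kdf.derive(passphrase.encode()))

salt = os.urandom(16)
ciphertext = Fernet(key_from_passphrase("only-the-owner-knows-this", salt)).encrypt(b"contents of drive Z")

# Without the passphrase, all anyone else holds is unreadable ciphertext.
try:
    Fernet(key_from_passphrase("wrong-guess", salt)).decrypt(ciphertext)
except InvalidToken:
    print("wrong passphrase: the files remain inaccessible")

The point of the sketch is simply that the decryption key is derived from something that exists only in the owner’s memory; without it, the government holds ciphertext it cannot read.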

Wednesday, December 12, 2007

"Consent to Assume Online Presence"


I just ran across something I’d not seen before: a law enforcement (FBI) form called “Consent to Assume Online Presence.”

Before I get to the form, what it does and why it’s new to me (anyway), I should explain what I mean by “consent.”

As I wrote in an earlier post, the Fourth Amendment creates a right to be free from “unreasonable searches and seizures.” That means, among other things, that law enforcement officers do not violate the Fourth Amendment when they conduct a search or seizure that is “reasonable.”

As I also explained in that post, a search or seizure can be reasonable in either of two ways: (i) if it is conducted pursuant to a warrant (search warrants for searching and seizing evidence, arrest warrants for seizing people); or (ii) if it is conducted pursuant to a valid exception to the warrant requirement. As I explained in the earlier post, consent is an exception to the warrant requirement. With consent, you essentially waive your Fourth Amendment rights and let law enforcement search and/or seize you or your property.

Unlike many of the exceptions to the warrant requirement, consent does not require that the officer have probable cause to believe he or she will find evidence of criminal activity in the place(s) they want to search. Probable cause is irrelevant here because you’re voluntarily giving up your Fourth Amendment rights.

To be valid, consent must be voluntary (so police can’t threaten to beat you until you consent) and it must be knowing (which means you have to know you had the right not to consent . . . but courts presume we all know that, so an officer doesn’t have to tell you that you have the right NOT to consent for your consent to be valid).

Officers can rely on oral consent (they ask if you’ll consent to let them search, say, your car, you say “ok” and they proceed, having gotten your consent), but there’s really a preference in law enforcement for having the person sign a form. Consent is, after all, a kind of contract: You agree to give up your Fourth Amendment rights and that creates an agreement with law enforcement under which they will search the property for which you have given consent. If officers rely on oral consent, the person can always say later that they didn’t consent at all or didn’t consent to the scope of the search that was conducted (i.e., the officers searched more than the person agreed to let them do). So officers, especially federal officers, generally have the person sign a form, a “Consent to Search” form.

Enough background. Let’s get to the “Consent to Assume Online Presence.” As far as I can tell, the “Consent to Assume Online Presence” form has so far been mentioned in only two reported cases, both federal cases and both involving FBI investigations.

In United States v. Fazio, 2006 WL 1307614 (U.S. District Court for the Eastern District of Missouri, 2006), the FBI was conducting an online investigation of child pornography when they ran across an account (“salvatorejrf”) that was associated with the creation and posting of “four visual depictions of naked children.” United States v. Fazio, supra. They traced the account to Salvatore Fazio and, after some more investigation, obtained a warrant to search his home.

FBI agents executed the warrant and seized computers, CDs and other evidence. One of the agents, Agent Ghiz, also wound up interviewing Fazio, who said “he was acting in an undercover capacity to identify missing and exploited children” and “admitted that he had downloaded images of children from the internet and uploaded them on other sites.” United States v. Fazio, supra. According to the opinion, during the interview
Agent Ghiz did not accuse the defendant of lying nor did he use any psychological ploys to encourage Mr. Fazio to talk. . . . According to Agent Ghiz, [Fazio] never attempted to leave during the execution of the search warrant or the interview. Toward the conclusion of the interview, Agent Ghiz asked [Fazio] if he would be willing to continue to help in the investigation by allowing the FBI to use his online identity to access other sites to help investigate other child pornography crimes. [Fazio] was willing to cooperate and gave consent to the FBI's assuming his online presence. Government's Exhibit 8, a copy of a form entitled Consent to Assume Online Presence, was introduced at the evidentiary hearing. It was signed by [Fazio] in the presence of Agent Ghiz.
United States v. Fazio, supra. The evidentiary hearing came when Fazio moved to suppress the evidence the agents had obtained.

The other case is much more recent. In United States v. Jones, 2007 WL 4224220 (U.S. District Court for the Southern District of Ohio, 2007), the FBI was conducting another investigation into the online distribution of child pornography. In the course of the investigation, they ran across an account that was registered to Joseph Jones. United States v. Jones, supra. They obtained a warrant to search his home and went there to execute it, but no one was home. The agents and some local police officers then went looking for Jones, whom they eventually found talking to two other men at the end of a driveway in what seems to have been a rural area. United States v. Jones, supra.

An FBI agent, Agent White, explained to Jones why they were looking for him and, at his request, showed him the search warrant for the property they had identified earlier. I won’t go into all the details, but Jones wound up consenting to their searching another location with which he also had ties. United States v. Jones, supra. He gave his consent to the search of that property by, as I noted earlier, signing a “Consent to Search” form, a traditional form. The FBI agent also had brought a “Consent to Assume Online Presence” form and Jones wound up signing that, too:

[Agent] White and [Jones] completed the `Consent To Assume Online Presence’ form. This form gave the FBI permission to take over [Jones’] `online presence’ on Internet sites related to child pornography so agents could discover other offenders. [Jones] filled in the spaces on the form calling for his online accounts, screen names, and passwords, and he signed and dated the form at the bottom.

United States v. Jones, supra.

I find the “Consent to Assume Online Presence” form very interesting, for a couple of reasons. One is that it doesn’t act like a traditional consent in that it doesn’t conform to the usual dynamic of a Fourth Amendment search and seizure.

The usual dynamic, which goes back centuries, is that law enforcement officers get a warrant to search a place for specified evidence and seize the evidence when they find it. They then go to the place and, if the owner is there, give the owner a copy of the warrant (which is their “ticket” to be there), conduct the search and seizure, give the owner an inventory of what they’ve taken and then leave. This dynamic is structured, both spatially and temporally: It happens “at” a specific real-space place (or places). It has a beginning, a middle and an end.

The same thing is true of traditional consent searches. So if I consent to let police search my car for, say, drugs, they can search the car for drugs. The car is the “place,” so they can search that “place” and no other. And the search will last only as long as it takes to reasonably search the car (they can’t routinely take it apart). Here, too, the owner of the car is usually there and observes the search.

Now look at the “Consent to Assume Online Presence” search, as I understand it: Agents, or officers, obtain the consent to assume the person’s online identity, which they do at some later time (that not being convenient at the moment consent is given, as we see in these two cases). The “place” to be searched is, I gather, cyberspace, since the Consent to Assume Online Presence lets officers use the person’s online accounts to search cyberspace for other evidence, i.e., to find others involved in child pornography in the two cases described above. So the “place” to be searched is apparently unbounded, and I’m wondering if the temporal dimension of the consent is pretty much the same. I don’t see any mention of the “Consent to Assume Online Presence” form’s limiting the length of time during which the consenting person’s online accounts can be used for this purpose. I suppose there’s a functional self-limitation, in that the consent expires when the accounts do or when they’re otherwise cancelled.

But even with that limitation, this is a pretty amazingly unbounded consent to search. It’s basically an untethered consent to search: As I said earlier, traditional consent searches have definite spatial and temporal limitations: “Yes, officer, you can search my car for drugs” lets an officer search the car (only that car) until he either finds drugs or gives up after not finding drugs. There, the search is tethered to the place being searched and is limited by the reasonable amount of time such a search would need. Here, the consent is untethered in that it apparently lets officers use the consenting person’s accounts to conduct online investigations.

I’m not even sure this is a consent to search, in the traditional sense. In these two cases, law enforcement had already gained access to the persons’ online accounts, so there wasn’t going to be any additional, incremental invasion of their privacy. Law enforcement officers had already been in their online accounts and seen what there was to see. The consent in these cases picks up, as you can see from the facts summarized above, after the suspect has already been identified, after search warrants have been executed (and, in one case, a regular, spatial consent search executed) and after the suspect has effectively been transformed into a defendant. So that investigation is really over.

This is a consent to investigate other, unrelated cases. That’s why it doesn’t strike me as a traditional search. It’s really a consent to assume someone’s identity to investigate crimes committed by persons other than the one consenting. Now, there are cases in which law enforcement officers key in on a suspect, get the suspect to consent to letting them search property – a car, say – where they think they will find evidence of someone else’s being involved in the criminal activity they’re investigating the suspect for. There the officers are getting consent to carry on an investigation that at least partially impacts on someone other than the person giving consent. But there the consent search is a traditional consent search because it conforms to the dynamic I outlined above – it has defined spatial and temporal dimensions.

I could ramble on more about that aspect of the “Consent to Assume Online Presence” searches (or whatever they are) but I won’t. I’ll content myself with making one final point that seems interesting about them.

When I consent to a traditional search, I can take it back. That is, I can revoke my consent. So if the officer says, “Can I search your car for drugs?” and I (foolishly) say, “yes,” I can change my mind. If, while the officer is searching, I say, “I’ve changed my mind – stop searching right now”, then the officer has to do just that. If the officer has found drugs before I change my mind, then the officer can keep those drugs and they can be used in evidence against me because they were found legitimately, i.e., they were found while my consent was still in effect.

How, I wonder, do you revoke your “Consent to Assume Online Presence”? Do you email the agency to which you gave the consent, or call them or visit them or have your lawyer get in touch and say, “by the way, I changed my mind – quit using my account”?

Saturday, December 08, 2007

Law and the 3D Internet


Over the last month or three, I’ve read several news stories about how IBM and Linden Labs, along with a number of IT companies, are working to develop “avatar interoperability.”


“Avatar interoperability,” as you may know, means that you, I or anyone could create an avatar on Second Life and use that same avatar in other virtual worlds, such as HiPiHi or World of Warcraft or Entropia.

The premise is that having created my avatar – my virtual self – I could then use that avatar to travel seamlessly among the various virtual worlds.

In a sense, I guess, the interoperable avatar becomes my passport to participate in as many virtual worlds as I like; I would no longer be tethered to a specific virtual world by my limited, idiosyncratic avatar.
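
None of the stories I’ve read explains how avatar interoperability would actually be implemented, so purely to make the idea concrete, here is a hypothetical sketch (in Python) of the kind of portable avatar record the concept seems to imply – a single identity a user could carry from world to world. Every field name, URL and detail below is invented for illustration:

from dataclasses import asdict, dataclass, field
import json

@dataclass
class PortableAvatar:
    handle: str            # the name the user carries everywhere
    home_world: str        # the world where the avatar was created
    appearance_uri: str    # pointer to the avatar's appearance assets
    worlds_visited: list = field(default_factory=list)

    def enter(self, world: str) -> None:
        # "Traveling" is just the same record being honored by another world.
        self.worlds_visited.append(world)

avatar = PortableAvatar("my_virtual_self", "Second Life", "https://example.com/assets/avatar-123")
avatar.enter("HiPiHi")
avatar.enter("Entropia")
print(json.dumps(asdict(avatar), indent=2))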


Avatar interoperability seems to be one aspect of creating a new 3D Internet. One article I read said the ultimate goal is to replace our current, text-based Internet with “a galaxy of connected virtual worlds.” So instead of experiencing cyberspace as a set of linked, sequential “pages,” each of which features a combination of text, graphics and sound, I’d log on as my virtual self and experience cyberspace as a truly virtual place. Or, perhaps more accurately, I would experience cyberspace as a linked series of virtual places, just as I experience the real-world as a linked series of geographically-situated places.

Cyberspace would become an immersive, credible pseudo-3D reality – the evolved twenty-first-century analogue of the hardware-based virtual reality people experimented with fifteen years or so ago . . . the tethered-to-machinery virtual reality depicted in 1990s movies like The Lawnmower Man and Disclosure. That older kind of virtual reality was seen as something you used for a particular purpose – to play a game or access data.

The new 3D Internet featuring interoperable avatars is intended to make cyberspace a more immersive experience. Our approach to privacy law in the United States is often described as sectoral; that is, instead of having general, all-encompassing privacy laws, we have discrete privacy laws each of which targets a distinct area of our lives. So we have medical privacy laws and law enforcement search privacy laws and wiretap privacy laws and so on.

I think our experience of cyberspace is currently sectoral, in this same sense: I go on, I check my email, I check some news sites, I might do a little shopping on some shopping sites, then I might watch some videos or check out some music or drop into Second Life to socialize a bit or schedule flights or do any of the many, many other things we all do online. I think my doing this is a sectoral activity because I move from discrete website to discrete website. I may log in multiple times, using different login information. I go to each site for a specific, distinct purpose.

I think, then, that the custom of referring to websites as “web pages” accurately captures the way I currently experience cyberspace. It really is much more analogous to browsing the pages in a book than it is to how we experience life in the real, physical world. In the real-world I do go to specific places (work, grocery, dry cleaner’s, restaurants, hotels, dog groomer, book store, mall, etc.) for distinct purposes. But I’m “in” the real-world the whole time. I don’t need to reconfigure my reality to move from discrete place to discrete place; the experience is seamless.


So that seems to be the goal behind the development of the 3D Internet. It seems to be intended to promote a more immersive, holistic experience of cyberspace while, at the same time, making it easier and more realistic to conduct work, commerce, education and other activities online. Avatars, currency and the other incidents of our online lives would all become seamlessly portable.

Personally, I really like the idea. I think it would make cyberspace much easier and much more interesting to use. It would also really give us the sense of “being” in another place when we’re online.

When I first heard about avatar interoperability, I wondered about what I guess you’d call the cultural compatibility of migrating avatars. It seemed, for example, incongruous to imagine a World of Warcraft warrior coming into Second Life or vice versa (Second Life winged sprite goes into WoW). And that’s just one example. I had basically the same reaction when I thought of other kinds of avatars leaving their respective environments and entering new and culturally very different worlds.

But then, as I thought about it, I realized that’s really what we do in the real world. We don’t have the radical differences in physical appearance and abilities (or inclinations) you see among avatars, but we definitely have distinct cultural differences. We may still have a way to go in some real-world instances (I’m personally not keen on going to Saudi Arabia, for example), but we’ve come a long way from where we were centuries ago when xenophobia was the norm.

And the ostensible cultural (and physical) differences among avatars will presumably be mitigated by the fact that an avatar is only a guise a human being uses to interact online. Since it seems humanity as a whole is becoming increasingly cosmopolitan and tolerant, the presumably superficial, virtual differences among avatars may not generate notable cultural incompatibilities as they move into the galaxy of interconnected virtual worlds.

I also wondered about what this might mean for law online. Currently, as you may know, the general operating assumption is that each virtual world polices itself. So Linden Lab deals with crimes and other “legal” issues in Second Life, and the other virtual worlds do the same. There have been, as I’ve noted in other posts, some attempts to apply real world laws to conduct occurring in virtual worlds. Earlier this year, the Belgian police investigated a claim of virtual rape on Second Life; I don’t know what happened with the investigation. As I’ve written elsewhere, U.S. law currently would not consider whatever occurs online to be a type of rape, because U.S. law defines rape as a purely real-world physical assault. Online rape cannot qualify as a physical assault and therefore cannot be prosecuted under U.S. law, even though it can inflict emotional injury. U.S. criminal law, anyway, does not really address emotional injury (outside harassment and stalking).

That, though, is a bit of a digression. My general point is that so far law generally treats online communities as separate, self-governing places. Second Life and other virtual worlds functionally have a status analogous to that of the eighteenth- and nineteenth-century colonies operated by commercial entities like the Hudson’s Bay Company or the British East India Company. That is, they are a “place” the population of which is under the governing control of a private commercial entity. As I, and others, have written, this makes a great deal of sense as long as each of these virtual worlds remains a separate, impermeable entity. As long as each remains a discrete entity, and as long as we only inhabit cyberspace by choice, we in effect consent to have the company that owns and operates a virtual world settle disputes and otherwise act as law-maker and law-enforcer in that virtual realm.

Things may become more complicated once avatars have the ability to migrate out of their virtual worlds of origin and into other virtual worlds and into a general cyberspace commons. We will have to decide if we want to continue the private, sectoral approach to law we now use for the inhabitants of discrete virtual worlds (so that, for example, if my Second Life avatar went into WoW she would become subject to the laws of WoW) or change that approach somehow.

It seems to me the most reasonable approach, at least until we have enough experience with this evolved 3D Internet to come up with a better alternative, is to continue to treat discrete virtual worlds as individual countries, each of which has its own law. This works quite well in our real, physical world: When I go to Italy, I submit myself to Italian law; when I go to Brazil I submit myself to Brazilian law and so on. At some point we might decide to adopt a more universal, more homogeneous set of laws that would generally govern conduct in cyberspace. Individual enclaves could then enforce special, supplemental laws to protect interests they deemed uniquely important.

One of my cyberspace law students did a presentation in class this week in which she told us about the British law firms that have opened up offices and, I believe, practices in Second Life. That may be just the beginning. Virtual law may become a routine feature of the 3D Internet.

Criminal liability for unsecured wireless networks?

I just received this email (from a source that will remain anonymous):

Good afternoon,

I have a wireless router (WiFi) which for technical reasons I won’t bore you with, has no encryption. If a third party were to access the internet via my unencrypted router and then commit an illegal act, could I be held liable? I’m not sure if this question in anyway broaches your area of expertise and if not please excuse the intrusion. I’ve asked some technical colleagues but they were not able to answer.

It’s a very good question. I’ve actually argued in several law review articles that people who do not secure their systems, wireless or otherwise, should be held liable – to some extent – when criminals use the networks they’ve left open to victimize others.

In those articles, as in nearly everything I do, I was analyzing the permissibility of using criminal liability to encourage people to secure their computer systems . . . which I think is the best way to respond to cybercrime. Since I’m not sure if the person who sent me this email is asking about criminal liability, about civil liability or about both, I’ll talk about the potential for both, but focus primarily on criminal liability.
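
Before getting to the liability analysis, a quick technical illustration of what “unsecured” means here. The sketch below is hypothetical – the scan results are made up, and actually scanning for networks is an operating-system-specific task – but it shows the basic point: an access point that advertises no encryption will let any device within range associate and send traffic over the owner’s connection.

from typing import NamedTuple

class AccessPoint(NamedTuple):
    ssid: str
    security: str  # e.g. "WPA2", "WPA", or "" for an open network

def is_unsecured(ap: AccessPoint) -> bool:
    # An open access point requires no key, so anyone in range can use it.
    return ap.security.strip().upper() in {"", "NONE", "OPEN"}

scan_results = [
    AccessPoint("doe-home", ""),             # our hypothetical John Doe's router
    AccessPoint("smith-apartment", "WPA2"),
]

for ap in scan_results:
    status = "open to anyone nearby" if is_unsecured(ap) else "encrypted"
    print(f"{ap.ssid}: {status}")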

There are essentially two ways in which one person (John Doe) can be held liable for crimes committed solely by another person – Jane Smith, we’ll say (with my apologies to any and all Jane Smiths who read this). One is that there is a specific provision of the law – a statute or ordinance or other legal rule – which holds someone in Doe’s position (operating an unsecured wireless network, say) liable for crimes Smith commits.

I’m not aware of any laws that currently hold people who run unsecured wireless networks liable for crimes the commission of which involves exploiting the insecurity of those networks. I seem to recall reading an article a while back about a town that had adopted an ordinance banning the operation of unsecured wireless networks, but I can’t find the article now. If such an ordinance, or such a law, existed, it would in effect create a free-standing criminal offense. That is, it would make it a crime (presumably a small crime, a misdemeanor, say) to operate an unsecured network.

That type of law goes to imposing liability on the person who violated it, which, in our hypothetical, would be John Doe, who left his wireless network unsecured. That approach, of course, simply holds Doe liable for what Doe, himself, did (or did not do). It doesn’t hold him criminally liable for what someone else was able to do because he did not secure his wireless network. And unless such a law explicitly created a civil cause of action, it would do nothing for people who were victimized by cybercriminals (our hypothetical Jane Smith); they could not use it to sue Doe. Some statutes, like the federal RICO statute, do create a civil cause of action for people who’ve been victimized by a crime (racketeering, under the RICO provision), but absent some specific provision to the contrary, statutes like this only let a person who’s been victimized sue the individual(s) who actually victimized them (Jane Smith).

As I wrote in an earlier post, there are essentially two ways one person (John Doe) can be held liable for the crimes another person (Jane Smith) commits: one is accomplice liability and the other is a type of co-conspirator liability. While these principles are used primarily to impose criminal liability, they could probably (note the qualifier) be used to impose civil liability under provisions like the RICO statute that let victims sue to recover damages from their victimizers.

So let’s consider whether John Doe could be held liable under either of those principles. Accomplice liability applies to those who “aid and abet” the commission of a crime. So, if I know my next-door neighbor is going to rob the bank where I work and I give him the combination to the bank vault, intending to assist his commission of the robbery, I can be held liable as an accomplice.

The requirements for such liability are, basically, that I (i) did something to assist in or encourage the commission of the crime and (ii) did that with the purpose of promoting or encouraging the commission of a crime. In my example above, I hypothetically provide the aspiring robber with the combination to the bank vault for the express purpose of helping him rob the bank. The law says that when I do this, I become criminally liable for the crime – here, the robbery – he actually commits. And the neat thing about accomplice liability, as far as prosecutors are concerned, is that I in effect step into the shoes of the robber. That is, I can be held criminally liable for aiding the commission of the crime someone else committed in the same way as, and to the same extent as, the one who actually committed it. In this hypothetical, my conduct establishes my liability as an accomplice to the bank robbery, so I can be convicted of bank robbery.


I don’t see how accomplice liability could be used to hold John Doe criminally liable for cybercrimes Jane Smith commits by exploiting his unsecured wireless network. Yes, he did in effect assist – aid and abet – the commission of those cybercrimes by leaving his network unsecured. I am assuming, though, that he did not leave it unsecured in order to assist the commission of those crimes – that, in other words, it was not his purpose to aid and abet them. Courts generally require that one charged as an accomplice have acted with the purpose of promoting the commission of the target crimes (the ones Jane Smith hypothetically commits), though a few have said you can be an accomplice if you knowingly aid and abet a crime.

If we took that approach here, John Doe could be held liable for aiding and abetting Jane Smith’s cybercrimes if he knew she was using his unsecured wireless network and did nothing to prevent that. It would not be enough, for the purpose of imposing accomplice liability, if he knew it was possible someone could use his network to commit cybercrimes; he’d have to know that Jane Smith was using it or was about to use it for that specific purpose. I don’t see that standard applying to our hypothetical John Doe – he was, at most, reckless in leaving the network unsecured, maybe just negligent in doing so. (As I’ve written before, recklessness means you consciously disregard a known risk that cybercriminals will exploit your unsecured network to commit crimes, while negligence means that an average, reasonable person would have known this was a possibility and would have secured the network).


The other possibility is, as I wrote in that earlier post, what is called Pinkerton liability (because it was first used in a prosecution against a man named Pinkerton). To hold someone liable under this principle, the prosecution must show that they (John Doe) entered into a conspiracy with another person (Jane Smith) the purpose of which was the commission of crimes (cybercrimes, here). The rationale for Pinkerton liability is that a criminal conspiracy is a type of contract, and all those who enter into the contract become liable for crimes their fellow co-conspirators commit.

Mr. Pinkerton (Daniel, I believe) was convicted of bootlegging crimes his brother (Walter, I think) committed while Daniel was in jail. The government’s theory was that the brothers had entered into a conspiracy to bootleg before Daniel went to jail, the conspiracy continued while he was in jail, so he was liable for the bootlegging crimes Walter committed. I don’t see how this could apply to our John Doe-Jane Smith hypothetical because there’s absolutely no evidence that Doe entered into a criminal conspiracy with Smith. He presumably doesn’t even know she exists and/or doesn’t know anything about her plans to commit cybercrimes by making use of his conveniently unsecured network.


In my earlier post, which was about a civil lawsuit, I talked about how these principles could, or could not, be used to hold someone civilly liable for crimes. I’ll refer you to that post if you’re interested in that topic.

Bottom line? I suspect (and this is just speculation, not legal advice) that it would be very difficult, if not impossible, to hold someone who left their wireless network unsecured criminally liable if an unknown cybercriminal used the vulnerable network to commit crimes.

Sunday, December 02, 2007

Defrauding a machine

A recent decision from the United Kingdom held that it is possible to defraud a machine, as well as a human being.

The case is Renault UK Limited v. FleetPro Technical Services Limited, Russell Thoms (High Court of Justice Queen’s Bench Division) (November 23, 2007), [2007] EWHC 2541.

According to the opinion, FleetPro Technical Services operated a program with Renault UK that let members of the British Air Line Pilots Association (BALPA) buy new Renaults at a discount. In the ten months the program was in effect, FleetPro sent 217 orders through the system, only 3 of which were submitted by members of BALPA. The opinion says that Russell Thoms, FleetPro’s director and employee, placed the other 214 orders and passed on the discounts to brokers who sold the cars to members of the public.


Renault discovered what had been going on and sued FleetPro and Thoms for fraud. At trial, the defense counsel argued that there was no fraud because there was, in effect, no fraudulent representation made by one human being to another. The court described the relevant facts as follows:
[W]hat happened when orders produced by Mr. Thoms and sent by e-mail as attachments to Mr. Johnstone [the Renault fleet sales executive who handled the orders] were received was that he opened them, printed them off and gave them to Fiona Burrows to input into a computer system information including the BALPA FON [the code used to process orders]. The evidence was that no human mind was brought to bear at the Importer's end on the information put into the computer system by Fiona Burrows. No human being at the Importer consciously received or evaluated the specific piece of information in respect of each relevant order that it was said to fall within the terms of the BALPA Scheme. . . . [T]he last human brain in contact with the claim that a particular order fell within the terms of the BALPA Scheme was that of Fiona Burrows at the Dealer. The point of principle which thus arises is whether it is possible in law to find a person liable in deceit if the fraudulent misrepresentation alleged was made not to a human being, but to a machine.
Renault UK Limited v. FleetPro Technical Services Limited, supra.

Judge Richard Seymour held that it is, in fact, possible to hold someone liable when a fraudulent misrepresentation is made to a machine:
I see no objection . . . to holding that a fraudulent misrepresentation can be made to a machine acting on behalf of the claimant, rather than to an individual, if the machine is set up to process certain information in a particular way in which it would not process information about the material transaction if the correct information were given. For the purposes of the present action, . . . a misrepresentation was made to the Importer when the Importer's computer was told that it should process a particular transaction as one to which the discounts for which the BALPA Scheme provided applied, when that was not in fact correct
Renault UK Limited v. FleetPro Technical Services Limited, supra.
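
To make the court’s point concrete, here is a rough, hypothetical sketch of the kind of automated processing it describes: a program, not a person, decides whether an order gets the scheme discount, so a false claim of eligibility is a representation made only to a machine. The field names, scheme code and discount rate below are invented; they are not Renault’s actual system.

from dataclasses import dataclass

BALPA_FON = "FON-BALPA-2007"   # hypothetical scheme code
SCHEME_DISCOUNT = 0.15         # hypothetical discount rate

@dataclass
class Order:
    customer: str
    list_price: float
    fon_code: str

def price_order(order: Order) -> float:
    # No human reviews the claim; the presence of the code alone triggers the discount.
    if order.fon_code == BALPA_FON:
        return order.list_price * (1 - SCHEME_DISCOUNT)
    return order.list_price

# A false representation made only to the machine still changes the outcome.
print(price_order(Order("not a BALPA member", 20_000.0, BALPA_FON)))  # 17000.0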

After I read this decision, I did some research to see if I could find any reported American cases addressing the issue. I could not.

I’m not sure why. Maybe the argument simply has not been raised (which, of course, means that it may be, and some U.S. court will have to decide whether to follow this approach or not).

Or maybe the reason it hasn’t come up has to do with the way American statutes, or at least American criminal statutes, go about defining the use of a computer to defraud. Basically, the approach these statutes take is to make it a crime to access a computer “or any part thereof for the purpose of: . . . executing any scheme or artifice to defraud”. Idaho Code § 18-2202. You see very similar language in many state computer crime statutes, and the basic federal computer crime statute has language that is analogous. See 18 U.S. Code § 1030(a)(4) (crime to knowingly “and with intent to defraud” access a computer without authorization or by exceeding authorized access and thereby further “the intended fraud”).

So maybe the issue of defrauding a machine hasn’t arisen in U.S. criminal law because our statutes are essentially tool statutes. That is, they criminalize using a computer as a tool to execute a “scheme or artifice to defraud.”

In the U.K. case, Renault was claiming that Thoms had defrauded it by submitting false purchase orders for discounted cars. The defense’s position was that to recover Renault would have to show that Thoms had intentionally made a false statement of fact directly to Renault, intending that Renault rely on the representation to its detriment. And that is the classic dynamic of fraud. Historically, fraudsters lied directly to their victims to induce them to part with money or other valuables. That is why, as I’ve mentioned before, fraud was originally known as “larceny by trick:” The fraudster in effect stole property from the victim by convincing him to hand it over to the fraudster in the belief he would profit by doing so. Here, the distortion of fact is direct and immediate; the victim hands over the property because he believes what the perpetrator has told (or written) him.

Many American fraud statutes predicate their definition of fraud crimes on executing a “scheme or artifice to defraud,” language that comes from the federal mail fraud statute, 18 U.S. Code § 1341. Section 1341, which dates back to 1872, makes it a crime to send anything through the mail for the purpose of executing a “scheme or artifice to defraud.” It was enacted in response to activity that is functionally analogous to online fraud: After the Civil War, con artists were using the U.S. mails to defraud many people remotely and anonymously. The sponsor of the legislation said it was needed “to prevent the frauds which are mostly gotten up in the large cities . . . by thieves, forgers, and rapscallions generally, for the purpose of deceiving and fleecing the innocent people in the country.” McNally v. United States, 483 U.S. 350 (1987). So § 1341 is really a fraud statute; it merely utilizes the “use of the mail to execute a scheme or artifice to defraud” language as a way to let the federal government step in and prosecute people who are committing what is really a garden variety state crime: fraud.

But as I said, many modern state computer crime statutes also use the “scheme or artifice to defraud” terminology. To some extent, that may simply be an artifact, a result of the influence federal criminal law has on the states; we have grown accustomed to phrasing fraud provisions in terms of executing schemes or artifices to defraud, so that language migrated to computer crime statutes.

Does that language eliminate the problem the U.K. court dealt with? Does it eliminate the need to consider whether it is possible to defraud a machine by predicating the crime on using a computer to execute a scheme to defraud instead of making it a crime to make false representations directly to another person for the purpose of inducing them to part with their property?

On the one hand, it might. Under computer crime statutes modeled upon the mail fraud statute, the crime is committed as soon as the perpetrator makes any use of a computer for the purposes of completing a scheme to defraud a human being. Courts have long held that you can be charged with violating the federal mail fraud statute as soon as you deposit fraudulent material into the mail; it’s not necessary that the material actually have reached the victim, been read by the victim and induced the victim to give the perpetrator her property.

I think the same approach applies to computer crime statutes based on the mail fraud statute: the computer fraud offense is committed as soon as the perpetrator makes use of a computer with the intent of furthering his goal of defrauding a human being out of their property. Under that approach, it doesn’t really matter whether a person was actually defrauded, or whether a computer was defrauded. It’s enough that the perpetrator used a computer in an effort to advance his goal of defrauding someone.

I suspect this accounts for the fact that I, anyway, can’t find any U.S. cases addressing the issue of whether or not it is possible to defraud a computer. It’s an issue that may not be relevant in criminal fraud cases. It may, however, arise in civil fraud cases where, I believe, you would actually have to prove that “someone” was defrauded out of their property by the defendant’s actions.

Wednesday, November 21, 2007

The Stop Terrorist and Military Hoaxes Act of 2004


I’d somehow overlooked this one.

This statute, which was added to the federal code in December of 2004 by § 6702(a) of Title VI of Public Law # 108-458, criminalizes disseminating hoax information about possible terrorist or military attacks.


It's codified as 18 U.S. Code § 1038. Section 1038 has two different prohibitions, the first of which appears in § 1038(a)(1). It provides as follows:
Whoever engages in any conduct with intent to convey false or misleading information under circumstances where such information may reasonably be believed and where such information indicates that an activity has taken, is taking, or will take place that would constitute a violation of chapter 2, 10, 11B, 39, 40, 44, 111, or 113B of this title, section 236 of the Atomic Energy Act of 1954 (42 U.S.C. 2284), or section 46502, the second sentence of section 46504, section 46505(b)(3) or (c), section 46506 if homicide or attempted homicide is involved, or section 60123(b) of title 49, shall--

(A) be fined under this title or imprisoned not more than 5 years, or both;

(B) if serious bodily injury results, be fined under this title or imprisoned not more than 20 years, or both; and

(C) if death results, be fined under this title or imprisoned for any number of years up to life, or both.
The other substantive prohibition appears in section 1038(a)(2). It provides as follows:
Any person who makes a false statement, with intent to convey false or misleading information, about the death, injury, capture, or disappearance of a member of the Armed Forces of the United States during a war or armed conflict in which the United States is engaged--

(A) shall be fined under this title, imprisoned not more than 5 years, or both;

(B) if serious bodily injury results, shall be fined under this title, imprisoned not more than 20 years, or both; and

(C) if death results, shall be fined under this title, imprisoned for any number of years or for life, or both.
Amazingly (to me, anyway), someone has been convicted of violating this statute. Actually, a number of people have been convicted of violating it, several for anthrax hoaxes. I’m more interested in the case I’m going to talk about because it involves publishing a story online, not sending a letter claiming to have deposited anthrax in a government facility.

According to the district court’s opinion in United States v. Brahm, 2007 WL 3111774 (U.S. District Court for the District of New Jersey), in September of 2006 Jake Brahm, who lived in Wauwatosa, Wisconsin, posted this message on the www.4chan.org site:
On Sunday, October 22, 2006, there will be seven “dirty” explosive devices detonated in seven different U.S. cities: Miami, New York City, Atlanta, Seattle, Houston, Oakland, and Cleveland. The death toll will approach 100,000 from the initial blast and countless other fatalities will later occur as a result from radio active fallout.

The bombs themselves will be delivered via trucks. These trucks will pull up to stadiums hosting NFL games in each respective city. All stadiums to be targeted are open air arenas excluding Atlanta's Georgia dome, the only enclosed stadium to be hit. Due to the open air the radiological fallout will destroy those not killed in the initial explosion. The explosions will be near simultaneous with the city specifically chosen in different time zones to allow for multiple attacks at the same time.

The 22nd of October will mark the final day of Ramadan as it will fall in Mecca, Al-Qaeda will automatically be blamed for the attacks later through Al-Jazeera, Osama Bin Laden will issue a video message claiming responsibility for what he dubs “America's Hiroshima”. In the aftermath civil wars will erupt across the world both in the Middle East and within the United States. Global economies will screech to a halt and general chaos will rule.
The opinion says the post “became a news story of some national prominence” in the days leading up to October 22, even though the authorities did not take it seriously.

Federal agents tracked Brahm down and he was indicted for violating §§ 1038(a)(1) and (a)(2). He moved to dismiss the indictment, arguing that the phrase “may reasonably be believed” in §1038(a)(1) had to be construed in light of his target audience. United States v. Brahm, supra.

Brahm claimed that “reasonably” had to be interpreted in a way that took into account whether the "audience addressed by false or misleading information would believe it to be true.” United States v. Brahm, supra. He argued for a subjective audience-sensitive interpretation of “reasonably,” so he could be held liable only if the government could prove that the readers of the www.4chan.org website would have believed his statement. The prosecution argued that it should be interpreted to permit a conviction if, under the circumstances, a reasonable person would have believed the posting. The district court reviewed the use of “reasonableness” in other threat statutes, and agreed with the government. United States v. Brahm, supra.

According to news stories, Brahm was a 20-year-old grocery clerk living with his parents. An FBI agent said Brahm thought it would be funny to post the story because he thought it was so preposterous no one would believe it. As to that, the agent, Richard Ruminski, said, "`It's a hoax. It's nonsense, not a credible threat. . . . But in a post 9-11 world, you take these threats seriously. It's almost like making a threat going onto an airplane -- you just don't do it’”.

The district court denied Brahm’s motion to dismiss the indictment on October 19, which wasn’t very long ago. I can’t find any reported developments in the case since then. He faces up to 5 years in prison on the federal charge. He was extradited to New Jersey for prosecution there.

The district court did not consider whether Brahm’s posting – the joke – was protected by the First Amendment, though it noted that the First Amendment protects humorous speech, even when it’s false. So that may be an issue he will raise in the future.

In its opinion, the court cited the famous War of the Worlds broadcast as a hoax that “might not qualify as something within the reasonable belief required by the statute,” but “would represent the kind of intentionally false information anticipated by section 1038.” United States v. Brahm, supra. It noted that “a fictitious broadcast of a terrorist attack on a major city with the goal of making a . . . political or artistic statement, causes greater concern, as . . . expressive, protected speech . . . might be affected” by the statute. United States v. Brahm, supra. In a footnote, the court pointed out that the War of the Worlds broadcast and 1983 and 1994 broadcasts dealing with fictional terrorist attacks provided disclaimers intended to alert the audience to their fictional nature. United States v. Brahm, supra.

The disclaimers in the 1983 and 1994 broadcasts were repeated throughout the shows. The War of the Worlds disclaimers did not begin until that show had been on the air for 40 minutes. They came after a number of New York police officers invaded the control room of the studio from which the broadcast was originating. The officers seemed to think they should arrest someone, but weren’t sure who to arrest or for what. According to one story I’ve read, Welles expected to be arrested immediately after the broadcast ended, but the police finally gave up and left because they still couldn’t figure out what to charge him or anyone else involved in the broadcast with.

It looks like Welles could have been prosecuted under § 1038, if it had existed at the time. I’m not sure anything in the radio play would violate § 1038(a)(1), but I believe members of the armed forces die fighting the armed Martian invaders in the “War of the Worlds” radio script, so that would probably violate § 1038(a)(2). He could try to defend himself by pointing out the inherent incredibility of the broadcast, i.e., by arguing that no one would be silly enough to really believe we were under attack by Martians . . . but a lot of people were silly enough to believe just that in 1938.

I’m not sure where I come out on this statute. I can definitely see the utility of being able to prosecute people who pull off anthrax and similar hoaxes. There, though, the conduct is far less ambiguous: They send letters or other messages claiming to have planted anthrax – or bubonic plague or bombs or the horrors of your choice – somewhere it can do a great deal of damage. Conduct like that is a threat, just as it’s a threat for John to tell Jane he’s going to kill her.

Our law has no difficulty criminalizing that kind of speech because what is being criminalized is not speech, as such – it’s the act of using speech to terrorize people (and perhaps cause consequential injuries and damage, as in the anthrax hoax cases). The speech at issue in the Brahm case is very different: He did not send a threat directly to anyone. He probably did go beyond what Orson Welles did because the “War of the Worlds” broadcast was purely expressive speech – art, in other words. Brahm claimed that what he posted was a joke – a satire analogous to the stories posted on The Onion, say. If it’s satire, it should not be criminalized.

The problem Brahm faces is to a great extent one of context: In the direct, anthrax kind of hoax, the hoaxer sends the functional equivalent of a threat to his victims. The prosecution’s theory in the Brahm case seems to be that he perpetrated an indirect kind of hoax by putting his joke online, where it could be read by anyone. Context comes into play in deciding whether a post like Brahm’s will reasonably be understood by those who read it as (i) satire or (ii) a credible threat report. If he’d posted his joke on an obviously satiric site like The Onion, would that take it out of the category of a criminal hoax under § 1038? Or are there some things we just cannot joke about at the beginning of the twenty-first century?

Monday, November 19, 2007

Fraudulently obtaining a website?

When someone is hired to create a website for a business, does so, turns it over to the business but isn’t paid for their work, who owns the site . . . the website creator or the business that contracted for it?

That’s the issue the New Mexico Supreme Court addressed in State v. Kirby, 141 N.M. 838, 161 P.3d 883 (2007). You can find the opinion on the New Mexico Supreme Court’s website, if you’re interested.

The facts in the case are pretty simple. Here is how the court explained what happened:
[Richard] Kirby owned a small business, Global Exchange Holding, LLC. . . . Kirby hired Loren Collett, a sole proprietor operating under the name Starvation Graphics Company, to design and develop a website. The two entered into a website design contract. As part of the contract, Kirby agreed to pay Collett $1,890.00, plus tax, for his services.

Collett . . . designed the web pages and incorporated them into the website, but he was never paid. When Kirby changed the password and locked Collett out from the website, Kirby was charged with one count of fraud over $250 but less than $2,500, a fourth degree felony. New Mexico Statutes Annotated § 30-16-6. The criminal complaint alleged that Kirby took `a Website Design belonging to Loren Collett, by means of fraudulent conduct, practices, or representations.’
State v. Kirby, supra. After a jury convicted him of the charge, Kirby appealed to the Court of Appeals, which upheld his conviction. He then appealed to the Supreme Court. State v. Kirby, supra.

Kirby claimed he couldn’t be convicted of defrauding Collett out of the website because the website belonged to him, Kirby. Kirby argued, in effect, that he could not defraud himself. The New Mexico Supreme Court therefore had to decide who owned the site.

The Court of Appeals found that “because a `website includes the web pages,’ and Kirby never paid Collett for the web pages as contractually agreed, ownership remained with someone other than Kirby”, i.e., remained with Collett. The Supreme Court agreed with “that reasoning as far as it goes,” but decided “further analysis may assist the bar and the public in understanding this . . . novel area of the law.” State v. Kirby, supra.
We first turn our attention to the legal document governing the agreement between Collett and Kirby the `Website Design Contract.’ Collett was engaged `for the specific project of developing . . . a World Wide Website to be installed on the client's web space on a web hosting service's computer.’ Thus, the end product of Collett's work was the website, and the client, Kirby, owned the web space. Kirby was to `select a web hosting service’ which would allow Collett access to the website. Collett was to develop the website from content supplied by Kirby.

While the contract did not state who owned the website, it did specify ownership of the copyright to the web pages. `Copyright to the finished assembled work of web pages’ was owned by Collett, and upon final payment Kirby would be `assigned rights to use as a website the design, graphics, and text contained in the finished assembled website.’ Collett reserved the right to remove web pages from the Internet until final payment was made. Thus, the contract makes clear that Collett was, and would remain, the owner of the copyright to the web pages making up the website. Upon payment, Kirby would receive a kind of license to use the website.
State v. Kirby, supra.

Kirby conceded the site “`contained copyright material that belonged to Loren Collett’” but claimed Collett's ownership of the copyright was “separate from ownership of the website. Thus, because the contract only specified ownership of the copyright interest in the web pages and not ownership of the website,” Kirby argued that “from the very beginning he and not Collett owned the website.” State v. Kirby, supra.
Kirby argues that because he owned certain elements that are part of a website and help make it functional, he was the website owner regardless of who owned the copyright to the web pages. Kirby purchased a `domain name’ for the website and had contracted with an internet hosting service for `storage’ of that website. This same hosting service was the platform from which the website was to be displayed on the internet. Kirby, as the owner of the domain name and storage service, also owned the password that enabled him to `admit or exclude’ other people from the website. Kirby argues that his control of the password, ownership of the domain name, and contract with an internet hosting service provider gave him ownership of the web site.
State v. Kirby, supra.

The New Mexico Supreme Court disagreed:
While a domain name, service provider, and password are all necessary components of a website, none of them rises to the importance of the web pages that provide content to the website. A domain name is also referred to as a domain address. A domain address is similar to a street address, `in that it is through this domain address that Internet users find one another.’ . . . But it is nothing more than an address. If a company owned a domain name . . . but had no web pages to display, then upon the address being typed into a computer, only a blank page would appear. A blank web page is of little use to any business enterprise. It is the information to be displayed on that web page that creates substance and value. Similarly, the service provider only stores that information on the web pages and relays that communication to others. Having a service provider meant little to Kirby if the web pages were blank. Thus, the predominant part of a website is clearly the web page that gives it life.
State v. Kirby, supra.
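
To make the court’s distinction concrete, here is a minimal sketch in Python (my illustration, nothing from the opinion) that treats the components separately: resolving a domain name yields only a numeric address, while what a visitor actually sees depends entirely on whatever web pages the designer placed with the hosting service. The domain example.com is just a stand-in.

# A minimal sketch (mine, not the court's) separating a website's components.
# "example.com" is a stand-in domain; nothing here reflects the actual Kirby site.
import socket
import urllib.request

domain = "example.com"

# 1. The domain name: DNS maps it to a numeric address and nothing more.
ip_address = socket.gethostbyname(domain)
print(f"{domain} resolves to {ip_address}")

# 2. The hosting service: stores and serves whatever pages exist at that address.
with urllib.request.urlopen(f"http://{domain}/") as response:
    html = response.read().decode("utf-8", errors="replace")

# 3. The web pages: without them, a visitor typing the address sees only a blank page.
if html.strip():
    print(f"Content served: {len(html)} characters of HTML")
else:
    print("No web pages: just an address pointing at an empty site")

If the designer’s pages were removed, as Collett reserved the right to do, the second step would return an empty or placeholder page, which is exactly the court’s point that the web pages, not the address, give the site its value.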

The Supreme Court held that Collett owned the website: “the contract between Kirby and Collett clearly recognized Collett's legal ownership of the copyright to the web pages. Payment was to be the pivotal point in their legal relationship, and even then Kirby was only to receive a license to use those pages. The contract never transferred any interest in the web page design or ownership of the website to Kirby. As the owner of the copyright, Collett was the owner of the website, and any change was conditioned upon payment.” The court therefore upheld Kirby’s conviction.

Saturday, November 17, 2007

Causing suicide (again)

A few months ago I did a post about whether someone could be held criminally liable for causing another person to commit suicide.

In that post I primarily focused on whether it would be possible to hold a person criminally liable if the prosecution could show that this was their purpose, i.e., that they WANTED the other person to kill themselves.

And that is a logical possibility. As the drafters of the Model Penal Code, a template for criminal statutes, said of this scenario, “it’s a pretty clever way to commit murder.”


Here I want to talk about a different, but related issue: Whether someone can be held criminally liable for another’s suicide if it was not their purpose to cause the victim to kill herself, but their conduct in fact contributed to the victim’s doing so.

I’m prompted to write this post by what I’ve read recently of the Megan Meier case, the tragic story of the 13-year-old Missouri girl who committed suicide after being the victim of a MySpace hoax.


Here’s a summary of the facts of that case as they appeared in the St. Charles Journal: Thirteen-year-old Megan Meier lived with her parents in a Missouri suburb. She had attention deficit disorder, battled depression and had “talked about suicide” when she was in the third grade. She had been heavy but was losing weight and was about to get her braces off. She had just started a new school, and was on their volleyball team. She had also recently, according to her mother, ended an off-again, on-again friendship with a girl who lived down the street.

Megan had a MySpace page, with the permission of her parents. She was contacted by a sixteen-year-old boy named Josh, who said he wanted to add her as a friend. Megan’s mother let her add him, and for six weeks Megan corresponded with Josh. Josh seems to have told her she was pretty and clearly gave her the impression that he liked her . . . at least, until one day when he sent her an email telling her he didn’t know if he wanted to be her friend because he’d heard she wasn’t “very nice” to her friends.

He seems to have followed that up with other, not-very-nice emails. And then, according to the news story I cited above, Megan began to get messages from others, saying she was fat and a slut. After this went on for a bit, Megan hanged herself in her closet and died the next day. Her father went on her MySpace account and saw what he thought was the final message from Josh – a really nasty message (according to her father) that ended with the writer telling Megan that the “world would be a better place without her.”


Megan’s parents tried to email Josh after she died, but his MySpace account had been deleted. Six weeks later, a neighbor met with them and told them there was no Josh, that he and his MySpace page were created by adults, the parents of the girl with whom Megan had ended her friendship. According to the police report in the case, which is quoted in the story cited above, this girl’s mother and a “temporary employee” created the MySpace page so the mother could “find out” what Megan was saying about her daughter.

It gets really murky from there, as to what was going on in the Megan-Josh correspondence, but it seems that others – including other children who knew Megan – had passwords to the Josh account and posted messages there. When the police interviewed this woman, she said she believed the Josh incident contributed to Megan’s suicide, but did not feel all that guilty because she found out Megan had tried to commit suicide before. (Actually, she seems to have talked about it in the third grade, as I noted above).


Megan’s parents and others in the community seem to have wanted the police to charge the adults who created and operated the Josh MySpace page with some type of crime for their role in Megan’s suicide. There are several reasons why they can’t be charged even if, as seems reasonable, their conduct was a factor resulting in Megan’s decision to take her own life.

One factor is that they clearly never intended for that to happen. I can’t begin to figure out what these adults thought they were doing (never mind the children involved), but whatever it was, they didn’t set out to kill Megan. They were, at most, reckless or negligent in embarking on a course of conduct that resulted in tragedy.

Every state makes it a crime to cause another person’s death recklessly or negligently. The difference between the two types of homicide goes to the actor’s awareness of the risk that death will result.
  • You act recklessly if you consciously disregard a substantial and unjustifiable risk that the result (death) will follow from your conduct. So to be liable for recklessly causing Megan’s death, these adults would have had to have been aware, at some level, that what they were doing could cause her to kill herself. If they were actually aware that this was a possibility and persisted in sending emails that could cause this result, then they could be held liable for reckless homicide.
  • You act negligently if a reasonable person (an objective standard) would have realized that your conduct created a risk that Megan could commit suicide. Here, the law looks not at what the allegedly culpable person actually knew, but at what a reasonable person, the average American adult, would have realized in this situation. So if a reasonable person would have realized that carrying out the Josh hoax created a risk that Megan would kill herself, those who did so could be held liable for negligent homicide.
I sympathize with Megan’s parents and I cannot comprehend why adults had nothing better to do than to play such a cruel trick on a child, but however stupid and cruel their conduct was, those responsible for the Josh hoax cannot be held liable under either standard. To explain why, I’m going to use a very recent Minnesota case: Jasperson v. Anoka-Hennepin Independent School District (Minnesota Court of Appeals, Case # A06-1904, decided October 30, 2007).

It’s a very sad case. The opinion says, “J.S. was a 13-year-old eighth-grade student . . . . who lived with his mother and father and his older brother.” He’d been having trouble in school: He had received failing grades in his classes, but was bringing his grades up. He was being bullied by two boys who attended a different school (a school for students with “behavioral problems”). According to the opinion, they grabbed his bike, told J.S. they knew where he lived and which room in the house was his and threatened to kill him. His mother met with Assistant Principal Ploeger at J.S.’ school and told him all this. Ploeger said he’d see that the boys were charged with trespassing on school property, but told her she’d have to talk to the school liaison police officer Wise about protecting J.S. from the boys. Ploeger advised J.S. to leave school by a different route or leave with friends. Wise met with J.S. and his mother, determined that no crime had been committed and suggested he walk with friends and avoid the boys.

A week or two later, J.S. got F’s in his mid-quarter grades for all his classes except for Physical Education. According to one student, J.S.’ science teacher Lande told him he was the “dumbest student” the teacher had ever had, and that he was “going nowhere.” Lande later said he had been angry and may have spoken louder than he intended. The observing student said J.S. cried afterward. The family discussed J.S.’ grades that night, and he said it was his teachers’ fault. He also said he couldn’t concentrate because the two boys were hanging around his school. J.S.’ mother told him she’d talk to the school about getting him a new science teacher and about dealing with the two boys.

The next day, J.S.’ father, mother and brother left for work and school before he did, which wasn’t unusual. He often rode to school with friends. When J.S.’ brother came home that afternoon, he found J.S. dead on the living room floor, with a suicide note beside him. J.S. had shot himself. In the note he said his life was going nowhere so he didn’t need to live, left his love for his family and his dog and said he’d miss them.

The parents brought a civil suit against the school, claiming the school’s negligence caused J.S.’ suicide. The trial court held that J.S.’ suicide was not foreseeable, so the parents didn’t have a claim. The Minnesota Court of Appeals agreed:
[T]he record does not support assertions that any school personnel knew or had reason to know that J.S. continued to have problems with the two boys [or] that J.S.'s failing grades were caused by his terror of the two boys. . . . Mere speculation or conjecture is not sufficient. . . . The district court did not err in concluding that given the evidence, Ploeger, Wise and Lande could not have foreseen any harm to J.S.
Jasperson v. Anoka-Hennepin Independent School District, supra.

Both courts also found that the evidence did not establish that Ploeger’s and Lande’s conduct caused J.S. to commit suicide:
Appellant argues that the school district failed to protect J.S. from a known danger; was in a position to end J.S.'s “terror” and should have anticipated that its failure would likely result in J.S.'s harm; and was in a far superior position to end the threats from the two boys than J.S. or his parents. But the record does not show that anyone at the school had any knowledge that J.S. was subject to harm from the two. The record does not suggest any change in J.S.'s behavior indicating that he was experiencing terror, and none of J.S.'s friends alerted school personnel that J.S. was in fear. There is no evidence that J.S.'s suicide was foreseeable and therefore could have been prevented.

Appellant relies on the fact that J.S.'s midterm grades and a suicide note containing the same words Lande allegedly used were found at his side as evidence that Lande's remarks were a substantial factor in bringing about J.S.'s suicide. But “a mere possibility of causation is not enough.” The district court did not err in concluding that, as a matter of law, the required causal connection between the conduct of school personnel and this tragic suicide is not established by evidence in the record.
Jasperson v. Anoka-Hennepin Independent School District, supra.

Privacy and anonymity

As you may have read, Donald Kerr, a deputy director of national intelligence, said last week Americans need to re-think their conception of privacy. He said privacy will no longer mean anonymity but will, instead, mean that government and the private sector will have to take appropriate steps to “safeguard people’s private communications and financial information.”

I’m not sure if I agree with him or not, so I thought I’d use this post to sort through my reactions to Kerr’s comments.

On the one hand, I don’t know that privacy has ever been synonymous with anonymity. My neighbors know who I am and where I live, as do the local police in my small Ohio suburb. I’m far from being anonymous to those around me. That, though, doesn’t mean I’ve lost my privacy. What I do in my home is still private, at least to the extent I pull the drapes and otherwise take some basic steps to shield what I’m doing from public view.

Maybe that’s what he means by equating privacy and anonymity . . . the notion that what I do in public areas is not private, at least not unless I take steps to conceal my identity and my activities. But that’s not a new notion – it’s common sense. As far as I know, no one has ever tried to argue that what they do in public (walk, drive, shop, go to a movie, rollerblade, whatever) is private under our Constitution, whether because they subjectively believed it was private or under some other theory.

Anonymity is not really an aspect of privacy under our Fourth Amendment law, except insofar as remaining anonymous makes it difficult or impossible for someone to tell what you’ve been doing in an area where you can readily be observed by others. Our Fourth Amendment law has traditionally been about the privacy of enclaves – your home, your office, your car, phone booths (when they still existed), and other physical (and perhaps intangible) places. One court, at least, has assumed that a password-protected website is a private enclave, analogous to these real-world enclaves. The Fourth Amendment also protects the containers (luggage, safes, lockers, sealed mail, DVDs and other storage media) we use to store and to transport things. It is intended to prevent the police from intruding into real and conceptual spaces as to which we have manifested a reasonable expectation of privacy.

I don’t see where anonymity comes in to the traditional Fourth Amendment conception of privacy. The police can see John Doe walking down the street carrying a bag and really want to open that bag because they think he’s transporting drugs, but they can’t open it, or make him open it, just because they know who he is (John Doe). His lack of anonymity has no impact on the legitimate Fourth Amendment expectation of privacy he has in the contents of that bag. The fact that he’s carrying a bag is not private because anyone can see him carrying it. The contents of the bag, though, are private unless and to the extent that the bag is transparent; as long as it’s opaque, its contents are and will remain private.

Anonymity, as such, is actually the focus of a different constitutional provision: the First Amendment. The Supreme Court has interpreted the First Amendment as establishing the rights both to speak anonymously and to be able to preserve the anonymity of one’s associations. The Court has found that protecting anonymity in this context furthers free speech, political advocacy and other important values.

I think what Mr. Kerr is really talking about is an issue I’ve written on before: whether we have a Fourth Amendment expectation of privacy in the information we share with third-parties, such as businesses, Internet and telephone service providers and financial institutions. I think what he’s referring to is what I believe to be a widespread, implicit assumption among Americans, anyway: the notion that what we do online stays safely and obscurely online. I may be wrong, but I think we unconsciously tend to assume that the data we generate while online – the traffic data our ISP collects while we’re surfing the web and the transactional data companies collect from us when we make purchases or otherwise conduct business online – is entre nous . . . is just between me and my ISP or me and my bank or me and Amazon.

We know at some level that we are sharing that data with an uncertain number of anonymous individuals – the employees of ISPs, banks, businesses, etc. – but we don’t tend to think of sharing information with them as also sharing it with law enforcement. We essentially assume we are making a limited disclosure of information: I inevitably share data with my ISP as an aspect of my surfing the web or putting this post on my blog. I know I’m sharing information with the ISP, but I don’t assume that by doing that I’m also sharing information with law enforcement.

The problem with that assumption is, as I’ve noted before, that the Supreme Court has held that data I share with third-parties like banks or ISPs is completely outside the protections of the Fourth Amendment. According to the Court, I cannot reasonably expect that information I share with others, even with legitimate entities, is private. This means that under the Fourth Amendment, law enforcement officers do not have to obtain a search warrant to get that information.

(There are statutory requirements that go beyond what the current interpretation of the Fourth Amendment demands, but they often provide less protection than the Amendment itself would if it applied: they frequently allow officers to obtain third-party data without a search warrant, because a subpoena or court order may suffice.)

So how does all of this relate to Mr. Kerr’s comments about anonymity and privacy? Well, at one point he said that we have historically equated privacy with anonymity but “in our interconnected and wireless world, anonymity - or the appearance of anonymity - is quickly becoming a thing of the past”. Actually, I’d tend to argue the opposite: I think cyberspace actually gives us more opportunities to remain anonymous than we’ve ever had.

Think about a pre-wired world. Think about the America of a hundred or a hundred and fifty years ago. Most Americans in this era, like most people throughout the millennia preceding that era, lived in small towns or villages. They pretty much knew everyone in the town or village where they lived. They traveled very little, both in terms of frequency and distance, so they lived their lives almost exclusively in that town or village. One consequence of this is that everyone in the town or village tended to know pretty much everything about everyone else. They knew who was having an affair with whom. They knew who was buying opium-based products at the general store and getting high. They knew who the drunks were and who the wife- and child-beaters were. They might not know everything that went on in each other’s homes behind closed doors, but they knew pretty much everything else.

The lives of those who lived in cities were probably not subject to quite so much scrutiny from their neighbors. My impression, though, is that city-dwellers during this and earlier eras tended to reside in a specific neighborhood, do their shopping in that neighborhood and generally socialize with people in that neighborhood. So much of what I said about town and village dwellers also applied to those who lived in cities. City dwellers probably had the possibility of going into other parts of the city to carry out their affairs, buy their opium products or otherwise engage in conduct they’d prefer not be widely known in the neighborhood where they resided.

My point is that there wasn’t much anonymity back then, or in all the years before then.

In modern America, we have much more control over the information we share with others. Our neighbors may still be able to pick up a lot of information about our habits and predilections, good and bad, but if we’re concerned about that we have alternatives: We can seclude ourselves in a remote area and commute to work, live in a high-rise and ignore our neighbors or take other means to reduce the amount of information that leaks out to those with whom we share living space. We may still buy our groceries and medications and clothing and other necessities from a face-to-face clerk (or not, as I’ll note below), but we can conceal our identity from the clerk by paying with cash. We can try to obscure patterns in our purchases of necessities by patronizing various stores, in the hopes of interacting with different clerks. We can also rely on the fact that in today’s increasingly-urbanized, increasingly-jaded world clerks may not pay attention to us and our purchases because they don’t care who we are. We’re no longer joint components in a small, geographically-circumscribed social unit.

We can also take information about our purchasing habits and financial transactions out of local circulation by making purchases and conducting financial and other transactions online. This brings us back to Mr. Kerr’s comments. I may be wrong, but I don’t think we assume we’re anonymous when we conduct our affairs online. I do think we believe we are enhancing the privacy of our activities by removing them from the geographical context in which we conduct our lives. Online, I deal with strangers, with people who do not know Susan Brenner and, by inference, do not care what Susan Brenner is buying or selling or otherwise doing online.

Empirically, that’s a very reasonable assumption. The problem is that it founders on a legal and practical Catch-22: We conduct our online transactions with strangers who don’t know us and, by extension, don’t care about what we do. We therefore assume we have overcome the memory problem, the fact that historically those with whom we dealt face-to-face could, and would, remember us and our transactions. This brings us to the first, practical component of the Catch-22. Although we overcome the memory problem, we confront another problem: the technology we use to conduct our online transactions records every aspect of those transactions. We replace the uncertain memory of nosy clerks with the disinterested but irresistibly accurate transcription of machines.
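
As a concrete, if hypothetical, illustration of that transcription: the sketch below (in Python, with made-up field names rather than any particular merchant’s or ISP’s format) shows the kind of per-transaction record an online service routinely writes to disk.

# A hypothetical sketch of the transaction record an online service might keep.
# The field names are illustrative assumptions, not any real company's schema.
import datetime
import json

def record_transaction(ip, account, url, item, amount):
    """Append one purchase to the service's transaction log."""
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "client_ip": ip,        # identifies the connection used
        "account": account,     # ties the purchase to a named customer
        "url": url,             # what was viewed or bought
        "item": item,
        "amount": amount,
    }
    with open("transactions.log", "a") as log:
        log.write(json.dumps(entry) + "\n")

# Unlike a clerk's fading recollection, every entry persists verbatim and can
# later be produced in response to a subpoena or court order.
record_transaction("203.0.113.7", "jdoe", "/checkout", "paperback novel", 12.95)

The point is not the code but the completeness of the record: the log captures, automatically and indefinitely, details no clerk would have remembered.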

The second, legal component of the Catch-22 is the issue I noted above – the recorded data we share with these third parties is not private under the Fourth Amendment and can, therefore, be shared with law enforcement. So in one sense we have more privacy as we move our activities online, and in another sense we have less.

I’m not sure what Mr. Kerr meant when he said that privacy now means that government and the private sector will have to take appropriate steps to “safeguard people’s private communications and financial information.” Does he mean we should revise our view of the Fourth Amendment to bring this information within its protections? Or does he mean we should enact statutes designed to accord a measure of privacy to this data by setting limits on how it can be shared with law enforcement?

Saturday, November 03, 2007

Virtual child pornography -- the product?


Some may find this posting offensive or even disturbing.



I’m going to explore some scenarios that might ensue from the current state of U.S. law on child pornography.


I’m not arguing that these or any scenarios will come to pass . . . . I'm just working through the logical possibilities, based on the current state of U.S. law dealing with real and not-real child pornography.


As I explained in an earlier post, the U.S. Supreme Court has held:

  • that the First Amendment does not preclude U.S. law’s criminalizing “real” child pornography, i.e., child pornography the creation of which involves the use of real children;
  • that the First Amendment does bar U.S. law from criminalizing virtual child pornography, i.e., child pornography the creation of which does not involve the use of real children but is, instead, based on computer-generated images (CGIs).
The Supreme Court held that the First Amendment does not prevent U.S. law from criminalizing real child pornography, even though it qualifies as speech under the First Amendment, because its creation involves the victimization of children, both physically and emotionally. Real child pornography is essentially a product and a record of a crime, or crimes, against children. The Court also held that the First Amendment does prevent U.S. law from criminalizing virtual child pornography because it is speech and because no real person is “harmed” in its creation; unlike real child pornography, virtual child pornography is fantasy, not recorded reality.

We were covering this in my cyberspace law class, and I asked the students to think about where virtual child pornography’s protected status under the First Amendment might take us once computer technology evolves so it is possible to create virtual child pornography (or adult pornography or movies or any visual media) that are visually indistinguishable from the real thing. The question is, what might happen with virtual child pornography once the average person cannot tell it from child pornography the creation of which involved the use of real children? We came up with what I think are some interesting scenarios.

For one thing, it would not be illegal to possess this indistinguishable-from-the-real-thing virtual child pornography. That, alone, has several consequences. It could create real difficulties for law enforcement officers who are trying to find real child pornography and prosecute those who create, distribute and possess it. If a regular person cannot tell real from virtual child pornography by simply looking at a movie or other instance of child pornography, how are police involved in investigations supposed to know what they’re dealing with?

Another consequence could be that it becomes functionally impossible, or at least very difficult, for prosecutors to prove that someone being prosecuted for possessing real child pornography did so knowingly. The defendant could claim he or she believed the material he possessed was virtual, not real, child pornography. Since the prosecution has to prove the defendant “knowingly” possessed real child pornography beyond a reasonable doubt, it would presumably be difficult for prosecutors to win in cases like these (assuming the jurors followed the law and were not swayed by personal distaste for the defendant’s preferences in pornography).

Since U.S. jurisdictions cannot criminalize the possession and distribution of virtual child pornography, we might see the emergence of websites selling virtual child pornography. It would be perfectly legal to sell the stuff in the U.S., to buy it or to possess it. Virtual child pornography would essentially have the same status as any other kind of fictive material; it is, after all, a fantasy, just as slasher movies or violent video games are fantasy.

To protect themselves and their clients, these hypothetical businesses might watermark the virtual child pornography they sold, to provide an easy way of proving that the stuff was virtual, not real. We talked a bit about this in my class. The watermark would have to be something that could withstand scrutiny and that would be valid, clearly credible evidence that child pornography was virtual, not real. Those who created and sold the stuff might be able to charge more if their watermark hit a gold standard – if it basically provided a guarantee that those who bought their product could not be successfully prosecuted for possessing child pornography.

The international repercussions of all this might be interesting; some countries, such as Germany, criminalize the creation, possession and distribution of all child pornography, real and virtual. The criminalization of all child pornography is the default standard under the Council of Europe’s Convention on Cybercrime, but the Convention allows parties to opt out of criminalizing virtual child pornography. So countries that take the same approach as the U.S. could also become purveyors of virtual child pornography. We could see a world in which virtual child pornography was illegal in some countries and for sale in others.

In analyzing where all of this might go, my students and I realized there could be one, really depressing implication of the commercialization of virtual child pornography: Real child pornography would probably become particularly valuable, because it would be the real thing.